Science.gov

Sample records for multi-dimensional scaling analysis

  1. An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling

    NASA Astrophysics Data System (ADS)

    Bradley, C.; Massof, R. W.

    2016-11-01

    Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
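    A minimal sketch of the core idea, assuming synthetic data: non-metric MDS recovers item measures from purely ordinal (rank-order) dissimilarities, which is the kind of information triadic comparisons yield. The dissimilarity construction below is a stand-in for an aggregated triadic-comparison matrix, not the authors' simulation design.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.manifold import MDS

# Synthetic "true" item measures and an ordinal (rank-only) dissimilarity matrix,
# a stand-in for what aggregated triadic comparisons would provide.
rng = np.random.default_rng(0)
true_measure = np.sort(rng.uniform(0, 10, size=10))
d = np.abs(true_measure[:, None] - true_measure[None, :])
dissim = rankdata(d).reshape(d.shape)        # keep only the rank ordering of distances
np.fill_diagonal(dissim, 0.0)

# Non-metric MDS fits a configuration whose distances respect that rank ordering.
mds = MDS(n_components=1, metric=False, dissimilarity="precomputed", random_state=0)
est = mds.fit_transform(dissim).ravel()      # recovered item measures, up to sign and scale

# Recovery check: correlation (sign may flip) between estimated and true measures.
print(np.corrcoef(est, true_measure)[0, 1])
```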

  2. The multi-dimensional model of Māori identity and cultural engagement: item response theory analysis of scale properties.

    PubMed

    Sibley, Chris G; Houkamau, Carla A

    2013-01-01

    We argue that there is a need for culture-specific measures of identity that delineate the factors that most make sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included.
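    For readers unfamiliar with IRT analysis of ordinal ratings, the sketch below shows the Samejima graded-response model that underlies this kind of item-level analysis; the discrimination and threshold values are illustrative placeholders, not MMM-ICE estimates, and the study's own IRT specification may differ.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded-response model: probability of each ordinal rating category for a
    respondent with latent trait `theta`, given item discrimination `a` and
    ordered thresholds `b`. Item information across theta is what drives the
    'measurement precision across the latent trait range' reported above."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    cum = np.concatenate(([1.0], p_star, [0.0]))   # cumulative P(X >= k), bounded by 1 and 0
    return cum[:-1] - cum[1:]                      # category probabilities P(X = k)

# Illustrative item: 5 rating categories (4 thresholds), moderate discrimination.
print(grm_category_probs(theta=0.5, a=1.7, b=[-1.5, -0.3, 0.8, 1.9]))
```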

  3. Multi-Dimensional Shallow Landslide Stability Analysis Suitable for Application at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Milledge, D.; Bellugi, D.; McKean, J. A.; Dietrich, W.

    2012-12-01

    The infinite slope model is the basis for almost all watershed scale slope stability models. However, it assumes that a potential landslide is infinitely long and wide. As a result, it cannot represent resistance at the margins of a potential landslide (e.g. from lateral roots), and is unable to predict the size of a potential landslide. Existing three-dimensional models generally require computationally expensive numerical solutions and have previously been applied only at the hillslope scale. Here we derive an alternative analytical treatment that accounts for lateral resistance by representing the forces acting on each margin of an unstable block. We apply 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure using a log-spiral method on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion is an exponential function of soil depth. We benchmark this treatment against other more complete approaches (Finite Element (FE) and closed form solutions) and find that our model: 1) converges on the infinite slope predictions as length / depth and width / depth ratios become large; 2) agrees with the predictions from state-of-the-art FE models to within +/- 30% error, for the specific cases in which these can be applied. We then test our model's ability to predict failure of an actual (mapped) landslide where the relevant parameters are relatively well constrained. We find that our model predicts failure at the observed location with a nearly identical shape and predicts that larger or smaller shapes conformal to the observed shape are indeed more stable. Finally, we perform a sensitivity analysis using our model to show that lateral reinforcement sets a minimum landslide size, while the additional strength at the downslope boundary means that the optimum shape for a given size is longer in a downslope direction. However, reinforcement effects cannot fully explain the size or shape
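    For context, a hedged sketch of the benchmark the authors compare against: the classical infinite-slope factor of safety, the limiting case their model converges to at large length/depth and width/depth ratios. Parameter values are illustrative, not those of the mapped landslide, and the paper's lateral earth-pressure and root-reinforcement terms are not reproduced here.

```python
import numpy as np

def infinite_slope_fs(c_soil, c_root, phi_deg, slope_deg, z,
                      gamma_s=18e3, gamma_w=9.81e3, m=0.5):
    """Classical infinite-slope factor of safety.
    c_soil, c_root: soil and root cohesion (Pa); phi_deg: friction angle;
    slope_deg: slope angle; z: soil depth (m); m: saturated fraction of the
    soil column. All values are illustrative placeholders."""
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    pore_pressure = m * gamma_w * z * np.cos(beta) ** 2
    resisting = (c_soil + c_root +
                 (gamma_s * z * np.cos(beta) ** 2 - pore_pressure) * np.tan(phi))
    driving = gamma_s * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

print(infinite_slope_fs(c_soil=2e3, c_root=3e3, phi_deg=35, slope_deg=40, z=1.2))
```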

  4. Large Scale Asynchronous and Distributed Multi-Dimensional Replica Exchange Molecular Simulations and Efficiency Analysis

    PubMed Central

    Xia, Junchao; Flynn, William F.; Gallicchio, Emilio; Zhang, Bin W.; He, Peng; Tan, Zhiqiang; Levy, Ronald M.

    2015-01-01

    We describe methods to perform replica exchange molecular dynamics (REMD) simulations asynchronously (ASyncRE). The methods are designed to facilitate large scale REMD simulations on grid computing networks consisting of heterogeneous and distributed computing environments as well as on homogeneous high performance clusters. We have implemented these methods on NSF XSEDE clusters and BOINC distributed computing networks at Temple University, and Brooklyn College at CUNY. They are also being implemented on the IBM World Community Grid. To illustrate the methods we have performed extensive (more than 60 microseconds in aggregate) simulations for the beta-cyclodextrin-heptanoate host-guest system in the context of one and two dimensional ASyncRE and we used the results to estimate absolute binding free energies using the Binding Energy Distribution Analysis Method (BEDAM). We propose ways to improve the efficiency of REMD simulations: these include increasing the number of exchanges attempted after a specified MD period up to the fast exchange limit, and/or adjusting the MD period to allow sufficient internal relaxation within each thermodynamic state. Although ASyncRE simulations generally require long MD periods (> picoseconds) per replica exchange cycle to minimize the overhead imposed by heterogeneous computing networks, we found that it is possible to reach an efficiency similar to conventional synchronous REMD, by optimizing the combination of the MD period and the number of exchanges attempted per cycle. PMID:26149645
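    A minimal sketch of the exchange step an (A)SyncRE cycle performs after each MD period: Metropolis swaps between neighbouring thermodynamic states, repeated many times per cycle to approach the fast-exchange limit mentioned above. The energies and inverse temperatures are illustrative scalars, not BEDAM binding energies or the paper's replica setup.

```python
import math
import random

def attempt_swaps(energies, betas, n_attempts):
    """Metropolis exchanges between neighbouring states. energies[k] is the
    potential energy of the configuration currently held by state k; betas[k]
    is that state's inverse temperature. Swap acceptance uses the standard
    criterion min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    n = len(energies)
    for _ in range(n_attempts):
        i = random.randrange(n - 1)
        j = i + 1
        delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
        if delta >= 0 or random.random() < math.exp(delta):
            energies[i], energies[j] = energies[j], energies[i]
    return energies

# Illustrative 4-state ladder; many attempts per cycle approach the fast-exchange limit.
print(attempt_swaps(energies=[-105.0, -98.0, -91.0, -87.0],
                    betas=[1.68, 1.52, 1.38, 1.25], n_attempts=1000))
```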

  5. Exploring perceptually similar cases with multi-dimensional scaling

    NASA Astrophysics Data System (ADS)

    Wang, Juan; Yang, Yongyi; Wernick, Miles N.; Nishikawa, Robert M.

    2014-03-01

    Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their image features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions in the plot. Moreover, cases of the same pathology tend to be clustered together locally, and neighboring cases (which are more similar) also tend to be similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs of the cases, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.

  6. Development of a Multi-Dimensional Scale for PDD and ADHD

    ERIC Educational Resources Information Center

    Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya

    2011-01-01

    A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…

  7. Coupling visualization and data analysis for knowledge discovery from multi-dimensional scientific data

    PubMed Central

    Rübel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keränen, Soile V. E.; Knowles, David W.; Hendriks, Cris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2013-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies - such as efficient data management - supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach. PMID:23762211

  8. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    SciTech Connect

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat,; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-06-08

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies - such as efficient data management - supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  9. Coupling visualization and data analysis for knowledge discovery from multi-dimensional scientific data.

    PubMed

    Rübel, Oliver; Ahern, Sean; Bethel, E Wes; Biggin, Mark D; Childs, Hank; Cormier-Michel, Estelle; Depace, Angela; Eisen, Michael B; Fowlkes, Charless C; Geddes, Cameron G R; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keränen, Soile V E; Knowles, David W; Hendriks, Cris L Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H; Wu, Kesheng

    2010-05-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies - such as efficient data management - supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  10. Comprehensive multi-dimensional liquid chromatographic separation in biomedical and pharmaceutical analysis: a review.

    PubMed

    Dixon, Steven P; Pitfield, Ian D; Perrett, David

    2006-01-01

    'Multi-dimensional' liquid separations have a history almost as long as chromatography. In multi-dimensional chromatography the sample is subjected to more than one separation mechanism; each mechanism is considered an independent separation dimension. The separations can be carried out either offline via fraction collection, or directly coupled online. Early multi-dimensional separations using combinations of paper chromatography, electrophoresis and gels, in both planar and columnar modes are reviewed. Developments in HPLC have increased the number of measurable analytes in ever more complex matrices, and this has led to the concept of 'global metabolite profiling'. This review focuses on the theory and practice of modern 'comprehensive' multi-dimensional liquid chromatography when applied to biomedical and pharmaceutical analysis.

  11. How Fitch-Margoliash Algorithm can Benefit from Multi Dimensional Scaling

    PubMed Central

    Lespinats, Sylvain; Grando, Delphine; Maréchal, Eric; Hakimi, Mohamed-Ali; Tenaillon, Olivier; Bastien, Olivier

    2011-01-01

    Whatever the phylogenetic method, genetic sequences are often described as strings of characters, so molecular sequences can be viewed as elements of a multi-dimensional space. As a consequence, studying motion in this space (i.e., the evolutionary process) must deal with the unusual features of high-dimensional spaces, such as the concentration of measure phenomenon. To study how these features might influence phylogeny reconstructions, we examined a particularly popular method: the Fitch-Margoliash algorithm, which belongs to the Least Squares methods. We show that the Least Squares methods are closely related to Multi Dimensional Scaling. Indeed, the criteria for Fitch-Margoliash and Sammon’s mapping are somewhat similar. However, the prolific research in Multi Dimensional Scaling has produced methods that clearly outperform Sammon’s mapping, and Least Squares methods for tree reconstruction can now take advantage of these improvements. “False neighborhoods” and “tears” are the two main risks in the dimensionality reduction field: a “false neighborhood” occurs when data that are widely separated in the original space are found close together in the representation space, while neighboring data that are displayed in remote positions constitute a “tear”. To address this problem, we took advantage of the concepts of “continuity” and “trustworthiness” in the tree reconstruction field, which limit the risk of “false neighborhoods” and “tears”. We also point out the concentration of measure phenomenon as a source of error and introduce new criteria to build phylogenies with improved preservation of distances and robustness. The authors and the Evolutionary Bioinformatics Journal dedicate this article to the memory of Professor W.M. Fitch (1929–2011). PMID:21697992
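    To make the stated connection concrete, a small sketch of Sammon's mapping criterion alongside the Fitch-Margoliash weighted least-squares criterion (with tree path lengths in place of embedding distances and 1/D^2 weights). This illustrates the analogy only; the new criteria introduced by the authors are not reproduced.

```python
import numpy as np

def sammon_stress(D_orig, D_embed, eps=1e-12):
    """Sammon's mapping criterion over all pairs i < j."""
    iu = np.triu_indices_from(D_orig, k=1)
    d, dhat = D_orig[iu], D_embed[iu]
    return np.sum((d - dhat) ** 2 / (d + eps)) / np.sum(d)

def fitch_margoliash_stress(D_orig, D_tree):
    """Fitch-Margoliash least-squares criterion with 1/D^2 weights,
    D_tree being path lengths on a candidate tree."""
    iu = np.triu_indices_from(D_orig, k=1)
    d, dhat = D_orig[iu], D_tree[iu]
    return np.sum(((d - dhat) / d) ** 2)

# Toy example: four taxa and a slightly perturbed "fitted" distance matrix.
D = np.array([[0, 2, 4, 6], [2, 0, 4, 6], [4, 4, 0, 6], [6, 6, 6, 0]], float)
D_fit = D + 0.1 * (1 - np.eye(4))
print(sammon_stress(D, D_fit), fitch_margoliash_stress(D, D_fit))
```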

  12. Theme section: Multi-dimensional modelling, analysis and visualization

    NASA Astrophysics Data System (ADS)

    Guilbert, Éric; Çöltekin, Arzu; Castro, Francesc Antón; Pettit, Chris

    2016-07-01

    Spatial data are now collected and processed in larger amounts, and used by larger populations, than ever before. While most geospatial data have traditionally been recorded as two-dimensional data, the evolution of data collection methods and user demands have led to data beyond the two dimensions, describing complex multidimensional phenomena. An example of the relevance of multidimensional modelling is seen in the development of urban modelling, where several dimensions have been added to the traditional 2D map representation (Sester et al., 2011). These obviously include the third spatial dimension (Biljecki et al., 2015) and the temporal dimension, but also the scale dimension (Van Oosterom and Stoter, 2010) and, as mentioned by Lu et al. (2016), multi-spectral and multi-sensor data. Such a view organises multidimensional data around these different axes, and it is time to explore each axis, as the availability of unprecedented amounts of new data demands new solutions. The availability of such large amounts of data induces an acute need for new approaches to assist with their dissemination, visualisation, and analysis by end users. Several issues need to be considered in order to provide a meaningful representation and assist in data visualisation and mining, modelling and analysis, such as data structures allowing representation at different scales or in different thematic contexts.

  13. A Multi-Dimensional Cognitive Analysis of Undergraduate Physics Students' Understanding of Heat Conduction

    ERIC Educational Resources Information Center

    Chiou, Guo-Li; Anderson, O. Roger

    2010-01-01

    This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…

  14. Semantic Differential Scale Method Can Reveal Multi-Dimensional Aspects of Mind Perception.

    PubMed

    Takahashi, Hideyuki; Ban, Midori; Asada, Minoru

    2016-01-01

    As humans, we tend to perceive minds in both living and non-living entities, such as robots. Using a questionnaire developed in a previous mind perception study, the authors found that perceived minds could be located on two dimensions, "experience" and "agency." This questionnaire allowed the assessment of how we perceive the minds of various entities from a multi-dimensional point of view. In this questionnaire, subjects had to evaluate explicit mental capacities of target characters (e.g., the capacity to feel hunger). However, we sometimes perceive minds in non-living entities even though we cannot attribute such evidently biological capacities to them. In this study, we performed a large-scale web survey to assess mind perception using the semantic differential scale method. We found that two mind dimensions, "emotion" and "intelligence," corresponded respectively to the two mind dimensions (experience and agency) proposed in the previous mind perception study, without having to ask about specific mental capacities. We believe that the semantic differential scale is a useful method for assessing the dimensions of mind perception, especially for non-living entities to which biological capacities are hard to attribute.

  15. Semantic Differential Scale Method Can Reveal Multi-Dimensional Aspects of Mind Perception

    PubMed Central

    Takahashi, Hideyuki; Ban, Midori; Asada, Minoru

    2016-01-01

    As humans, we tend to perceive minds in both living and non-living entities, such as robots. Using a questionnaire developed in a previous mind perception study, the authors found that perceived minds could be located on two dimensions, “experience” and “agency.” This questionnaire allowed the assessment of how we perceive the minds of various entities from a multi-dimensional point of view. In this questionnaire, subjects had to evaluate explicit mental capacities of target characters (e.g., the capacity to feel hunger). However, we sometimes perceive minds in non-living entities even though we cannot attribute such evidently biological capacities to them. In this study, we performed a large-scale web survey to assess mind perception using the semantic differential scale method. We found that two mind dimensions, “emotion” and “intelligence,” corresponded respectively to the two mind dimensions (experience and agency) proposed in the previous mind perception study, without having to ask about specific mental capacities. We believe that the semantic differential scale is a useful method for assessing the dimensions of mind perception, especially for non-living entities to which biological capacities are hard to attribute. PMID:27853445

  16. Empirical Analysis of Various Multi-Dimensional Knapsack Heuristics

    DTIC Science & Technology

    2002-03-01

    ...(1987) and Glover (1977) use a multiplier method and surrogate constraints to transform the MKP into a knapsack problem whose solution provides a bound... solution, they generate logic cuts based on analysis before solving the problem using branch-and-bound. The dual surrogate constraint provides a useful...
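    Since the record text is heavily truncated, here is a small hedged sketch of the kind of surrogate-constraint heuristic it refers to: collapse the multiple knapsack constraints into one weighted "surrogate" resource and fill greedily by value-to-surrogate-weight ratio. The multiplier choice and the toy data are illustrative, not the report's algorithms or bounds.

```python
import numpy as np

def greedy_mkp(values, weights, capacities):
    """Greedy surrogate-ratio heuristic for the multi-dimensional knapsack
    problem: rank items by value per unit of aggregated resource use and add
    them while every constraint still holds."""
    values = np.asarray(values, float)
    weights = np.asarray(weights, float)        # shape (n_items, n_constraints)
    capacities = np.asarray(capacities, float)

    # Simple surrogate multipliers: relative tightness of each constraint.
    multipliers = weights.sum(axis=0) / capacities
    ratio = values / (weights @ multipliers)

    chosen, used = [], np.zeros_like(capacities)
    for i in np.argsort(-ratio):
        if np.all(used + weights[i] <= capacities):
            chosen.append(i)
            used += weights[i]
    return chosen, values[chosen].sum() if chosen else 0.0

print(greedy_mkp(values=[10, 13, 7, 8],
                 weights=[[2, 3], [4, 1], [1, 2], [3, 3]],
                 capacities=[6, 5]))
```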

  17. Map Building By Multi-dimensional Scaling of Co-visibility Data

    NASA Astrophysics Data System (ADS)

    Yairi, Takehisa; Maeno, Toshiaki

    Covisibility-based mapping is a paradigm for robotic map building research in which a mobile robot estimates multiple object positions only from "covisibility" information, i.e., "which objects were recognized at the same time". In previous studies on this problem, a solution was proposed based on a combination of a heuristic - "closely located objects are likely to be seen simultaneously more often than distant objects" - and Multi-Dimensional Scaling (MDS), and it was shown that qualitative spatial relationships among objects are learned with high accuracy by this method. However, the theoretical validity of the heuristic has not been sufficiently discussed in these studies. Moreover, the existing method has the defect that the quantitative accuracy of the built maps is very low. In this paper, we first prove that the heuristic is generally valid under a certain condition, and then present several enhancements to the original method in order to improve the quantitative accuracy of the maps. In the experiments, these enhancements were found to be quite effective.

  18. A multi scale multi-dimensional thermo electrochemical modelling of high capacity lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tourani, Abbas; White, Peter; Ivey, Paul

    2014-06-01

    Lithium iron phosphate (LFP) and lithium manganese oxide (LMO) are competitive and complementary to each other as cathode materials for lithium-ion batteries, especially for use in electric vehicles. A multi-scale multi-dimensional physics-based model is proposed in this paper to study the thermal behaviour of the two lithium-ion chemistries. The model consists of two sub-models, a one-dimensional (1D) electrochemical sub-model and a two-dimensional (2D) thermo-electric sub-model, which are coupled and solved concurrently. The 1D model predicts the heat generation rate (Qh) and voltage (V) of the battery cell through different load cycles. The 2D model of the battery cell accounts for the temperature distribution and current distribution across the surface of the battery cell. The two cells are examined experimentally through 90 h load cycles including high/low charge/discharge rates. The experimental results are compared with the model results and they are in good agreement. The results presented in this paper verify the cells' temperature behaviour at different operating conditions, which will support the design of a cost-effective thermal management system for the battery pack.
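    A minimal sketch of the thermal half of such a coupled model: an explicit finite-difference update of a 2-D cell-surface temperature field driven by a heat-generation term Qh that, in the paper's scheme, would come from the 1-D electrochemical sub-model at each step. Geometry, material properties, and the uniform Qh below are placeholders, not LFP/LMO cell parameters.

```python
import numpy as np

# Illustrative material properties and grid (placeholders, not cell data).
rho, cp, k = 2100.0, 1000.0, 25.0            # density, heat capacity, conductivity
dx = dy = 1e-3                               # grid spacing (m)
dt = 0.2 * rho * cp * dx**2 / (4 * k)        # comfortably inside the explicit stability limit

T = np.full((60, 100), 298.15)               # initial temperature field (K)
Qh = np.full_like(T, 5e4)                    # heat generation rate (W m^-3) from the 1D sub-model

def step(T, Qh):
    """One explicit update of dT/dt = (k * laplacian(T) + Qh) / (rho * cp).
    Border cells are held at the initial temperature (crude isothermal boundary)."""
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = ((T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2 +
                       (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2)
    return T + dt * (k * lap + Qh) / (rho * cp)

for _ in range(1000):
    T = step(T, Qh)
print(T.max() - 298.15)                      # temperature rise after the simulated interval
```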

  19. Method of multi-dimensional moment analysis for the characterization of signal peaks

    SciTech Connect

    Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

    2012-10-23

    A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
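    A hedged sketch of statistical moment analysis of a single signal peak - area, centroid, and variance, from which Peclet-like sharpness figures can be formed. The synthetic Gaussian peak and the mean-squared-over-variance score are illustrative only and do not reproduce the patented two-dimensional scoring.

```python
import numpy as np

def peak_moments(t, y):
    """Zeroth moment (area, up to the sampling interval), first moment
    (centroid) and second central moment (variance) of a sampled peak."""
    m0 = np.sum(y)
    m1 = np.sum(t * y) / m0
    mu2 = np.sum((t - m1) ** 2 * y) / m0
    return m0, m1, mu2

t = np.linspace(0, 10, 2001)
y = np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)     # synthetic Gaussian-like desorption peak
area, mean, var = peak_moments(t, y)
print(f"Peclet-like sharpness mean^2/var = {mean**2 / var:.1f}")
```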

  20. On-the-fly analysis of multi-dimensional rasters in a GIS

    NASA Astrophysics Data System (ADS)

    Abdul-Kadar, F.; Xu, H.; Gao, P.

    2016-04-01

    Geographic Information Systems and other mapping applications that specialize in image analysis routinely process high-dimensional gridded rasters as multivariate data cubes. Frameworks responsible for processing image data within these applications suffer from a combination of key shortcomings: inefficiencies stemming from intermediate results being stored on disk, the lack of versatility from disparate tools that don't work in unison, or the poor scalability with increasing volume and dimensionality of the data. We present raster functions as a powerful mechanism for processing and analyzing multi-dimensional rasters designed to overcome these crippling issues. A raster function accepts multivariate hypercubes and processing parameters as input and produces one output raster. Function chains and their parameterized form, function templates, represent a complex image processing operation constructed by composing simpler raster functions. We discuss extensibility of the framework via Python, portability of templates via XML, and dynamic filtering of data cubes using SQL. This paper highlights how ArcGIS employs raster functions in its mission to build actionable information from science and geographic data—by shrinking the lag between the acquisition of raw multi-dimensional raster data and the ultimate dissemination of derived image products. ArcGIS has a mature raster I/O pipeline based on GDAL, and it manages gridded multivariate multi-dimensional cubes in mosaic datasets stored within a geodatabase atop an RDBMS. Bundled with raster functions, we show those capabilities make possible up-to-date maps that are driven by distributed geoanalytics and powerful visualizations against large volumes of near real-time gridded data.

  1. Overview of Computer-Aided Engineering of Batteries and Introduction to Multi-Scale, Multi-Dimensional Modeling of Li-Ion Batteries (Presentation)

    SciTech Connect

    Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

    2012-05-01

    This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

  2. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Nguyen, Hoa T.; Stone, Daithi; Bethel, E. Wes

    2016-01-01

    An ongoing challenge in the visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or susceptibility to error. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
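    A small sketch of the underlying idea, assuming simple block-wise reduction: a reduced-resolution field built with a variance statistic keeps the spatial variation signal that plain averaging smooths away. The block size and synthetic field are arbitrary; this is not the authors' projection machinery.

```python
import numpy as np

def block_reduce(a, factor, stat):
    """Reduce a 2-D field by `factor` in each direction, applying `stat`
    (e.g. np.mean or np.var) over non-overlapping blocks."""
    ny, nx = (s // factor for s in a.shape)
    blocks = a[:ny * factor, :nx * factor].reshape(ny, factor, nx, factor)
    return stat(blocks, axis=(1, 3))

rng = np.random.default_rng(1)
field = rng.normal(size=(512, 512)) * np.linspace(0.1, 2.0, 512)  # spatially varying variance

low_res_mean = block_reduce(field, 8, np.mean)   # averaging flattens the variation signal
low_res_var = block_reduce(field, 8, np.var)     # variance preserves it
print(low_res_mean.std(), low_res_var.mean(axis=0)[:5])
```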

  3. Assessing Children's Homework Performance: Development of Multi-Dimensional, Multi-Informant Rating Scales.

    PubMed

    Power, Thomas J; Dombrowski, Stefan C; Watkins, Marley W; Mautone, Jennifer A; Eagle, John W

    2007-06-01

    Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students attending two school districts in the Northeast. Parents completed the 36-item Homework Performance Questionnaire - Parent Scale (HPQ-PS). Teachers completed the 22-item teacher scale (HPQ-TS) for each student for whom the HPQ-PS had been completed. A common factor analysis with principal axis extraction and promax rotation was used to analyze the findings. The results of the factor analysis of the HPQ-PS revealed three salient and meaningful factors: student task orientation/efficiency, student competence, and teacher support. The factor analysis of the HPQ-TS uncovered two salient and substantive factors: student responsibility and student competence. The findings of this study suggest that the HPQ is a promising set of measures for assessing student homework functioning and contextual factors that may influence performance. Directions for future research are presented.
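    A hedged sketch of the reported analytic choices (common factor analysis with principal-axis extraction and promax rotation), assuming the factor_analyzer package; the synthetic item responses below are placeholders, not the published HPQ-PS items or data.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Synthetic stand-in for item-level HPQ-PS responses (36 items, 163 raters).
rng = np.random.default_rng(0)
latent = rng.normal(size=(163, 3))                                  # three hypothetical factors
true_loadings = rng.uniform(0.4, 0.8, size=(36, 3)) * (rng.random((36, 3)) < 0.4)
items = pd.DataFrame(latent @ true_loadings.T + rng.normal(scale=0.6, size=(163, 36)))

# Common factor analysis mirroring the abstract's analytic choices.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2).head())
```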

  4. Effective use of metadata in the integration and analysis of multi-dimensional optical data

    NASA Astrophysics Data System (ADS)

    Pastorello, G. Z.; Gamon, J. A.

    2012-12-01

    Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. The

  5. Seismic fragility analysis of highway bridges considering multi-dimensional performance limit state

    NASA Astrophysics Data System (ADS)

    Wang, Qi'ang; Wu, Ziyan; Liu, Shukui

    2012-03-01

    Fragility analysis for highway bridges has become increasingly important in the risk assessment of highway transportation networks exposed to seismic hazards. This study introduces a methodology to calculate fragility that considers multi-dimensional performance limit state parameters and makes a first attempt to develop fragility curves for a multispan continuous (MSC) concrete girder bridge considering two performance limit state parameters: column ductility and transverse deformation in the abutments. The main purpose of this paper is to show that the performance limit states, which are compared with the seismic response parameters in the calculation of fragility, should be properly modeled as randomly interdependent variables instead of deterministic quantities. The sensitivity of fragility curves is also investigated when the dependency between the limit states is different. The results indicate that the proposed method can be used to describe the vulnerable behavior of bridges which are sensitive to multiple response parameters and that the fragility information generated by this method will be more reliable and likely to be implemented into transportation network loss estimation.
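    For orientation, a minimal sketch of fitting a single lognormal fragility curve by maximum likelihood; the intensity measures and exceedance outcomes are synthetic, and a faithful treatment of the paper's approach would first combine column ductility and abutment deformation, modeled as interdependent random limit states, into one exceedance indicator.

```python
import numpy as np
from scipy import stats, optimize

# Synthetic stand-in for response-history results: intensity measure (e.g. PGA in g)
# and a binary limit-state exceedance outcome for each analysis.
rng = np.random.default_rng(42)
im = rng.uniform(0.05, 1.5, size=200)
true_theta, true_beta = 0.6, 0.45
failed = (rng.random(200) < stats.norm.cdf(np.log(im / true_theta) / true_beta)).astype(float)

def neg_log_lik(log_params):
    """Negative log-likelihood of P(exceed | IM) = Phi(ln(IM/theta)/beta)."""
    theta, beta = np.exp(log_params)                # log-parametrization keeps both positive
    p = np.clip(stats.norm.cdf(np.log(im / theta) / beta), 1e-9, 1 - 1e-9)
    return -np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

res = optimize.minimize(neg_log_lik, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
theta_hat, beta_hat = np.exp(res.x)                 # fitted median capacity and dispersion
print(theta_hat, beta_hat)
```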

  6. Fast Multi-dimensional Ensemble Empirical Mode Decomposition for the analysis of Big Spatiotemporal Data Sets

    NASA Astrophysics Data System (ADS)

    Wu, Z.

    2015-12-01

    In this big data era, it is more urgent than ever to solve two major issues: (1) fast data transmission methods that can facilitate access to data from non-local sources, and (2) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and fast algorithms. In this paper, we introduce the recently developed adaptive and spatiotemporally local analysis method, namely the fast multi-dimensional ensemble empirical mode decomposition (MEEMD), for the analysis of large spatiotemporal datasets. The original MEEMD uses ensemble empirical mode decomposition (EEMD) to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the principal component analysis/empirical orthogonal function (PCA/EOF) expression for spatiotemporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. In addition, we explain the basic principles behind the fast MEEMD, which decomposes PCs instead of the original grid-wise time series to speed up computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (1) compress data with a compression rate of one to two orders of magnitude, and (2) speed up the MEEMD algorithm by one to two orders of magnitude.
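    A small sketch of the PCA/EOF step the fast MEEMD relies on: keep the leading principal components of the space-time field, run EEMD on those few PC time series instead of on every grid point, then project back with the EOFs. The synthetic field, component count, and compression-ratio bookkeeping are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical gridded anomaly field: n_time x n_gridpoints.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10000))

pca = PCA(n_components=50)              # keep the leading EOFs/PCs only
pcs = pca.fit_transform(X)              # n_time x 50 principal component time series
eofs = pca.components_                  # 50 x n_gridpoints spatial patterns

# Lossy-compressed reconstruction; EEMD would then be applied to the 50 PC
# time series (instead of 10000 grid-point series) before projecting back.
X_approx = pcs @ eofs + pca.mean_
compression = X.size / (pcs.size + eofs.size + pca.mean_.size)
print(f"approximate compression ratio: {compression:.1f}x")
```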

  7. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling

    SciTech Connect

    Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus

    2014-03-01

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.

  8. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling.

    PubMed

    Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus

    2013-07-11

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our bi-scale framework distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.

  9. Multi-dimensional reliability assessment of fractal signature analysis in an outpatient sports medicine population.

    PubMed

    Jarraya, Mohamed; Guermazi, Ali; Niu, Jingbo; Duryea, Jeffrey; Lynch, John A; Roemer, Frank W

    2015-11-01

    The aim of this study was to test the reproducibility of fractal signature analysis (FSA) in a young, active patient population taking into account several parameters including intra- and inter-reader placement of regions of interest (ROIs) as well as various aspects of projection geometry. In total, 685 patients were included (135 athletes and 550 non-athletes, 18-36 years old). Regions of interest (ROI) were situated beneath the medial tibial plateau. The reproducibility of texture parameters was evaluated using intraclass correlation coefficients (ICC). Multi-dimensional assessment included: (1) anterior-posterior (A.P.) vs. posterior-anterior (P.A.) (Lyon-Schuss technique) views on 102 knees; (2) unilateral (single knee) vs. bilateral (both knees) acquisition on 27 knees (acquisition technique otherwise identical; same A.P. or P.A. view); (3) repetition of the same image acquisition on 46 knees (same A.P. or P.A. view, and same unilateral or bilateral acquisition); and (4) intra- and inter-reader reliability with repeated placement of the ROIs in the subchondral bone area on 99 randomly chosen knees. ICC values on the reproducibility of texture parameters for A.P. vs. P.A. image acquisitions for horizontal and vertical dimensions combined were 0.72 (95% confidence interval (CI) 0.70-0.74) ranging from 0.47 to 0.81 for the different dimensions. For unilateral vs. bilateral image acquisitions, the ICCs were 0.79 (95% CI 0.76-0.82) ranging from 0.55 to 0.88. For the repetition of the identical view, the ICCs were 0.82 (95% CI 0.80-0.84) ranging from 0.67 to 0.85. Intra-reader reliability was 0.93 (95% CI 0.92-0.94) and inter-observer reliability was 0.96 (95% CI 0.88-0.99). A decrease in reliability was observed with increasing voxel sizes. Our study confirms excellent intra- and inter-reader reliability for FSA; however, results seem to be affected by acquisition technique, an effect that has not been previously recognized.
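    For reference, a minimal sketch of a single-measure intraclass correlation computed from a one-way ANOVA decomposition, applied to hypothetical repeated acquisitions; the study's specific ICC variants and confidence intervals are not reproduced here.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects, single-measure ICC(1,1).
    scores: array of shape (n_subjects, k_measurements), e.g. a texture
    parameter from repeated acquisitions of the same knee."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)

    # Between- and within-subject mean squares from one-way ANOVA.
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical: a fractal-signature value from two repeated acquisitions of 46 knees.
rng = np.random.default_rng(0)
truth = rng.normal(2.8, 0.1, size=46)
repeat = truth[:, None] + rng.normal(0, 0.03, size=(46, 2))
print(f"ICC(1,1) = {icc_oneway(repeat):.2f}")
```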

  10. Similarity from Multi-Dimensional Scaling: Solving the Accuracy and Diversity Dilemma in Information Filtering

    PubMed Central

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

    Recommender systems are designed to assist individual users in navigating through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243

  11. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    PubMed

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

    Recommender systems are designed to assist individual users in navigating through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term.

  12. Nitrogen deposition and multi-dimensional plant diversity at the landscape scale

    PubMed Central

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat

    2015-01-01

    Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1 km2 plots covering most habitat types of Central Europe and an elevational range of 2900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale. PMID:26064640

  13. The multi-dimensional neighbourhood and health: a cross-sectional analysis of the Scottish Household Survey, 2001.

    PubMed

    Parkes, Alison; Kearns, Ade

    2006-03-01

    Neighbourhoods may influence the health of individual residents in different ways: via the social and physical environment, as well as through facilities and services. Not all factors may be equally important for all population subgroups. A cross-sectional analysis of the Scottish Household Survey 2001 examined a range of neighbourhood factors for links with three health outcomes and two health-related behaviours. The results support the hypothesis that the neighbourhood has a multi-dimensional impact on health. There was also some evidence that the relationship between neighbourhood factors and health varied according to the population subgroup, although not in a consistent manner.

  14. Predicting the redox state and secondary structure of cysteine residues using multi-dimensional classification analysis of NMR chemical shifts.

    PubMed

    Wang, Ching-Cheng; Lai, Wen-Chung; Chuang, Woei-Jer

    2016-09-01

    A tool for predicting the redox state and secondary structure of cysteine residues using multi-dimensional analyses of different combinations of nuclear magnetic resonance (NMR) chemical shifts has been developed. A data set of cysteine [Formula: see text], (13)C(α), (13)C(β), (1)H(α), (1)H(N), and (15)N(H) chemical shifts was created, classified according to redox state and secondary structure, using a library of 540 re-referenced BioMagResBank (BMRB) entries. Multi-dimensional analyses of three, four, five, and six chemical shifts were used to derive rules for predicting the structural states of cysteine residues. The results from 60 BMRB entries containing 122 cysteines showed that four-dimensional analysis of the C(α), C(β), H(α), and N(H) chemical shifts had the highest prediction accuracies, 100% and 95.9% for the redox state and secondary structure, respectively. The prediction of secondary structure using 3D, 5D, and 6D analyses had an accuracy of ~90%, suggesting that the H(N) and [Formula: see text] chemical shifts may be noisy and worsen the discrimination. A web server (6DCSi) was established to enable users to submit NMR chemical shifts, either in BMRB or key-in formats, for prediction. 6DCSi displays predictions using sets of 3, 4, 5, and 6 chemical shifts, which shows their consistency and allows users to draw their own conclusions. This web-based tool can be used to rapidly obtain structural information regarding cysteine residues directly from experimental NMR data.
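    A toy sketch of the idea behind the 4-D classification: call the redox state of a cysteine by its distance to class centroids in (Cα, Cβ, Hα, N) chemical-shift space. The centroid values are rough literature-style numbers, not the statistics behind 6DCSi, and the real tool also predicts secondary structure.

```python
import numpy as np

# Illustrative class centroids (ppm) in (Ca, Cb, Ha, N) order; not 6DCSi values.
CENTROIDS = {
    "reduced":  np.array([58.5, 28.0, 4.55, 120.5]),
    "oxidized": np.array([55.5, 40.5, 4.75, 121.5]),
}

def predict_redox(ca, cb, ha, n):
    """Nearest-centroid call on the four shifts the study found most
    discriminative; a stand-in for the paper's 4-D classification rules."""
    x = np.array([ca, cb, ha, n])
    return min(CENTROIDS, key=lambda k: np.linalg.norm(x - CENTROIDS[k]))

print(predict_redox(ca=56.1, cb=41.2, ha=4.8, n=122.0))   # -> "oxidized"
```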

  15. magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation

    NASA Astrophysics Data System (ADS)

    Angleraud, Christophe

    2014-06-01

    The ever-increasing amount of data and processing capability - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-disciplinary fertilization. On the other hand, Geographic Information Systems allow attractive and visually appealing maps to be built, but these often become confusing as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, to allow analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques that allow spatio-temporal points of interest to be detected through the integration of moving images by the human brain. Magellium has been involved in high-performance image processing chains for satellite image processing, as well as scientific signal analysis and geographic information management, since its creation in 2003. We believe that recent work on big data, GPUs and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing and environment mapping and understanding. The magHD (Magellium Hyper-Dimension) project aims at developing software solutions that bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small- to medium-scale clusters with expansion capabilities to large cloud-based clusters.

  16. Pedagogic discourse in introductory classes: Multi-dimensional analysis of textbooks and lectures in biology and macroeconomics

    NASA Astrophysics Data System (ADS)

    Carkin, Susan

    The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning - and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures - more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being

  17. Multi-dimensional TOF-SIMS analysis for effective profiling of disease-related ions from the tissue surface

    NASA Astrophysics Data System (ADS)

    Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol

    2015-06-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states.

  18. Multi-dimensional TOF-SIMS analysis for effective profiling of disease-related ions from the tissue surface.

    PubMed

    Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol

    2015-06-05

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states.

  19. Contrastive analysis of three parallel modes in multi-dimensional dynamic programming and its application in cascade reservoirs operation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanke; Jiang, Zhiqiang; Ji, Changming; Sun, Ping

    2015-10-01

    The "curse of dimensionality" of dynamic programming (DP) has always been a great challenge to the cascade reservoirs operation optimization (CROO) because computer memory and computational time increase exponentially with the increasing number of reservoirs. It is an effective measure to combine DP with the parallel processing technology to improve the performance. This paper proposes three parallel modes for multi-dimensional dynamic programming (MDP) based on .NET4 Parallel Extensions, i.e., the stages parallel mode, state combinations parallel mode and hybrid parallel mode. A cascade reservoirs of Li Xiangjiang River in China is used as the study instance in this paper, and a detailed contrastive analysis of the three parallel modes on run-time, parallel acceleration ratio, parallel efficiency and memory usage has been implemented based on the parallel computing results. Results show that all the three parallel modes can effectively shorten the run-time so that to alleviate the "curse of dimensionality" of MDP, but relatively, the state combinations parallel mode is the optimal, the hybrid parallel is suboptimal and the stages parallel mode is poor.

  20. Multi-Dimensional Combustion Instability Analysis of Solid Propellant Rocket Motors.

    DTIC Science & Technology

    2014-09-26

    AFOSR-TR-85-0567: Multi-Dimensional Combustion Instability Analysis of Solid Propellant Rocket Motors ... This research was motivated by the need for improvement of the current practice in combustion instability analysis of solid propellant rocket motors ... T. J. Chung, Ph.D., Department of Mechanical Engineering, The University of Alabama in Huntsville, Huntsville

  1. A multi-dimensional analysis of cue-elicited craving in heavy smokers and tobacco chippers

    PubMed Central

    SAYETTE, MICHAEL A.; MARTIN, CHRISTOPHER S.; WERTZ, JOAN M.; SHIFFMAN, SAUL; PERROTT, MICHAEL A.

    2009-01-01

    Aims This research examined the performance of a broad range of measures posited to relate to smoking craving. Design Heavy smokers and tobacco chippers, who were either deprived of smoking or not for 7 hours, were exposed to both smoking (a lit cigarette) and control cues. Participants Smokers not currently interested in trying to quit smoking (n = 127) were recruited. Heavy smokers (n = 67) averaged smoking at least 21 cigarettes/day and tobacco chippers (n = 60) averaged 1–5 cigarettes on at least 2 days/week. Measurements Measures included urge rating scales and magnitude estimations, a rating of affective valence, a behavioral choice task that assessed perceived reinforcement value of smoking, several smoking-related judgement tasks and a measure of cognitive resource allocation. Findings Results indicated that both deprivation state and smoker type tended to affect responses across these measurement domains. Conclusions Findings support the use of several novel measures of craving-related processes in smokers. PMID:11571061

  2. A multi-dimensional functional principal components analysis of EEG data.

    PubMed

    Hasenstab, Kyle; Scheffler, Aaron; Telesca, Donatello; Sugar, Catherine A; Jeste, Shafali; DiStefano, Charlotte; Şentürk, Damla

    2017-01-10

    The electroencephalography (EEG) data created in event-related potential (ERP) experiments have a complex high-dimensional structure. Each stimulus presentation, or trial, generates an ERP waveform which is an instance of functional data. The experiments are made up of sequences of multiple trials, resulting in longitudinal functional data and moreover, responses are recorded at multiple electrodes on the scalp, adding an electrode dimension. Traditional EEG analyses involve multiple simplifications of this structure to increase the signal-to-noise ratio, effectively collapsing the functional and longitudinal components by identifying key features of the ERPs and averaging them across trials. Motivated by an implicit learning paradigm used in autism research in which the functional, longitudinal, and electrode components all have critical interpretations, we propose a multidimensional functional principal components analysis (MD-FPCA) technique which does not collapse any of the dimensions of the ERP data. The proposed decomposition is based on separation of the total variation into subject and subunit level variation which are further decomposed in a two-stage functional principal components analysis. The proposed methodology is shown to be useful for modeling longitudinal trends in the ERP functions, leading to novel insights into the learning patterns of children with Autism Spectrum Disorder (ASD) and their typically developing peers as well as comparisons between the two groups. Finite sample properties of MD-FPCA are further studied via extensive simulations.
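    As a loose illustration of the functional-PCA building block underlying MD-FPCA (collapsed here to a single electrode and a single stage, unlike the two-stage decomposition described above), the sketch below extracts principal component functions and per-trial scores from simulated ERP trials; the synthetic data and all names are assumptions.

```python
# Sketch: basic functional PCA on simulated single-electrode ERP trials (trials x time).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 60, 250
t = np.linspace(0.0, 1.0, n_time)

# synthetic ERP: a peak whose amplitude drifts across trials (a learning-like trend) plus noise
amplitude = 1.0 + 0.01 * np.arange(n_trials)
waveform = np.exp(-((t - 0.35) / 0.05) ** 2)
trials = amplitude[:, None] * waveform[None, :] + 0.2 * rng.standard_normal((n_trials, n_time))

# eigenfunctions of the empirical covariance via SVD of the centered trial matrix
centered = trials - trials.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
pc_functions = Vt[:2]               # leading principal component functions of time
pc_scores = centered @ Vt[:2].T     # per-trial scores; longitudinal trends live here

print("variance explained by first two components:", np.round(explained[:2], 3))
print("trend of first-component score across trials (correlation with trial index):",
      np.round(np.corrcoef(np.arange(n_trials), pc_scores[:, 0])[0, 1], 2))
```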

  3. Documentation, Multi-scale and Multi-dimensional Representation of Cultural Heritage for the Policies of Redevelopment, Development and Regeneration

    NASA Astrophysics Data System (ADS)

    De Masi, A.

    2015-09-01

    The paper describes reading criteria for the documentation of important buildings in Milan, Italy, as a case study of research on the integration of new technologies to obtain 3D multi-scale representations of architecture. In addition, it affords an overview of the current optical 3D measurement sensors and techniques used for surveying, mapping, digital documentation and 3D modeling applications in the Cultural Heritage field. Today new opportunities for integrated management of data are given by multiresolution models, which can be employed for different scales of representation. The goal of multi-scale representations is to provide several representations, each adapted to a different information density with several degrees of detail. The Digital Representation Platform, along with the 3D City Model, is meant to be particularly useful to heritage managers who are developing recording, documentation, and information management strategies appropriate to territories, sites and monuments. The Digital Representation Platform and 3D City Model are central activities in the decision-making process for heritage conservation management and several urban-related problems. This research investigates the integration of the different levels of detail of a 3D City Model into one consistent 4D data model, with the creation of levels of detail using algorithms from a GIS perspective. In particular, the project is based on open source smart systems, and conceptualizes a personalized and contextualized exploration of the Cultural Heritage through an experiential analysis of the territory.

  4. Advanced multi-dimensional deterministic transport computational capability for safety analysis of pebble-bed reactors

    NASA Astrophysics Data System (ADS)

    Tyobeka, Bismark Mzubanzi

    A coupled neutron transport thermal-hydraulics code system with both diffusion and transport theory capabilities is presented. At the heart of the coupled code is a powerful neutronics solver, based on a neutron transport theory approach, powered by the time-dependent extension of the well known DORT code, DORT-TD. DORT-TD uses a fully implicit time integration scheme and is coupled via a general interface to the thermal-hydraulics code THERMIX-DIREKT, an HTR-specific two-dimensional core thermal-hydraulics code. Feedback is accounted for by interpolating multigroup cross sections from pre-generated libraries which are structured for user-specified discrete sets of thermal-hydraulic parameters, e.g. fuel and moderator temperatures. The coupled code system is applied to two HTGR designs, the PBMR 400MW and the PBMR 268MW. Steady-state and several design basis transients are modeled in an effort to assess the adequacy of neutron diffusion theory compared with the more accurate but computationally expensive neutron transport theory. It turns out that there are small but significant differences between the results of the two theories. It is concluded that diffusion theory can be used with a higher degree of confidence in the PBMR as long as more than two energy groups are used and the results are checked against a lower-order transport solution, especially for safety analysis purposes. The end product of this thesis is a high fidelity, state-of-the-art computer code system, with multiple capabilities to analyze all PBMR safety-related transients in an accurate and efficient manner.

  5. Impurity analysis of pure aldrin using heart-cut multi-dimensional gas chromatography-mass spectrometry.

    PubMed

    Li, Xiaomin; Dai, Xinhua; Yin, Xiong; Li, Ming; Zhao, Yingchen; Zhou, Jian; Huang, Ting; Li, Hongmei

    2013-02-15

    Identification and quantification of structurally related impurities is a research focus in the purity assessment of organic compounds. Determination of the purity value and assessment of its uncertainty are also important in metrological research. A method for the determination of structurally related impurities in a pure aldrin sample has been developed using heart-cut multi-dimensional gas chromatography-mass spectrometry (MDGC/MS). Compared to a traditional one-dimensional (1-D) GC system, the two separate columns in the MDGC/MS system can effectively reduce co-elution, enhance separation capability, and thus improve the detectability of trace-level impurities. In addition, the MDGC/MS system was simultaneously equipped with a flame ionization detector (FID) or an electron capture detector (ECD) in the first GC unit and a mass spectrometry (MS) detector in the second GC unit. Therefore, accurate quantitative results for the trace-level impurities can easily be achieved by isolating the principal component onto the second-dimension column using the "heart-cut" process. The mass fractions of structurally related impurities in the aldrin samples obtained using the MDGC/MS system ranged from 6.8×10⁻³ mg g⁻¹ to 26.47 mg g⁻¹, spanning five orders of magnitude, which is hard to realize by means of 1-D GC. Excellent linearity, with correlation coefficients above 0.999, was achieved for each impurity over a wide range of concentrations. Limits of quantification (LOQ) varied from 250 ng g⁻¹ to 330 ng g⁻¹ for FID, and from 1.0 ng g⁻¹ to 2.0 ng g⁻¹ for ECD. The combined standard uncertainty (u(c)) was lower than 0.37 mg g⁻¹ and 0.040 mg g⁻¹ when detected using FID and ECD, respectively. Therefore, the performance of the MDGC/MS method used in this study is fit for the quantitative analysis of trace-level impurities. These results demonstrate that MDGC/MS is extremely suitable for the purity assessment of organic compounds with medium structural complexity and low

  6. Multi-dimensional edge detection operators

    NASA Astrophysics Data System (ADS)

    Youn, Sungwook; Lee, Chulhee

    2014-05-01

    In remote sensing, modern sensors produce multi-dimensional images. For example, hyperspectral images contain hundreds of spectral bands. In many image processing applications, segmentation is an important step. Traditionally, most image segmentation and edge detection methods have been developed for one-dimensional images. For multi-dimensional images, the outputs computed for the individual spectral band images are typically combined under certain rules or using decision fusion. In this paper, we propose a new edge detection algorithm for multi-dimensional images using second-order statistics. First, we reduce the dimension of the input images using principal component analysis. Then we apply multi-dimensional edge detection operators that utilize second-order statistics. Experimental results are promising compared to conventional one-dimensional edge detectors such as the Sobel filter.
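    A minimal sketch of the pipeline described above (dimension reduction by principal component analysis followed by a conventional edge operator), applied to a synthetic hyperspectral cube. The simple gradient-magnitude step stands in for the paper's second-order-statistics operators, and all names and data are illustrative.

```python
# Sketch: PCA-reduce a hyperspectral cube, then run an edge operator on the leading band.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
rows, cols, bands = 64, 64, 50

# synthetic cube: a bright square whose spectral signature differs from the background
cube = 0.05 * rng.standard_normal((rows, cols, bands))
cube[16:48, 16:48, :] += np.linspace(0.2, 1.0, bands)

# PCA across the spectral dimension (pixels as samples, bands as variables)
X = cube.reshape(-1, bands)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = (Xc @ Vt[0]).reshape(rows, cols)     # first principal-component image

# conventional edge detection on the reduced image
gx = ndimage.sobel(pc1, axis=0)
gy = ndimage.sobel(pc1, axis=1)
edges = np.hypot(gx, gy)
print("strongest edge response at pixel:", np.unravel_index(edges.argmax(), edges.shape))
```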

  7. Strong relaxation limit of multi-dimensional isentropic Euler equations

    NASA Astrophysics Data System (ADS)

    Xu, Jiang

    2010-06-01

    This paper is devoted to the study of the strong relaxation limit of multi-dimensional isentropic Euler equations with relaxation. Motivated by the Maxwell iteration, we generalize the analysis of Yong (SIAM J Appl Math 64:1737-1748, 2004) and show that, as the relaxation time tends to zero, the density of a certain scaled isentropic Euler system with relaxation converges strongly towards the smooth solution of the porous medium equation in the framework of Besov spaces with relatively low regularity. The main analysis tool is the Littlewood-Paley decomposition.

  8. Analysis on the multi-dimensional spectrum of the thrust force for the linear motor feed drive system in machine tools

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Lu, Dun; Ma, Chengfang; Zhang, Jun; Zhao, Wanhua

    2017-01-01

    The motor thrust force has many harmonic components due to the nonlinearity of the drive circuit and of the motor itself in the linear motor feed drive system. Moreover, during motion these thrust force harmonics may vary with position, velocity, acceleration and load, which affects the displacement fluctuation of the feed drive system. Therefore, in this paper, on the basis of the thrust force spectrum obtained from the Maxwell equations and the electromagnetic energy method, the multi-dimensional variation of each thrust harmonic is analyzed under different motion parameters. A model of the servo system oriented to dynamic precision is then established, and the influence of the variation of the thrust force spectrum on the displacement fluctuation is discussed. Finally, experiments are carried out to verify the theoretical analysis. The thrust harmonics show multi-dimensional spectrum characteristics under different motion parameters and loads, which should be considered when choosing the motion parameters and optimizing the servo control parameters in high-speed, high-precision machine tools equipped with linear motor feed drive systems.

  9. Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis

    SciTech Connect

    Jeong, Myong K; Kong, Seong G; Omitaomu, Olufemi A

    2008-09-01

    Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.

  10. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and

  11. The story of DB4GeO - A service-based geo-database architecture to support multi-dimensional data analysis and visualization

    NASA Astrophysics Data System (ADS)

    Breunig, Martin; Kuper, Paul V.; Butwilowski, Edgar; Thomsen, Andreas; Jahn, Markus; Dittrich, André; Al-Doori, Mulhim; Golovko, Darya; Menninghaus, Mathias

    2016-07-01

    Multi-dimensional data analysis and visualization need efficient data handling to archive original data, to reproduce results on large data sets, and to retrieve space and time partitions just in time. This article tells the story of more than twenty years research resulting in the development of DB4GeO, a web service-based geo-database architecture for geo-objects to support the data handling of 3D/4D geo-applications. Starting from the roots and lessons learned, the concepts and implementation of DB4GeO are described in detail. Furthermore, experiences and extensions to DB4GeO are presented. Finally, conclusions and an outlook on further research also considering 3D/4D geo-applications for DB4GeO in the context of Dubai 2020 are given.

  12. Disentangling the health benefits of walking from increased exposure to falls in older people using remote gait monitoring and multi-dimensional analysis.

    PubMed

    Brodie, Matthew A; Okubo, Yoshiro; Annegarn, Janneke; Wieching, Rainer; Lord, Stephen R; Delbaere, Kim

    2017-01-01

    Falls and physical deconditioning are two major health problems for older people. Recent advances in remote physiological monitoring provide new opportunities to investigate why walking exercise, with its many health benefits, can both increase and decrease fall rates in older people. In this paper we combine remote wearable-device monitoring of daily gait with non-linear multi-dimensional pattern recognition analysis to disentangle the complex associations between walking, health and fall rates. One week of activities of daily living (ADL) was recorded with a wearable device in 96 independent-living older people prior to their completing 6 months of exergaming interventions. Using the wearable device data, the quantity, intensity, variability and distribution of daily walking patterns were assessed. At baseline, clinical assessments of health, falls, sensorimotor and physiological fall risks were completed. At 6 months, fall rates, sensorimotor and physiological fall risks were re-assessed. A non-linear multi-dimensional analysis was conducted to identify risk groups according to their daily walking patterns. Four distinct risk groups were identified: the Impaired (93% fallers), Restrained (8% fallers), Active (50% fallers) and Athletic (4% fallers). Walking was strongly associated with multiple health benefits and was protective against falls for the top-performing Athletic risk group. However, in the middle of the spectrum, the Active risk group, who were more active, younger and healthier, were 6.25 times more likely to be fallers than their Restrained counterparts. Remote monitoring of daily walking patterns may provide a new way to distinguish Impaired people at risk of falling because of frailty from Active people at risk of falling from greater exposure to situations where falls could occur, but further validation is required. Wearable-device risk-profiling could help in developing more personalised interventions for older people seeking the health benefits of walking

  13. Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
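    As a rough, much-simplified illustration of the central-scheme idea (reduced to one space dimension and first order, unlike the non-staggered multi-dimensional schemes of the paper), a Lax-Friedrichs-type step for the Hamilton-Jacobi equation phi_t + H(phi_x) = 0 might look like the following; the Hamiltonian, grid and time step are assumptions.

```python
# Sketch: first-order Lax-Friedrichs-type scheme for a 1-D Hamilton-Jacobi equation
#   phi_t + H(phi_x) = 0, here with H(p) = 0.5 * p**2 and periodic boundaries.
import numpy as np

def lax_friedrichs_hj(phi0, H, dx, dt, n_steps):
    phi = phi0.copy()
    for _ in range(n_steps):
        phi_plus = np.roll(phi, -1)                  # phi_{i+1}
        phi_minus = np.roll(phi, 1)                  # phi_{i-1}
        p = (phi_plus - phi_minus) / (2.0 * dx)      # central approximation of phi_x
        phi = 0.5 * (phi_plus + phi_minus) - dt * H(p)
    return phi

N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
phi0 = np.cos(x)                                     # smooth periodic initial data, |phi_x| <= 1
H = lambda p: 0.5 * p ** 2
dt = 0.4 * dx                                        # satisfies the CFL-type restriction dt <= dx / max|H'|
phi_T = lax_friedrichs_hj(phi0, H, dx, dt, n_steps=100)
print("solution range after evolution: [%.3f, %.3f]" % (phi_T.min(), phi_T.max()))
```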

  14. A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system

    NASA Astrophysics Data System (ADS)

    Mix, Ken

    The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence with the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to this event, and the interactions among the various drivers of change in the URG-SLV, is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspects) relevant to the system, except water policy, for which only discrete events were present. Change-point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and to detect the presence of feedback cycles. Agriculture was identified as the driver signal. Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979

  15. Perceptual evaluation of multi-dimensional spatial audio reproduction

    NASA Astrophysics Data System (ADS)

    Guastavino, Catherine; Katz, Brian F. G.

    2004-08-01

    Perceptual differences between sound reproduction systems with multiple spatial dimensions have been investigated. Two blind studies were performed using system configurations involving 1-D, 2-D, and 3-D loudspeaker arrays. Various types of source material were used, ranging from urban soundscapes to musical passages. Experiment I consisted of collecting subjects' perceptions in a free-response format to identify relevant criteria for multi-dimensional spatial sound reproduction of complex auditory scenes by means of linguistic analysis. Experiment II utilized both free responses and scale judgments for seven parameters derived from Experiment I. Results indicated a strong correlation between the source material (sound scene) and the subjective evaluation of the parameters, making the notion of an ``optimal'' reproduction method difficult to define for arbitrary source material.

  16. ICM: a web server for integrated clustering of multi-dimensional biomedical data

    PubMed Central

    He, Song; He, Haochen; Xu, Wenjian; Huang, Xin; Jiang, Shuai; Li, Fei; He, Fuchu; Bo, Xiaochen

    2016-01-01

    Large-scale efforts for parallel acquisition of multi-omics profiling continue to generate extensive amounts of multi-dimensional biomedical data. Thus, integrated clustering of multiple types of omics data is essential for developing individual-based treatments and precision medicine. However, while rapid progress has been made, methods for integrated clustering lack an intuitive web interface that supports biomedical researchers without sufficient programming skills. Here, we present a web tool, named Integrated Clustering of Multi-dimensional biomedical data (ICM), that provides an interface from which to fuse, cluster and visualize multi-dimensional biomedical data and knowledge. With ICM, users can explore the heterogeneity of a disease or a biological process by identifying subgroups of patients. The results obtained can then be interactively modified through an intuitive user interface. Researchers can also exchange results from ICM with collaborators via a web link containing a Project ID number that directly pulls up the shared analysis results. ICM also supports incremental clustering, which allows users to add new sample data to the data of a previous study to obtain an updated clustering result. Currently, the ICM web server is available with no login requirement and at no cost at http://biotech.bmi.ac.cn/icm/. PMID:27131784

  17. Psychometric Properties and Validity of a Multi-dimensional Risk Perception Scale Developed in the Context of a Microbicide Acceptability Study

    PubMed Central

    Fava, Joseph L.; Severy, Lawrence; Rosen, Rochelle K.; Salomon, Liz; Shulman, Lawrence; Guthrie, Kate Morrow

    2015-01-01

    Currently available risk perception scales tend to focus on risk behaviors and overall risk (vs partner-specific risk). While these types of assessments may be useful in clinical contexts, they may be inadequate for understanding the relationship between sexual risk and motivations to engage in safer sex or one's willingness to use prevention products during a specific sexual encounter. We present the psychometric evaluation and validation of a scale that includes both general and specific dimensions of sexual risk perception. A one-time, audio computer-assisted self-interview was administered to 531 women aged 18–55 years. Items assessing sexual risk perceptions, both in general and in regard to a specific partner, were examined in the context of a larger study of willingness to use HIV/STD prevention products and preferences for specific product characteristics. Exploratory and confirmatory factor analyses yielded two sub-scales: general perceived risk and partner-specific perceived risk. Validity analyses demonstrated that the two subscales were related to many sociodemographic and relationship factors. We suggest that this risk perception scale may be useful in research settings where the outcomes of interest are related to motivations to use HIV and STD prevention products and/or product acceptability. Further, we provide specific guidance on how this risk perception scale might be utilized to understand such motivations with one or more specific partners. PMID:26621151

  18. Femtosecond laser induced surface deformation in multi-dimensional data storage

    NASA Astrophysics Data System (ADS)

    Hu, Yanlei; Chen, Yuhang; Li, Jiawen; Hu, Daqiao; Chu, Jiaru; Zhang, Qijin; Huang, Wenhao

    2012-12-01

    We investigate the surface deformation in two-photon induced multi-dimensional data storage. Both experimental evidence and theoretical analysis are presented to demonstrate the surface characteristics and the formation mechanism in an azo-containing material. The deformation reveals a strong polarization dependence and has a topographic effect on multi-dimensional encoding. Finally, the different stages of the data storage process are discussed, taking the formation of surface deformation into consideration.

  19. Progress in Multi-Dimensional Upwind Differencing

    DTIC Science & Technology

    1992-09-01

    advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results ... as 1983 by Phil Roe [1]. A study of discrete multi-dimensional wave models by Roe followed in 1985 (ICASE Report 85-18, also [2]), but it took until ... consider the numerical results shown in Figures 3 and 4, taken from [34] and [35], respectively. In Figure 3a the exact and discrete Mach-number

  20. Spatial Indexing and Visualization of Large Multi-Dimensional Databases

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Csabai, I.; Trencséni, M.; Herczegh, G.; Józsa, P.; Purger, N.

    2007-10-01

    Scientific endeavors such as large astronomical surveys generate databases on the terabyte scale. These usually multi-dimensional databases must be visualized and mined in order to find interesting objects or to extract meaningful and qualitatively new relationships. Many statistical algorithms required for these tasks run reasonably fast when operating on small sets of in-memory data, but take noticeable performance hits when operating on large databases that do not fit into memory. We utilize new software technologies to develop and evaluate fast multi-dimensional spatial indexing schemes that inherently follow the underlying, highly non-uniform distribution of the data: one of them is hierarchical binary space partitioning; the other is sampled flat Voronoi partitioning of the data. Our working database is the 5-dimensional magnitude space of the Sloan Digital Sky Survey with more than 250 million data points. We show that these techniques can dramatically speed up data mining operations such as finding similar objects by example, classifying objects, or comparing extensive simulation sets with observations. We are also developing tools to interact with the spatial database and visualize the data in real time at multiple resolutions and zoom levels in an adaptive manner.
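    For a flavor of what "finding similar objects by example" over a multi-dimensional magnitude space involves, the sketch below builds a k-d tree index over synthetic 5-D points and queries nearest neighbors. This is a generic illustration, not the hierarchical binary space partitioning or sampled Voronoi schemes evaluated in the record above, and all data are invented.

```python
# Sketch: k-d tree index over a synthetic 5-D "magnitude space" for
# find-similar-objects-by-example queries.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
n_objects, n_dims = 1_000_000, 5
magnitudes = rng.normal(loc=20.0, scale=2.0, size=(n_objects, n_dims))

tree = cKDTree(magnitudes)              # build the spatial index once

query = magnitudes[0]                   # "by example": an object already in the catalog
dist, idx = tree.query(query, k=11)     # the object itself plus its 10 nearest neighbors
print("nearest neighbors:", idx[1:])
print("distances:", np.round(dist[1:], 3))
```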

  1. Extended Darknet: Multi-Dimensional Internet Threat Monitoring System

    NASA Astrophysics Data System (ADS)

    Shimoda, Akihiro; Mori, Tatsuya; Goto, Shigeki

    Internet threats caused by botnets/worms are one of the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is an inefficient way for monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, light-weight malicious packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP syn/ack responses in the traffic. We locate syn/ack delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments for the evaluation of our system. We verify that our system can cover 89% of the IP addresses while darknet-based monitoring only covers 46%. On our campus network, our system monitors twice as many IP addresses as darknet.

  2. Multi-dimensional MHD simple waves

    SciTech Connect

    Webb, G. M.; Ratkiewicz, R.; Brio, M.; Zank, G. P.

    1996-07-20

    In this paper we consider a formalism for multi-dimensional simple MHD waves using ideas developed by Boillat. For simple wave solutions one assumes that all the physical variables (the density ρ, gas pressure p, fluid velocity u, gas entropy S, and magnetic induction B in the MHD case) depend on a single phase function φ(r,t). The simple wave solution ansatz and the MHD equations then require that the phase function φ satisfies an implicit equation of the form f(φ) = r·n(φ) − λ(φ)t, where n(φ) = ∇φ/|∇φ| is the wave normal, λ(φ) = ω/k = −φ_t/|∇φ| is the normal speed of the wave front, and f(φ) is an arbitrary differentiable function of φ. The formalism allows for more general simple waves than those usually dealt with, in which n(φ) is a constant unit vector that does not vary along the wave front. The formalism has implications for shock formation and wave breaking for multi-dimensional waves.

  3. Multi-dimensional MHD simple waves

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Ratkiewicz, R.; Brio, M.; Zank, G. P.

    1995-01-01

    In this paper we consider a formalism for multi-dimensional simple MHD waves using ideas developed by Boillat. For simple wave solutions one assumes that all the physical variables (the density ρ, gas pressure p, fluid velocity V, gas entropy S, and magnetic induction B in the MHD case) depend on a single phase function φ(r,t). The simple wave solution ansatz and the MHD equations then require that the phase function satisfies an implicit equation of the form f(φ) = r·n(φ) − λ(φ)t, where n(φ) = ∇φ/|∇φ| is the wave normal and λ(φ) = ω/k = −φ_t/|∇φ| is the normal speed of the wave front. The formalism allows for more general simple waves than those usually dealt with, in which n(φ) is a constant unit vector that does not vary along the wave front. The formalism has implications for shock formation for multi-dimensional waves.

  4. The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Gaffney, R. L.

    2007-01-01

    The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.

  5. Continuous Energy, Multi-Dimensional Transport Calculations for Problem Dependent Resonance Self-Shielding

    SciTech Connect

    T. Downar

    2009-03-31

    The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.

  6. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multiple dimensions of wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach considering perfect-model conditions, but we will also analyze the
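    A bare-bones sketch of the nearest-neighbors analog idea (find the historical SLP fields closest to a new field and reuse the wave climate observed on those dates); the array shapes, variables and data below are invented for illustration only.

```python
# Sketch: nearest-neighbor analog downscaling from SLP fields to local wave climate.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n_months, n_grid = 360, 40 * 30                         # 30 years of monthly SLP fields on a 40x30 grid
slp_history = rng.standard_normal((n_months, n_grid))   # predictors: flattened SLP fields
hs_history = rng.gamma(2.0, 1.0, size=n_months)         # predictand: e.g. monthly mean significant wave height

knn = NearestNeighbors(n_neighbors=5).fit(slp_history)  # index the historical predictor fields

# "new" month: find its analogs and average their observed wave climate
slp_new = rng.standard_normal((1, n_grid))
dist, idx = knn.kneighbors(slp_new)
hs_estimate = hs_history[idx[0]].mean()
print("analog months:", idx[0], "-> estimated Hs:", round(float(hs_estimate), 2))
```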

  7. Development of a rapid method for the quantitative analysis of four methoxypyrazines in white and red wine using multi-dimensional Gas Chromatography-Mass Spectrometry.

    PubMed

    Botezatu, Andreea; Pickering, Gary J; Kotseridis, Yorgos

    2014-10-01

    Alkyl-methoxypyrazines (MPs) are important odour-active constituents of many grape cultivars and their wines. Recently, a new MP - 2,5-dimethyl-3-methoxypyrazine (DMMP) - has been reported as a possible constituent of wine. This study sought to develop a rapid and reliable method for quantifying DMMP, isopropyl methoxypyrazine (IPMP), secbutyl methoxypyrazine (SBMP) and isobutyl methoxypyrazine (IBMP) in wine. The proposed method is able to rapidly and accurately resolve all 4 MPs in a range of wine styles, with limits of detection between 1 and 2 ng L(-1) for IPMP, SBMP and IBMP and 5 ng L(-1) for DMMP. Analysis of a set of 11 commercial wines agrees with previously published values for IPMP, SBMP and IBMP, and shows for the first time that DMMP may be an important and somewhat common odorant in red wines. To our knowledge, this is the first analytical method developed for the quantification of DMMP in wine.

  8. Vlasov multi-dimensional model dispersion relation

    SciTech Connect

    Lushnikov, Pavel M.; Rose, Harvey A.; Silantyev, Denis A.; Vladimirova, Natalia

    2014-07-15

    A hybrid model of the Vlasov equation in multiple spatial dimension D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.

  9. Anonymous voting for multi-dimensional CV quantum system

    NASA Astrophysics Data System (ADS)

    Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee

    2016-06-01

    We investigate the design of anonymous voting protocols, CV-based binary-valued ballot and CV-based multi-valued ballot with continuous variables (CV) in a multi-dimensional quantum cryptosystem to ensure the security of voting procedure and data privacy. The quantum entangled states are employed in the continuous variable quantum system to carry the voting information and assist information transmission, which takes the advantage of the GHZ-like states in terms of improving the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieve the efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).

  10. Continuous energy, multi-dimensional discrete ordinates transport calculations for problem dependent resonance treatment

    NASA Astrophysics Data System (ADS)

    Zhong, Zhaopeng

    In the past twenty years considerable progress has been made in developing new methods for solving the multi-dimensional transport problem. However, the effort devoted to the resonance self-shielding calculation has lagged, and much less progress has been made in enhancing resonance-shielding techniques for generating problem-dependent multi-group cross sections (XS) for multi-dimensional transport calculations. In several applications, the error introduced by self-shielding methods exceeds that due to uncertainties in the basic nuclear data, and often it can be the limiting factor on the accuracy of the final results. This work aims to improve the accuracy of the resonance self-shielding calculation by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding calculations. A new method has been developed that can calculate the continuous-energy neutron fluxes for the whole two-dimensional domain; these fluxes can be utilized as weighting functions to process the self-shielded multi-group cross sections for reactor analysis and criticality calculations, and during this process the two-dimensional heterogeneous effect in the resonance self-shielding calculation can be fully included. A new code, GEMINEWTRN (Group and Energy-Pointwise Methodology Implemented in NEWT for Resonance Neutronics), has been developed in the development version of SCALE [1]; it combines the energy-pointwise (PW) capability of CENTRM [2] with the two-dimensional discrete ordinates transport capability of the lattice physics code NEWT [14]. Considering the large number of energy points in the resonance region (typically more than 30,000), the computational burden and memory requirement of GEMINEWTRN are tremendously large, so some efforts have been made to improve the computational efficiency: parallel computation has been implemented in GEMINEWTRN, which greatly reduces the computation time and memory requirement; some energy points reducing

  11. Multi-Dimensional Construct of Self-Esteem: Tools for Developmental Counseling.

    ERIC Educational Resources Information Center

    Norem-Hebeisen, Ardyth A.

    A multi-dimensional construct of self-esteem has been proposed and subjected to initial testing through design of a self-report instrument. Item clusters derived from Rao's canonical and principal axis factor analyses are consistent with the hypothesized construct and have substantial internal reliability. Factor analysis of item clusters produced…

  12. Computer Aided Data Analysis in Sociometry

    ERIC Educational Resources Information Center

    Langeheine, Rolf

    1978-01-01

    A computer program which analyzes sociometric data is presented. The SDAS program provides classical sociometric analysis. Multi-dimensional scaling and cluster analysis techniques may be combined with the MSP program. (JKS)

  13. Multi-dimensional assessment of soccer coaching course effectiveness.

    PubMed

    Hammond, J; Perry, J

    The purpose of this study was to determine the relationship between the aims of course providers and events during the delivery of two soccer coaching accreditation courses. A secondary purpose was to evaluate performance-analysis methods for assessing the course instructor's performance. A case analysis approach was developed to evaluate the courses and the data-gathering process. This research approach was chosen to amalgamate the sources of evidence, providing a multi-dimensional view of course delivery. Data collection methods included simple hand notation and computer logging of events, together with video analysis. The hand notation and video analysis were employed for the first course with the hand notation being replaced with computer event logging for the second course. Questionnaires, focusing on course quality, were administered to participants. Interviews and document analysis provided the researchers with the instructors' main aims and priorities for course delivery. Results of the video analysis suggest a difference between these aims and the events of the courses. Analysis of the questionnaires indicated favourable perceptions of course content and delivery. This evidence is discussed in relation to intent and practice in coach education and the efficiency of employing performance-analysis techniques in logging instructional events.

  14. Towards a genuinely multi-dimensional upwind scheme

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Vanleer, Bram; Roe, Philip L.

    1990-01-01

    Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.

  15. [Research on monitoring mechanical wear state based on oil spectrum multi-dimensional time series model].

    PubMed

    Xu, Chao; Zhang, Pei-lin; Ren, Guo-quan; Li, Bing; Yang, Ning

    2010-11-01

    A new method using oil atomic spectrometric analysis technology to monitor the mechanical wear state was proposed. A multi-dimensional time series model of the oil atomic spectrometric data from the running-in period was treated as the standard model. Residuals were obtained after new data were processed with the standard model, and the residual variance matrix was selected as the feature of the corresponding wear state. Then, the high-dimensional feature vectors were reduced through principal component analysis, and the first three principal components were extracted to represent the wear state. The Euclidean distance between feature vectors was computed to classify the test samples, so that the mechanical wear state could be identified correctly. The wear state of a specified tracked-vehicle engine was effectively identified, which verified the validity of the proposed method. Experimental results showed that introducing the multi-dimensional time series model into oil spectrometric analysis can fuse the spectrum data and improve the accuracy of monitoring the mechanical wear state.
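    A minimal sketch of the overall recipe, under the assumption that the multi-dimensional time series model can be stood in for by a simple least-squares VAR(1) fit: build the "standard model" on running-in data, take the residual covariance of each new window as its feature, reduce to three principal components, and classify by Euclidean distance to group centroids. All data and names are illustrative.

```python
# Sketch: residual-based wear features from a multi-dimensional spectrometric time series,
# reduced by PCA and classified by Euclidean distance to centroids.
import numpy as np

rng = np.random.default_rng(3)

def fit_var1(X):
    """Least-squares VAR(1) fit X[t] ~ A @ X[t-1]; returns A."""
    Y, Z = X[1:], X[:-1]
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return B.T

def residual_feature(X, A):
    """Upper-triangular entries of the residual covariance matrix as a feature vector."""
    resid = X[1:] - X[:-1] @ A.T
    cov = np.cov(resid, rowvar=False)
    return cov[np.triu_indices(cov.shape[0])]

# "running-in" baseline: 6 element concentrations evolving smoothly
baseline = np.cumsum(0.1 * rng.standard_normal((300, 6)), axis=0)
A = fit_var1(baseline)                                       # the "standard model"

# feature vectors for monitoring windows: 4 normal windows plus 2 noisier "worn" windows
windows = [baseline[i:i + 60] for i in range(0, 240, 60)]
worn = [w + 0.8 * rng.standard_normal(w.shape) for w in windows[:2]]
features = np.array([residual_feature(w, A) for w in windows + worn])

# reduce to the first three principal components
centered = features - features.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:3].T

# nearest-centroid classification by Euclidean distance
centroid_normal, centroid_worn = scores[:4].mean(axis=0), scores[4:].mean(axis=0)
test = scores[-1]
label = "worn" if np.linalg.norm(test - centroid_worn) < np.linalg.norm(test - centroid_normal) else "normal"
print("test window classified as:", label)
```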

  16. QED multi-dimensional vacuum polarization finite-difference solver

    NASA Astrophysics Data System (ADS)

    Carneiro, Pedro; Grismayer, Thomas; Silva, Luís; Fonseca, Ricardo

    2015-11-01

    The Extreme Light Infrastructure (ELI) is expected to deliver peak intensities of 10^23 - 10^24 W/cm^2, allowing nonlinear Quantum Electrodynamics (QED) phenomena to be probed in an unprecedented regime. Within the framework of QED, the second-order process of photon-photon scattering leads to a set of extended Maxwell's equations [W. Heisenberg and H. Euler, Z. Physik 98, 714], effectively creating nonlinear polarization and magnetization terms that account for the nonlinear response of the vacuum. To model this in a self-consistent way, we present a multi-dimensional generalized Maxwell equation finite-difference solver with significantly enhanced dispersive properties, which was implemented in the OSIRIS particle-in-cell code [R.A. Fonseca et al. LNCS 2331, pp. 342-351, 2002]. We present a detailed numerical analysis of this electromagnetic solver. As an illustration of the properties of the solver, we explore several examples in extreme conditions. We confirm the theoretical prediction of vacuum birefringence of a pulse propagating in the presence of an intense static background field [arXiv:1301.4918 [quant-ph

  17. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  18. Fast Packet Classification Using Multi-Dimensional Encoding

    NASA Astrophysics Data System (ADS)

    Huang, Chi Jia; Chen, Chien

    Internet routers need to classify incoming packets into flows quickly in order to support features such as Internet security, virtual private networks and Quality of Service (QoS). Packet classification uses information contained in the packet header and a predefined rule table in the routers. Packet classification over multiple fields is generally a difficult problem; hence, researchers have proposed various algorithms. This study proposes a multi-dimensional encoding method in which parameters such as the source IP address, destination IP address, source port, destination port and protocol type are placed in a multi-dimensional space. Similar to the previously best known algorithm, bitmap intersection, multi-dimensional encoding is based on the multi-dimensional range lookup approach, in which rules are divided into several multi-dimensional collision-free rule sets. These sets are then used to form a new coding vector that replaces the bit vector of the bitmap intersection algorithm. The average memory storage of this encoding is Θ(L · N · log N) for each dimension, where L denotes the number of collision-free rule sets and N represents the number of rules. In practice, the multi-dimensional encoding requires much less memory than the bitmap intersection algorithm, and the computation it needs is as simple as that of the bitmap intersection algorithm. The low memory requirement of the proposed scheme not only decreases the cost of the packet classification engine, but also increases classification performance, since memory is the performance bottleneck in a packet classification engine implemented on a network processor.
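    For context, the bitmap intersection baseline mentioned above can be sketched in a few lines: each dimension maps a header-field value to a bitmap of the rules whose range covers it, and the matching rules are the bitwise AND of the per-dimension bitmaps, with the lowest set bit giving the highest-priority match. The two-field rule table below is a simplification and does not reproduce the paper's coding-vector construction.

```python
# Sketch: simplified two-field bitmap intersection (source port, protocol).
# Real classifiers use ranges/prefixes over five fields; Python ints serve as bit vectors here.
RULES = [                          # (src_port_range, protocol); rule index = bit position
    ((0, 1023), "tcp"),
    ((0, 65535), "udp"),
    ((5000, 6000), "tcp"),
    ((0, 65535), "any"),
]

def port_bitmap(port):
    """Bitmap of rules whose port range covers the given port."""
    bm = 0
    for i, ((lo, hi), _) in enumerate(RULES):
        if lo <= port <= hi:
            bm |= 1 << i
    return bm

def proto_bitmap(proto):
    """Bitmap of rules whose protocol matches (or is a wildcard)."""
    bm = 0
    for i, (_, p) in enumerate(RULES):
        if p in (proto, "any"):
            bm |= 1 << i
    return bm

def classify(port, proto):
    """Intersect per-dimension bitmaps; lowest set bit = highest-priority matching rule."""
    matches = port_bitmap(port) & proto_bitmap(proto)
    if matches == 0:
        return None
    return (matches & -matches).bit_length() - 1

print(classify(80, "tcp"))      # -> 0 (first rule matches)
print(classify(5500, "udp"))    # -> 1
```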

  19. The Multi-Dimensional Demands of Reading in the Disciplines

    ERIC Educational Resources Information Center

    Lee, Carol D.

    2014-01-01

    This commentary addresses the complexities of reading comprehension with an explicit focus on reading in the disciplines. The author proposes reading as entailing multi-dimensional demands of the reader and posing complex challenges for teachers. These challenges are intensified by restrictive conceptions of relevant prior knowledge and experience…

  20. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  1. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  2. Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets.

    PubMed

    Kim, Han Suk; Schulze, Jürgen P; Cone, Angela C; Sosinsky, Gina E; Martone, Maryann E

    2010-09-21

    The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes the channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets.
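    A small sketch of the dimensionality-reduction step on per-voxel feature vectors (intensities, gradient, curvature, texture), comparing PCA with the two nonlinear methods named above via scikit-learn; the synthetic features and parameter choices are assumptions for illustration only.

```python
# Sketch: reduce high-dimensional per-voxel features to 3-D coordinates usable as
# transfer-function axes, comparing PCA with two nonlinear reducers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(5)
n_voxels = 2000
features = np.column_stack([
    rng.random(n_voxels),              # channel 1 intensity
    rng.random(n_voxels),              # channel 2 intensity
    rng.gamma(2.0, 0.5, n_voxels),     # gradient magnitude
    rng.standard_normal(n_voxels),     # curvature estimate
    rng.random(n_voxels),              # texture measure
])

reducers = {
    "pca": PCA(n_components=3),
    "isomap": Isomap(n_components=3, n_neighbors=10),
    "lle": LocallyLinearEmbedding(n_components=3, n_neighbors=10),
}
for name, reducer in reducers.items():
    embedding = reducer.fit_transform(features)   # each voxel gets a 3-D transfer-function coordinate
    print(name, embedding.shape)
```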

  3. Towards Semantic Web Services on Large, Multi-Dimensional Coverages

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2009-04-01

    Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy indefinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters while the semantics is encoded in the (only human-readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it

  4. Multi-Dimensional Perception of Parental Involvement

    ERIC Educational Resources Information Center

    Fisher, Yael

    2016-01-01

    The main purpose of this study was to define and conceptualize the term parental involvement. A questionnaire was administered to parents (140), teachers (145), students (120) and high-ranking civil servants in the Ministry of Education (30). Responses were analyzed through Smallest Space Analysis (SSA). The SSA solution among all groups rendered…

  5. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
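
    To make the quantities above concrete, the sketch below computes λ together with the classical sensitivity and elasticity matrices for a small projection matrix; the matrix entries are invented for illustration and are not the killer whale data.

        # Sketch: finite rate of increase (lambda), sensitivities and elasticities
        # for a stage-structured population projection matrix.
        import numpy as np

        # Hypothetical 3-stage projection matrix (not the killer whale data).
        A = np.array([[0.0, 1.5, 2.0],
                      [0.3, 0.0, 0.0],
                      [0.0, 0.5, 0.9]])

        eigvals, right = np.linalg.eig(A)
        i = np.argmax(eigvals.real)
        lam = eigvals.real[i]                      # finite rate of increase, lambda
        w = np.abs(right[:, i].real)               # stable stage distribution

        eigvals_l, left = np.linalg.eig(A.T)       # left eigenvectors of A
        j = np.argmax(eigvals_l.real)
        v = np.abs(left[:, j].real)                # reproductive values

        S = np.outer(v, w) / np.dot(v, w)          # sensitivities d(lambda)/d(a_ij)
        E = (A / lam) * S                          # elasticities (proportional scale)
        print("lambda =", round(lam, 4))
        print("elasticity sum =", round(E.sum(), 4))   # elasticities sum to 1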

  6. Advanced numerics for multi-dimensional fluid flow calculations

    NASA Technical Reports Server (NTRS)

    Vanka, S. P.

    1984-01-01

    In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  7. Advanced numerics for multi-dimensional fluid flow calculations

    SciTech Connect

    Vanka, S.P.

    1984-04-01

    In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

  8. Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus

    SciTech Connect

    Weizhen Mao; Jie Chen; William Watson

    2005-11-30

    Processor allocation in a mesh- or torus-connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of dimensions higher than three, which are used to solve complex problems arising in scientific computing, it becomes imperative to study the problem of allocating processors in the configuration of a torus within a multi-dimensional torus-connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them in commonly used FCFS and backfilling scheduling policies and conducting simulations using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus-connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean job bounded slowdowns comparable to those in a fully connected multicomputer.

  9. Portable laser synthesizer for high-speed multi-dimensional spectroscopy

    DOEpatents

    Demos, Stavros G [Livermore, CA; Shverdin, Miroslav Y [Sunnyvale, CA; Shirk, Michael D [Brentwood, CA

    2012-05-29

    Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.

  10. Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kwak, Dochan (Technical Monitor)

    1995-01-01

    The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
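
    As a concrete illustration of using a one-dimensional approximate Riemann solution as a building block for a multi-dimensional problem, the sketch below advances a 2-D scalar Burgers equation with a Rusanov (local Lax-Friedrichs) flux applied dimension by dimension. A structured periodic grid is used for brevity; the lecture material itself targets unstructured meshes and high-order reconstruction, so this is only a minimal stand-in, with the grid size and time step chosen arbitrarily.

        # Sketch: a 1-D approximate Riemann (Rusanov) flux used dimension-by-dimension
        # to advance the 2-D scalar conservation law
        #   u_t + f(u)_x + f(u)_y = 0,  with f(u) = u**2 / 2  (Burgers flux).
        import numpy as np

        def rusanov_flux(ul, ur):
            fl, fr = 0.5 * ul**2, 0.5 * ur**2
            a = np.maximum(np.abs(ul), np.abs(ur))      # local wave-speed bound
            return 0.5 * (fl + fr) - 0.5 * a * (ur - ul)

        def step(u, dx, dt):
            # x-direction sweep (periodic boundaries)
            fx = rusanov_flux(u, np.roll(u, -1, axis=0))
            u = u - dt / dx * (fx - np.roll(fx, 1, axis=0))
            # y-direction sweep
            fy = rusanov_flux(u, np.roll(u, -1, axis=1))
            return u - dt / dx * (fy - np.roll(fy, 1, axis=1))

        n = 100
        dx = 1.0 / n
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u = np.sin(2 * np.pi * x)[:, None] * np.ones((1, n))   # smooth initial data
        for _ in range(50):
            u = step(u, dx, dt=0.25 * dx)                      # CFL-limited time step
        print("min/max after 50 steps:", u.min(), u.max())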

  11. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  12. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used to study the molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce a unique, easy-to-interpret NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
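
    The parsimony-versus-fit idea above can be illustrated with a generic forward stepwise least-squares selection driven by AIC. The sketch below uses a one-dimensional dictionary of exponential decay components and the common AIC form n·ln(RSS/n) + 2k; the dictionary, synthetic data, and stopping rule are illustrative assumptions, not the authors' exact multi-dimensional formulation.

        # Sketch: forward stepwise selection of decay components with an AIC stopping
        # rule, in the spirit of the parsimonious NMR inversion described above.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.01, 1.0, 200)                    # acquisition times
        T2_grid = np.logspace(-3, 0, 40)                   # candidate relaxation times
        D = np.exp(-np.outer(t, 1.0 / T2_grid))            # dictionary of decay curves

        # Synthetic measurement: two true components plus noise.
        y = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.5) + 0.01 * rng.normal(size=t.size)

        def aic(rss, n, k):
            return n * np.log(rss / n) + 2 * k

        selected, best_aic = [], np.inf
        while True:
            best = None
            for j in range(D.shape[1]):
                if j in selected:
                    continue
                cols = selected + [j]
                coef, rss, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
                rss = rss[0] if rss.size else np.sum((y - D[:, cols] @ coef) ** 2)
                score = aic(rss, t.size, len(cols))
                if best is None or score < best[0]:
                    best = (score, j)
            if best is None or best[0] >= best_aic:   # adding a term no longer lowers AIC
                break
            best_aic = best[0]
            selected.append(best[1])

        print("selected T2 values:", T2_grid[selected])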

  13. A multi-dimensional sampling method for locating small scatterers

    NASA Astrophysics Data System (ADS)

    Song, Rencheng; Zhong, Yu; Chen, Xudong

    2012-11-01

    A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method.
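
    For context, a conventional MUSIC-type indicator built from the multi-static response matrix can be sketched as follows; the co-located array geometry, Born-approximation forward model, and free-space Green's function are assumptions for illustration, and this baseline does not reproduce the combinatorial sampling-node construction of the MDSM itself.

        # Sketch: a standard MUSIC indicator for point scatterers from the
        # multi-static response (MSR) matrix.  The MDSM of the paper builds its
        # indicator differently; this is only the baseline it extends.
        import numpy as np

        rng = np.random.default_rng(2)
        k = 2 * np.pi                                   # wavenumber (unit wavelength)
        rx = np.stack([np.linspace(-2, 2, 16), np.full(16, 5.0), np.zeros(16)], axis=1)
        scatterers = np.array([[0.3, 0.0, 0.0], [-0.8, 0.5, 0.0]])

        def green(points, z):
            r = np.linalg.norm(points - z, axis=1)
            return np.exp(1j * k * r) / (4 * np.pi * r)  # 3-D free-space Green's function

        # Born-approximation MSR matrix for co-located transmitters/receivers.
        K = sum(np.outer(green(rx, z), green(rx, z)) for z in scatterers)
        K = K + 1e-6 * (rng.normal(size=K.shape) + 1j * rng.normal(size=K.shape))

        U, s, _ = np.linalg.svd(K)
        Us = U[:, :len(scatterers)]                     # signal subspace

        def indicator(z):
            g = green(rx, z)
            g = g / np.linalg.norm(g)
            residual = g - Us @ (Us.conj().T @ g)       # component in the noise subspace
            return 1.0 / np.linalg.norm(residual)       # peaks at scatterer locations

        print("at a scatterer :", indicator(np.array([0.3, 0.0, 0.0])))
        print("away from both :", indicator(np.array([1.5, -1.0, 0.0])))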

  14. Flexible multi-dimensional modulation method for elastic optical networks

    NASA Astrophysics Data System (ADS)

    He, Zilong; Liu, Wentao; Shi, Sheping; Shen, Bailin; Chen, Xue; Gao, Xiqing; Zhang, Qi; Shang, Dongdong; Ji, Yongning; Liu, Yingfeng

    2016-01-01

    We demonstrate a flexible multi-dimensional modulation method for elastic optical networks. We compare the flexible multi-dimensional modulation formats PM-kSC-mQAM with traditional modulation formats PM-mQAM using numerical simulations in back-to-back and wavelength division multiplexed (WDM) transmission (50 GHz-spaced) scenarios at the same symbol rate of 32 Gbaud. The simulation results show that PM-kSC-QPSK and PM-kSC-16QAM can achieve obvious back-to-back sensitivity gain with respect to PM-QPSK and PM-16QAM at the expense of spectral efficiency reduction. The WDM transmission simulation results show that PM-2SC-QPSK can achieve a 57.5% increase in transmission reach compared to PM-QPSK, and a 48.5% increase for PM-2SC-16QAM over PM-16QAM. Furthermore, we also experimentally investigate the back-to-back performance of PM-2SC-QPSK, PM-4SC-QPSK, PM-2SC-16QAM and PM-3SC-16QAM, and the experimental results agree well with the numerical simulations.

  15. Multi-Dimensional Damage Detection for Surfaces and Structures

    NASA Technical Reports Server (NTRS)

    Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah

    2013-01-01

    Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, two-dimensional thin-film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers as are necessary for the structure's construction or to afford the required level of detection detail. The damage is detected using a detector or

  16. Multi-Dimensional Structure of Crystalline Chiral Condensates in Quark Matter

    NASA Astrophysics Data System (ADS)

    Lee, Tong-Gyu; Nishiyama, Kazuya; Yasutake, Nobutoshi; Maruyama, Toshiki; Tatsumi, Toshitaka

    We explore the multi-dimensional structure of inhomogeneous chiral condensates in quark matter. For a one-dimensional structure, the system becomes unstable at finite temperature due to the Nambu-Goldstone excitations. However, inhomogeneous chiral condensates with multi-dimensional modulations may be realized as true long-range order at any temperature, as inferred from the Landau-Peierls theorem. We here present some possible strategies for searching for the multi-dimensional structure of chiral crystals.

  17. Surface extraction from multi-field particle volume data using multi-dimensional cluster visualization.

    PubMed

    Linsen, Lars; Van Long, Tran; Rosenthal, Paul; Rosswog, Stephan

    2008-01-01

    Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations.

  18. Importance of multi-dimensional morphodynamics for habitat evolution: Danube River 1715-2006

    NASA Astrophysics Data System (ADS)

    Hohensinner, Severin; Jungwirth, Mathias; Muhar, Susanne; Schmutz, Stefan

    2014-06-01

    Human-unimpaired braided and anabranched river systems are characterized by manifold multi-dimensional exchange processes. The intensity of hydrological surface/subsurface connectivity of riverine habitats depends on more than regular or episodic water level fluctuations due to the hydrological regime. Morphodynamic changes are also a basic underlying factor. In order to provide new insights into the long-term habitat configuration of large rivers prior to channelization, this study discusses the hydromorphological alterations of an alluvial section of the Austrian Danube based on historical records from 1715 to 2006. The study combines the analysis of habitat patterns and intensity of hydrological connectivity over the long term with the reconstruction of short-term morphodynamic processes between 1812 and 1821. The main research questions are (1) whether the intensive morphodynamics prior to channelization are reflected by a marked variation in habitat patterns or whether the variation remained within a small range, and (2) which fluvial processes contributed to the evolution of the habitat configuration identified. The study reveals that the mean variations in the habitat patterns and the intensity of hydrological connectivity were only between 3% and 10% before 1821, although the river landscape was subject to intensive fluvial disturbances. An exception was the expansion of aquatic habitats between low and mean flow, which deviated by 15%. Habitat evolution was affected by morphodynamic processes occurring across different temporal scales. Both gradual channel changes such as incision or migration and sudden processes such as avulsions (cut-offs) contributed to the patterns identified. Locally, sudden channel changes extensively altered the habitat conditions with regard to hydrological surface/subsurface connectivity. Such alterations foster or restrain the potential evolution and the ecological succession of the riparian vegetation at the respective sites

  19. A Multi-Dimensional Instrument for Evaluating Taiwanese High School Students' Science Learning Self-Efficacy in Relation to Their Approaches to Learning Science

    ERIC Educational Resources Information Center

    Lin, Tzung-Jin; Tsai, Chin-Chung

    2013-01-01

    In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the previous…

  20. Multi-dimensional structure of accreting young stars

    NASA Astrophysics Data System (ADS)

    Geroux, C.; Baraffe, I.; Viallet, M.; Goffrey, T.; Pratt, J.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.

    2016-04-01

    This work is the first attempt to describe the multi-dimensional structure of accreting young stars based on fully compressible time implicit multi-dimensional hydrodynamics simulations. One major motivation is to analyse the validity of accretion treatment used in previous 1D stellar evolution studies. We analyse the effect of accretion on the structure of a realistic stellar model of the young Sun. Our work is inspired by the numerical work of Kley & Lin (1996, ApJ, 461, 933) devoted to the structure of the boundary layer in accretion disks, which provides the outer boundary conditions for our simulations. We analyse the redistribution of accreted material with a range of values of specific entropy relative to the bulk specific entropy of the material in the accreting object's convective envelope. Low specific entropy accreted material characterises the so-called cold accretion process, whereas high specific entropy is relevant to hot accretion. A primary goal is to understand whether and how accreted energy deposited onto a stellar surface is redistributed in the interior. This study focusses on the high accretion rates characteristic of FU Ori systems. We find that the highest entropy cases produce a distinctive behaviour in the mass redistribution, rms velocities, and enthalpy flux in the convective envelope. This change in behaviour is characterised by the formation of a hot layer on the surface of the accreting object, which tends to suppress convection in the envelope. We analyse the long-term effect of such a hot buffer zone on the structure and evolution of the accreting object with 1D stellar evolution calculations. We study the relevance of the assumption of redistribution of accreted energy into the stellar interior used in the literature. We compare results obtained with the latter treatment and those obtained with a more physical accretion boundary condition based on the formation of a hot surface layer suggested by present multi-dimensional

  1. Multi-dimensional hydrodynamics of core-collapse supernovae

    NASA Astrophysics Data System (ADS)

    Murphy, Jeremiah W.

    Core-collapse supernovae are some of the most energetic events in the Universe; they herald the birth of neutron stars and black holes, are a major site for nucleosynthesis, influence galactic hydrodynamics, and trigger further star formation. As such, it is important to understand the mechanism of explosion. Moreover, observations imply that asymmetries are, at the least, a feature of the mechanism, and theory suggests that multi-dimensional hydrodynamics may be crucial for successful explosions. In this dissertation, we present theoretical investigations into the multi-dimensional nature of the supernova mechanism. It had been suggested that nuclear reactions might excite non-radial g-modes (the ε-mechanism) in the cores of progenitors, leading to asymmetric explosions. We calculate the eigenmodes for a large suite of progenitors including excitation by nuclear reactions and damping by neutrino and acoustic losses. Without exception, we find unstable g-modes for each progenitor. However, the timescales for growth are at least an order of magnitude longer than the time until collapse. Thus, the ε-mechanism does not provide appreciable amplification of non-radial modes before the core undergoes collapse. Regardless, neutrino-driven convection, the standing accretion shock instability, and other instabilities during the explosion provide ample asymmetry. To adequately simulate these, we have developed a new hydrodynamics code, BETHE-hydro, that uses the Arbitrary Lagrangian-Eulerian (ALE) approach, includes rotational terms, solves Poisson's equation for gravity on arbitrary grids, and conserves energy and momentum in its basic implementation. By using time-dependent arbitrary grids that can adapt to the numerical challenges of the problem, this code offers unique flexibility in simulating astrophysical phenomena. Finally, we use BETHE-hydro to investigate the conditions and criteria for supernova explosions by the neutrino

  2. A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

    SciTech Connect

    Ramakrishnan, Lavanya; Plale, Beth

    2010-04-05

    Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains including bioinformatics and biomedical, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to the high-level understanding of scientific workflows.

  3. Active control of multi-dimensional random sound in ducts

    NASA Technical Reports Server (NTRS)

    Silcox, R. J.; Elliott, S. J.

    1990-01-01

    Previous work has demonstrated how active control may be applied to the control of random noise in ducts. These implementations, however, have been restricted to frequencies where only plane waves are propagating in the duct. In spite of this, the need for this technology at low frequencies has progressed to the point where commercial products that apply these concepts are currently available. Extending the frequency range of this technology requires the extension of current single channel controllers to multi-variate control systems as well as addressing the problems inherent in controlling higher order modes. The application of active control in the multi-dimensional propagation of random noise in waveguides is examined. An adaptive system is implemented using measured system frequency response functions. Experimental results are presented illustrating attained suppressions of 15 to 30 dB for random noise propagating in multiple modes.

  4. The Multi-Dimensional Character of Core-Collapse Supernovae

    SciTech Connect

    Hix, William Raphael; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, Anthony; Messer, Bronson; Endeve, Eirik; Blondin, J. M.; Harris, James Austin; Marronetti, Pedro; Yakunin, Konstantin N

    2016-01-01

    Core-collapse supernovae, the culmination of massive stellar evolution, are spectacular astronomical events and the principal actors in the story of our elemental origins. Our understanding of these events, while still incomplete, centers around a neutrino-driven central engine that is highly hydrodynamically unstable. Increasingly sophisticated simulations reveal a shock that stalls for hundreds of milliseconds before reviving. Though brought back to life by neutrino heating, the development of the supernova explosion is inextricably linked to multi-dimensional fluid flows. In this paper, the outcomes of three-dimensional simulations that include sophisticated nuclear physics and spectral neutrino transport are juxtaposed to learn about the nature of the three-dimensional fluid flow that shapes the explosion. Comparison is also made between the results of simulations in spherical symmetry from several groups, to build confidence in the understanding derived from this juxtaposition.

  5. Advanced Concepts in Multi-Dimensional Radiation Detection and Imaging

    NASA Astrophysics Data System (ADS)

    Vetter, Kai; Haefner, Andy; Barnowski, Ross; Pavlovsky, Ryan; Torii, Tatsuo; Sanada, Yukihisa; Shikaze, Yoshiaki

    Recent developments in detector fabrication, signal readout, and data processing enable new concepts in radiation detection that are relevant for applications ranging from fundamental physics to medicine as well as nuclear security and safety. We present recent progress in multi-dimensional radiation detection and imaging in the Berkeley Applied Nuclear Physics program. It is based on the ability to reconstruct scenes in three dimensions and fuse them with gamma-ray image information. We are using the High-Efficiency Multimode Imager HEMI in its Compton imaging mode and combining it with contextual sensors such as the Microsoft Kinect or visual cameras. This new concept of volumetric imaging or scene data fusion provides unprecedented capabilities in radiation detection and imaging relevant for the detection and mapping of radiological and nuclear materials. This concept brings us one step closer to seeing the world with gamma-ray eyes.

  6. Detecting Shielded Special Nuclear Materials Using Multi-Dimensional Neutron Source and Detector Geometries

    NASA Astrophysics Data System (ADS)

    Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard

    2016-10-01

    A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.

  7. On Multi-Dimensional Vocabulary Teaching Mode for College English Teaching

    ERIC Educational Resources Information Center

    Zhou, Li-na

    2010-01-01

    This paper analyses the major approaches in EFL (English as a Foreign Language) vocabulary teaching from historical perspective and puts forward multi-dimensional vocabulary teaching mode for college English. The author stresses that multi-dimensional approaches of communicative vocabulary teaching, lexical phrase teaching method, the grammar…

  8. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements.

    PubMed

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained.
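
    The non-negative factorization at the heart of this approach can be sketched with scikit-learn's NMF: a database matrix of per-sample distributions is factored into a small set of signatures and their per-sample weights. The matrix sizes, number of fluid signatures, and synthetic distributions below are assumptions for illustration, not the well-log data or the paper's exact algorithm.

        # Sketch: blind source separation of well-log NMR distributions by
        # non-negative matrix factorization (database matrix ~ weights x signatures).
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(3)
        n_bins, n_samples, n_fluids = 64, 500, 3       # hypothetical sizes

        # Synthetic "fluid signatures": three smooth, non-negative distributions.
        grid = np.linspace(0, 1, n_bins)
        signatures = np.stack([np.exp(-0.5 * ((grid - c) / 0.07) ** 2)
                               for c in (0.2, 0.5, 0.8)])

        # Each logged depth mixes the signatures with non-negative weights plus noise.
        weights = rng.random((n_samples, n_fluids))
        X = weights @ signatures + 0.01 * rng.random((n_samples, n_bins))

        model = NMF(n_components=n_fluids, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(X)        # per-sample contributions of each signature
        H = model.components_             # recovered fluid signatures
        print("reconstruction error:", round(model.reconstruction_err_, 4))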

  9. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements

    NASA Astrophysics Data System (ADS)

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained.

  10. Development of a Scale Measuring Trait Anxiety in Physical Education

    ERIC Educational Resources Information Center

    Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos

    2012-01-01

    The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…

  11. Multi-Dimensional Dynamics of Human Electromagnetic Brain Activity

    PubMed Central

    Kida, Tetsuo; Tanaka, Emi; Kakigi, Ryusuke

    2016-01-01

    Magnetoencephalography (MEG) and electroencephalography (EEG) are invaluable neuroscientific tools for unveiling human neural dynamics in three dimensions (space, time, and frequency), which are associated with a wide variety of perceptions, cognition, and actions. MEG/EEG also provides different categories of neuronal indices including activity magnitude, connectivity, and network properties along the three dimensions. In the last 20 years, interest has increased in inter-regional connectivity and complex network properties assessed by various sophisticated scientific analyses. We herein review the definition, computation, short history, and pros and cons of connectivity and complex network (graph-theory) analyses applied to MEG/EEG signals. We briefly describe recent developments in source reconstruction algorithms essential for source-space connectivity and network analyses. Furthermore, we discuss a relatively novel approach used in MEG/EEG studies to examine the complex dynamics represented by human brain activity. The correct and effective use of these neuronal metrics provides a new insight into the multi-dimensional dynamics of the neural representations of various functions in the complex human brain. PMID:26834608

  12. Multi-Dimensional Dynamics of Human Electromagnetic Brain Activity.

    PubMed

    Kida, Tetsuo; Tanaka, Emi; Kakigi, Ryusuke

    2015-01-01

    Magnetoencephalography (MEG) and electroencephalography (EEG) are invaluable neuroscientific tools for unveiling human neural dynamics in three dimensions (space, time, and frequency), which are associated with a wide variety of perceptions, cognition, and actions. MEG/EEG also provides different categories of neuronal indices including activity magnitude, connectivity, and network properties along the three dimensions. In the last 20 years, interest has increased in inter-regional connectivity and complex network properties assessed by various sophisticated scientific analyses. We herein review the definition, computation, short history, and pros and cons of connectivity and complex network (graph-theory) analyses applied to MEG/EEG signals. We briefly describe recent developments in source reconstruction algorithms essential for source-space connectivity and network analyses. Furthermore, we discuss a relatively novel approach used in MEG/EEG studies to examine the complex dynamics represented by human brain activity. The correct and effective use of these neuronal metrics provides a new insight into the multi-dimensional dynamics of the neural representations of various functions in the complex human brain.

  13. An information model for managing multi-dimensional gridded data in a GIS

    NASA Astrophysics Data System (ADS)

    Xu, H.; Abdul-Kadar, F.; Gao, P.

    2016-04-01

    Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.
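
    The "unified access along any dimension" idea above can be sketched with xarray, a common choice for hypercube-style gridded data; the synthetic sea surface temperature variable, coordinate names, and grid spacing below are assumptions standing in for a real netCDF/HDF/GRIB product.

        # Sketch: treating a multi-dimensional gridded variable as a hypercube and
        # slicing/aggregating it along arbitrary dimensions with xarray.
        import numpy as np
        import pandas as pd
        import xarray as xr

        time = pd.date_range("2015-01-01", periods=24, freq="MS")   # two years, monthly
        lat = np.arange(-60.0, 60.0, 5.0)
        lon = np.arange(0.0, 360.0, 5.0)
        rng = np.random.default_rng(7)
        sst = xr.DataArray(
            20 + 5 * rng.standard_normal((time.size, lat.size, lon.size)),
            coords={"time": time, "lat": lat, "lon": lon},
            dims=("time", "lat", "lon"),
            name="sst",
        )

        # A time slice, a point time series, and a monthly climatology all use the
        # same labelled-selection interface, regardless of which dimension is cut.
        january = sst.sel(time="2015-01")
        point_series = sst.sel(lat=10.0, lon=45.0, method="nearest")
        climatology = sst.groupby("time.month").mean("time")
        print(january.shape, point_series.shape, climatology.shape)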

  14. Multi-Dimensional Analysis of Dynamic Human Information Interaction

    ERIC Educational Resources Information Center

    Park, Minsoo

    2013-01-01

    Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…

  15. Profile Analysis: Multidimensional Scaling Approach.

    ERIC Educational Resources Information Center

    Ding, Cody S.

    2001-01-01

    Outlines an exploratory multidimensional scaling-based approach to profile analysis called Profile Analysis via Multidimensional Scaling (PAMS) (M. Davison, 1994). The PAMS model has the advantages of being easily applied to samples of any size, classifying persons on a continuum, and using the person profile index for further hypothesis studies, but…
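
    A minimal first step of a PAMS-style analysis—embedding person score profiles with multidimensional scaling—might look like the sketch below; the five "subtests", the profile centering, and the sample size are invented for illustration and do not reproduce Davison's full procedure.

        # Sketch: embedding person score profiles with metric MDS, a first step of
        # a PAMS-style profile analysis.  The five subtest scores are synthetic.
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(4)
        profiles = rng.normal(loc=50, scale=10, size=(120, 5))   # 120 persons x 5 scales

        # Center each person's profile so the embedding reflects profile *shape*
        # (pattern of highs and lows) rather than overall elevation.
        shape = profiles - profiles.mean(axis=1, keepdims=True)

        mds = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
        coords = mds.fit_transform(shape)
        print(coords.shape, "stress:", round(mds.stress_, 2))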

  16. Multi-scanning mechanism enabled rapid non-mechanical multi-dimensional KTN beam deflector

    NASA Astrophysics Data System (ADS)

    Zhu, Wenbin; Chao, Ju-Hung; Chen, Chang-Jiang; Yin, Shizhuo; Hoffman, Robert C.

    2016-09-01

    In this paper, a multi-dimensional KTN beam deflector is presented. The multi-scanning mechanisms, including space-charge-controlled beam deflection, composition gradient-induced beam deflection, and temperature gradient-induced beam deflection, are harnessed. Since multi-dimensional scanning can be realized in a single KTN crystal, it represents a compact and cost-effective approach to realize multi-dimensional scanning, which can be very useful for many applications, including high-speed, high-resolution imaging, and rapid 3D printing.

  17. Stochastic Modeling of Multi-Dimensional Precipitation Fields.

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang

    1995-01-01

    A new multi-dimensional stochastic precipitation model is proposed with major emphasis on its spectral structure. As a hyperbolic type of stochastic partial differential equation, this model is characterized by having a small set of parameters, which could be easily estimated. These characteristics are similar to those of the noise forced diffusive precipitation model, but its representation of the physics and statistical features of the precipitation field is better, as in the WGR precipitation model. The model derivation was based on the AR (Auto Regressive) process considering advection and diffusion, the dominant statistical and physical characteristics of the precipitation field propagation. The model spectrum showed a good match to the GATE spectrum developed by Nakamoto et al. (1990). This model was also compared with the WGR model and the noise forced diffusive precipitation model analytically and through applications such as the sampling error estimation from space-borne sensors and raingages, and the ground-truth problem. The sampling error from space-borne sensors based on the proposed model was similar to that of the noise forced diffusive precipitation model but much smaller than that of the WGR model. A similar result was also obtained in the estimation of the sampling error from raingages. The dimensionless root mean square error of the proposed model in the ground-truth problem was in between those of the WGR model and the noise forced diffusive precipitation model, even though the difference was very small. A simulation study of a realistic precipitation field showed the effect of the variance of the noise forcing term on the lifetime of a storm event.

  18. Chemistry and Transport in a Multi-Dimensional Model

    NASA Technical Reports Server (NTRS)

    Yung, Yuk L.

    2004-01-01

    Our work has two primary scientific goals: the interannual variability (IAV) of stratospheric ozone and the hydrological cycle of the upper troposphere and lower stratosphere. Our efforts are aimed at integrating new information obtained by spacecraft and aircraft measurements to achieve a better understanding of the chemical and dynamical processes that are needed for realistic evaluations of human impact on the global environment. A primary motivation for studying the ozone layer is to separate the anthropogenic perturbations of the ozone layer from natural variability. Using the recently available merged ozone data (MOD), we have carried out an empirical orthogonal function (EOF) study of the temporal and spatial patterns of the IAV of total column ozone in the tropics. The outstanding problem about water in the stratosphere is its secular increase in the last few decades. The Caltech/JPL multi-dimensional chemical transport and photochemical model (CTM) is used to simulate the processes that control the water vapor and its isotopic composition in the stratosphere. Datasets we will use for comparison with model results include those obtained by the Total Ozone Mapping Spectrometer (TOMS), the Solar Backscatter Ultraviolet (SBUV and SBUV/2), the Stratosphere Aerosol and Gas Experiment (SAGE I and II), the Halogen Occultation Experiment (HALOE), the Atmospheric Trace Molecule Spectroscopy (ATMOS) experiment, and those soon to be obtained by the Cirrus Regional Study of Tropical Anvils and Cirrus Layers Florida Area Cirrus Experiment (CRYSTAL-FACE) mission. The focus of the investigations is the exchange between the stratosphere and the troposphere, and between the troposphere and the biosphere.

  19. Rasch analysis of the Chedoke-McMaster Attitudes towards Children with Handicaps scale.

    PubMed

    Armstrong, Megan; Morris, Christopher; Tarrant, Mark; Abraham, Charles; Horton, Mike C

    2017-02-01

    Aim: To assess whether the Chedoke-McMaster Attitudes towards Children with Handicaps (CATCH) 36-item total scale and subscales fit the unidimensional Rasch model. Method: The CATCH was administered to 1881 children, aged 7-16 years, in a cross-sectional survey. Data were used from a random sample of 416 for the initial Rasch analysis. The analysis was performed on the 36-item scale and then separately for each subscale. The analysis explored fit to the Rasch model in terms of overall scale fit, individual item fit, item response categories, and unidimensionality. Item bias for gender and school level was also assessed. Revised scales were then tested on an independent second random sample of 415 children. Results: Analyses indicated that the 36-item overall scale was not unidimensional and did not fit the Rasch model. Two scales of affective attitudes and behavioural intention were retained after four items were removed from each due to misfit to the Rasch model. Additionally, the scaling was improved when the two most negative response categories were aggregated. There was no item bias by gender or school level on the revised scales. Items assessing cognitive attitudes did not fit the Rasch model and had low internal consistency as a scale. Conclusion: Affective attitudes and behavioural intention CATCH sub-scales should be treated separately. Caution should be exercised when using the cognitive subscale. Implications for Rehabilitation: The 36-item Chedoke-McMaster Attitudes towards Children with Handicaps (CATCH) scale as a whole did not fit the Rasch model; thus indicating a multi-dimensional scale. Researchers should use two revised eight-item subscales of affective attitudes and behavioural intentions when exploring interventions aiming to improve children's attitudes towards disabled people or factors associated with those attitudes. Researchers should use the cognitive subscale with caution, as it did not create a unidimensional and internally consistent scale

  20. Spiritual Competency Scale: Further Analysis

    ERIC Educational Resources Information Center

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  1. Design and Analysis of a Formation Flying System for the Cross-Scale Mission Concept

    NASA Technical Reports Server (NTRS)

    Cornara, Stefania; Bastante, Juan C.; Jubineau, Franck

    2007-01-01

    The ESA-funded "Cross-Scale" Technology Reference Study has been carried out with the primary aim of identifying and analysing a mission concept for the investigation of fundamental space plasma processes that involve dynamical non-linear coupling across multiple length scales. To fulfill this scientific mission goal, a constellation of spacecraft is required, flying in loose formations around the Earth and sampling three characteristic plasma scale distances simultaneously, with at least two satellites per scale: electron kinetic (10 km), ion kinetic (100-2000 km), magnetospheric fluid (3000-15000 km). The key Cross-Scale mission drivers identified are the number of S/C, the space segment configuration, the reference orbit design, the transfer and deployment strategy, the inter-satellite localization and synchronization process and the mission operations. This paper presents a comprehensive overview of the mission design and analysis for the Cross-Scale concept and outlines a technically feasible mission architecture for a multi-dimensional investigation of space plasma phenomena. The main effort has been devoted to applying a thorough mission-level trade-off approach and to carrying out an exhaustive analysis, so as to allow the characterization of a wide range of mission requirements and design solutions.

  2. The relationship between multi-dimensional self-compassion and fetal-maternal attachment in prenatal period in referred women to Mashhad Health Center

    PubMed Central

    Mohamadirizi, Soheila; Kordi, Masoumeh

    2016-01-01

    Background: Multi-dimensional self-compassion is one of the important factors predicting fetal-maternal attachment, which varies among different cultures and countries. The aim of this study was therefore to determine the relationship between multi-dimensional self-compassion and fetal-maternal attachment in the prenatal period. Subjects and Methods: This cross-sectional study was carried out on 394 primigravida women referred to Mashhad Health Care Centers, using a two-stage (cluster-convenience) sampling method, in 2014. Demographic/prenatal characteristics, multi-dimensional self-compassion (26Q) with five dimensions (including self-kindness, self-judgment, common humanity, isolation, mindfulness, and over-identification), and fetal-maternal attachment (21Q) were completed by the participants. The statistical analysis was performed with various statistical tests such as the Pearson correlation coefficient, t-test, one-way ANOVA, and linear regression using SPSS statistical software (version 14). Results: Based on the findings, the mean (standard deviation) value for multi-dimensional self-compassion was 59.81 (6.4) and for fetal-maternal attachment was 81.63 (9.5). There was a positive correlation between fetal-maternal attachment and total self-compassion (P = 0.005, r = 0.30) and its dimensions, including self-kindness (P = 0.003, r = 0.24), self-judgment (P = 0.001, r = 0.18), common humanity (P = 0.004, r = 0.28), isolation (P = 0.006, r = 0.17), mindfulness (P = 0.002, r = 0.15), and over-identification (P = 0.001, r = 0.15). Conclusions: There was a correlation between multi-dimensional self-compassion and fetal-maternal attachment in pregnant women. Hence, education of caregivers by community health midwives regarding psychological problems during pregnancy can be effective for the early diagnosis and identification of such disorders. PMID:27500174

  3. Multi-dimensional Modeling of Fullerene (C60) Nanoparticle Transport in the Subsurface Environment

    NASA Astrophysics Data System (ADS)

    Bai, C.; Li, Y.

    2011-12-01

    The escalating production and consumption of engineered nanomaterials may lead to increased release into groundwater. A number of studies have revealed the potential human health effects and aquatic toxicity of nanomaterials. Understanding the fate and transport of engineered nanomaterials is very important for evaluating their potential risks to human and ecological health. While considerable effort has been devoted to this area, limited work has evaluated engineered nanomaterial transport in multiple dimensions and at the field scale. In this work, we simulate the transport of fullerene aggregates (nC60), a widely used engineered nanomaterial, in a multi-dimensional environment. A Modular Three-Dimensional Multispecies Transport Model (MT3DMS) was modified to incorporate the transport and retention of nC60. The modified MT3DMS was validated by comparison with analytical solutions and one-dimensional numerical simulation results. The validated simulator was then used to simulate nC60 transport in two- and three-dimensional field sites. Hypothetical scenarios for nanomaterial entering the subsurface environment, including injection from a well and release from a waste site, were investigated. Influences of injection rate, groundwater velocity, groundwater recharge rate, subsurface heterogeneity, and nanomaterial size and surface properties were evaluated. Insights gained from this work will be discussed.

  4. Comprehensive multi-channel multi-dimensional counter-current chromatography for separation of tanshinones from Salvia miltiorrhiza Bunge.

    PubMed

    Meng, Jie; Yang, Zhi; Liang, Junling; Zhou, Hui; Wu, Shihua

    2014-01-03

    Multi-dimensional chromatography offers increased resolution and peak capacity by coupling multiple columns with the same or different separation mechanisms. In this work, a novel multi-channel multi-dimensional counter-current chromatography (CCC) system has been successfully constructed and used for several two-dimensional (2D) and three-dimensional (3D) CCC separations including 2D A×B/A×C, A×B-C and A-B×C, and 3D A×B×C systems. These 2D and 3D CCC systems were further applied to separate the bioactive tanshinones from the extract of Tanshen (or Danshen, Salvia miltiorrhiza Bunge), a famous Traditional Chinese Medicine (TCM). As a result, the developed 2D and 3D CCC methods were successful and efficient for resolving the tanshinones from complex extracts. Compared to 1D multiple-column CCC separation, the 2D and 3D CCC systems significantly decrease analysis time, reduce solvent consumption, and increase sample throughput. The approach may be widely used for current drug development, metabolomic analysis and natural product isolation.

  5. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
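
    To make the two sensitivity measures concrete, the sketch below estimates first-order Sobol indices with the Saltelli pick-and-freeze scheme in plain NumPy, applied to a cheap stand-in for the metamodel (an Ishigami-like test function). The test function, input ranges, and sample size are illustrative assumptions, not the drilling-process surrogate; the Elementary Effect screening step is omitted here.

        # Sketch: first-order Sobol indices via the Saltelli pick-and-freeze estimator,
        # applied to a cheap stand-in for the metamodel.
        import numpy as np

        def model(x):                                    # stand-in surrogate, 3 inputs
            return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

        rng = np.random.default_rng(5)
        n, d = 20000, 3
        A = rng.uniform(-np.pi, np.pi, size=(n, d))
        B = rng.uniform(-np.pi, np.pi, size=(n, d))
        fA, fB = model(A), model(B)
        var_y = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                          # vary only input x_i
            fABi = model(ABi)
            # Saltelli-type estimator of the first-order index S_i.
            S_i = np.mean(fB * (fABi - fA)) / var_y
            print(f"S_{i + 1} ~ {S_i:.3f}")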

  6. Multi-dimensional temporal abstraction and data mining of medical time series data: trends and challenges.

    PubMed

    Catley, Christina; Stratti, Heidi; McGregor, Carolyn

    2008-01-01

    This paper presents emerging trends in the area of temporal abstraction and data mining, as applied to multi-dimensional data. The clinical context is that of Neonatal Intensive Care, an acute care environment distinguished by multi-dimensional and high-frequency data. Six key trends are identified and classified into the following categories: (1) data; (2) results; (3) integration; and (4) knowledge base. These trends form the basis of next-generation knowledge discovery in data systems, which must address challenges associated with supporting multi-dimensional and real-world clinical data, as well as null hypothesis testing. Architectural drivers for frameworks that support data mining and temporal abstraction include: process-level integration (i.e. workflow order); synthesized knowledge bases for temporal abstraction which combine knowledge derived from both data mining and domain experts; and system-level integration.

  7. Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics

    SciTech Connect

    Dai, William W; Woodward, Paul R

    2009-01-01

    An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantages of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.
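
    The paper's own multi-dimensional Riemann solver is not reproduced here; as a generic point of reference, the sketch below implements the standard one-dimensional HLL approximate Riemann flux for the Euler equations, the kind of building block that Godunov-type schemes of this family rely on. The wave-speed estimates and the ideal-gas ratio of specific heats are simple assumptions.

        import numpy as np

        GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

        def euler_flux(U):
            """Physical flux of the 1D Euler equations for conserved state U = (rho, rho*u, E)."""
            rho, mom, E = U
            u = mom / rho
            p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
            return np.array([mom, mom * u + p, (E + p) * u])

        def wave_speed(U):
            rho, mom, E = U
            u = mom / rho
            p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
            return u, np.sqrt(GAMMA * p / rho)

        def hll_flux(UL, UR):
            """HLL approximate Riemann flux between left and right conserved states."""
            uL, aL = wave_speed(UL)
            uR, aR = wave_speed(UR)
            sL = min(uL - aL, uR - aR)      # simple two-wave speed estimates
            sR = max(uL + aL, uR + aR)
            FL, FR = euler_flux(UL), euler_flux(UR)
            if sL >= 0.0:
                return FL
            if sR <= 0.0:
                return FR
            return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

        # Sod-type left/right states: (rho, rho*u, E) with E = p/(gamma-1) + 0.5*rho*u**2
        UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
        UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
        print(hll_flux(UL, UR))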

  8. Towards Optimal Multi-Dimensional Query Processing with BitmapIndices

    SciTech Connect

    Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

    2005-09-30

    Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the access costs for processing multi-dimensional queries using bitmap indices with binning. Innovative features of our algorithm include (a) optimally placing the bin boundaries and (b) dynamically reordering the evaluation of the query terms. In addition, we derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
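
    A toy illustration of the binned bitmap idea studied above, assuming a single attribute with equi-depth bins: a range predicate is answered from the bin bitmaps, and only the two boundary bins require a candidate check against the base data. The bin count, data distribution, and query are made up for the example.

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.uniform(0.0, 100.0, size=10_000)

        # Build an equi-depth binned bitmap index (real systems compress these bitmaps)
        n_bins = 16
        edges = np.quantile(data, np.linspace(0.0, 1.0, n_bins + 1))
        bin_of = np.clip(np.searchsorted(edges, data, side="right") - 1, 0, n_bins - 1)
        bitmaps = [(bin_of == b) for b in range(n_bins)]

        def range_query(lo, hi):
            b_lo = int(np.clip(np.searchsorted(edges, lo, side="right") - 1, 0, n_bins - 1))
            b_hi = int(np.clip(np.searchsorted(edges, hi, side="right") - 1, 0, n_bins - 1))
            hits = np.zeros(data.size, dtype=bool)
            for b in range(b_lo + 1, b_hi):          # fully covered bins: answered from bitmaps alone
                hits |= bitmaps[b]
            for b in {b_lo, b_hi}:                   # boundary bins: candidate check against base data
                hits |= bitmaps[b] & (data >= lo) & (data <= hi)
            return hits

        result = range_query(12.5, 64.0)
        print(result.sum(), "rows satisfy the predicate")

    The optimization questions addressed in the paper concern where to place the bin boundaries (here a plain equi-depth choice) so that, for a given query workload, the expected cost of exactly this candidate-checking step is minimized.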

  9. Multi-dimensional Treatment Foster Care in England: differential effects by level of initial antisocial behaviour.

    PubMed

    Sinclair, Ian; Parry, Elizabeth; Biehal, Nina; Fresen, John; Kay, Catherine; Scott, Stephen; Green, Jonathan

    2016-08-01

    Multi-dimensional Treatment Foster Care (MTFC), recently renamed Treatment Foster Care Oregon for Adolescents (TFCO-A), is an internationally recognised intervention for troubled young people in public care. This paper seeks to explain conflicting results with MTFC by testing the hypotheses that it benefits antisocial young people more than others and does so through its effects on their behaviour. Hard-to-manage young people in English foster or residential homes were assessed at entry to a randomised and case-controlled trial of MTFC (n = 88) and usual care (TAU) (n = 83). Primary outcome was the Children's Global Assessment Scale (CGAS) at 12 months analysed according to high (n = 112) or low (n = 59) baseline level of antisocial behaviour on the Health of the Nation Outcome Scales for Children and Adolescents. After adjusting for covariates, there was no overall treatment effect on CGAS. However, the High Antisocial Group receiving MTFC gained more on the CGAS than the Low group (mean improvement 9.36 points vs. 5.33 points). This difference remained significant (p < 0.05) after adjusting for propensity and covariates and was statistically explained by the reduced antisocial behaviour ratings in MTFC. These analyses support the use of MTFC for youth in public care but only for those with higher levels of antisocial behaviour. Further work is needed on whether such benefits persist, and on possible negative effects of this treatment for those with low antisocial behaviour. Trial registry name: ISRCTN; registry identification number: ISRCTN 68038570; registry URL: www.isrctn.com.

  10. Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy

    PubMed Central

    2011-01-01

    Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across gender groups of medical students. Conclusions The results supported scale multi-dimensionality. The 20-item JSPE provides a valid and reliable scale to measure empathy not only in undergraduate and graduate medical education programmes, but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268

  11. Horn’s Curve Estimation Through Multi-Dimensional Interpolation

    DTIC Science & Technology

    2013-03-01

    ...paper Factor Analysis: Limitations and Alternatives by Ehrenberg & Goodhardt (1976). They provide an excellent example and thorough analysis of a... Wine+Quality. Dillon, W. R., & Goldstein, M. (1984). Multivariate Analysis. New York: John Wiley & Sons. Ehrenberg, A., & Goodhardt, G. (1976). Factor...

  12. Open, Modular Services for Large, Multi-Dimensional Raster Coverages: The OGC Web Coverage Service (WCS) Standards Suite

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2009-04-01

    Recent progress in hardware and software technology opens up vistas where flexible services on large, multi-dimensional coverage data become a commodity. Interactive data browsing, as with Virtual Globes, selective download, and ad-hoc analysis services are about to become routinely available, as several sites already demonstrate. However, for easy access and true machine-to-machine communication, Semantic Web concepts, currently being investigated for vector and metadata, need to be extended to raster data and other coverage types. It will then be even more important to rely on open standards for data and service interoperability. The Open Geospatial Consortium (OGC), following a modular approach to specifying geo-service interfaces, has issued the Web Coverage Service (WCS) Implementation Standard for accessing coverages or parts thereof. In contrast to the Web Map Service (WMS), which delivers imagery, WCS preserves data semantics and thus allows further processing. Together with the Web Catalog Service (CS-W) and the Web Feature Service (WFS), WCS completes the classical triad of metadata, vector, and raster data. As such, they represent the core data services on which other services build. The current version of WCS is 1.1 with Corrigendum 2, also referred to as WCS 1.1.2. The WCS Standards Working Group (WCS.SWG) is continuing development of WCS in various directions. One work item is to extend WCS, which is currently confined to regularly gridded data, with support for further coverage types, such as those specified in ISO 19123. Two recently released extensions to WCS are WCS-T ("T" standing for "transactional"), which adds upload capabilities to coverage servers, and WCPS (Web Coverage Processing Service), which offers a coverage processing language, thereby bridging the gap to the generic WPS (Web Processing Service). All this is embedded in OGC's current initiative to achieve modular topical specification suites through so-called "extensions" which add focused…

  13. A Multi-Dimensional Approach to Measuring News Media Literacy

    ERIC Educational Resources Information Center

    Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa

    2015-01-01

    Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and two new scales are presented that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…

  14. Developing Multi-Dimensional Evaluation Criteria for English Learning Websites with University Students and Professors

    ERIC Educational Resources Information Center

    Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen

    2011-01-01

    Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…

  15. Kullback-Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Boughton, Keith A.

    2011-01-01

    This paper first discusses the relationship between Kullback-Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory and is further interpreted for the two-dimensional case, from a geometric perspective. This explication should allow for a better understanding of the various item selection methods…
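
    For readers less familiar with the quantity involved, the Kullback-Leibler item information between two latent-trait points can be written, for a dichotomous item in the one-dimensional case (a simplification of the multi-dimensional setting treated in the paper), as

        \mathrm{KL}_j(\theta \,\|\, \theta_0)
          = P_j(\theta_0)\,\ln\frac{P_j(\theta_0)}{P_j(\theta)}
          + \bigl[1 - P_j(\theta_0)\bigr]\,\ln\frac{1 - P_j(\theta_0)}{1 - P_j(\theta)},

    where P_j(\theta) is the probability of a correct response to item j. In multi-dimensional adaptive testing, \theta and \theta_0 become vectors, and a scalar item-selection index is commonly obtained by integrating this quantity over a neighbourhood of the provisional ability estimate.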

  16. A Replication Study on the Multi-Dimensionality of Online Social Presence

    ERIC Educational Resources Information Center

    Mykota, David B.

    2015-01-01

    The purpose of the present study is to conduct an external replication into the multi-dimensionality of social presence as measured by the Computer-Mediated Communication Questionnaire (Tu, 2005). Online social presence is one of the more important constructs for determining the level of interaction and effectiveness of learning in an online…

  17. Impact of Malaysian Polytechnics' Head of Department Multi-Dimensional Leadership Orientation towards Lecturers Work Commitment

    ERIC Educational Resources Information Center

    Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd

    2012-01-01

    The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…

  18. Multi-dimensional color image storage and retrieval for a normal arbitrary quantum superposition state

    NASA Astrophysics Data System (ADS)

    Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang

    2014-04-01

    Multi-dimensional color image processing has two difficulties. One is that a large number of bits are needed to store multi-dimensional color images, such as three-dimensional color images. The other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve these problems, this paper proposes a new representation for multi-dimensional color images, called a normal arbitrary quantum superposition state (NAQSS), in which a register of qubits represents the colors and coordinates of pixels (for example, a large three-dimensional color image can be represented using only 30 qubits) and one additional qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm that can search out a coordinate of a target sub-image in a number of iterations that depends on the numbers of pixels of the image and of the target sub-image.

  19. Evidencing Learning Outcomes: A Multi-Level, Multi-Dimensional Course Alignment Model

    ERIC Educational Resources Information Center

    Sridharan, Bhavani; Leitch, Shona; Watty, Kim

    2015-01-01

    This conceptual framework proposes a multi-level, multi-dimensional course alignment model to implement a contextualised constructive alignment of rubric design that authentically evidences and assesses learning outcomes. By embedding quality control mechanisms at each level for each dimension, this model facilitates the development of an aligned…

  20. Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter

    ERIC Educational Resources Information Center

    Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.

    2010-01-01

    We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…

  1. Higher order multi-dimensional extensions of Cesàro theorem

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Ji, Un Cig; Saitô, Kimiaki

    2015-12-01

    The Cesàro theorem is extended to the cases: (1) higher order Cesàro mean for sequence (discrete case); and (2) higher order, multi-dimensional and continuous Cesàro mean for functions. Also, we study the Cesàro theorem for the case of positive-order.
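
    For orientation, the classical first-order, one-dimensional statements being generalized are (in standard notation, not quoted from the paper):

        a_n \to a \ \ (n \to \infty) \quad\Longrightarrow\quad \frac{1}{n}\sum_{k=1}^{n} a_k \to a,

        f(t) \to a \ \ (t \to \infty) \quad\Longrightarrow\quad \frac{1}{T}\int_{0}^{T} f(t)\,dt \to a \ \ (T \to \infty).

    The higher-order and multi-dimensional extensions of the paper replace these simple averages with iterated and multi-variable Cesàro means.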

  2. A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system

    SciTech Connect

    Asadzadeh, M.; Bartoszek, K.

    2011-05-20

    We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.

  3. Pedagogical Factors Stimulating the Self-Development of Students' Multi-Dimensional Thinking in Terms of Subject-Oriented Teaching

    ERIC Educational Resources Information Center

    Andreev, Valentin I.

    2014-01-01

    The main aim of this research is to disclose the essence of students' multi-dimensional thinking, also to reveal the rating of factors which stimulate the raising of effectiveness of self-development of students' multi-dimensional thinking in terms of subject-oriented teaching. Subject-oriented learning is characterized as a type of learning where…

  4. Scaling analysis of stock markets

    NASA Astrophysics Data System (ADS)

    Bu, Luping; Shang, Pengjian

    2014-06-01

    In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scaling exponents. The DCCA method quantifies the cross-correlation between two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of stock markets in three Western countries and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using the DFA, LSDFA, and DCCA methods. The findings are that stock correlations are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. The LSDFA shows more comprehensive and detailed features than the traditional DFA method, as well as the increasing economic integration of China with the rest of the world after the global financial crisis. Turning to cross-correlations, the six stock markets show different properties, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis.
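
    A compact sketch of the standard DFA procedure referred to above (profile construction, windowed polynomial detrending, and a log-log fit of fluctuation against scale). The synthetic input series and scale choices are placeholders, not the stock-index data analysed in the paper.

        import numpy as np

        def dfa(x, scales, order=1):
            """Return the DFA fluctuation function F(s) and the fitted scaling exponent."""
            y = np.cumsum(x - np.mean(x))            # integrated profile
            F = []
            for s in scales:
                n_seg = len(y) // s
                rms = []
                for i in range(n_seg):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial detrending
                    rms.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]        # scaling exponent from log-log slope
            return np.array(F), alpha

        rng = np.random.default_rng(2)
        series = rng.standard_normal(4096)           # uncorrelated noise: expect an exponent near 0.5
        scales = np.unique(np.logspace(np.log10(16), np.log10(512), 12).astype(int))
        F, alpha = dfa(series, scales)
        print("estimated DFA exponent:", round(float(alpha), 2))

    Roughly speaking, LSDFA restricts the same fit to narrow bands of scales to obtain local exponents, and DCCA replaces the squared residuals with the covariance of the detrended residuals of two series.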

  5. Identifying Multi-Dimensional Co-Clusters in Tensors Based on Hyperplane Detection in Singular Vector Spaces

    PubMed Central

    Liu, Xinyu; Yan, Hong

    2016-01-01

    Co-clustering, often called biclustering for two-dimensional data, has found many applications, such as gene expression data analysis and text mining. Nowadays, a variety of multi-dimensional arrays (tensors) frequently occur in data analysis tasks, and co-clustering techniques play a key role in dealing with such datasets. Co-clusters represent coherent patterns and exhibit important properties along all the modes. Development of robust co-clustering techniques is important for the detection and analysis of these patterns. In this paper, a co-clustering method based on hyperplane detection in singular vector spaces (HDSVS) is proposed. Specifically, in this method higher-order singular value decomposition (HOSVD) transforms a tensor into a core part and a singular vector matrix along each mode, whose row vectors can be clustered by a linear grouping algorithm (LGA). Meanwhile, hyperplanar patterns are extracted and support the identification of multi-dimensional co-clusters. To validate HDSVS, a number of synthetic and biological tensors were adopted. The synthetic tensors attested to the favorable performance of this algorithm on noisy or overlapping data. Experiments with gene expression data and lineage data of embryonic cells further verified the reliability of HDSVS on practical problems. Moreover, the detected co-clusters are highly consistent with important genetic pathways and gene ontology annotations. Finally, a series of comparisons between HDSVS and state-of-the-art methods on synthetic tensors and a yeast gene expression tensor were implemented, verifying the robust and stable performance of our method. PMID:27598575
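
    The full HDSVS pipeline (HOSVD followed by linear grouping and hyperplane detection) is beyond a short snippet, but its first step, computing a singular vector matrix for each mode from the tensor unfoldings, can be sketched as below. The planted block, tensor size, and the crude grouping of the mode-0 rows are illustrative assumptions; the paper uses a linear grouping algorithm (LGA) at that point.

        import numpy as np

        def mode_unfold(T, mode):
            """Unfold tensor T along the given mode into a matrix."""
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd_factors(T):
            """Left singular vector matrix of each mode unfolding (the HOSVD factor matrices)."""
            return [np.linalg.svd(mode_unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]

        rng = np.random.default_rng(3)
        T = rng.standard_normal((30, 20, 10))
        T[:10, :8, :4] += 4.0                        # plant a coherent block, i.e. a crude co-cluster

        U_modes = hosvd_factors(T)
        # Crude grouping stand-in: split mode-0 rows by their loading on the leading singular vector.
        lead = np.abs(U_modes[0][:, 0])
        labels = (lead > np.median(lead)).astype(int)
        print("mode-0 row grouping:", labels)

    Rows belonging to the planted block tend to load strongly on the leading singular vector, which is the kind of structure the hyperplane-detection step then formalizes across all modes.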

  6. On the excited-state multi-dimensionality in cyanines

    NASA Astrophysics Data System (ADS)

    Dietzek, Benjamin; Brüggemann, Ben; Persson, Petter; Yartsev, Arkady

    2008-03-01

    Vibrational coherences in a photoexcited cyanine dye are preserved for the time-scale of diffusive torsional motion to the bottom of the excited-state potential. The coherently excited modes are virtually unaffected by solvent friction and thus distinct from the bond-twisting motion, which is strongly coupled to the surrounding solvent. We correlate the modes apparent in the resonance Raman and the four-wave mixing signal of 1,1'-diethyl-2,2'-cyanine with the understanding of optimal control of isomerization. In turn, the experimental results illustrate that optimal control might be used to obtain vibrational information complementary to conventional spectroscopic data.

  7. Multi-dimensional phenotyping: towards a new taxonomy for airway disease.

    PubMed

    Wardlaw, A J; Silverman, M; Siva, R; Pavord, I D; Green, R

    2005-10-01

    All the real knowledge which we possess depends on methods by which we distinguish the similar from the dissimilar. The greater the number of natural distinctions this method comprehends, the clearer becomes our idea of things. The more numerous the objects which employ our attention, the more difficult it becomes to form such a method, and the more necessary. Classification is a fundamental part of medicine. Diseases are often categorized according to pre-20th century descriptions and concepts of disease based on symptoms, signs and functional abnormalities rather than on underlying pathogenesis. Where the aetiology of disease has been revealed (for example in the infectious diseases) a more precise classification has become possible, but in the chronic inflammatory diseases, and in the inflammatory airway diseases in particular, where pathogenesis has been stubbornly difficult to elucidate, we still use broad descriptive terms such as asthma and chronic obstructive pulmonary disease, which defy precise definition because they encompass a wide spectrum of presentations and physiological and cellular abnormalities. It is our contention that these broad-brush terms have outlived their usefulness and that we should be looking to create a new taxonomy of airway disease: a taxonomy that more closely reflects the spectrum of phenotypes encompassed within the term airway inflammatory diseases, and that gives full recognition to late 20th and 21st century insights into the disordered physiology and cell biology that characterize these conditions, in the expectation that these will map more closely to both aetiology and response to treatment. Development of this taxonomy will require a much more complete and sophisticated correlation of the many variables that make up a condition than has usually been employed, in an approach that encompasses multi-dimensional phenotyping and uses complex statistical tools such as cluster analysis.

  8. Flexible optofluidic waveguide platform with multi-dimensional reconfigurability

    PubMed Central

    Parks, Joshua W.; Schmidt, Holger

    2016-01-01

    Dynamic reconfiguration of photonic function is one of the hallmarks of optofluidics. A number of approaches have been taken to implement optical tunability in microfluidic devices. However, a device architecture that allows for simultaneous high-performance microfluidic fluid handling as well as dynamic optical tuning has not been demonstrated. Here, we introduce such a platform based on a combination of solid- and liquid-core polydimethylsiloxane (PDMS) waveguides that also provides fully functioning microvalve-based sample handling. A combination of these waveguides forms a liquid-core multimode interference waveguide that allows for multi-modal tuning of waveguide properties through core liquids and pressure/deformation. We also introduce a novel lifting-gate lightvalve that simultaneously acts as a fluidic microvalve and optical waveguide, enabling mechanically reconfigurable light and fluid paths and seamless incorporation of controlled particle analysis. These new functionalities are demonstrated by an optical switch with >45 dB extinction ratio and an actuatable particle trap for analysis of biological micro- and nanoparticles. PMID:27597164

  9. Flexible optofluidic waveguide platform with multi-dimensional reconfigurability

    NASA Astrophysics Data System (ADS)

    Parks, Joshua W.; Schmidt, Holger

    2016-09-01

    Dynamic reconfiguration of photonic function is one of the hallmarks of optofluidics. A number of approaches have been taken to implement optical tunability in microfluidic devices. However, a device architecture that allows for simultaneous high-performance microfluidic fluid handling as well as dynamic optical tuning has not been demonstrated. Here, we introduce such a platform based on a combination of solid- and liquid-core polydimethylsiloxane (PDMS) waveguides that also provides fully functioning microvalve-based sample handling. A combination of these waveguides forms a liquid-core multimode interference waveguide that allows for multi-modal tuning of waveguide properties through core liquids and pressure/deformation. We also introduce a novel lifting-gate lightvalve that simultaneously acts as a fluidic microvalve and optical waveguide, enabling mechanically reconfigurable light and fluid paths and seamless incorporation of controlled particle analysis. These new functionalities are demonstrated by an optical switch with >45 dB extinction ratio and an actuatable particle trap for analysis of biological micro- and nanoparticles.

  10. Discriminative Dimensionality Reduction for Multi-dimensional Sequences.

    PubMed

    Su, Bing; Ding, Xiaoqing; Wang, Hao; Wu, Ying

    2017-02-07

    Since the observables at particular time instants in a temporal sequence exhibit dependencies, they are not independent samples. Thus, it is not plausible to apply i.i.d. assumption-based dimensionality reduction methods to sequence data. This paper presents a novel supervised dimensionality reduction approach for sequence data, called Linear Sequence Discriminant Analysis (LSDA). It learns a linear discriminative projection of the feature vectors in sequences to a lower-dimensional subspace by maximizing the separability of the sequence classes such that the entire sequences are holistically discriminated. The sequence class separability is constructed based on the sequence statistics, and the use of different statistics produces different LSDA methods. This paper presents and compares two novel LSDA methods, namely M-LSDA and D-LSDA. M-LSDA extracts model-based statistics by exploiting the dynamical structure of the sequence classes, and D-LSDA extracts the distance-based statistics by computing the pairwise similarity of samples from the same sequence class. Extensive experiments on several different tasks have demonstrated the effectiveness and the general applicability of the proposed methods.

  11. Hitchhiker's guide to multi-dimensional plant pathology.

    PubMed

    Saunders, Diane G O

    2015-02-01

    Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data and only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources.

  12. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    NASA Astrophysics Data System (ADS)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all the cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  13. An extension of the TV-HLL scheme for multi-dimensional compressible flows

    NASA Astrophysics Data System (ADS)

    Tiam Kapen, Pascalin; Tchuen, Ghislain

    2015-03-01

    This paper investigates a very simple method to numerically approximate the solution of the multi-dimensional Riemann problem for gas dynamics, using a literal extension of the Toro-Vazquez Harten-Lax-van Leer (TV-HLL) scheme as its basis. Indeed, the present scheme is obtained by following the Toro-Vazquez splitting and using the HLL algorithm with modified wave speeds for the pressure system. An essential feature of the TV-HLL scheme is its simplicity and its accuracy in computing multi-dimensional flows. The proposed scheme is carefully designed to simplify its eventual numerical implementation. It has been applied to numerical tests and its performance is demonstrated for several two-dimensional and three-dimensional test problems.

  14. Towards a theory-based multi-dimensional framework for assessment in mathematics: The "SEA" framework

    NASA Astrophysics Data System (ADS)

    Anku, Sitsofe E.

    1997-09-01

    Using the reform documents of the National Council of Teachers of Mathematics (NCTM) (NCTM, 1989, 1991, 1995), a theory-based multi-dimensional assessment framework (the "SEA" framework) which should help expand the scope of assessment in mathematics is proposed. This framework uses a context based on mathematical reasoning and has components that comprise mathematical concepts, mathematical procedures, mathematical communication, mathematical problem solving, and mathematical disposition.

  15. Mokken Scale Analysis Using Hierarchical Clustering Procedures

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries

    2004-01-01

    Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…

  16. Study on the construction of multi-dimensional Remote Sensing feature space for hydrological drought

    NASA Astrophysics Data System (ADS)

    Xiang, Daxiang; Tan, Debao; Cui, Yuanlai; Wen, Xiongfei; Shen, Shaohong; Li, Zhe

    2014-03-01

    Hydrological drought refers to an abnormal water shortage caused by precipitation and surface water shortages or a groundwater imbalance. Hydrological drought is reflected in a drop in surface water, a decrease in vegetation productivity, an increase in the temperature difference between day and night, and so on. Remote sensing permits the observation of surface water, vegetation, temperature and other information from a macro perspective. This paper analyzes the correlation and differentiation of both remote sensing and surface-measured indicators, after the selection and extraction of a series of representative remote sensing characteristic parameters, such as vegetation index, surface temperature and surface water, according to the spectral characteristics of surface features in HJ-1A/B CCD/IRS imagery. Finally, multi-dimensional remote sensing features for hydrological drought are built on an intelligent collaborative model. For the Dongting Lake area, two drought events are analyzed to verify the multi-dimensional features, using remote sensing data from different phases and field observation data. The experimental results show that multi-dimensional features are a good approach for characterizing hydrological drought.
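
    As one concrete example of the vegetation-index dimension mentioned above (the specific indices derived from the HJ-1A/B imagery are not reproduced here), the widely used normalized difference vegetation index is

        \mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}},

    where \rho_{\mathrm{NIR}} and \rho_{\mathrm{Red}} are the near-infrared and red surface reflectances. Drought stress generally lowers such indices, which is why a vegetation index can sit alongside surface temperature and surface water extent as one axis of a multi-dimensional drought feature space.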

  17. Organising multi-dimensional biological image information: the BioImage Database.

    PubMed

    Carazo, J M; Stelzer, E H; Engel, A; Fita, I; Henn, C; Machtynger, J; McNeil, P; Shotton, D M; Chagoyen, M; de Alarcón, P A; Fritsch, R; Heymann, J B; Kalko, S; Pittet, J J; Rodriguez-Tomé, P; Boudier, T

    1999-01-01

    Nowadays it is possible to unravel complex information at all levels of cellular organization by obtaining multi-dimensional image information. At the macromolecular level, three-dimensional (3D) electron microscopy, together with other techniques, is able to reach resolutions at the nanometer or subnanometer level. The information is delivered in the form of 3D volumes containing samples of a given function, for example, the electron density distribution within a given macromolecule. The same situation happens at the cellular level with the new forms of light microscopy, particularly confocal microscopy, all of which produce biological 3D volume information. Furthermore, it is possible to record sequences of images over time (videos), as well as sequences of volumes, bringing key information on the dynamics of living biological systems. It is in this context that work on BioImage started two years ago, and that its first version is now presented here. In essence, BioImage is a database specifically designed to contain multi-dimensional images, perform queries and interactively work with the resulting multi-dimensional information on the World Wide Web, as well as accomplish the required cross-database links. Two sister home pages of BioImage can be accessed at http://www.bioimage.org and http://www-embl.bioimage.org

  18. Minimizing I/O Costs of Multi-Dimensional Queries with BitmapIndices

    SciTech Connect

    Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

    2006-03-30

    Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than a bitmap for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering for discarding false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as "candidate checking," requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of "candidate checking" for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of the data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real-life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.

  19. The INTERGROWTH-21st Project Neurodevelopment Package: A Novel Method for the Multi-Dimensional Assessment of Neurodevelopment in Pre-School Age Children

    PubMed Central

    Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose

    2014-01-01

    Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589

  20. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation of both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO method is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  1. Rasch rating scale analysis of the Attitudes Toward Research Scale.

    PubMed

    Papanastasiou, Elena C; Schumacker, Randall

    2014-01-01

    College students may view research methods courses with negative attitudes; however, few studies have investigated this issue due to the lack of instruments that measure students' attitudes towards research. Therefore, the purpose of this study was to examine the psychometric properties of an Attitudes Toward Research Scale using Rasch rating scale analysis. Assessment of attitudes toward research is essential to determine whether students have negative attitudes towards research and to assist instructors in better facilitating the learning of research methods in their courses. The results of this study show that a thirty-item Attitudes Toward Research Scale yielded scores with high person and item reliability.
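
    For reference, the Rasch rating scale model that underlies this kind of analysis is usually written (standard form, not quoted from the paper) as the probability that person n selects category k on item i:

        P(X_{ni} = k) =
          \frac{\exp \sum_{j=0}^{k} \bigl[\theta_n - (\delta_i + \tau_j)\bigr]}
               {\sum_{m=0}^{M} \exp \sum_{j=0}^{m} \bigl[\theta_n - (\delta_i + \tau_j)\bigr]},
          \qquad \tau_0 \equiv 0,

    where \theta_n is the person measure, \delta_i the item difficulty, and \tau_1, \dots, \tau_M the category thresholds shared by all items of the scale. Person and item reliability, as reported above, summarize how precisely these \theta_n and \delta_i are separated by the data.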

  2. Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling

    ERIC Educational Resources Information Center

    Kim, Se-Kang; Davison, Mark L.; Frisby, Craig L.

    2007-01-01

    This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). Profile Analysis via Multidimensional Scaling (PAMS) is an exploratory method for identifying major…

  3. Multi-dimensional Simulations of Core Collapse Supernovae employing Ray-by-Ray Neutrino Transport

    NASA Astrophysics Data System (ADS)

    Hix, W. R.; Mezzacappa, A.; Liebendoerfer, M.; Messer, O. E. B.; Blondin, J. M.; Bruenn, S. W.

    2001-12-01

    Decades of research on the mechanism which causes core collapse supernovae has evolved a paradigm wherein the shock that results from the formation of the proto-neutron star stalls, failing to produce an explosion. Only when the shock is re-energized by the tremendous neutrino flux that is carrying off the binding energy of this proto-neutron star can it drive off the star's envelope, creating a supernova. Work in recent years has demonstrated the importance of multi-dimensional hydrodynamic effects like convection to successful simulation of an explosion. Further work has established the necessity of accurately characterizing the distribution of neutrinos in energy and direction. This requires discretizing the neutrino distribution into multiple groups, adding greatly to the computational cost. However, no supernova simulations to date have combined self-consistent multi-group neutrino transport with multi-dimensional hydrodynamics. We present preliminary results of our efforts to combine these important facets of the supernova mechanism by coupling self-consistent ray-by-ray multi-group Boltzmann and flux-limited diffusion neutrino transport schemes to multi-dimensional hydrodynamics. This research is supported by NASA under contract NAG5-8405, by the NSF under contract AST-9877130, and under a SciDAC grant from the DoE Office of Science High Energy and Nuclear Physics Program. Work at Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  4. An optimization approach to multi-dimensional time domain acoustic inverse problems.

    PubMed

    Gustafsson, M; He, S

    2000-10-01

    An optimization approach to a multi-dimensional acoustic inverse problem in the time domain is considered. The density and/or the sound speed are reconstructed by minimizing an objective functional. By introducing dual functions and using the Gauss divergence theorem, the gradient of the objective functional is found as an explicit expression. The parameters are then reconstructed by an iterative algorithm (the conjugate gradient method). The reconstruction algorithm is tested with noisy data, and these tests indicate that the algorithm is stable and robust. The computation time for the reconstruction is greatly improved when the analytic gradient is used.
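
    The adjoint-based gradient computation described above is specific to the acoustic problem, but the outer iteration is the familiar conjugate gradient loop. Below is a minimal sketch, assuming the misfit functional is replaced by a simple quadratic so that the classic linear conjugate gradient iteration can be shown; in the paper the gradient instead comes from the dual-function (adjoint) computation and a nonlinear variant of the method is used.

        import numpy as np

        def conjugate_gradient(A, b, iters=50, tol=1e-12):
            """Minimize 0.5*x^T A x - b^T x (equivalently, solve A x = b) for symmetric positive definite A."""
            x = np.zeros_like(b)
            r = b - A @ x            # residual = negative gradient of the quadratic objective
            d = r.copy()
            for _ in range(iters):
                Ad = A @ d
                alpha = (r @ r) / (d @ Ad)
                x = x + alpha * d
                r_new = r - alpha * Ad
                if np.linalg.norm(r_new) < tol:
                    break
                beta = (r_new @ r_new) / (r @ r)
                d = r_new + beta * d
                r = r_new
            return x

        # Quadratic stand-in for the misfit functional
        A = np.array([[3.0, 0.5],
                      [0.5, 1.0]])
        b = np.array([1.0, -2.0])
        x_star = conjugate_gradient(A, b)
        print("reconstructed parameters:", x_star, "exact:", np.linalg.solve(A, b))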

  5. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  6. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle.

    PubMed

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, MinKwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-03-08

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles.

  7. Multi-dimensional modeling of the application of catalytic combustion to homogeneous charge compression ignition engine

    NASA Astrophysics Data System (ADS)

    Zeng, Wen; Xie, Maozhao

    2006-12-01

    The detailed surface reaction mechanism of methane on a rhodium catalyst was analyzed. Comparisons between numerical simulation and experiments showed basic agreement. The combustion process of a homogeneous charge compression ignition (HCCI) engine whose piston surface had been coated with catalyst (rhodium and platinum) was numerically investigated. A multi-dimensional model with detailed chemical kinetics was built. The effects of catalytic combustion on the ignition timing, the temperature and CO concentration fields, and the HC, CO and NOx emissions of the HCCI engine were discussed. The results showed that the ignition timing of the HCCI engine was advanced and the emissions of HC and CO were decreased by the catalysis.

  8. Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.

    PubMed

    Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen

    2014-05-01

    Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well as the results from its application to differently sized parks in Berlin and Salzburg.

  9. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle

    PubMed Central

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, MinKwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-01-01

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles. PMID:28272445

  10. Computational multi-dimensional imaging based on compound-eye optics

    NASA Astrophysics Data System (ADS)

    Horisaki, Ryoichi; Nakamura, Tomoya; Tanida, Jun

    2014-11-01

    Artificial compound-eye optics have been used for three-dimensional information acquisition and display. They also enable a diversity of coded imaging processes in each elemental optical unit. In this talk, we introduce our single-shot compound-eye imaging system for observing multi-dimensional information, including depth, spectrum, and polarization, based on compressive sensing. Furthermore, the approach can be applied to increase the dynamic range and field of view. We also demonstrate extended depth-of-field (DOF) cameras based on compound-eye optics. These extended-DOF cameras physically or computationally implement phase modulations to increase the focusing range.

  11. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

  12. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle

    NASA Astrophysics Data System (ADS)

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, Minkwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-03-01

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles.

  13. Discovering MicroRNA-Regulatory Modules in Multi-Dimensional Cancer Genomic Data: A Survey of Computational Methods

    PubMed Central

    Walsh, Christopher J.; Hu, Pingzhao; Batt, Jane; dos Santos, Claudia C.

    2016-01-01

    MicroRNAs (miRs) are small single-stranded noncoding RNA that function in RNA silencing and post-transcriptional regulation of gene expression. An increasing number of studies have shown that miRs play an important role in tumorigenesis, and understanding the regulatory mechanism of miRs in this gene regulatory network will help elucidate the complex biological processes at play during malignancy. Despite advances, determination of miR–target interactions (MTIs) and identification of functional modules composed of miRs and their specific targets remain a challenge. A large amount of data generated by high-throughput methods from various sources are available to investigate MTIs. The development of data-driven tools to harness these multi-dimensional data has resulted in significant progress over the past decade. In parallel, large-scale cancer genomic projects are allowing new insights into the commonalities and disparities of miR–target regulation across cancers. In the first half of this review, we explore methods for identification of pairwise MTIs, and in the second half, we explore computational tools for discovery of miR-regulatory modules in a cancer-specific and pan-cancer context. We highlight strengths and limitations of each of these tools as a practical guide for the computational biologists. PMID:27721651

  14. Source Code Analysis Laboratory (SCALe)

    DTIC Science & Technology

    2012-04-01

    revenue. Among respondents to the IAAR survey, 86% of companies certified in quality management realized a positive return on investment (ROI). An... SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National... 17025:2005 accredited and ISO 9001:2008 registered. 4.3 SAIC Accreditation and Certification Services. SAIC (Science Applications International...

  15. Scale-PC shielding analysis sequences

    SciTech Connect

    Bowman, S.M.

    1996-05-01

    The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.

  16. POLARIZED LINE FORMATION IN MULTI-DIMENSIONAL MEDIA. V. EFFECTS OF ANGLE-DEPENDENT PARTIAL FREQUENCY REDISTRIBUTION

    SciTech Connect

    Anusha, L. S.; Nagendra, K. N.

    2012-02-10

    The solution of the polarized radiative transfer equation with angle-dependent (AD) partial frequency redistribution (PRD) is a challenging problem. Modeling the observed, linearly polarized strong resonance lines in the solar spectrum often requires the solution of the AD line transfer problems in one-dimensional or multi-dimensional (multi-D) geometries. The purpose of this paper is to develop an understanding of the relative importance of the AD PRD effects and the multi-D transfer effects and particularly their combined influence on the line polarization. This would help in a quantitative analysis of the second solar spectrum (the linearly polarized spectrum of the Sun). We consider both non-magnetic and magnetic media. In this paper we reduce the Stokes vector transfer equation to a simpler form using a Fourier decomposition technique for multi-D media. A fast numerical method is also devised to solve the concerned multi-D transfer problem. The numerical results are presented for a two-dimensional medium with a moderate optical thickness (effectively thin) and are computed for a collisionless frequency redistribution. We show that the AD PRD effects are significant and cannot be ignored in a quantitative fine analysis of the line polarization. These effects are accentuated by the finite dimensionality of the medium (multi-D transfer). The presence of magnetic fields (Hanle effect) modifies the impact of these two effects to a considerable extent.

  17. Scaling in ANOVA-simultaneous component analysis.

    PubMed

    Timmerman, Marieke E; Hoefsloot, Huub C J; Smilde, Age K; Ceulemans, Eva

    In omics research, high-dimensional data are often collected according to an experimental design. Typically, the manipulations involved yield differential effects on subsets of variables. An effective approach to identifying those effects is ANOVA-simultaneous component analysis (ASCA), which combines analysis of variance with principal component analysis. So far, pre-treatment in ASCA has received hardly any attention, whereas its effects can be huge. In this paper, we describe various strategies for scaling and identify a rational approach. We present the approaches in matrix algebra terms and illustrate them with an insightful simulated example. We show that scaling directly influences which data aspects are stressed in the analysis and hence become apparent in the solution. Therefore, the cornerstone of proper scaling is to use a scaling factor that is free from the effect of interest. This implies that proper scaling depends on the effect(s) of interest and that different types of scaling may be proper for the different effect matrices. We illustrate that different scaling approaches can greatly affect the ASCA interpretation with a real-life example from nutritional research. The principle that scaling factors should be free from the effect of interest generalizes to other statistical methods that involve scaling, such as classification methods.

  18. Ion-acoustic solitary waves and their multi-dimensional instability in a magnetized degenerate plasma

    SciTech Connect

    Haider, M. M.; Mamun, A. A.

    2012-10-15

    A rigorous theoretical investigation has been made of the Zakharov-Kuznetsov (ZK) equation of ion-acoustic (IA) solitary waves (SWs) and their multi-dimensional instability in a magnetized degenerate plasma which consists of inertialess electrons, inertial ions, and negatively and positively charged stationary heavy ions. The ZK equation is derived by the reductive perturbation method, and multi-dimensional instability of these solitary structures is also studied by the small-k (long wave-length plane wave) perturbation expansion technique. The effects of the external magnetic field are found to significantly modify the basic properties of small but finite-amplitude IA SWs. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable IA SWs. The basic features (viz., amplitude, width, instability, etc.) and the underlying physics of the IA SWs, which are relevant to space and laboratory plasma situations, are briefly discussed.

  19. Multi-dimensional Nanostructures for Microfluidic Screening of Biomarkers: From Molecular Separation to Cancer Cell Detection

    PubMed Central

    Ng, Elaine; Chen, Kaina; Hang, Annie; Syed, Abeer; Zhang, John X.J.

    2016-01-01

    Rapid screening of biomarkers, with high specificity and accuracy, is critical for many point-of-care diagnostics. Microfluidics, the use of microscale channels to manipulate small liquid samples and carry reactions in parallel, offers tremendous opportunities to address fundamental questions in biology and provide a fast growing set of clinical tools for medicine. Emerging multi-dimensional nanostructures, when coupled with microfluidics, enable effective and efficient screening with high specificity and sensitivity, both of which are important aspects of biological detection systems. In this review, we provide an overview of current research and technologies that utilize nanostructures to facilitate biological separation in microfluidic channels. Various important physical parameters and theoretical equations that characterize and govern flow in nanostructure-integrated microfluidic channels will be introduced and discussed. The application of multi-dimensional nanostructures, including nanoparticles, nanopillars, and nanoporous layers, integrated with microfluidic channels in molecular and cellular separation will also be reviewed. Finally, we will close with insights on the future of nanostructure-integrated microfluidic platforms and their role in biological and biomedical applications. PMID:26692080

  20. Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

    PubMed Central

    Li, Haoran; Xiong, Li; Jiang, Xiaoqian

    2014-01-01

    Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
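
    A toy, non-authoritative sketch of the Gaussian-copula path described above (the Laplace noise scale is a placeholder rather than the paper's calibrated sensitivity, and the empirical marginals are left non-private for brevity): perturb pairwise Kendall's τ, map it to a correlation matrix via ρ = sin(πτ/2), and sample synthetic records from the resulting copula.

      import numpy as np
      from scipy.stats import kendalltau, norm

      def toy_dp_copula_sample(X, noise_scale, n_samples, seed=0):
          """Toy DPCopula-style synthesizer; noise_scale is an uncalibrated placeholder."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          R = np.eye(d)
          for i in range(d):
              for j in range(i + 1, d):
                  tau, _ = kendalltau(X[:, i], X[:, j])
                  tau_noisy = np.clip(tau + rng.laplace(scale=noise_scale), -1.0, 1.0)
                  R[i, j] = R[j, i] = np.sin(np.pi * tau_noisy / 2)   # tau -> Pearson rho
          w, V = np.linalg.eigh(R)                     # repair possible indefiniteness from noise
          R = V @ np.diag(np.clip(w, 1e-6, None)) @ V.T
          D = np.sqrt(np.diag(R))
          R = R / np.outer(D, D)
          Z = rng.multivariate_normal(np.zeros(d), R, size=n_samples)
          U = norm.cdf(Z)                              # Gaussian copula samples in [0, 1]^d
          return np.column_stack([np.quantile(X[:, k], U[:, k]) for k in range(d)])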

  1. High-frequency stock linkage and multi-dimensional stationary processes

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Bao, Si; Chen, Jingchao

    2017-02-01

    In recent years, China's stock market has experienced dramatic fluctuations; in particular, in the second half of 2014 and 2015, the market rose sharply and fell quickly. Many classical financial phenomena, such as stock plate linkage, appeared repeatedly during this period. In general, these phenomena have usually been studied using daily-level data or minute-level data. Our paper focuses on the linkage phenomenon in Chinese stock 5-second-level data during this extremely volatile period. The method used to select the linkage points and the arbitrage strategy are both based on multi-dimensional stationary processes. A new program method for testing the multi-dimensional stationary process is proposed in our paper, and the detailed program is presented in the paper's appendix. Because of the existence of the stationary process, the strategy's logarithmic cumulative average return will converge under the condition of the strong ergodic theorem, and this ensures the effectiveness of the stocks' linkage points and the more stable statistical arbitrage strategy.
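
    The stationarity test itself is the authors' own program (given in their appendix); as a rough proxy only, not their method, one can at least screen each component of a candidate linkage spread with an augmented Dickey-Fuller test:

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def screen_stationarity(spreads, alpha=0.05):
          """Component-wise ADF screen for a (T x d) array of candidate linkage spreads.

          Rejecting a unit root in every component is only a crude necessary check,
          not the joint multi-dimensional test proposed in the paper.
          """
          verdicts = {}
          for k in range(spreads.shape[1]):
              stat, pvalue, *_ = adfuller(spreads[:, k], autolag="AIC")
              verdicts[k] = (stat, pvalue, pvalue < alpha)
          return verdicts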

  2. Multi-Dimensional Nanostructures for Microfluidic Screening of Biomarkers: From Molecular Separation to Cancer Cell Detection.

    PubMed

    Ng, Elaine; Chen, Kaina; Hang, Annie; Syed, Abeer; Zhang, John X J

    2016-04-01

    Rapid screening of biomarkers, with high specificity and accuracy, is critical for many point-of-care diagnostics. Microfluidics, the use of microscale channels to manipulate small liquid samples and carry reactions in parallel, offers tremendous opportunities to address fundamental questions in biology and provide a fast growing set of clinical tools for medicine. Emerging multi-dimensional nanostructures, when coupled with microfluidics, enable effective and efficient screening with high specificity and sensitivity, both of which are important aspects of biological detection systems. In this review, we provide an overview of current research and technologies that utilize nanostructures to facilitate biological separation in microfluidic channels. Various important physical parameters and theoretical equations that characterize and govern flow in nanostructure-integrated microfluidic channels will be introduced and discussed. The application of multi-dimensional nanostructures, including nanoparticles, nanopillars, and nanoporous layers, integrated with microfluidic channels in molecular and cellular separation will also be reviewed. Finally, we will close with insights on the future of nanostructure-integrated microfluidic platforms and their role in biological and biomedical applications.

  3. A lock-free priority queue design based on multi-dimensional linked lists

    SciTech Connect

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need of global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(logN) for key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes and the ordering property of the data structure is readily maintained during insertion without rebalancing nor randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over the state of the art approaches under high concurrency.
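
    A minimal sketch of the key-to-coordinate step that underlies the MDList (the lock-free node and linking machinery of the paper is omitted; function names are ours): a scalar key from a universe of size N is written in base ⌈N^(1/D)⌉ so that its D digits become coordinates, and ordering nodes by coordinate prefix preserves key order.

      import math

      def key_to_vector(key, universe_size, dims):
          """Injectively map a scalar key to a `dims`-dimensional coordinate vector."""
          base = math.ceil(universe_size ** (1.0 / dims))
          coords = []
          for _ in range(dims):
              coords.append(key % base)
              key //= base
          return tuple(reversed(coords))   # most significant digit first

      # ordering by coordinate prefix reproduces the ordering of the original keys
      keys = [5, 17, 42, 99]
      assert sorted(keys) == sorted(keys, key=lambda k: key_to_vector(k, 128, 3))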

  4. Multi-dimensional self-esteem and magnitude of change in the treatment of anorexia nervosa.

    PubMed

    Collin, Paula; Karatzias, Thanos; Power, Kevin; Howard, Ruth; Grierson, David; Yellowlees, Alex

    2016-03-30

    Self-esteem improvement is one of the main targets of inpatient eating disorder programmes. The present study sought to examine multi-dimensional self-esteem and magnitude of change in eating psychopathology among adults participating in a specialist inpatient treatment programme for anorexia nervosa. A standardised assessment battery, including multi-dimensional measures of eating psychopathology and self-esteem, was completed pre- and post-treatment for 60 participants (all white Scottish females, mean age=25.63 years). Statistical analyses indicated that self-esteem improved with eating psychopathology and weight over the course of treatment, but that improvements were domain-specific and small in size. Global self-esteem was not predictive of treatment outcome. Dimensions of self-esteem at baseline (Lovability and Moral Self-approval), however, were predictive of magnitude of change in dimensions of eating psychopathology (Shape and Weight Concern). Magnitudes of change in the Self-Control and Lovability dimensions were predictive of the magnitude of change in eating psychopathology (Global, Dietary Restraint, and Shape Concern). The results of this study demonstrate that the relationship between self-esteem and eating disorder is far from straightforward, and suggest that future research and interventions should focus less exclusively on self-esteem as a uni-dimensional psychological construct.

  5. A lock-free priority queue design based on multi-dimensional linked lists

    DOE PAGES

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need of global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(logN) for key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes and the ordering property of the data structure is readily maintained during insertion without rebalancing nor randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over the state of the art approaches under high concurrency.

  6. Multi-dimensional NMR without coherence transfer: Minimizing losses in large systems

    NASA Astrophysics Data System (ADS)

    Liu, Yizhou; Prestegard, James H.

    2011-10-01

    Most multi-dimensional solution NMR experiments connect one dimension to another using coherence transfer steps that involve evolution under scalar couplings. While experiments of this type have been a boon to biomolecular NMR, the need to work on ever larger systems pushes the limits of these procedures. Spin relaxation during transfer periods for even the most efficient 15N-1H HSQC experiments can result in more than an order of magnitude loss in sensitivity for molecules in the 100 kDa range. A relatively unexploited approach to preventing signal loss is to avoid coherence transfer steps entirely. Here we describe a scheme for multi-dimensional NMR spectroscopy that relies on direct frequency encoding of a second dimension by multi-frequency decoupling during acquisition, a technique that we call MD-DIRECT. A substantial improvement in sensitivity of 15N-1H correlation spectra is illustrated with application to the 21 kDa ADP ribosylation factor (ARF) labeled with 15N in all alanine residues. Operation at 4 °C mimics observation of a 50 kDa protein at 35 °C.

  7. Visualization of Multi-dimensional MISR Datasets Using Self-Organizing Map

    NASA Astrophysics Data System (ADS)

    Li, P.; Jacob, J.; Braverman, A.; Block, G.

    2003-12-01

    Many techniques exist for visualization of high dimensional datasets including Parallel Coordinates, Projection Pursuit, and Self-Organizing Map (SOM), but none of these are particularly well suited to satellite data. Remote sensing datasets are typically highly multivariate, but also have spatial structure. In analyzing such data, it is critical to maintain the spatial context within which multivariate relationships exist. Only then can we begin to investigate how those relationships change spatially, and connect observed phenomena to physical processes that may explain them. We present an analysis and visualization system called SOM_VIS that applies an enhanced SOM algorithm proposed by Todd & Kirby [1] to multi-dimensional image datasets in a way that maintains spatial context. We first use SOM to project high-dimensional data into a non-uniform 3D lattice structure. The lattice structure is then mapped to a color space to serve as a colormap for the image. The Voronoi cell refinement algorithm is then used to map the SOM lattice structure to various levels of color resolution. The final result is a false color image with similar colors representing similar characteristics across all its data dimensions. We demonstrate this system using data from JPL's Multi-angle Imaging Spectro-Radiometer (MISR), which looks at Earth and its atmosphere in 36 channels: all combinations of four spectral bands and nine view angles. The SOM_VIS tool consists of a data control panel for users to select a subset from MISR's Level 1B Radiance data products, and a training control panel for users to choose various parameters for SOM training. These include the size of the SOM lattice, the method used to modify the control vectors towards the input training vector, convergence rate, and number of Voronoi regions. Also, the SOM_VIS system contains a multi-window display system allowing users to view false color SOM images and the corresponding color maps for trained SOM lattices. In
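
    As a hedged illustration of the core idea only (not the SOM_VIS implementation, and without the Todd & Kirby enhancement or the Voronoi cell refinement), the sketch below trains a small 3D SOM lattice on multiband pixel vectors and colours each pixel by the lattice coordinates of its best-matching unit:

      import numpy as np

      def train_som3d(pixels, lattice=(4, 4, 4), epochs=5, lr0=0.5, sigma0=1.5, seed=0):
          """Minimal 3D SOM; `pixels` is an (n_pixels x n_bands) array."""
          rng = np.random.default_rng(seed)
          grid = np.array(np.unravel_index(np.arange(np.prod(lattice)), lattice)).T.astype(float)
          W = rng.normal(size=(grid.shape[0], pixels.shape[1]))
          steps = epochs * len(pixels)
          for t in range(steps):
              x = pixels[rng.integers(len(pixels))]
              bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
              frac = t / steps
              lr, sigma = lr0 * (1.0 - frac), sigma0 * (1.0 - frac) + 1e-3
              h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
              W += lr * h[:, None] * (x - W)                       # pull the neighbourhood toward x
          return W, grid

      def som_false_colour(image, W, grid, lattice=(4, 4, 4)):
          """Colour each pixel by its BMU's normalised lattice coordinates (RGB in [0, 1])."""
          flat = image.reshape(-1, image.shape[-1])
          bmus = ((flat[:, None, :] - W[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
          rgb = grid[bmus] / (np.array(lattice) - 1.0)
          return rgb.reshape(image.shape[0], image.shape[1], 3)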

  8. Stride Segmentation during Free Walk Movements Using Multi-Dimensional Subsequence Dynamic Time Warping on Inertial Sensor Data

    PubMed Central

    Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M.

    2015-01-01

    Changes in gait patterns provide important information about individuals’ health. To perform sensor-based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video-monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measures of 98% (recall 98%, precision 98%) for the 40 m walk tests and 97% (recall 97%, precision 97%) for the free walk tests were obtained for the three groups. Compared to conventional peak detection methods, an F-measure improvement of up to 15% was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
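
    A hedged sketch (not the authors' exact implementation) of the subsequence-DTW step: the accumulated-cost matrix is given a free starting column so that the stride template may begin anywhere in the recording, and candidate stride boundaries are read off as local minima of the cost along the last row.

      import numpy as np

      def subsequence_dtw_cost(template, signal):
          """Multi-dimensional subsequence DTW.

          template : (m x d) stride template, signal : (n x d) continuous recording.
          Returns the minimal accumulated cost for every possible end index in `signal`.
          """
          m, n = len(template), len(signal)
          dist = np.linalg.norm(template[:, None, :] - signal[None, :, :], axis=2)
          D = np.full((m, n), np.inf)
          D[0, :] = dist[0, :]                 # free start: a match may begin at any sample
          for i in range(1, m):
              D[i, 0] = D[i - 1, 0] + dist[i, 0]
              for j in range(1, n):
                  D[i, j] = dist[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[-1, :]                      # low values mark candidate stride end points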

  9. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    PubMed

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, obtained from a coupled electromagnetic circuits approach in which air-gap eccentricity emulates bearing faults. Then, experimental data are used for validation purposes.
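
    A hedged single-channel sketch of the MUSIC step behind the detector (the paper's multi-dimensional formulation, amplitude estimator and fault indicator are not reproduced; parameter values are illustrative): form a covariance matrix from overlapping snapshots of the stator current, split its eigenvectors into signal and noise subspaces, and scan candidate frequencies for pseudospectrum peaks near the bearing fault characteristic frequencies.

      import numpy as np

      def music_pseudospectrum(x, fs, freqs, n_sub=64, n_sines=4):
          """1D MUSIC pseudospectrum of a real signal x sampled at fs Hz."""
          snaps = np.array([x[i:i + n_sub] for i in range(len(x) - n_sub)])
          R = snaps.T @ snaps / len(snaps)              # sample covariance matrix
          w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
          En = V[:, : n_sub - 2 * n_sines]              # noise subspace (2 vectors per real sinusoid)
          k = np.arange(n_sub)
          P = np.empty(len(freqs))
          for m, f in enumerate(freqs):
              a = np.exp(2j * np.pi * f * k / fs)       # steering vector at frequency f
              P[m] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
          return P                                      # peaks indicate sinusoidal components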

  10. Using Multidimensional Scaling for Curricular Goal Analysis.

    ERIC Educational Resources Information Center

    Leitzman, David F.; And Others

    1980-01-01

    Reports research that utilized multidimensional scaling and related analytic procedures to validate the curricular goals of a graduate therapeutic recreation program. Data analysis includes the use of the two-dimensional KYST and PREFMAP spaces. (Author/JD)

  11. Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, Surendra N.

    1994-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, the approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. Radiative interactions are investigated in

  12. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    PubMed

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
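
    As a hedged illustration of the method's mechanics for the simplest heat-like case (notation ours, not the paper's worked examples), FRDTM expands the solution in powers of $t^\alpha$ and converts the Caputo time derivative into a recursion on the expansion coefficients:

      $$u(\mathbf{x},t)=\sum_{k=0}^{\infty} U_k(\mathbf{x})\,t^{k\alpha},\qquad
      D_t^{\alpha}u=\nabla^{2}u \;\Longrightarrow\;
      U_{k+1}(\mathbf{x})=\frac{\Gamma(k\alpha+1)}{\Gamma((k+1)\alpha+1)}\,\nabla^{2}U_k(\mathbf{x}),
      \qquad U_0(\mathbf{x})=u(\mathbf{x},0).$$

    Truncating the series after a few terms gives the approximate solution; setting α = 1 recovers the classical heat-equation series.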

  13. Multi-dimensional instability of dust-acoustic solitary waves in a magnetized plasma with opposite polarity dust

    SciTech Connect

    Akhter, T.; Hossain, M. M.; Mamun, A. A.

    2012-09-15

    Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.

  14. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM

    PubMed Central

    Singh, Brajesh K.; Srivastava, Vineet K.

    2015-01-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639

  15. Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping

    PubMed Central

    Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.

    2016-01-01

    Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane. PMID:27929085

  16. Racial-ethnic self-schemas: Multi-dimensional identity-based motivation

    PubMed Central

    Oyserman, Daphna

    2008-01-01

    Prior self-schema research focuses on benefits of being schematic vs. aschematic in stereotyped domains. The current studies build on this work, examining racial-ethnic self-schemas as multi-dimensional, containing multiple, conflicting, and non-integrated images. A multidimensional perspective captures complexity; examining net effects of dimensions predicts within-group differences in academic engagement and well-being. When racial-ethnicity self-schemas focus attention on membership in both in-group and broader society, engagement with school should increase since school is not seen as out-group defining. When racial-ethnicity self-schemas focus attention on inclusion (not obstacles to inclusion) in broader society, risk of depressive symptoms should decrease. Support for these hypotheses was found in two separate samples (8th graders, n = 213, 9th graders followed to 12th grade n = 141). PMID:19122837

  17. Multi-dimensional coherent optical spectroscopy of semiconductor nanostructures: Collinear and non-collinear approaches

    SciTech Connect

    Nardin, Gaël; Li, Hebin; Autry, Travis M.; Moody, Galan; Singh, Rohan; Cundiff, Steven T.

    2015-03-21

    We review our recent work on multi-dimensional coherent optical spectroscopy (MDCS) of semiconductor nanostructures. Two approaches, appropriate for the study of semiconductor materials, are presented and compared. A first method is based on a non-collinear geometry, where the Four-Wave-Mixing (FWM) signal is detected in the form of a radiated optical field. This approach works for samples with translational symmetry, such as Quantum Wells (QWs) or large and dense ensembles of Quantum Dots (QDs). A second method detects the FWM in the form of a photocurrent in a collinear geometry. This second approach extends the horizon of MDCS to sub-diffraction nanostructures, such as single QDs, nanowires, or nanotubes, and small ensembles thereof. Examples of experimental results obtained on semiconductor QW structures are given for each method. In particular, it is shown how MDCS can assess coupling between excitons confined in separated QWs.

  18. High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.

  19. Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests

    SciTech Connect

    Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L

    2001-05-31

    The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 μs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosives discs and a 0.60 kb peak shock in a LX-04 disk. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.

  20. Boussinesq-like multi-component lattice equations and multi-dimensional consistency

    NASA Astrophysics Data System (ADS)

    Hietarinta, Jarmo

    2011-04-01

    Various classes of one-component lattice equations, defined by a multi-linear relation between values at the vertices of an elementary square, have recently been classified using the requirement of multi-dimensional consistency (consistency-around-the-cube, CAC). Here we consider multi-component equations, with some equations defined on the edges of the consistency cube and others on the faces of the cube. Some examples of this type are already known, including the lattice-modified Boussinesq equation (lmBSQ). We classify the edge equations into three canonical forms and derive the consequences of their CAC-property. This restricts the form of the face equation sufficiently so that its CAC-property can be analyzed. As a result we obtain a number of integrable multi-component lattice equations, some generalizing lmBSQ.

  1. Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping

    NASA Astrophysics Data System (ADS)

    Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.

    2016-12-01

    Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied both to super-resolve the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases, and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.

  2. A dynamic nuclear polarization strategy for multi-dimensional Earth's field NMR spectroscopy.

    PubMed

    Halse, Meghan E; Callaghan, Paul T

    2008-12-01

    Dynamic nuclear polarization (DNP) is introduced as a powerful tool for polarization enhancement in multi-dimensional Earth's field NMR spectroscopy. Maximum polarization enhancements, relative to thermal equilibrium in the Earth's magnetic field, are calculated theoretically and compared to the more traditional prepolarization approach for NMR sensitivity enhancement at ultra-low fields. Signal enhancement factors on the order of 3000 are demonstrated experimentally using DNP with a nitroxide free radical, TEMPO, which contains an unpaired electron which is strongly coupled to a neighboring (14)N nucleus via the hyperfine interaction. A high-quality 2D (19)F-(1)H COSY spectrum acquired in the Earth's magnetic field with DNP enhancement is presented and compared to simulation.

  3. Sequential acquisition of multi-dimensional heteronuclear chemical shift correlation spectra with 1H detection

    NASA Astrophysics Data System (ADS)

    Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai

    2014-03-01

    RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of sequential 1H data acquisition procedure in the study of RNA is also demonstrated here.

  4. Sequential acquisition of multi-dimensional heteronuclear chemical shift correlation spectra with 1H detection

    PubMed Central

    Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai

    2014-01-01

    RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of sequential 1H data acquisition procedure in the study of RNA is also demonstrated here. PMID:24671105

  5. Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae

    SciTech Connect

    Tanaka, Masaomi; Maeda, Keiichi; Mazzali, Paolo A.; Nomoto, Ken'ichi

    2008-05-21

    We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.

  6. Deadlock-free class routes for collective communications embedded in a multi-dimensional torus network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip

    2013-01-29

    A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
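
    A hedged sketch (hypothetical node representation, not the patented implementation) of the routing rule the claim describes: advance the packet one dimension at a time toward the root's coordinate, moving on to the next dimension only once the current coordinate matches, with torus wrap-around used to take the shorter direction.

      def route_to_root(node, root, sizes):
          """Yield hop-by-hop coordinates from `node` to `root` on a multi-dimensional torus."""
          cur = list(node)
          for d in range(len(cur)):                       # finish one dimension before the next
              while cur[d] != root[d]:
                  forward = (root[d] - cur[d]) % sizes[d]
                  step = 1 if forward <= sizes[d] - forward else -1
                  cur[d] = (cur[d] + step) % sizes[d]     # torus wrap-around
                  yield tuple(cur)

      # usage: hops from node (2, 0, 3) to the root (0, 1, 3) on a 4 x 4 x 4 torus
      hops = list(route_to_root((2, 0, 3), (0, 1, 3), (4, 4, 4)))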

  7. MULTI-DIMENSIONAL MASS SPECTROMETRY-BASED SHOTGUN LIPIDOMICS AND NOVEL STRATEGIES FOR LIPIDOMIC ANALYSES

    PubMed Central

    Han, Xianlin; Yang, Kui; Gross, Richard W.

    2011-01-01

    Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525

  8. Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Tanaka, Masaomi; Maeda, Keiichi; Mazzali, Paolo A.; Nomoto, Ken'ichi

    2008-05-01

    We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (<~40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.

  9. Generation and entanglement of multi-dimensional multi-mode coherent fields in cavity QED

    NASA Astrophysics Data System (ADS)

    Maleki, Y.

    2016-11-01

    We introduce generalized multi-mode superposition of multi-dimensional coherent field states and propose a generation scheme of such states in a cavity QED scenario. An appropriate encoding of information on these states is employed, which maps the states to the Hilbert space of some multi-qudit states. The entanglement of these states is characterized based on such proper encodings. A detailed study of entanglement in general multi-qudit coherent states is presented, and in addition to establishing some explicit expressions for quantifying entanglement of such systems, several important features of entanglement in these system states are exposed. Furthermore, the effects of both cavity decay and channel noise on these system states are studied and their properties are illustrated.

  10. Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport

    NASA Astrophysics Data System (ADS)

    Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun

    2015-02-01

    We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.

  11. Predicting respiratory tumor motion with multi-dimensional adaptive filters and support vector regression.

    PubMed

    Riaz, Nadeem; Shanker, Piyush; Wiersma, Rodney; Gudmundsson, Olafur; Mao, Weihua; Widrow, Bernard; Xing, Lei

    2009-10-07

    Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
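
    A hedged sketch of the support-vector-regression predictor (window length and hyper-parameters are illustrative, not the study's settings): recent samples of one axis of the 30 Hz tumor trace form the feature vector, and the training target is the position one latency interval ahead.

      import numpy as np
      from sklearn.svm import SVR

      def fit_latency_predictor(trace, fs=30.0, latency_s=0.4, window=15):
          """Train an SVR to predict the tumor position `latency_s` seconds ahead.

          trace : 1D array of tumor positions (one axis) sampled at fs Hz.
          """
          lead = int(round(latency_s * fs))
          X = np.array([trace[i - window:i] for i in range(window, len(trace) - lead)])
          y = trace[window + lead:]
          model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
          return model

      # prediction from the most recent `window` samples:
      #   model.predict(trace[-window:].reshape(1, -1))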

  12. Multi-dimensional validation of a maximum-entropy-based interpolative moment closure

    NASA Astrophysics Data System (ADS)

    Tensuda, Boone R.; McDonald, James G.; Groth, Clinton P. T.

    2016-11-01

    The performance of a novel maximum-entropy-based 14-moment interpolative closure is examined for multi-dimensional flows via validation of the closure for several established benchmark problems. Despite its consideration of heat transfer, this 14-moment closure contains closed-form expressions for the closing fluxes, unlike the maximum-entropy models on which it is based. While still retaining singular behaviour in some regions of realizable moment space, the interpolative closure proves to have a large region of hyperbolicity while remaining computationally tractable. Furthermore, the singular nature has been shown to be advantageous for practical simulations. The multi-dimensional cases considered here include Couette flow, heat transfer between infinite parallel plates, subsonic flow past a circular cylinder, and lid-driven cavity flow. The 14-moment predictions are compared to analytical, DSMC, and experimental results as well as the results of other closures. For each case, a range of Knudsen numbers is explored in order to assess the validity and accuracy of the closure in different regimes. For Couette flow and heat transfer between flat plates, it is shown that the closure predictions are consistent with the expected analytical solutions in all regimes. In the cases of flow past a circular cylinder and lid-driven cavity flow, the closure is found to give more accurate results than the related lower-order maximum-entropy Gaussian and maximum-entropy-based regularized Gaussian closures. The ability to predict important non-equilibrium phenomena, such as a counter-gradient heat flux, is also established.

  13. TWO-DIMENSIONAL CORE-COLLAPSE SUPERNOVA MODELS WITH MULTI-DIMENSIONAL TRANSPORT

    SciTech Connect

    Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun

    2015-02-10

    We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.

  14. Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.

    SciTech Connect

    Evans, Gregory Herbert; Chen, Ken Shuang

    2004-06-01

    This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electro-chemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion were solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.

  15. Incorporating scale into digital terrain analysis

    NASA Astrophysics Data System (ADS)

    Dragut, L. D.; Eisank, C.; Strasser, T.

    2009-04-01

    Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood size. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be further retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the poorest correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within a 55 X 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative
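
    A hedged sketch of the workflow described above (raster names are hypothetical; the original uses focal statistics in a GIS rather than this Python proxy): smooth the terrain derivative with focal means of increasing window size, correlate each level with the response raster, and keep the scale at the first correlation peak.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from scipy.stats import pearsonr

      def first_peak_scale(derivative, response, window_sizes):
          """Return (window, |r|) at the first local maximum of correlation across scales.

          derivative, response : 2D rasters on the same grid (e.g. mean curvature, crop yield).
          """
          rs = []
          for w in window_sizes:
              smoothed = uniform_filter(derivative, size=w)     # focal mean at this scale
              r, _ = pearsonr(smoothed.ravel(), response.ravel())
              rs.append(abs(r))
          for k in range(1, len(rs) - 1):
              if rs[k] >= rs[k - 1] and rs[k] >= rs[k + 1]:     # first peak of correlation
                  return window_sizes[k], rs[k]
          return window_sizes[int(np.argmax(rs))], max(rs)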

  16. MAI (Multi-Dimensional Activity Based Integrated Approach): A Strategy for Cognitive Development of the Learners at the Elementary Stage

    ERIC Educational Resources Information Center

    Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu

    2012-01-01

    Cognitive development of the learners is the prime task of each and every stage of our school education, and its importance especially at the elementary stage is quite worth mentioning. The present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…

  17. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  18. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    NASA Astrophysics Data System (ADS)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in
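
    A hedged sketch of the two-asset Kirk (1995) approximation that the thesis generalizes (the multi-asset basket-spread extension and the moving-boundary solutions are not reproduced here): the short asset plus the strike is treated as a single lognormal asset, which reduces the spread option to a Black-type exchange formula.

      from math import exp, log, sqrt
      from statistics import NormalDist

      def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
          """Kirk approximation for a European call on the spread F1 - F2 with strike K."""
          N = NormalDist().cdf
          a = F2 / (F2 + K)                                  # weight of the second (short) asset
          sigma = sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * a + (sigma2 * a) ** 2)
          d1 = (log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
          d2 = d1 - sigma * sqrt(T)
          return exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

      # sanity check: with F2 = 0 the formula collapses to the Black (1976) call price
      price = kirk_spread_call(F1=110.0, F2=100.0, K=5.0, sigma1=0.30, sigma2=0.25, rho=0.6, T=1.0, r=0.02)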

  19. Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport

    NASA Astrophysics Data System (ADS)

    Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl

    We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.

  20. Effect of a multi-dimensional intervention programme on the motivation of physical education students.

    PubMed

    Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás

    2014-01-01

    This study aims to verify the effect of a multi-dimensional programme, delivered in dance teaching sessions, on the motivation of physical education students. The programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th-year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. Initial and final measurements were taken in both groups, and the results revealed that the students from the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing the students' motivation towards this content, which teachers in this area find difficult to develop.

  1. Software Defined Networking (SDN) controlled all optical switching networks with multi-dimensional switching architecture

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng

    2014-08-01

    With the fast development of data center networks, ultrahigh throughput capacity requirements are challenging current optical switching nodes. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architecture. How to control the future network and node equipment together will become a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, which consists of Provider NOX (P-NOX) and Node NOX (N-NOX). With the cooperation of P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.

  2. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
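
    For context, a commonly quoted form of the density-gradient quantum correction augments the electron equation of state with a term proportional to the second derivative of the square root of the density,

        2 b_n \frac{\nabla^2 \sqrt{n}}{\sqrt{n}}, \qquad b_n = \frac{\hbar^2}{12\, q\, m_n^* r},

    where n is the electron density, m_n^* the effective mass and r a dimensionless fit factor of order unity. Conventions, sign and the exact prefactor vary between formulations, so this should be read as indicative rather than as the specific formulation used in this work.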

  3. Efficient multi-dimensional solution of PDEs using Chebyshev spectral methods

    NASA Astrophysics Data System (ADS)

    Julien, Keith; Watson, Mike

    2009-03-01

    A robust methodology is presented for efficiently solving partial differential equations using Chebyshev spectral techniques. It is well known that differential equations in one dimension can be solved efficiently with Chebyshev discretizations, requiring O(N) operations for N unknowns; however, this efficiency is lost in higher dimensions due to the coupling between modes. This paper presents the "quasi-inverse" technique (QIT), which combines optimizations of one-dimensional spectral differentiation matrices with Kronecker matrix products to build efficient multi-dimensional operators. This strategy results in O(N^(2D-1)) operations for N^D unknowns, independent of the form of the differential operators. QIT is compared to the matrix diagonalization technique (MDT) of Haidvogel and Zang [D.B. Haidvogel, T. Zang, The accurate solution of Poisson's equation by expansion in Chebyshev polynomials, J. Comput. Phys. 30 (1979) 167-180] and Shen [J. Shen, Efficient spectral-Galerkin method. II. Direct solvers of second- and fourth-order equations using Chebyshev polynomials, SIAM J. Sci. Comp. 16 (1) (1995) 74-87]. While the costs of MDT and QIT are the same in two dimensions, there are significant differences. MDT utilizes an eigenvalue/eigenvector decomposition and can only be used for relatively simple differential equations. QIT is based upon intrinsic properties of the Chebyshev polynomials and is adaptable to linear PDEs with constant coefficients in simple domains. We present results for a standard suite of test problems and discuss the adaptability of QIT to more complicated problems.
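
    The Kronecker-product idea at the heart of this strategy can be illustrated with the standard Chebyshev collocation matrices. The sketch below builds a 2-D Laplacian from 1-D differentiation matrices; the paper's quasi-inverse operators are more refined, so this is only a minimal illustration of the construction.

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev collocation points and first-derivative matrix (Trefethen's cheb)."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    N = 16
    D, x = cheb(N)
    D2 = D @ D                          # 1-D second-derivative matrix
    I = np.eye(N + 1)
    # 2-D Laplacian on the tensor-product grid via Kronecker products
    L = np.kron(I, D2) + np.kron(D2, I)
    ```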

  4. TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients.

    PubMed

    Loorak, Mona Hosseinkhani; Perin, Charles; Kamal, Noreen; Hill, Michael; Carpendale, Sheelagh

    2016-01-01

    We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency in stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time, ultimately to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks which a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan combines stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan.

  5. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  6. Multi-dimensional evaluation and ranking of coastal areas using GIS and multiple criteria choice methods.

    PubMed

    Kitsiou, Dimitra; Coccossis, Harry; Karydis, Michael

    2002-02-04

    Coastal ecosystems are increasingly threatened by short-sighted management policies that focus on human activities rather than the systems that sustain them. The early assessment of the impacts of human activities on the quality of the environment in coastal areas is important for decision-making, particularly in cases of environment/development conflicts, such as environmental degradation and saturation in tourist areas. In the present study, a methodology was developed for the multi-dimensional evaluation and ranking of coastal areas using a set of criteria and based on the combination of multiple criteria choice methods and Geographical Information Systems (GIS). The northeastern part of the island of Rhodes in the Aegean Sea, Greece was the case study area. The area was divided into sub-areas, which were ranked according to socio-economic and environmental parameters. The robustness of the proposed methodology was assessed using different configurations of the initial criteria and reapplication of the process. The advantages and disadvantages of this methodology, as well as its usefulness for comparing the status of coastal areas and evaluating their potential for further development based on various criteria, are also discussed.
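
    The specific multiple criteria choice methods used in the study are not reproduced here; purely as a generic illustration of ranking sub-areas on mixed benefit/cost criteria, a normalized weighted-sum score might look like the following sketch (the criteria, values and weights are hypothetical).

    ```python
    import numpy as np

    # rows = sub-areas, columns = criteria (hypothetical values)
    scores = np.array([
        [0.8, 0.3, 120.0],   # e.g. water quality, tourist pressure, population density
        [0.6, 0.7,  80.0],
        [0.9, 0.5, 200.0],
    ])
    benefit = np.array([True, False, False])   # True: higher is better, False: lower is better
    weights = np.array([0.5, 0.3, 0.2])

    # min-max normalization to [0, 1], flipping cost criteria
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    norm = (scores - lo) / (hi - lo)
    norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

    ranking = np.argsort(-(norm @ weights))    # index of the best sub-area first
    print(ranking)
    ```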

  7. Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

    2000-01-01

    We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

  8. Hierarchical multi-dimensional limiting strategy for correction procedure via reconstruction

    NASA Astrophysics Data System (ADS)

    Park, Jin Seok; Kim, Chongam

    2016-03-01

    The hierarchical multi-dimensional limiting process (MLP) is improved and extended for flux reconstruction or correction procedure via reconstruction (FR/CPR) on unstructured grids. MLP was originally developed in the finite volume method (FVM), and it provides an accurate, robust and efficient oscillation-control mechanism in multiple dimensions for linear reconstruction. This limiting philosophy can be hierarchically extended into higher-order Pn approximation or reconstruction. The resulting algorithm, referred to as hierarchical MLP, facilitates detailed capture of flow structures while maintaining formal order of accuracy in smooth regions and providing accurate non-oscillatory solutions across discontinuous regions. This algorithm was developed within the modal DG framework, but it can also be formulated into a nodal framework, most notably the FR/CPR framework. Troubled cells are detected by applying the MLP concept, and the final accuracy is determined by a projection procedure and the hierarchical MLP limiting step. Extensive numerical analyses and computations, ranging from two-dimensional to three-dimensional fluid systems, demonstrate that the proposed limiting approach yields outstanding performance in capturing compressible inviscid and viscous flow features.
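
    The vertex-based bound at the core of the MLP philosophy can be stated compactly (this is the basic MLP condition as commonly written, not the full hierarchical FR/CPR algorithm): the limited solution value at a vertex v must satisfy

        \min_{j \in \mathcal{N}(v)} \bar{u}_j \;\le\; u_v \;\le\; \max_{j \in \mathcal{N}(v)} \bar{u}_j,

    where \mathcal{N}(v) is the set of cells sharing vertex v and \bar{u}_j are their cell averages.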

  9. Beyond the continuum: a multi-dimensional phase space for neutral–niche community assembly

    PubMed Central

    Latombe, Guillaume; McGeoch, Melodie A.

    2015-01-01

    Neutral and niche processes are generally considered to interact in natural communities along a continuum, exhibiting community patterns bounded by pure neutral and pure niche processes. The continuum concept uses niche separation, an attribute of the community, to test the hypothesis that communities are bounded by pure niche or pure neutral conditions. It does not accommodate interactions via feedback between processes and the environment. By contrast, we introduce the Community Assembly Phase Space (CAPS), a multi-dimensional space that uses community processes (such as dispersal and niche selection) to define the limiting neutral and niche conditions and to test the continuum hypothesis. We compare the outputs of modelled communities in a heterogeneous landscape, assembled by pure neutral, pure niche and composite processes. Differences in patterns under different combinations of processes in CAPS reveal hidden complexity in neutral–niche community dynamics. The neutral–niche continuum only holds for strong dispersal limitation and niche separation. For weaker dispersal limitation and niche separation, neutral and niche processes amplify each other via feedback with the environment. This generates patterns that lie well beyond those predicted by a continuum. Inferences drawn from patterns about community assembly processes can therefore be misguided when based on the continuum perspective. CAPS also demonstrates the complementary information value of different patterns for inferring community processes and captures the complexity of community assembly. It provides a general tool for studying the processes structuring communities and can be applied to address a range of questions in community and metacommunity ecology. PMID:26702047

  10. Opportunities in multi dimensional trace metal imaging: Taking copper associated disease research to the next level

    PubMed Central

    Vogt, Stefan; Ralle, Martina

    2012-01-01

    Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer's disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis to better understand disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications in X-ray microscopic methodologies. PMID:23079951

  11. Off-Center Thawed Gaussian Multi-Dimensional Approximation for Semiclassical Propagation

    NASA Astrophysics Data System (ADS)

    Kocia, Lucas; Heller, Eric

    2014-03-01

    The Off-Center Thawed Gaussian Approximation's (OCTGA) performance in multi-dimensional coupled systems is shown in comparison to Herman-Kluk (HK), the current workhorse of semiclassical propagation in the field. As with the Heller-Huber method and Van Voorhis et al.'s nearly-real method of trajectories, OCTGA requires only a single trajectory and associated stability matrix at every timestep to compute Gaussian wave packet overlaps under any Hamiltonian. This is in sharp contrast to HK which suffers from the necessity of having to propagate thousands or more computationally expensive stability matrices at every timestep. Unlike similar methods, the OCTGA relies upon a single real guiding trajectory, which in general does not start at the center of the initial wave packet. This guiding ``off-center'' trajectory is used to expand the local potential, controlling the propagating ``thawed'' Gaussian wavepacket such that it is led to optimal overlap with a final state. Its simple and efficient performance in any number of dimensions heralds an exciting addition to the semiclassical tools available for quantum propagation.
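
    For reference, the thawed Gaussian wave packet underlying this family of methods takes the familiar Heller form (the notation here is generic and may differ from the authors'):

        \psi(\mathbf{x},t) = \exp\!\left[\frac{i}{\hbar}\Big((\mathbf{x}-\mathbf{q}_t)^{\mathsf T} A_t\,(\mathbf{x}-\mathbf{q}_t) + \mathbf{p}_t^{\mathsf T}(\mathbf{x}-\mathbf{q}_t) + \gamma_t\Big)\right],

    where q_t and p_t follow a classical trajectory (here, the off-center guiding trajectory), A_t is a complex symmetric width matrix propagated through the locally quadratic expansion of the potential, and \gamma_t carries the phase and normalization.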

  12. A multi-dimensional finite volume cell-centered direct ALE solver for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Clair, G.; Ghidaglia, J.-M.; Perlat, J.-P.

    2016-12-01

    In this paper we describe a second-order multi-dimensional scheme, belonging to the class of direct Arbitrary Lagrangian-Eulerian (ALE) methods, for the solution of non-linear hyperbolic systems of conservation laws. The scheme is constructed upon a cell-centered explicit Lagrangian solver completed with an edge-based upwinded formulation of the numerical fluxes, computed from the MUSCL-Hancock method, to obtain a full ALE formulation. Numerical fluxes depend on nodal grid velocities, which are either set or computed to avoid most of the mesh problems typically encountered in purely Lagrangian simulations. In order to assess the robustness of the scheme, most results proposed in this paper have been obtained by computing the grid velocities as a fraction of the Lagrangian nodal velocities, the ratio being set before running the test case. The last part of the paper describes preliminary results for the triple point test case run in the ALE framework by computing the grid velocities with the fully adaptive Large Eddy Limitation (L.E.L.) method proposed in [1]. This method automatically computes the grid velocities at each node defining the mesh from the local characteristics of the flow. We finally discuss the advantages and drawbacks of the coupling.

  13. Future CAD in multi-dimensional medical images--project on multi-organ, multi-disease CAD system.

    PubMed

    Kobatake, Hidefumi

    2007-01-01

    A large research project on the subject of computer-aided diagnosis (CAD) entitled "Intelligent Assistance in Diagnosis of Multi-dimensional Medical Images" was initiated in Japan in 2003. The objective of this research project is to develop a multi-organ, multi-disease CAD system that incorporates anatomical knowledge of the human body and diagnostic knowledge of various types of diseases. The present paper provides an overview of the project and clarifies the trend of future CAD technologies in Japan.

  14. Effect of a Multi-Dimensional and Inter-Sectoral Intervention on the Adherence of Psychiatric Patients

    PubMed Central

    Pauly, Anne; Wolf, Carolin; Mayr, Andreas; Lenz, Bernd; Kornhuber, Johannes; Friedland, Kristina

    2015-01-01

    Background In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. Methods 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the “Medication Adherence Report Scale” (MARS) and the “Drug Attitude Inventory” (DAI). Results The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73–1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15–2.72) more for intervention patients. Conclusion These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. Trial Registration German Clinical Trials Register DRKS00006358 PMID:26437449

  15. Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

    NASA Astrophysics Data System (ADS)

    Wood, William Alfred, III

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. The scalar test cases include advected shear, circular advection, non-linear advection with coalescing shock and expansion fans, and advection-diffusion. For all scalar cases the fluctuation splitting scheme is more accurate, and the primary mechanism for the improved fluctuation splitting performance is shown to be the reduced production of artificial dissipation relative to DMFDSFV. The most significant scalar result is for combined advection-diffusion, where the present fluctuation splitting scheme is able to resolve the physical dissipation from the artificial dissipation on a much coarser mesh than DMFDSFV is able to, allowing order-of-magnitude reductions in solution time. Among the inviscid test cases the converging supersonic streams problem is notable in that the fluctuation splitting scheme exhibits superconvergent third-order spatial accuracy. For the inviscid cases of a supersonic diamond airfoil, supersonic slender cone, and incompressible circular bump the fluctuation splitting drag coefficient errors are typically half the DMFDSFV drag errors. However, for the incompressible inviscid sphere the fluctuation splitting drag error is larger than for DMFDSFV. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation

  16. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  17. On SCALE Validation for PBR Analysis

    SciTech Connect

    Ilas, Germina

    2010-01-01

    Studies were performed to assess the capabilities of the SCALE code system to provide accurate cross sections for analyses of pebble bed reactor configurations. The analyzed configurations are representative of fuel in the HTR-10 reactor in the first critical core and at full power operation conditions. Relevant parameters (multiplication constant, spectral indices, few-group cross sections) are calculated with SCALE for the considered configurations. The results are compared to results obtained with corresponding consistent MCNP models. The code-to-code comparison shows good agreement at both room and operating temperatures, indicating a good performance of SCALE for analysis of doubly heterogeneous fuel configurations. The development of advanced methods and computational tools for the analysis of pebble bed reactor (PBR) configurations has been a research area of renewed interest for the international community during recent decades. The PBR, which is a High Temperature Gas Cooled Reactor (HTGR) system, represents one of the potential candidates for future deployment throughout the world of reactor systems that would meet the increased requirements of efficiency, safety, and proliferation resistance and would support other applications such as hydrogen production or nuclear waste recycling. In the U.S., the pebble bed design is one of the two designs under consideration by the Next Generation Nuclear Plant (NGNP) Program.

  18. Overview of NASA Multi-Dimensional Stirling Convertor Code Development and Validation Effort

    NASA Astrophysics Data System (ADS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2003-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and ``two space'' test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this multi-D code development effort.

  19. Overview of NASA Multi-Dimensional Stirling Convertor Code Development and Validation Effort

    NASA Astrophysics Data System (ADS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-12-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this

  20. ALEGRA-HEDP Multi-Dimensional Simulations of Z-pinch Related Physics

    NASA Astrophysics Data System (ADS)

    Garasi, Christopher J.

    2003-10-01

    The marriage of experimental diagnostics and computer simulations continues to enhance our understanding of the physics and dynamics associated with current-driven wire arrays. Early models that assumed the formation of an unstable, cylindrical shell of plasma due to wire merger have been replaced with a more complex picture involving wire material ablating non-uniformly along the wires, creating plasma pre-fill interior to the array before the bulk of the array collapses due to magnetic forces. Non-uniform wire ablation leads to wire breakup, which provides a mechanism for some wire material to be left behind as the bulk of the array stagnates onto the pre-fill. Once the bulk of the material has stagnated, electrical current can then shift back to the material left behind and cause it to stagnate onto the already collapsed bulk array mass. These complex effects impact the total radiation output from the wire array which is very important to application of that radiation for inertial confinement fusion. A detailed understanding of the formation and evolution of wire array perturbations is needed, especially for those which are three-dimensional in nature. Sandia National Laboratories has developed a multi-physics research code tailored to simulate high energy density physics (HEDP) environments. ALEGRA-HEDP has begun to simulate the evolution of wire arrays and has produced the highest fidelity, two-dimensional simulations of wire-array precursor ablation to date. Our three-dimensional code capability now provides us with the ability to solve for the magnetic field and current density distribution associated with the wire array and the evolution of three-dimensional effects seen experimentally. The insight obtained from these multi-dimensional simulations of wire arrays will be presented and specific simulations will be compared to experimental data.

  1. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this

  2. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    PubMed

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multi-dimensional model have proved to be powerful design tools to integrate data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in source systems feeding the data warehouse.

  3. MULTI-DIMENSIONAL FEATURES OF NEUTRINO TRANSFER IN CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S.

    2015-01-01

    We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ∼20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (∼2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.

  4. POLARIZED LINE FORMATION IN MULTI-DIMENSIONAL MEDIA. III. HANLE EFFECT WITH PARTIAL FREQUENCY REDISTRIBUTION

    SciTech Connect

    Anusha, L. S.; Nagendra, K. N.

    2011-09-01

    In two previous papers, we solved the polarized radiative transfer (RT) equation in multi-dimensional (multi-D) geometries with partial frequency redistribution as the scattering mechanism. We assumed Rayleigh scattering as the only source of linear polarization (Q/I, U/I) in both these papers. In this paper, we extend these previous works to include the effect of weak oriented magnetic fields (Hanle effect) on line scattering. We generalize the technique of Stokes vector decomposition in terms of the irreducible spherical tensors T^K_Q, developed by Anusha and Nagendra, to the case of RT with Hanle effect. A fast iterative method of solution (based on the Stabilized Preconditioned Bi-Conjugate-Gradient technique), developed by Anusha et al., is now generalized to the case of RT in magnetized three-dimensional media. We use the efficient short-characteristics formal solution method for multi-D media, generalized appropriately to the present context. The main results of this paper are the following: (1) a comparison of emergent (I, Q/I, U/I) profiles formed in one-dimensional (1D) media, with the corresponding emergent, spatially averaged profiles formed in multi-D media, shows that in the spatially resolved structures, the assumption of 1D may lead to large errors in linear polarization, especially in the line wings. (2) The multi-D RT in semi-infinite non-magnetic media causes a strong spatial variation of the emergent (Q/I, U/I) profiles, which is more pronounced in the line wings. (3) The presence of a weak magnetic field modifies the spatial variation of the emergent (Q/I, U/I) profiles in the line core, by producing significant changes in their magnitudes.

  5. Multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates.

    SciTech Connect

    Chen, Ken Shuang

    2004-11-01

    This report documents the author's efforts in the deterministic modeling of copper-sulfidation corrosion on non-planar substrates such as diodes and electrical connectors. A new framework based on Goma was developed for multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates. In this framework, the moving sulfidation front is explicitly tracked by treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and repeatedly performing re-meshing using CUBIT and re-mapping using MAPVAR. Three one-dimensional studies were performed for verifying the framework in asymptotic regimes. Limited model validation was also carried out by comparing computed copper-sulfide thickness with experimental data. The framework was first demonstrated in modeling one-dimensional copper sulfidation with charge separation. It was found that both the thickness of the space-charge layers and the electrical potential at the sulfidation surface decrease rapidly as the Cu2S layer thickens initially but eventually reach equilibrium values as the Cu2S layer becomes sufficiently thick; it was also found that electroneutrality is a reasonable approximation and that the electro-migration flux may be estimated by using the equilibrium potential difference between the sulfidation and annihilation surfaces when the Cu2S layer is sufficiently thick. The framework was then employed to model copper sulfidation in the solid-state-diffusion controlled regime (i.e. stage II sulfidation) on a prototypical diode until a continuous Cu2S film was formed on the diode surface. The framework was also applied to model copper sulfidation on an intermittent electrical contact between a gold-plated copper pin and a gold-plated copper pad; the presence of Cu2S was found to raise the effective electrical resistance drastically. Lastly, future research needs in modeling atmospheric copper sulfidation are discussed.

  6. Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty.

    PubMed

    Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
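
    The label Bayesian networks in the paper encode a joint distribution over five variables; as a deliberately simplified, single-variable illustration of the "consensus among the most similar interneurons" idea, one might average the label distributions of the k nearest neighbours in morphometric space, as in the sketch below (function and variable names are hypothetical, not the authors' implementation).

    ```python
    import numpy as np

    def consensus_label_distribution(x_new, X, label_probs, k=5):
        """Average the per-neuron label distributions of the k morphometrically
        nearest neurons (a simplified, single-variable stand-in for the paper's
        label-Bayesian-network consensus)."""
        d = np.linalg.norm(X - x_new, axis=1)        # Euclidean distance in morphometric space
        nearest = np.argsort(d)[:k]
        return label_probs[nearest].mean(axis=0)     # consensus distribution over classes

    # X: (n_neurons, n_morphometric_params); label_probs: (n_neurons, n_classes),
    # each row being the fraction of neuroscientists who chose each class.
    ```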

  7. Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty

    PubMed Central

    Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2014-01-01

    Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features

  8. Numerical simulation of multi-dimensional NMR response in tight sandstone

    NASA Astrophysics Data System (ADS)

    Guo, Jiangfeng; Xie, Ranhong; Zou, Youlong; Ding, Yejiao

    2016-06-01

    Conventional logging methods have limitations in the evaluation of tight sandstone reservoirs. The multi-dimensional nuclear magnetic resonance (NMR) logging method has the advantage that it can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1) and diffusion coefficient (D). In this paper, we simulate NMR measurements of tight sandstone with different wettability and saturations by the random walk method and obtain the magnetization decays of Carr-Purcell-Meiboom-Gill pulse sequences with different wait times (TW) and echo spacings (TE) under a magnetic field gradient, resulting in D-T2-T1 maps by the multiple echo trains joint inversion method. We also study the effects of wettability, saturation, signal-to-noise ratio (SNR) of data and restricted diffusion on the D-T2-T1 maps in tight sandstone. The results show that with decreasing wetting fluid saturation, the surface relaxation rate of the wetting fluid gradually increases and the restricted diffusion phenomenon becomes more and more obvious, which leads to the wetting fluid signal moving along the direction of short relaxation and the direction of the diffusion coefficient decreasing in D-T2-T1 maps. Meanwhile, the non-wetting fluid position in D-T2-T1 maps does not change with saturation variation. With decreasing SNR, the ability to identify water and oil signals based on NMR maps gradually decreases. The wetting fluid D-T1 and D-T2 correlations in NMR diffusion-relaxation maps of tight sandstone are obtained through expanding the wetting fluid restricted diffusion models, and are further applied to recognize the wetting fluid in simulated D-T2 maps and D-T1 maps.

  9. Multi-dimensional Conjunctive Operation Rule for the Water Supply System

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Tan, C. A.; CHEN, Y.; Tung, C.

    2011-12-01

    In recent years, floods and droughts have increased not only in number but also in intensity: floods have become more severe during the wet season and droughts more serious during the dry season. In order to reduce their impact on agriculture, industry, and even human beings, the conjunctive use of surface water and groundwater has received much attention and has become a new direction for future research. Traditionally, reservoir operation follows the operation rule curve to satisfy the water demand and considers only water levels at the reservoirs and time series. The strategy used in the conjunctive-use management model is that the water demand is first satisfied by the reservoirs operated based on the rule curves, and the deficit between demand and supply, if it exists, is provided by the groundwater. In this study, we propose a new operation rule, named the multi-dimensional conjunctive operation rule curve (MCORC), which extends the concept of the reservoir operation rule curve. The MCORC is a three-dimensional curve and is applied to both surface water and groundwater. Three sets of parameters are considered simultaneously in the curve: water levels and the supply percentage at reservoirs, groundwater levels and the supply percentage, and time series. The zonation method and a heuristic algorithm are applied to optimize the curve subject to the constraints of the reservoir operation rules and the safe yield of groundwater. The proposed conjunctive operation rule was applied to a water supply system analogous to an area in northern Taiwan. The results showed that the MCORC could increase the efficiency of water use and reduce the risk of serious water deficits.

  10. Large-scale parametric survival analysis.

    PubMed

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

    Survival analysis has been a topic of active statistical research in the past few decades with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very-high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that applying regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
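
    The paper's tool relies on cyclic coordinate descent for very high-dimensional data; as a small-scale stand-in for regularized parametric survival modelling, a ridge-penalized exponential survival regression fit with a generic optimizer is sketched below (this is not the authors' implementation, and the variable names are ours).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fit_exponential_survival(X, time, event, alpha=1.0):
        """Ridge-penalized exponential survival regression.
        Hazard rate lambda_i = exp(b0 + x_i . beta); event=1 observed, event=0 censored."""
        n, p = X.shape

        def nll(params):
            b0, beta = params[0], params[1:]
            lam = np.exp(b0 + X @ beta)
            # -log L = sum(lam*t - event*log(lam)), plus an L2 penalty on beta
            return np.sum(lam * time - event * np.log(lam)) + alpha * beta @ beta

        res = minimize(nll, np.zeros(p + 1), method="L-BFGS-B")
        return res.x[0], res.x[1:]

    # usage: b0, beta = fit_exponential_survival(X, time, event, alpha=0.5)
    ```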

  11. Detection and analysis of multi-dimensional pulse wave based on optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Shen, Yihui; Li, Zhifang; Li, Hui; Chen, Haiyu

    2014-11-01

    Pulse diagnosis is an important method of traditional Chinese medicine (TCM). Doctors diagnose patients' physiological and pathological statuses through palpation of the radial artery to obtain radial artery pulse information. Optical coherence tomography (OCT) is a useful tool for medical optical research. Current conventional diagnostic devices only function as pressure sensors to detect the pulse wave, which only partially reflects what doctors feel and loses large amounts of useful information. In this paper, the microscopic changes of the surface skin above the radial artery were studied in the form of images based on OCT. The deformation of the surface skin in a cardiac cycle, which is caused by the arterial pulse, was detected by OCT. The patient's pulse wave was calculated through image processing and found to be in good agreement with the result obtained by a pulse analyzer. The patient's physiological and pathological statuses can thus be monitored in real time. This research provides a new method for pulse diagnosis in traditional Chinese medicine.

  12. A Multi-Dimensional Analysis of Feedback by Tutors and Teacher-Educators to Their Students.

    ERIC Educational Resources Information Center

    Van Looy, Linda; Vrijsen, Mike

    This study analyzed guidance given to student teachers in Flanders, Belgium, by their tutors and teacher-educators, tutors' and teacher educators' written accounts, and teacher-educators' guidance discussions. The study included 895 guidance reports written by 29 tutors, which included 7,589 remarks. There were also 135 reports written by 13…

  13. Correlation network analysis for multi-dimensional data in stocks market

    NASA Astrophysics Data System (ADS)

    Kazemilari, Mansooreh; Djauhari, Maman Abdurachman

    2015-07-01

    This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in a stock network. The motivation of this paper is (i) to apply the RV coefficient to define the network among stocks, where each stock is represented by a multivariate time series; (ii) to analyze that network in terms of the topological structure of the minimum spanning trees of the stocks; and (iii) to compare the network topology obtained from univariate correlation (based on r) with that obtained from multivariate correlation (based on the RV coefficient).
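
    A minimal sketch of the RV coefficient, and of building a minimum spanning tree from it, is given below. The sqrt(2(1 - RV)) distance transform mirrors the usual correlation-to-distance convention and is an assumption here, not necessarily the exact transform used in the paper.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def rv_coefficient(X, Y):
        """RV coefficient between two multivariate series observed on the
        same n time points (X: n x p, Y: n x q), after column centering."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Sx, Sy = X @ X.T, Y @ Y.T
        return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

    def mst_from_series(series):
        """series: list of (n x p_i) arrays, one multivariate time series per stock."""
        m = len(series)
        D = np.zeros((m, m))
        for i in range(m):
            for j in range(i + 1, m):
                rv = rv_coefficient(series[i], series[j])
                D[i, j] = D[j, i] = np.sqrt(2.0 * (1.0 - rv))   # correlation-to-distance transform
        return minimum_spanning_tree(D)   # sparse matrix holding the MST edges
    ```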

  14. Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.

    PubMed

    Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef

    2009-12-01

    In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving to the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multidimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged at the end of a MD PSO process corresponds to a unique ANN configuration where the network parameters (connections, weights and biases) can then be resolved from the positional optimum reached on that dimension. In addition to this, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating what potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs so as to use the conventional techniques such as back-propagation and some other evolutionary methods in this field. The proposed technique is applied over the most challenging synthetic problems to test its optimality on evolving networks and over the benchmark problems to test its generalization capability as well as to make comparative evaluations with the several competing techniques. The experimental
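
    MD PSO extends the standard particle swarm update with inter-dimensional passes; for orientation, the fixed-dimension PSO update that it builds on is sketched below. The hyperparameters are illustrative, and the dimensional-search machinery of MD PSO is not modelled here.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
        """Standard (fixed-dimension) particle swarm optimization; MD PSO additionally
        lets particles move between dimensions, which is not modelled in this sketch."""
        rng = np.random.default_rng(0)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # inertia + cognitive + social terms
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # e.g. pso(lambda p: np.sum(p**2), dim=10)
    ```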

  15. Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling.

    PubMed

    Kim, Se-Kang; Davison, Mark L; Frisby, Craig L

    2007-01-01

    This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). PAMS is an exploratory method for identifying major profiles in a multi-subtest test battery. Major profile patterns are represented as dimensions extracted from an MDS analysis. PAMS represents an individual observed score as a linear combination of dimensions, where the dimensions are the most typical profile patterns present in a population. While the PAMS approach was initially developed for exploratory purposes, its results can later be confirmed in a different sample by CFA. Since CFA is often used to verify results from an exploratory factor analysis, the present paper makes the connection between a factor model and the PAMS model, and then illustrates CFA with a simulated example (generated by the PAMS model) as well as with a real example. The real example demonstrates confirmation of PAMS exploratory results using a different sample. Fit indexes can be used to indicate whether the CFA reparameterization works as a confirmatory approach for the PAMS exploratory results.
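
    The dimension-extraction step that PAMS builds on can be illustrated with classical (Torgerson) MDS; the sketch below recovers coordinates from a matrix of pairwise distances between score profiles and is offered only as orientation, not as the PAMS estimation procedure.

      # Classical (Torgerson) MDS: recover k-dimensional coordinates from a matrix
      # of pairwise distances between profiles (orientation only, not PAMS itself).
      import numpy as np

      def classical_mds(D, k=2):
          """D: (n x n) pairwise distance matrix; returns an (n x k) coordinate array."""
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
          B = -0.5 * J @ (D ** 2) @ J                # double-centred squared distances
          eigvals, eigvecs = np.linalg.eigh(B)
          order = np.argsort(eigvals)[::-1][:k]      # k largest eigenvalues
          L = np.sqrt(np.maximum(eigvals[order], 0.0))
          return eigvecs[:, order] * L               # coordinates (dimension scores)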

  16. A two scale analysis of tight sandstones

    NASA Astrophysics Data System (ADS)

    Adler, P. M.; Davy, C. A.; Song, Y.; Troadec, D.; Hauss, G.; Skoczylas, F.

    2015-12-01

    Tight sandstones have a low porosity and a very small permeability K. Available models for K do not compare well with measurements. These sandstones are made of SiO_2 grains with a typical size of several hundred microns. The grains are separated by a network of micro-cracks, with sizes ranging from microns down to tens of nm. Therefore, the structure can be schematized by Voronoi polyhedra separated by plane, permeable polygonal micro-cracks. Our goal is to estimate K based on a two-scale analysis and to compare the results to measurements. For a particular sample [2], local measurements on several scales include FIB/SEM [3], CMT and 2D SEM. FIB/SEM is selected because the peak pore size given by Mercury Intrusion Porosimetry is 350 nm. FIB/SEM imaging (with 50 nm voxel size) identifies an individual crack of 180 nm average opening, whereas CMT provides a connected porosity (individual crack) for 60 nm voxel size, of 4 micron average opening. Numerical modelling is performed by combining the micro-crack network scale (given by 2D SEM) and the 3D micro-crack scale (given by either FIB/SEM or CMT). Estimates of the micro-crack density are derived from 2D SEM trace maps by counting the intersections with scanlines, the surface density of traces, and the number of fracture intersections. K is deduced by using a semi-empirical formula valid for identical, isotropic and uniformly distributed fractures [1]. This value is proportional to the micro-crack transmissivity sigma. Sigma is determined by solving the Stokes equation in the micro-cracks measured by FIB/SEM or CMT. K is obtained by combining the two previous results. Good correlation with measured values on centimetric plugs is found when using sigma from CMT data. The results are discussed and further research is proposed. [1] Adler et al, Fractured porous media, Oxford Univ. Press, 2012. [2] Duan et al, Int. J. Rock Mech. Mining Sci., 65, p75, 2014. [3] Song et al, Marine and Petroleum Eng., 65, p63
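
    For orientation only, the parallel-plate ("cubic law") idealization gives a closed-form crack transmissivity that can be evaluated for the 180 nm opening quoted above; the study itself obtains sigma by solving the Stokes equations in the imaged crack geometry, so the lines below are a simplifying assumption rather than the authors' method.

      # Idealized parallel-plate (cubic-law) crack transmissivity, sigma = w**3 / 12,
      # for the 180 nm average opening seen by FIB/SEM (stand-in for the Stokes solution).
      w = 180e-9                   # crack opening (m)
      sigma = w ** 3 / 12.0        # transmissivity (m^3)
      print(f"sigma = {sigma:.3e} m^3")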

  17. Multi-dimensional, fully-implicit, spectral method for the Vlasov-Maxwell equations with exact conservation laws in discrete form

    NASA Astrophysics Data System (ADS)

    Delzanno, G. L.

    2015-11-01

    A spectral method for the numerical solution of the multi-dimensional Vlasov-Maxwell equations is presented. The plasma distribution function is expanded in Fourier (for the spatial part) and Hermite (for the velocity part) basis functions, leading to a truncated system of ordinary differential equations for the expansion coefficients (moments) that is discretized with an implicit, second-order accurate Crank-Nicolson time discretization. The discrete non-linear system is solved with a preconditioned Jacobian-Free Newton-Krylov method. It is shown analytically that the Fourier-Hermite method features exact conservation laws for total mass, momentum and energy in discrete form. Standard tests involving plasma waves and the whistler instability confirm the validity of the conservation laws numerically. The whistler instability test also shows that we can step over the fastest time scale in the system without incurring numerical instabilities. Some preconditioning strategies are presented, showing that the number of linear iterations of the Krylov solver can be drastically reduced and a significant gain in performance can be obtained.

  18. A Cross-Cultural Comparison of Singaporean and Taiwanese Eighth Graders' Science Learning Self-Efficacy from a Multi-Dimensional Perspective

    NASA Astrophysics Data System (ADS)

    Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung

    2013-05-01

    Due to the scarcity of cross-cultural comparative studies exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two student groups. Moreover, within-culture comparisons were made in terms of gender. The results showed that, first, the SLSE instrument was valid and reliable for measuring the Singaporean and Taiwanese students' SLSE. Second, through a two-way multivariate analysis of variance (nationality by gender), the main result indicated that the SLSE held by the Singaporean eighth graders was significantly higher than that of their Taiwanese counterparts in all dimensions, including 'conceptual understanding and higher-order cognitive skills', 'practical work (PW)', 'everyday application', and 'science communication'. In addition, the within-culture gender comparisons indicated that the male Singaporean students tended to possess higher SLSE than the female students did in all SLSE dimensions except for the 'PW' dimension. However, no gender differences were found in the Taiwanese sample. The findings of this study were interpreted from a socio-cultural perspective in terms of the curriculum differences, societal expectations of science education, and educational policies in Singapore and Taiwan.

  19. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  20. Multi-dimensional crest factor reduction and digital predistortion for multi-band radio-over-fiber links.

    PubMed

    Chen, Hao; Li, Jianqiang; Yin, Chunjing; Xu, Kun; Dai, Yitang; Yin, Feifei

    2014-08-25

    A multi-dimensional crest factor reduction (MD-CFR) technique is proposed to improve the performance and efficiency of multi-band radio-over-fiber (RoF) links. Cooperating with multi-dimensional digital predistortion (MD-DPD), MD-CFR increases the performance of both directly-modulated and externally-modulated RoF links in terms of error vector magnitude (EVM) and adjacent channel power ratio (ACPR). For the directly-modulated RoF link, more than 5 dB output ACPR reduction is obtained, and output EVMs are reduced from 11.83% and 12.47% to 7.51% and 7.26% for the two bands, respectively, while only a slight improvement to 11.58% and 10.78% is obtained using MD-DPD alone. Similar results are achieved in the externally-modulated RoF link. Given a threshold in EVM or ACPR, the RF power transmit efficiency is also further enhanced.
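
    As generic background on what crest factor reduction does, the sketch below measures the peak-to-average power ratio (PAPR) of a complex baseband signal and reduces it by hard clipping; it is not the multi-band MD-CFR/MD-DPD scheme of the paper, and the 6 dB target is an arbitrary illustration.

      # Generic crest-factor-reduction illustration: PAPR measurement and hard
      # clipping of a complex baseband signal to a target PAPR (not MD-CFR/MD-DPD).
      import numpy as np

      def papr_db(x):
          p = np.abs(x) ** 2
          return 10.0 * np.log10(p.max() / p.mean())

      def clip_cfr(x, target_papr_db):
          """Hard-clip the envelope so the peak corresponds to the target PAPR."""
          rms = np.sqrt(np.mean(np.abs(x) ** 2))
          limit = rms * 10.0 ** (target_papr_db / 20.0)
          mag = np.maximum(np.abs(x), 1e-12)
          return x * np.minimum(1.0, limit / mag)

      rng = np.random.default_rng(1)
      x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
      print(papr_db(x), papr_db(clip_cfr(x, 6.0)))   # PAPR before and after clipping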

  1. Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR

    SciTech Connect

    Ishii, M.; Ravankar, S.T.; Dowlati, R.

    1996-04-01

    The scaled facility design was based on the three-level scaling method; the first level is based on the well-established approach obtained from the integral response function, namely integral scaling. This level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second-level scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling; this corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time will run twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
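
    The quoted ratios are mutually consistent under the standard reduced-height scaling relations (time ratio equal to the square root of the height ratio, power ratio equal to the volume ratio divided by the time ratio); the few lines below simply reproduce that arithmetic and are not part of the report.

      # Consistency check of the PUMA scaling ratios using the standard
      # reduced-height relations: tau_R = sqrt(l_R), power_R = volume_R / tau_R.
      height_ratio = 1 / 4
      area_ratio = 1 / 100
      volume_ratio = height_ratio * area_ratio     # -> 1/400
      time_ratio = height_ratio ** 0.5             # -> 1/2 (model time runs twice as fast)
      power_ratio = volume_ratio / time_ratio      # -> 1/200
      print(volume_ratio, time_ratio, power_ratio)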

  2. Global scale precipitation from monthly to centennial scales: empirical space-time scaling analysis, anthropogenic effects

    NASA Astrophysics Data System (ADS)

    de Lima, Isabel; Lovejoy, Shaun

    2016-04-01

    The characterization of precipitation scaling regimes represents a key contribution to the improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global-scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial scales and from planetary scales down to several hundred kilometers. Results show the presence - similarly to other atmospheric fields - of an intermediate "macroweather" regime between the familiar weather and climate regimes: we systematically characterize the temporal, spatial, and joint space-time statistics and variability of macroweather precipitation, and the outer scale limit of temporal scaling. These regimes qualitatively and quantitatively alternate in the way fluctuations vary with scale. In the macroweather regime, the fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20 - 40 yrs (depending on the product and on the spatial scale). This explains, for example, why studies that use data covering only a few decades do not easily give evidence of anthropogenic changes in precipitation as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarify a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to
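
    The Haar fluctuation statistic used in this kind of analysis is simple to compute: at scale s, the fluctuation over an interval is the absolute difference between the means of its two halves, and the average fluctuation is examined as a function of s. The sketch below illustrates this on a synthetic series; the averaging and fitting choices are assumptions for illustration, not the authors' exact procedure.

      # Haar fluctuation analysis: at each (even) scale s, the fluctuation over an
      # interval is |mean(second half) - mean(first half)|; scaling regimes show up
      # as different slopes of mean fluctuation versus scale on a log-log plot.
      import numpy as np

      def haar_fluctuations(x, scales):
          out = []
          for s in scales:                                  # s must be even
              n = (len(x) // s) * s
              blocks = x[:n].reshape(-1, s)
              first = blocks[:, : s // 2].mean(axis=1)
              second = blocks[:, s // 2 :].mean(axis=1)
              out.append(np.mean(np.abs(second - first)))
          return np.array(out)

      x = np.cumsum(np.random.default_rng(0).standard_normal(2 ** 14))   # toy series
      scales = np.array([2 ** k for k in range(2, 11)])
      F = haar_fluctuations(x, scales)
      H = np.polyfit(np.log(scales), np.log(F), 1)[0]   # ~0.5 for this random walk
      print(H)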

  3. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  4. Dimensionality of the hospital anxiety and depression scale (HADS) in cardiac patients: comparison of Mokken scale analysis and factor analysis.

    PubMed

    Emons, Wilco H M; Sijtsma, Klaas; Pedersen, Susanne S

    2012-09-01

    The hospital anxiety and depression scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken scale analysis and factor analysis and (b) the scale properties of the HADS. Mokken scale analysis and factor analysis suggested that three dimensions adequately capture the structure of the HADS. Of the three corresponding scales, two scales of five items each were found to be structurally sound and reliable. These scales covered the two key attributes of anxiety and (anhedonic) depression. The findings suggest that the HADS may be reduced to a 10-item questionnaire comprising two 5-item scales measuring anxiety and depressive symptoms.

  5. Developmental Work Personality Scale: An Initial Analysis.

    ERIC Educational Resources Information Center

    Strauser, David R.; Keim, Jeanmarie

    2002-01-01

    The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive…

  6. Quality Assessment in Early Childhood Programs: A Multi-Dimensional Approach.

    ERIC Educational Resources Information Center

    Fiene, Richard; Melnick, Steven A.

    The relationships among independent observer ratings of a child care program on the Early Childhood Environment Rating Scale (ECERS), state department personnel ratings of program quality using the Child Development Program Evaluation Scale (CDPES), and self-evaluation ratings using the self-assessment instrument designed for the Early Childhood…

  7. Dynamical scaling analysis of plant callus growth

    NASA Astrophysics Data System (ADS)

    Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.

    2003-07-01

    We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.

  8. Scaling analysis of Langevin-type equations

    NASA Astrophysics Data System (ADS)

    Hanfei; Ma, Benkun

    1993-05-01

    The approach to the scaling behavior of open dissipative systems proposed by Hentschel and Family [Phys. Rev. Lett. 66, 1982 (1991)] is developed to analyze several models. The results show that there are two scaling regions, a strong-coupling region and a weak-coupling region, in each model. The dynamic renormalization-group results are exactly the same as the results in the weak-coupling region. The scaling exponents in the strong-coupling region and the crossover behavior are also discussed.

  9. Design Analysis for a Scaled Erosion Test

    SciTech Connect

    Lee, S.Y.

    2002-04-10

    In support of a slurry wear evaluation in the pretreatment filtration and evaporation systems of RPP-WTP, the Engineering Modeling and Simulation Group (EMSG) has developed computational models to help guide component design and scaling decisions and to assist in the full-scale analyses. This report deals with the filtration system. In this project, computational fluid dynamics (CFD) methods were applied to ensure that the test facility design would capture the erosion phenomena expected in the full-scale cross-flow ultrafiltration facility. A literature survey was initially performed to identify the principal mechanisms of erosion for a solids-laden fluid.

  10. Technical Aspects for the Creation of a Multi-Dimensional Land Information System

    NASA Astrophysics Data System (ADS)

    Ioannidis, Charalabos; Potsiou, Chryssy; Soile, Sofia; Verykokou, Styliani; Mourafetis, George; Doulamis, Nikolaos

    2016-06-01

    The complexity of modern urban environments and civil demands for fast, reliable and affordable decision-making requires not only a 3D Land Information System, which tends to replace traditional 2D LIS architectures, but also the need to address the time and scale parameters, that is, the 3D geometry of buildings in various time instances (4th dimension) at various levels of detail (LoDs - 5th dimension). This paper describes and proposes solutions for technical aspects that need to be addressed for the 5D modelling pipeline. Such solutions include the creation of a 3D model, the application of a selective modelling procedure between various time instances and at various LoDs, enriched with cadastral and other spatial data, and a procedural modelling approach for the representation of the inner parts of the buildings. The methodology is based on automatic change detection algorithms for spatial-temporal analysis of the changes that took place in subsequent time periods, using dense image matching and structure from motion algorithms. The selective modelling approach allows a detailed modelling only for the areas where spatial changes are detected. The procedural modelling techniques use programming languages for the textual semantic description of a building; they require the modeller to describe its part-to-whole relationships. Finally, a 5D viewer is developed, in order to tackle existing limitations that accompany the use of global systems, such as the Google Earth or the Google Maps, as visualization software. An application based on the proposed methodology in an urban area is presented and it provides satisfactory results.

  11. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) separating distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales determined in this way may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and obtain a statistical approach for identifying multi-scaling behavior under MF-DFA in particular.
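
    For reference, the q = 2 (monofractal) special case of MF-DFA is compact enough to sketch: integrate the series, detrend it in windows of size s, and average the residual variance to obtain the fluctuation function F(s); crossovers appear as breaks in log F(s) versus log s. The scale grid and polynomial order below are illustrative assumptions, and the paper's crossover-identification regression is not reproduced.

      # Compact DFA (the q = 2 special case of MF-DFA). Crossovers appear as breaks
      # in the log F(s) vs log s relationship.
      import numpy as np

      def dfa(x, scales, order=1):
          y = np.cumsum(x - np.mean(x))                     # profile
          F = []
          for s in scales:
              n = (len(y) // s) * s
              segments = y[:n].reshape(-1, s)
              t = np.arange(s)
              var = []
              for seg in segments:
                  coeffs = np.polyfit(t, seg, order)        # local polynomial trend
                  var.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
              F.append(np.sqrt(np.mean(var)))
          return np.array(F)

      x = np.random.default_rng(0).standard_normal(2 ** 14)     # white noise
      scales = np.unique(np.logspace(1, 3, 20).astype(int))
      F = dfa(x, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]       # ~0.5 for white noise
      print(alpha)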

  12. Minimum Sample Size Requirements for Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…

  13. The Kids' Empathic Development Scale (KEDS): A Multi-Dimensional Measure of Empathy in Primary School-Aged Children

    ERIC Educational Resources Information Center

    Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine

    2013-01-01

    Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic…

  14. Failure Analysis of a Pilot Scale Melter

    SciTech Connect

    Imrich, K J

    2001-09-14

    Failure of the pilot-scale test melter resulted from severe overheating of the Inconel 690 jacketed molybdenum electrode. Extreme temperatures were required to melt the glass during this campaign because the feed material contained a very high waste loading.

  15. Multi-Scale Modeling, Surrogate-Based Analysis, and Optimization of Lithium-Ion Batteries for Vehicle Applications

    NASA Astrophysics Data System (ADS)

    Du, Wenbo

    A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the

  16. Convective scale weather analysis and forecasting

    NASA Technical Reports Server (NTRS)

    Purdom, J. F. W.

    1984-01-01

    How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated, with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to frequently observe the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.

  17. Multi-dimensional titanium dioxide with desirable structural qualities for enhanced performance in quantum-dot sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Wu, Dapeng; He, Jinjin; Zhang, Shuo; Cao, Kun; Gao, Zhiyong; Xu, Fang; Jiang, Kai

    2015-05-01

    Multi-dimensional TiO2 hierarchical structures (MD-THS) assembled from mesoporous nanoribbons consisting of oriented, aligned nanocrystals are prepared via thermal decomposition of a Ti-containing gelatin-like precursor. A unique bridge-linking mechanism is proposed to illustrate the formation process of the precursor. Moreover, the as-prepared MD-THS possesses a high surface area of ∼106 cm2 g-1, a broad pore size distribution from several nanometers to ∼100 nm, and oriented, assembled primary nanocrystals, which gives rise to a high CdS/CdSe quantum dot loading amount and inhibits carrier recombination in the photoanode. Thanks to these structural advantages, the cell derived from MD-THS demonstrates a power conversion efficiency (PCE) of 4.15%, representing a ∼36% improvement compared with that of the nanocrystal-based cell, which permits the promising application of MD-THS as a photoanode material in quantum-dot sensitized solar cells.

  18. High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

  19. Multi-dimensional instability of obliquely propagating ion acoustic solitary waves in electron-positron-ion superthermal magnetoplasmas

    SciTech Connect

    EL-Shamy, E. F.

    2014-08-15

    The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons, are investigated. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. Clearly, the study of the model under consideration is helpful for explaining the propagation and the instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.

  20. Multi-dimensional instability of dust-ion-acoustic solitary structure with opposite polarity ions and non-thermal electrons

    NASA Astrophysics Data System (ADS)

    Haider, M. M.; Rahman, O.

    2016-12-01

    An attempt has been made to study the multi-dimensional instability of dust-ion-acoustic (DIA) solitary waves (SWs) in magnetized multi-ion plasmas containing opposite-polarity ions, opposite-polarity dust, and non-thermal electrons. First of all, we have derived the Zakharov-Kuznetsov (ZK) equation for DIA SWs in this case using the reductive perturbation method, as well as its solution. A small-k perturbation technique was employed to find the instability criterion and growth rate of such a wave, which can provide a guideline for understanding space and laboratory plasmas, such as those in the D-region of the Earth's ionosphere, the mesosphere, and the solar photosphere, as well as in microelectronics plasma processing reactors.

  1. Regional Frequency Analysis Based on Scaling Properties and Bayesian Models

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; Lee, Jeong-Ju; Moon, Young-Il

    2010-05-01

    A regional frequency analysis based on a Hierarchical Bayesian Network (HBN) and scaling theory was developed. Many recording rain gauges over South Korea were used for the analysis. First, a scaling approach combined with an extreme value distribution was employed to derive a regional formula for frequency analysis. Second, the HBN model was used to represent additional information about the regional structure of the scaling parameters, especially the location parameter and shape parameter. The location and shape parameters of the extreme value distribution were estimated by utilizing scaling properties in a regression framework, and the scaling parameters linking the parameters (location and shape) to various durations were simultaneously estimated. It was found that the regional frequency analysis combining HBN and scaling properties shows promising results in terms of establishing regional IDF curves.

  2. The Importance of a Multi-Dimensional Approach for Studying the Links between Food Access and Consumption

    PubMed Central

    Rose, Donald; Bodor, J. Nicholas; Hutchinson, Paul L.; Swalm, Chris M.

    2010-01-01

    Research on neighborhood food access has focused on documenting disparities in the food environment and on assessing the links between the environment and consumption. Relatively few studies have combined in-store food availability measures with geographic mapping of stores. We review research that has used these multi-dimensional measures of access to explore the links between the neighborhood food environment and consumption or weight status. Early research in California found correlations between red meat, reduced-fat milk, and whole-grain bread consumption and shelf space availability of these products in area stores. Subsequent research in New York confirmed the low-fat milk findings. Recent research in Baltimore has used more sophisticated diet assessment tools and store-based instruments, along with controls for individual characteristics, to show that low availability of healthy food in area stores is associated with low-quality diets of area residents. Our research in southeastern Louisiana has shown that shelf space availability of energy-dense snack foods is positively associated with BMI after controlling for individual socioeconomic characteristics. Most of this research is based on cross-sectional studies. To assess the direction of causality, future research testing the effects of interventions is needed. We suggest that multi-dimensional measures of the neighborhood food environment are important to understanding these links between access and consumption. They provide a more nuanced assessment of the food environment. Moreover, given the typical duration of research project cycles, changes to in-store environments may be more feasible than changes to the overall mix of retail outlets in communities. PMID:20410084

  3. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as global information, are discovered. The operating conditions and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP.
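
    The training loop behind a KSOFM analysis can be sketched in a few lines of NumPy: each sample pulls its best-matching map unit and its neighbours towards it, with a neighbourhood that shrinks over time, and the trained codebook is then inspected one variable at a time as "component planes". The grid size, learning-rate schedule, and neighbourhood function below are illustrative assumptions, not the settings used in the study.

      # Bare-bones self-organising map (SOM) training loop; data is assumed to be
      # standardised. Slices of the trained codebook w[:, :, k] play the role of
      # the component planes discussed above.
      import numpy as np

      def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
          rng = np.random.default_rng(seed)
          rows, cols = grid
          w = rng.random((rows, cols, data.shape[1]))            # codebook vectors
          coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                        indexing="ij"), axis=-1).astype(float)
          for t in range(iters):
              x = data[rng.integers(len(data))]
              d = np.linalg.norm(w - x, axis=-1)
              bmu = np.unravel_index(np.argmin(d), d.shape)      # best-matching unit
              frac = t / iters
              lr = lr0 * (1.0 - frac)
              sigma = sigma0 * (1.0 - frac) + 1e-3
              dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
              h = np.exp(-dist2 / (2.0 * sigma ** 2))            # neighbourhood weights
              w += lr * h[..., None] * (x - w)
          return w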

  4. Assessing pretreatment reactor scaling through empirical analysis

    DOE PAGES

    Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; ...

    2016-10-10

    Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the set of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and

  5. Assessing pretreatment reactor scaling through empirical analysis

    SciTech Connect

    Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; Nagle, Nicholas J.; Schell, Daniel J.; Tucker, Melvin P.; McMillan, James D.; Wolfrum, Edward J.

    2016-10-10

    Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the set of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and was
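
    The response surface models referred to above are typically low-order polynomials in the reaction conditions; the sketch below fits a quadratic surface of yield against pretreatment time and temperature by least squares, using entirely synthetic data, so only the model form (not the numbers) mirrors the study.

      # Quadratic response-surface fit of yield vs pretreatment time and temperature
      # on synthetic data, followed by a grid search for the predicted optimum.
      import numpy as np

      def design_matrix(time, temp):
          return np.column_stack([np.ones_like(time), time, temp,
                                  time * temp, time ** 2, temp ** 2])

      rng = np.random.default_rng(0)
      time = rng.uniform(5, 30, 40)          # min (synthetic)
      temp = rng.uniform(150, 200, 40)       # deg C (synthetic)
      yield_obs = (90 - 0.02 * (time - 18) ** 2 - 0.01 * (temp - 175) ** 2
                   + rng.normal(0, 1, 40))   # synthetic yields (%)

      beta, *_ = np.linalg.lstsq(design_matrix(time, temp), yield_obs, rcond=None)
      tg, Tg = np.meshgrid(np.linspace(5, 30, 60), np.linspace(150, 200, 60))
      pred = design_matrix(tg.ravel(), Tg.ravel()) @ beta
      i = np.argmax(pred)                    # predicted optimum on the grid
      print(tg.ravel()[i], Tg.ravel()[i], pred[i])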

  6. Efficient High Order Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations: Talk Slides

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Brian R. (Technical Monitor)

    2002-01-01

    This viewgraph presentation presents information on the attempt to produce high-order, efficient, central methods that scale well to high dimension. The central philosophy is that the equations should evolve to the point where the data is smooth. This is accomplished by a cyclic pattern of reconstruction, evolution, and re-projection. One-dimensional and two-dimensional representational methods are detailed as well.

  7. Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders

    ERIC Educational Resources Information Center

    Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.

    2013-01-01

    Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…

  8. A Cross-Cultural Comparison of Singaporean and Taiwanese Eighth Graders' Science Learning Self-Efficacy from a Multi-Dimensional Perspective

    ERIC Educational Resources Information Center

    Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung

    2013-01-01

    Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two student…

  9. FACTOR ANALYSIS OF THE ELKINS HYPNOTIZABILITY SCALE

    PubMed Central

    Elkins, Gary; Johnson, Aimee K.; Johnson, Alisa J.; Sliwinski, Jim

    2015-01-01

    Assessment of hypnotizability can provide important information for hypnosis research and practice. The Elkins Hypnotizability Scale (EHS) consists of 12 items and was developed to provide a time-efficient measure for use in both clinical and laboratory settings. The EHS has been shown to be a reliable measure with support for convergent validity with the Stanford Hypnotic Susceptibility Scale, Form C (r = .821, p < .001). The current study examined the factor structure of the EHS, which was administered to 252 adults (51.3% male; 48.7% female). Average time of administration was 25.8 minutes. Four factors selected on the basis of the best theoretical fit accounted for 63.37% of the variance. The results of this study provide an initial factor structure for the EHS. PMID:25978085
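
    As a pointer to tooling only, an exploratory four-factor fit can be run with scikit-learn as below; the item scores here are random placeholders rather than EHS responses, the varimax rotation argument requires a recent scikit-learn release, and a real analysis would also examine fit and factor interpretability as the study does.

      # Exploratory factor analysis of 12 item scores with scikit-learn
      # (placeholder data; illustrates the tooling, not the EHS results).
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      scores = rng.integers(0, 2, size=(252, 12)).astype(float)   # placeholder items

      fa = FactorAnalysis(n_components=4, rotation="varimax")     # rotation: sklearn >= 0.24
      fa.fit(scores)
      loadings = fa.components_.T                                 # 12 items x 4 factors
      print(np.round(loadings, 2))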

  10. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  11. Construct validity of the Depression and Somatic Symptoms Scale: evaluation by Mokken scale analysis

    PubMed Central

    Chou, Ya-Hsin; Lee, Chin-Pang; Liu, Chia-Yih; Hung, Ching-I

    2017-01-01

    Objective Previous studies of the Depression and Somatic Symptoms Scale (DSSS), a free scale, have been based on the classical test theory, and the construct validity and dimensionality of the DSSS are as yet uncertain. The aim of this study was to use Mokken scale analysis (MSA) to assess the dimensionality of the DSSS. Methods A sample of 214 psychiatric outpatients with mood and anxiety disorders were enrolled at a medical center in Taiwan (age: mean [SD] =38.3 [10.5] years; 63.1% female) and asked to complete the DSSS. MSA was used to assess the dimensionality of the DSSS. Results All 22 items of the DSSS formed a moderate unidimensional scale (Hs=0.403), supporting its construct validity. The DSSS was divided into 4 subscales (Hs ranged from 0.35 to 0.67), including a general somatic scale (GSS), melancholic scale (MS), muscular pain scale (MPS), and chest symptom scale (CSS). The GSS is a weak reliable Mokken scale; the other 3 scales are strong reliable Mokken scales. Conclusion The DSSS is a psychometrically sound measure of depression and somatic symptoms in adult psychiatric outpatients with depression or anxiety. The summed score of the DSSS and its 4 subscales are valid statistics. Further research is required for replication of the 4 subscales of the DSSS. PMID:28182138

  12. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
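
    Steps 1-3 of the LV procedure map directly onto a few raster operations, sketched below for a single land-surface parameter; block averaging is used here as a simplified stand-in for the resampling and segmentation options used in the paper.

      # Minimal local-variance (LV) workflow on a raster land-surface parameter:
      # up-scale by block averaging, take the mean 3x3 local standard deviation at
      # each level, then compute the rate of change of LV between levels (ROC-LV).
      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_sd(a, size=3):
          mean = uniform_filter(a, size)
          mean_sq = uniform_filter(a * a, size)
          return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

      def lv_curve(param, levels):
          lv = []
          for f in levels:                               # f = up-scaling factor
              r, c = (param.shape[0] // f) * f, (param.shape[1] // f) * f
              coarse = param[:r, :c].reshape(r // f, f, c // f, f).mean(axis=(1, 3))
              lv.append(local_sd(coarse).mean())
          lv = np.array(lv)
          roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]     # rate of change of LV (%)
          return lv, roc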

  13. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

    Data for Multi-Player Influence Maximization on Social Networks.” KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu...Jen Cheng. “Learning-Based Time-Sensitive Re-Ranking for Web Search.” SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin...“Exploiting and Evaluating MapReduce for Large-Scale Graph Mining.” ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou

  14. A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory

    NASA Technical Reports Server (NTRS)

    Prozan, R. J.

    1982-01-01

    The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.

  15. X-Ray absorption in homogeneous catalysis research: the iron-catalyzed Michael addition reaction by XAS, RIXS and multi-dimensional spectroscopy.

    PubMed

    Bauer, Matthias; Gastl, Christoph

    2010-06-07

    A survey over X-ray absorption methods in homogeneous catalysis research is given with the example of the iron-catalyzed Michael addition reaction. A thorough investigation of the catalytic cycle was possible by combination of conventional X-ray absorption spectroscopy (XAS), resonant inelastic X-ray scattering (RIXS) and multi-dimensional spectroscopy. The catalytically active compound formed in the first step of the Michael reaction of methyl vinyl ketone with 2-oxocyclopentanecarboxylate (1) could be elucidated in situ by RIXS spectroscopy, and the reduced catalytic activity of FeCl(3) x 6 H(2)O (2) compared to Fe(ClO(4))(3) x 9 H(2)O (3) could be further explained by the formation of a [Fe(III)Cl(4)(-)](3)[Fe(III)(1-H)(2)(H(2)O)(2)(+)][H(+)](2) complex. Chloride was identified as catalyst poison with a combined XAS-UV/vis study, which revealed that Cl(-) binds quantitatively to the available iron centers that are deactivated by formation of [FeCl(4)(-)]. Operando studies in the course of the reaction of methyl vinyl ketone with 1 by combined XAS-Raman spectroscopy allowed the exclusion of changes in the oxidation state and the octahedral geometry at the iron site; a reaction order of two with respect to methyl vinyl ketone and a rate constant of k = 1.413 min(-2) were determined by analysis of the C=C and C=O vibration band. Finally, a dedicated experimental set-up for three-dimensional spectroscopic studies (XAS, UV/vis and Raman) of homogeneous catalytic reactions under laboratory conditions, which emerged from the discussed investigations, is presented.

  16. A Multi-dimensional Program Evaluation Model: Considerations of Cost-Effectiveness, Equity, Quality, and Sustainability.

    ERIC Educational Resources Information Center

    Reinke, William A.

    1999-01-01

    Presents an algorithm for integrating evaluative concerns of cost effectiveness, equity, quality, and sustainability in program evaluation and offers suggestions for refining the system of measurement and analysis. (SLD)

  17. Mokken Scale Analysis for Dichotomous Items Using Marginal Models

    ERIC Educational Resources Information Center

    van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas

    2008-01-01

    Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…
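
    For concreteness, the pairwise scalability coefficient for dichotomous items can be written as one minus the ratio of observed to expected Guttman errors; the sketch below computes it for a simulated item pair, and the marginal-modelling inference that is the subject of the abstract is not attempted here.

      # Pairwise Mokken scalability coefficient H_ij for dichotomous items:
      # H_ij = 1 - (observed Guttman errors) / (errors expected under independence),
      # where a Guttman error is passing the harder item while failing the easier one.
      import numpy as np

      def h_pair(x_i, x_j):
          """x_i, x_j: 0/1 response arrays for two items from the same persons."""
          n = len(x_i)
          if x_i.mean() > x_j.mean():        # make item i the harder (less popular) one
              x_i, x_j = x_j, x_i
          observed = np.sum((x_i == 1) & (x_j == 0))
          expected = n * x_i.mean() * (1.0 - x_j.mean())
          return 1.0 - observed / expected

      rng = np.random.default_rng(0)
      theta = rng.normal(size=500)                                 # latent trait
      item_a = (theta + rng.normal(0, 1, 500) > 0.5).astype(int)   # harder item
      item_b = (theta + rng.normal(0, 1, 500) > -0.5).astype(int)  # easier item
      print(round(h_pair(item_a, item_b), 2))                      # positive for a scalable pair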

  18. Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)

    1999-01-01

    We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.

  19. Scaling properties of sea ice deformation from buoy dispersion analysis

    NASA Astrophysics Data System (ADS)

    Rampal, P.; Weiss, J.; Marsan, D.; Lindsay, R.; Stern, H.

    2008-03-01

    A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over timescales from 3 h to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate (the Arctic sea ice cover) stressed by heterogeneous forcing terms like winds and ocean currents. It shows that the sea ice deformation rate depends on the scales of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially for the ocean and the atmosphere. However, in our case, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent whatever the scales, i.e., it cannot be considered as viscous-like, even at very large time and/or spatial scales. Instead, it suggests a deformation accommodated by a multiscale fracturing/faulting processes.

  20. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  1. Metal analysis of scales taken from Arctic grayling.

    PubMed

    Farrell, A P; Hodaly, A H; Wang, S

    2000-11-01

    This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn, As, Se, Cd, Sb, Hg, and Pb) were measured in 10 to 16 ablation sites (5 μm radius) on each scale sample from Arctic grayling (Thymallus arcticus) (n = 10 fish). Ca, Mg, and Zn were at physiological levels in all scale samples. Se, Hg, and As were also detected in all scale samples. Only Cd was below detection limits of the LA-ICPMS for all samples, but some of the samples were below detection limits for Sb, Pb, and Ni. The mean scale concentrations for Se, Hg, and Pb were not significantly different from the muscle concentrations, and individual fish values were within fourfold of each other. Cd was not detected in either muscle or scale tissue, whereas Sb was detected at low levels in some scale samples but not in any of the muscle samples. Similarly, As was detected in all scale samples but not in muscle, and Ni was detected in almost all scale samples but only in one of the muscle samples. Therefore, there were good qualitative and quantitative agreements between the metal concentrations in scale and muscle tissues, with LA-ICPMS analysis of scales appearing to be a more sensitive method of detecting the body burden of Ni and As when compared with muscle tissue. Correlation analyses, performed for Pb, Hg, and Se concentrations, revealed that the scale concentrations for these three metals generally exceeded those of the muscle at low muscle concentrations. The LA-ICPMS analysis of scales had the capability to resolve significant spatial differences in metal concentrations within a fish scale. We conclude that metal analysis of fish scales using LA

  2. SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION

    SciTech Connect

    John Crepeau

    2011-03-01

    Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.

  3. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    A Bayesian Analysis of Scale-Invariant Processes. Veronica Nieves, Jingfeng Wang, and Rafael L. Bras (Georgia Tech Research Corporation). Citation: AIP Conf. Proc. 1443, 56 (2012); doi: 10.1063/1.3703620.

  4. A multi-dimensional Smolyak collocation method in curvilinear coordinates for computing vibrational spectra

    SciTech Connect

    Avila, Gustavo Carrington, Tucker

    2015-12-07

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in Avila and Carrington, Jr. [J. Chem. Phys. 139, 134114 (2013)]. Using an iterative eigensolver, energy levels and wavefunctions are determined from values of the potential on a Smolyak grid. The kinetic energy matrix-vector product is evaluated by transforming a vector labelled with (nondirect product) grid indices to a vector labelled by (nondirect product) basis indices. Both the transformation and application of the kinetic energy operator (KEO) scale favorably. Collocation facilitates dealing with complicated KEOs because it obviates the need to calculate integrals of coordinate dependent coefficients of differential operators. The ideas are tested by computing energy levels of HONO using a KEO in bond coordinates.

  5. Multi-dimensional Magnetotelluric Modeling of General Anisotropy and Its Implication for Structural Interpretation

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Wei, W.; Egbert, G. D.

    2015-12-01

    Although electrical anisotropy is likely at various scales in the Earth, present 3D inversion codes only allow for isotropic models. In fact, any effects of anisotropy present in any real data can always be accommodated by (possibly fine scale) isotropic structures. This suggests that some complex structures found in 3D inverse solutions (e.g., alternating elongate conductive and resistive "streaks" of Meqbel et al. (2014)), may actually represent anisotropic layers. As a step towards better understanding how anisotropy is manifest in 3D inverse models, and to better incorporate anisotropy in 3D MT interpretations, we have implemented new 1D, 2D AND 3D forward modeling codes which allow for general anisotropy and are implemented in matlab using an object oriented (OO) approach. The 1D code is used primarily to provide boundary conditions (BCs). For the 2D case we have used the OO approach to quickly develop and compare several variants including different formulations (three coupled electric field components, one electric and one magnetic component coupled) and different discretizations (staggered and fixed grids). The 3D case is implemented in integral form on a staggered grid, using either 1D or 2D BC. Iterative solvers, including divergence correction, allow solution for large model grids. As an initial application of these codes we are conducting synthetic inversion tests. We construct test models by replacing streaky conductivity layers, as found at the top of the mantle in the EarthScope models of Meqbel et al. (2014), with simpler smoothly varying anisotropic layers. The modeling process is iterated to obtain a reasonable match to actual data. Synthetic data generated from these 3D anisotropic models can then be inverted with a 3D code (ModEM) and compared to the inversions obtained with actual data. Results will be assessed, taking into account the diffusive nature of EM imaging, to better understand how actual anisotropy is mapped to structure by 3D

  6. A pilot study to evaluate multi-dimensional effects of dance for people with Parkinson's disease.

    PubMed

    Ventura, Maria I; Barnes, Deborah E; Ross, Jessica M; Lanni, Kimberly E; Sigvardt, Karen A; Disbrow, Elizabeth A

    2016-11-01

    Parkinson's disease (PD) is a progressive neurodegenerative disease associated with deficits in motor, cognitive, and emotion/quality of life (QOL) domains, yet most pharmacologic and behavioral interventions focus only on motor function. Our goal was to perform a pilot study of Dance for Parkinson's-a community-based program that is growing in popularity-in order to compare effect sizes across multiple outcomes and to inform selection of primary and secondary outcomes for a larger trial. Study participants were people with PD who self-enrolled in either Dance for Parkinson's classes (intervention group, N=8) or PD support groups (control group, N=7). Assessments of motor function (Timed-Up-and-Go, Gait Speed, Standing Balance Test), cognitive function (Test of Everyday Attention, Verbal Fluency, Alternate Uses, Digit Span Forward and Backward), and emotion/QOL (Geriatric Depression Scale, Falls Efficacy Scale-International, Parkinson's Disease Questionnaire-39 (total score and Activities of Daily Living subscale)) were performed in both groups at baseline and follow-up. Standardized effect sizes were calculated within each group and between groups for all 12 measures. Effect sizes were positive (suggesting improvement) for all 12 measures within the intervention group and 7 of 12 measures within the control group. The largest between-group differences were observed for the Test of Everyday Attention (a measure of cognitive switching), gait speed and falls efficacy. Our findings suggest that dance has potential to improve multiple outcomes in people with PD. Future trials should consider co-primary outcomes given potential benefits in motor, cognitive and emotion/QOL domains.

  7. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    Record snippet (truncated): ...to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large... in FY08. NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU. These... visibility. The regional model (COAMPS/Aerosol) became operational during OIF. The global model Navy Aerosol Analysis and Prediction System (NAAPS)...

  8. Voice Dysfunction in Dysarthria: Application of the Multi-Dimensional Voice Program.

    ERIC Educational Resources Information Center

    Kent, R. D.; Vorperian, H. K.; Kent, J. F.; Duffy, J. R.

    2003-01-01

    Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice in individuals with dysarthria. In Part 2, acoustic data are reviewed for dysarthria associated with Parkinson disease (PD), cerebellar disease, amyotrophic lateral sclerosis, traumatic brain injury, unilateral hemispheric stroke, and essential tremor.…

  9. Multi-Dimensional Evaluation for Module Improvement: A Mathematics-Based Case Study

    ERIC Educational Resources Information Center

    Ellery, Karen

    2006-01-01

    Due to a poor module evaluation, mediocre student grades and a difficult teaching experience in lectures, the Data Analysis section of a first year core module, Research Methods for Social Sciences (RMSS), offered at the University of KwaZulu-Natal in South Africa, was completely revised. In order to review the effectiveness of these changes in…

  10. Multiple time scale complexity analysis of resting state FMRI.

    PubMed

    Smith, Robert X; Yan, Lirong; Wang, Danny J J

    2014-06-01

    The present study explored multi-scale entropy (MSE) analysis to investigate the entropy of resting state fMRI signals across multiple time scales. MSE analysis was developed to distinguish random noise from complex signals since the entropy of the former decreases with longer time scales while the latter signal maintains its entropy due to a "self-resemblance" across time scales. A long resting state BOLD fMRI (rs-fMRI) scan with 1000 data points was performed on five healthy young volunteers to investigate the spatial and temporal characteristics of entropy across multiple time scales. A shorter rs-fMRI scan with 240 data points was performed on a cohort of subjects consisting of healthy young (age 23 ± 2 years, n = 8) and aged volunteers (age 66 ± 3 years, n = 8) to investigate the effect of healthy aging on the entropy of rs-fMRI. The results showed that MSE of gray matter, rather than white matter, resembles closely that of f^(-1) (1/f) noise over multiple time scales. By filtering out high frequency random fluctuations, MSE analysis is able to reveal enhanced contrast in entropy between gray and white matter, as well as between age groups at longer time scales. Our data support the use of MSE analysis as a validation metric for quantifying the complexity of rs-fMRI signals.
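
    As a rough illustration of the multi-scale entropy procedure referred to above, the sketch below coarse-grains a signal at successive scale factors and computes a sample entropy at each scale. The parameter choices (m = 2, r = 0.15 of the signal standard deviation), the simplified counting convention, and the white-noise test signal are illustrative assumptions, not the settings of the study.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """Simplified sample entropy; counting conventions vary slightly
          across implementations."""
          x = np.asarray(x, dtype=float)
          r = 0.15 * np.std(x) if r is None else r

          def count_similar(length):
              templates = np.array([x[i:i + length] for i in range(len(x) - length)])
              total = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)
                  total += np.sum(dist <= r) - 1      # exclude the self-match
              return total

          b, a = count_similar(m), count_similar(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def coarse_grain(x, scale):
          n = (len(x) // scale) * scale
          return x[:n].reshape(-1, scale).mean(axis=1)

      rng = np.random.default_rng(0)
      signal = rng.standard_normal(1000)              # stand-in for a BOLD time series
      mse = [sample_entropy(coarse_grain(signal, s)) for s in range(1, 11)]
      print(np.round(mse, 3))                         # entropy vs. scale factor

    For the white-noise stand-in the entropy falls with increasing scale factor, which is the behaviour the abstract contrasts with the scale-invariant, 1/f-like signals found in gray matter.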

  11. Using Qualitative Methods to Inform Scale Development

    ERIC Educational Resources Information Center

    Rowan, Noell; Wulff, Dan

    2007-01-01

    This article describes the process by which one study utilized qualitative methods to create items for a multi-dimensional scale to measure twelve-step program affiliation. The process included interviewing fourteen addicted persons while in twelve-step focused treatment about specific pros (things they like or would miss out on by not being…

  12. Modulation and nonlinear evolution of multi-dimensional Langmuir wave envelopes in a relativistic plasma

    NASA Astrophysics Data System (ADS)

    Shahmansouri, M.; Misra, A. P.

    2016-12-01

    The modulational instability (MI) and the evolution of weakly nonlinear two-dimensional (2D) Langmuir wave (LW) packets are studied in an unmagnetized collisionless plasma with weakly relativistic electron flow. By using a 2D self-consistent relativistic fluid model and employing the standard multiple-scale technique, a coupled set of Davey-Stewartson (DS)-like equations is derived, which governs the slow modulation and the evolution of LW packets in relativistic plasmas. It is found that the relativistic effects favor the instability of LW envelopes in the k - θ plane, where k is the wave number and θ ( 0 ≤ θ ≤ π ) the angle of modulation. It is also found that as the electron thermal velocity or θ increases, the growth rate of MI increases with cutoffs at higher wave numbers of modulation. Furthermore, in the nonlinear evolution of the DS-like equations, it is seen that with an effect of the relativistic flow, a Gaussian wave beam collapses in a finite time, and the collapse can be arrested when the effect of the thermal pressure or the relativistic flow is slightly relaxed. The present results may be useful to the MI and the formation of localized LW envelopes in cosmic plasmas with a relativistic flow of electrons.

  13. Construct distinctiveness and variance composition of multi-dimensional instruments: Three short-form masculinity measures.

    PubMed

    Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R

    2015-07-01

    Focusing on a set of 3 multidimensional measures of conceptually related but different aspects of masculinity, we use factor analytic techniques to address 2 issues: (a) whether psychological constructs that are theoretically distinct but require fairly subtle discriminations by survey respondents can be accurately captured by self-report measures, and (b) how to better understand sources of variance in subscale and total scores developed from such measures. The specific measures investigated were the: (a) Male Role Norms Inventory-Short Form (MRNI-SF); (b) Conformity to Masculine Norms Inventory-46 (CMNI-46); and (c) Gender Role Conflict Scale-Short Form (GRCS-SF). Data (N = 444) were from community-dwelling and college men who responded to an online survey. EFA results demonstrated the discriminant validity of the 20 subscales comprising the 3 instruments, thus indicating that relatively subtle distinctions between norms, conformity, and conflict can be captured with self-report measures. CFA was used to compare 2 different methods of modeling a broad/general factor for each of the 3 instruments. For the CMNI-46 and MRNI-SF, a bifactor model fit the data significantly better than did a hierarchical factor model. In contrast, the hierarchical model fit better for the GRCS-SF. The discussion addresses implications of these specific findings for use of the measures in research studies, as well as broader implications for measurement development and assessment in other research domains of counseling psychology which also rely on multidimensional self-report instruments.

  14. Tectonic setting of basic igneous and metaigneous rocks of Borborema Province, Brazil using multi-dimensional geochemical discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeet K.; Oliveira, Elson P.

    2015-03-01

    Fifteen multi-dimensional diagrams for basic and ultrabasic rocks, based on log-ratio transformations, were used to infer tectonic setting for eight case studies of Borborema Province, NE Brazil. The applications of these diagrams indicated the following results: (1) a mid-ocean ridge setting for Forquilha eclogites (Central Ceará domain) during the Mesoproterozoic; (2) an oceanic plateau setting for Algodões amphibolites (Central Ceará domain) during the Paleoproterozoic; (3) an island arc setting for Brejo Seco amphibolites (Riacho do Pontal belt) during the Proterozoic; (4) an island arc to mid-ocean ridge setting for greenschists of the Monte Orebe Complex (Riacho do Pontal belt) during the Neoproterozoic; (5) within-plate (continental) setting for Vaza Barris domain mafic rocks (Sergipano belt) during the Neoproterozoic; (6) a less precise arc to continental rift for the Gentileza unit metadiorite/gabbro (Sergipano belt) during the Neoproterozoic; (7) an island arc setting for the Novo Gosto unit metabasalts (Sergipano belt) during Neoproterozoic; (8) continental rift setting for Rio Grande do Norte basic rocks during Miocene.

  15. Closed-cycle cold helium magic-angle spinning for sensitivity-enhanced multi-dimensional solid-state NMR.

    PubMed

    Matsuki, Yoh; Nakamura, Shinji; Fukui, Shigeo; Suematsu, Hiroto; Fujiwara, Toshimichi

    2015-10-01

    Magic-angle spinning (MAS) NMR is a powerful tool for studying molecular structure and dynamics, but suffers from its low sensitivity. Here, we developed a novel helium-cooling MAS NMR probe system adopting a closed-loop gas recirculation mechanism. In addition to the sensitivity gain due to low temperature, the present system has enabled highly stable MAS (νR = 4-12 kHz) at cryogenic temperatures (T = 35-120 K) for over a week without consuming helium, at an electricity cost of 16 kW. High-resolution 1D and 2D data were recorded for a crystalline tri-peptide sample at T = 40 K and B0 = 16.4 T, where an order-of-magnitude sensitivity gain was demonstrated versus room-temperature measurement. The low-cost and long-term stable MAS strongly promotes broader application of the brute-force sensitivity-enhanced multi-dimensional MAS NMR, as well as dynamic nuclear polarization (DNP)-enhanced NMR in a temperature range lower than 100 K.

  16. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-07-01

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement the resource optimization and scheduling needed for high-level service guarantees because of the communication obstacles among these domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on the MDRI architecture in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes.

  17. Zwitterionic hydrophilic interaction solid-phase extraction and multi-dimensional mass spectrometry for shotgun lipidomic study of Hypophthalmichthys nobilis.

    PubMed

    Jin, Renyao; Li, Linqiu; Feng, Junli; Dai, Zhiyuan; Huang, Yao-Wen; Shen, Qing

    2017-02-01

    Zwitterionic hydrophilic interaction liquid chromatography (ZIC-HILIC) material was used as solid-phase extraction sorbent for purification of phospholipids from Hypophthalmichthys nobilis. The conditions were optimized to be pH 6, flow rate 2.0 mL·min(-1), loading breakthrough volume ⩽5 mL, and eluting solvent 5 mL. Afterwards, the extracts were analyzed by multi-dimensional mass spectrometry (MDMS)-based shotgun lipidomics; 20 species of phosphatidylcholine (PC), 22 species of phosphatidylethanolamine (PE), 15 species of phosphatidylserine (PS), and 5 species of phosphatidylinositol (PI) were identified, with contents of 224.1, 124.1, 27.4, and 34.7 μg·g(-1), respectively. The MDMS method was validated in terms of linearity (0.9963-0.9988), LOD (3.7 ng·mL(-1)), LOQ (9.8 ng·mL(-1)), intra-day precision (<3.64%), inter-day precision (<5.31%), and recovery (78.8-85.6%). ZIC-HILIC and MDMS shotgun lipidomics are efficient for studying phospholipids in H. nobilis.

  18. Taking sociality seriously: the structure of multi-dimensional social networks as a source of information for individuals

    PubMed Central

    Barrett, Louise; Henzi, S. Peter; Lusseau, David

    2012-01-01

    Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural ‘knock-outs’ in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054

  19. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network

    PubMed Central

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young

    2016-01-01

    Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, optical network and processing unit cloud have been decoupled from each other, so that their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement the resource optimization and scheduling needed for high-level service guarantees because of the communication obstacles among these domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources effectively to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load scenario is also quantitatively evaluated to demonstrate the efficiency of the proposal based on the MDRI architecture in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes. PMID:27465296

  20. Optimal energy management in a dual-storage fuel-cell hybrid vehicle using multi-dimensional dynamic programming

    NASA Astrophysics Data System (ADS)

    Ansarey, Mehdi; Shariat Panahi, Masoud; Ziarati, Hussein; Mahjoob, Mohammad

    2014-03-01

    Hybrid storage systems consisting of battery and ultra-capacitor have recently emerged as an alternative to the conventional single buffer layout in hybrid vehicles. Their high power and energy density could improve the performance indices of the vehicle, provided that an optimal energy management strategy is employed that could handle systems with multiple degrees of freedom (DOF). The majority of existing energy management strategies is limited to a single DOF and the small body of work on multi-DOF systems is mainly heuristic-based. We propose an optimal solution to the energy management problem in fuel-cell hybrid vehicles with dual storage buffer for fuel economy in a standard driving cycle using multi-dimensional dynamic programming (MDDP). An efficient MDDP code is developed using MATLAB™'s vectorization feature that helps reduce the inherently high computational cost of MDDP. Results of multiple simulated experiments are presented to demonstrate the applicability and performance of the proposed strategy. A comparison is also made between a single and a double buffer fuel-cell hybrid vehicle in various driving cycles to determine the maximum reduction in fuel consumption that can be achieved by the addition of an ultra-capacitor.
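
    To make the multi-dimensional dynamic programming idea concrete, the sketch below runs a backward DP over a coarse two-dimensional state grid (battery and ultracapacitor state of charge) for a short synthetic power-demand profile. The component models, efficiencies, grids and cost figures are purely illustrative assumptions and are far simpler than the vehicle model optimized in the paper.

      import numpy as np
      import itertools

      dt = 1.0                                            # time step [s]
      demand = np.array([20.0, 35.0, 10.0, 40.0, 15.0])   # hypothetical power demand [kW]

      soc_b = np.linspace(0.4, 0.8, 9)                    # battery SOC grid
      soc_c = np.linspace(0.3, 0.9, 9)                    # ultracapacitor SOC grid
      E_b, E_c = 2000.0, 100.0                            # usable energy contents [kJ]

      P_fc = np.linspace(0.0, 50.0, 6)                    # admissible fuel-cell power levels [kW]
      P_bt = np.linspace(-20.0, 20.0, 9)                  # admissible battery power levels [kW]
      fuel_per_kJ = 0.02                                  # toy fuel cost per kJ of fuel-cell output

      nearest = lambda grid, v: int(np.argmin(np.abs(grid - v)))

      # Cost-to-go over the 2-D state grid; terminal cost is zero.
      J = np.zeros((soc_b.size, soc_c.size))
      for k in reversed(range(demand.size)):
          J_new = np.full_like(J, np.inf)
          for (i, sb), (j, sc) in itertools.product(enumerate(soc_b), enumerate(soc_c)):
              for pf, pb in itertools.product(P_fc, P_bt):
                  pc = demand[k] - pf - pb                # ultracapacitor covers the residual
                  sb2 = sb - pb * dt / E_b                # next battery SOC
                  sc2 = sc - pc * dt / E_c                # next ultracapacitor SOC
                  if not (soc_b[0] <= sb2 <= soc_b[-1] and soc_c[0] <= sc2 <= soc_c[-1]):
                      continue                            # infeasible transition
                  cost = fuel_per_kJ * pf * dt + J[nearest(soc_b, sb2), nearest(soc_c, sc2)]
                  J_new[i, j] = min(J_new[i, j], cost)
          J = J_new

      print("minimum fuel cost from mid-range initial SOCs:", J[4, 4])

    The nested state and control loops show why the computational cost grows quickly with the number of degrees of freedom, which is the motivation for the vectorized MDDP implementation described in the abstract.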

  1. Multi-dimensional optimization of a terawatt seeded tapered Free Electron Laser with a Multi-Objective Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Juhao; Hu, Newman; Setiawan, Hananiel; Huang, Xiaobiao; Raubenheimer, Tor O.; Jiao, Yi; Yu, George; Mandlekar, Ajay; Spampinati, Simone; Fang, Kun; Chu, Chungming; Qiang, Ji

    2017-02-01

    There is a great interest in generating high-power hard X-ray Free Electron Laser (FEL) pulses at the terawatt (TW) level that can enable coherent diffraction imaging of complex molecules like proteins and probe fundamental high-field physics. A feasibility study of producing such X-ray pulses was carried out employing a configuration beginning with a Self-Amplified Spontaneous Emission FEL, followed by a "self-seeding" crystal monochromator generating a fully coherent seed, and finishing with a long tapered undulator where the coherent seed recombines with the electron bunch and is amplified to high power. The undulator tapering profile, the phase advance in the undulator break sections, the quadrupole focusing strength, etc. are parameters to be optimized. A Genetic Algorithm (GA) is adopted for this multi-dimensional optimization. Concrete examples are given for LINAC Coherent Light Source (LCLS) and LCLS-II-type systems. An analytical estimate is also developed to cross-check the simulation and optimization results as a quick and complementary tool.
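
    As a schematic illustration of the genetic-algorithm optimization mentioned above, the sketch below evolves a small population of taper-profile parameters against a toy, single-objective fitness function. The fitness model, parameter ranges and GA settings are hypothetical stand-ins, not the multi-objective FEL simulation used in the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy fitness: peak "output power" as a smooth function of two taper
      # parameters (taper start z0 and taper strength a); purely illustrative.
      def fitness(x):
          z0, a = x
          return np.exp(-((z0 - 40.0) / 15.0) ** 2 - ((a - 0.006) / 0.004) ** 2)

      bounds = np.array([[10.0, 90.0], [0.0, 0.02]])       # [z0 range, a range]
      pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

      for gen in range(60):
          f = np.array([fitness(ind) for ind in pop])
          # Tournament selection: keep the fitter of two randomly drawn individuals.
          idx = rng.integers(0, len(pop), size=(len(pop), 2))
          parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # Uniform crossover followed by Gaussian mutation, clipped to the bounds.
          mates = parents[rng.permutation(len(parents))]
          mask = rng.random(parents.shape) < 0.5
          children = np.where(mask, parents, mates)
          children += rng.normal(0.0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])
          pop = np.clip(children, bounds[:, 0], bounds[:, 1])

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("best taper parameters found:", best)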

  2. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    SciTech Connect

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I. E-mail: sshibata@post.kek.jp

    2015-08-15

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  3. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.

  4. Stochastic optimization framework (SOF) for computer-optimized design, engineering, and performance of multi-dimensional systems and processes

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang

    2008-04-01

    Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: Multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies.
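
    Among the stochastic techniques listed in the keywords, simulated annealing is the simplest to sketch. The toy example below anneals a two-parameter model against a hypothetical, multi-modal misfit function; the cooling schedule and objective are illustrative assumptions, not the SOF implementation itself.

      import numpy as np

      rng = np.random.default_rng(2)

      def misfit(p):
          """Hypothetical model-vs-observation misfit with many local minima."""
          return np.sum(p ** 2 - 5.0 * np.cos(2.0 * np.pi * p)) + 10.0

      p = rng.uniform(-5.0, 5.0, size=2)        # current parameter vector
      best_p, best_e = p.copy(), misfit(p)
      T = 5.0                                   # initial "temperature"

      for step in range(20000):
          trial = p + rng.normal(0.0, 0.3, size=2)
          d_e = misfit(trial) - misfit(p)
          # Accept downhill moves always, uphill moves with Boltzmann probability.
          if d_e < 0 or rng.random() < np.exp(-d_e / T):
              p = trial
              if misfit(p) < best_e:
                  best_p, best_e = p.copy(), misfit(p)
          T *= 0.9995                           # geometric cooling schedule

      print("best parameters:", np.round(best_p, 3), "misfit:", round(best_e, 3))

    The acceptance of occasional uphill moves at finite temperature is what lets the search escape the local minima that trap the gradient-based steepest-descent methods named in the keyword list.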

  5. Do discrimination tasks discourage multi-dimensional stimulus processing? Evidence from a cross-modal object discrimination in rats.

    PubMed

    Jeffery, Kathryn J

    2007-11-02

    Neurobiologists are becoming increasingly interested in how complex cognitive representations are formed by the integration of sensory stimuli. To this end, discrimination tasks are frequently used to assess perceptual and cognitive processes in animals, because they are easy to administer and score, and the ability of an animal to make a particular discrimination establishes beyond doubt that the necessary perceptual/cognitive processes are present. It does not, however, follow that absence of discrimination means the animal cannot make a particular perceptual judgement; it may simply mean that the animal did not manage to discover the relevant discriminative stimulus when trying to learn the task. Here, it is shown that rats did not learn a cross-modal object discrimination (requiring association of each object's visual appearance with its odour) when trained on the complete task from the beginning. However, they could eventually make the discrimination when trained on the component parts step by step, showing that they were able to do the necessary cross-modal integration in the right circumstances. This finding adds to growing evidence that discrimination tasks tend to encourage feature-based discrimination, perhaps by engaging automatic, habit-based brain systems. Thus, they may not be the best way to assess the formation of multi-dimensional stimulus representations of the kind needed in more complex cognitive processes such as declarative memory. Instead, more natural tasks such as spontaneous exploration may be preferable.

  6. ITQ-54: a multi-dimensional extra-large pore zeolite with 20 × 14 × 12-ring channels

    DOE PAGES

    Jiang, Jiuxing; Yun, Yifeng; Zou, Xiaodong; ...

    2015-01-01

    A multi-dimensional extra-large pore silicogermanate zeolite, named ITQ-54, has been synthesised by in situ decomposition of the N,N-dicyclohexylisoindolinium cation into the N-cyclohexylisoindolinium cation. Its structure was solved by 3D rotation electron diffraction (RED) from crystals of ca. 1 μm in size. The structure of ITQ-54 contains straight intersecting 20 × 14 × 12-ring channels along the three crystallographic axes and it is one of the few zeolites with extra-large channels in more than one direction. ITQ-54 has a framework density of 11.1 T atoms per 1000 Å3, which is one of the lowest among the known zeolites. ITQ-54 was obtained together with GeO2 as an impurity. A heavy liquid separation method was developed and successfully applied to remove this impurity from the zeolite. ITQ-54 is stable up to 600 °C and exhibits permanent porosity. The structure was further refined using powder X-ray diffraction (PXRD) data for both as-made and calcined samples.

  7. ITQ-54: a multi-dimensional extra-large pore zeolite with 20 × 14 × 12-ring channels

    SciTech Connect

    Jiang, Jiuxing; Yun, Yifeng; Zou, Xiaodong; Jorda, Jose Luis; Corma, Avelino

    2015-01-01

    A multi-dimensional extra-large pore silicogermanate zeolite, named ITQ-54, has been synthesised by in situ decomposition of the N,N-dicyclohexylisoindolinium cation into the N-cyclohexylisoindolinium cation. Its structure was solved by 3D rotation electron diffraction (RED) from crystals of ca. 1 μm in size. The structure of ITQ-54 contains straight intersecting 20 × 14 × 12-ring channels along the three crystallographic axes and it is one of the few zeolites with extra-large channels in more than one direction. ITQ-54 has a framework density of 11.1 T atoms per 1000 Å3, which is one of the lowest among the known zeolites. ITQ-54 was obtained together with GeO2 as an impurity. A heavy liquid separation method was developed and successfully applied to remove this impurity from the zeolite. ITQ-54 is stable up to 600 °C and exhibits permanent porosity. The structure was further refined using powder X-ray diffraction (PXRD) data for both as-made and calcined samples.

  8. Interpolation of multi-sheeted multi-dimensional potential-energy surfaces via a linear optimization procedure.

    PubMed

    Opalka, Daniel; Domcke, Wolfgang

    2013-06-14

    Significant progress has been achieved in recent years with the development of high-dimensional permutationally invariant analytic Born-Oppenheimer potential-energy surfaces, making use of polynomial invariant theory. In this work, we have developed a generalization of this approach which is suitable for the construction of multi-sheeted multi-dimensional potential-energy surfaces exhibiting seams of conical intersections. The method avoids the nonlinear optimization problem which is encountered in the construction of multi-sheeted diabatic potential-energy surfaces from ab initio electronic-structure data. The key of the method is the expansion of the coefficients of the characteristic polynomial in polynomials which are invariant with respect to the point group of the molecule or the permutation group of like atoms. The multi-sheeted adiabatic potential-energy surface is obtained from the Frobenius companion matrix which contains the fitted coefficients. A three-sheeted nine-dimensional adiabatic potential-energy surface of the (2)T2 electronic ground state of the methane cation has been constructed as an example of the application of this method.
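
    The central step described above, recovering adiabatic energies as eigenvalues of the Frobenius companion matrix of the fitted characteristic polynomial, can be illustrated in a few lines. The coefficient values below are arbitrary placeholders for what would, in practice, be invariant-polynomial fits to ab initio data.

      import numpy as np

      def adiabatic_energies(c):
          """Eigenvalues of the companion matrix of the monic polynomial
          p(E) = E^n + c[n-1]*E^(n-1) + ... + c[1]*E + c[0]."""
          n = len(c)
          comp = np.zeros((n, n))
          comp[1:, :-1] = np.eye(n - 1)          # ones on the sub-diagonal
          comp[:, -1] = -np.asarray(c)           # last column carries -c[0..n-1]
          # Real parts taken; physically meaningful fits yield real energies.
          return np.sort(np.linalg.eigvals(comp).real)

      # Placeholder coefficients for a three-sheeted surface at one nuclear geometry.
      coeffs = [-0.006, 0.11, -0.60]
      print(adiabatic_energies(coeffs))
      # Cross-check against the polynomial roots computed directly.
      print(np.sort(np.roots([1.0, -0.60, 0.11, -0.006]).real))

    Because the fit is done on the polynomial coefficients rather than on the adiabatic sheets themselves, the seams of conical intersection appear simply as parameter sets where the companion matrix has degenerate eigenvalues, which is the nonlinear-optimization problem the method avoids.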

  9. Conservative-variable average states for equilibrium gas multi-dimensional fluxes

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.

    1992-01-01

    Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
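
    For orientation, the classical perfect-gas Roe-type average state, which the equilibrium-gas average discussed above generalizes, weights the left and right states by the square root of density; in LaTeX notation (a standard textbook result, not the paper's equilibrium-gas expressions):

      \tilde{u} = \frac{\sqrt{\rho_L}\, u_L + \sqrt{\rho_R}\, u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad
      \tilde{H} = \frac{\sqrt{\rho_L}\, H_L + \sqrt{\rho_R}\, H_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}, \qquad
      \tilde{c}^{\,2} = (\gamma - 1)\left(\tilde{H} - \tfrac{1}{2}\tilde{u}^{2}\right),

    where the tilde denotes the average state and H is the specific total enthalpy. For a general equilibrium gas no single γ exists, which is why the average pressure derivatives must instead satisfy the compatibility relations constraining the pressure Jacobian derivatives described in the abstract.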

  10. Imaging Multi-Dimensional Electrical Resistivity Structure as a Tool in Developing Enhanced Geothermal Systems (EGS)

    SciTech Connect

    Philip E. Wannamaker

    2007-12-31

    The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.) who supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver, and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area supported by the DOE and the Navy. Inversion of these data on a 2 Gb desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion which used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or, less likely, cryptic acid sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.

  11. Multi-dimensional Crustal and Lithospheric Structure of the Atlas Mountains of Morocco by Magnetotelluric Imaging

    NASA Astrophysics Data System (ADS)

    Kiyan, D.; Jones, A. G.; Fullea, J.; Ledo, J.; Siniscalchi, A.; Romano, G.

    2014-12-01

    The PICASSO (Program to Investigate Convective Alboran Sea System Overturn) project and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROSCORES TOPO-EUROPE project) project were designed to collect high resolution, multi-disciplinary lithospheric scale data in order to understand the tectonic evolution and lithospheric structure of the western Mediterranean. The over-arching objectives of the magnetotelluric (MT) component of the projects are (i) to provide new electrical conductivity constraints on the crustal and lithospheric structure of the Atlas Mountains, and (ii) to test the hypotheses for explaining the purported lithospheric cavity beneath the Middle and High Atlas inferred from potential-field lithospheric modeling. We present the results of an MT experiment we carried out in Morocco along two profiles: an approximately N-S oriented profile crossing the Middle Atlas, the High Atlas and the eastern Anti-Atlas to the east (called the MEK profile, for Meknes) and a NE-SW oriented profile through the western High Atlas to the west (called the MAR profile, for Marrakech). Our results are derived from three-dimensional (3-D) MT inversion of the MT data set employing the parallel version of the Modular system for Electromagnetic inversion (ModEM) code. The distinct conductivity differences between the Middle-High Atlas (conductive) and the Anti-Atlas (resistive) correlate with the South Atlas Front fault, the depth extent of which appears to be limited to the uppermost mantle (approx. 60 km). In all inverse solutions, the crust and the upper mantle show resistive signatures (approx. 1,000 Ωm) beneath the Anti-Atlas, which is part of the stable West African Craton. Partial melt and/or exotic fluids enriched in volatiles produced by the melt can account for the high middle to lower crustal and uppermost mantle conductivity in the Folded Middle Atlas, the High Moulouya Plain and the

  12. Sensate abstraction: hybrid strategies for multi-dimensional data in expressive virtual reality contexts

    NASA Astrophysics Data System (ADS)

    West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman

    2009-02-01

    ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude, delay and physical modeling-based, real-time spatialization approaches for enhanced expressivity in the virtual sound environment that was developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.

  13. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  14. Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia

    ERIC Educational Resources Information Center

    Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal

    2014-01-01

    A comprehensive psychometric analysis of Rizzo et al.'s (1970) Role Conflict & Ambiguity (RCA) scales was performed after their distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis includes calculation of Cronbach alpha coefficients and inter-item statistics, whereas validity was determined by…

  15. A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Van Leer, Bram

    1989-01-01

    A scheme of solving the two-dimensional Euler equations is developed. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.

  16. Multi-dimensional high order essentially non-oscillatory finite difference methods in generalized coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1992-01-01

    The nonlinear stability of compact schemes for shock calculations is investigated. In recent years, compact schemes have been used in various numerical simulations including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation (1D) or maximum norm (multi-D) stable and produces nice numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, a preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.

  17. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

  18. Honeycomb: Visual Analysis of Large Scale Social Networks

    NASA Astrophysics Data System (ADS)

    van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.

    The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of these data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.

  19. Scaling range of power laws that originate from fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013); doi: 10.1016/j.physa.2013.01.049] to other techniques of fluctuation analysis (FA). The new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that contrary to DFA, DMA and MDMA techniques exhibit power law dependence of the scaling range with respect to the length of the searched signal and with respect to the accuracy R^2 of the fit to the considered scaling law imposed by DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We find also a simple generalization of this power law relation for series with a different level of autocorrelations measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finances, or physiology, where the huge number of short time series has to be examined at once and wherever the preliminary check of the scaling range regime for each of the series separately is neither effective nor possible.
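
    To make the fluctuation-analysis quantities concrete, the sketch below computes a first-order DFA fluctuation function for a synthetic series and fits both the scaling exponent and the R^2 of the fit over a trial scaling range. It is a generic DFA implementation under stated simplifications (non-overlapping windows, single detrending order), not the authors' DMA or MDMA code.

      import numpy as np

      def dfa_fluctuation(x, scales, order=1):
          """Detrended fluctuation function F(s) of series x."""
          y = np.cumsum(x - np.mean(x))                     # integrated profile
          F = []
          for s in scales:
              n_seg = len(y) // s
              var = []
              for k in range(n_seg):
                  seg = y[k * s:(k + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, order), t)
                  var.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(var)))
          return np.array(F)

      rng = np.random.default_rng(3)
      series = rng.standard_normal(5000)                     # uncorrelated test signal
      scales = np.unique(np.logspace(1, 3, 20).astype(int))  # trial scaling range

      F = dfa_fluctuation(series, scales)
      slope, intercept = np.polyfit(np.log(scales), np.log(F), 1)
      resid = np.log(F) - (slope * np.log(scales) + intercept)
      r2 = 1.0 - np.sum(resid ** 2) / np.sum((np.log(F) - np.mean(np.log(F))) ** 2)
      print(f"DFA exponent ~ {slope:.2f}, goodness of fit R^2 = {r2:.4f}")

    For uncorrelated data the fitted exponent is close to 0.5; the paper's question is how wide a range of scales can be used in this fit, for a given series length and a required R^2, before the power-law description breaks down.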

  20. Scaling range of power laws that originate from fluctuation analysis.

    PubMed

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013)] to other techniques of fluctuation analysis (FA). The new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that contrary to DFA, DMA and MDMA techniques exhibit power law dependence of the scaling range with respect to the length of the searched signal and with respect to the accuracy R^{2} of the fit to the considered scaling law imposed by DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We find also a simple generalization of this power law relation for series with a different level of autocorrelations measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finances, or physiology, where the huge number of short time series has to be examined at once and wherever the preliminary check of the scaling range regime for each of the series separately is neither effective nor possible.

  1. High-resolution heteronuclear multi-dimensional NMR spectroscopy in magnetic fields with unknown spatial variations.

    PubMed

    Zhang, Zhiyong; Huang, Yuqing; Smith, Pieter E S; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong

    2014-05-01

    Heteronuclear NMR spectroscopy is an extremely powerful tool for determining the structures of organic molecules and is of particular significance in the structural analysis of proteins. In order to leverage the method's potential for structural investigations, obtaining high-resolution NMR spectra is essential and this is generally accomplished by using very homogeneous magnetic fields. However, there are several situations where magnetic field distortions and thus line broadening is unavoidable, for example, the samples under investigation may be inherently heterogeneous, and the magnet's homogeneity may be poor. This line broadening can hinder resonance assignment or even render it impossible. We put forth a new class of pulse sequences for obtaining high-resolution heteronuclear spectra in magnetic fields with unknown spatial variations based on distant dipolar field modulations. This strategy's capabilities are demonstrated with the acquisition of high-resolution 2D gHSQC and gHMBC spectra. These sequences' performances are evaluated on the basis of their sensitivities and acquisition efficiencies. Moreover, we show that by encoding and decoding NMR observables spatially, as is done in ultrafast NMR, an extra dimension containing J-coupling information can be obtained without increasing the time necessary to acquire a heteronuclear correlation spectrum. Since the new sequences relax magnetic field homogeneity constraints imposed upon high-resolution NMR, they may be applied in portable NMR sensors and studies of heterogeneous chemical and biological materials.

  2. Multi-dimensional construction of a novel active yolk@conductive shell nanofiber web as a self-standing anode for high-performance lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai

    2015-11-01

    A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode.

  3. Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS).

    PubMed

    Kim, Se-Kang; Frisby, Craig L; Davison, Mark L

    2004-10-01

    Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via Multidimensional Scaling (PAMS; Davison, 1996), is introduced to meet the challenge. PAMS extends the use of simple multidimensional scaling methods to identify latent profiles in a multi-test battery. Application of PAMS to profile analysis is described. The PAMS model is then used to identify latent profiles from a subgroup (N = 357) within the sample of the Woodcock-Johnson Psychoeducational Battery-Revised (WJ-R; McGrew, Werder, & Woodcock, 1991; Woodcock & Johnson, 1989), followed by a discussion of procedures for interpreting participants' observed score profiles from the latent PAMS profiles. Finally, advantages and limitations of the PAMS technique are discussed.
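
    A minimal sketch of the PAMS idea, applying multidimensional scaling to inter-subtest distances and then expressing each person's centered score profile as a weighted combination of the resulting dimension coordinates, might look as follows. The random score matrix, the Euclidean distance choice and the two-dimensional solution are illustrative assumptions (using scikit-learn's MDS), not the WJ-R analysis itself.

      import numpy as np
      from sklearn.manifold import MDS
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(4)
      scores = rng.normal(100.0, 15.0, size=(357, 8))    # hypothetical person x subtest matrix

      # Distances between subtests (columns) define the MDS input.
      D = squareform(pdist(scores.T, metric="euclidean"))
      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      latent = mds.fit_transform(D)                      # 8 subtests x 2 latent profile patterns

      # Each person's centered profile is regressed on the latent patterns to obtain
      # person-level pattern weights (profile level is removed by the centering).
      centered = scores - scores.mean(axis=1, keepdims=True)
      weights, *_ = np.linalg.lstsq(latent, centered.T, rcond=None)
      print("latent profile patterns:\n", np.round(latent, 2))
      print("first person's pattern weights:", np.round(weights[:, 0], 3))

    Separating the centered pattern (captured by the MDS dimensions) from the profile level (the person's mean score) is what lets PAMS report both pattern and level information, which the abstract notes cluster analysis and modal profile analysis do not guarantee.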

  4. Real-time Data Fusion Platforms: The Need of Multi-dimensional Data-driven Research in Biomedical Informatics.

    PubMed

    Raje, Satyajeet; Kite, Bobbie; Ramanathan, Jay; Payne, Philip

    2015-01-01

    Systems designed to expedite data preprocessing tasks such as data discovery, interpretation, and integration that are required before data analysis drastically impact the pace of biomedical informatics research. Current commercial interactive and real-time data integration tools are designed for large-scale business analytics requirements. In this paper we identify the need for end-to-end data fusion platforms from the researcher's perspective, supporting ad-hoc data interpretation and integration.

  5. Application Perspective of 2D+SCALE Dimension

    NASA Astrophysics Data System (ADS)

    Karim, H.; Rahman, A. Abdul

    2016-09-01

    Different applications or users need different abstractions of spatial models, dimensionalities and specifications of their datasets due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information System (GIS). One of the focus trends in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit various scale-dependent application needs. In this paper, 2D spatial datasets that have been scaled up with scale as the third dimension are addressed as 2D+scale (or 3D-scale) dimension. Nowadays, various data structures, data models, approaches, schemas, and formats have been proposed as the best approaches to support a variety of applications and dimensionalities in 3D topology. However, only a few of them consider the element of scale as their targeted dimension. As far as the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structure and format), depending on application requirements (topology, semantics and function). This paper discusses current and potential new applications that could be integrated with the 3D-scale dimension approach. Previous and current work on the scale dimension, the requirements to be preserved for any given application, implementation issues and potential future applications form the major discussion of this paper.

  6. Multiple-length-scale deformation analysis in a thermoplastic polyurethane

    PubMed Central

    Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.

    2015-01-01

    Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945

  7. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  8. Multi-dimensional construction of a novel active yolk@conductive shell nanofiber web as a self-standing anode for high-performance lithium-ion batteries.

    PubMed

    Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai

    2015-12-21

    A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode.

  9. Multi-Dimensional Health Assessment Questionnaire in China: Reliability, Validity and Clinical Value in Patients with Rheumatoid Arthritis

    PubMed Central

    Song, Yang; Zhu, Li-an; Wang, Su-li; Leng, Lin; Bucala, Richard; Lu, Liang-Jing

    2014-01-01

    Objective To evaluate the psychometric properties and clinical utility of the Chinese Multidimensional Health Assessment Questionnaire (MDHAQ-C) in patients with rheumatoid arthritis (RA) in China. Methods 162 RA patients were recruited in the evaluation process. The reliability of the questionnaire was tested by internal consistency and item analysis. Convergent validity was assessed by correlations of MDHAQ-C with the Health Assessment Questionnaire (HAQ), the 36-item Short-Form Health Survey (SF-36) and the Hospital Anxiety and Depression Scale (HAD). Discriminant validity was tested in groups of patients with varied disease activities and functional classes. To evaluate the clinical value, correlations were calculated between MDHAQ-C and indices of clinical relevance and disease activity. Agreement with the Disease Activity Score (DAS28) and Clinical Disease Activity Index (CDAI) was estimated. Results Cronbach's alpha was 0.944 for the Function scale (FN) and 0.768 for the scale of psychological status (PS). The item analysis indicated that all items of FN and PS correlated at an acceptable level. MDHAQ-C correlated significantly with these questionnaires on most scales, and scale scores differed significantly across groups of different disease activity and functional status. MDHAQ-C showed moderate to high correlation with most clinical indices and high correlation with disease activity, with Spearman coefficients of 0.701 for DAS28 and 0.843 for CDAI. The overall agreement of categories was satisfactory. Conclusion MDHAQ-C is a reliable, valid instrument for functional measurement and a feasible, informative quantitative index for busy clinical settings in Chinese RA patients. PMID:24848431
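
    For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from an item-response matrix. The sketch below uses simulated responses, not the MDHAQ-C data; the sample size of 162 is borrowed only to mimic the study's scale.

    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = scale items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_variances / total_variance)

    # Simulated 10-item scale answered by 162 respondents sharing a common trait.
    rng = np.random.default_rng(1)
    trait = rng.normal(size=(162, 1))
    responses = trait + 0.8 * rng.normal(size=(162, 10))
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")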

  10. Scale analysis using X-ray microfluorescence and computed radiography

    NASA Astrophysics Data System (ADS)

    Candeias, J. P.; de Oliveira, D. F.; dos Anjos, M. J.; Lopes, R. T.

    2014-02-01

    Scale deposits are the most common and most troublesome damage problems in the oil field and can occur in both production and injection wells. They occur because the minerals in produced water exceed their saturation limit as temperatures and pressures change. Scale can vary in appearance from hard crystalline material to soft, friable material, and the deposits can contain other minerals and impurities such as paraffin, salt and iron. In severe conditions, scale creates a significant restriction, or even a plug, in the production tubing. This study was conducted to identify the elements present in scale samples and to quantify the thickness of the scale layer using synchrotron radiation micro-X-ray fluorescence (SRμXRF) and computed radiography (CR) techniques. The SRμXRF results showed that the elements found in the scale samples were strontium, barium, calcium, chromium, sulfur and iron. The CR analysis showed that the thickness of the scale layer could be identified and quantified accurately. These results can help in decision making about removal of the deposited scale.

  11. Basin-scale Modeling of Geological Carbon Sequestration: Model Complexity, Injection Scenario and Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.

    2013-12-01

    Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and

  12. Geographical Scale Effects on the Analysis of Leptospirosis Determinants

    PubMed Central

    Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães

    2014-01-01

    Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide, but its pattern of transmission varies depending on where it occurs. Climate change may increase the number of cases, especially in developing countries like Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of the socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in such studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the number of people living in slums or the percentage of densely urbanized area. At the municipal scale, there were no significant correlations. At the local level, the percentage of area prone to flooding correlated best with leptospirosis incidence. PMID:25310536

  13. Tectonomagmatic origin of Precambrian rocks of Mexico and Argentina inferred from multi-dimensional discriminant-function based discrimination diagrams

    NASA Astrophysics Data System (ADS)

    Pandarinath, Kailasa

    2014-12-01

    Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous studies have extensively evaluated these newly developed diagrams and indicated their successful application in determining the original tectonic setting of younger and older, as well as seawater- and hydrothermally-altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) a within-plate (continental rift-ocean island) and continental rift (CR) setting for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams give inconsistent inferences. The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of

  14. MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A

    SciTech Connect

    Anusha, L. S.; Nagendra, K. N. E-mail: knn@iiap.res.in

    2013-04-20

    Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 A using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed from a two-dimensional snapshot of three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, whereas in the line core only the line-center polarization can be reproduced using the Hanle effect. For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with

  15. Shielding analysis methods available in the scale computational system

    SciTech Connect

    Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

    1986-01-01

    Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

  16. Full-scale system impact analysis: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Digital Document Storage Full Scale System can provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased, communications requirement analysis to ensure that adequate communications service can be supplied at a reasonable cost in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.

  17. Governing equations of transient soil water flow and soil water flux in multi-dimensional fractional anisotropic media and fractional time

    NASA Astrophysics Data System (ADS)

    Kavvas, M. Levent; Ercan, Ali; Polsinelli, James

    2017-03-01

    In this study dimensionally consistent governing equations of continuity and motion for transient soil water flow and soil water flux in fractional time and in fractional multiple space dimensions in anisotropic media are developed. Due to the anisotropy in the hydraulic conductivities of natural soils, the soil medium within which the soil water flow occurs is essentially anisotropic. Accordingly, in this study the fractional dimensions in two horizontal and one vertical directions are considered to be different, resulting in multi-fractional multi-dimensional soil space within which the flow takes place. Toward the development of the fractional governing equations, first a dimensionally consistent continuity equation for soil water flow in multi-dimensional fractional soil space and fractional time is developed. It is shown that the fractional soil water flow continuity equation approaches the conventional integer form of the continuity equation as the fractional derivative powers approach integer values. For the motion equation of soil water flow, or the equation of water flux within the soil matrix in multi-dimensional fractional soil space and fractional time, a dimensionally consistent equation is also developed. Again, it is shown that this fractional water flux equation approaches the conventional Darcy equation as the fractional derivative powers approach integer values. From the combination of the fractional continuity and motion equations, the governing equation of transient soil water flow in multi-dimensional fractional soil space and fractional time is obtained. It is shown that this equation approaches the conventional Richards equation as the fractional derivative powers approach integer values. Then by the introduction of the Brooks-Corey constitutive relationships for soil water into the fractional transient soil water flow equation, an explicit form of the equation is obtained in multi-dimensional fractional soil space and fractional time. The

  18. Tools for Large-Scale Mobile Malware Analysis

    SciTech Connect

    Bierma, Michael

    2014-01-01

    Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  19. Multi-scale curvature tensor analysis of machined surfaces

    NASA Astrophysics Data System (ADS)

    Bartkowiak, Tomasz; Brown, Christopher

    2016-12-01

    This paper demonstrates the use of multi-scale curvature analysis, a new areal surface characterization technique for better understanding topographies, for analyzing surfaces created by conventional machining and grinding. Curvature, like slope and area, changes with the scale of observation, or calculation, on irregular surfaces; therefore it can be used for multi-scale geometric analysis. Curvatures on a surface should be indicative of its topographically dependent behavior and are, in turn, influenced by the processing and use of the surface. Curvatures have not been well characterized previously. Curvature has been used for calculations in contact mechanics and for the evaluation of cutting edges. In the current work two parts were machined and then one of them was ground. The surface topographies were measured with a scanning laser confocal microscope. Plots of curvatures as a function of position and scale are presented, and the means and standard deviations of the principal curvatures are plotted as a function of scale. Statistical analyses show the relations between curvature and these two manufacturing processes at multiple scales.
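
    As an illustration of the kind of computation involved, the sketch below estimates principal curvatures of a height map at several observation scales by smoothing with Gaussian kernels of increasing width and applying the standard Monge-patch curvature formulas. The height map is synthetic and the scale parameter is a simple smoothing width, not the exact multi-scale scheme used by the authors.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    z = gaussian_filter(rng.normal(size=(256, 256)), 2.0)   # stand-in for a measured surface

    for sigma in (1, 2, 4, 8, 16):                          # observation scales (pixels)
        zs = gaussian_filter(z, sigma)
        zx, zy = np.gradient(zs)
        zxx, zxy = np.gradient(zx)
        _, zyy = np.gradient(zy)
        # Mean (H) and Gaussian (K) curvature of the graph z(x, y), then principal curvatures.
        denom = 1.0 + zx**2 + zy**2
        K = (zxx * zyy - zxy**2) / denom**2
        H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
        root = np.sqrt(np.maximum(H**2 - K, 0.0))
        k1, k2 = H + root, H - root
        print(f"sigma={sigma:2d}  mean k1={k1.mean():+.4f}  std k1={k1.std():.4f}")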

  20. Complexity of carbon market from multi-scale entropy analysis

    NASA Astrophysics Data System (ADS)

    Fan, Xinghua; Li, Shasha; Tian, Lixin

    2016-06-01

    Complexity of the carbon market is the consequence of economic dynamics and extreme social and political events in global carbon markets. Multi-scale entropy can measure the long-term structures in the daily price return time series. By using multi-scale entropy analysis, we explore the complexity of the carbon market and the mean reversion trend of daily price returns. The logarithmic differences of the Dec16 data from August 6, 2010 to May 22, 2015 are selected as the sample. The entropy is higher at small time scales and lower at large ones. The dependence of the entropy on the time scale reveals the mean reversion of carbon price returns in the long run. Relatively large fluctuations over some short time periods indicate that the complexity of the carbon market evolves consistently with the track of economic development and the events of international climate conferences.
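
    A minimal sketch of the multi-scale entropy procedure: coarse-grain the return series at successive scale factors and compute the sample entropy of each coarse-grained series. The returns below are synthetic Gaussian noise, not the Dec16 data, and the tolerance and embedding settings are conventional defaults rather than the paper's choices.

    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        x = np.asarray(x, dtype=float)
        r = r_frac * x.std()
        N = len(x)

        def matches(mm):
            # Use the same number of templates (N - m) for lengths m and m + 1.
            templates = np.array([x[i:i + mm] for i in range(N - m)])
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return (np.sum(dist <= r) - len(templates)) / 2.0   # pairs i < j

        return -np.log(matches(m + 1) / matches(m))

    def coarse_grain(x, tau):
        n = len(x) // tau
        return np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)

    rng = np.random.default_rng(3)
    returns = rng.normal(size=1200)          # synthetic stand-in for daily log returns
    for tau in (1, 2, 5, 10):
        se = sample_entropy(coarse_grain(returns, tau))
        print(f"scale factor {tau:2d}: sample entropy = {se:.3f}")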

  1. The scaling of time series size towards detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Gao, Xiaolei; Ren, Liwei; Shang, Pengjian; Feng, Guochen

    2016-06-01

    In this paper, we introduce a modification of detrended fluctuation analysis (DFA), called the multivariate DFA (MNDFA) method, based on the scaling of the time series size N. In the traditional DFA method we examined the influence of the sequence segmentation interval s, and this inspired us to propose the new MNDFA model to discuss the scaling of time series size in DFA. The effectiveness of the procedure is verified by numerical experiments with both artificial and stock return series. Results show that the proposed MNDFA method extracts more significant information from a series than the traditional DFA method. The scaling of the time series size has an influence on the auto-correlation (AC) in time series. For certain series, we obtain an exponential relationship, and we also calculate the slope through the fitting function. Our analysis and a finite-size effect test demonstrate that an appropriate choice of the time series size can avoid unnecessary influences and also make the testing results more accurate.
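
    For orientation, the baseline DFA procedure that MNDFA modifies can be summarized in a few lines: integrate the series, detrend it in boxes of size s, and read the scaling exponent from the log-log slope of the fluctuation function. The sketch below uses synthetic white noise (expected exponent near 0.5), not the stock-return data, and does not implement the MNDFA size-scaling modification itself.

    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))                 # integrated profile
        F = []
        for s in scales:
            n_boxes = len(y) // s
            segments = y[:n_boxes * s].reshape(n_boxes, s)
            t = np.arange(s)
            residuals = []
            for seg in segments:
                coef = np.polyfit(t, seg, 1)          # local linear detrending
                residuals.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(residuals)))
        return np.array(F)

    rng = np.random.default_rng(4)
    series = rng.normal(size=4096)                    # white noise: exponent ~ 0.5
    scales = np.unique(np.logspace(1.0, 2.7, 12).astype(int))
    F = dfa(series, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(f"estimated scaling exponent: {alpha:.2f}")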

  2. Adaptive multi-scale parameterization for one-dimensional flow in unsaturated porous media

    NASA Astrophysics Data System (ADS)

    Hayek, Mohamed; Lehmann, François; Ackerer, Philippe

    2008-01-01

    In the analysis of the unsaturated zone, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the porous media. Adaptive multi-scale parameterization consists of solving the problem through successive approximations by refining the parameters at the next finer scale over the whole domain and stopping the process when the refinement no longer induces a significant decrease in the objective function. In this context, the refinement indicators algorithm provides an adaptive parameterization technique that opens the degrees of freedom in an iterative way, driven at first order by the model, to locate the discontinuities of the sought parameters. We present a refinement indicators algorithm for adaptive multi-scale parameterization that is applicable to the estimation of multi-dimensional hydraulic parameters in unsaturated soil water flow. Numerical examples are presented which show the efficiency of the algorithm in cases of noisy data and missing data.

  3. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  4. Educational Benefit-Cost Analysis and the Problem of Scale.

    ERIC Educational Resources Information Center

    Welty, Gordon A.

    Benefit-cost analysis consists of establishing ratios of benefits to costs for a set of project variants. The decision rule is to select the project variant for which the ratio is a maximum. This paper argues that specification and estimation errors can contribute to findings of benefit-cost ratios approximating zero for large-scale systems. The…

  5. Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale

    USGS Publications Warehouse

    Nykanen, V.; Raines, G.L.

    2006-01-01

    A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
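
    The Moran autocorrelation statistic referenced above is straightforward to compute on gridded data. The sketch below uses rook-contiguity weights on a synthetic grid and applies increasing Gaussian smoothing as a crude stand-in for upward continuation; it only illustrates the computation, and the trend it produces on pure noise should not be read as reproducing the study's result on real aeromagnetic data.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def morans_i(grid):
        z = grid - grid.mean()
        # Rook contiguity: products of vertically and horizontally adjacent cells.
        pair_sum = (z[:-1, :] * z[1:, :]).sum() + (z[:, :-1] * z[:, 1:]).sum()
        n_pairs = z[:-1, :].size + z[:, :-1].size
        n = z.size
        W = 2 * n_pairs                              # each pair counted twice in w_ij
        return (n / W) * (2 * pair_sum) / (z ** 2).sum()

    rng = np.random.default_rng(5)
    field = rng.normal(size=(200, 200))              # synthetic gridded anomaly field
    for sigma in (0, 1, 2, 4, 8):                    # crude analogue of upward continuation
        smoothed = gaussian_filter(field, sigma) if sigma else field
        print(f"smoothing sigma={sigma}: Moran's I = {morans_i(smoothed):+.3f}")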

  6. Exploratory Data analysis ENvironment eXtreme scale (EDENx)

    SciTech Connect

    Steed, Chad Allen

    2015-07-01

    EDENx is a multivariate data visualization tool that allows interactive, user-driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, called EDEN, to enable analysis of higher-dimensional and larger-scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks, while EDEN is more appropriate for detailed data investigations.

  7. SINEX: SCALE shielding analysis GUI for X-Windows

    SciTech Connect

    Browman, S.M.; Barnett, D.L.

    1997-12-01

    SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI) that is being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides a graphical representation on screen as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.

  8. Application of the Multi-Dimensional Surface Water Modeling System at Bridge 339, Copper River Highway, Alaska

    USGS Publications Warehouse

    Brabets, Timothy P.; Conaway, Jeffrey S.

    2009-01-01

    The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would

  9. Initial-phase investigation of multi-dimensional streamflow simulations in the Colorado River, Moab Valley, Grand County, Utah, 2004

    USGS Publications Warehouse

    Kenney, Terry A.

    2005-01-01

    A multi-dimensional hydrodynamic model was applied to aid in the assessment of the potential hazard posed to the uranium mill tailings near Moab, Utah, by flooding in the Colorado River as it flows through Moab Valley. Discharge estimates for the 100- and 500-year recurrence interval and for the Probable Maximum Flood (PMF) were evaluated with the model for the existing channel geometry. These discharges also were modeled for three other channel-deepening configurations representing hypothetical scour of the channel at the downstream portal of Moab Valley. Water-surface elevation, velocity distribution, and shear-stress distribution were predicted for each simulation. The hydrodynamic model was developed from measured channel topography and over-bank topographic data acquired from several sources. A limited calibration of the hydrodynamic model was conducted. The extensive presence of tamarisk or salt cedar in the over-bank regions of the study reach presented challenges for determining roughness coefficients. Predicted water-surface elevations for the current channel geometry indicated that the toe of the tailings pile would be inundated by about 4 feet by the 100-year discharge and 25 feet by the PMF discharge. A small area at the toe of the tailings pile was characterized by velocities of about 1 to 2 feet per second for the 100-year discharge. Predicted velocities near the toe for the PMF discharge increased to between 2 and 4 feet per second over a somewhat larger area. The manner in which velocities progress from the 100-year discharge to the PMF discharge in the area of the tailings pile indicates that the tailings pile obstructs the over-bank flow of flood discharges. The predicted path of flow for all simulations along the existing Colorado River channel indicates that the current distribution of tamarisk in the over-bank region affects how flood-flow velocities are spatially distributed. Shear-stress distributions were predicted throughout the study reach

  10. SCALE system cross-section validation for criticality safety analysis

    SciTech Connect

    Hathout, A M; Westfall, R M; Dodds, Jr, H L

    1980-01-01

    The purpose of this study is to test selected data from three cross-section libraries for use in the criticality safety analysis of UO2 fuel rod lattices. The libraries, which are distributed with the SCALE system, are used to analyze potential criticality problems which could arise in the industrial fuel cycle for PWR and BWR reactors. Fuel lattice criticality problems could occur in pool storage, dry storage with accidental moderation, shearing and dissolution of irradiated elements, and in fuel transport and storage due to inadequate packing and shipping cask design. The data were tested by using the SCALE system to analyze 25 recently performed critical experiments.

  11. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    NASA Astrophysics Data System (ADS)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph processing is widely used in various sectors such as automotive, traffic, and image processing, among many others. These applications produce graphs of large-scale dimension, so processing them requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing using a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results showed that the implementation achieves an average speed-up of more than 30 times and an efficiency of almost 90%.
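
    As context for the kernel being parallelized, a serial BFS that computes hop-count shortest paths from a single source can be written in a few lines; the MPI partitioning used in the paper distributes the frontier expansion across ranks, which is not shown here. The toy adjacency list below is hypothetical, not one of the SNAP datasets.

    from collections import deque

    def bfs_hops(adjacency, source):
        """Hop-count shortest distances from `source`; unreachable vertices get -1."""
        dist = {v: -1 for v in adjacency}
        dist[source] = 0
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if dist[v] == -1:                    # first visit = shortest hop count
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # Toy undirected graph as an adjacency list (hypothetical, not a SNAP dataset).
    graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3], 5: []}
    print(bfs_hops(graph, 0))                        # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: -1}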

  12. Multi-scale statistical analysis of coronal solar activity

    DOE PAGES

    Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.

    2016-07-08

    Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
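
    The decomposition underlying this kind of analysis can be illustrated with a snapshot-matrix SVD: each map becomes a column, and the singular values rank how much variance each spatial mode captures. The sketch below runs on synthetic fields, not coronal temperature maps, and omits the multi-scale statistics built on top of the modes.

    import numpy as np

    # Synthetic "temperature map" snapshots: a travelling pattern plus noise.
    rng = np.random.default_rng(6)
    nx, ny, nt = 64, 64, 40
    x = np.linspace(0.0, 2.0 * np.pi, nx)[:, None]
    y = np.linspace(0.0, 2.0 * np.pi, ny)[None, :]
    snapshots = np.empty((nx * ny, nt))
    for k in range(nt):
        field = np.sin(x + 0.3 * k) * np.cos(y) + 0.1 * rng.normal(size=(nx, ny))
        snapshots[:, k] = field.ravel()

    # POD: subtract the mean map and take the SVD of the snapshot matrix.
    mean_map = snapshots.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(snapshots - mean_map, full_matrices=False)

    energy = S ** 2 / np.sum(S ** 2)
    print("energy fraction of first 3 POD modes:", np.round(energy[:3], 3))
    # U[:, i].reshape(nx, ny) is the i-th spatial mode; Vt[i] holds its time coefficients.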

  13. Scaled-particle theory analysis of cylindrical cavities in solution.

    PubMed

    Ashbaugh, Henry S

    2015-04-01

    The solvation of hard spherocylindrical solutes is analyzed within the context of scaled-particle theory, which takes the view that the free energy of solvating an empty cavitylike solute is equal to the pressure-volume work required to inflate a solute from nothing to the desired size and shape within the solvent. Based on our analysis, an end cap approximation is proposed to predict the solvation free energy as a function of the spherocylinder length from knowledge regarding only the solvent density in contact with a spherical solute. The framework developed is applied to extend Reiss's classic implementation of scaled-particle theory and a previously developed revised scaled-particle theory to spherocylindrical solutes. To test the theoretical descriptions developed, molecular simulations of the solvation of infinitely long cylindrical solutes are performed. In hard-sphere solvents classic scaled-particle theory is shown to provide a reasonably accurate description of the solvent contact correlation and resulting solvation free energy per unit length of cylinders, while the revised scaled-particle theory fitted to measured values of the contact correlation provides a quantitative free energy. Applied to the Lennard-Jones solvent at a state-point along the liquid-vapor coexistence curve, however, classic scaled-particle theory fails to correctly capture the dependence of the contact correlation. Revised scaled-particle theory, on the other hand, provides a quantitative description of cylinder solvation in the Lennard-Jones solvent with a fitted interfacial free energy in good agreement with that determined for purely spherical solutes. The breakdown of classical scaled-particle theory does not result from the failure of the end cap approximation, however, but is indicative of neglected higher-order curvature dependences on the solvation free energy.

  14. Bridgman crystal growth in low gravity - A scaling analysis

    NASA Technical Reports Server (NTRS)

    Alexander, J. I. D.; Rosenberger, Franz

    1990-01-01

    The results of an order-of-magnitude or scaling analysis are compared with those of numerical simulations of the effects of steady low gravity on compositional nonuniformity in crystals grown by the Bridgman-Stockbarger technique. In particular, the results are examined of numerical simulations of the effect of steady residual acceleration on the transport of solute in a gallium-doped germanium melt during directional solidification under low-gravity conditions. The results are interpreted in terms of the relevant dimensionless groups associated with the process, and scaling techniques are evaluated by comparing their predictions with the numerical results. It is demonstrated that, when convective transport is comparable with diffusive transport, some specific knowledge of the behavior of the system is required before scaling arguments can be used to make reasonable predictions.

  15. Assessing the Primary Schools--A Multi-Dimensional Approach: A School Level Analysis Based on Indian Data

    ERIC Educational Resources Information Center

    Sengupta, Atanu; Pal, Naibedya Prasun

    2012-01-01

    Primary education is essential for the economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment etc.) rather than the delivery of the entire primary education system. In this paper, we study the school level data from an Indian district, collected under the official DISE statistics. We…

  16. A numerical analysis of Stefan problems for generalized multi-dimensional phase-change structures using the enthalpy transforming model

    NASA Technical Reports Server (NTRS)

    Cao, Yiding; Faghri, Amir; Chang, Won Soon

    1989-01-01

    An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, being the single dependent variable. The existing control-volume finite-difference approach is modified so that it can be applied to the numerical solution of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those existing in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem, showing that the methodology is capable of handling complicated phase-change problems with fixed grids.

  17. Bayesian analysis of spatially-dependent functional responses with spatially-dependent multi-dimensional functional predictors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...

  18. Multi-dimensional finite element code for the analysis of coupled fluid energy, and solute transport (CFEST)

    SciTech Connect

    Gupta, S.K.; Kincaid, C.T.; Meyer, P.R.; Newbill, C.A.; Cole, C.R.

    1982-08-01

    The Seasonal Thermal Energy Storage Program is being conducted for the Department of Energy by Pacific Northwest Laboratory. A major thrust of this program has been the study of natural aquifers as hosts for thermal energy storage and retrieval. Numerical simulation of the nonisothermal response of the host media is fundamental to the evaluation of proposed experimental designs and field test results. This report represents the primary documentation for the coupled fluid, energy and solute transport (CFEST) code. Sections of this document are devoted to the conservation equations and their numerical analogues, the input data requirements, and the verification studies completed to date.

  19. Multi-Dimensional Analysis of the Forced Bubble Dynamics Associated with Bubble Fusion Phenomena. Final Topical Report

    SciTech Connect

    Lahey, Jr., Richard T.; Jansen, Kenneth E.; Nagrath, Sunitha

    2002-12-02

    A new adaptive grid, 3-D FEM hydrodynamic shock (i.e., HYDRO) code called PHASTA-2C has been developed and used to investigate bubble implosion phenomena leading to ultra-high temperatures and pressures. In particular, it was shown that nearly spherical bubble compressions occur during bubble implosions and the predicted conditions associated with a recent ORNL Bubble Fusion experiment [Taleyarkhan et al, Science, March, 2002] are consistent with the occurrence of D/D fusion.

  20. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    DTIC Science & Technology

    2010-12-01

    applications for conformance to one of the CERT® secure coding standards. CERT secure coding standards provide a detailed enumeration of coding errors...automated analysis tools to help them code securely. Secure coding standards provide a detailed enumeration of coding errors that have caused...including possible additional job aids . SCALe analysts will also be interviewed for context information surrounding incorrect judgments as part of

  1. Multi-scaling allometric analysis for urban and regional development

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2017-01-01

    The concept of allometric growth is based on scaling relations, and it has been applied to urban and regional analysis for a long time. However, most allometric analyses have been devoted to the single proportional relation between two elements of a geographical system; few studies focus on the allometric scaling of multiple elements. In this paper, a process of multiscaling allometric analysis is developed for studies of the spatio-temporal evolution of complex systems. By means of linear algebra and general system theory, and by analogy with the analytical hierarchy process, the concepts of allometric growth can be integrated with ideas from fractal dimension. Thus a new methodology of geo-spatial analysis and the related theoretical models emerge. Based on least squares regression and matrix operations, a simple algorithm is proposed to solve the multiscaling allometric equation. Applying the analytical method of multielement allometry to Chinese cities and regions yields satisfactory results. A conclusion is reached that multiscaling allometric analysis can be employed to make a comprehensive evaluation of the relative levels of urban and regional development and to explain spatial heterogeneity. The notion of multiscaling allometry may enrich the current theory and methodology of spatial analyses of urban and regional evolution.
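
    The elementary building block of the method is the two-element allometric relation y = a x^b, whose exponent is obtained by least squares on log-transformed data; the multiscaling version assembles many such pairwise exponents with matrix operations. The sketch below shows only the two-element step on synthetic city data, not the full multielement algorithm of the paper.

    import numpy as np

    # Hypothetical city sizes: area scales with population with exponent 0.85.
    rng = np.random.default_rng(7)
    population = 10 ** rng.uniform(4.0, 7.0, size=80)
    area = 2.0 * population ** 0.85 * np.exp(0.1 * rng.normal(size=80))

    # Least squares on log-transformed data recovers the allometric exponent b.
    b, log_a = np.polyfit(np.log(population), np.log(area), 1)
    print(f"estimated scaling exponent b = {b:.3f} (true value 0.85)")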

  2. Perceptual security of encrypted images based on wavelet scaling analysis

    NASA Astrophysics Data System (ADS)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2016-08-01

    The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using detrended fluctuation analysis based on wavelets, a modern technique that has recently been used successfully for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images, in which no understandable information can be visually appreciated and whose pixels look totally random, present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.
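
    A simplified stand-in for the wavelet-based scaling check is sketched below: Haar detail variances across dyadic levels give a scaling exponent that should sit near 0.5 for uncorrelated, white-noise-like pixel sequences. This is not the authors' exact wavelet-DFA algorithm, and the "encrypted" pixels here are simply uniform random bytes.

    import numpy as np

    def haar_scaling_exponent(signal, levels=6):
        x = np.asarray(signal, dtype=float)
        log2_var, js = [], []
        for j in range(1, levels + 1):
            n = len(x) - len(x) % 2
            detail = (x[:n:2] - x[1:n:2]) / np.sqrt(2.0)   # Haar detail coefficients
            x = (x[:n:2] + x[1:n:2]) / np.sqrt(2.0)        # Haar approximation
            log2_var.append(np.log2(detail.var()))
            js.append(j)
        slope = np.polyfit(js, log2_var, 1)[0]             # Var(d_j) ~ 2**(j*(2H - 1))
        return (slope + 1.0) / 2.0                         # ~0.5 for uncorrelated pixels

    rng = np.random.default_rng(11)
    pixels = rng.integers(0, 256, size=512 * 512).astype(float)   # stand-in ciphertext image
    print(f"estimated scaling exponent: {haar_scaling_exponent(pixels):.2f}")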

  3. Confirmatory factor analysis of the supports intensity scale for children.

    PubMed

    Verdugo, Miguel A; Guillén, Verónica M; Arias, Benito; Vicente, Eva; Badia, Marta

    2016-01-01

    Support needs assessment instruments and recent research related to this construct have been more focused on adults with intellectual disability than on children. However, the design and implementation of Individualized Support Plans (ISP) must start at an early age. Currently, a project for the translation, adaptation and validation of the Supports Intensity Scale for Children (SIS-C) is being conducted in Spain. In this study, the internal structure of the scale was analyzed to shed light on the nature of this construct when evaluated in childhood. A total of 814 children with intellectual disability between 5 and 16 years of age participated in the study. Their level of support needs was assessed with the SIS-C, and a confirmatory factor analysis (CFA), including different hypotheses, was carried out to identify the optimal factorial structure of this scale. The CFA results indicated that a unidimensional model is not sufficient to explain our data structure. On the other hand, goodness-of-fit indices showed that both correlated first-order factors and higher-order factor models of the construct could explain the data obtained from the scale. Specifically, the correlated first-order factors model showed a better fit to our data. These findings are similar to those identified in previous analyses performed with adults. Implications and directions for further research are discussed.

  4. Scales

    MedlinePlus

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Eczema, ringworm, and psoriasis ...

  5. Scaling analysis for the investigation of slip mechanisms in nanofluids

    NASA Astrophysics Data System (ADS)

    Savithiri, S.; Pattamatta, Arvind; Das, Sarit K.

    2011-07-01

    The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene-glycol-based nanofluids is analyzed by considering the shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are most dominant in particles of cylindrical shape, compared to spherical and sheet-like particles. The magnitudes of the slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fractions. However, the drag force is found to dominate in smaller particles below 10 nm at higher volume fractions. The effects of the thermophoresis and Magnus forces are found to increase with particle size and concentration. In terms of time scales, the Brownian and gravity forces act over a considerably longer duration than the other forces. For a copper-water nanofluid, the effective contribution of the slip mechanisms leads to a heat transfer augmentation of approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it.

  6. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and

  7. REGIONAL-SCALE WIND FIELD CLASSIFICATION EMPLOYING CLUSTER ANALYSIS

    SciTech Connect

    Glascoe, L G; Glaser, R E; Chin, H S; Loosmore, G A

    2004-06-17

    The classification of time-varying multivariate regional-scale wind fields at a specific location can assist event planning as well as consequence and risk analysis. Further, wind field classification involves data transformation and inference techniques that effectively characterize stochastic wind field variation. Such a classification scheme is potentially useful for addressing overall atmospheric transport uncertainty and meteorological parameter sensitivity issues. Different methods to classify wind fields over a location include principal component analysis of wind data (e.g., Hardy and Walton, 1978) and the use of cluster analysis for wind data (e.g., Green et al., 1992; Kaufmann and Weber, 1996). The goal of this study is to use a clustering method to classify the winds of a gridded data set, i.e., from meteorological simulations generated by a forecast model.
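
    One common way to set up such a classification, sketched below, is to flatten each gridded (u, v) field at a given time into a feature vector and cluster the resulting vectors, here with k-means from scikit-learn. The wind fields are synthetic, and the choice of k-means is an assumption made for illustration; the report does not specify this particular clustering algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic time sequence of gridded (u, v) wind fields drawn from 3 hidden regimes.
    rng = np.random.default_rng(9)
    n_times, ny, nx = 300, 20, 20
    regimes = rng.integers(0, 3, size=n_times)
    base_patterns = rng.normal(size=(3, 2, ny, nx))          # (regime, u/v component, y, x)
    fields = base_patterns[regimes] + 0.3 * rng.normal(size=(n_times, 2, ny, nx))

    # Flatten each time step's field into one feature vector and cluster.
    X = fields.reshape(n_times, -1)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", np.bincount(labels))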

  8. Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow. Degree awarded by Virginia Polytechnic Inst. and State Univ., 9 Nov. 2001

    NASA Technical Reports Server (NTRS)

    Wood, William A., III

    2002-01-01

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

  9. Reactor Physics Methods and Analysis Capabilities in SCALE

    SciTech Connect

    DeHart, Mark D; Bowman, Stephen M

    2011-01-01

    The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.

  10. Reactor Physics Methods and Analysis Capabilities in SCALE

    SciTech Connect

    Mark D. DeHart; Stephen M. Bowman

    2011-05-01

    The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.

  11. Medium and small-scale analysis of financial data

    NASA Astrophysics Data System (ADS)

    Nawroth, Andreas P.; Peinke, Joachim

    2007-08-01

    A stochastic analysis of financial data is presented. In particular we investigate how the statistics of log returns change with different time delays τ. The scale-dependent behaviour of financial data can be divided into two regions. The first time range, the small-timescale region (in the range of seconds) seems to be characterised by universal features. The second time range, the medium-timescale range from several minutes upwards can be characterised by a cascade process, which is given by a stochastic Markov process in the scale τ. A corresponding Fokker-Planck equation can be extracted from given data and provides a non-equilibrium thermodynamical description of the complexity of financial data.

  12. Scaling and dimensional analysis of acoustic streaming jets

    NASA Astrophysics Data System (ADS)

    Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.; Garandet, J.-P.

    2014-09-01

    This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.

  13. Scaling and dimensional analysis of acoustic streaming jets

    SciTech Connect

    Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.

    2014-09-15

    This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.

  14. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that hazy weather degrades the interpretation of remote sensing imagery because of the poor visibility and short detection distance of haze-affected images. On the one hand, the polarization effects of the airlight and the object radiance in the imaging process have been considered. On the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized, so multi-scale analysis through the wavelet transform has been employed so that the low-frequency components dominated by haze and the high-frequency coefficients carrying image details and edges can be processed separately. Based on the measurement of the polarization feature by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in hazy weather, from which the best polarized image I_min and the worst one I_max were synthesized. Afterwards, these two haze-contaminated polarized images were decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed with a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The final haze-free image was then reconstructed by inverse wavelet transform. Experimental results verify that the proposed dehazing method can significantly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
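
    A minimal sketch of the Stokes-based synthesis of the best (I_min) and worst (I_max) polarized images from three polarizer orientations; the random input arrays are stand-ins for real acquisitions, and the wavelet decomposition and dehazing stages described above are omitted.

```python
import numpy as np

def synthesize_min_max(i0, i45, i90):
    """Build the best (I_min) and worst (I_max) linearly polarized images from
    acquisitions at 0, 45 and 90 degrees via the Stokes parameters S0, S1, S2."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    i_max = 0.5 * s0 * (1.0 + dolp)
    i_min = 0.5 * s0 * (1.0 - dolp)
    return i_min, i_max

rng = np.random.default_rng(1)
i0, i45, i90 = (rng.random((128, 128)) for _ in range(3))  # stand-ins for real frames
i_min, i_max = synthesize_min_max(i0, i45, i90)
print(i_min.mean(), i_max.mean())
```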

  15. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of

  16. Nonlinearity analysis of model-scale jet noise

    NASA Astrophysics Data System (ADS)

    Gee, Kent L.; Atchley, Anthony A.; Falco, Lauren E.; Shepherd, Micah R.

    2012-09-01

    This paper describes the use of a spectrally-based "nonlinearity indicator" to complement ordinary spectral analysis of jet noise propagation data. The indicator, which involves the cross spectrum between the temporal acoustic pressure and the square of the acoustic pressure, stems directly from ensemble averaging the generalized Burgers equation. The indicator is applied to unheated model-scale jet noise from subsonic and supersonic nozzles. The results demonstrate how the indicator can be used to interpret the evolution of power spectra in the transition from the geometric near to far field. Geometric near-field and nonlinear effects can be distinguished from one another, thus lending additional physical insight into the propagation.

  17. Application of multi-dimensional discrimination diagrams and probability calculations to Paleoproterozoic acid rocks from Brazilian cratons and provinces to infer tectonic settings

    NASA Astrophysics Data System (ADS)

    Verma, Sanjeet K.; Oliveira, Elson P.

    2013-08-01

    In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013), obtained from linear discriminant analysis (LDA) of natural-logarithm-transformed ratios of major elements and of immobile major and trace elements in acid magmas, to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective practice of determining boundaries by eye. The use of major element data readjusted to 100% on an anhydrous basis with the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites illustrate the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously considered transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings. The seventh case
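
    A hedged sketch of the general idea of probability-based tectonic discrimination via linear discriminant analysis on log-transformed element ratios; the data, class labels, and transformation below are synthetic stand-ins and do not reproduce the published diagrams of Verma et al. (2013).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# synthetic stand-in compositions (5 elements, arbitrary units) and tectonic labels
rng = np.random.default_rng(2)
X_raw = rng.uniform(0.5, 10.0, size=(300, 5))
y = rng.integers(0, 3, size=300)        # 0 = arc, 1 = collision, 2 = within-plate

# natural-logarithm-transformed ratios relative to the first element
ln_ratios = np.log(X_raw[:, 1:] / X_raw[:, [0]])

lda = LinearDiscriminantAnalysis().fit(ln_ratios, y)

# a new sample is assigned a setting together with class probabilities
new_raw = rng.uniform(0.5, 10.0, size=(1, 5))
new = np.log(new_raw[:, 1:] / new_raw[:, [0]])
print(lda.predict(new), lda.predict_proba(new).round(2))
```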

  18. Scaling analysis of the anisotropic nonlocal Kardar-Parisi-Zhang equation

    NASA Astrophysics Data System (ADS)

    Tang, Gang; Ma, Benkun

    2002-07-01

    The scaling behaviors of the anisotropic nonlocal Kardar-Parisi-Zhang equation are studied with the scaling analysis method introduced by Hentschel and Family. The scaling exponents in both the weak- and strong-coupling regions are obtained. The exponents in the weak-coupling region agree well with the results of dynamic renormalization-group analysis.

  19. A Multi-scale Approach to Urban Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Gluch, Renne; Quattrochi, Dale A.

    2005-01-01

    An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.

  20. Problems of allometric scaling analysis: examples from mammalian reproductive biology.

    PubMed

    Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K

    2005-05-01

    Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveals that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best

  1. Diagnostic accuracy of the International HIV Dementia Scale and HIV Dementia Scale: A meta-analysis.

    PubMed

    Hu, Xueying; Zhou, Yang; Long, Jianxiong; Feng, Qiming; Wang, Rensheng; Su, Li; Zhao, Tingting; Wei, Bo

    2012-10-01

    The aim of this study was to assess the diagnostic accuracy of the International HIV Dementia Scale (IHDS) and the HIV Dementia Scale (HDS) for the diagnosis of HIV-associated neurocognitive disorders (HAND). A comprehensive and systematic search was carried out in the PubMed and EMBASE databases. Sensitivity, specificity, Q*-values, summary receiver operating characteristic (SROC) curves and other measures of the accuracy of the IHDS and HDS in the diagnosis of HAND were summarized. SROC curve analysis of the HAND data demonstrated a pooled sensitivity of 0.90 [95% confidence interval (CI), 0.88-0.91] and an overall specificity of 0.96 (95% CI, 0.95-0.97) for the IHDS; the Q*-value for the IHDS was 0.9195 and the diagnostic odds ratio (DOR) was 162.28 (95% CI, 91.82-286.81). The HDS had an overall sensitivity of 0.39 (95% CI, 0.34-0.43) and a specificity of 0.90 (95% CI, 0.89-0.91); the Q*-value for the HDS was 0.6321 and the DOR was 5.81 (95% CI, 3.64-9.82). There was significant heterogeneity among the studies that reported the IHDS and HDS. This meta-analysis shows that the IHDS and HDS may offer high diagnostic accuracy for the detection of HAND in primary health care and resource-limited settings. Future international studies may require neuropsychological characterization of impairments adapted to regional culture and language.
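
    For readers unfamiliar with the summary measures quoted above, the following sketch shows how per-study sensitivity, specificity, and the diagnostic odds ratio are computed from 2x2 counts, with a simple fixed-effect pooling of summed counts; the counts are hypothetical, and the paper itself relies on SROC modelling rather than this shortcut.

```python
import numpy as np

# hypothetical per-study 2x2 counts: (TP, FP, FN, TN)
studies = np.array([
    [45,  6,  5, 120],
    [60, 10,  8, 150],
    [38,  4,  7,  90],
])

tp, fp, fn, tn = studies.T
sens = tp / (tp + fn)               # per-study sensitivity
spec = tn / (tn + fp)               # per-study specificity
dor = (tp * tn) / (fp * fn)         # per-study diagnostic odds ratio

# crude pooled estimates from summed counts (a fixed-effect shortcut only)
pooled_sens = tp.sum() / (tp.sum() + fn.sum())
pooled_spec = tn.sum() / (tn.sum() + fp.sum())
print(sens.round(2), spec.round(2), dor.round(1))
print(round(pooled_sens, 3), round(pooled_spec, 3))
```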

  2. Large-scale analysis of microRNA evolution

    PubMed Central

    2012-01-01

    Background In animals, microRNAs (miRNA) are important genetic regulators. Animal miRNAs appear to have expanded in conjunction with an escalation in complexity during early bilaterian evolution. Their small size and high degree of similarity make them challenging for phylogenetic approaches. Furthermore, the genomic locations encoding miRNAs are not clearly defined in many species. A number of studies have looked at the evolution of individual miRNA families; however, we currently lack resources for large-scale analysis of miRNA evolution. Results We addressed some of these issues in order to analyse the evolution of miRNAs. We performed syntenic and phylogenetic analyses for miRNAs from 80 animal species. We present synteny maps, phylogenies and functional data for miRNAs across these species. These data represent the basis of our analyses and also act as a resource for the community. Conclusions We use these data to explore the distribution of miRNAs across phylogenetic space, characterise their birth and death, and examine functional relationships between miRNAs and other genes. These data confirm a number of previously reported findings on a larger scale and also offer novel insights into the evolution of the miRNA repertoire in animals and its genomic organization. PMID:22672736

  3. Analysis of a Two Wrap Meso Scale Scroll Pump

    NASA Astrophysics Data System (ADS)

    Moore, Eric J.; Muntz, E. Phillip; Erye, Francis; Myung, Nosang; Orient, Otto; Shcheglov, Kirill; Wiberg, Dean

    2003-05-01

    The scroll pump is an interesting positive displacement pump. One scroll in the form of an Archimedes spiral moves with respect to another, similarly shaped stationary scroll, forming a peristaltic pumping action. The moving scroll traces an orbital path but is maintained at a constant angular orientation. Pockets of gas are forced along the fixed scroll from its periphery, eventually reaching the center where the gas is discharged. A model of a multi-wrap scroll pump was created and applied to predict pumping performance. Meso-scale scroll pumps have been proposed for use as roughing pumps in mobile, sampling mass spectrometer systems. The main objective of the present analysis is to obtain estimates of a scroll pump's performance, taking into account the effect of manufacturing tolerances, in order to determine whether the meso-scale scroll pump will meet the necessarily small power and volume requirements associated with mobile, sampling mass spectrometer systems. The analysis involves developing the governing equations for the pump in terms of several operating parameters, taking into account the leaks to and from the trapped gases as they are displaced to the discharge port. The power and volume required for pumping tasks are also obtained in terms of the operating parameters and pump size. Performance evaluations such as power and volume per unit of pumped gas upflow are obtained.

  4. Psychometric analysis of the Ten-Item Perceived Stress Scale.

    PubMed

    Taylor, John M

    2015-03-01

    Although the 10-item Perceived Stress Scale (PSS-10) is a popular measure, a review of the literature reveals 3 significant gaps: (a) There is some debate as to whether a 1- or a 2-factor model best describes the relationships among the PSS-10 items, (b) little information is available on the performance of the items on the scale, and (c) it is unclear whether PSS-10 scores are subject to gender bias. These gaps were addressed in this study using a sample of 1,236 adults from the National Survey of Midlife Development in the United States II. Based on self-identification, participants were 56.31% female, 77% White, 17.31% Black and/or African American, and the average age was 54.48 years (SD = 11.69). Findings from an ordinal confirmatory factor analysis suggested the relationships among the items are best described by an oblique 2-factor model. Item analysis using the graded response model provided no evidence of item misfit and indicated both subscales have a wide estimation range. Although t tests revealed a significant difference between the means of males and females on the Perceived Helplessness Subscale (t = 4.001, df = 1234, p < .001), measurement invariance tests suggest that PSS-10 scores may not be substantially affected by gender bias. Overall, the findings suggest that inferences made using PSS-10 scores are valid. However, this study calls into question inferences where the multidimensionality of the PSS-10 is ignored.

  5. Statistical analysis of large-scale neuronal recording data

    PubMed Central

    Reed, Jamie L.; Kaas, Jon H.

    2010-01-01

    Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings. PMID:20472395

  6. Tera-scale astronomical data analysis and visualization

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.; Kilborn, V. A.

    2013-03-01

    We present a high-performance, graphics processing unit (GPU) based framework for the efficient analysis and visualization of (nearly) terabyte (TB) sized 3D images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image (1) volume rendering using an arbitrary transfer function at 7-10 frames per second, (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s, (3) evaluation of the image histogram in 4 s and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching 1 teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows that the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
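
    A single-node, CPU-only sketch of the kind of streaming reduction such a framework performs: global mean, standard deviation, and histogram accumulated over an image supplied in chunks so the full volume never resides in memory. The chunk generator and bin settings are illustrative assumptions, not the GPU implementation described above.

```python
import numpy as np

def chunked_stats(chunks, bins=256, vmin=0.0, vmax=1.0):
    """Accumulate the global mean, standard deviation and histogram over an
    image delivered as a stream of chunks (no full-volume array in memory)."""
    n, s, s2 = 0, 0.0, 0.0
    hist = np.zeros(bins, dtype=np.int64)
    edges = np.linspace(vmin, vmax, bins + 1)
    for c in chunks:
        c = np.asarray(c, dtype=np.float64).ravel()
        n += c.size
        s += c.sum()
        s2 += (c * c).sum()
        hist += np.histogram(c, bins=edges)[0]
    mean = s / n
    std = np.sqrt(s2 / n - mean ** 2)
    return mean, std, hist

rng = np.random.default_rng(3)
sub_cubes = (rng.random((64, 64, 64)) for _ in range(8))  # stand-ins for file chunks
mean, std, hist = chunked_stats(sub_cubes)
print(mean, std, hist.sum())
```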

  7. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas

    2014-06-10

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M_⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M_⊙ as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
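
    A hedged sketch of a wavelet-based time-frequency analysis of a neutrino-rate-like signal, using PyWavelets' continuous wavelet transform on a synthetic series with an artificial 100 Hz modulation that switches off at an assumed explosion time; the signal, sampling rate, and wavelet choice are illustrative, not taken from the paper.

```python
import numpy as np
import pywt

# synthetic stand-in for a detected-rate time series: a 100 Hz modulation that
# switches off at an assumed "explosion time" of 0.3 s
fs = 2000.0
t = np.arange(0.0, 0.6, 1.0 / fs)
rate = 1.0 + 0.2 * np.sin(2 * np.pi * 100.0 * t) * (t < 0.3)

# continuous wavelet transform; freqs gives the frequency associated with each scale
scales = np.arange(2, 128)
coefs, freqs = pywt.cwt(rate, scales, "morl", sampling_period=1.0 / fs)
power = np.abs(coefs) ** 2   # time-frequency power map to inspect the ~100 Hz band
print(power.shape, float(freqs.min()), float(freqs.max()))
```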

  8. Surface Roughness from Point Clouds - A Multi-Scale Analysis

    NASA Astrophysics Data System (ADS)

    Milenković, Milutin; Ressl, Camillo; Hollaus, Markus; Pfeifer, Norbert

    2013-04-01

    Roughness is a physical parameter of surfaces that is meant to capture surface complexity in geophysical models. In hydrodynamic modeling, for example, roughness should estimate the resistance exerted by the surface on the flow; in remote sensing, it describes how the signal is scattered. Roughness needs to be estimated as a parameter of the model. It has been identified as a main source of uncertainty in model prediction, mainly due to the errors that accompany traditional roughness estimation, e.g. from surface profiles, or from visual interpretation and manual delineation on aerial photos. Currently, roughness estimation is shifting towards point clouds of surfaces, which primarily come from laser scanning and image matching techniques. However, those data sets are also not free of errors, which may affect roughness estimation. Our study focuses on the estimation of roughness indices from different point clouds, and the uncertainties that follow such a procedure. The analysis is performed on a graveled surface of a river bed in Eastern Austria, using point clouds acquired by a triangulating laser scanner (Minolta Vivid 910), photogrammetry (DSLR camera), and a terrestrial laser scanner (Riegl FWF scanner). To enable their comparison, all the point clouds are transformed into a common superior coordinate system. Then, different roughness indices are calculated and compared at different scales, including stochastic and feature-based indices such as the RMS of elevation, standard deviation, peak-to-valley height, and openness. The analysis is additionally supported by the spectral signatures (frequency domain) of the different point clouds. The selected techniques provide point clouds of different resolution (0.1-10 cm) and coverage (0.3-10 m), which also justifies the multi-scale roughness analysis. By doing this, it becomes possible to differentiate between the measurement errors and the roughness of the object at the resolutions of the point clouds. Parts of this study have been funded by the project

  9. 13C metabolic flux analysis at a genome-scale.

    PubMed

    Gopalakrishnan, Saratram; Maranas, Costas D

    2015-11-01

    Metabolic models used in 13C metabolic flux analysis generally include a limited number of reactions, primarily from central metabolism. They typically omit degradation pathways, complete cofactor balances, and atom transition contributions for reactions outside central metabolism. This study addresses the impact on prediction fidelity of scaling up mapping models to a genome scale. The core mapping model employed in this study accounts for 75 reactions and 65 metabolites, primarily from central metabolism. The genome-scale metabolic mapping model (GSMM) (697 reactions and 595 metabolites) is constructed using as a basis the iAF1260 model, upon eliminating reactions guaranteed not to carry flux based on growth and fermentation data for a minimal glucose growth medium. Labeling data for 17 amino acid fragments obtained from cells fed with glucose labeled at the second carbon were used to obtain fluxes and flux ranges. Metabolic fluxes and confidence intervals are estimated, for both the core and genome-scale mapping models, by minimizing the sum of squared differences between predicted and experimentally measured labeling patterns using the EMU decomposition algorithm. Overall, we find that both the topology and the estimated values of the metabolic fluxes remain largely consistent between the core and GSM models. Stepping up to a genome-scale mapping model leads to wider flux inference ranges for 20 key reactions present in the core model. The glycolysis flux range doubles due to the possibility of active gluconeogenesis, the TCA flux range expands by 80% due to the availability of a bypass through arginine consistent with the labeling data, and the transhydrogenase reaction flux is essentially unresolved due to the presence of as many as five routes for the inter-conversion of NADPH to NADH afforded by the genome-scale model. By globally accounting for ATP demands in the GSMM model, the unused ATP decreases drastically, with the lower bound matching the maintenance ATP requirement. A non

  10. Analysis of intermittence, scale invariance and characteristic scales in the behavior of major indices near a crash

    NASA Astrophysics Data System (ADS)

    Ferraro, Marta; Furman, Nicolas; Liu, Yang; Mariani, Cristina; Rial, Diego

    2006-01-01

    This work is devoted to the study of the relation between intermittence and scale invariance. We find the conditions that a function in which both effects are present must satisfy, and we analyze the relation with characteristic scales. We present an efficient method that detects characteristic scales in different systems. Finally we develop a model that predicts the existence of intermittence and characteristic scales in the behavior of a financial index near a crash, and we apply the model to the analysis of several financial indices.

  11. Reliability analysis of a utility-scale solar power plant

    NASA Astrophysics Data System (ADS)

    Kolb, G. J.

    1992-10-01

    This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis could be performed. This type of power plant will be built by a consortium of western US utilities led by the Southern California Edison Company. The 10 MW plant is known as Solar Two and is scheduled to be on-line in 1994. It is a prototype which should lead to the construction of 100 MW commercial-scale plants by the year 2000. The availability calculation was performed with the UNIRAM computer code. The analysis predicted a forced outage rate of 5.4 percent and an overall plant availability, including scheduled outages, of 91 percent. The code also identified the most important contributors to plant unavailability. Control system failures were identified as the most important cause of forced outages; receiver problems were rated second, and turbine outages third. The overall plant availability of 91 percent exceeds the goal identified by the US utility study. This paper discusses the availability calculation and presents evidence for why the 91 percent availability is a credible estimate.
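
    A purely illustrative piece of arithmetic showing how a forced outage rate and a scheduled-outage fraction combine into an overall availability figure; the 4% scheduled-outage value is an assumption chosen for illustration, not a number from the UNIRAM analysis.

```python
# Forced outage rate from the analysis, combined with a hypothetical
# scheduled-outage fraction (assumed here, not reported in this record).
forced_outage_rate = 0.054
scheduled_outage_fraction = 0.04   # assumption for illustration only

availability = (1.0 - forced_outage_rate) * (1.0 - scheduled_outage_fraction)
print(f"overall availability ~ {availability:.1%}")   # roughly 91%
```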

  12. Growth and characterization of a multi-dimensional ZnO hybrid structure on a glass substrate by using metal organic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Lee, Dohan; Lee, Je-Haeng; Byun, Dongjin

    2014-05-01

    A multi-dimensional zinc oxide (ZnO) hybrid structure was successfully grown on a glass substrate by using metal organic chemical vapor deposition (MOCVD). The ZnO hybrid structure was composed of nanorods grown continuously on the ZnO film without any catalysts. The growth mode could be changed from a two-dimensional (2D) film to one-dimensional (1D) nanorods by simply controlling the substrate's temperature. The ZnO with a hybrid structure showed improved electrical and optical properties. The ZnO hybrid structure grown by using MOCVD has excellent potential for applications in opto-electronic devices and solar cells as anti-reflection coatings (ARCs), transparent conductive oxides (TCOs) and transparent thin-film transistors (TTFTs).

  13. Optimism and well-being: a prospective multi-method and multi-dimensional examination of optimism as a resilience factor following the occurrence of stressful life events.

    PubMed

    Kleiman, Evan M; Chiara, Alexandra M; Liu, Richard T; Jager-Hyman, Shari G; Choi, Jimmy Y; Alloy, Lauren B

    2017-02-01

    Optimism has been conceptualised variously as positive expectations (PE) for the future, optimistic attributions, illusion of control, and self-enhancing biases. Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multi-dimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: PEs, Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and that not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety.

  14. Large-scale Biomedical Image Analysis in Grid Environments

    PubMed Central

    Kumar, Vijay S.; Rutt, Benjamin; Kurc, Tahsin; Catalyurek, Umit; Pan, Tony; Saltz, Joel; Chow, Sunny; Lamont, Stephan; Martone, Maryann

    2012-01-01

    Digital microscopy scanners are capable of capturing multi-Gigapixel images from single slides, thus producing images of sizes up to several tens of Gigabytes each, and a research study may have hundreds of slides from a specimen. The sheer size of the images and the complexity of image processing operations create roadblocks to effective integration of large-scale imaging data in research. This paper presents the application of a component-based Grid middleware system for processing extremely large images obtained from digital microscopy devices. We have developed parallel, out-of-core techniques for different classes of data processing operations commonly employed on images from confocal microscopy scanners. These techniques are combined into data pre-processing and analysis pipelines using the component-based middleware system. The experimental results show that 1) our implementation achieves good performance and can handle very large (terabyte-scale) datasets on high-performance Grid nodes, consisting of computation and/or storage clusters, and 2) it can take advantage of multiple Grid nodes connected over high-bandwidth wide-area networks by combining task- and data-parallelism. PMID:18348945

  15. Two-field analysis of no-scale supergravity inflation

    SciTech Connect

    Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.

    2015-01-01

    Since the building blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with an arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.

  16. Scaling law analysis of paraffin thin films on different surfaces

    SciTech Connect

    Dotto, M. E. R.; Camargo, S. S. Jr.

    2010-01-15

    The dynamics of paraffin deposit formation on different surfaces was analyzed based on scaling laws. Carbon-based films were deposited onto silicon (Si) and stainless steel substrates from methane (CH_4) gas using radio-frequency plasma-enhanced chemical vapor deposition. The different substrates were characterized with respect to their surface energy by contact angle measurements, surface roughness, and morphology. Paraffin thin films were obtained by the casting technique and were subsequently characterized by an atomic force microscope in noncontact mode. The results indicate that the morphology of the paraffin deposits is strongly influenced by the substrates used. Scaling law analysis for the coated substrates reveals two distinct dynamics: a local roughness exponent (α_local) associated with short-range surface correlations and a global roughness exponent (α_global) associated with long-range surface correlations. The local dynamics is described by the Wolf-Villain model, and the global dynamics by the Kardar-Parisi-Zhang model. A local correlation length (L_local) defines the transition between the local and global dynamics, with L_local approximately 700 nm, in accordance with the spacing of planes measured from atomic force micrographs. For the uncoated substrates, the growth dynamics is related to the Edwards-Wilkinson model.
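
    A generic sketch of how a roughness exponent can be extracted from the scaling of the height-difference correlation of a profile; the synthetic random-walk input (expected exponent near 0.5) and the fitting range are illustrative assumptions, not the AFM analysis pipeline of the study.

```python
import numpy as np

def roughness_exponent(profile, max_r=64):
    """Estimate the roughness exponent alpha from the height-difference
    correlation G(r) = <(h(x+r) - h(x))^2> ~ r^(2*alpha) of a 1-D profile."""
    rs = np.arange(1, max_r + 1)
    G = [np.mean((profile[r:] - profile[:-r]) ** 2) for r in rs]
    slope, _ = np.polyfit(np.log(rs), np.log(G), 1)
    return slope / 2.0

# synthetic rough profile: a random walk, whose exponent should be close to 0.5
rng = np.random.default_rng(4)
profile = np.cumsum(rng.normal(size=4096))
print(roughness_exponent(profile))
```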

  17. Anomaly Detection in Multiple Scale for Insider Threat Analysis

    SciTech Connect

    Kim, Yoohwan; Sheldon, Frederick T; Hively, Lee M

    2012-01-01

    We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided by semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to events that are more significant and suspicious. We then build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally, the subjects are ranked by their anomaly scores to help analysts focus on high-scoring subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we present the framework of the proposed system and its operational algorithms.
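
    A minimal sketch of the profile-comparison step: today's per-category activity counts are scored against a historical profile with a simple aggregated z-score. The categories, counts, and scoring rule are hypothetical simplifications of the semantic and graph-based scoring described above.

```python
import numpy as np

def anomaly_score(history, current, eps=1e-9):
    """Aggregate z-score of today's activity counts against the user's own
    historical profile (a crude stand-in for the semantic/graph scoring)."""
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + eps
    return float(np.abs((current - mu) / sigma).sum())

# hypothetical daily counts per activity category: logins, file copies, emails
rng = np.random.default_rng(5)
history = rng.poisson(lam=[20, 5, 40], size=(90, 3))   # 90 days of history
today = np.array([22, 30, 41])                         # unusually many file copies
print(anomaly_score(history, today))
```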

  18. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877

  19. Multidimensional scaling analysis of the dynamics of a country economy.

    PubMed

    Tenreiro Machado, J A; Mata, Maria Eugénia

    2013-01-01

    This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process.
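
    A brief sketch of the generic MDS step on which such an analysis rests: pairwise dissimilarities between years of (standardised) macroeconomic indicators are embedded in two dimensions with scikit-learn. The indicator matrix here is random stand-in data, not the historical Portuguese series.

```python
import numpy as np
from sklearn.manifold import MDS

# random stand-in for yearly macroeconomic indicators: rows = years, cols = variables
rng = np.random.default_rng(6)
years = np.arange(1870, 2020)
data = rng.normal(size=(years.size, 4))

# pairwise dissimilarities between years (Euclidean distance on standardised variables)
z = (data - data.mean(axis=0)) / data.std(axis=0)
dist = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)

# embed the years in 2-D; nearby points correspond to similar historical periods
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)   # (150, 2) map of years
```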

  20. Large-Scale Quantitative Analysis of Painting Arts

    NASA Astrophysics Data System (ADS)

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.

  1. INUNDATION PATTERNS AND FATALITY ANALYSIS ON LARGE-SCALE FLOOD

    NASA Astrophysics Data System (ADS)

    Ikeuchi, Koji; Ochi, Shigeo; Yasuda, Goro; Okamura, Jiro; Aono, Masashi

    In order to enhance emergency preparedness for large-scale floods of the Ara River, we categorized the inundation patterns and calculated fatality estimates. We devised an effective continuous embankment elevation estimation method employing light detection and ranging data analysis. Drainage pump capabilities, in terms of operable inundation depth and operable duration limited by fuel supply logistics, were modeled from pump station data for each site along the rivers. Fatality reduction effects due to the enhancement of drainage capabilities were calculated. We found that proper operation of the drainage facilities can decrease the number of estimated fatalities considerably in some cases. We also estimated the difference in risk between floods with a 200-year return period and those with a 1000-year return period. In some of the 1000-year return period cases, the estimated fatalities jumped up, whereas the populations in the inundated areas changed only a little.

  2. Multidimensional Scaling Analysis of the Dynamics of a Country Economy

    PubMed Central

    Mata, Maria Eugénia

    2013-01-01

    This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process. PMID:24294132

  3. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) working as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  4. Event-scale power law recession analysis: quantifying methodological uncertainty

    NASA Astrophysics Data System (ADS)

    Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.

    2017-01-01

    The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship
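
    A minimal sketch, under simple assumptions, of fitting the power-law recession model -dQ/dt = aQ^b to a single recession segment by least squares in log space; this is only one of the several extraction and fitting choices whose sensitivity the paper evaluates.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q**b to one recession segment (strictly decreasing flow q)
    by ordinary least squares in log-log space."""
    dq_dt = np.diff(q) / dt
    q_mid = 0.5 * (q[1:] + q[:-1])
    keep = dq_dt < 0
    b, log_a = np.polyfit(np.log(q_mid[keep]), np.log(-dq_dt[keep]), 1)
    return np.exp(log_a), b

# synthetic recession generated from known parameters a = 0.05, b = 1.5
t = np.arange(0.0, 30.0)
q0, a, b = 10.0, 0.05, 1.5
q = (q0 ** (1 - b) + a * (b - 1) * t) ** (1.0 / (1 - b))  # analytic solution for b != 1
print(fit_recession(q))   # should recover values near (0.05, 1.5)
```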

  5. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in the initial design stages or in the conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor, combined with the geometric scale factor, is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both of the problems show that the scaled

  6. Analysis of the topological properties of the proximal femur on a regional scale: evaluation of multi-detector CT-scans for the assessment of biomechanical strength using local Minkowski functionals in 3D

    NASA Astrophysics Data System (ADS)

    Boehm, H. F.; Link, T. M.; Monetti, R. A.; Kuhn, V.; Eckstein, F.; Raeth, C. W.; Reiser, M.

    2006-03-01

    In our recent studies on the analysis of bone texture in the context of Osteoporosis, we could already demonstrate the great potential of the topological evaluation of bone architecture based on the Minkowski Functionals (MF) in 2D and 3D for the prediction of the mechanical strength of cubic bone specimens depicted by high resolution MRI. Other than before, we now assess the mechanical characteristics of whole hip bone specimens imaged by multi-detector computed tomography. Due to the specific properties of the imaging modality and the bone tissue in the proximal femur, this requires to introduce a new analysis method. The internal architecture of the hip is functionally highly specialized to withstand the complex pattern of external and internal forces associated with human gait. Since the direction, connectivity and distribution of the trabeculae changes considerably within narrow spatial limits it seems most reasonable to evaluate the femoral bone structure on a local scale. The Minkowski functionals are a set of morphological descriptors for the topological characterization of binarized, multi-dimensional, convex objects with respect to shape, structure, and the connectivity of their components. The MF are usually used as global descriptors and may react very sensitively to minor structural variations which presents a major limitation in a number of applications. The objective of this work is to assess the mechanical competence of whole hip bone specimens using parameters based on the MF. We introduce an algorithm that considers the local topological aspects of the bone architecture of the proximal femur allowing to identify regions within the bone that contribute more to the overall mechanical strength than others.

  7. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    SciTech Connect

    Bhatele, Abhinav

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  8. MDI-GPU: accelerating integrative modelling for genomic-scale data using GP-GPU computing.

    PubMed

    Mason, Samuel A; Sayyid, Faiz; Kirk, Paul D W; Starr, Colin; Wild, David L

    2016-03-01

    The integration of multi-dimensional datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct--but often complementary--information. However, the large amount of data adds burden to any inference task. Flexible Bayesian methods may reduce the necessity for strong modelling assumptions, but can also increase the computational burden. We present an improved implementation of a Bayesian correlated clustering algorithm that permits integrated clustering to be routinely performed across multiple datasets, each with tens of thousands of items. By exploiting GPU-based computation, we are able to improve the runtime performance of the algorithm by almost four orders of magnitude. This permits analysis across genomic-scale data sets, greatly expanding the range of applications over those originally possible. MDI is available here: http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/.

  9. Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.

    ERIC Educational Resources Information Center

    Faulstich, Michael E.; And Others

    1985-01-01

    Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)

  10. Ultrafast Multi-Dimensional Infrared Vibrational Echo Spectroscopy of Molecular Dynamics on Surfaces and in Bulk Systems

    DTIC Science & Technology

    2012-02-28

    trifluoromethanesulfonyl)imide (alkyl = ethyl, butyl, hexyl, octyl) room-temperature ionic liquids (RTIL). The two fluorescent probe molecules display ... principally orientational relaxation, caused by the addition of lithium bis(trifluoromethylsulfonyl)imide to the ionic liquid 1-butyl-3-methylimidazolium ... bis(trifluoromethylsulfonyl)imide. The lithium salt concentration was found to affect the dynamics on multiple time scales and discontinuities in

  11. Motivation and Engagement in the United States, Canada, United Kingdom, Australia, and China: Testing a Multi-Dimensional Framework

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J.

    2015-01-01

    This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory factor…

  12. In situ vitrification large-scale operational acceptance test analysis

    SciTech Connect

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

  13. Analysis of scale-invariant slab avalanche size distributions

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.; Daudon, D.

    2003-04-01

    Scale invariance of snow avalanche sizes was reported for the first time in 2001 by Louchet et al. at the EGS conference, using both acoustic emission duration and the surface of the crown step left at the top of the starting zone, where the former parameter characterises the size of the total avalanche flow and the latter that of the starting zone. The present paper focuses on parameters of the second type, which are more directly related to precise release mechanisms, viz. the crown crack length L, the crown crack or slab depth H, the crown step surface H×L, the volume H×L^2 of the snow involved in the starting zone, and L×H^2, which is a measure of the stress concentration at the basal crack tip at failure. The analysis is performed on two data sets, from La Grande Plagne (GP) and Tignes (T) ski resorts. For both catalogs, cumulative distributions of L, H, H×L, H×L^2 and L×H^2 are shown to be roughly linear in a log-log plot, i.e. consistent with so-called scale-invariant (or power-law) distributions for both triggered and natural avalanches. Plateaus are observed at small sizes, and roll-offs at large sizes. The power-law exponents for each of these quantities are roughly independent of the ski resort (GP or T) they come from. In contrast, exponents for natural events are significantly smaller than those for artificial ones. We analyse the possible reasons for the scale invariance of these quantities, for the possible "universality" of the exponents corresponding to a given triggering mode, and for the difference in exponents between triggered and natural events. The physical meaning of the observed roll-offs and plateaus is also discussed. The power-law distributions analysed here provide the occurrence probability of an avalanche of a given (starting) volume in a given time period on a given area. A possible use of this type of distribution for snow avalanche hazard assessment is contemplated, as it is for earthquakes or rockfalls.
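
    A minimal sketch of the kind of fit implied above: estimate a power-law exponent from the complementary cumulative distribution of a size parameter by linear regression in log-log space, restricted to an intermediate range so as to avoid the small-size plateau and the large-size roll-off. The data and cutoff values below are synthetic stand-ins, not the GP or T catalogs.

      import numpy as np

      # Synthetic crown-crack lengths (m) drawn from a Pareto-like law, stand-in data only.
      rng = np.random.default_rng(1)
      L = 5.0 * (1.0 + rng.pareto(a=1.8, size=500))

      # Empirical complementary cumulative distribution P(X >= L_i).
      L_sorted = np.sort(L)
      ccdf = 1.0 - np.arange(len(L_sorted)) / len(L_sorted)

      # Fit a straight line in log-log space over an intermediate range only
      # (hypothetical cutoffs standing in for the plateau and roll-off limits).
      mask = (L_sorted > 8.0) & (L_sorted < 80.0)
      slope, intercept = np.polyfit(np.log10(L_sorted[mask]), np.log10(ccdf[mask]), 1)
      print("estimated power-law exponent:", round(-slope, 2))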

  14. Large-scale dimension densities for heart rate variability analysis

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen

    2006-04-01

    In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF group (ρ_ls^μ = 0.97±0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13; EH, 0.54±0.05; YH, 0.57±0.05; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρ_ls^μ (day, 0.65±0.13; night, 0.66±0.12; n.s.) in contrast to healthy controls (day, 0.54±0.05; night, 0.61±0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
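
    The LASDID correction itself is not reproduced here, but the sketch below shows the underlying Grassberger-Procaccia correlation sum from which a dimension estimate is read off as a log-log slope, evaluated over the larger scales as in the abstract; the surrogate series, embedding parameters and scale range are all assumptions.

      import numpy as np

      def correlation_sum(x, m=3, tau=1, radii=None):
          """Grassberger-Procaccia correlation sum C(r) for a delay embedding of x."""
          n = len(x) - (m - 1) * tau
          emb = np.column_stack([x[i*tau:i*tau + n] for i in range(m)])
          # pairwise maximum-norm distances (O(n^2) memory, so short series assumed)
          d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
          d = d[np.triu_indices(n, k=1)]
          if radii is None:
              radii = np.logspace(np.log10(np.percentile(d, 5)), np.log10(d.max()), 20)
          C = np.array([(d < r).mean() for r in radii])
          return radii, C

      # Hypothetical surrogate for an RR-interval series (not Challenge data).
      rng = np.random.default_rng(2)
      x = np.cumsum(rng.standard_normal(800)) * 0.01 + 0.8
      r, C = correlation_sum(x, m=3, tau=1)
      # Dimension estimate from the slope of log C vs log r over the larger scales.
      mask = r > np.median(r)
      slope = np.polyfit(np.log(r[mask]), np.log(C[mask]), 1)[0]
      print("large-scale dimension estimate:", round(slope, 2))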

  15. Measuring Mathematics Anxiety: Psychometric Analysis of a Bidimensional Affective Scale

    ERIC Educational Resources Information Center

    Bai, Haiyan; Wang, LihShing; Pan, Wei; Frey, Mary

    2009-01-01

    The purpose of this study is to develop a theoretically and methodologically sound bidimensional affective scale measuring mathematics anxiety with high psychometric quality. The psychometric properties of a 14-item Mathematics Anxiety Scale-Revised (MAS-R) adapted from Betz's (1978) 10-item Mathematics Anxiety Scale were empirically analyzed on a…

  16. Revision and Factor Analysis of a Death Anxiety Scale.

    ERIC Educational Resources Information Center

    Thorson, James A.; Powell, F. C.

    Earlier research on death anxiety using the 34-item scale developed by Nehrke-Templer-Boyar (NTB) indicated that females and younger persons have significantly higher death anxiety. To simplify a death anxiety scale for use with different age groups, and to determine the conceptual factors actually measured by the scale, a revised 25-item…

  17. Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

    ERIC Educational Resources Information Center

    Chow, Kui Foon; Kennedy, Kerry John

    2014-01-01

    International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…

  18. Diffusion entropy analysis on the scaling behavior of financial markets

    NASA Astrophysics Data System (ADS)

    Cai, Shi-Min; Zhou, Pei-Ling; Yang, Hui-Jie; Yang, Chun-Xia; Wang, Bing-Hong; Zhou, Tao

    2006-07-01

    In this paper the diffusion entropy technique is applied to investigate the scaling behavior of financial markets. The scaling behaviors of four representative stock markets, the Dow Jones Industrial Average, Standard & Poor's 500, Hang Seng Index, and Shanghai Stock Synthetic Index, are almost the same, with scale-invariance exponents all in the interval [0.92,0.95]. We also estimate the local scaling exponents, which indicate that the financial time series are almost perfectly homogeneous. In addition, a parsimonious percolation model for stock markets is proposed, whose scaling behavior agrees well with the real-life markets.
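
    A minimal sketch of the diffusion entropy idea, assuming the standard recipe: aggregate the series over windows of length l, estimate the Shannon entropy S(l) of the resulting displacement distribution, and read the scaling exponent δ from S(l) ≈ A + δ ln l. The input below is synthetic uncorrelated noise (for which δ ≈ 0.5), not market data.

      import numpy as np

      def diffusion_entropy(xi, window_lengths, bins=50):
          """Shannon entropy of the diffusion variable x(l) = sum of xi over windows
          of length l, for each l (simplified diffusion entropy analysis)."""
          cs = np.concatenate(([0.0], np.cumsum(xi)))
          S = []
          for l in window_lengths:
              x = cs[l:] - cs[:-l]                       # overlapping window sums of length l
              p, edges = np.histogram(x, bins=bins, density=True)
              dx = edges[1] - edges[0]
              p = p[p > 0]
              S.append(-np.sum(p * np.log(p) * dx))      # differential-entropy estimate
          return np.array(S)

      # Synthetic stand-in for a detrended return series.
      rng = np.random.default_rng(3)
      xi = rng.standard_normal(20000)
      ls = np.unique(np.logspace(0.5, 3, 15).astype(int))
      S = diffusion_entropy(xi, ls)
      delta = np.polyfit(np.log(ls), S, 1)[0]            # S(l) ~ A + delta * ln(l)
      print("diffusion-entropy scaling exponent:", round(delta, 2))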

  19. Numerical Simulation and Scaling Analysis of Cell Printing

    NASA Astrophysics Data System (ADS)

    Qiao, Rui; He, Ping

    2011-11-01

    Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of such stress is very short, which helps explain why many cells can survive the cell printing process. These studies also revealed that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.

  20. Genome-scale metabolic models: reconstruction and analysis.

    PubMed

    Baart, Gino J E; Martens, Dirk E

    2012-01-01

    Metabolism can be defined as the complete set of chemical reactions that occur in living organisms in order to maintain life. Enzymes are the main players in this process as they are responsible for catalyzing the chemical reactions. The enzyme-reaction relationships can be used for the reconstruction of a network of reactions, which leads to a model of metabolism. A genome-scale metabolic network of the chemical reactions that take place inside a living organism is primarily reconstructed from the information that is present in its genome and the literature, and involves steps such as functional annotation of the genome, identification of the associated reactions and determination of their stoichiometry, assignment of localization, determination of the biomass composition, estimation of energy requirements, and definition of model constraints. This information can be integrated into a stoichiometric model of metabolism that can be used for detailed analysis of the metabolic potential of the organism using constraint-based modeling approaches and hence is valuable in understanding its metabolic capabilities.
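
    A minimal sketch of the constraint-based analysis mentioned above: flux balance analysis as a linear program that maximizes a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy network, bounds and objective are invented for illustration and bear no relation to a genome-scale reconstruction.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: uptake -> A -> B -> biomass, with an alternative drain B -> C_ext.
      # Rows = internal metabolites (A, B); columns = reactions
      #   v0: uptake (-> A), v1: A -> B, v2: B -> biomass, v3: B -> C_ext
      S = np.array([
          [ 1, -1,  0,  0],   # metabolite A
          [ 0,  1, -1, -1],   # metabolite B
      ], dtype=float)

      bounds = [(0, 10),      # uptake limited to 10 units (model constraint)
                (0, None),    # internal reactions irreversible here
                (0, None),
                (0, 5)]       # by-product secretion capped at 5

      c = np.zeros(S.shape[1])
      c[2] = -1.0             # maximize biomass flux v2 (linprog minimizes)

      res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
      print("optimal flux distribution:", res.x)
      print("maximal biomass flux:", -res.fun)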

  1. Acoustic modal analysis of a full-scale annular combustor

    NASA Technical Reports Server (NTRS)

    Karchmer, A. M.

    1982-01-01

    An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine.
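
    A minimal sketch of a least-squares azimuthal mode decomposition in the spirit described above: pressures sampled at several circumferential positions are fitted with low-order cosine/sine modes. The sensor layout and snapshot are invented, and the actual analysis works with cross-spectra and the statistical-independence assumptions described in the abstract rather than a single snapshot.

      import numpy as np

      # Hypothetical azimuthal sensor angles (rad) and one pressure snapshot at each sensor.
      theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
      rng = np.random.default_rng(4)
      p = (1.0 * np.cos(2 * theta) + 0.4 * np.sin(2 * theta)   # dominant m = 2 content
           + 0.2 * np.cos(theta) + 0.05 * rng.standard_normal(theta.size))

      # Least-squares fit of p(theta) ~ sum_m [a_m cos(m theta) + b_m sin(m theta)], m = 0..3.
      n_modes = 4
      A = np.column_stack([np.cos(m * theta) for m in range(n_modes)] +
                          [np.sin(m * theta) for m in range(1, n_modes)])
      coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)
      a, b = coeffs[:n_modes], coeffs[n_modes:]
      print("cosine amplitudes a_0..a_3:", np.round(a, 3))
      print("sine amplitudes   b_1..b_3:", np.round(b, 3))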

  2. MIXREGLS: A Program for Mixed-Effects Location Scale Analysis

    PubMed Central

    Hedeker, Donald; Nordgren, Rachel

    2013-01-01

    MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured at many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062

  3. Review of multi-dimensional large-scale kinetic simulation and physics validation of ion acceleration in relativistic laser-matter interaction

    SciTech Connect

    Wu, Hui-Chun; Hegelich, B.M.; Fernandez, J.C.; Shah, R.C.; Palaniyappan, S.; Jung, D.; Yin, L; Albright, B.J.; Bowers, K.; Huang, C.; Kwan, T.J.

    2012-06-19

    Two new experimental technologies enabled the realization of the Break-out afterburner (BOA): the high-quality Trident laser and free-standing C nm-targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC have been validated, including the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates computation) has been optimized extensively for many different supercomputers. Laser-driven ions lead to the realization of promising applications, such as ion-based fast ignition, active interrogation, and hadron therapy.

  4. Scales

    ScienceCinema

    Murray Gibson

    2016-07-12

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
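
    The integer-ratio statement above in a few lines of arithmetic (the reference pitch is an arbitrary choice, not part of the record):

      c = 261.63                      # Hz, an arbitrary reference C
      print("G =", c * 3 / 2, "Hz")   # perfect fifth, ratio 3/2
      print("E =", c * 5 / 4, "Hz")   # major third, ratio 5/4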

  5. Scales

    SciTech Connect

    Murray Gibson

    2007-04-27

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  6. Stochastic analysis of shear-wave splitting length scales

    NASA Astrophysics Data System (ADS)

    Becker, Thorsten W.; Browaeys, Jules T.; Jordan, Thomas H.

    2007-07-01

    The coherence of azimuthal seismic anisotropy, as inferred from shear-wave splitting measurements, decreases with the relative distance between stations. Stochastic models of a two-dimensional vector field defined by a von Kármán [T. von Kármán, Progress in the statistical theory of turbulence, J. Mar. Res., 7 (1948) 252-264.] autocorrelation function with horizontal correlation length L provide a useful means to evaluate this heterogeneity and coherence lengths. We use the compilation of SKS splitting measurements by Fouch [M. Fouch, Upper mantle anisotropy database, accessed in 06/2006, http://geophysics.asu.edu/anisotropy/upper/] and supplement it with additional studies, including automated measurements by Evans et al. [Evans, M.S., Kendall, J.-M., Willemann, R.J., 2006. Automated SKS splitting and upper-mantle anisotropy beneath Canadian seismic stations, Geophys. J. Int. 165, 931-942, Evans, M.S., Kendall, J.-M., Willemann, R.J. Automated splitting project database, Online at http://www.isc.ac.uk/SKS/, accessed 02/2006]. The correlation lengths of this dataset depend on the geologic setting in the continental regions: in young Phanerozoic orogens and magmatic zones L ~ 600 km, smaller than the smooth L ~ 1600 km patterns in tectonically more stable regions such as Phanerozoic platforms. Our interpretation is that the relatively large coherence underneath older crust reflects large-scale tectonic processes (e.g. continent-continent collisions) that are frozen into the tectosphere. In younger continental regions, smaller scale flow (e.g. slab anomaly induced) may predominantly affect anisotropy. In this view, remnant anisotropy is dominant in the old continents and deformation-induced anisotropy caused by recent asthenospheric flow is dominant in active continental regions and underneath oceanic plates. Auxiliary analysis of surface-wave anisotropy and combined mantle flow and anisotropic texture modeling is consistent with this suggestion. In continental
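
    A minimal sketch of the von Kármán autocorrelation function used in such stochastic models, normalized so that C(0) = σ²; the correlation length, smoothness parameter and station separations below are placeholders, not fitted values from the splitting compilation.

      import numpy as np
      from scipy.special import gamma, kv

      def von_karman_corr(r, L=600.0, nu=0.5, sigma2=1.0):
          """von Karman autocorrelation C(r) = sigma2 * 2^(1-nu)/Gamma(nu) * (r/L)^nu * K_nu(r/L)."""
          r = np.asarray(r, dtype=float)
          x = np.where(r > 0, r / L, 1.0)          # dummy argument at r = 0, replaced below
          c = sigma2 * 2.0 ** (1 - nu) / gamma(nu) * x ** nu * kv(nu, x)
          return np.where(r > 0, c, sigma2)        # C(0) = sigma2 by definition

      # Correlation versus station separation (km); placeholder parameter choices.
      r = np.array([0.0, 100.0, 300.0, 600.0, 1200.0, 2400.0])
      print(np.round(von_karman_corr(r, L=600.0, nu=0.5), 3))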

  7. Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer

    NASA Astrophysics Data System (ADS)

    Schnieders, Jana; Garbe, Christoph

    2014-05-01

    The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these types of measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features of scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2] turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution a new model to estimate the heat flux is applied, which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed. In this contribution, we will quantify the effect of small scale

  8. Self-similarity Detection via Multi-scale Image Analysis

    NASA Astrophysics Data System (ADS)

    Kamejima, Kohji

    A dynamic scheme is presented for generating multi-scale images associated with self-similar patterns. By blurring with a small scale parameter, brightness distributions are extended to geometrically singular fractal patterns. Through weighted averaging with respect to scale factors, a multi-scale image is generated as a representation of the conditional probability for capturing unknown attractors. The local structure of the multi-scale image is analyzed to demonstrate the structural consistency of the capturing probability with respect to the imaging process associated with the attractor. By extracting stochastic features based on the capturing probability, a computational scheme is introduced for matching observed attractors with a preassigned dictionary of patterns. The proposed method was verified by simulation studies.

  9. Impact and fracture analysis of fish scales from Arapaima gigas.

    PubMed

    Torres, F G; Malásquez, M; Troncoso, O P

    2015-06-01

    Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour at three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of collagen layers. The energy absorbed during impact tests was measured for each sample and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display fragile behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature.

  10. Regional Scale Analysis of Extremes in an SRM Geoengineering Simulation

    NASA Astrophysics Data System (ADS)

    Muthyala, R.; Bala, G.

    2014-12-01

    Only a few studies in the past have investigated the statistics of extreme events under geoengineering. In this study, a global climate model is used to investigate the impact of solar radiation management on extreme precipitation events at the regional scale. The solar constant was reduced by 2.25% to counteract the global mean surface temperature change caused by a doubling of CO2 (2XCO2) from its preindustrial control value. Using daily precipitation rates, extreme events are defined as those which exceed the 99.9th percentile precipitation threshold. Extremes are substantially reduced in the geoengineering simulation: the magnitude of change is much smaller than in a simulation with doubled CO2. A regional analysis over 22 Giorgi land regions is also performed. Doubling of CO2 leads to an increase in the intensity of extreme (99.9th percentile) precipitation by 17.7% on a global-mean basis, with a maximum increase in intensity over the South Asian region of 37%. In the geoengineering simulation, there is a global-mean reduction in intensity of 3.8%, with a maximum reduction over the Tropical Ocean of 8.9%. Further, we find that the doubled CO2 simulation shows an increase in the frequency of extremes (>50 mm/day) by 50-200%, with a global mean increase of 80%. In contrast, in the geoengineered climate there is a decrease in the frequency of extreme events by 20% globally, with a larger decrease over the Tropical Ocean of 30%. In both climate states (2XCO2 and geoengineering), the change in "extremes" is always greater than the change in "means" over large domains. We conclude that changes in precipitation extremes are larger in the 2XCO2 scenario compared to the preindustrial climate, while extremes decline slightly in the geoengineered climate. We are also investigating the changes in extreme statistics for daily maximum and minimum temperature, evapotranspiration and vegetation productivity. Results will be presented at the meeting.
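
    A minimal sketch of the percentile bookkeeping described above: define the extreme-event threshold as the 99.9th percentile of a control series of daily precipitation and compare intensity and frequency of exceedances in a perturbed series. Both series are synthetic; they are not model output from the study.

      import numpy as np

      rng = np.random.default_rng(5)
      # Synthetic daily precipitation (mm/day) for a control and a perturbed climate.
      control   = rng.gamma(shape=0.6, scale=6.0, size=50 * 365)
      perturbed = rng.gamma(shape=0.6, scale=7.0, size=50 * 365)

      thr = np.percentile(control, 99.9)           # extreme-event threshold from the control run

      def extreme_stats(p, thr):
          ex = p[p > thr]
          return ex.mean(), ex.size / p.size       # intensity and frequency of exceedances

      i_c, f_c = extreme_stats(control, thr)
      i_p, f_p = extreme_stats(perturbed, thr)
      print(f"intensity change: {100*(i_p/i_c - 1):.1f}%  frequency change: {100*(f_p/f_c - 1):.1f}%")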

  11. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    SciTech Connect

    Lee, S

    2008-05-28

    Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns mainly driven by temperature gradients inside vapor space in a large-scaled Saltstone vault facility at Savannah River site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimations of the flow patterns within the vapor space. One is the reference nominal case. The other is for the negative temperature gradient between the roof inner and top grout surface temperatures intended for the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that the ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by the Benard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations will be discussed here.

  12. A theoretical analysis of basin-scale groundwater temperature distribution

    NASA Astrophysics Data System (ADS)

    An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2015-03-01

    The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814

  13. Scaling parameters for PFBC cyclone separator system analysis

    SciTech Connect

    Gil, A.; Romeo, L.M.; Cortes, C.

    1999-07-01

    Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have helped develop the understanding of fluidized bed hydrodynamics. In order for the results of the research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but up to now no attention has been paid to developing the knowledge of cyclone separator systems. CIRCE has worked on the development of scaling parameters to enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and an experimental comparison of a cyclone separator system and a cold flow model constructed and based on those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurements of flow patterns in cyclones and diplegs, will allow characterizing the performance of the full-scale ash removal system, establishing safe limits of operation and testing design improvements.

  14. Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)

    ERIC Educational Resources Information Center

    Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.

    2004-01-01

    Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…

  15. A Critical Analysis of the Concept of Scale Dependent Macrodispersivity

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Cvetkovic, Vladimir; Dagan, Gedeon; Dietrich, Peter; Fiori, Aldo; Rubin, Yoram; Teutsch, Georg

    2015-04-01

    Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally αL is scale dependent, increasing with the travel distance L of the plume centroid, stabilizing eventually at a constant value (Fickian regime). It was surmised in the literature that αL scales up with travel distance L following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al, 1988, Neuman, 1990, Xu and Eckstein, 1995, Schulze-Makuch, 2005), by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al, 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992, Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999), because they, presumably, offer a convenient prediction tool, with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g. Neuman, 1990 and 2010, Hunt et al, 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL, is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. spatial distribution of hydraulic conductivity, and the use of adequate models.

  16. A Critical Analysis of the Concept of Scale Dependent Macrodispersivity

    NASA Astrophysics Data System (ADS)

    Zech, A.; Attinger, S.; Cvetkovic, V.; Dagan, G.; Dietrich, P.; Fiori, A.; Rubin, Y.; Teutsch, G.

    2014-12-01

    Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally αL is scale dependent, increasing with the travel distance L of the plume centroid, stabilizing eventually at a constant value (Fickian regime). It was surmised in the literature that αL(L) scales up with travel distance following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al, 1988, Neuman, 1990, Xu and Eckstein, 1995, Schulze-Makuch, 2005), by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al, 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992, Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999), because they, presumably, offer a convenient prediction tool, with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g. Neuman, 1990 and 2010, Hunt et al, 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL(L), is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. spatial distribution of hydraulic conductivity, and the use of adequate models.

  17. Lattice analysis for the energy scale of QCD phenomena.

    PubMed

    Yamamoto, Arata; Suganuma, Hideo

    2008-12-12

    We formulate a new framework in lattice QCD to study the relevant energy scale of QCD phenomena. By considering the Fourier transformation of the link variable, we can investigate the intrinsic energy scale of a physical quantity nonperturbatively. This framework is broadly applicable to all lattice QCD calculations. We apply this framework to the quark-antiquark potential and meson masses in quenched lattice QCD. The gluonic energy scale relevant for confinement is found to be less than 1 GeV in the Landau or Coulomb gauge.

  18. Confirmatory Factor Analysis of the Scales for Diagnosing Attention Deficit Hyperactivity Disorder (SCALES)

    ERIC Educational Resources Information Center

    Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.

    2010-01-01

    The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…

  19. Rating Scale Analysis and Psychometric Properties of the Caregiver Self-Efficacy Scale for Transfers

    ERIC Educational Resources Information Center

    Cipriani, Daniel J.; Hensen, Francine E.; McPeck, Danielle L.; Kubec, Gina L. D.; Thomas, Julie J.

    2012-01-01

    Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid…

  20. Refining a self-assessment of informatics competency scale using Mokken scaling analysis.

    PubMed

    Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne

    2015-01-01

    Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions.
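
    The full Mokken procedure (automated item selection, monotonicity and invariant-item-ordering checks) is not reproduced here, but the sketch below computes the scale-level Loevinger H coefficient that underlies the scalability step, for dichotomous items; the simulated responses and the dichotomous simplification are assumptions, since the published analysis used the actual SANICS data.

      import numpy as np

      def loevinger_H(X):
          """Scale-level Loevinger H for a respondents-by-items 0/1 matrix
          (simplified; real Mokken analysis adds item selection and model checks)."""
          n, k = X.shape
          p = X.mean(axis=0)                     # item popularities
          order = np.argsort(-p)                 # easiest (highest p) first
          X, p = X[:, order], p[order]
          F = E = 0.0
          for e in range(k):                     # e = easier item, h = harder item
              for h in range(e + 1, k):
                  F += np.sum((X[:, h] == 1) & (X[:, e] == 0))   # observed Guttman errors
                  E += n * p[h] * (1 - p[e])                     # expected under independence
          return 1.0 - F / E

      # Simulated dichotomous competency responses driven by one latent trait.
      rng = np.random.default_rng(6)
      theta = rng.standard_normal(600)
      difficulty = np.linspace(-1.5, 1.5, 18)
      prob = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))
      X = (rng.random(prob.shape) < prob).astype(int)
      print("Loevinger H:", round(loevinger_H(X), 3))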

  1. Characterizing multi-dimensionality of urban sprawl in Jamnagar, India using multi-date remote sensing data

    NASA Astrophysics Data System (ADS)

    Jain, G.; Sharma, S.; Vyas, A.; Rajawat, A. S.

    2014-11-01

    This study attempts to measure and characterize urban sprawl using its multiple dimensions in the Jamnagar city, India. The study utilized multi-date satellite images acquired by the CORONA, IRS 1D PAN & LISS-3, IRS P6 LISS-4 and Resourcesat-2 LISS-4 sensors. The extent of urban growth in the study area was mapped at 1 : 25,000 scale for the years 1965, 2000, 2005 and 2011. The growth of urban areas was further categorized into infill growth, expansion and leapfrog development. The city witnessed urban growth of 1.60 % per annum during the period 2000-2011, whereas the population growth during the same period was observed at less than 1.0 % per annum. The new development in the city during the 2000-2005 period comprised 22 % infill development, 60 % extension of the peripheral urbanized areas, and 18 % leapfrogged development. However, during the 2005-2011 period the proportion of leapfrog development increased to 28 %, whereas, due to the decrease in availability of developable area within the city, infill development declined to 9 %. The urban sprawl in Jamnagar city was further characterized on the basis of five dimensions of sprawl, viz. population density, continuity, clustering, concentration and centrality, by integrating the population data with sprawl for the years 2001 and 2011. The study characterised the growth of Jamnagar as low-density, low-concentration outward expansion.

  2. SU-E-T-472: A Multi-Dimensional Measurements Comparison to Analyze a 3D Patient Specific QA Tool

    SciTech Connect

    Ashmeg, S; Jackson, J; Zhang, Y; Oldham, M; Yin, F; Ren, L

    2014-06-01

    Purpose: To quantitatively evaluate a 3D patient specific QA tool using 2D film and 3D Presage dosimetry. Methods: A brain IMRT case was delivered to Delta4, EBT2 film and a Presage plastic dosimeter. The film was inserted in the solid water slabs at 7.5 cm depth for measurement. The Presage dosimeter was inserted into a head phantom for 3D dose measurement. Delta4's Anatomy software was used to calculate the corresponding dose to the film in the solid water slabs and to Presage in the head phantom. The results from Anatomy were compared to both the calculated results from Eclipse and the measured dose from film and Presage to evaluate its accuracy. Using RIT software, we compared the "Anatomy" dose to the EBT2 film measurement and the film measurement to the ECLIPSE calculation. For 3D analysis, the DICOM file from "Anatomy" was extracted and imported into CERR software, which was used to compare the Presage dose to both the "Anatomy" calculation and the ECLIPSE calculation. Gamma criteria of 3% - 3mm and 5% - 5mm were used for comparison. Results: Gamma passing rates of film vs "Anatomy", "Anatomy" vs ECLIPSE and film vs ECLIPSE were 82.8%, 70.9% and 87.6%, respectively, when the 3% - 3mm criterion is used. When the criterion is changed to 5% - 5mm, the passing rates became 87.8%, 76.3% and 90.8%, respectively. For 3D analysis, Anatomy vs ECLIPSE showed gamma passing rates of 86.4% and 93.3% for 3% - 3mm and 5% - 5mm, respectively. The rate is 77.0% for the Presage vs ECLIPSE analysis. The Anatomy vs ECLIPSE comparisons were absolute dose comparisons, whereas the film and Presage analyses were relative comparisons. Conclusion: The results show a higher passing rate in 3D than in 2D in the "Anatomy" software. This could be due to the higher degrees of freedom in 3D than in 2D for gamma analysis.
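
    The commercial software's implementation is not reproduced here; the sketch below is a generic global gamma index (combined dose-difference and distance-to-agreement test) in 1D, which is the quantity behind the 3% - 3mm and 5% - 5mm passing rates quoted above. The dose profiles and criteria values are illustrative only.

      import numpy as np

      def gamma_index(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
          """Global 1D gamma: for each evaluated point, minimize the combined
          dose-difference / distance-to-agreement metric over the reference profile."""
          d_norm = dose_tol * d_ref.max()            # global dose criterion
          gam = np.empty(len(x_eval))
          for i, (x, d) in enumerate(zip(x_eval, d_eval)):
              dd = (d - d_ref) / d_norm
              dx = (x - x_ref) / dta_mm
              gam[i] = np.sqrt(dx**2 + dd**2).min()
          return gam

      # Hypothetical calculated (reference) and measured dose profiles, positions in mm.
      x = np.linspace(-50, 50, 201)
      calc = np.exp(-(x / 30.0)**4)                  # reference profile
      meas = 1.02 * np.exp(-((x - 1.0) / 30.0)**4)   # slightly shifted and scaled measurement
      g = gamma_index(x, calc, x, meas, dose_tol=0.03, dta_mm=3.0)
      print("passing rate (gamma <= 1):", round(100 * (g <= 1).mean(), 1), "%")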

  3. A spatially stabilized TDG based finite element framework for modeling biofilm growth with a multi-dimensional multi-species continuum biofilm model

    NASA Astrophysics Data System (ADS)

    Feng, D.; Neuweiler, I.; Nackenhorst, U.

    2017-02-01

    We consider a model for biofilm growth in the continuum mechanics framework, where the growth of different components of biomass is governed by a time dependent advection-reaction equation. The recently developed time-discontinuous Galerkin (TDG) method combined with two different stabilization techniques, namely the Streamline Upwind Petrov Galerkin (SUPG) method and the finite increment calculus (FIC) method, are discussed as solution strategies for a multi-dimensional multi-species biofilm growth model. The biofilm interface in the model is described by a convective movement following a potential flow coupled to the reaction inside of the biofilm. Growth limiting substrates diffuse through a boundary layer on top of the biofilm interface. A rolling ball method is applied to obtain a boundary layer of constant height. We compare different measures of the numerical dissipation and dispersion of the simulation results in particular for those with non-trivial patterns. By using these measures, a comparative study of the TDG-SUPG and TDG-FIC schemes as well as sensitivity studies on the time step size, the spatial element size and temporal accuracy are presented.

  4. On the Cauchy problem for the multi-dimensional Cauchy-Riemann operator in the Lebesgue space L^2 in a domain

    NASA Astrophysics Data System (ADS)

    Fedchenko, Dmitrii P.; Shlapunov, Aleksandr A.

    2008-12-01

    Let D be a bounded domain in \mathbb{C}^n (n \ge 1) with infinitely smooth boundary \partial D. We describe necessary and sufficient conditions for the solvability of the Cauchy problem in the Lebesgue space L^2(D) in the domain D for the multi-dimensional Cauchy-Riemann operator \overline\partial. As an example we consider the situation where the domain D is the part of a spherical shell \Omega(r,R) = B(R) \setminus \overline{B(r)}, 0 < r < R, in \mathbb{C}^n, where B(R) is the ball of radius R with centre at the origin, cut off by a smooth hypersurface \Gamma with the same orientation as \partial D. In this case, using the Laurent expansion for harmonic functions in the shell \Omega(r,R) we construct the Carleman formula for recovering a function in the Lebesgue space L^2(D) from its values on \overline\Gamma and the values of \overline\partial u in the domain D, if these values belong to L^2(\Gamma) and L^2(D), respectively. Bibliography: 16 titles.

  5. Neuron-glia interaction as a possible glue to translate the mind-brain gap: a novel multi-dimensional approach toward psychology and psychiatry.

    PubMed

    Kato, Takahiro A; Watabe, Motoki; Kanba, Shigenobu

    2013-10-21

    Neurons and synapses have long been the dominant focus of neuroscience, thus the pathophysiology of psychiatric disorders has come to be understood within the neuronal doctrine. However, the majority of cells in the brain are not neurons but glial cells, including astrocytes, oligodendrocytes, and microglia. Traditionally, neuroscientists regarded glial functions as simply providing physical support and maintenance for neurons. Thus, in this limited role glia had long been ignored. Recently, glial functions have been gradually investigated, and increasing evidence has suggested that glial cells perform important roles in various brain functions. Uncovering these glial functions, further understanding these crucial cells, and exploring the interaction between neurons and glia may shed new light on many unknown aspects, including the mind-brain gap and conscious-unconscious relationships. We briefly review the current state of glial research in the field and propose a novel translational research approach with a multi-dimensional model, combining various experimental approaches such as animal studies, in vitro and in vivo neuron-glia studies, a variety of human brain imaging investigations, and psychometric assessments.

  6. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    SciTech Connect

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  7. A WENO-limited, ADER-DT, finite-volume scheme for efficient, robust, and communication-avoiding multi-dimensional transport

    NASA Astrophysics Data System (ADS)

    Norman, Matthew R.

    2014-10-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  8. A multi-dimensional high-order DG-ALE method based on gas-kinetic theory with application to oscillating bodies

    NASA Astrophysics Data System (ADS)

    Ren, Xiaodong; Xu, Kun; Shyy, Wei

    2016-07-01

    This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Differently from the previous ALE-type gas kinetic method with piecewise constant mesh velocity at each cell interface within each time step, the mesh velocity variation inside a cell and the mesh moving and rotating at a cell interface have been accounted for in the finite element framework. As a result, the current scheme is applicable for any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically, and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.

  9. Analysis of Geomechanical Behavior for the Drift Scale Test

    SciTech Connect

    Blair, S C; Carlson, S R; Wagoner, J L

    2001-03-05

    The Drift Scale Test (DST) now underway at Yucca Mountain has been simulated using a Drift Scale Distinct Element (DSDE) model. Simulated deformations show good agreement with field deformation measurements. Results indicate most fracture deformation is located above the crown of the Heated Drift. This work is part of the model validation effort for the DSDE model, which is used to assess thermal-mechanical effects on the hydrology of the rock mass surrounding a potential repository.

  10. Three-dimensional analysis of scale morphology in bluegill sunfish, Lepomis macrochirus.

    PubMed

    Wainwright, Dylan K; Lauder, George V

    2016-06-01

    Fish scales are morphologically diverse among species, within species, and on individuals. Scales of bony fishes are often categorized into three main types: cycloid scales have smooth edges; spinoid scales have spines protruding from the body of the scale; ctenoid scales have interdigitating spines protruding from the posterior margin of the scale. For this study, we used two- and three-dimensional (2D and 3D) visualization techniques to investigate scale morphology of bluegill sunfish (Lepomis macrochirus) on different regions of the body. Micro-CT scanning was used to visualize individual scales taken from different regions, and a new technique called GelSight was used to rapidly measure the 3D surface structure and elevation profiles of in situ scale patches from different regions. We used these data to compare the surface morphology of scales from different regions, using morphological measurements and surface metrology metrics to develop a set of shape variables. We performed a discriminant function analysis to show that bluegill scales differ across the body - scales are cycloid on the opercle but ctenoid on the rest of the body, and the proportion of ctenii coverage increases ventrally on the fish. Scales on the opercle and just below the anterior spinous dorsal fin were smaller in height, length, and thickness than scales elsewhere on the body. Surface roughness did not appear to differ over the body of the fish, although scales at the start of the caudal peduncle had higher skew values than other scales, indicating they have a surface that contains more peaks than valleys. Scale shape also differs along the body, with scales near the base of the tail having a more elongated shape. This study adds to our knowledge of scale structure and diversity in fishes, and the 3D measurement of scale surface structure provides the basis for future testing of functional hypotheses relating scale morphology to locomotor performance.

  11. Mokken scale analysis of the UPDRS: dimensionality of the Motor Section revisited.

    PubMed

    Stochl, Jan; Boomsma, Anne; van Duijn, Marijtje; Brozová, Hana; Růzická, Evzen

    2008-02-01

    The dimensionality and reliability of the Motor Section of the Unified Parkinson Disease Rating Scale (UPDRS III) was studied with non-parametric Mokken scale analysis. UPDRS measures were obtained on 147 patients with PD (96 men, 51 women, mean age 61, range 35-80 yrs). Mokken scale analysis revealed a four-dimensional structure of the UPDRS III. Left-sided bradykinesia and rigidity appeared to co-occur with axial signs, gait disturbance, and speech/hypomimia, whereas right-sided bradykinesia and rigidity formed a second scale. Two further small scales were found consisting of right- and left-sided tremor. Results from the scale analysis reveal that all four subscales are strong. The reliability of the two tremor scales is low because they only contain three and four items, respectively.
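
    Mokken scaling judges scale strength with Loevinger's scalability coefficient H, where H of roughly 0.3, 0.4, and 0.5 are conventionally read as thresholds for weak, medium, and strong scales. As a rough orientation only, the pairwise coefficient for dichotomous (0/1) items can be sketched as below; the UPDRS items are polytomous, and the published analysis uses the general polytomous form rather than this simplification.

      # Pairwise Loevinger H for two dichotomous (0/1) item score vectors
      # (illustrative simplification; Mokken analysis of the UPDRS uses the
      # polytomous generalization).
      import numpy as np

      def loevinger_h(x, y):
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.mean(x * y) - x.mean() * y.mean()       # observed covariance
          p_easy = max(x.mean(), y.mean())
          p_hard = min(x.mean(), y.mean())
          cov_max = p_hard * (1.0 - p_easy)                # max covariance given the margins
          return cov / cov_max

      # Toy example: two items scored by 8 subjects.
      item_a = [1, 1, 1, 0, 1, 0, 0, 0]
      item_b = [1, 1, 0, 0, 1, 0, 1, 0]
      print(round(loevinger_h(item_a, item_b), 3))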

  12. Effect of a multi-dimensional flux function on the monotonicity of Euler and Navier-Stokes computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Van Leer, Bram; Roe, Philip L.

    1991-01-01

    A limiting method has been devised for a grid-independent flux function for use with the two-dimensional Euler and Navier-Stokes equations. This limiting is derived from a monotonicity analysis of the model and allows for solutions with reduced oscillatory behavior while still maintaining sharper resolution than a grid-aligned method. In addition to capturing oblique waves sharply, the grid-independent flux function also reduces the entropy generated over an airfoil in an Euler computation and reduces pressure distortions in the separated boundary layer of a viscous-flow airfoil computation. The model has also been extended to three dimensions, although no angle-limiting procedure for improving monotonicity characteristics has been incorporated.

  13. [Development of the Trait Respect-Related Emotions Scale for late adolescence].

    PubMed

    Muto, Sera

    2016-02-01

    This study developed a scale to measure respect-related emotional traits (the Trait Respect-Related Emotions Scale) for late adolescence and examined its reliability and validity. In Study 1, 368 university students completed the items of the Trait Respect-Related Emotions Scale together with other scales measuring theoretically important personality constructs, including adult attachment style, the "Big Five," self-esteem, and two types of narcissistic personality. Factor analysis indicated three factors of trait respect-related emotions: (a) trait (prototypical) respect; (b) trait idolatry (worship and adoration); and (c) trait awe. The three traits were differentially associated with the daily experience (frequency) of the five basic respect-related emotions (prototypical respect, idolatry, awe, admiration, and wonder) and with the other constructs. In Study 2, a test-retest correlation of the new scale with 60 university students indicated good reliability. Both studies generally supported the reliability and validity of the new scale. These findings suggest that, at least in late adolescence, there are large individual differences in respect-related emotional experience and that trait respect should be considered a multi-dimensional construct.

  14. Scale Development Research: A Content Analysis and Recommendations for Best Practices

    ERIC Educational Resources Information Center

    Worthington, Roger L.; Whittaker, Tiffany A.

    2006-01-01

    The authors conducted a content analysis on new scale development articles appearing in the "Journal of Counseling Psychology" during 10 years (1995 to 2004). The authors analyze and discuss characteristics of the exploratory and confirmatory factor analysis procedures in these scale development studies with respect to sample…

  15. Modulation analysis of large-scale discrete vortices.

    PubMed

    Cisneros, Luis A; Minzoni, Antonmaria A; Panayotaros, Panayotis; Smyth, Noel F

    2008-09-01

    The behavior of large-scale vortices governed by the discrete nonlinear Schrödinger equation is studied. Using a discrete version of modulation theory, it is shown how vortices are trapped and stabilized by the self-consistent Peierls-Nabarro potential that they generate in the lattice. Large-scale circular and polygonal vortices are studied away from the anticontinuum limit, which is the limit considered in previous studies. In addition numerical studies are performed on large-scale, straight structures, and it is found that they are stabilized by a nonconstant mean level produced by standing waves generated at the ends of the structure. Finally, numerical evidence is produced for long-lived, localized, quasiperiodic structures.
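
    For orientation, the lattice model referred to here, the discrete nonlinear Schrödinger (DNLS) equation on a two-dimensional lattice, is commonly written (sign and coupling conventions vary) as

      i\,\dot u_{n,m} + C\big(u_{n+1,m} + u_{n-1,m} + u_{n,m+1} + u_{n,m-1} - 4u_{n,m}\big) + |u_{n,m}|^2 u_{n,m} = 0,

    where the anticontinuum limit corresponds to vanishing coupling C -> 0; the modulation analysis above studies vortex solutions of this type of equation away from that limit.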

  16. Stochastic averaging and sensitivity analysis for two scale reaction networks

    NASA Astrophysics Data System (ADS)

    Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.

    2016-02-01

    In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.
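
    The batch-means idea mentioned at the end can be sketched generically: split a long trajectory of a scalar observable into batches, use the spread of the batch means to estimate the Monte Carlo error of the running average, and stop once that error is small enough. The batch count, tolerance, and stand-in observable below are our illustrative choices, not the paper's adaptive rule.

      # Generic batch-means stopping check for a stream of correlated samples
      # (illustrative sketch; the paper's adaptive rule differs in its details).
      import numpy as np

      def batch_means_halfwidth(samples, n_batches=20):
          """Approximate 95% confidence half-width of the mean from batch means."""
          samples = np.asarray(samples, float)
          m = len(samples) // n_batches
          batches = samples[: m * n_batches].reshape(n_batches, m).mean(axis=1)
          return 1.96 * batches.std(ddof=1) / np.sqrt(n_batches)

      rng = np.random.default_rng(1)
      trajectory, tol = [], 0.05
      while True:
          trajectory.extend(rng.normal(loc=2.0, scale=1.0, size=1000))  # stand-in for simulation output
          if batch_means_halfwidth(trajectory) < tol:
              break
      print(len(trajectory), np.mean(trajectory))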

  17. 'Scaling' analysis of the ice accretion process on aircraft surfaces

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.; Tabrizi, A. H.; Missimer, J. R.

    1982-01-01

    A comprehensive set of scaling parameters is developed for the ice accretion process by analyzing the energy equations of the dynamic freezing zone and the already-frozen ice layer, together with the continuity equation for the supercooled liquid droplets entering and impacting within the dynamic freezing zone. No initial arbitrary judgments are made regarding the relative magnitudes of the individual terms. The method of intrinsic reference variables is employed so that the appropriate scaling parameters, and their relative significance under rime icing conditions, are developed in an orderly manner rather than through empiricism. The significance of these parameters is examined, and the parameters are combined with scaling criteria related to droplet-trajectory similitude.

  18. Scale analysis of equatorial plasma irregularities derived from Swarm constellation

    NASA Astrophysics Data System (ADS)

    Xiong, Chao; Stolle, Claudia; Lühr, Hermann; Park, Jaeheung; Fejer, Bela G.; Kervalishvili, Guram N.

    2016-07-01

    In this study, we investigated the scale sizes of equatorial plasma irregularities (EPIs) using measurements from the Swarm satellites during their early mission and final constellation phases. Once the longitudinal separation between Swarm satellites exceeds 0.4°, no significant correlation between the EPI signatures remains, suggesting that EPI structures include plasma density scale sizes of less than 44 km in the zonal direction. During the early mission phase, clearly better EPI correlations are obtained in the northern hemisphere, implying more fragmented irregularities in the southern hemisphere, where the ambient magnetic field is low. The previously reported inverted-C shell structure of EPIs is generally confirmed by the Swarm observations in the northern hemisphere, although with various tilt angles. From the Swarm spacecraft with zonal separations of about 150 km, we conclude that larger zonal scale sizes of irregularities exist during the early evening hours (around 1900 LT).
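
    The quoted 44 km follows from converting the 0.4° longitudinal separation into an equatorial arc length at the Earth's surface (the exact figure depends on the reference radius and latitude assumed):

      0.4^\circ \times \frac{\pi}{180^\circ} \times R_E \approx 0.4 \times 0.01745 \times 6371\ \mathrm{km} \approx 44\ \mathrm{km}.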

  19. Scaling analysis of repository heat load for reduced dimensionality models

    SciTech Connect

    Itamua, Michael T.; Ho, Clifford K.

    1998-06-04

    The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.

  20. A technique for multi-dimensional optimization of radiation dose, contrast dose, and image quality in CT imaging

    NASA Astrophysics Data System (ADS)

    Sahbaee, Pooyan; Abadi, Ehsan; Sanders, Jeremiah; Becchetti, Marc; Zhang, Yakun; Agasthya, Greeshma; Segars, Paul; Samei, Ehsan

    2016-03-01

    The purpose of this study was to substantiate the interdependency of image quality, radiation dose, and contrast material dose in CT, toward patient-specific optimization of imaging protocols. The study deployed two phantom platforms. First, a variable-sized phantom containing an iodinated insert was imaged on a representative CT scanner at multiple CTDI values. The contrast and noise were measured from the reconstructed images for each phantom diameter, and the contrast-to-noise ratio (CNR), which is linearly related to iodine concentration, was calculated for different iodine-concentration levels. Second, the analysis was extended to a recently developed suite of 58 virtual human models (5D-XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal imaging procedure and targeting peak enhancement in the aorta, each XCAT phantom was "imaged" using a CT simulation platform. 3D surfaces for each patient/size established the relationship between iodine concentration, dose, and CNR. The sensitivity ratio (SR), defined as the ratio of the change in iodine concentration to the change in dose that yields a constant change in CNR, was calculated and compared at high and low radiation dose for both phantom platforms. The results show that the sensitivity of CNR to iodine concentration is larger at high radiation dose (by up to 73%). The SR results were strongly affected by the choice of radiation dose metric (CTDI or organ dose). Furthermore, the results showed that the presence of contrast material can have a profound impact on the optimization results (by up to 45%).
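
    For readers unfamiliar with the metrics, the contrast-to-noise ratio and the sensitivity ratio described above can be written schematically (the notation is ours, not necessarily the authors') as

      \mathrm{CNR} = \frac{|\mu_{\text{insert}} - \mu_{\text{background}}|}{\sigma_{\text{background}}},
      \qquad
      \mathrm{SR} = \left.\frac{\Delta C_I}{\Delta D}\right|_{\Delta\mathrm{CNR}\ \text{fixed}},

    where C_I is the iodine concentration and D the radiation dose (CTDI or organ dose).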

  1. Multi-dimensional modelling of electrostatic force distance curve over dielectric surface: Influence of tip geometry and correlation with experiment

    SciTech Connect

    Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.

    2014-08-28

    Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of the stored charge, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, in which simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip, the aim being to analyse the force-distance curve modifications induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex; the contribution to the force from the cantilever is neglected here. A model of the force curve has been developed using the Finite Volume Method, with a scheme based on the polynomial reconstruction operator (PRO) scheme. First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate the approach.
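
    As a point of comparison for the finite-volume results (and not the model used in the paper), the simplest lumped description treats the biased tip and the sample as a capacitor, for which the vertical electrostatic force is

      F(z) = \frac{1}{2}\,\frac{\partial C(z)}{\partial z}\,V^2,

    with C(z) the tip-sample capacitance at separation z; the full numerical model is needed precisely because this expression ignores the pyramidal tip geometry, the dielectric sample, and any trapped surface charge.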

  2. Construct Validation of the Translated Version of the Work-Family Conflict Scale for Use in Korea

    ERIC Educational Resources Information Center

    Lim, Doo Hun; Morris, Michael Lane; McMillan, Heather S.

    2011-01-01

    Recently, the stress of work-family conflict has been a critical workplace issue for Asian countries, especially within those cultures experiencing rapid economic development. Our research purpose is to translate and establish construct validity of a Korean-language version of the Multi-Dimensional Work-Family Conflict (WFC) scale used in the U.S.…

  3. Psychometric Analysis of Computer Science Help-Seeking Scales

    ERIC Educational Resources Information Center

    Pajares, Frank; Cheong, Yuk Fai; Oberman, Paul

    2004-01-01

    The purpose of this study was to develop scales to assess instrumental help seeking, executive help seeking, perceived benefits of help seeking, and avoidance of help seeking and to examine their psychometric properties by conducting factor and reliability analyses. As this is the first attempt to examine the latent structures underlying the…

  4. THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS

    EPA Science Inventory

    Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...

  5. Mental Models of Text and Film: A Multidimensional Scaling Analysis.

    ERIC Educational Resources Information Center

    Rowell, Jack A.; Moss, Peter D.

    1986-01-01

    Reports results of experiment to determine whether mental models are constructed of interrelationships and cross-relationships of character attributions drawn in themes of novels and films. The study used "Animal Farm" in print and cartoon forms. Results demonstrated validity of multidimensional scaling for representing both media.…

  6. Analysis of geomechanical behavior for the drift scale test

    SciTech Connect

    Blair, S C; Carlson, S R; Wagoner, J L

    2000-11-17

    The Yucca Mountain Site Characterization Project is conducting a drift-scale heater test, known as the Drift Scale Test (DST), in an alcove of the Exploratory Studies Facility at Yucca Mountain, Nevada. The DST is a large-scale, long-term thermal test designed to investigate coupled thermal-mechanical-hydrological-chemical behavior in a fractured, welded tuff rock mass. The general layout of the DST is shown in Figure 1a, along with the locations of several of the boreholes being used to monitor deformation during the test. Electric heaters, both in-drift and "wing" heaters, are being used to heat a planar region of rock approximately 50 m long and 27 m wide for 4 years, followed by 4 years of cooling. The heating portion of the DST started in December 1997, and the target drift-wall temperature of 200 C was reached in summer 2000. A drift-scale distinct element (DSDE) model is being used to analyze the geomechanical response of the rock mass forming the DST. The distinct element method was chosen to permit explicit modeling of fracture deformations; shear deformations and normal-mode opening of fractures are expected to increase fracture permeability and thereby alter thermal-hydrologic behavior in the DST. This paper describes the DSDE model and presents preliminary results, including a comparison of simulated and observed deformations at selected locations within the test.

  7. Acquiescent Responding in Balanced Multidimensional Scales and Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Rodriguez-Fornells, Antoni

    2006-01-01

    Personality tests often consist of a set of dichotomous or Likert items. These response formats are known to be susceptible to an agreeing-response bias called acquiescence. The common assumption in balanced scales is that the sum of appropriately reversed responses should be reasonably free of acquiescence. However, inter-item correlation (or…

  8. A Factor Analysis of the Research Self-Efficacy Scale.

    ERIC Educational Resources Information Center

    Bieschke, Kathleen J.; And Others

    Counseling professionals' and counseling psychology students' interest in performing research seems to be waning. Identifying the impediments to graduate students' interest and participation in research is important if systematic efforts to engage them in research are to succeed. The Research Self-Efficacy Scale (RSES) was designed to measure…

  9. Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling

    PubMed Central

    Wu, Hua; Li, Zhao-Liang

    2009-01-01

    With the development of quantitative remote sensing, scale issues have attracted more and more the attention of scientists. Research is now suffering from a severe scale discrepancy between data sources and the models used. Consequently, both data interpretation and model application become difficult due to these scale issues. Therefore, effectively scaling remotely sensed information at different scales has already become one of the most important research focuses of remote sensing. The aim of this paper is to demonstrate scale issues from the points of view of analysis, processing and modeling and to provide technical assistance when facing scale issues in remote sensing. The definition of scale and relevant terminologies are given in the first part of this paper. Then, the main causes of scale effects and the scaling effects on measurements, retrieval models and products are reviewed and discussed. Ways to describe the scale threshold and scale domain are briefly discussed. Finally, the general scaling methods, in particular up-scaling methods, are compared and summarized in detail. PMID:22573986

  10. Welfare effects of natural disasters in developing countries: an examination using multi-dimensional socio-economic indicators

    NASA Astrophysics Data System (ADS)

    Mutter, J. C.; Deraniyagala, S.; Mara, V.; Marinova, S.

    2011-12-01

    informal economy and will not register disaster setbacks in GDP accounts. The alterations to their lives can include loss of livelihood, loss of key assets such as livestock, loss of property and savings, reduced life expectancy among survivors, increased poverty rates, increased inequality, greater subsequent maternal and child mortality (due to the destruction of health care facilities), reduced educational attainment (lack of school buildings), increased gender-based violence, and psychological ailments. Our study enhances this literature in two ways. Firstly, it examines the effects of disasters on human development and poverty using cross-country econometric analysis with indicators of welfare that go beyond GDP; we aim to assess the impact of disasters on human development and absolute poverty. Secondly, we use Peak Ground Acceleration for earthquakes, a modified Palmer Drought Severity Index, and hurricane energy, rather than disaster event occurrence, to account for the severity of the disaster.

  11. SAS027 Analysis of Smaller Scale Contingencies: Measures of Merit for Defense Resource Planning of Small-Scale Contingencies

    DTIC Science & Technology

    2002-10-09

    [Fragmentary briefing-slide text only.] Recoverable topics: application of Sphere standards during analysis of small-scale contingencies (initial assessment, response strategy development, monitoring and evaluation of progress); nutrition and health indicators (share of energy provided from protein, 17% of energy provided from fat, death rates aiming towards less than 1/10,000/day); and a monitoring and evaluation system that describes a complex project clearly and succinctly and defines a set of relationships among providers and users.

  12. Scaling properties of Arctic sea ice deformation from buoy dispersion analysis

    NASA Astrophysics Data System (ADS)

    Weiss, J.; Rampal, P.; Marsan, D.; Lindsay, R.; Stern, H.

    2007-12-01

    A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over time scales from 3 hours to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of a very large solid plate, the Arctic sea ice cover, stressed by heterogeneous forcing terms such as winds and ocean currents. It shows that the sea ice deformation rate depends on the scale of observation following specific space and time scaling laws. These scaling properties share similarities with those observed for turbulent fluids, especially the ocean and the atmosphere. In our case, however, the time scaling exponent depends on the spatial scale, and the spatial exponent on the temporal scale, which implies a time/space coupling. An analysis of the exponent values shows that Arctic sea ice deformation is very heterogeneous and intermittent at all scales, i.e. it cannot be considered viscous-like even at very large temporal and/or spatial scales. Instead, it suggests deformation accommodated by multi-scale fracturing/faulting processes.
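
    Schematically (our notation, consistent with the description above), the coupled space-time scaling can be written as

      \langle \dot\varepsilon \rangle \sim L^{-\beta(\tau)},
      \qquad
      \langle \dot\varepsilon \rangle \sim \tau^{-\alpha(L)},

    where the mean deformation rate is measured at spatial scale L and temporal scale tau; the dependence of beta on tau and of alpha on L is the time/space coupling referred to above, and nonzero exponents at all scales are the signature of heterogeneous, intermittent (non-viscous) deformation.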

  13. Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Bukkuru, S.

    2016-10-01

    Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.
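
    One common unsupervised-learning step of the kind alluded to here is grouping point defects from a cascade into spatial clusters; the sketch below uses scikit-learn's DBSCAN on made-up defect coordinates, with the cutoff radius and data purely illustrative rather than taken from the paper.

      # Group defect coordinates from an MD cascade into spatial clusters
      # (illustrative sketch; coordinates and cutoff are made up).
      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(2)
      # Fake interstitial positions (angstrom): three tight clumps plus scattered noise.
      defects = np.vstack(
          [rng.normal(loc=c, scale=2.0, size=(30, 3))
           for c in ([0, 0, 0], [40, 5, -10], [-25, 30, 15])]
          + [rng.uniform(-60, 60, size=(10, 3))]
      )

      labels = DBSCAN(eps=5.0, min_samples=4).fit_predict(defects)  # -1 marks unclustered defects
      sizes = {lab: int((labels == lab).sum()) for lab in set(labels) if lab != -1}
      print("clusters found:", len(sizes), "sizes:", sizes)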

  14. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function, u(x), which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis is applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
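
    The decomposition into scales can be illustrated in one dimension on a dyadic grid: coarse cell averages are obtained by averaging pairs of fine ones, and the detail kept at each level is the part of the fine averages that the coarse level cannot reproduce. The sketch below uses the simplest (piecewise-constant) prediction; Harten's version uses higher-order ENO reconstruction, so this is an illustration of the idea rather than the scheme itself.

      # One level of a (piecewise-constant) multi-resolution decomposition of
      # cell averages; repeated coarsening yields the full hierarchy of scales.
      import numpy as np

      def decompose(avgs):
          """Return (coarse averages, details) for one coarsening step."""
          avgs = np.asarray(avgs, float)
          coarse = 0.5 * (avgs[0::2] + avgs[1::2])   # average fine cells pairwise
          detail = avgs[0::2] - coarse               # what the coarse prediction misses
          return coarse, detail

      def reconstruct(coarse, detail):
          """Exact inverse of decompose."""
          fine = np.empty(2 * len(coarse))
          fine[0::2] = coarse + detail
          fine[1::2] = coarse - detail
          return fine

      u = np.sin(np.linspace(0.0, np.pi, 16))        # stand-in for fine-grid cell averages
      c, d = decompose(u)
      print(np.allclose(reconstruct(c, d), u))       # True: the decomposition is lossless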

  15. Multi-scale spatio-temporal analysis of human mobility.

    PubMed

    Alessandretti, Laura; Sapiezynski, Piotr; Lehmann, Sune; Baronchelli, Andrea

    2017-01-01

    The recent availability of digital traces generated by phone calls and online logins has significantly increased the scientific understanding of human mobility. Until now, however, limited data resolution and coverage have hindered a coherent description of human displacements across different spatial and temporal scales. Here, we characterise mobility behaviour across several orders of magnitude by analysing ∼850 individuals' digital traces sampled every ∼16 seconds for 25 months with ∼10 meters spatial resolution. We show that the distributions of distances and waiting times between consecutive locations are best described by log-normal and gamma distributions, respectively, and that natural time-scales emerge from the regularity of human mobility. We point out that log-normal distributions also characterise the patterns of discovery of new places, implying that they are not a simple consequence of the routine of modern life.
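
    Fitting the two candidate distributions to displacement or waiting-time samples is straightforward with scipy; the synthetic data and the use of AIC below are illustrative choices, not the authors' model-selection procedure.

      # Compare log-normal and gamma fits to a sample of waiting times
      # (synthetic data; real traces would replace `waits`).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      waits = rng.gamma(shape=0.6, scale=3600.0, size=5000)   # stand-in waiting times in seconds

      fits = {
          "lognorm": (stats.lognorm, stats.lognorm.fit(waits, floc=0)),
          "gamma": (stats.gamma, stats.gamma.fit(waits, floc=0)),
      }
      for name, (dist, params) in fits.items():
          loglik = np.sum(dist.logpdf(waits, *params))
          aic = 2 * (len(params) - 1) - 2 * loglik            # loc fixed at 0, so one fewer free parameter
          print(name, "AIC:", round(float(aic), 1))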

  17. Crater ejecta scaling laws - Fundamental forms based on dimensional analysis

    NASA Technical Reports Server (NTRS)

    Housen, K. R.; Schmidt, R. M.; Holsapple, K. A.

    1983-01-01

    Self-consistent scaling laws are developed for meteoroid impact crater ejecta. Attention is given to the ejection velocity of material as a function of position relative to the impact point, the volume of ejecta exceeding a given velocity, and the thickness of the ejecta deposit as a function of distance from the impact. Use is made of recently developed equations for energy and momentum coupling in cratering events. Consideration is given to the scaling of laboratory trials up to real-world events, and formulations are developed for calculating ejection velocities and ejecta blanket profiles in the gravity and strength regimes of crater formation. It is concluded that, in the gravity regime, the thickness of an ejecta blanket is the same in all directions if the thickness and range are expressed in terms of the crater radius. In the strength regime, however, the ejecta velocities are independent of crater size, thereby allowing for asymmetric ejecta blankets. Controlled experiments are recommended for the gravity/strength transition.
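
    The size-independence of scaled ejecta blankets in the gravity regime can be phrased, in the spirit of the dimensional analysis used here (our notation), as follows: if the ejection velocity at a scaled launch position x/R in a crater of radius R obeys

      \frac{v(x)}{\sqrt{gR}} = f\!\left(\frac{x}{R}\right),

    then ballistic ranges, and hence the deposit thickness, become universal functions of distance once both are expressed in units of R; in the strength regime the velocities are instead set by material strength and do not scale with R, so the scaled blanket changes with crater size and asymmetric blankets become possible.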

  18. A new analysis of intermittence, scale invariance and characteristic scales applied to the behavior of financial indices near a crash

    NASA Astrophysics Data System (ADS)

    Mariani, Maria Cristina; Liu, Yang

    2006-07-01

    This work is devoted to the study of the relation between intermittence and scale invariance, with applications to the behavior of financial indices near a crash. We developed a numerical analysis that predicts the critical date of a financial index and applied the model to several financial indices. Optimum values for the critical date, corresponding to the most probable date of the crash, were obtained using only data from before the true crash date. The good numerical results validate the model.
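
    A functional form widely used in this kind of critical-date analysis, in which scale invariance near the crash produces log-periodic oscillations, is the log-periodic power law (the abstract does not state the exact expression fitted here, so this is orientation only):

      I(t) \approx A + B\,(t_c - t)^{\beta}\Big[1 + C\cos\big(\omega \ln(t_c - t) + \phi\big)\Big],

    where t_c is the critical date estimated by the fit; using only pre-crash data, as done above, the fitted t_c can then be compared with the actual crash date.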

  19. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition to approximate the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver, which included an approximate treatment of relativistic effects by a modified Newtonian potential. However, the GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos, and therefore to larger energy-deposition rates and heating efficiencies in the gain layer, with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and of energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities, and is likely to contribute to a finally successful explosion mechanism no less than hydrodynamical differences between different dimensions.

  20. AdS and stabilized extra dimensions in multi-dimensional gravitational models with nonlinear scalar curvature terms R^{-1} and R^4

    NASA Astrophysics Data System (ADS)

    Günther, Uwe; Zhuk, Alexander; Bezerra, Valdir B.; Romero, Carlos

    2005-08-01

    We study multi-dimensional gravitational models with scalar curvature nonlinearities of the types R^{-1} and R^4. It is assumed that the corresponding higher-dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention is paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R^{-1} model that configurations with stabilized extra dimensions do not provide late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R^4 model, we find that the stability region in parameter space depends on the total dimension D = dim(M) of the higher-dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width, so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R^4 model.
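
    Schematically, the class of models studied is D-dimensional gravity with a nonlinear curvature term in the action (the conventions and coefficients shown here are generic, not the paper's exact ones),

      S = \frac{1}{2\kappa_D^2}\int_M d^D x\, \sqrt{|\bar g|}\; f(\bar R),
      \qquad
      f(\bar R) = \bar R - \frac{\mu}{\bar R}
      \quad\text{or}\quad
      f(\bar R) = \bar R + \gamma \bar R^4,

    which is then brought to the Einstein frame, where the curvature nonlinearity appears as an extra scalar degree of freedom whose effective potential, together with the volume moduli of the factor spaces, determines whether the extra dimensions can be frozen.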