For comprehensive and current results, perform a real-time search at Science.gov.

1

NASA Astrophysics Data System (ADS)

We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website ( http://finance.yahoo.co.jp/ ). Applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the set with a mixture of Gaussian densities. By minimizing the Akaike Information Criterion (AIC) with respect to the mixture parameters, we select the best possible mixture of Gaussians. One might naturally assume that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. We check this assumption numerically against empirical Japanese stock data, for instance, around 11 March 2011.
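The embedding step described above can be sketched in numpy. This is a minimal illustration, not the authors' code: it uses the standard correlation-to-distance mapping d_ij = sqrt(2 (1 - rho_ij)) and classical (Torgerson) MDS, and the three-stock correlation matrix is invented for the example.

```python
import numpy as np

def classical_mds(corr, k=2):
    """Embed items in k dimensions from a correlation matrix via classical MDS.
    Uses the standard mapping d_ij = sqrt(2 * (1 - rho_ij)) from correlation
    to distance, then eigendecomposes the double-centered Gram matrix."""
    d2 = 2.0 * (1.0 - corr)                  # squared distances
    n = corr.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ d2 @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # keep the top-k components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy correlation matrix for three "stocks" (hypothetical values).
corr = np.array([[1.0, 0.9, 0.2],
                 [0.9, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
pts = classical_mds(corr)                    # one 2-D point per stock
```

The clustering stage would then fit Gaussian mixtures with varying component counts to `pts` and keep the count with the lowest AIC.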

Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

2

Development of a Multi-Dimensional Scale for PDD and ADHD

ERIC Educational Resources Information Center

A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…

Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya

2011-01-01

3

Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat,; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

2010-06-08

4

The three-level scaling approach was developed for the scientific design of an integral test facility and then it was applied to the design of the scaled facility known as the Purdue University Multi-Dimensional Integral Test Assembly (PUMA). The NRC Technical Program Group for severe accident scaling developed the conceptual framework for this scaling methodology. The present scaling method consists of

M. Ishii; S. T. Revankar; T. Leonardi; R. Dowlati; M. L. Bertodano; I. Babelli; W. Wang; H. Pokharna; V. H. Ransom; R. Viskanta; J. T. Han

1998-01-01

5

Multi-dimensional residual analysis of point process models for earthquake occurrences.

Frederic Paik Schoenberg, Department of Statistics, 8142 Math-Science Building, University of California, Los Angeles, CA 90095-1554, USA.

Schoenberg, Frederic Paik (Rick)

6

ERIC Educational Resources Information Center

This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…

Chiou, Guo-Li; Anderson, O. Roger

2010-01-01

7

M&Ms4Graphs: Multi-scale, Multi-dimensional Graph Analytics Tools for Cyber-Security

Objective: We developed graph-theoretic models to characterize a complex cyber system at multiple scales. Related publications: 1. "Models-of-Networks Framework for Cyber Security." IEEE Intelligence and Security Informatics, 2013. 2. "Towards a Multiscale

8

Method of multi-dimensional moment analysis for the characterization of signal peaks

A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
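The moment analysis can be illustrated with a one-dimensional sketch (the two-dimensional Peclet analysis in the abstract extends the same idea to a second axis). This is not the patented method itself; the Gaussian test peak and the uniform-grid assumption are both invented for the example.

```python
import numpy as np

def peak_moments(t, y):
    """Zeroth, first and second statistical moments of a signal peak y(t).
    Higher moments (skew, excess) follow the same pattern."""
    dt = t[1] - t[0]                         # assumes a uniform time grid
    area = y.sum() * dt                      # zeroth moment: peak area
    centroid = (t * y).sum() * dt / area     # first moment: peak position
    var = ((t - centroid) ** 2 * y).sum() * dt / area  # second central moment
    return area, centroid, var

# Synthetic Gaussian peak centred at t = 5 with unit variance.
t = np.linspace(-1.0, 11.0, 2001)
y = np.exp(-0.5 * (t - 5.0) ** 2)
area, centroid, var = peak_moments(t, y)
```

Scoring peak quality then reduces to comparing such moments against the values expected for a well-formed peak.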

Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

2012-10-23

9

This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

2012-05-01

10

NASA Astrophysics Data System (ADS)

As inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from the building blocks of pristine titanate nanotubes (TNTs) of around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets and 3D core-shell hybrid crystals. The amyloses work as glue molecules to drive and direct the hierarchical self-assembly process, extending from the microscopic to the macroscopic scale. The whole self-assembly process, as well as the resulting structures, was carefully characterized by a combination of 1H NMR, CD, Hr-SEM, AFM, Hr-TEM, SAED pattern and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: Characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e

Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue

2011-12-01

11

Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students attending two school districts in the Northeast. Parents completed the 36-item Homework Performance Questionnaire – Parent Scale (HPQ-PS). Teachers completed the 22-item teacher scale (HPQ-TS) for each student for whom the HPQ-PS had been completed. A common factor analysis with principal axis extraction and promax rotation was used to analyze the findings. The results of the factor analysis of the HPQ-PS revealed three salient and meaningful factors: student task orientation/efficiency, student competence, and teacher support. The factor analysis of the HPQ-TS uncovered two salient and substantive factors: student responsibility and student competence. The findings of this study suggest that the HPQ is a promising set of measures for assessing student homework functioning and contextual factors that may influence performance. Directions for future research are presented. PMID:18516211
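The study used principal-axis extraction with promax rotation to identify salient factors. As a much simpler stand-in for that first decision (how many factors to retain), the sketch below counts eigenvalues greater than one (Kaiser's rule) on an invented four-item correlation matrix; it is an illustration, not the study's procedure.

```python
import numpy as np

def kaiser_factor_count(R):
    """Count factors whose eigenvalue of the correlation matrix R exceeds 1
    (Kaiser criterion - a common first pass before factor extraction)."""
    return int(np.sum(np.linalg.eigvalsh(R) > 1.0))

# Hypothetical 4-item correlation matrix with two uncorrelated item pairs,
# i.e. a structure that should yield exactly two salient factors.
R = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.8],
              [0.0, 0.0, 0.8, 1.0]])
n_factors = kaiser_factor_count(R)
```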

Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.

2007-01-01

12

NASA Astrophysics Data System (ADS)

We have developed an interactive web-based scheme for data-mining the spatio-temporal patterns of many earthquakes. This novel technique is based on cluster analysis of the multi-resolutional structures of earthquakes. The interactive scheme follows a client-server paradigm in which we use off-screen rendering to facilitate visual interrogation. A powerful 3-D visualization package, Amira ( www.amiravis.com ), is also used to visualize the complex cluster patterns in a reduced-dimensional space. We have applied our method to observed and synthetic seismic catalogs. The observed data represent seismic activity around the Japanese islands in the 1997-2003 time interval. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by quasi-analytical 3-D elastic dislocation models. At the highest resolution, we analyze the local cluster structure in the data space of seismic events for the two types of catalogs using an agglomerative clustering algorithm. We demonstrate that small-magnitude events produce local spatio-temporal patches corresponding to neighboring large events. Seismic events, quantized in space and time, generate the multi-dimensional feature space of the earthquake parameters. Using a non-hierarchical clustering algorithm and multi-dimensional scaling, we explore the multitudinous earthquakes by real-time 3-D visualization and inspection of multivariate clusters. At the resolutions characteristic of the earthquake parameters, all of the ongoing seismicity before and after the largest events accumulates into a global structure consisting of a few separate clusters in the feature space. We show that by combining the clustering results from low- and high-resolution spaces, we can recognize precursory events more precisely. We will discuss how this WEB-IS (Web-Interrogative System) would work.
One can also access this system at http://boy.msi.umn.edu/web-is/. Its implementation and deployment in light of future GRID computing will be discussed in terms of the recently developed NaradaBrokering (distributed messaging) system of publishing and subscribing. This will provide a scalable infrastructure for several applications involving a set of nodes communicating with each other.
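A toy version of the agglomerative step might look as follows: events closer than a threshold in feature space are merged into one cluster (single linkage, implemented with union-find). The two synthetic event clouds and the threshold are invented for illustration; the WEB-IS pipeline itself is far richer.

```python
import numpy as np

def cluster_events(X, thresh):
    """Single-linkage clustering of event feature vectors: any two events
    closer than `thresh` end up in the same cluster (union-find over pairs)."""
    n = len(X)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < thresh:
                parent[find(i)] = find(j)    # merge the two clusters
    return [find(i) for i in range(n)]

# Two well-separated synthetic event clouds, e.g. (x, y) epicentre coordinates.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(10.0, 0.3, (20, 2))])
labels = cluster_events(X, thresh=3.0)
```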

Yuen, D. A.; Dzwinel, W.; Bollig, E. F.; Kadlec, B. F.; Ben-Zion, Y.; Yoshioka, S.

2003-12-01

13

Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
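The paper's structural metric is more elaborate, but its core idea, comparing the shapes of parallel-coordinate polylines rather than their absolute positions, can be sketched by centring each profile on its own mean before taking a Euclidean distance. The three example profiles are invented.

```python
import numpy as np

def structure_distance(a, b):
    """Distance between the *shapes* of two parallel-coordinate polylines:
    each profile is centred on its own mean, so a constant offset (which
    plain Euclidean distance would penalise) contributes nothing."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.linalg.norm((a - a.mean()) - (b - b.mean()))

up   = [1.0, 2.0, 3.0]      # rising polyline
up2  = [11.0, 12.0, 13.0]   # same rising shape, large offset
down = [3.0, 2.0, 1.0]      # opposite shape
```

Under this metric `up` and `up2` are identical (same structure), while plain Euclidean distance would place them far apart.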

Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus

2014-03-01

14

Multi-dimensional data scaling dynamical cascade

…of signal distortion. A dynamical cascade computation diagram results from the statistical physics model by conjoining two parameters: the scale parameter of the signal distortion and the spatial scale parameter. Accordingly, the reconstruction formula is derived based on the Laplacian system of the diffusion

Jovovic, Milan; Fox, Geoffrey

15

Recommender systems are designed to assist individual users in navigating the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide application in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
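A minimal item-based collaborative filtering sketch follows. For brevity it uses plain cosine similarity between item columns rather than the MDS-derived similarity the paper proposes, and the small user-item matrix is invented; the point is only the score-and-rank mechanism.

```python
import numpy as np

def recommend(R, user, eps=1e-12):
    """Item-based CF: score unseen items by their similarity to the items
    the user has already consumed; return the top-scoring unseen item."""
    norms = np.linalg.norm(R, axis=0) + eps     # per-item column norms
    sim = (R.T @ R) / np.outer(norms, norms)    # item-item cosine matrix
    scores = sim @ R[user]                      # aggregate over user's items
    scores[R[user] > 0] = -np.inf               # never re-recommend seen items
    return int(np.argmax(scores))

# Toy user-item matrix (rows: users, cols: items), hypothetical data.
# Items 0 and 1 are co-consumed by users 0 and 1; item 2 is unrelated.
R = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
best = recommend(R, user=3)   # user 3 consumed item 0 only
```

Because item 1 is strongly co-consumed with item 0, it outranks the unrelated item 2 for user 3.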

Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

2014-01-01


17

Integrative analysis of multi-dimensional imaging genomics data for Alzheimer's disease prediction

In this paper, we explore the effects of integrating multi-dimensional imaging genomics data for Alzheimer's disease (AD) prediction using machine learning approaches. Specifically, we compare our three recently proposed feature selection methods [i.e., multiple kernel learning (MKL), high-order graph matching based feature selection (HGM-FS), and sparse multimodal learning (SMML)] using four widely used modalities [i.e., magnetic resonance imaging (MRI), positron emission tomography (PET), cerebrospinal fluid (CSF), and the genetic modality of single-nucleotide polymorphisms (SNP)]. This study demonstrates the performance of each method using these modalities individually or integratively, and may be valuable for clinical practice. Our experimental results suggest that for AD prediction, in general, (1) in terms of accuracy, PET is the best modality; (2) even though the discriminant power of genetic SNP features is weak, adding this modality to the others does help improve classification accuracy; (3) HGM-FS works best among the three feature selection methods; (4) some of the selected features are shared by all the feature selection methods, and these may be highly correlated with the disease. Using all the modalities on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the best accuracies, described as (mean ± standard deviation)%, among the three methods are (76.2 ± 11.3)% for AD vs. MCI, (94.8 ± 7.3)% for AD vs. HC, (76.5 ± 11.1)% for MCI vs. HC, and (71.0 ± 8.4)% for AD vs. MCI vs. HC, respectively. PMID:25368574
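None of the paper's three methods (MKL, HGM-FS, SMML) is reproduced here. As a deliberately simple illustration of the integration idea, the sketch below concatenates two synthetic "modalities" into one feature vector per subject and applies a nearest-centroid classifier; all data and dimensions are invented.

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean of the concatenated multimodal feature vectors."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in np.argmin(d, axis=0)])

rng = np.random.default_rng(1)
n = 30
# Two synthetic "modalities" (stand-ins for, e.g., imaging and CSF features).
mri = np.vstack([rng.normal(0, 1, (n, 5)), rng.normal(3, 1, (n, 5))])
csf = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(3, 1, (n, 2))])
X = np.hstack([mri, csf])                      # modality concatenation
y = np.array([0] * n + [1] * n)
acc = float((predict(fit_centroids(X, y), X) == y).mean())
```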

Zhang, Ziming; Huang, Heng; Shen, Dinggang

2014-01-01

18

magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation

NASA Astrophysics Data System (ADS)

The ever-increasing amount of data and processing capabilities, following the well-known Moore's law, is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often become very confusing as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, allowing analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques in which spatial-temporal points of interest are detected through the integration of moving images by the human brain. Magellium has been involved in high-performance image processing chains for satellite image processing, as well as scientific signal analysis and geographic information management, since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium-scale clusters with expansion capabilities to large cloud-based clusters.

Angleraud, Christophe

2014-06-01

19

NASA Astrophysics Data System (ADS)

The integration of satellite data with physically based models can enable the characterization of earth systems and lead to improved management of natural resources at the catchment and regional scales. The reliability of simulations from physically based models depends on the accuracy of the forcing data and the model parameters. Forcing data obtained from satellites or other sources are often plagued with uncertainties and the model parameters require updates to capture the ever-changing environmental conditions. Although comprehensive data assimilation schemes for dual state and parameter updating have been proposed for improving the reliability of model simulations, their computational cost is sometimes prohibitively high. In this contribution, we propose a cost-effective and efficient alternative to handling complex multi-dimensional parameter and state improvement at the catchment scale. Our approach demystifies the complex multi-dimensional parameter estimation and state improvement problem by combining 1-dimensional exhaustive gridding with sensitivity-pushing, Newton-Raphson based guided random sampling and feedback from historical inverse estimates. In a numerical case study in the joint Rur and Erft Catchments in Germany, we apply our novel partial grid search approach to the estimation of soil surface roughness and vegetation opacity from disaggregated SMOS (Soil Moisture and Ocean Salinity Satellite) brightness temperature using the Community Microwave Emission Modeling platform (CMEM). Besides plausibly good estimates of the soil surface roughness and vegetation opacity at the catchment scale, our method also leads to improvement of the system states like soil surface moisture and soil temperature profile. Our method therefore has data assimilation capabilities without the associated computational cost incurred in ensemble-based data assimilation approaches. 
The partial grid search approach to parameter estimation is therefore a promising tool for multi-dimensional parameter estimation and state improvement in earth systems.
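The abstract above can be caricatured in one dimension: an exhaustive coarse grid locates the neighbourhood of the optimum, then Newton-Raphson with finite-difference derivatives refines it. The quadratic `cost` below is a hypothetical stand-in for a real misfit (e.g. simulated vs. observed brightness temperature as a function of a roughness parameter); nothing here is the CMEM workflow itself.

```python
import numpy as np

def grid_then_newton(cost, lo, hi, n_grid=50, newton_iters=5, h=1e-5):
    """Coarse 1-D exhaustive grid, then Newton-Raphson refinement of the best
    grid point using finite-difference first and second derivatives."""
    grid = np.linspace(lo, hi, n_grid)
    x = grid[np.argmin([cost(g) for g in grid])]   # best coarse estimate
    for _ in range(newton_iters):
        d1 = (cost(x + h) - cost(x - h)) / (2 * h)           # cost'
        d2 = (cost(x + h) - 2 * cost(x) + cost(x - h)) / h**2  # cost''
        if abs(d2) < 1e-12:
            break
        x -= d1 / d2                               # Newton step on cost'
    return x

# Hypothetical misfit with a minimum at parameter value 3.3.
cost = lambda r: (r - 3.3) ** 2 + 1.0
r_hat = grid_then_newton(cost, 0.0, 10.0)
```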

Miltin Mboh, Cho; Montzka, Carsten; Baatz, Roland; Vereecken, Harry

2014-05-01

20

This study specifies and tests a multi-dimensional model of publicness, building upon extant literature in this area. Publicness represents the degree to which an organization has "public" ties. An organization's degree ...

Merritt, Cullen

2014-05-31

21

NASA Astrophysics Data System (ADS)

The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning, and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures: more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose.
A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being relatively less marked for information density, abstraction, and non-overt argumentation. In addition to the findings mentioned here, numerous other relationships among the texts exhibit complex patterns of variation related to a number of situational variables. Pedagogical implications are discussed in relation to General Education courses, differing student populations, and the reading and listening demands which students encounter in large introductory classes in the university.

Carkin, Susan

22

NASA Astrophysics Data System (ADS)

In this paper, a novel superpixel-based approach is introduced for unsupervised change detection using remote sensing images. The proposed approach contains three steps: 1) Superpixel segmentation. The simple linear iterative clustering (SLIC) algorithm is applied to obtain lattice-like homogeneous superpixels. To avoid discordances between the superpixel boundaries obtained from the bi-temporal images, the two images are first fused using principal component analysis, and the SLIC algorithm is then applied to the first three principal components, which contain the main information of the two images. 2) For each superpixel, which is considered the basic unit of the image space, a multi-dimensional change vector is computed from spectral, textural and structural features. 3) The superpixels are classified into two types, changed and unchanged, through two progressive classification processes. The superpixels are first categorized into three types (changed, unchanged and undefined) by thresholding the change vectors and a voting process. The undefined superpixels are then further classified into changed and unchanged using an SVM-based classifier trained on the changed and unchanged superpixels derived in the former step. An experiment using the Indonesia data set has confirmed that the proposed approach is able to detect changes automatically by exploiting multiple change features.
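A stripped-down sketch of the change-detection idea: regular image blocks stand in for SLIC superpixels, and a changed/unchanged decision is made by thresholding each block's mean change magnitude. The voting and SVM stages are omitted, and the bi-temporal image pair is synthetic.

```python
import numpy as np

def block_change_map(img_a, img_b, block=4, thresh=0.5):
    """Flag changed blocks: for each `block` x `block` tile (a crude stand-in
    for a superpixel), threshold the mean magnitude of the pixel difference."""
    diff = np.abs(img_a - img_b)
    h, w = diff.shape[:2]
    changed = set()
    for i in range(0, h, block):
        for j in range(0, w, block):
            if diff[i:i + block, j:j + block].mean() > thresh:
                changed.add((i // block, j // block))
    return changed

# Synthetic bi-temporal pair: only the top-left 4x4 region changes.
a = np.zeros((8, 8))
b = a.copy()
b[:4, :4] = 1.0
changed = block_change_map(a, b)
```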

Wu, Z.; Hu, Z.; Fan, Q.

2012-07-01

23

Fast, Multi-Dimensional and Simultaneous Kymograph-Like Particle Dynamics (SkyPad) Analysis

Background Kymograph analysis is a method widely used by researchers to analyze particle dynamics in one dimensional (1D) trajectories. Results Here we provide a Visual Basic-coded algorithm to use as a Microsoft Excel add-in that automatically analyzes particles in 2D trajectories with all the advantages of kymograph analysis. Conclusions This add-in, which we named SkyPad, leads to significant time saving and higher accuracy of particle analysis. Finally, SkyPad can also be used for 3D trajectories analysis. PMID:24586511
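The SkyPad add-in itself is Visual Basic for Excel; the core quantity it automates, per-frame particle speed from a 2-D track (what a kymograph slope encodes in 1-D), can be sketched in a few lines of Python on an invented toy trajectory.

```python
import numpy as np

def frame_speeds(xy, dt=1.0):
    """Per-frame speeds of a 2-D (or 3-D) particle trajectory: the norm of
    each frame-to-frame displacement divided by the frame interval."""
    return np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt

# Toy track: a particle moving diagonally at constant speed, then pausing.
xy = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0], [6.0, 8.0]])
v = frame_speeds(xy)
```

Pauses and movement phases then fall out directly as runs of zero and non-zero speed.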

Cadot, Bruno; Gache, Vincent; Gomes, Edgar R.

2014-01-01

24

A multi-dimensional analysis of cue-elicited craving in heavy smokers and tobacco chippers

Aims This research examined the performance of a broad range of measures posited to relate to smoking craving. Design Heavy smokers and tobacco chippers, who were either deprived of smoking or not for 7 hours, were exposed to both smoking (a lit cigarette) and control cues. Participants Smokers not currently interested in trying to quit smoking (n = 127) were recruited. Heavy smokers (n = 67) averaged smoking at least 21 cigarettes/day and tobacco chippers (n = 60) averaged 1–5 cigarettes on at least 2 days/week. Measurements Measures included urge rating scales and magnitude estimations, a rating of affective valence, a behavioral choice task that assessed perceived reinforcement value of smoking, several smoking-related judgement tasks and a measure of cognitive resource allocation. Findings Results indicated that both deprivation state and smoker type tended to affect responses across these measurement domains. Conclusions Findings support the use of several novel measures of craving-related processes in smokers. PMID:11571061

SAYETTE, MICHAEL A.; MARTIN, CHRISTOPHER S.; WERTZ, JOAN M.; SHIFFMAN, SAUL; PERROTT, MICHAEL A.

2009-01-01

25

Graph OLAP: a multi-dimensional framework for graph data analysis

Databases and data warehouse systems have been evolving from handling normalized spreadsheets stored in relational databases, to managing and analyzing diverse application-oriented data with complex interconnecting structures. Responding to this emerging trend, graphs have been growing rapidly and showing their critical importance in many applications, such as the analysis of XML, social networks, Web, biological data, multimedia data and spatiotemporal

Chen Chen; Xifeng Yan; Feida Zhu; Jiawei Han; Philip S. Yu

2009-01-01

26

Multi-dimensional edge detection operators

NASA Astrophysics Data System (ADS)

In remote sensing, modern sensors produce multi-dimensional images. For example, hyperspectral images contain hundreds of spectral bands. In many image processing applications, segmentation is an important step. Traditionally, most image segmentation and edge detection methods have been developed for one-dimensional images. For multi-dimensional images, the outputs for individual spectral band images are typically combined under certain rules or using decision fusion. In this paper, we propose a new edge detection algorithm for multi-dimensional images using second-order statistics. First, we reduce the dimension of the input images using principal component analysis. Then we apply multi-dimensional edge detection operators that utilize second-order statistics. Experiments show promising results compared to conventional one-dimensional edge detectors such as the Sobel filter.
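A compact sketch of the pipeline shape described above: project a synthetic multi-band cube onto its first principal component, then apply a plain 3x3 Sobel operator. The paper's second-order-statistics operators are not reproduced; the cube with a vertical step edge is invented for the example.

```python
import numpy as np

def first_pc_image(cube):
    """Project a (H, W, B) multi-band cube onto its first principal component."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b) - cube.reshape(-1, b).mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(X.T))    # eigenvalues ascending
    return (X @ vecs[:, -1]).reshape(h, w)   # largest-eigenvalue component

def sobel_magnitude(img):
    """Plain 3x3 Sobel gradient magnitude, interior pixels only."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return mag

# Synthetic 3-band cube with a vertical step edge at column 4 in every band.
cube = np.zeros((8, 8, 3))
cube[:, 4:, :] = [1.0, 2.0, 3.0]             # band-dependent step height
mag = sobel_magnitude(first_pc_image(cube))
```

The gradient magnitude is large only at the step columns, regardless of the sign ambiguity of the principal component.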

Youn, Sungwook; Lee, Chulhee

2014-05-01

27

Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis

Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.

Jeong, Myong K [ORNL; Kong, Seong G [ORNL; Omitaomu, Olufemi A [ORNL

2008-09-01

28

NASA Astrophysics Data System (ADS)

Downie Slide, one of the world's largest landslides, is a massive, active, composite, extremely slow rockslide located on the west bank of the Revelstoke Reservoir in British Columbia. It is a 1.5 billion m3 rockslide measuring 2400 m along the river valley, 3300 m from toe to headscarp and up to 245 m thick. Significant contributions to the field of landslide geomechanics have been made by analyses of spatially and temporally discriminated slope deformations, and of how these are controlled by complex geological and geotechnical factors. Downie Slide research demonstrates the importance of delineating massive landslides into morphological regions in order to characterize global slope behaviour and identify localized events, which may or may not influence the overall slope deformation patterns. Massive slope instabilities do not behave as monolithic masses; rather, different landslide zones can display specific landslide processes occurring at variable rates of deformation. The global deformation of Downie Slide is extremely slow; however, localized regions of the slope incur moderate to high rates of movement. The complex deformation processes and composite failure mechanism reflect contributions from topography, non-uniform shear surfaces, and heterogeneous rockmass and shear zone strength and stiffness characteristics. Further, from the analysis of temporal changes in landslide behaviour it has been clearly recognized that different regions of the slope respond differently to changing hydrogeological boundary conditions. State-of-the-art methodologies have been developed for numerical simulation of large landslides; these provide important tools for investigating dynamic landslide systems, accounting for complex three-dimensional geometries, heterogeneous shear zone strength parameters, internal shear zones, the interaction of discrete landslide zones and piezometric fluctuations.
Numerical models of Downie Slide have been calibrated to reproduce observed slope behaviour, and the calibration process has provided important insight into the key factors controlling massive slope mechanics. Through numerical studies it has been shown that the three-dimensional interpretation of basal slip surface geometry and spatial heterogeneity in shear zone stiffness are important factors controlling large-scale slope deformation processes. The role of secondary internal shears and the interaction between landslide morphological zones has also been assessed. Further, numerical simulation of changing groundwater conditions has produced reasonable correlation with field observations. Calibrated models are valuable tools for the forward prediction of landslide dynamics, and calibrated Downie Slide models have been used to investigate how trigger scenarios may accelerate deformations at Downie Slide. The ability to reproduce observed behaviour and forward-test hypothesized changes to boundary conditions has valuable application in the hazard management of massive landslides, enhancing the capacity of decision makers to interpret large amounts of data, respond to rapid changes in a system and understand complex slope dynamics.

Kalenchuk, K. S.; Hutchinson, D.; Diederichs, M. S.

2013-12-01

29

NASA Astrophysics Data System (ADS)

The development of a reliable containment cooling system is one of the key areas in advanced nuclear reactor development. There are two categories of containment cooling: active and passive. Active containment cooling usually consists of systems that require active intervention in their use, and such systems have, in the past, been reliant on the supply of electrical power. This has instigated worldwide efforts in the development of passive containment cooling systems that are safer, more reliable, and simpler in their use. The passive containment cooling system's performance is degraded by noncondensable gases that come from the containment and from the gases produced by cladding/steam interaction during a severe accident. These noncondensable gases degrade the heat transfer capabilities of the condensers in the passive containment cooling systems, since they add a heat transfer resistance to the condensation process. Some work has been done on modeling condensation heat transfer with noncondensable gases, but little has been done to apply that work to integral facilities. It is important to fully understand the heat transfer capabilities of the passive systems so that a detailed assessment of the long-term cooling capabilities can be performed. The existing correlations and models are for the through-flow of a mixture of steam and noncondensable gases. This type of analysis may not be applicable to passive containment cooling systems, where there is no clear passage for the steam to escape; the steam therefore accumulates in the lower header and tubes, where all of it condenses. The objective of this work was to develop a condensation heat transfer model for the downward cocurrent flow of a steam/air mixture through a condenser tube, taking into account the atypical characteristics of the passive containment cooling system. 
An empirical model was developed that depends solely on the inlet conditions to the condenser system, including the mixture Reynolds number and noncondensable gas concentration. This empirical model is applicable to the condensation heat transfer of the passive containment cooling system. This study was also used to characterize the local heat transfer coefficient with a noncondensable gas present.

Wilmarth de Leonardi, Tauna Lea

2000-10-01

30

AIAA 2001-2623 Multi-Dimensional Upwind

AIAA 2001-2623, Multi-Dimensional Upwind Constrained Transport on Unstructured Grids for 'Shallow Water' Magnetohydrodynamics. Department of Applied Mathematics, University of Colorado at Boulder, USA. 15th AIAA Computational Fluid Dynamics Conference, 11-14 June 2001.

De Sterck, Hans

31

Progress in multi-dimensional upwind differencing

NASA Technical Reports Server (NTRS)

Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
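The building-block role of the one-dimensional first-order upwind scheme can be made concrete with a minimal sketch (not the paper's code; the grid size, CFL number, and step profile are illustrative choices) for the linear convection equation u_t + a u_x = 0:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for u_t + a*u_x = 0 on a periodic domain."""
    c = a * dt / dx                            # CFL number; need |c| <= 1 for stability
    if a >= 0:
        return u - c * (u - np.roll(u, 1))     # backward difference when a > 0
    return u - c * (np.roll(u, -1) - u)        # forward difference when a < 0

# Advect a step profile once around a periodic unit interval.
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / abs(a)                         # CFL = 0.5
u0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
u = u0.copy()
for _ in range(int(round(1.0 / (abs(a) * dt)))):
    u = upwind_step(u, a, dx, dt)
# u has returned to its starting position, smeared by numerical dissipation
```

On a grid-aligned 1-D problem the scheme is conservative and monotone for |a| dt/dx <= 1; the smearing it introduces is exactly the numerical dissipation that, in multiple dimensions, becomes direction-dependent when waves are not aligned with cell faces.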

Vanleer, Bram

1992-01-01

32

A stable multi-dimensional magnetic levitator was characterized and implemented. This thesis contains a full analysis of the feedback specifications, a short summary of the circuits used in the design of the setup, and ...

Hlebowitsh, Paul Gerardus

2012-01-01

33

Sensitivity of Multi-dimensional Bayesian Classifiers

Sensitivity of Multi-dimensional Bayesian Classifiers. Janneke H. Bolt, Silja Renooij. Technical Report, Utrecht University, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands. ... results that support this observation were substantiated by a study of the sensitivity properties of naive ...

Utrecht, Universiteit

34

The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets

NASA Technical Reports Server (NTRS)

The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
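The sensitivity to the choice of averaging can be illustrated with a toy cross-section (the profiles below are made up for illustration and are not from the paper): area-weighted and mass-flux-weighted averages of the same total-pressure field give different "one-dimensional" values.

```python
import numpy as np

# Made-up radial profiles across a circular duct: a fast core with higher total
# pressure and a slow boundary layer near the wall (illustrative data only).
r = np.linspace(0.0, 1.0, 401)           # nondimensional radius
u = (1.0 - r ** 2) ** 0.25               # axial velocity profile
pt = 9.0e4 + 1.0e4 * (1.0 - r ** 2)      # total pressure, Pa

w = r                                    # annular area element ~ 2*pi*r*dr (constants cancel)
area_avg = (pt * w).sum() / w.sum()
mass_avg = (pt * u * w).sum() / (u * w).sum()
# mass-flux weighting emphasizes the fast core, so the two one-dimensional
# total pressures differ even though they summarize the same field
```

Here the mass-flux average exceeds the area average because velocity and total pressure are correlated; which value is "correct" depends entirely on the conservation property the analyst wants the one-dimensional surrogate to preserve.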

Baurle, R. A.; Gaffney, R. L.

2007-01-01

35

The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets

NASA Technical Reports Server (NTRS)

The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.

Baurle, Robert A.; Gaffney, Richard L., Jr.

2007-01-01

36

The overall objective of this work has been to eliminate the approximations used in current resonance treatments by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding calculations. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system.

T. Downar

2009-03-31

37

NASA Astrophysics Data System (ADS)

In the past 20 years considerable progress has been made in developing new methods for solving the multi-dimensional transport problem. However, the effort devoted to the resonance self-shielding calculation has lagged, and much less progress has been made in enhancing resonance-shielding techniques for generating problem-dependent multi-group cross sections (XS) for the multi-dimensional transport calculations. In several applications, the error introduced by self-shielding methods exceeds that due to uncertainties in the basic nuclear data, and can often be the limiting factor on the accuracy of the final results. The goal of this work is to improve the accuracy of the resonance self-shielding calculation by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding. A new method has been developed that can calculate the continuous-energy neutron fluxes for the whole two-dimensional domain; these fluxes can be utilized as weighting functions to process the self-shielded multi-group cross sections for reactor analysis and criticality calculations, and during this process the two-dimensional heterogeneous effect in the resonance self-shielding calculation can be fully included. A new code, GEMINEWTRN (Group and Energy-Pointwise Methodology Implemented in NEWT for Resonance Neutronics), has been developed in the developing version of SCALE [1]; it combines the energy-pointwise (PW) capability of CENTRM [2] with the two-dimensional discrete ordinates transport capability of the lattice physics code NEWT [14]. 
Considering the large number of energy points in the resonance region (typically more than 30,000), the computational burden and memory requirements for GEMINEWTRN are tremendously large. Several efforts have been made to improve the computational efficiency: parallel computation has been implemented in GEMINEWTRN, which substantially reduces computation time and memory requirements, and energy-point reduction techniques have also been developed, improving the computational efficiency while preserving accuracy. These efforts make the new method much more feasible for practical use.

Zhong, Zhaopeng

38

Vlasov multi-dimensional model dispersion relation

A hybrid model of the Vlasov equation in multiple spatial dimensions D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.

Lushnikov, Pavel M., E-mail: plushnik@math.unm.edu [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Rose, Harvey A. [Theoretical Division, Los Alamos National Laboratory, MS-B213, Los Alamos, New Mexico 87545 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States); Silantyev, Denis A.; Vladimirova, Natalia [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States)

2014-07-15

39

This paper aims to evaluate the validation of Schalock's quality of life multi-dimensional model (1996) in the Portuguese context. We also analyze the quality of life of disabled people by adding a political dimension (adapted from the Minorities' Rights Support Scale by Nata & Menezes, 2007) to this construct and seeking to understand the impact of discrimination. The sample is composed of 217 participants, most of whom have a physical disability, aged 16 to 81. Validation procedures of the Quality of Life Questionnaire (Schalock & Keith, 1993) and descriptive statistics and correlation analysis were conducted. Confirmatory Factor Analysis revealed good local and global fit indices, and the internal consistency of the scales was satisfactory. An adapted version of the instrument composed of five scales-satisfaction, competence, empowerment, equality of rights and positive discrimination-is proposed. The results reveal the importance of rights and empowerment for the quality of life of disabled people and indicate a strong critical consciousness concerning the experience of discrimination in different contexts. Taken together, the findings indicate the strong need for social and political changes in this domain. PMID:23866209

Loja, Ema; Costa, Maria Emília; Menezes, Isabel

2013-01-01

40

The effects of intracerebroventricular injection of the delta-selective opioid peptides, DADL (D-Ala2-D-Leu5-enkephalin) and DPLPE (D-Pen2-L-Pen5-enkephalin), on spontaneous locomotor activity were investigated in mice using multi-dimensional behavioral analysis, based upon a capacitance system. The analysers classified the movements into 9 sizes (1/1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128 and 1/256). Specific patterns of behavior were each registered on these sizes of movement. At 1.0 and 3.0 micrograms, DADL produced a significant increase in circling (1/4 size of movements) within 15 min after the start of measurements, while it produced a marked increase in linear locomotion (1/2 size), circling (1/4 size), rearing (1/16 size) and grooming (1/32, 1/64 and 1/128 sizes) within 15-30 min after the start. At 10.0 micrograms, DPLPE decreased linear locomotion (1/1 size) and conversely increased circling behavior (1/4 size) within 15 min after the start, whilst this peptide at 3.0 or 10.0 micrograms, produced a marked increase in linear locomotion (1/2 size), circling (1/4 size) and grooming (1/128 size) within 15-30 min after the start. The behavioral effects induced by DADL (3.0 micrograms) and DPLPE (10.0 micrograms) were completely reversed by naloxone (1.0 and 2.0 mg/kg). These results obtained with DPLPE, a delta-selective peptide and DADL, a less delta-selective peptide, indicate a common pattern of activity which was presumably delta receptor-mediated. However, one component (linear locomotion, at times immediately after administration of the peptide) did clearly differ between these two peptide analogues.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2554179

Ukai, M; Toyoshi, T; Kameyama, T

1989-10-01

41

Differential Privacy for Protecting Multi-dimensional Contingency Table Data

Differential Privacy for Protecting Multi-dimensional Contingency Table Data: Extensions ... from a multi-way contingency table. Privacy protection in such settings implicitly focuses on small ... of large sparse contingency tables encountered in statistical practice. Keywords and phrases: ...

42

Star-ND (Multi-Dimensional Star-Identification)

In order to perform star-identification with lower processing requirements, multi-dimensional techniques are implemented in this research as a database search as well as to create star pattern parameters. New star pattern parameters are presented...

Spratling, Benjamin

2012-07-16

43

Towards a genuinely multi-dimensional upwind scheme

NASA Technical Reports Server (NTRS)

Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.

Powell, Kenneth G.; Vanleer, Bram; Roe, Philip L.

1990-01-01

44

Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
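The PCA step that the study compares against can be sketched on synthetic EEM data (a toy setup with two hypothetical Gaussian-shaped fluorophores, not the paper's measurements): each sample's excitation-emission matrix is flattened to a vector and the stack is decomposed by SVD.

```python
import numpy as np

# Toy EEM analysis (illustrative only, not the paper's data): each sample is an
# excitation x emission matrix built from two hypothetical fluorophores; the
# matrices are flattened and decomposed by SVD, the core of PCA.
rng = np.random.default_rng(0)
n_samples, n_ex, n_em = 30, 20, 40
ex, em = np.arange(n_ex), np.arange(n_em)
f1 = np.exp(-(ex[:, None] - 6.0) ** 2 / 8) * np.exp(-(em[None, :] - 12.0) ** 2 / 20)
f2 = np.exp(-(ex[:, None] - 13.0) ** 2 / 8) * np.exp(-(em[None, :] - 28.0) ** 2 / 20)
conc = rng.uniform(0.0, 1.0, (n_samples, 2))          # per-sample concentrations
X = conc[:, :1] * f1.ravel() + conc[:, 1:] * f2.ravel()
X += 0.01 * rng.standard_normal(X.shape)              # measurement noise
Xc = X - X.mean(axis=0)                               # mean-center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
# with two underlying compounds, the first two components carry almost all variance
```

PCA finds the directions of maximal variance, which is why it works well here, but it is a stationary, global decomposition; the overlap and photobleaching effects the abstract mentions are what motivate a multi-scale method like MEEMD instead.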

Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien

2013-01-01

45

Accurate, efficient and monotonic numerical methods for multi-dimensional compressible flows

The present papers deal with numerical methods for the accurate and efficient computation of multi-dimensional steady/unsteady compressible flows. In Part I, a new spatial discretization technique is introduced to reduce excessive numerical dissipation in a non-flow-aligned grid system. Through the analysis of TVD limiters, a criterion is proposed to predict cell-interface states accurately both in smooth regions and in discontinuous ...

Kyu Hong Kim; Chongam Kim

2005-01-01

46

Fully automated online multi-dimensional protein profiling system for complex mixtures

For high throughput proteome analysis of highly complex protein mixtures, we have constructed a fully automated online system for multi-dimensional protein profiling, which utilizes a combination of two-dimensional liquid chromatography and tandem mass spectrometry (2D-LC–MS–MS), based on our well-established offline system described previously [K. Fujii, T. Nakano, T. Kawamura, F. Usui, Y. Bando, R. Wang, T. Nishimura, J. Proteome Res.

Kiyonaga Fujii; Tomoyo Nakano; Hiroshi Hike; Fumihiko Usui; Yasuhiko Bando; Hiromasa Tojo; Toshihide Nishimura

2004-01-01

47

Subsonic Flows in a Multi-Dimensional Nozzle

NASA Astrophysics Data System (ADS)

In this paper, we study the global subsonic irrotational flows in a multi-dimensional (n ≥ 2) infinitely long nozzle with variable cross sections. The flow is described by the inviscid potential equation, which is a second order quasilinear elliptic equation when the flow is subsonic. First, we prove the existence of the global uniformly subsonic flow in a general infinitely long nozzle for arbitrary dimension with sufficiently small incoming mass flux and obtain the uniqueness of the global uniformly subsonic flow. Then we show that there exists a critical value of the incoming mass flux such that a global uniformly subsonic flow exists uniquely, provided that the incoming mass flux is less than the critical value. This gives a positive answer to the problem of Bers on global subsonic irrotational flows in infinitely long nozzles for arbitrary dimension (Bers in Surveys in applied mathematics, vol 3, Wiley, New York, 1958). Finally, under suitable asymptotic assumptions of the nozzle, we obtain the asymptotic behavior of the subsonic flow in far fields by means of a blow-up argument. The main ingredients of our analysis are methods of calculus of variations, the Moser iteration techniques for the potential equation and a blow-up argument for infinitely long nozzles.

Du, Lili; Xin, Zhouping; Yan, Wei

2011-09-01

48

Image matrix processor for fast multi-dimensional computations

An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

Roberson, G.P.; Skeate, M.F.

1996-10-15

49

Multi-Dimensional Search for Personal Information Management Systems

... count the approximation in each dimension to efficiently evaluate fuzzy multi-dimensional queries in personal information management systems by allowing users to provide fuzzy structure and metadata conditions. This is more comprehensive than content-only search, as it considers three query dimensions (content, structure, metadata).

50

Towards Semantic Web Services on Large, Multi-Dimensional Coverages

NASA Astrophysics Data System (ADS)

Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-d time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safe in evaluation (no request can keep a server busy infinitely), and optimizable (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This allows to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning a la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. 
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard which ties WCPS into WCS. Another protocol which makes WCPS an OGC Web Processing Service (WPS) Profile is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an identifier associated, further the CRSs supported and, for each range field (aka band, channel), the interpolation methods applicable. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ) The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, to maintain safe evaluation. As both syntax and semantics of any WCPS expression is well known the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready. 
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
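The per-pixel arithmetic behind the WCPS NDVI request in this entry can be sketched in NumPy (with hypothetical reflectance values standing in for the nir and red bands; the WCPS request additionally casts the result to char, omitted here):

```python
import numpy as np

# Hypothetical 2x2 reflectance arrays standing in for the nir and red bands.
nir = np.array([[0.6, 0.5],
                [0.4, 0.1]])
red = np.array([[0.1, 0.2],
                [0.3, 0.1]])
ndvi = (nir - red) / (nir + red)   # per-pixel Normalized Difference Vegetation Index
# values near +1 suggest dense vegetation; values near 0, bare or non-vegetated pixels
```

The point of WCPS is that this same band arithmetic is expressed declaratively against server-side coverages of any dimensionality, so the server can optimize and distribute it rather than shipping the rasters to the client.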

Baumann, P.

2009-04-01

51

Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

NASA Astrophysics Data System (ADS)

A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
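The linear-combination step underlying this kind of template interpolation can be sketched as follows (a minimal illustration with made-up Gaussian templates; the paper's moment morphing additionally transforms each template to match the target's moments before mixing):

```python
import numpy as np

# Minimal template-mixing sketch with made-up Gaussian templates. Only the
# linear (vertical) mixing step is shown; moment morphing also shifts and
# scales each template according to the interpolated moments.
edges = np.linspace(-5.0, 5.0, 51)
centers = 0.5 * (edges[:-1] + edges[1:])

def template(mean):
    """Unit-area histogram template for a Gaussian with the given mean."""
    h = np.exp(-0.5 * (centers - mean) ** 2)
    return h / h.sum()

m0, m1 = -1.0, 1.0                       # parameter values of the input templates
t0, t1 = template(m0), template(m1)
m = 0.25                                 # target parameter value
w1 = (m - m0) / (m1 - m0)                # linear weight in parameter space
morphed = (1.0 - w1) * t0 + w1 * t1      # weights sum to one, so still normalized
```

Pure vertical mixing of well-separated templates produces an unphysical two-bump shape; transforming the templates so their moments match the interpolated target, as moment morphing does, is what removes that artifact while keeping the cost linear in the number of input templates.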

Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.

2015-01-01

52

Multi-dimensional Indoor Location Information Model

NASA Astrophysics Data System (ADS)

Aiming at the increasing requirements of seamless indoor and outdoor navigation and location service, a Chinese standard of Multidimensional Indoor Location Information Model is being developed, which defines ontology of indoor location. The model is complementary to 3D concepts like CityGML and IndoorGML. The goal of the model is to provide an exchange GML-based format for location needed for indoor routing and navigation. An elaborated user requirements analysis and investigation of state-of-the-art technology in expressing indoor location at home and abroad was completed to identify the manner humans specify location. The ultimate goal is to provide an ontology that will allow absolute and relative specification of location such as "in room 321", "on the second floor", as well as, "two meters from the second window", "12 steps from the door".

Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.

2013-11-01

53

Closed-form multi-dimensional multi-invariance ESPRIT

A closed-form multi-dimensional multi-invariance generalization of the ESPRIT algorithm is introduced to exploit the entire invariance structure underlying a (possibly) multiparametric data model, thereby greatly improving estimation performance. The multiple-invariance data structure that this proposed method can handle includes: (1) multiple occurrence of one size of invariance along one or multiple parametric dimensions, (2) multiple sizes of invariances along one

Kainam T. Wong; Michael D. Zoltowski

1997-01-01

54

Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus

Processor allocation in a mesh or torus connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of dimensions higher than three, which are used to solve complex problems arising in scientific computing, it becomes imperative to study the problem of allocating processors of the configuration of a torus in a multi-dimensional torus connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them in commonly used FCFS and backfilling scheduling policies and conducting simulation using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean job bounded slowdowns comparable to those in a fully connected multicomputer.

Weizhen Mao; Jie Chen; William Watson

2005-11-30

55

Study of multi-dimensional radiative energy transfer in molecular gases

NASA Technical Reports Server (NTRS)

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
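The flavor of a Monte Carlo photon-bundle calculation can be conveyed with a deliberately simple gray-gas sketch (the paper's spectrally correlated band treatment is far more involved; the slab geometry and bundle count here are illustrative choices):

```python
import numpy as np

# Gray-gas photon-bundle sketch: each bundle's absorption optical depth is
# sampled from Beer's law; bundles with depth greater than the slab's optical
# thickness tau escape, so the transmitted fraction estimates exp(-tau).
rng = np.random.default_rng(42)
tau = 1.5
n_bundles = 200_000
s = -np.log(1.0 - rng.random(n_bundles))   # sampled optical depths, Exp(1)
transmitted = (s > tau).mean()             # Monte Carlo transmissivity estimate
# analytic transmissivity for comparison: np.exp(-tau)
```

In a nongray gas the sampled absorption depth depends on the emission wavenumber, and the spectral correlation between successive path segments is precisely what the paper's narrow-band formulations must preserve.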

Liu, Jiwen; Tiwari, S. N.

1993-01-01

56

Portable laser synthesizer for high-speed multi-dimensional spectroscopy

Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.

Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)

2012-05-29

57

Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes

NASA Technical Reports Server (NTRS)

The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
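The "building block" idea, reduced to its simplest instance: piecewise-constant reconstruction plus an interface Riemann solution for 1-D linear advection. This is our toy example (first-order upwind, periodic domain), not one of the lecture's schemes:

```python
# Solve u_t + a*u_x = 0, the simplest hyperbolic conservation law, with a
# Godunov-type scheme: the Riemann problem at each interface between
# piecewise-constant states is solved exactly (for linear advection the
# solution is just the upwind state).

def advect(u, a, dx, dt, steps):
    n = len(u)
    for _ in range(steps):
        # flux[i] lives at the interface between cells i and i+1
        flux = [a * (u[i] if a >= 0 else u[(i + 1) % n]) for i in range(n)]
        # conservative update; flux[-1] wraps around (periodic domain)
        u = [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
    return u

# Square pulse advected for one full period on a periodic grid.
n, a = 50, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a                        # CFL number 0.5
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
u1 = advect(u0, a, dx, dt, steps=2 * n)  # total time = one domain traversal
mass0, mass1 = sum(u0), sum(u1)
```

The scheme conserves total mass exactly and, being monotone, produces no new extrema, only the expected first-order smearing of the pulse; higher-order reconstruction, as discussed in the lecture, reduces that smearing.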

Barth, Timothy J.; Kwak, Dochan (Technical Monitor)

1995-01-01

58

Scaling analysis of transient heating

NSDL National Science Digital Library

This problem is a simple case designed to show the power of scaling analysis to estimate the behavior of variables of interest without doing a detailed analysis. Here, internal heat generation heats a square part and the student is asked to find the dependence of the maximum temperature on time. The use of a scaling analysis encourages the student to think about the physics of the problem more than just solving the differential equation.
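One hedged way to carry out the scaling estimate the exercise asks for (the symbols are our notation, not necessarily the module's: q''' for volumetric generation rate, L for the part's side length, and alpha = k/(rho c) for thermal diffusivity):

```latex
% Energy balance for a square part with internal generation q''':
%   \rho c \, \partial_t T = k \nabla^2 T + q'''
% Balancing terms pairwise gives the two regimes:
\Delta T_{\max} \;\sim\;
\begin{cases}
  \dfrac{q'''\, t}{\rho c}, & t \ll L^2/\alpha
    \quad \text{(storage balances generation; conduction not yet felt)} \\[2ex]
  \dfrac{q'''\, L^2}{k}, & t \gg L^2/\alpha
    \quad \text{(conduction balances generation; steady state)}
\end{cases}
```

The crossover time L^2/alpha falls out of the same balance, which is exactly the kind of dependence-on-time answer the problem wants without solving the differential equation.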

Krane, Matthew J.

2008-10-14

59

Multi-dimensional coordination in cross-country skiing analyzed using self-organizing maps.

This study sought to ascertain how multi-dimensional coordination patterns changed with five poling speeds for 12 National Standard cross-country skiers during roller skiing on a treadmill. Self-organizing maps (SOMs), a type of artificial neural network, were used to map the multi-dimensional time series data on to a two-dimensional output grid. The trajectories of the best-matching nodes of the output were then used as a collective variable to train a second SOM to produce attractor diagrams and attractor surfaces to study coordination stability. Although four skiers had uni-modal basins of attraction that evolved gradually with changing speed, the other eight had two or three basins of attraction as poling speed changed. Two skiers showed bi-modal basins of attraction at some speeds, an example of degeneracy. What was most clearly evident was that different skiers showed different coordination dynamics for this skill as poling speed changed: inter-skier variability was the rule rather than an exception. The SOM analysis showed that coordination was much more variable in response to changing speeds compared to outcome variables such as poling frequency and cycle length. PMID:24060219
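A minimal SOM of the kind used in the study's first analysis stage can be sketched as follows; the grid size, learning-rate and neighborhood schedules, and the synthetic two-regime data are illustrative assumptions, not the authors' settings:

```python
import math, random

def train_som(data, rows=4, cols=4, epochs=30, seed=0):
    """Map multi-dimensional samples onto a rows x cols grid of nodes."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    for e in range(epochs):
        lr = 0.5 * (1.0 - e / epochs)             # decaying learning rate
        sigma = 0.5 + 3.0 * (1.0 - e / epochs)    # shrinking neighborhood
        for x in data:
            bmu = best_matching(w, x)
            for n in w:                           # pull BMU and neighbors toward x
                d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2.0 * sigma ** 2))
                w[n] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[n], x)]
    return w

def best_matching(w, x):
    """Best-matching node: smallest squared distance to sample x."""
    return min(w, key=lambda n: sum((wi - xi) ** 2 for wi, xi in zip(w[n], x)))

# Two well-separated regimes of 3-D "cycles" should land on different nodes.
rng = random.Random(1)
a = [[rng.gauss(0.0, 0.05) for _ in range(3)] for _ in range(20)]
b = [[rng.gauss(1.0, 0.05) for _ in range(3)] for _ in range(20)]
data = [x for pair in zip(a, b) for x in pair]    # interleave the regimes
som = train_som(data)
na, nb = best_matching(som, a[0]), best_matching(som, b[0])
```

Tracking the best-matching node per time step, as the study does, turns a high-dimensional movement pattern into a trajectory on the 2-D grid that can then be analyzed for attractor basins.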

Lamb, Peter F; Bartlett, Roger; Lindinger, Stefan; Kennedy, Gavin

2014-02-01

60

Multi-dimensional Longwave Forcing of Boundary Layer Cloud Systems

The importance of multi-dimensional (MD) longwave radiative effects on cloud dynamics is evaluated in a large eddy simulation (LES) framework employing multi-dimensional radiative transfer (the Spherical Harmonics Discrete Ordinate Method, SHDOM). Simulations are performed for a case of unbroken, marine boundary layer stratocumulus and a broken field of trade cumulus. "Snapshot" calculations of MD and IPA (independent pixel approximation, 1D) radiative transfer applied to LES cloud fields show that the total radiative forcing changes only slightly, although the MD effects significantly modify the spatial structure of the radiative forcing. Simulations of each cloud type employing MD and IPA radiative transfer, however, differ little. For the solid cloud case, the MD simulation exhibits a slight reduction in entrainment rate and boundary layer TKE relative to the IPA simulation. This reduction is consistent with both the slight decrease in net radiative forcing and a negative correlation between local vertical velocity and radiative forcing, which implies a damping of boundary layer eddies. Snapshot calculations of the broken cloud case suggest a slight increase in radiative cooling, though few systematic differences are noted in the interactive simulations. We attribute this result to the fact that radiative cooling is a relatively minor contribution to the total energetics. For the cloud systems in this study, the use of IPA longwave radiative transfer is sufficiently accurate to capture the dynamical behavior of BL clouds. Further investigations are required in order to generalize this conclusion for other cloud types and longer time integrations.

Mechem, David B.; Kogan, Y. L.; Ovtchinnikov, Mikhail; Davis, Anthony B; Evans, K. F.; Ellingson, Robert G.

2008-12-20

61

Multi-Dimensional Damage Detection for Surfaces and Structures

NASA Technical Reports Server (NTRS)

Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, layers of two-dimensional thin film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers as are necessary for the structure construction or to afford the detection detail level required.
The damage is detected using a detector or sensory system, which may include a time domain reflectometer, resistivity monitoring hardware, or other resistance-based systems. To begin, a layered composite consisting of thin-film damage detection layers separated by non-damage detection layers is fabricated. The damage detection layers are attached to a detector that provides details regarding the physical health of each detection layer individually. If damage occurs to any of the detection layers, a change in the electrical properties of the damaged detection layers occurs, and a response is generated. Real-time analysis of these responses will provide details regarding the depth, location, and size estimation of the damage. Multiple damages can be detected, and the extent (depth) of the damage can be used to generate prognostic information related to the expected lifetime of the layered composite system. The detection system can be fabricated very easily using off-the-shelf equipment, and the detection algorithms can be written and updated (as needed) to provide the level of detail needed based on the system being monitored. Connecting to the thin film detection layers is very easy as well. The truly unique feature of the system is its flexibility; the system can be designed to gather as much (or as little) information as the end user feels necessary. Individual detection layers can be turned on or off as necessary, and algorithms can be used to optimize performance. The system can be used to generate both diagnostic and prognostic information related to the health of layered composite structures, which will be essential if such systems are utilized for space exploration. The technology is also applicable to other in-situ health monitoring systems for structure integrity.
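A toy model of the layered-detection idea above (our simplification of the disclosure, not the NASA design): each detection layer carries row and column conductive stripes, a puncture is assumed to open every stripe it crosses, damage location is the intersection of broken stripes, and depth is the number of consecutive layers, counted from the surface, that show breaks.

```python
def read_layer(broken_stripes):
    """broken_stripes: set of ('row', i) / ('col', j) opens in one layer.
    Returns the damaged cell (i, j), or None if the layer is intact."""
    rows = sorted(i for kind, i in broken_stripes if kind == 'row')
    cols = sorted(j for kind, j in broken_stripes if kind == 'col')
    if not rows or not cols:
        return None
    return (rows[0], cols[0])   # corner of the damaged region

def assess(layers):
    """layers: list of stripe-break sets, surface layer first.
    Returns (penetration depth in layers, damage location)."""
    depth, loc = 0, None
    for breaks in layers:
        hit = read_layer(breaks)
        if hit is None:
            break               # penetration stopped above this layer
        depth, loc = depth + 1, hit
    return depth, loc

# A puncture at cell (3, 5) that pierces the first two of three layers:
layers = [
    {('row', 3), ('col', 5)},   # surface detection layer
    {('row', 3), ('col', 5)},   # second detection layer
    set(),                      # innermost layer intact
]
depth, loc = assess(layers)
```

In this sketch the depth reading is the prognostic quantity (how far the impactor penetrated) and the cell index is the diagnostic one (where to inspect or repair).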

Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah

2013-01-01

62

The evolution of anisotropic structures and turbulence in the multi-dimensional Burgers equation

The goal of the present paper is the investigation of the evolution of anisotropic regular structures and turbulence at large Reynolds number in the multi-dimensional Burgers equation. We show that there is local isotropization of the velocity and potential fields at small scales inside cellular zones. For periodic waves, we have simple decay inside a frozen structure. The global structure at large times is determined by the initial correlations, and for a short-range correlated field, we have isotropization of turbulence. The other limit we consider is the final behavior of the field, when the processes of nonlinear and harmonic interactions are frozen and the evolution of the field is determined only by the linear dissipation.

S. N. Gurbatov; A. Yu. Moshkov; A. Noullez

2008-08-20

63

ROOT — An object oriented data analysis framework

The ROOT system is an Object Oriented framework for large scale data analysis. ROOT, written in C++, contains, among others, an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The command and…

Rene Brun; Fons Rademakers

1997-01-01

64

Code Generation for Single-Dimension Software Pipelining of Multi-Dimensional Loops

…Compared to traditional software pipelining, SSP handles two distinct repetitive patterns, and thus requires new code … method to the multi-dimensional domain. However, the approach has two major shortcomings. First…

Gao, Guang R.

65

Balance properties of multi-dimensional words

Balance properties of multi-dimensional words. Valérie Berthé and Robert Tijdeman. Abstract: A word u is called 1-balanced if for any two factors v and w of u of equal length, the numbers of occurrences of a given letter in v and w differ by at most 1. The aim of this paper is to extend the notion of balance to multi-dimensional words. We first…

Tijdeman, Robert

66

Simulation and Fitting of Multi-Dimensional X-ray Data

NASA Astrophysics Data System (ADS)

Astronomical data generally consists of 2 or more high-resolution axes, e.g. X and Y coordinates on the sky or wavelength and position along one axis (long-slit spectrometer). Analyzing these multi-dimensional observations requires combining 3D source models (including velocity effects), instrument models, and multi-dimensional data comparison and fitting. A prototype of such a "Beyond XSPEC" (Noble 2008) system is presented here using Chandra imaging and dispersed HETG grating data. Techniques used include: Monte Carlo event generation, chi-squared comparison, conjugate gradient fitting adapted to the Monte Carlo characteristics, and informative visualizations at each step. These simple baby steps of progress only scratch the surface of the computational potential that is available these days for astronomical analysis.

Dewey, D.; Noble, M. S.

2009-09-01

67

Multi-Dimensional Analysis of Dynamic Human Information Interaction

ERIC Educational Resources Information Center

Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…

Park, Minsoo

2013-01-01

68

Subsonic Flows in a Multi-Dimensional Nozzle

In this paper, we study the global subsonic irrotational flows in a multi-dimensional ($n\\geq 2$) infinitely long nozzle with variable cross sections. The flow is described by the inviscid potential equation, which is a second order quasilinear elliptic equation when the flow is subsonic. First, we prove the existence of the global uniformly subsonic flow in a general infinitely long nozzle for arbitrary dimension for sufficiently small incoming mass flux and obtain the uniqueness of the global uniformly subsonic flow. Furthermore, we show that there exists a critical value of the incoming mass flux such that a global uniformly subsonic flow exists uniquely, provided that the incoming mass flux is less than the critical value. This gives a positive answer to the problem of Bers on global subsonic irrotational flows in infinitely long nozzles for arbitrary dimension. Finally, under suitable asymptotic assumptions of the nozzle, we obtain the asymptotic behavior of the subsonic flow in far fields by a blow-up a...

Du, Lili; Yan, Wei

2011-01-01

69

Steps Toward a Large-Scale Solar Image Data Analysis to Differentiate Solar Phenomena

NASA Astrophysics Data System (ADS)

We detail the investigation of the first application of several dissimilarity measures for large-scale solar image data analysis. Using a solar-domain-specific benchmark dataset that contains multiple types of phenomena, we analyzed combinations of image parameters with different dissimilarity measures to determine the combinations that will allow us to differentiate between the multiple solar phenomena from both intra-class and inter-class perspectives, where by class we refer to the same types of solar phenomena. We also investigate the problem of reducing data dimensionality by applying multi-dimensional scaling (MDS) to the dissimilarity matrices produced by these combinations. As an early investigation into dimensionality reduction, we examine how many MDS components are needed to maintain a good representation of our data (in a new artificial data space) and how many can be discarded to enhance querying performance. Finally, we present a comparative analysis of several classifiers to determine the quality of the dimensionality reduction achieved with this combination of image parameters, similarity measures, and MDS.
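The MDS step described above can be sketched with classical (Torgerson) scaling, which double-centers the squared dissimilarities and embeds points along the leading eigenvectors. The pure-Python power iteration below stands in for a real symmetric eigensolver and assumes the leading eigenvalues of the centered matrix are positive:

```python
import math, random

def classical_mds(D, k=2, iters=200, seed=0):
    """Embed an n x n dissimilarity matrix D in k dimensions."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    rm = [sum(row) / n for row in D2]                 # row means
    gm = sum(rm) / n                                  # grand mean
    # double centering: B = -1/2 * J D^2 J
    B = [[-0.5 * (D2[i][j] - rm[i] - rm[j] + gm) for j in range(n)]
         for i in range(n)]
    rng = random.Random(seed)
    coords = [[0.0] * k for _ in range(n)]
    for c in range(k):
        v = [rng.gauss(0, 1) for _ in range(n)]
        lam = 0.0
        for _ in range(iters):                        # power iteration on B
            w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = math.sqrt(sum(x * x for x in w))
            if lam < 1e-12:                           # eigenvalues exhausted
                break
            v = [x / lam for x in w]
        for i in range(n):                            # coordinate = sqrt(lam)*v
            coords[i][c] = math.sqrt(max(lam, 0.0)) * v[i]
        for i in range(n):                            # deflate: B -= lam*v*v^T
            for j in range(n):
                B[i][j] -= lam * v[i] * v[j]
    return coords

# Three collinear "images" with distances 0-1-2 along a line: the embedding
# should reproduce those pairwise distances.
D = [[0.0, 1.0, 2.0],
     [1.0, 0.0, 1.0],
     [2.0, 1.0, 0.0]]
X = classical_mds(D)
d01 = math.dist(X[0], X[1])
d02 = math.dist(X[0], X[2])
```

Keeping only the components with large eigenvalues, as the paper investigates, discards directions that contribute little to the reconstructed dissimilarities.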

Banda, J. M.; Angryk, R. A.; Martens, P. C. H.

2013-11-01

70

In this thesis, we present a system for visualizing hierarchical, multi-dimensional, memory-intensive datasets. Specifically, we designed an interactive system to visualize data collected by high-throughput microscopy and ...

Kang, InHan

2006-01-01

71

Earthquake Clusters over Multi-dimensional Space, Visualization of

Earthquake Clusters over Multi-dimensional Space, Visualization of. Encyclopedia article; affiliations include the University of Southern California, Los Angeles, USA, and the Department of Computer Science, University of Colorado, Boulder, USA. Article outline: Glossary; Definition of the Subject; Introduction; Earthquake Clustering; …

Ben-Zion, Yehuda

72

Multi-dimensional ultra-high frequency passive radio frequency identification tag antenna designs

In this thesis, we present the design, simulation, and empirical evaluation of two novel multi-dimensional ultra-high frequency (UHF) passive radio frequency identification (RFID) tag antennas, the Albano-Dipole antenna ...

Delichatsios, Stefanie Alkistis

2006-01-01

73

Multi-Dimensional Separation of Concerns in Requirements Engineering

Multi-Dimensional Separation of Concerns in Requirements Engineering. Ana Moreira, Awais Rashid. … regardless of their functional or non-functional nature. This makes it possible to project any particular set of requirements…

74

Multi-dimensional Multiphase Modeling of Sediment Transport

NASA Astrophysics Data System (ADS)

Sediment transport driven by waves and currents is of great significance to further predict coastal morphodynamics. Eulerian two-phase models have been shown effective to study sheet flow sediment transport, though most of them are limited to Reynolds-averaged one-dimensional-vertical formulation. Hence, bedform, plug flow and turbulence cannot be resolved. Our goal is to develop four-way coupled multiphase models for multi-dimensional sediment transport under the numerical framework of OpenFOAM for Eulerian modeling and CFDEM for Euler-Lagrangian modeling. In the Eulerian modeling, particle-particle interaction is modeled using the kinetic theory for granular flow for binary collision and phenomenological closure for stresses of enduring contact. To improve the capability of the model for a range of grain sizes, a new closure for the fluid-particle velocity fluctuation correlation in the k-ε equations is proposed. The model is validated by comparing the numerical results with laboratory experiments under steady flow and oscillatory flow for grain sizes ranging from 0.13 to 0.51 mm. To improve the closure of particle stress and study poly-dispersed sediment transport processes, an Euler-Lagrangian solver called CFDEM, which couples OpenFOAM for the fluid phase and LIGGGHTS for the particle phase, is modified for sand transport in oscillatory flow. Preliminary investigation suggests that even under sheet flow condition, small bed irregularities are observed during flow reversal. These small irregularities later encourage the formation of large sediment clouds during peak flow. 2D/3D simulation of the recent U-tube experiments at the Naval Research Laboratory will be carried out to study instabilities in sheet flow and the poly-dispersed effects.

Cheng, Z.; d'Albignac, S.; Yu, X.; Hsu, T.; Sou, I.; Calantoni, J.

2012-12-01

75

Chemistry and Transport in a Multi-Dimensional Model

NASA Technical Reports Server (NTRS)

Our work has two primary scientific goals, the interannual variability (IAV) of stratospheric ozone and the hydrological cycle of the upper troposphere and lower stratosphere. Our efforts are aimed at integrating new information obtained by spacecraft and aircraft measurements to achieve a better understanding of the chemical and dynamical processes that are needed for realistic evaluations of human impact on the global environment. A primary motivation for studying the ozone layer is to separate the anthropogenic perturbations of the ozone layer from natural variability. Using the recently available merged ozone data (MOD), we have carried out an empirical orthogonal function (EOF) study of the temporal and spatial patterns of the IAV of total column ozone in the tropics. The outstanding problem about water in the stratosphere is its secular increase in the last few decades. The Caltech/JPL multi-dimensional photochemical and chemical transport model (CTM) is used to simulate the processes that control the water vapor and its isotopic composition in the stratosphere. Datasets we will use for comparison with model results include those obtained by the Total Ozone Mapping Spectrometer (TOMS), the Solar Backscatter Ultraviolet (SBUV and SBUV/2), Stratosphere Aerosol and Gas Experiment (SAGE I and II), the Halogen Occultation Experiment (HALOE), the Atmospheric Trace Molecular Spectroscopy (ATMOS) and those soon to be obtained by the Cirrus Regional Study of Tropical Anvils and Cirrus Layers Florida Area Cirrus Experiment (CRYSTAL-FACE) mission. The focus of the investigations is the exchange between the stratosphere and the troposphere, and between the troposphere and the biosphere.

Yung, Yuk L.

2004-01-01

76

Importance of multi-dimensional conductive heat flows in and around buildings

The modelling of certain structures in a manner consistent with current practice can lead to significant errors as a result of the non-treatment of multi-dimensional conductive heat flows. A series of multi-dimensional simulations using a modified whole building thermal simulation model have been performed in order to highlight this problem. As a result, it is suggested that all building thermal

M. Davies; A. Tindale; J. Littler

1995-01-01

77

Real-time metaphorical visualization of multi-dimensional environmental data

Real-Time Metaphorical Visualization of Multi-Dimensional Environmental Data. A Thesis by Eric Brian Aley, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 2006. Major Subject: Visualization Sciences.

Aley, Eric Brian

2006-08-16

78

Multi-dimensional chromatography offers increased resolution and peak capacity by coupling multiple columns with the same or different separation mechanisms. In this work, a novel multi-channel multi-dimensional counter-current chromatography (CCC) has been successfully constructed and used for several two-dimensional (2D) and three-dimensional (3D) CCC separations including 2D A×B/A×C, A×B-C and A-B×C, and 3D A×B×C systems. These 2D and 3D CCC systems were further applied to separate the bioactive tanshinones from the extract of Tanshen (or Danshen, Salvia miltiorrhiza Bunge), a famous Traditional Chinese Medicine (TCM). As a result, the developed 2D and 3D CCC methods were successful and efficient for resolving the tanshinones from complex extracts. Compared to the 1D multiple-column CCC separation, the 2D and 3D CCC decrease analysis time, reduce solvent consumption and increase sample throughput significantly. It may be widely used for current drug development, metabolomic analysis and natural product isolation. PMID:24296293

Meng, Jie; Yang, Zhi; Liang, Junling; Zhou, Hui; Wu, Shihua

2014-01-01

79

NASA Astrophysics Data System (ADS)

We propose accurate explicit numerical schemes based on the lattice Boltzmann (LB) method for multi-dimensional diffusion equations. In LB schemes, the velocity models D2Q9 and D2Q13 are used for two-dimensional equations and D3Q19 and D3Q25 for three-dimensional equations. We introduce free parameters that characterize the weight of the equilibrium distribution functions to reduce numerical errors. Consistency analysis through the fourth-order Chapman-Enskog expansion of the distribution functions gives an approximate diffusion equation with error terms up to fourth-order. The relaxation parameter and weight parameters are determined so that second-order error terms are eliminated in the approximate equation. Stability analysis shows that we can find a relaxation parameter so that each of the presented schemes is stable for given diffusion coefficients and discretizing parameters. Numerical experiments for the isotropic and anisotropic benchmark problems show that the presented schemes derived from the velocity models D2Q13 and D3Q25 are useful for numerical simulations of practical problems governed by two- and three-dimensional diffusion equations, respectively. In particular, schemes in which the value of the relaxation parameter is set to be 1 demonstrate a fourth-order accuracy under the stability condition.
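A minimal lattice Boltzmann diffusion scheme in the same spirit, but using the simpler D2Q5 velocity model rather than the paper's D2Q9/D2Q13; the weights and relaxation parameter are standard textbook choices assumed here, not the paper's optimized parameters:

```python
# D2Q5 lattice Boltzmann sketch for the 2-D diffusion equation:
# collision relaxes toward f_i^eq = w_i * rho, then populations stream
# one lattice link per step on a periodic grid.

N = 16                                   # periodic N x N grid
W = [1/3, 1/6, 1/6, 1/6, 1/6]            # rest link + 4 nearest links
C = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
omega = 1.0                              # BGK relaxation parameter

# initial condition: all mass concentrated in the center cell
rho0 = [[0.0] * N for _ in range(N)]
rho0[N // 2][N // 2] = 1.0
f = [[[W[i] * rho0[x][y] for i in range(5)] for y in range(N)]
     for x in range(N)]

for _ in range(20):
    # collision step (conserves the local density rho exactly)
    for x in range(N):
        for y in range(N):
            rho = sum(f[x][y])
            for i in range(5):
                f[x][y][i] += omega * (W[i] * rho - f[x][y][i])
    # streaming step: move each population along its link (periodic wrap)
    g = [[[0.0] * 5 for _ in range(N)] for _ in range(N)]
    for x in range(N):
        for y in range(N):
            for i, (cx, cy) in enumerate(C):
                g[(x + cx) % N][(y + cy) % N][i] = f[x][y][i]
    f = g

rho = [[sum(f[x][y]) for y in range(N)] for x in range(N)]
total = sum(map(sum, rho))
center = rho[N // 2][N // 2]
left, right = rho[N // 2 - 1][N // 2], rho[N // 2 + 1][N // 2]
```

The run conserves total mass to floating-point accuracy and spreads the initial spike symmetrically, the two sanity checks any LB diffusion scheme should pass before studying its order of accuracy.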

Suga, Shinsuke

2014-11-01

80

Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling

ERIC Educational Resources Information Center

This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). Profile Analysis via Multidimensional Scaling (PAMS) is an exploratory method for identifying major…

Kim, Se-Kang; Davison, Mark L.; Frisby, Craig L.

2007-01-01

81

NASA Astrophysics Data System (ADS)

Ricksep is a freely-available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low-frequency data set on a larger scale. As a result, the originally separated high and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states.
Then, the recipe can be loaded into an active Ricksep session, after which the user can navigate to any state in the sequence and modify the sequence from that state. Typical uses of this feature are undoing and redoing viewing commands and animating a sequence of viewing states. A theoretical discussion is carried out and several examples using real seismic data are provided to show how these new Ricksep features provide more convenient, accurate ways to manipulate multi-dimensional data sets.
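The high/low-frequency synthesis idea in the first feature can be sketched with a moving-average split; this simple filter is our stand-in, not Ricksep's actual algorithm:

```python
import math

def moving_average(x, width):
    """Centered moving average with shrinking windows at the edges."""
    half, n = width // 2, len(x)
    return [sum(x[max(0, i - half):min(n, i + half + 1)])
            / (min(n, i + half + 1) - max(0, i - half)) for i in range(n)]

def synthesize(high_freq, low_freq, width=9):
    """Combined trace = smooth(low_freq) + (high_freq - smooth(high_freq)):
    the large-scale trend comes from one data set, the detail from the other."""
    trend = moving_average(low_freq, width)
    detail_base = moving_average(high_freq, width)
    return [t + (h - hb) for t, h, hb in zip(trend, high_freq, detail_base)]

# Slow ramp (stand-in for velocity) plus fast oscillation (stand-in for
# seismic amplitude): the combined trace oscillates about the ramp.
n = 200
low = [i / n for i in range(n)]
high = [math.sin(2 * math.pi * i / 8) for i in range(n)]
combo = synthesize(high, low)
```

Locally the combined trace looks like the high-frequency data, while its window average tracks the low-frequency trend, which is the behavior the abstract describes.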

Chen, D. M.; Clapp, R. G.; Biondi, B.

2006-12-01

82

Principal Component Analysis for Large Scale Problems

Principal Component Analysis for Large Scale Problems with Lots of Missing Values. Tapani Raiko (www.cis.hut.fi/projects/bayes/). Abstract: Principal component analysis (PCA) is a well-known classical data analysis technique… orthogonalization of A and S. Section 3, Principal Component Analysis with Missing Values: let us consider the same problem…
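The paper's theme, PCA when many entries are missing, can be sketched as a rank-1 alternating-least-squares fit over the observed entries only; the initialization and iteration count below are our choices, not Raiko et al.'s method in detail:

```python
import random

def rank1_als(X, iters=100, seed=0):
    """Fit x_ij ~ a_i * b_j by alternating least squares, minimizing the
    squared error over observed entries only. X: rows with None = missing."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(m)]
    for _ in range(iters):
        for i in range(n):        # update row scores a_i
            num = sum(X[i][j] * b[j] for j in range(m) if X[i][j] is not None)
            den = sum(b[j] ** 2 for j in range(m) if X[i][j] is not None)
            if den > 0:
                a[i] = num / den
        for j in range(m):        # update column loadings b_j
            num = sum(X[i][j] * a[i] for i in range(n) if X[i][j] is not None)
            den = sum(a[i] ** 2 for i in range(n) if X[i][j] is not None)
            if den > 0:
                b[j] = num / den
    return a, b

# Exact rank-1 data x_ij = u_i * v_j with two entries removed: ALS should
# recover the missing values from the observed ones.
u, v = [1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 0.5]
X = [[ui * vj for vj in v] for ui in u]
X[0][1] = None
X[2][2] = None
a, b = rank1_als(X)
recon01 = a[0] * b[1]   # true value 1.0 * 1.0 = 1.0
recon22 = a[2] * b[2]   # true value 3.0 * 0.5 = 1.5
```

Each update touches only observed entries, so the cost per sweep scales with the number of observations rather than with the full matrix, which is what makes this family of methods viable for large sparse problems.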

Karhunen, Juha

83

Scale-PC shielding analysis sequences

The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on a MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.

Bowman, S.M.

1996-05-01

84

Response function calculations for the solid propellant rocket motor are carried out using finite elements. The formulation is based on the Arrhenius law with a single step forward chemical reaction and the pressure coupling impressed by a harmonic acoustic disturbance of arbitrary wave incidence. Multi-dimensional nonlinear time dependent equations are employed with analytical boundary conditions at the flame and decomposition

T. J. Chung; P. K. Kim

1984-01-01

85

NASA Astrophysics Data System (ADS)

Multi-dimensional color image processing has two difficulties: one is that a large number of bits are needed to store multi-dimensional color images; the other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve the above problems, this paper proposes a new representation for multi-dimensional color images, called an (n+1)-qubit normal arbitrary quantum superposition state (NAQSS), where n qubits represent the colors and coordinates of pixels (for example, storing a three-dimensional color image using only 30 qubits), and the remaining qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm which can search out a coordinate of a target sub-image, with a running time that depends only on the numbers of pixels of the image and of the target sub-image.

Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang

2014-04-01

86

Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity

Ron Lavi. Scheduling on unrelated machines is considered in the context of algorithmic mechanism design (also a multi-dimensional domain), where the processing time of a job on each machine is either "low" or "high". No truthful mechanisms with non…

87

The Multi-Dimensional Hardy Uncertainty Principle and its Interpretation in Terms of the Wigner Distribution

The Multi-Dimensional Hardy Uncertainty Principle and its Interpretation in Terms of the Wigner Distribution. Wien, February 1, 2008. Abstract: We extend Hardy's uncertainty principle for a square… We use this extension to show that Hardy's uncertainty principle is equivalent to a statement…

Feichtinger, Hans Georg

88

ERIC Educational Resources Information Center

The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…

Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd

2012-01-01

89

A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system

We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.

Asadzadeh, M.; Bartoszek, K. [Department of Mathematics, Chalmers University of Technology and University of Gothenburg SE-412 96 Goeteborg (Sweden)

2011-05-20

90

Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter

ERIC Educational Resources Information Center

We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…

Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.

2010-01-01

91

Experimenting with Drugs (and Topic Models): Multi-Dimensional Exploration of Recreational Drug Discussions. Tracking the emergence of new recreational drugs and trends requires mining current information from non-traditional text sources. We use f-LDA to model three factors (drug type, route of delivery, and aspect), grouped into tuples, e.g. (CANNABIS, SMOKING, EFFECTS).

Dredze, Mark

92

Supporting Complex Multi-dimensional Queries in P2P Systems Wang-Chien Lee

Supports complex queries (e.g., range query) over multi-dimensional data in P2P systems. To balance the non-uniformity in computational resources among peers, super-peer networks [26] such as KaZaA [24] have emerged and continue to grow with great potential.

Giles, C. Lee

93

Differential Privacy and the Risk-Utility Tradeoff for Multi-dimensional Contingency Tables

Extends the differential privacy approach to the release of a specified set of margins from a multi-way contingency table, while allowing sensible inferences from the released data.

94

ERIC Educational Resources Information Center

Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…

Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen

2011-01-01

95

Address Decomposition for the Shaping of Multi-dimensional Signal Constellations

This shaping scheme, called address decomposition, is based on decomposing the addressing of the constellation. The points used for transmission form a signal constellation; the constellation points are usually selected as a finite subset…

Kabal, Peter

96

On the Arbitrariness and Robustness of Multi-Dimensional Poverty Rankings

It is often argued that multi-dimensional measures of well-being and poverty, such as those based on the capability approach and related views, are ad hoc, and that rankings based on them are therefore not robust to changes in the selection of the weights used. In this paper, it is argued that the extent of potential arbitrariness and the range…

Mozaffar Qizilbash

2004-01-01

97

Design and fabrication of multi-dimensional RF MEMS variable capacitors

In this work, a multi-dimensional RF MEMS variable capacitor that utilizes electrostatic actuation is designed and fabricated on a 425 µm thick silicon substrate. Electrostatic actuation is preferred over other actuation mechanisms due to its low power consumption. The RF MEMS variable capacitor is designed in a CPW topology, with multiple beams (1 to 7) supported on a single pedestal.

Hariharasudhan T Kannan

2003-01-01

98

Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5

The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP/MOD3.

Davis, C.B.

1998-01-01

99

Multi-dimensional liquid chromatography in proteomics--a review.

Proteomics is the large-scale study of proteins, particularly their expression, structures and functions. This still-emerging combination of technologies aims to describe and characterize all expressed proteins in a biological system. Because of upper limits on mass detection of mass spectrometers, proteins are usually digested into peptides and the peptides are then separated, identified and quantified from this complex enzymatic digest. The problem in digesting proteins first and then analyzing the peptide cleavage fragments by mass spectrometry is that huge numbers of peptides are generated that overwhelm direct mass spectral analyses. The objective in the liquid chromatography approach to proteomics is to fractionate peptide mixtures to enable and maximize identification and quantification of the component peptides by mass spectrometry. This review will focus on existing multidimensional liquid chromatographic (MDLC) platforms developed for proteomics and their application in combination with other techniques such as stable isotope labeling. We also provide some perspectives on likely future developments. PMID:20363391

Zhang, Xiang; Fang, Aiqin; Riley, Catherine P; Wang, Mu; Regnier, Fred E; Buck, Charles

2010-04-01

100

Finite element method for radiation heat transfer in multi-dimensional graded index medium

NASA Astrophysics Data System (ADS)

In a graded index medium, a ray travels along a curved path determined by Fermat's principle, and curved ray-tracing is difficult and complex. To avoid the complicated and time-consuming computation of curved ray trajectories, a finite element method based on the discrete ordinate equation is developed to solve the radiative transfer problem in a multi-dimensional semitransparent graded index medium. Two particular test problems of radiative transfer are taken as examples to verify this finite element method. The predicted dimensionless net radiative heat fluxes are determined by the proposed method and compared with the results obtained by the finite volume method. The results show that the finite element method presented in this paper has good accuracy in solving the multi-dimensional radiative transfer problem in a semitransparent graded index medium.

Liu, L. H.; Zhang, L.; Tan, H. P.

2006-02-01

101

Bayesian finite-size scaling analysis

The finite-size scaling analysis for phase transition phenomena is widely used to determine the transition point and the universality class. As with the maximum entropy method for the analytic continuation of quantum Monte Carlo data, we propose a Bayesian inference method for the finite-size scaling analysis. This method is based on a regression using a Gaussian process, which has been widely…

Kenji Harada

2011-01-01
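
A minimal sketch of the Gaussian-process regression at the heart of such a Bayesian finite-size scaling analysis, assuming an RBF kernel with fixed hyperparameters (the actual method also infers the scaling variables and kernel hyperparameters, which are omitted here):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.2, sigma_n=1e-4):
    """Gaussian-process regression mean with an RBF kernel: the smooth,
    non-parametric regressor used to model an unknown scaling function."""
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = rbf(x_train, x_train) + sigma_n * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Recover a smooth "scaling function" from noiseless samples.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
pred = gp_predict(x, y, np.array([0.25]))
print(pred)  # close to sin(pi/2) = 1
```

The jitter term `sigma_n` doubles as an observation-noise model, which is what makes the Bayesian treatment of noisy Monte Carlo data natural.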

102

Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws

NASA Astrophysics Data System (ADS)

We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, and essentially dispersionless, spectral, solution away from discontinuities, as well as mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. 
The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi-domain strategy. For the same RMI test, the hybrid computations on a single GPU, in double-precision arithmetic, display a five- to six-fold speedup over the hybrid computations on a single CPU. The relative speedup of the hybrid computation over the WENO computations on GPUs is similar to that on CPUs, demonstrating the advantage of hybrid schemes on both CPUs and GPUs.

Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu

2013-11-01
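
The hybrid solver's domain splitting relies on detecting non-smooth cells. A one-dimensional sketch of such a smoothness sensor follows; it is a crude stand-in for Harten's multi-resolution procedure, and the threshold value is an assumption:

```python
import numpy as np

def flag_discontinuities(u, threshold=0.1):
    """Flag cells whose value deviates strongly from the linear interpolation
    of their two neighbours. Flagged cells would be handled by the
    shock-capturing (WENO) solver, smooth cells by the cheap FC solver."""
    detail = np.abs(u[1:-1] - 0.5 * (u[:-2] + u[2:]))
    flags = np.zeros(u.shape, dtype=bool)
    flags[1:-1] = detail > threshold
    return flags

# A smooth sine with a jump at x = 0.5: only the cells at the jump are flagged.
x = np.linspace(0.0, 1.0, 64)
u = np.sin(2 * np.pi * x) + 2.0 * (x > 0.5)
flags = flag_discontinuities(u)
print(np.flatnonzero(flags))  # the two indices adjacent to the jump
```

Because only the flagged region pays the WENO cost, the overall scheme stays close to spectral efficiency when discontinuities are localized.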

103

Utility of the NEO-FFI in multi-dimensional assessment of orofacial pain conditions

The purpose of this study was to examine the utility of using the NEO-FFI personality assessment as part of a multi-dimensional psychological assessment of orofacial pain patients during the initial diagnostic visit. All patients completed an orofacial pain questionnaire and a battery of psychological questionnaires covering a wide range of symptoms and behaviors important to developing a comprehensive treatment plan.

John E. Schmidt; W. Michael Hooten; Charles R. Carlson

2011-01-01

104

Metarule-Guided Mining of Multi-Dimensional Association Rules Using Data Cubes

In this paper, we employ a novel approach to metarule-guided, multi-dimensional association rule mining which explores a data cube structure. We propose algorithms for metarule-guided mining: given a metarule containing p predicates, we compare mining on an n-dimensional (n-D) cube structure (where p < n) with mining on multiple smaller p-dimensional cubes. In addition, we propose an efficient method…

Micheline Kamber; Jiawei Han; Jenny Chiang

1997-01-01

105

Multi-Dimensional Validation Impact Tests on PZT 95/5 and ALOX

Multi-dimensional impact tests were conducted on the ferroelectric ceramic PZT 95/5 and alumina-loaded epoxy (ALOX) encapsulants, with the purpose of providing benchmarks for material models in the ALEGRA wavecode. Diagnostics used included line-imaging VISAR (velocity interferometry), a key diagnostic for such tests. Results from four tests conducted with ALOX cylinders impacted by nonplanar copper projectiles were compared with ALEGRA simulations.

M. D. Furnish; J. Robbins; W. M. Trott; L. C. Chhabildas; R. J. Lawrence; S. T. Montgomery

2002-01-01

106

Locating the Issue Public: The Multi-Dimensional Nature of Engagement with Health Care Reform

This research examines in detail the structure of the issue public for health care reform, drawing from extensive, nationally representative survey data tapping general attentiveness to news and public affairs, specific interests in health care issues, and motivations (e.g., personal health and financial conditions) to follow health care reform issues. We furthermore adopt a multi-dimensional approach to defining the contours…

Vincent Price; Clarissa David; Brian Goldthorpe; Marci McCoy Roth; Joseph N. Cappella

2006-01-01

107

Finite element method for radiation heat transfer in multi-dimensional graded index medium

In a graded index medium, a ray travels along a curved path determined by Fermat's principle, and curved ray-tracing is difficult and complex. To avoid the complicated and time-consuming computation of curved ray trajectories, a finite element method based on the discrete ordinate equation is developed to solve the radiative transfer problem in a multi-dimensional semitransparent graded index medium. Two particular test…

L. H. Liu; L. Zhang; H. P. Tan

2006-01-01

108

Background: The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods: An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results: The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions: The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589

Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose

2014-01-01

109

Scale-Specific Multifractal Medical Image Analysis

Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588

Braverman, Boris

2013-01-01
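
The box-counting dimension estimate described in this abstract can be sketched in a few lines. This minimal version omits the integral-image speedup and the generalized Rényi (information and correlation) dimensions the study also computes:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary image:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) versus log(1/s)."""
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        h, w = img.shape
        # reduce each s x s box to a single "occupied or not" flag
        boxes = img[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square is 2-dimensional; its box-counting estimate is exactly 2.
filled = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(filled))
```

Treating the slope over a restricted range of `sizes` rather than all scales is precisely the scale-dependent view the abstract advocates.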

110

NASA Astrophysics Data System (ADS)

This paper addresses the extension of one-dimensional filters in two and three space dimensions. A new multi-dimensional extension is proposed for explicit and implicit generalized Shapiro filters. We introduce a definition of explicit and implicit generalized Shapiro filters that leads to very simple formulas for the analyses in two and three space dimensions. We show that many filters used for weather forecasting, high-order aerodynamic and aeroacoustic computations match the proposed definition. Consequently the new multi-dimensional extension can be easily implemented in existing solvers. The new multi-dimensional extension and the two commonly used methods are compared in terms of compactness, robustness, accuracy and computational cost. Benefits of the genuinely multi-dimensional extension are assessed for various computations using the compressible Euler equations.

Falissard, F.

2013-11-01
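
For reference, a second-order explicit Shapiro filter and its common dimension-by-dimension extension can be sketched as follows. This is the conventional extension such filters are usually given, not the genuinely multi-dimensional operator the paper proposes:

```python
import numpy as np

def shapiro_filter_1d(u, axis=0):
    """Explicit second-order Shapiro filter along one axis (periodic):
    u_i <- u_i + (1/4)(u_{i-1} - 2 u_i + u_{i+1}).
    It completely removes the grid-to-grid (two-point) oscillation."""
    return u + 0.25 * (np.roll(u, 1, axis) - 2 * u + np.roll(u, -1, axis))

def shapiro_filter_2d(u):
    """Dimension-by-dimension extension: apply the 1-D filter successively
    in each direction (a sketch of the standard practice)."""
    return shapiro_filter_1d(shapiro_filter_1d(u, axis=0), axis=1)

# The two-point wave (+1, -1, +1, ...) is annihilated in one pass.
saw = np.array([1.0, -1.0] * 4)
print(shapiro_filter_1d(saw))  # all zeros
```

Constant fields pass through unchanged, which is the minimal consistency requirement for any such smoothing operator.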

111

Scale Free Reduced Rank Image Analysis.

ERIC Educational Resources Information Center

In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

Horst, Paul

112

A multi-dimensional and multi-species reactive transport model was developed to aid in the analysis of natural attenuation design at chlorinated solvent sites. The model can simulate several simultaneously occurring attenuation processes including aerobic and anaerobic biological degradation processes. The developed model was applied to analyze field-scale transport and biodegradation processes occurring at the Area-6 site in Dover Air Force Base,

T. Prabhakar Clement; Christian D. Johnson; Yunwei Sun; Gary M. Klecka; Craig Bartlett

2000-01-01

113

2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme

NASA Technical Reports Server (NTRS)

A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigen-value limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1-17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.

Wood, William A.; Kleb, William L.

2001-01-01

114

A case study of nucleosynthesis in multi-dimensional supernova simulations

NASA Astrophysics Data System (ADS)

We present a case study of several multi-dimensional smoothed particle hydrodynamics simulations with large nuclear network post-processing in which the effects of asymmetries on nucleosynthesis in supernovae are assessed. The abundances and spatial distribution of the short-lived radionuclides 26Al, 41Ca, and 60Fe are evaluated along with the coproduced oxygen isotopes and the S/Si ratio, used as an observational tracer. We also examine 44Ti and 56Ni and the bulk abundances of key common elements. Particular attention is paid to the composition of the Rayleigh-Taylor Instability driven 'bullets' of material observed in young supernova remnants.

Sexton, Jack; Young, Patrick A.; Ellinger, Carola I.; Fryer, Chris; Rockefeller, Gabriel

2015-01-01

115

We discuss the underlying reasoning behind, and the details of, the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot-noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary in one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot-noise algorithm's correctness.

Fawley, William M.

2002-03-25

116

Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

NASA Technical Reports Server (NTRS)

An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.

Abarbanel, Saul; Ditkowski, Adi

1996-01-01

117

A multi-dimensional scale for repositioning public park and recreation services

to embrace all aspects of the marketing mix. Promotional messages convey the product's or the service's distinct image, but other variables in the marketing mix also must contribute to operationalizing the selected position. Lovelock (1996) highlights..., and that "people make their decisions based on their individual perceptions of reality, rather than on [the marketer's] definition of that reality" (Lovelock, 1996, p. 168). For this reason, marketers must adopt a customer perspective and understand how...

Kaczynski, Andrew Thomas

2004-09-30

118

Large-Scale Visual Data Analysis

NASA Astrophysics Data System (ADS)

Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

Johnson, Chris

2014-04-01

119

Background: Abattoir-detected pathologies are of crucial importance to both pig production and food safety. Usually, more than one pathology coexists in a pig herd, although it often remains unknown how these different pathologies interrelate to each other. Identification of the associations between different pathologies may facilitate an improved understanding of their underlying biological linkage, and support veterinarians in encouraging control strategies aimed at reducing the prevalence of not just one, but two or more conditions simultaneously. Results: Multi-dimensional machine learning methodology was used to identify associations between ten typical pathologies in 6485 batches of slaughtered finishing pigs, assisting the comprehension of their biological association. Pathologies potentially associated with septicaemia (e.g. pericarditis, peritonitis) appear interrelated, suggesting on-going bacterial challenges by pathogens such as Haemophilus parasuis and Streptococcus suis. Furthermore, hepatic scarring appears interrelated with both milk spot livers (Ascaris suum) and bacteria-related pathologies, suggesting a potential multi-pathogen nature for this pathology. Conclusions: The application of novel multi-dimensional machine learning methodology provided new insights into how typical pig pathologies are potentially interrelated at batch level. The methodology presented is a powerful exploratory tool to generate hypotheses, applicable to a wide range of studies in veterinary research. PMID:22937883

2012-01-01

120

NASA Astrophysics Data System (ADS)

Real-time optical surface imaging systems offer a non-invasive way to monitor intra-fraction motion of a patient's thorax surface during radiotherapy treatments. Due to lack of point correspondence in dynamic surface acquisition, such systems cannot currently provide 3D motion tracking at specific surface landmarks, as available in optical technologies based on passive markers. We propose to apply deformable mesh registration to extract surface point trajectories from markerless optical imaging, thus yielding multi-dimensional breathing traces. The investigated approach is based on a non-rigid extension of the iterative closest point algorithm, using a locally affine regularization. The accuracy in tracking breathing motion was quantified in a group of healthy volunteers, by pair-wise registering the thoraco-abdominal surfaces acquired at three different respiratory phases using a clinically available optical system. The motion tracking accuracy proved to be maximal in the abdominal region, where breathing motion mostly occurs, with average errors of 1.09 mm. The results demonstrate the feasibility of recovering multi-dimensional breathing motion from markerless optical surface acquisitions by using the implemented deformable registration algorithm. The approach can potentially improve respiratory motion management in radiation therapy, including motion artefact reduction or tumour motion compensation by means of internal/external correlation models.

Schaerer, Joël; Fassi, Aurora; Riboldi, Marco; Cerveri, Pietro; Baroni, Guido; Sarrut, David

2012-01-01
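
The rigid core of the iterative closest point algorithm, which the paper extends with a locally affine regularization, can be sketched as follows. This shows a single global rigid step only; the non-rigid, per-vertex extension is beyond this sketch:

```python
import numpy as np

def icp_rigid_step(src, dst):
    """One iterative-closest-point step: match each source point to its
    nearest destination point, then solve for the optimal rigid rotation
    and translation via SVD (orthogonal Procrustes)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    matched = dst[d2.argmin(axis=1)]          # nearest-neighbour matching
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

# A small surface patch (3x3x3 grid) displaced by a known small translation
# is recovered in one step, since nearest-neighbour matching is exact here.
g = np.arange(3.0)
cloud = np.array([[i, j, k] for i in g for j in g for k in g])
shifted = cloud + np.array([0.05, -0.02, 0.03])
recovered = icp_rigid_step(cloud, shifted)
print(np.abs(recovered - shifted).max())  # ~0
```

In practice the matching and transform steps are iterated, and a regularizer keeps the per-vertex transforms of the non-rigid variant locally consistent.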

121

NASA Astrophysics Data System (ADS)

An interface capturing method with a continuous function is proposed within the framework of the volume-of-fluid (VOF) method. Unlike traditional VOF methods, which require a geometrical reconstruction and identify the interface by a discontinuous Heaviside function, the present method makes use of the hyperbolic tangent function (one of the sigmoid-type functions) of the tangent of hyperbola interface capturing (THINC) method [F. Xiao, Y. Honma, K. Kono, A simple algebraic interface capturing scheme using hyperbolic tangent function, Int. J. Numer. Methods Fluids 48 (2005) 1023-1040] to retrieve the interface in an algebraic way from the volume-fraction data of multi-component materials. Instead of the 1D reconstruction in the original THINC method, a multi-dimensional hyperbolic tangent function is employed in the present new approach. The present scheme resolves a moving interface with geometric faithfulness and compact thickness, and has at least the following advantages: (1) geometric reconstruction is not required in constructing piecewise approximate functions; (2) besides a piecewise linear interface, a curved (quadratic) surface can easily be constructed as well; and (3) the continuous multi-dimensional hyperbolic tangent function allows the direct calculation of derivatives and normal vectors. Numerical benchmark tests, including transport of a moving interface and incompressible interfacial flows, are presented to validate the numerical accuracy of interface capturing and to show the capability for practical problems such as a stationary circular droplet, a drop oscillation, a shear-induced drop deformation and a rising bubble.

Ii, Satoshi; Sugiyama, Kazuyasu; Takeuchi, Shintaro; Takagi, Shu; Matsumoto, Yoichiro; Xiao, Feng

2012-03-01
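
The core algebraic step of a THINC-type method, recovering the interface location inside a cell from its volume fraction using a hyperbolic tangent profile, can be sketched in 1-D. Here beta is a user-chosen steepness parameter, and the multi-dimensional version replaces x with a surface-normal coordinate:

```python
import numpy as np

def cell_fraction(x0, beta=3.5):
    """Cell average over [0, 1] of the tanh profile 0.5*(1 + tanh(beta*(x - x0))),
    computed from the closed-form antiderivative (1/(2*beta))*log(cosh(.))."""
    return 0.5 + (np.log(np.cosh(beta * (1 - x0))) -
                  np.log(np.cosh(beta * x0))) / (2 * beta)

def thinc_jump_location(C, beta=3.5):
    """Invert the cell average by bisection: find the jump centre x0 whose
    tanh profile reproduces the given volume fraction C (the 1-D THINC idea)."""
    lo, hi = -1.0, 2.0  # bracket; the fraction decreases as x0 increases
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cell_fraction(mid, beta) > C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0 = thinc_jump_location(0.5)
print(x0)  # a symmetric fraction puts the interface near the cell centre, 0.5
```

Because the profile is smooth, derivatives and interface normals follow directly from it, which is advantage (3) listed in the abstract.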

122

Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions

Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single-dimensional or low-dimensional histograms. They become problematic for high-dimensional and large-domain data due to increased perturbation error and computational complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall's τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241

Li, Haoran; Xiong, Li; Jiang, Xiaoqian

2014-01-01
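
The copula sampling step itself, without the differential-privacy noise that DPCopula adds to the parameter estimates, can be sketched as follows; `gaussian_copula_sample` is a hypothetical helper illustrating the pipeline:

```python
import numpy as np
from math import erf

def gaussian_copula_sample(marginals, corr, n, seed=0):
    """Draw synthetic multi-dimensional data from a Gaussian copula:
    correlated normals -> uniforms via the normal CDF -> each column mapped
    through the empirical quantile function of its marginal sample."""
    rng = np.random.default_rng(seed)
    d = len(marginals)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))  # N(0,1) CDF
    return np.column_stack(
        [np.quantile(m, u[:, j]) for j, m in enumerate(marginals)]
    )

# Two standard-normal marginals coupled with correlation 0.8: the synthetic
# data reproduces the dependence structure.
rng = np.random.default_rng(42)
marginals = [rng.normal(size=5000), rng.normal(size=5000)]
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
synth = gaussian_copula_sample(marginals, corr, n=5000)
print(np.corrcoef(synth.T)[0, 1])  # close to 0.8
```

Separating the dependence (the correlation matrix) from the marginals is what lets DPCopula privatize the two pieces independently.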

123

Multi-Dimensional Validation Impact Tests on PZT 95/5 and ALOX

NASA Astrophysics Data System (ADS)

Multi-dimensional impact tests were conducted on the ferroelectric ceramic PZT 95/5 and alumina-loaded epoxy (ALOX) encapsulants, with the purpose of providing benchmarks for material models in the ALEGRA wavecode. Diagnostics used included line-imaging VISAR (velocity interferometry), a key diagnostic for such tests. Results from four tests conducted with ALOX cylinders impacted by nonplanar copper projectiles were compared with ALEGRA simulations. The simulation produced approximately correct attenuations and divergence, but somewhat higher wave velocities. Several sets of tests conducted using PZT rods (length:diameter ratio = 5:1) encapsulated in ALOX, and diagnosed with line-imaging and point VISAR, were modeled as well. Significant improvement in wave arrival times and waveform agreement for the two-material multi-dimensional experiments was achieved by simultaneous multiple-parameter optimization on multiple one-dimensional experiments. Additionally, a variable-friction interface was studied in these calculations. We conclude that further parameter optimization is required for both material models.

Furnish, M. D.; Robbins, J.; Trott, W. M.; Chhabildas, L. C.; Lawrence, R. J.; Montgomery, S. T.

2002-07-01

124

On the Global Existence and Stability of a Multi-Dimensional Supersonic Conic Shock Wave

NASA Astrophysics Data System (ADS)

We establish the global existence and stability of a three-dimensional supersonic conic shock wave for a compactly perturbed steady supersonic flow past an infinitely long circular cone with a sharp angle. The flow is described by a 3-D steady potential equation, which is multi-dimensional, quasilinear, and hyperbolic with respect to the supersonic direction. Making use of the geometric properties of the pointed shock surface together with the Rankine-Hugoniot conditions on the conic shock surface and the boundary condition on the surface of the cone, we obtain a global uniform weighted energy estimate for the nonlinear problem by finding an appropriate multiplier and establishing a new Hardy-type inequality on the shock surface. Based on this, we prove that a multi-dimensional conic shock attached to the vertex of the cone exists globally when the Mach number of the incoming supersonic flow is sufficiently large. Moreover, the asymptotic behavior of the 3-D supersonic conic shock solution, which is shown to approach the corresponding background shock solution in the downstream domain for the uniform supersonic constant flow past the sharp cone, is also explicitly given.

Li, Jun; Ingo, Witt; Yin, Huicheng

2014-07-01

125

Multi-Dimensional Hydrodynamic Simulations with Non-Equilibrium Radiative Cooling Calculations

NASA Astrophysics Data System (ADS)

In optically thin gas within the temperature range of 10^4 to a few times 10^6 K, radiative cooling due to line emission from abundant metal ions such as carbon, nitrogen, oxygen, neon, silicon, and iron can affect the gas dynamics, so it becomes important to calculate the cooling rates accurately while running hydrodynamic simulations. An accurate calculation should trace the detailed processes of ionization and recombination for all the relevant ions of each metal at each hydrodynamic time step, i.e., in a non-equilibrium fashion. Until now, the computational cost has delayed the implementation of this non-equilibrium cooling calculation in multi-dimensional hydrodynamic simulations, but it has become feasible thanks to rapidly growing computing power. Using the platform of the FLASH code, we have implemented the non-equilibrium radiative cooling calculation in multi-dimensional hydrodynamic simulations. Here we present the code development process and the results of some test problems.

Kwak, Kyujin

2015-01-01

126

NASA Astrophysics Data System (ADS)

A rigorous theoretical investigation has been made of the Zakharov-Kuznetsov (ZK) equation for ion-acoustic (IA) solitary waves (SWs) and their multi-dimensional instability in a magnetized degenerate plasma consisting of inertialess electrons, inertial ions, and negatively and positively charged stationary heavy ions. The ZK equation is derived by the reductive perturbation method, and the multi-dimensional instability of these solitary structures is also studied by the small-k (long-wavelength plane wave) perturbation expansion technique. The effects of the external magnetic field are found to significantly modify the basic properties of small but finite-amplitude IA SWs. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable IA SWs. The basic features (viz., amplitude, width, instability, etc.) and the underlying physics of the IA SWs, which are relevant to space and laboratory plasma situations, are briefly discussed.

Haider, M. M.; Mamun, A. A.

2012-10-01

127

Scaling the Natural World Using Dimensional Analysis

NSDL National Science Digital Library

A unit that addresses the sheer volume of incomprehensible numbers (speed, distance, age) in the natural world, helping students to understand the scale of the world using the concepts of rates, proportions and dimensional analysis. Students learn to calculate problems such as: Measurements indicate that the continents of Europe and North America are separating (plate tectonics) at the rate of about 2 centimeters per year. If Columbus could repeat his famous voyage of 1492, about how many feet or yards farther must he travel?
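The Columbus problem in the unit reduces to a rate-times-time calculation followed by unit conversions; a minimal sketch, assuming a present-day evaluation year of 2024 (the unit does not fix one) and standard metric-to-imperial factors:

```python
# Seafloor-spreading arithmetic for the Columbus problem: plates
# separating at about 2 cm per year since the 1492 voyage.
# The evaluation year (2024) is an illustrative assumption.
CM_PER_YEAR = 2.0
YEARS = 2024 - 1492              # elapsed years since 1492
CM_PER_INCH = 2.54
INCHES_PER_FOOT = 12

extra_cm = CM_PER_YEAR * YEARS                        # total extra separation, cm
extra_feet = extra_cm / CM_PER_INCH / INCHES_PER_FOOT
print(round(extra_feet, 1))                           # → 34.9
```

So Columbus would need to sail only about 35 feet (roughly 12 yards) farther, the kind of surprising scale comparison the unit is built around.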

Kass, Stephen

128

MULTIPLE LEBESGUE INTEGRATION ON TIME SCALES MARTIN BOHNER AND GUSEIN SH. GUSEINOV

…we introduce the definitions of Lebesgue multi-dimensional delta (nabla and mixed types) measures. A description of the Carathéodory construction of a Lebesgue measure in an abstract setting is given. Then the Lebesgue multi-dimensional delta measure on time scales is introduced and the Lebesgue delta measure of any…

Bohner, Martin

129

Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.

Akhter, T.; Hossain, M. M.; Mamun, A. A. [Department of Physics, Jahangirnagar University, Savar, Dhaka-1342 (Bangladesh)

2012-09-15

130

Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

NASA Technical Reports Server (NTRS)

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

Liu, Jiwen; Tiwari, Surendra N.

1994-01-01

131

Background The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe–unsafe; young–old; innovative–conventional, etc.) rather than on a single evaluative dimension only (e.g., good–bad). Methodology/Principal Findings In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Just as the IAT, its multi-dimensional extension/application (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2], [3]. PMID:21246037

Gattol, Valentin; Sääksjärvi, Maria; Carbon, Claus-Christian

2011-01-01

132

Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525

Han, Xianlin; Yang, Kui; Gross, Richard W.

2011-01-01

133

A G-FDTD scheme for solving multi-dimensional open dissipative Gross-Pitaevskii equations

NASA Astrophysics Data System (ADS)

Behaviors of dark soliton propagation, collision, and vortex formation in the context of a non-equilibrium condensate are interesting to study. This can be achieved by solving open dissipative Gross-Pitaevskii equations (dGPEs) in multiple dimensions, which are a generalization of the standard Gross-Pitaevskii equation that includes effects of the condensate gain and loss. In this article, we present a generalized finite-difference time-domain (G-FDTD) scheme, which is explicit, stable, and permits an accurate solution with simple computation, for solving the multi-dimensional dGPE. The scheme is tested by solving a steady state problem in the non-equilibrium condensate. Moreover, it is shown that the stability condition for the scheme offers a more relaxed time step restriction than the popular pseudo-spectral method. The G-FDTD scheme is then employed to simulate the dark soliton propagation, collision, and the formation of vortex-antivortex pairs.
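The abstract does not give the G-FDTD update equations. As a rough illustration of the kind of explicit FDTD-style time stepping the scheme generalizes, here is a staggered real/imaginary finite-difference update for the plain 1-D Schrödinger equation (ħ = m = 1, V = 0, periodic boundaries); this is a sketch under those simplifying assumptions, not the paper's G-FDTD scheme, and the grid sizes, time step, and Gaussian initial state are illustrative choices:

```python
import numpy as np

# Explicit staggered update for psi = R + i*I with H = -0.5 d^2/dx^2:
#   dR/dt = +H I,  dI/dt = -H R  (I is advanced using the already-updated R).
nx, dx, dt = 400, 0.1, 0.002   # dt chosen well inside the ~dx**2 stability limit

x = (np.arange(nx) - nx // 2) * dx
R = (2.0 / np.pi) ** 0.25 * np.exp(-x**2)   # normalized Gaussian, real at t = 0
I = np.zeros_like(R)

def lap(f):
    # second difference with periodic boundaries
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

for _ in range(500):
    R += dt * (-0.5 * lap(I))
    I -= dt * (-0.5 * lap(R))

norm = np.sum(R**2 + I**2) * dx   # stays close to 1 if the scheme is stable
```

An explicit scheme like this is only conditionally stable, which is why the relaxed time-step restriction claimed for G-FDTD relative to pseudo-spectral methods is of practical interest.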

Moxley, Frederick Ira; Byrnes, Tim; Ma, Baoling; Yan, Yun; Dai, Weizhong

2015-02-01

134

NASA Astrophysics Data System (ADS)

RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of sequential 1H data acquisition procedure in the study of RNA is also demonstrated here.

Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai

2014-03-01

135

A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
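The routing logic described, advancing along one dimension until the packet reaches the root's coordinate for that dimension and then switching to the next dimension, can be sketched as follows; the function name, mesh abstraction, and coordinates are hypothetical illustrations, not taken from the patent, and the edge-of-subrectangle test and uplink bookkeeping are omitted:

```python
# Dimension-ordered routing sketch: walk each dimension in turn until the
# packet's coordinate matches the root's, then move on to the next dimension.
def route_to_root(start, root):
    """Yield successive node coordinates on a dimension-ordered path."""
    pos = list(start)
    for d in range(len(pos)):
        step = 1 if root[d] > pos[d] else -1
        while pos[d] != root[d]:
            pos[d] += step
            yield tuple(pos)

# In a 2-D mesh, a packet at (0, 0) bound for the root at (2, 1)
# first finishes dimension 0, then dimension 1.
path = list(route_to_root((0, 0), (2, 1)))
print(path)   # → [(1, 0), (2, 0), (2, 1)]
```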

Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip

2013-01-29

136

RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of sequential 1H data acquisition procedure in the study of RNA is also demonstrated here. PMID:24671105

Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai

2014-01-01

137

A dynamic nuclear polarization strategy for multi-dimensional Earth's field NMR spectroscopy.

Dynamic nuclear polarization (DNP) is introduced as a powerful tool for polarization enhancement in multi-dimensional Earth's field NMR spectroscopy. Maximum polarization enhancements, relative to thermal equilibrium in the Earth's magnetic field, are calculated theoretically and compared to the more traditional prepolarization approach for NMR sensitivity enhancement at ultra-low fields. Signal enhancement factors on the order of 3000 are demonstrated experimentally using DNP with a nitroxide free radical, TEMPO, which contains an unpaired electron which is strongly coupled to a neighboring (14)N nucleus via the hyperfine interaction. A high-quality 2D (19)F-(1)H COSY spectrum acquired in the Earth's magnetic field with DNP enhancement is presented and compared to simulation. PMID:18926746

Halse, Meghan E; Callaghan, Paul T

2008-12-01

138

A new analytical method to solve the heat equation for a multi-dimensional composite slab

NASA Astrophysics Data System (ADS)

A novel analytical approach has been developed for heat conduction in a multi-dimensional composite slab subject to time-dependent boundary changes of the first kind. Boundary temperatures are represented as Fourier series. Taking advantage of the periodic properties of the boundary changes, the analytical solution is obtained and expressed explicitly. Nearly all published works necessitate searching for associated eigenvalues in solving such a problem, even for a one-dimensional composite slab. In this paper, the proposed method involves no iterative computation, such as numerically searching for eigenvalues, and no residue evaluation. The adopted method is simple and represents an extension of the novel analytical approach derived for the one-dimensional composite slab. Moreover, the method of 'separation of variables' employed in this paper is new. The mathematical formula for the solutions is concise and straightforward, and the physical parameters appear explicitly in it. Further comparison with numerical calculations is presented.
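The paper's starting point, representing a time-periodic boundary temperature as a Fourier series, can be sketched numerically; the square-wave boundary signal, its period, and the truncation order below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fourier_coeffs(f, period, n_terms, n_samples=4096):
    """Estimate a0, a_k, b_k of f over one period by discrete averaging."""
    t = np.linspace(0.0, period, n_samples, endpoint=False)
    y = f(t)
    a0 = y.mean()
    k = np.arange(1, n_terms + 1)
    a = 2.0 * np.mean(y * np.cos(2 * np.pi * np.outer(k, t) / period), axis=1)
    b = 2.0 * np.mean(y * np.sin(2 * np.pi * np.outer(k, t) / period), axis=1)
    return a0, a, b

def eval_series(t, period, a0, a, b):
    """Evaluate the truncated Fourier series at times t (1-D array)."""
    k = np.arange(1, len(a) + 1)
    ang = 2 * np.pi * np.outer(k, t) / period
    return a0 + a @ np.cos(ang) + b @ np.sin(ang)

# Example: a boundary temperature alternating between 30 and 10 degrees
# over a 24-hour period, reconstructed from 15 harmonics.
period = 24.0
square = lambda t: np.where((t % period) < period / 2, 30.0, 10.0)
a0, a, b = fourier_coeffs(square, period, n_terms=15)
mid_plateau = eval_series(np.array([6.0]), period, a0, a, b)[0]  # close to 30
```

With the boundary condition in this harmonic form, each term can be propagated through the slab analytically, which is what lets the paper avoid eigenvalue searches entirely.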

Lu, X.; Tervola, P.; Viljanen, M.

2005-04-01

139

Racial-ethnic self-schemas: Multi-dimensional identity-based motivation

Prior self-schema research focuses on benefits of being schematic vs. aschematic in stereotyped domains. The current studies build on this work, examining racial-ethnic self-schemas as multi-dimensional, containing multiple, conflicting, and non-integrated images. A multidimensional perspective captures complexity; examining net effects of dimensions predicts within-group differences in academic engagement and well-being. When racial-ethnicity self-schemas focus attention on membership in both in-group and broader society, engagement with school should increase since school is not seen as out-group defining. When racial-ethnicity self-schemas focus attention on inclusion (not obstacles to inclusion) in broader society, risk of depressive symptoms should decrease. Support for these hypotheses was found in two separate samples (8th graders, n = 213, 9th graders followed to 12th grade n = 141). PMID:19122837

Oyserman, Daphna

2008-01-01

140

Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae

We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.

Tanaka, Masaomi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Maeda, Keiichi [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan); Max-Planck-Institut fuer Astrophysik, Garching (Germany); Mazzali, Paolo A. [Max-Planck-Institut fuer Astrophysik, Garching bei Muenchen (Germany); Istituto Nazionale di Astrofisica, OATs, Trieste (Italy); Nomoto, Ken'ichi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan)

2008-05-21

141

Multi-dimensional fiber-optic radiation sensor for ocular proton therapy dosimetry

NASA Astrophysics Data System (ADS)

In this study, we fabricated a multi-dimensional fiber-optic radiation sensor, which consists of organic scintillators, plastic optical fibers and a water phantom with a polymethyl methacrylate structure for the ocular proton therapy dosimetry. For the purpose of sensor characterization, we measured the spread out Bragg-peak of 120 MeV proton beam using a one-dimensional sensor array, which has 30 fiber-optic radiation sensors with a 1.5 mm interval. A uniform region of spread out Bragg-peak using the one-dimensional fiber-optic radiation sensor was obtained from 20 to 25 mm depth of a phantom. In addition, the Bragg-peak of 109 MeV proton beam was measured at the depth of 11.5 mm of a phantom using a two-dimensional sensor array, which has 10×3 sensor array with a 0.5 mm interval.

Jang, K. W.; Yoo, W. J.; Moon, J.; Han, K. T.; Park, B. G.; Shin, D.; Park, S.-Y.; Lee, B.

2012-12-01

142

Ionizing shocks in argon. Part II: Transient and multi-dimensional effects

We extend the computations of ionizing shocks in argon to the unsteady and multi-dimensional, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.

Kapper, M. G.; Cambier, J.-L. [Air Force Research Laboratory, Edwards AFB, CA 93524 (United States)

2011-06-01

143

Giant Leaps and Monkey Branes in Multi-Dimensional Flux Landscapes

There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'monkey brane' that wraps the internal manifold and exhibits behavior not visible in the effective theory. The implications for the Bousso-Polchinski landscape are discussed.

Brown, Adam R

2010-01-01

144

Multi-dimensional instability of multi-ion acoustic solitary waves in a degenerate magnetized plasma

NASA Astrophysics Data System (ADS)

The multi-dimensional instability of obliquely propagating multi-ion acoustic (MIA) solitary structures was studied theoretically by the small-k (long-wavelength plane wave) perturbation expansion technique in an ultra-relativistic degenerate magnetized plasma, which consists of inertialess electrons, inertial ions, and stationary arbitrarily charged heavy ions. The Zakharov-Kuznetsov equation is derived by the reductive perturbation method and its solitary wave solution is analyzed. The basic properties of small but finite-amplitude MIA solitary waves have been modified significantly by the combined effects of the degenerate electron number density, heavy ion number density, external magnetic field, and obliqueness. The underlying physics of the MIA solitary waves, which are relevant to space plasma situations, and the basic features, such as amplitude, width, and growth rate, are briefly discussed.

Akter, S.; Haider, M. M.; Duha, S. S.; Salahuddin, M.; Mamun, A. A.

2013-07-01

145

ParaScale: Exploiting Parametric Timing Analysis for Real-Time Schedulers and Dynamic Voltage Scaling. … dynamic power conservation by exploiting parametric loop bounds in ParaScale, our intra-task dynamic voltage scaling (DVS) approach. Our results demonstrate that the parametric approach to timing analysis…

Mueller, Frank

146

Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport

NASA Astrophysics Data System (ADS)

We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.

Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun

2015-02-01

147

The purpose of this paper is to present a Dynamic Time Warping (DTW) technique which significantly reduces the data processing time and memory size for multi-dimensional time series sampled by the biometric smart pen device BiSP. The acquisition device is a novel ballpoint pen equipped with a diversity of sensors for monitoring the kinematics and dynamics of handwriting movement. The DTW
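The snippet does not give the reduced-cost algorithm itself; as a baseline for what is being accelerated, here is the classic O(nm) DTW recurrence for multi-dimensional time series, a generic sketch rather than the BiSP-specific method:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two multi-dimensional
    time series (rows = time samples, columns = sensor channels)."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean per-sample cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A series aligned with a time-stretched copy of itself has distance 0.
print(dtw_distance([[0], [1], [2]], [[0], [1], [1], [2]]))   # → 0.0
```

Warping-window constraints (e.g. a Sakoe-Chiba band) and early-abandoning lower bounds are the usual routes to the time and memory reductions such papers target.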

Muzaffar Bashir; Jürgen Kempf

2008-01-01

148

The MoT-ICE: a new high-resolution wave-propagation algorithm for multi-dimensional

Classification code: 35L65, 35L67, 65M06, 65M25, 76N15. CONTENTS: 1. Introduction. 2. Decomposition of multi-dimensional… the difference of two fluxes is decomposed into one-dimensional waves. These one-dimensional schemes […] present difficulties of Godunov's scheme for strong two-dimensional shock waves arising in astrophysical…

Noelle, Sebastian

149

Applied Multi-Dimensional Fusion. Asher Mahmood, Philip M. Tudor, William Oxford, Robert… Keywords: multidimensional fusion; video fusion; pixel level fusion; super resolution; normalised… to show key research in: single and multi-modal data fusion, image enhancement, feature detection…

Nelson, James

150

Multi-dimensional harmonic balance applied to rotor dynamics. Mikhail Guskov, Jean-Jacques Sinou. …multi-frequency systems is a generalization of the harmonic balance in rotor systems [SL07, PM97]… a generalized version of harmonic balance coupled with arc-length continuation, developed in order to study non…

Boyer, Edmond

151

…(Si) - pi, i.e., it is quasi-linear. For the objective of social surplus, the single-dimensional… that the agents' private preferences are given by a single value for receiving an abstract service, i.e., that agents' types are single-dimensional. We now turn to multi-dimensional environments where the agents…

Fiat, Amos

152

Minimum Sample Size Requirements for Mokken Scale Analysis

ERIC Educational Resources Information Center

An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…

Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas

2014-01-01

153

The spatial auto-regression (SAR) model is a popular spatial data analysis technique which has been used in many applications with geo-spatial datasets. However, exact solutions for estimating SAR parameters are computationally expensive due to the need to compute all the eigenvalues of a very large matrix. Therefore, serial solutions for the SAR model do not scale up

Shashi Shekhar; Baris M. Kazar; David J. Lilja

154

Numerical magnetohydrodynamics in astrophysics algorithm and tests for multi-dimensional flow

We present for astrophysical use a multi-dimensional numerical code to solve the equations of ideal magnetohydrodynamics (MHD). It is based on an explicit finite difference method on an Eulerian grid, called the Total Variation Diminishing (TVD) scheme, which is a second-order-accurate extension of the Roe-type upwind scheme. Multiple spatial dimensions are treated through a Strang-type operator splitting. The constraint of a divergence-free field is enforced exactly by calculating a correction via a gauge transformation in each time step. Results from two-dimensional shock tube tests show that the code correctly captures discontinuities in all three MHD wave families as well as contact discontinuities. The numerical viscosities and resistivity in the code, which are useful for understanding simulations involving turbulent flows, are estimated through the decay of two-dimensional linear waves. Finally, the robustness of the code in two dimensions is demonstrated through calculations of the Kelvin-Helmholtz instability and the Orszag-Tang vortex.

Ryu, D; Frank, A I; Ryu, Dongsu; Frank, Adam

1995-01-01

155

Numerical Magnetohydrodynamics in Astrophysics: Algorithm and Tests for Multi-Dimensional Flow

We present for astrophysical use a multi-dimensional numerical code to solve the equations of ideal magnetohydrodynamics (MHD). It is based on an explicit finite difference method on an Eulerian grid, called the Total Variation Diminishing (TVD) scheme, which is a second-order-accurate extension of the Roe-type upwind scheme. Multiple spatial dimensions are treated through a Strang-type operator splitting. The constraint of a divergence-free field is enforced exactly by calculating a correction via a gauge transformation in each time step. Results from two-dimensional shock tube tests show that the code correctly captures discontinuities in all three MHD wave families as well as contact discontinuities. The numerical viscosities and resistivity in the code, which are useful for understanding simulations involving turbulent flows, are estimated through the decay of two-dimensional linear waves. Finally, the robustness of the code in two dimensions is demonstrated through calculations of the Kelvin-Helmholtz instability and the Orszag-Tang vortex.

Dongsu Ryu; T. W. Jones; Adam Frank

1995-05-17

156

NASA Astrophysics Data System (ADS)

Ultrahigh throughput capacity requirements are challenging current optical switching nodes, given the fast development of data center networks. Pbit/s-level all-optical switching networks will need to be deployed soon, which will increase the complexity of node architectures. How to control the future network and node equipment together thus becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.

Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng

2014-08-01

157

NASA Astrophysics Data System (ADS)

Remote sensing technologies and improved computer performance now allow numerical flow modeling over large stream domains. However, there has been limited testing of whether channel topography can be remotely mapped with the accuracy necessary for such modeling. We assessed the ability of the Experimental Advanced Airborne Research Lidar to support a multi-dimensional fluid dynamics model of a small mountain stream. Random point elevation errors were introduced into the lidar point cloud, and predictions of water surface elevation, velocity, bed shear stress, and bed mobility were compared to those made without the point errors. We also compared flow model predictions using the lidar bathymetry with those made using a total station channel field survey. Lidar errors caused < 1 cm changes in the modeled water surface elevations. Effects of the point errors on other flow characteristics varied with both the magnitude of error and the local spatial density of lidar data. Shear stress errors were greatest where flow was naturally shallow and fast, and lidar errors caused the greatest changes in flow cross-sectional area. The majority of the stress errors were less than ±5 Pa. At near-bankfull flow, the predicted mobility state of the median grain size changed over ≈1.3% of the model domain as a result of lidar elevation errors, and ≈3% changed mobility in the comparison of lidar and ground-surveyed topography. In this riverscape, results suggest that an airborne bathymetric lidar can map channel topography with sufficient accuracy to support a numerical flow model.

McKean, J.; Tonina, D.; Bohn, C.; Wright, C. W.

2014-03-01

158

Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

NASA Technical Reports Server (NTRS)

We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
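For reference, the density-gradient correction described above is conventionally written as follows (Ancona's formulation; the symbols are the standard ones, not taken from this abstract, and sign conventions for the quantum potential vary across the literature):

```latex
% Standard density-gradient (Ancona) form: the electron equation of state
% gains a gradient-dependent "quantum potential" \Lambda_n; sign conventions
% for \Lambda_n differ between references.
n = N_c \exp\!\left(\frac{q\,(\psi - \phi_n) + q\,\Lambda_n}{k_B T}\right),
\qquad
\Lambda_n = 2\, b_n\, \frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
\qquad
b_n = \frac{\hbar^{2}}{12\, q\, m_n^{*}}
```

The gradient term vanishes where the density varies slowly, recovering the classical drift-diffusion model, and suppresses the carrier density near confining barriers such as the Si/SiO2 interface.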

Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

2000-01-01

159

Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

NASA Technical Reports Server (NTRS)

We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

2000-01-01

160

This study aims to verify the effect of a multi-dimensional programme of dance teaching sessions on the motivation of physical education students. The programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher of the experimental group to support these needs. Initial and final measurements were taken in both groups, and the results revealed that the students in the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing students' motivation towards this content, which is so complicated for teachers in this area to develop. PMID:24454831

Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás

2014-01-01

161

In this paper, we address dynamic clustering in high-dimensional data or feature spaces as an optimization problem in which multi-dimensional particle swarm optimization (MD PSO) is used to find the true number of clusters, while fractional global best formation (FGBF) is applied to avoid local optima. Based on these techniques, we then present a novel and personalized long-term ECG classification system, which addresses the problem of labeling the beats within a long-term ECG signal, known as a Holter register, recorded from an individual patient. Due to the massive number of ECG beats in a Holter register, visual inspection is quite difficult and cumbersome, if not impossible. Therefore, the proposed system helps professionals to quickly and accurately diagnose any latent heart disease by examining only the representative beats (the so-called master key-beats), each of which represents a cluster of homogeneous (similar) beats. We tested the system on a benchmark database in which the beats of each Holter register have been manually labeled by cardiologists. The selection of the right master key-beats is the key factor in achieving a highly accurate classification, and the proposed systematic approach produced results consistent with the manual labels with 99.5% average accuracy, which demonstrates the efficiency of the system. PMID:21096010

Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef

2010-01-01

162

Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer’s disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis for better understanding disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications of X-ray microscopic methodologies. PMID:23079951

Vogt, Stefan; Ralle, Martina

2012-01-01

163

Detection of crossover time scales in multifractal detrended fluctuation analysis

NASA Astrophysics Data System (ADS)

Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
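As an illustration of the kind of procedure this abstract describes, the sketch below (not the authors' scaling-identification regression model) computes a monofractal DFA fluctuation function, estimates the scaling exponent, and locates a candidate crossover with a simple two-segment log-log fit chosen by minimum total squared error:

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Monofractal DFA fluctuation function F(s) (the q = 2 case of MF-DFA)."""
    y = np.cumsum(x - np.mean(x))                  # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)       # local polynomial detrending
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

def find_crossover(scales, F):
    """Fit two log-log segments; the breakpoint minimizing total SSE is a
    crude stand-in for a statistically identified crossover scale."""
    ls, lF = np.log(scales), np.log(F)
    best = None
    for k in range(3, len(ls) - 3):                # >= 3 points per segment
        sse = 0.0
        for seg in (slice(0, k), slice(k, len(ls))):
            c = np.polyfit(ls[seg], lF[seg], 1)
            sse += np.sum((lF[seg] - np.polyval(c, ls[seg])) ** 2)
        if best is None or sse < best[0]:
            best = (sse, scales[k])
    return best[1]

rng = np.random.default_rng(0)
white = rng.standard_normal(20000)                 # H ~ 0.5, no true crossover
scales = np.unique(np.logspace(1.2, 3.2, 20).astype(int))
F = dfa_fluctuation(white, scales)
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(slope, 2))                             # close to 0.5 for white noise
```

The paper's contribution is to replace the breakpoint-by-eye (or by raw SSE, as here) step with a regression model that gives confidence intervals and significance tests for the number and location of crossovers.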

Ge, Erjia; Leung, Yee

2013-04-01

164

Independent and complementary methods for large-scale structural analysis

Independent and complementary methods for large-scale structural analysis of mammalian chromatin. Richter, Daniel G. Peterson, Oliver J. Rando, William S. Noble, and Robert E. Kingston. We validated these assays using the known positions of nucleosomes on the mouse …

Yuan, Guo-Cheng "GC"

165

Multidimensional Scaling versus Components Analysis of Test Intercorrelations.

ERIC Educational Resources Information Center

Considers the relationship between coordinate estimates in components analysis and multidimensional scaling. Reports three small Monte Carlo studies comparing nonmetric scaling solutions to components analysis. Results are related to other methodological issues surrounding research on the general ability factor, response tendencies in…
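A minimal sketch of the kind of comparison at issue, assuming nothing about the study's actual Monte Carlo design: simulate correlated scores on several tests driven by one general factor, then derive two-dimensional coordinates both by components analysis and by multidimensional scaling of correlation-based distances. The correlation-to-distance map used here is one common choice, not necessarily the study's, and scikit-learn is used purely for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(42)
# Scores on 8 "tests" driven by one general factor plus noise,
# loosely mimicking a general-ability structure (illustrative only).
g = rng.standard_normal((500, 1))
loadings = np.linspace(0.4, 0.9, 8)
scores = g * loadings + 0.5 * rng.standard_normal((500, 8))

R = np.corrcoef(scores, rowvar=False)      # test intercorrelation matrix

# Components analysis: coordinates from the first two principal components
pca_coords = PCA(n_components=2).fit_transform(R)

# Metric MDS on distances derived from the correlations
D = np.sqrt(2.0 * (1.0 - R))               # a common correlation-to-distance map
mds_coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(D)

print(pca_coords.shape, mds_coords.shape)
```

Comparing the two coordinate sets (e.g., after Procrustes rotation) is the nonmetric-versus-components question the abstract raises.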

Davison, Mark L.

1985-01-01

166

Dynamical scaling analysis of plant callus growth

NASA Astrophysics Data System (ADS)

We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.
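The dynamical scaling analyzed above generalizes the classic Family-Vicsek ansatz for the width W of an interface of fixed lateral size L, which in conventional notation reads as follows (the abstract's extension to systems whose size evolves in time is not reproduced here):

```latex
% Family–Vicsek ansatz: \alpha, \beta, z are the roughness, growth, and
% dynamic exponents of the universality class.
W(L,t) \sim L^{\alpha}\, f\!\left(t / L^{z}\right),
\qquad
f(u) \sim
\begin{cases}
u^{\beta}, & u \ll 1,\\
\text{const}, & u \gg 1,
\end{cases}
\qquad
\beta = \alpha / z
```

Two growth processes share a universality class when their measured exponents coincide, which is the sense in which the two Brassica species are compared.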

Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.

2003-07-01

167

The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons, are investigated. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The study of this model is helpful for explaining the propagation and instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
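For reference, a Zakharov-Kuznetsov equation of the type derived here conventionally takes the form below (in one common convention, with ξ along the external magnetic field); A, B, and C stand in for the coefficient expressions, which in this problem depend on the superthermality parameter, gyrofrequency, density ratio, and temperature ratio but are not given in this summary:

```latex
% Generic Zakharov–Kuznetsov equation for the first-order potential \phi_1;
% A, B, C are placeholders for the problem-specific coefficients.
\frac{\partial \phi_1}{\partial \tau}
  + A\,\phi_1 \frac{\partial \phi_1}{\partial \xi}
  + B\,\frac{\partial^{3} \phi_1}{\partial \xi^{3}}
  + C\,\frac{\partial}{\partial \xi}
      \left(\frac{\partial^{2}}{\partial \eta^{2}}
          + \frac{\partial^{2}}{\partial \zeta^{2}}\right)\phi_1 = 0
```

The transverse dispersion term with coefficient C is what distinguishes the ZK equation from the one-dimensional KdV equation and is the source of the multi-dimensional instability studied in the paper.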

EL-Shamy, E. F., E-mail: emadel-shamy@hotmail.com [Department of Physics, Faculty of Science, Damietta University, New Damietta 34517, Egypt and Department of Physics, College of Science, King Khalid University, Abha P.O. 9004 (Saudi Arabia)

2014-08-15

168

NASA Astrophysics Data System (ADS)

The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons, are investigated. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The study of this model is helpful for explaining the propagation and instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.

EL-Shamy, E. F.

2014-08-01

169

We study the problem of efficient integration of variational equations in multi-dimensional Hamiltonian systems. For this purpose, we consider a Runge-Kutta-type integrator, a Taylor series expansion method and the so-called `Tangent Map' (TM) technique based on symplectic integration schemes, and apply them to the Fermi-Pasta-Ulam $\beta$ (FPU-$\beta$) lattice of $N$ nonlinearly coupled oscillators, with $N$ ranging from 4 to 20.
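A minimal sketch of one of the compared approaches, using a generic Runge-Kutta integrator rather than the paper's Tangent Map method: integrate the FPU-β equations of motion together with their variational equations for a small lattice (N = 4, fixed ends), renormalizing a deviation vector to estimate the finite-time maximal Lyapunov exponent. The value of β and the initial conditions are illustrative, not taken from the paper:

```python
import numpy as np

BETA = 1.5                  # FPU-beta nonlinearity strength (illustrative)
N = 4                       # oscillators, fixed ends q_0 = q_{N+1} = 0

def bond_ext(q):
    """Bond extensions d_j, including the fixed boundaries."""
    return np.diff(np.concatenate(([0.0], q, [0.0])))

def force(q):
    d = bond_ext(q)
    f = d + BETA * d ** 3                          # force carried by each bond
    return f[1:] - f[:-1]                          # net force on each oscillator

def jacobian(q):
    """d(force)/dq, driving the variational (tangent) dynamics."""
    gp = 1.0 + 3.0 * BETA * bond_ext(q) ** 2       # bond stiffness g'(d_j)
    J = np.zeros((N, N))
    for i in range(N):
        J[i, i] = -(gp[i + 1] + gp[i])
        if i + 1 < N:
            J[i, i + 1] = gp[i + 1]
        if i > 0:
            J[i, i - 1] = gp[i]
    return J

def deriv(s):
    q, p, dq, dp = np.split(s, 4)                  # orbit + deviation vector
    return np.concatenate([p, force(q), dp, jacobian(q) @ dq])

def rk4(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2); k4 = deriv(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(q, p):
    d = bond_ext(q)
    return 0.5 * np.sum(p ** 2) + np.sum(0.5 * d ** 2 + 0.25 * BETA * d ** 4)

rng = np.random.default_rng(1)
q = 0.1 * rng.standard_normal(N)
p = 0.1 * rng.standard_normal(N)
w = rng.standard_normal(2 * N)
s = np.concatenate([q, p, w / np.linalg.norm(w)])
E0 = energy(q, p)

dt, T = 0.01, 100.0
log_growth = 0.0
for step in range(int(T / dt)):
    s = rk4(s, dt)
    if (step + 1) % 100 == 0:                      # renormalize the deviation
        norm = np.linalg.norm(s[2 * N:])
        log_growth += np.log(norm)
        s[2 * N:] /= norm

print("finite-time mLCE estimate:", log_growth / T)
print("relative energy drift:", abs(energy(s[:N], s[N:2 * N]) - E0) / E0)
```

Monitoring the energy drift is the usual accuracy check for non-symplectic integrators like RK4; the paper's point is that symplectic tangent-map schemes keep this drift bounded over much longer times.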

Enrico Gerlach; Siegfried Eggl; Charalampos Skokos

2011-01-01

170

NASA Astrophysics Data System (ADS)

A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
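The surrogate-then-gradient workflow described above can be sketched as follows. The `cell_model` function, its peak location, and all parameter ranges are invented placeholders for the dissertation's battery model, not values from it; a Gaussian-process surrogate stands in for whatever surrogate family the work actually used:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in for an expensive cell simulation: energy density as
# a function of cathode thickness (um) and porosity, peaked at (80, 0.3).
def cell_model(x):
    t, eps = x
    return -((t - 80.0) / 40.0) ** 2 - ((eps - 0.3) / 0.15) ** 2

rng = np.random.default_rng(0)
X = rng.uniform([20.0, 0.1], [150.0, 0.6], size=(40, 2))   # design samples
y = np.array([cell_model(x) for x in X])

# Fit the cheap surrogate to the sampled "simulation" outputs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[30.0, 0.1]),
                              normalize_y=True).fit(X, y)

# Gradient-based search on the surrogate instead of the full model
res = minimize(lambda x: -gp.predict(x.reshape(1, -1))[0],
               x0=np.array([60.0, 0.4]),
               bounds=[(20.0, 150.0), (0.1, 0.6)])
print(res.x)        # should land near the true optimum (80, 0.3)
```

In practice the loop continues: the surrogate's optimum is verified against the full model, the new sample is added to the design, and the surrogate is refit, which also yields the uncertainty estimates the dissertation mentions.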

Du, Wenbo

171

Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

NASA Astrophysics Data System (ADS)

A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. The scalar test cases include advected shear, circular advection, non-linear advection with coalescing shock and expansion fans, and advection-diffusion. For all scalar cases the fluctuation splitting scheme is more accurate, and the primary mechanism for the improved fluctuation splitting performance is shown to be the reduced production of artificial dissipation relative to DMFDSFV. The most significant scalar result is for combined advection-diffusion, where the present fluctuation splitting scheme is able to resolve the physical dissipation from the artificial dissipation on a much coarser mesh than DMFDSFV is able to, allowing order-of-magnitude reductions in solution time. Among the inviscid test cases the converging supersonic streams problem is notable in that the fluctuation splitting scheme exhibits superconvergent third-order spatial accuracy. For the inviscid cases of a supersonic diamond airfoil, supersonic slender cone, and incompressible circular bump the fluctuation splitting drag coefficient errors are typically half the DMFDSFV drag errors. However, for the incompressible inviscid sphere the fluctuation splitting drag error is larger than for DMFDSFV. 
A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. 
Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

Wood, William Alfred, III

172

Multi-dimensional Features of Neutrino Transfer in Core-collapse Supernovae

NASA Astrophysics Data System (ADS)

We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ~20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (~2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.

Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S.

2015-01-01

173

Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

NASA Technical Reports Server (NTRS)

A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

2002-01-01

174

Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, the interneuron type and four features of axonal morphology. From this data set we learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. 
PMID:25505405

Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

2014-01-01

175

ALEGRA-HEDP Multi-Dimensional Simulations of Z-pinch Related Physics

NASA Astrophysics Data System (ADS)

The marriage of experimental diagnostics and computer simulations continues to enhance our understanding of the physics and dynamics associated with current-driven wire arrays. Early models that assumed the formation of an unstable, cylindrical shell of plasma due to wire merger have been replaced with a more complex picture involving wire material ablating non-uniformly along the wires, creating plasma pre-fill interior to the array before the bulk of the array collapses due to magnetic forces. Non-uniform wire ablation leads to wire breakup, which provides a mechanism for some wire material to be left behind as the bulk of the array stagnates onto the pre-fill. Once the bulk of the material has stagnated, electrical current can then shift back to the material left behind and cause it to stagnate onto the already collapsed bulk array mass. These complex effects impact the total radiation output from the wire array, which is very important to the application of that radiation to inertial confinement fusion. A detailed understanding of the formation and evolution of wire array perturbations is needed, especially for those which are three-dimensional in nature. Sandia National Laboratories has developed a multi-physics research code tailored to simulate high energy density physics (HEDP) environments. ALEGRA-HEDP has begun to simulate the evolution of wire arrays and has produced the highest fidelity, two-dimensional simulations of wire-array precursor ablation to date. Our three-dimensional code capability now provides us with the ability to solve for the magnetic field and current density distribution associated with the wire array and the evolution of three-dimensional effects seen experimentally. The insight obtained from these multi-dimensional simulations of wire arrays will be presented and specific simulations will be compared to experimental data.

Garasi, Christopher J.

2003-10-01

176

Determination of bromate in sea water using multi-dimensional matrix-elimination ion chromatography.

A multi-dimensional matrix-elimination ion chromatography approach has been applied to the determination of bromate in seawater samples. Two-dimensional and three-dimensional configurations were evaluated for their efficacy to eliminate the interference caused by the high concentration of ubiquitous ions present in seawater, such as chloride and sulfate. A two-dimensional approach utilising a high capacity second dimension separation comprising two Dionex AS24 columns connected in series was applied successfully and permitted the determination of bromate in undiluted seawater samples injected directly onto the ion chromatography system. Using this approach the limit of detection (LOD) for bromate based on a signal-to-noise ratio of 3 was 1050 µg/L using a 500 µL injection loop. Good linearity was obtained for bromate with correlation coefficients for the calibration curves of 0.9981 and 0.9996 based on peak height and area, respectively. A three-dimensional method utilising two 10-port switching valves to allow sharing of the second suppressor and detector between the second and third dimension separations showed better resolution and detection for bromate and reduced the LOD to 60 µg/L for spiked seawater samples. Good linearity was maintained with correlation coefficients of 0.9991 for both peak height and area. Ozonated seawater samples were also analysed and exhibited a non-linear increase in bromate level on increasing ozonation time. A bromate concentration in excess of 1770 µg/L was observed following ozonation of the seawater sample for 120 min. Recoveries for the three-dimensional system were 92% and 89% based on peak height and area, respectively, taken over 5 ozonated samples with 3 replicates per sample. PMID:22074647

Zakaria, Philip; Bloomfield, Carrie; Shellie, Robert A; Haddad, Paul R; Dicinoski, Greg W

2011-12-16

177

Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, the interneuron type and four features of axonal morphology. From this data set we learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. 
PMID:25505405
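The consensus step described above, predicting a probabilistic label for a new cell from the label distributions of its most similar training examples, can be sketched as a plain nearest-neighbour average. This is a minimal stand-in, not the authors' label-Bayesian-network method; the toy data, Euclidean distance, and choice of k are illustrative assumptions.

```python
import numpy as np

def consensus_label_distribution(X_train, P_train, x_query, k=3):
    """Predict a probabilistic label for x_query by averaging the label
    distributions of its k nearest training examples.
    X_train: (n, d) morphometric parameters; P_train: (n, c) label
    distributions (rows sum to 1). Returns a length-c distribution."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    p = P_train[nearest].mean(axis=0)
    return p / p.sum()

# toy example: two clusters of cells with opposite label distributions
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
P = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
pred = consensus_label_distribution(X, P, np.array([0.05, 0.0]), k=2)
```

A query near the first cluster inherits that cluster's label distribution, which is the qualitative behaviour the consensus idea relies on.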

Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

2014-01-01

178

Developmental Analysis of the Wechsler Memory Scale.

ERIC Educational Resources Information Center

Examined which memory dimensions deteriorate with increasing age and examined the construct validity of the Wechsler Memory Scale (WMS) (N=1,264 males and 1,141 females at six age intervals). Visual-spatial memory tasks, remembering stories, and learning pairs of associated words proved more difficult with advanced age. (JAC)

Zagar, Robert; And Others

1984-01-01

179

Multi-dimensional analysis of HDL: an approach to understanding atherogenic HDL

the early onset of coronary artery disease (CAD). The research presented here focuses on the pairing of DGU with post-separatory techniques including matrix-assisted laser desorption mass spectrometry (MALDI-MS), liquid chromatography mass spectrometry (LC...

Johnson, Jr., Jeffery Devoyne

2009-05-15

180

Cormier-Michel, [...]; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hans [...]. Department of Computer Science, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA.

181

Local variance for multi-scale analysis in geomorphometry

NASA Astrophysics Data System (ADS)

Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
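The four-step LV procedure can be sketched as follows. This is a minimal illustration on a synthetic surface, with block-averaging standing in for the up-scaling step and a 3 x 3 window as in the paper; the grid sizes and the random surface are illustrative assumptions.

```python
import numpy as np

def local_variance(grid, win=3):
    """LV statistic (step 2): mean of the standard deviations computed
    in a win x win moving window over the grid."""
    h, w = grid.shape
    r = win // 2
    sds = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            sds.append(grid[i - r:i + r + 1, j - r:j + r + 1].std())
    return float(np.mean(sds))

def roc_lv(lv_values):
    """Rate of change of LV between successive scale levels (step 3)."""
    lv = np.asarray(lv_values, dtype=float)
    return (lv[1:] - lv[:-1]) / lv[:-1]

# step 1 stand-in: up-scale a random surface by 2x2 / 4x4 block averaging
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
levels = [base,
          base.reshape(32, 2, 32, 2).mean(axis=(1, 3)),
          base.reshape(16, 4, 16, 4).mean(axis=(1, 3))]
lv_curve = [local_variance(g) for g in levels]  # step 4: plot vs. scale
```

Peaks in `roc_lv(lv_curve)` plotted against scale level are then read as markers of characteristic scales.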

Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

2011-07-01

184

Large Scale Numerical Analysis of Scaling Behaviour of the Anderson Transition

Tomi Ohtsuki. [...] large-scale numerical simulations have made this possible. Different Hamiltonians describing the disordered electron [...] and that the theory may be unsound. The above failures of the field-theoretical approach mean that numerical [...]

Katsumoto, Shingo

185

Discrete implementations of scale transform

NASA Astrophysics Data System (ADS)

Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of the uniformly sampled signals has been derived as well. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept was used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
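The log-sampling route to a discrete scale transform mentioned above can be sketched as follows. Substituting t = exp(u) turns the scale transform D(c) = (2*pi)**-0.5 * Int f(t) * t**(-1j*c - 0.5) dt into an ordinary Fourier transform of f(exp(u)) * exp(u/2). The grid limits and the test signal are illustrative assumptions, and this resampling-based variant is precisely the baseline the paper improves on.

```python
import numpy as np

def scale_transform(f, t_min=1e-6, t_max=30.0, n=8192):
    """Discrete scale transform via logarithmic sampling and the FFT:
    sample f at t = exp(u) on a uniform u-grid, weight by exp(u/2),
    and Fourier-transform in the log-time variable u."""
    u = np.linspace(np.log(t_min), np.log(t_max), n)
    du = u[1] - u[0]
    g = f(np.exp(u)) * np.exp(u / 2)
    D = np.fft.fft(g) * du / np.sqrt(2 * np.pi)
    c = 2 * np.pi * np.fft.fftfreq(n, d=du)  # scale variable grid
    return c, D

# key property: |D(c)| is invariant under f(t) -> sqrt(a) * f(a * t)
f = lambda t: np.exp(-t)
a = 2.0
fa = lambda t: np.sqrt(a) * f(a * t)
c, D1 = scale_transform(f)
_, D2 = scale_transform(fa)
```

The scale-invariance of the magnitude spectrum is what makes the transform useful for signals subject to scale changes.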

Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.

1999-11-01

186

Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

This paper addresses the problem of how to capture the complex relationships that exist between process variables and how to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
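A minimal Kohonen SOM of the kind referred to above can be sketched as follows. This toy version (grid size, decay schedules, and the synthetic two-regime "process" data) illustrates the KSOFM principle of mapping multi-dimensional data onto a 2-D grid; it is not the configuration used in the study.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen SOM: find the best-matching unit (BMU) for each
    sample and pull all units toward it with a Gaussian neighbourhood,
    while learning rate and radius decay over the epochs."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-dist2 / (2 * sigma ** 2))
            W += lr * nb[:, None] * (x - W)
    return W

# two well-separated operating regimes of a 3-variable "process"
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
W = train_som(data)
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in data])
```

After training, the two operating regimes activate different map units, which is the clustering behaviour the component-plane diagnosis builds on.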

Hong, Y.-S.T.; Rosen, M.R.; Bhamidimarri, R.

2003-01-01

187

MOKKEN: Stata module: Mokken scale analysis

mokken is a command for non-parametric scaling of dichotomous items. It produces results similar to alpha, item [...]. A polytomous version of mokken (due to Molenaar) is under construction, but it does not have high priority at this moment. For those with Stata v6 on an internet-accessible machine, install by typing .net cd http://www.fss.uu.nl/soc/iscore/stata/ then .net install mokken

Jeroen Weesie

1999-01-01

188

Genetic Analysis of Invasive Plant Populations at Different Spatial Scales

Measuring genetic diversity requires selection of a spatial scale of analysis. Different levels of genetic structuring are revealed at different spatial scales, however, and the relative importance of factors driving genetic structuring varies along the spatial scale continuum. Unequal gene flow is a major factor determining genetic structure in plant populations at the local level, while the effect of selection [...]

Sarah Ward

2006-01-01

189

Rasch Analysis of the Geriatric Depression Scale--Short Form

ERIC Educational Resources Information Center

Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…

Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.

2009-01-01

190

Functional Analysis of Large-scale DNA Strand Displacement Circuits

Boyan Yordanov, Christoph M[...]. Theories that enable us to prove the functional correctness of DNA circuit designs for arbitrary inputs [...] properties of large-scale DNA strand displacement (DSD) circuits based on Satisfiability Modulo [...]

Hamadi, Yousseff

191

Why Multicast Protocols (Don't) Scale: An Analysis

Why Multicast Protocols (Don't) Scale: An Analysis of Multipoint Algorithms for Scalable Group [...]

192

Scale Selection for the Analysis of Point-Sampled Curves

Scale Selection for the Analysis of Point-Sampled Curves: Extended Report. Ranjith Unnikrishnan, Jean-François Lalonde, Nicolas Vandapel, Martial Hebert. CMU-RI-TR-06-25, June 2006, Robotics Institute, Carnegie Mellon [...]

Gupta, Abhinav

193

Scaling analysis of flow in channel with viscous dissipation

NSDL National Science Digital Library

Scaling analysis is used to predict the functional dependence and order of magnitude of the maximum temperature difference between the fluid and the channel wall, for fully developed flow between parallel plates with viscous dissipation.

Krane, Matthew J.

2008-10-25

194

SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION

Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.

John Crepeau

2011-03-01

195

MHD Analysis of The Heliopause Scale

NASA Astrophysics Data System (ADS)

Voyager-1 (V1) was at ~122 AU from the Sun in August 2012, and V1 observations reported in Science papers in 2013 by Stone et al., Krimigis et al., and Burlaga et al. indicated that V1 is now located very near the heliopause. Here, we show by using MHD simulation that the simulated scale of the heliopause along the Sun-V1 line is ~125 AU. Our simulation also shows that the heliopause along the Sun-V2 line is ~105 AU, which suggests that V2 is also approaching the heliopause at the present time and may cross it in one or two years. For this simulation, we have carefully checked the latitudinal dependence of the solar wind ram pressure in interplanetary space by comparing Ulysses, OMNI and ACE data in the years 1995 to 2009, during which Ulysses provided data, and have confirmed that the solar-wind ram pressure has no evident latitudinal dependence during the time period of our simulation, although the first Ulysses paper (Philips et al., GRL, 1995) showed that the averaged ram pressure was 1.4-1.5 times greater at high latitudes than at low latitudes. The absence of latitudinal dependence of the ram pressure results in a heliopause scale consistent with V1 observations in our MHD simulation.

Washimi, H.; Zank, G. P.; Hu, Q.; Tanaka, T.; Munakata, K.

2013-12-01

196

Metal analysis of scales taken from Arctic grayling.

This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn, As, Se, Cd, Sb, Hg, and Pb) were measured in 10 to 16 ablation sites (5 microm radius) on each scale sample from Arctic grayling (Thymallus arcticus) (n = 10 fish). Ca, Mg, and Zn were at physiological levels in all scale samples. Se, Hg, and As were also detected in all scale samples. Only Cd was below detection limits of the LA-ICPMS for all samples, but some of the samples were below detection limits for Sb, Pb, and Ni. The mean scale concentrations for Se, Hg, and Pb were not significantly different from the muscle concentrations, and individual fish values were within fourfold of each other. Cd was not detected in either muscle or scale tissue, whereas Sb was detected at low levels in some scale samples but not in any of the muscle samples. Similarly, As was detected in all scale samples but not in muscle, and Ni was detected in almost all scale samples but only in one of the muscle samples. Therefore, there was good qualitative and quantitative agreement between the metal concentrations in scale and muscle tissues, with LA-ICPMS analysis of scales appearing to be a more sensitive method of detecting the body burden of Ni and As when compared with muscle tissue. Correlation analyses, performed for Pb, Hg, and Se concentrations, revealed that the scale concentrations for these three metals generally exceeded those of the muscle at low muscle concentrations. The LA-ICPMS analysis of scales had the capability to resolve significant spatial differences in metal concentrations within a fish scale.
We conclude that metal analysis of fish scales using LA-ICPMS shows considerable promise as a nonlethal analytical tool to assess metal body burden in fish that could possibly generate a historic record of metal exposure. However, comprehensive validation experiments are still needed. PMID:11031313

Farrell, A P; Hodaly, A H; Wang, S

2000-11-01

197

Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data

NASA Astrophysics Data System (ADS)

Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but there are areas of higher dimension that are not recovered correctly by applying 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as amongst various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length) and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree, especially if based on different methods and/or written by different programmers, the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without any topography and using a constant sensor height, were chosen to calculate synthetic data. For the comparison, one finite element 2-D code and numerous 3-D codes, which are based on finite difference, finite element and integral equation approaches, were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps to approach more realistic field data settings for forward modeling will be discussed.
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).

Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.

2013-12-01

198

Background: Health assessment measurements for patients with rheumatoid arthritis (RA) have to be meaningful, valid and relevant. A commonly used questionnaire for patients with RA is the Stanford Health Assessment Questionnaire Disability Index (HAQ), which has been available in Swedish since 1988. The HAQ has been revised and improved several times, and the latest version is the Multi Dimensional Health Assessment Questionnaire (MDHAQ). The aim of this study was to translate the MDHAQ to Swedish conditions and to test the validity and reliability of this version for persons with RA. Methods: Translation and adaptation of the MDHAQ were performed according to guidelines by Guillemin et al. The translated version was tested for face validity and test-retest in a group of 30 patients with RA. Content validity, criterion validity and internal consistency were tested in a larger study group of 83 patients with RA. Reliability was tested with test-retest and Cronbach's alpha for internal consistency. Two aspects of validity were explored: content and criterion validity. Content validity was tested with a content validity index. Criterion validity was tested with concurrent validity by exploring the correlation between the MDHAQ-S and the AIMS2-SF. Floor and ceiling effects were explored. Results: Test-retest with the intra-class correlation coefficient (ICC) gave a coefficient of 0.85 for physical function and 0.79 for psychological properties. The reliability test with Cronbach's alpha gave an alpha of 0.65 for the psychological dimension and an alpha of 0.88 for the physical dimension of the MDHAQ-S. The average content validity index across the items of the MDHAQ-S was 0.94. The MDHAQ-S had mainly a moderate correlation with the AIMS2-SF, except for the social dimension of the AIMS2-SF, which had a very low correlation with the MDHAQ-S. Conclusions: The MDHAQ-S was considered to be reliable and valid, but further research is needed concerning sensitivity to change.
PMID:23734791
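The internal-consistency statistic quoted above can be computed with the standard Cronbach's alpha formula. The sketch below uses synthetic item scores, not the study's data; the sample size and factor structure are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# toy data: five items driven by one common trait give a high alpha
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
scores = trait[:, None] + 0.3 * rng.normal(size=(200, 5))
alpha = cronbach_alpha(scores)
```

Items that share a common factor yield alpha near 1, while independent items yield alpha near 0, which is how values such as 0.88 versus 0.65 are read.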

2013-01-01

199

Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.

In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving toward the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multidimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged upon at the end of an MD PSO process corresponds to a unique ANN configuration, whose network parameters (connections, weights and biases) can then be resolved from the positional optimum reached on that dimension. In addition, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating which potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs so as to allow the use of conventional techniques such as back-propagation and some other evolutionary methods in this field. The proposed technique is applied to the most challenging synthetic problems to test its optimality on evolving networks, and to the benchmark problems to test its generalization capability as well as to make comparative evaluations with several competing techniques.
The experimental results show that the MD PSO evolves to optimum or near-optimum networks in general and has a superior generalization capability. Furthermore, MD PSO naturally favors a low-dimensional solution when it performs competitively with a higher-dimensional counterpart, and this native tendency steers the evolution process toward compact network configurations in the architecture space rather than complex ones, as long as optimality prevails. PMID:19556105
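A crude stand-in for the MD PSO idea, seeking both a positional and a dimensional optimum, is to run a plain PSO per candidate dimension and keep the best result; the real MD PSO instead interleaves these searches via inter-dimensional passes. The toy cost family below, with its known optimum at dimension 3, is an illustrative assumption, not an ANN error surface.

```python
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    """Plain global-best PSO in a fixed dimension."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_particles, dim))
    V = np.zeros_like(X)
    P, pbest = X.copy(), np.array([cost(x) for x in X])
    g = P[np.argmin(pbest)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
        X = X + V
        f = np.array([cost(x) for x in X])
        better = f < pbest
        P[better], pbest[better] = X[better], f[better]
        g = P[np.argmin(pbest)].copy()
    return g, pbest.min()

def md_pso(cost_family, dims, **kw):
    """Sequential stand-in for MD PSO: one PSO per candidate dimension,
    then keep the dimension with the lowest cost."""
    results = {d: pso(lambda x, d=d: cost_family(x, d), d, **kw) for d in dims}
    best_d = min(results, key=lambda d: results[d][1])
    return best_d, results[best_d]

# toy architecture space: the optimum is known to lie at dimension 3, x = 0
cost = lambda x, d: float((x ** 2).sum() + (d - 3) ** 2)
best_dim, (best_x, best_f) = md_pso(cost, dims=range(1, 6))
```

The ranking of the remaining dimensions in `results` mirrors the ranked list of configurations the abstract describes.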

Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef

2009-12-01

200

Aromatic amines are an important class of harmful components of cigarette smoke. Nevertheless, only a few of them have been reported to occur in urine, which raises questions on the fate of these compounds in the human body. Here we report on the results of a new analytical method, in situ derivatization solid phase microextraction (SPME) multi-dimensional gas chromatography mass spectrometry (GCxGC-qMS), that allows for a comprehensive fingerprint analysis of the substance class in complex matrices. Due to the high polarity of amino compounds, the complex urine matrix, and the prevalence of conjugated anilines, pretreatment steps such as acidic hydrolysis, liquid-liquid extraction (LLE), and derivatization of amines to their corresponding aromatic iodine compounds are necessary. Prior to detection, the derivatives were enriched by headspace SPME, with the extraction efficiency of the SPME fiber ranging between 65 % and 85 %. The measurements were carried out in full scan mode with conservatively estimated limits of detection (LOD) in the range of several ng/L and relative standard deviation (RSD) less than 20 %. More than 150 aromatic amines have been identified in the urine of a smoking person, including alkylated and halogenated amines as well as substituted naphthylamines. A number of aromatic amines have also been identified in the urine of a non-smoker, which suggests that the detection of biomarkers in urine samples using a more comprehensive analysis, as detailed in this report, may be essential to complement the use of classic biomarkers. PMID:25142049

Lamani, Xolelwa; Horst, Simeon; Zimmermann, Thomas; Schmidt, Torsten C

2015-01-01

201

Future CAD in multi-dimensional medical images--project on multi-organ, multi-disease CAD system.

A large research project on the subject of computer-aided diagnosis (CAD) entitled "Intelligent Assistance in Diagnosis of Multi-dimensional Medical Images" was initiated in Japan in 2003. The objective of this research project is to develop a multi-organ, multi-disease CAD system that incorporates anatomical knowledge of the human body and diagnostic knowledge of various types of diseases. The present paper provides an overview of the project and clarifies the trend of future CAD technologies in Japan. PMID:17382515

Kobatake, Hidefumi

2007-01-01

202

Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve a reasonable image quality. Therefore, a power optimization of the coil configuration is essential. In order to realize a multi-dimensional efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.

Kaethner, Christian, E-mail: kaethner@imt.uni-luebeck.de; Ahlborg, Mandy; Buzug, Thorsten M., E-mail: buzug@imt.uni-luebeck.de [Institute of Medical Engineering, Universität zu Lübeck, 23562 Lübeck (Germany); Knopp, Tobias [Thorlabs GmbH, 23562 Lübeck (Germany); Sattel, Timo F. [Philips Medical Systems DMC GmbH, 22335 Hamburg (Germany)

2014-01-28

203

NASA Astrophysics Data System (ADS)

The basic features and multi-dimensional instability of electrostatic (EA) solitary waves propagating in an ultra-relativistic degenerate dense magnetized plasma (containing inertia-less electrons, inertia-less positrons, and inertial ions) have been theoretically investigated by the reductive perturbation method and the small-k perturbation expansion technique. The Zakharov-Kuznetsov (ZK) equation has been derived, and its numerical solutions for some special cases have been analyzed to identify the basic features (viz. amplitude, width, instability, etc.) of these electrostatic solitary structures. The implications of our results for some compact astrophysical objects, particularly white dwarfs and neutron stars, are briefly discussed.
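For reference, the Zakharov-Kuznetsov equation mentioned above has the generic form below, written for a field-aligned coordinate ζ and transverse coordinates ξ, η. The coefficients A, B, C depend on the plasma parameters; this is the textbook form, not reproduced from the paper itself.

```latex
\frac{\partial \phi}{\partial \tau}
  + A\,\phi\,\frac{\partial \phi}{\partial \zeta}
  + B\,\frac{\partial^{3} \phi}{\partial \zeta^{3}}
  + C\,\frac{\partial}{\partial \zeta}\left(
      \frac{\partial^{2} \phi}{\partial \xi^{2}}
    + \frac{\partial^{2} \phi}{\partial \eta^{2}}\right) = 0
```

In magnetized plasmas the transverse dispersion coefficient C generally differs from the parallel one B, which is what makes the multi-dimensional (oblique) instability analysis possible.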

Masum Haider, M.; Akter, Suraya; Duha, Syed S.; Mamun, Abdullah A.

2012-10-01

204

Large-Scale Vehicle Sharing Systems: Analysis of Vélib'

A quantitative analysis of the pioneering large-scale bicycle sharing system, Vélib' in Paris, France is presented. This system involves a fleet of bicycles strategically located across the network. Users are free to check out a bicycle from close to their origin and drop it off close to their destination to complete their trip. The analysis provides key insights on the

Rahul Nair; Elise Miller-Hooks; Robert C. Hampshire; Ana Bušić

2012-01-01

205

LARGE-SCALE NORMAL COORDINATE ANALYSIS ON DISTRIBUTED

Chao Yang, Padma Raghavan. [...] substitution phase of the computation. Introduction: Normal Coordinate Analysis (NCA) (Wilson et al. 1955) [...] the dynamic role protein vibration plays in the photosynthetic center of green plants (Renger 1998). The long [...]

Raghavan, Padma

206

Parametric Timing Analysis and Its Application to Dynamic Voltage Scaling

Sibin Mohan and Frank [...]. [...] worst-case execution times (WCETs) to determine if tasks meet deadlines. Static timing analysis derives bounds on WCETs but requires statically known loop bounds. This work removes the constraint on known loop bounds through parametric [...]

Whalley, David

207

SWS 5182: Earth System Analysis. Catalogue description: analysis of global-scale interdependences [...] how climate, the carbon cycle, and nutrient dynamics interact in shaping global-scale temperature [...] ([...] and Stoermer, 2000). Chapter 1: System analysis of the global water cycle; radiative balance of the Earth [...]

Ma, Lena

208

Large scale analysis of signal reachability

Motivation: Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. Results: We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. Our method follows a divide-and-conquer strategy. We break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while the existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples.
Availability: All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. Contact: atodor@cise.ufl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932011
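The brute-force computation that the polynomial method above is designed to avoid can be made concrete on a toy network. The sketch below is illustrative only: the edge set and probabilities are invented, not from the paper. It enumerates all 2^|E| edge subsets of a four-edge network and sums the probabilities of the subsets in which the target is reachable; the paper's contribution is precisely that this exponential enumeration is replaced by collapsible polynomials.

```python
from itertools import product

# Toy directed network with uncertain edges: (u, v) -> probability edge exists
edges = {("s", "a"): 0.9, ("s", "b"): 0.5, ("a", "t"): 0.8, ("b", "t"): 0.7}

def reachable(present, src, dst):
    # Depth-first search over the edges present in one scenario
    frontier, seen = [src], {src}
    while frontier:
        u = frontier.pop()
        if u == dst:
            return True
        for (a, b) in present:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

def reach_probability(edges, src, dst):
    # Sum P(scenario) over all 2^|E| edge subsets where dst is reachable
    items = list(edges.items())
    total = 0.0
    for mask in product([0, 1], repeat=len(items)):
        p = 1.0
        present = []
        for bit, ((u, v), pe) in zip(mask, items):
            p *= pe if bit else (1 - pe)
            if bit:
                present.append((u, v))
        if reachable(present, src, dst):
            total += p
    return total

print(round(reach_probability(edges, "s", "t"), 4))  # 0.818
```

For this network the two source-to-target paths are edge-disjoint, so the result can be checked by hand: 0.9*0.8 + 0.5*0.7 - (0.9*0.8)*(0.5*0.7) = 0.818.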

Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer

2014-01-01

209

Scaling range of power laws that originate from fluctuation analysis

NASA Astrophysics Data System (ADS)

We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013)] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power law dependence of the scaling range on the length of the searched signal and on the accuracy R2 of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with a different level of autocorrelations measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where huge numbers of short time series have to be examined at once and where a preliminary check of the scaling range regime for each series separately is neither effective nor possible.
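As context for the scaling-range discussion, a minimal first-order DFA sketch (the standard textbook procedure, not the authors' code) computes the fluctuation function F(s) over non-overlapping windows and estimates the scaling exponent as the slope of log F(s) versus log s:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: return F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))           # profile (integrated series)
    F = []
    for s in scales:
        n = len(y) // s                     # number of non-overlapping windows
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)    # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)               # white noise test signal
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
# Scaling exponent = slope of log F(s) vs log s (about 0.5 for white noise)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

The scaling range question studied in the abstract is exactly which span of s this log-log fit may legitimately use.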

Grech, Dariusz; Mazur, Zygmunt

2013-05-01

210

Multi-Scale Unit-Cell Analysis of Textile Composites. Unit-cell models for textile composites to facilitate structural analysis and enhanced understanding of textile composites. Results on the fiber-diameter scale; results on the textile unit-cell scale. Research objectives.

Swan Jr., Colby Corson

211

Assessment of RELAP5-3D multi-dimensional component model using data from LOFT Test L2-5

The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the Loss-of-Fluid Test (LOFT) L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident (LOCA). Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D computer code. The calculations simulated the blowdown, refill, and reflood portions of the transient. The calculated thermal-hydraulic response of the primary coolant system was generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP5/MOD3.

Davis, C.B.

1998-07-01

212

Background: A common characteristic of environmental epidemiology is the multi-dimensional aspect of exposure patterns, frequently reduced to a cumulative exposure for simplicity of analysis. By adopting a flexible Bayesian clustering approach, we explore the risk function linking exposure history to disease. This approach is applied here to study the relationship between different smoking characteristics and lung cancer in the framework of a population-based case-control study. Methods: Our study includes 4658 males (1995 cases, 2663 controls) with full smoking history (intensity, duration, time since cessation, pack-years) from the ICARE multi-centre study conducted from 2001 to 2007. We extend Bayesian clustering techniques to explore predictive risk surfaces for covariate profiles of interest. Results: We were able to partition the population into 12 clusters with different smoking profiles and lung cancer risk. Our results confirm that, when compared to intensity, duration is the predominant driver of risk. On the other hand, using pack-years of cigarette smoking as a single summary leads to a considerable loss of information. Conclusions: Our method estimates the disease risk associated with a specific exposure profile by robustly accounting for the different dimensions of exposure, and will be helpful in general to give further insight into the effect of exposures that are accumulated through different time patterns. PMID:24152389

2013-01-01

213

NASA Astrophysics Data System (ADS)

A rigorous theoretical investigation has been made on multi-dimensional instability of obliquely propagating electrostatic dust-ion-acoustic (DIA) solitary structures in a magnetized dusty electronegative plasma which consists of Boltzmann electrons, nonthermal negative ions, cold mobile positive ions, and arbitrarily charged stationary dust. The Zakharov-Kuznetsov (ZK) equation is derived by the reductive perturbation method, and its solitary wave solution is analyzed for the study of the DIA solitary structures, which are found to exist in such a dusty plasma. The multi-dimensional instability of these solitary structures is also studied by the small-k (long-wavelength plane wave) perturbation expansion technique. The combined effects of the external magnetic field, obliqueness, and nonthermal distribution of negative ions, which are found to significantly modify the basic properties of small but finite-amplitude DIA solitary waves, are examined. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable DIA solitary waves. The basic features (viz. speed, amplitude, width, instability, etc.) and the underlying physics of the DIA solitary waves, which are relevant to many astrophysical situations (especially, auroral plasma, Saturn's E-ring and F-ring, Halley's comet, etc.) and laboratory dusty plasma situations, are briefly discussed.

Kundu, N. R.; Masud, M. M.; Ashrafi, K. S.; Mamun, A. A.

2013-01-01

214

Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR

The scaled facility design was based on the three-level scaling method. The first level is based on the well-established approach obtained from the integral response function, namely integral scaling; this level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level scales the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling, which corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time will run twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including the safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
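The quoted ratios can be checked for mutual consistency. Assuming the usual reduced-height integral scaling, in which time scales as the square root of the height ratio, the stated volume, and time factors follow directly:

```python
# Consistency check of the PUMA scaling ratios quoted above (model : prototype).
height_ratio = 1 / 4
area_ratio = 1 / 100
volume_ratio = height_ratio * area_ratio   # 1/400, matching the stated volume scale

# Reduced-height scaling: time ~ sqrt(height ratio), so the model transient
# runs at half the prototype time scale, i.e. twice as fast.
time_ratio = height_ratio ** 0.5           # 0.5
```

This is a sketch of the dimensional bookkeeping only; the actual three-level method also constrains power (1/200 here) through the integral response function.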

Ishii, M.; Ravankar, S.T.; Dowlati, R. [Purdue Univ., Lafayette, IN (United States). School of Nuclear Engineering] [and others]

1996-04-01

215

Geographical scale effects on the analysis of leptospirosis determinants.

Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide but its pattern of transmission varies depending where it happens. Climate change may increase the number of cases, especially in developing countries, like Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in the studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the amount of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536

Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães

2014-01-01

217

A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Due to the irregular geometric configurations, including porous electrodes, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.

Allu, Srikanth [ORNL] [ORNL; Velamur Asokan, Badri [Exxon Mobil Research and Engineering] [Exxon Mobil Research and Engineering; Shelton, William A [Louisiana State University] [Louisiana State University; Philip, Bobby [ORNL] [ORNL; Pannala, Sreekanth [ORNL] [ORNL

2014-01-01

218

NASA Astrophysics Data System (ADS)

A generalized three dimensional computational model based on unified formulation of electrode-electrolyte system of an electric double layer supercapacitor has been developed. This model accounts for charge transport across the electrode-electrolyte system. It is based on volume averaging, a widely used technique in multiphase flow modeling ([1,2]) and is analogous to porous media theory employed for electrochemical systems [3-5]. A single-domain approach is considered in the formulation where there is no need to model the interfacial boundary conditions explicitly as done in prior literature ([6]). Spatio-temporal variations, anisotropic physical properties, and upscaled parameters from lower length-scale simulations and experiments can be easily introduced in the formulation. Model complexities like irregular geometric configuration, porous electrodes, charge transport and related performance characteristics of the supercapacitor can be effectively captured in higher dimensions. This generalized model also provides insight into the applicability of 1D models ([6]) and where multidimensional effects need to be considered. A sensitivity analysis is presented to ascertain the dependence of the charge and discharge processes on key model parameters. Finally, application of the formulation to non-planar supercapacitors is presented.

Allu, S.; Velamur Asokan, B.; Shelton, W. A.; Philip, B.; Pannala, S.

2014-06-01

219

Shielding analysis methods available in the scale computational system

Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

1986-01-01

220

Full-scale system impact analysis: Digital document storage project

NASA Technical Reports Server (NTRS)

The Digital Document Storage (DDS) Full Scale System can provide cost-effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this impact analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.

1989-01-01

221

Scale analysis using X-ray microfluorescence and computed radiography

NASA Astrophysics Data System (ADS)

Scale deposits are the most common and most troublesome damage problems in the oil field and can occur in both production and injection wells. They occur because the minerals in produced water exceed their saturation limit as temperatures and pressures change. Scale can vary in appearance from hard crystalline material to soft, friable material, and the deposits can contain other minerals and impurities such as paraffin, salt and iron. In severe conditions, scale creates a significant restriction, or even a plug, in the production tubing. This study was conducted to qualify the elements present in scale samples and quantify the thickness of the scale layer using synchrotron radiation micro-X-ray fluorescence (SR-µXRF) and computed radiography (CR) techniques. The SR-µXRF results showed that the elements found in the scale samples were strontium, barium, calcium, chromium, sulfur and iron. The CR analysis showed that the thickness of the scale layer was identified and quantified with accuracy. These results can help in the decision making about removing the deposited scale.

Candeias, J. P.; de Oliveira, D. F.; dos Anjos, M. J.; Lopes, R. T.

2014-02-01

222

Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

NASA Technical Reports Server (NTRS)

This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

2014-01-01

223

NASA Astrophysics Data System (ADS)

We propose to describe the variety of galaxies from the Sloan Digital Sky Survey by using only one affine parameter. To this aim, we construct the principal curve (P-curve) passing through the spine of the data point cloud, considering the eigenspace derived from Principal Component Analysis (PCA) of morphological, physical, and photometric galaxy properties. Thus, galaxies can be labeled, ranked, and classified by a single arc-length value of the curve, measured at the unique closest projection of the data points on the P-curve. We find that the P-curve has a "W" letter shape with three turning points, defining four branches that represent distinct galaxy populations. This behavior is controlled mainly by two properties, namely u - r and star formation rate (from blue young at low arc length to red old at high arc length), while most other properties correlate well with these two. We further present the variations of several important galaxy properties as a function of arc length. Luminosity functions vary from steep Schechter fits at low arc length to double power law and ending in lognormal fits at high arc length. Galaxy clustering shows increasing autocorrelation power at large scales as arc length increases. Cross correlation of galaxies with different arc lengths shows that the probability of two galaxies belonging to the same halo decreases as their distance in arc length increases. PCA analysis allows us to find peculiar galaxy populations located apart from the main cloud of data points, such as small red galaxies dominated by a disk, of relatively high stellar mass-to-light ratio and surface mass density. On the other hand, the P-curve helped us understand the average trends, encoding 75% of the available information in the data. 
The P-curve allows not only dimensionality reduction but also provides supporting evidence for the following relevant physical models and scenarios in extragalactic astronomy: (1) The hierarchical merging scenario in the formation of a selected group of red massive galaxies. These galaxies present a lognormal r-band luminosity function, which might arise from multiplicative processes involved in this scenario. (2) A connection between the onset of active galactic nucleus activity and star formation quenching as mentioned in Martin et al., which appears in green galaxies transitioning from blue to red populations.
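A degenerate version of the arc-length ranking can be sketched with plain PCA: projecting objects onto the first principal axis is the straight-line limit of a principal curve, so each object gets a single coordinate along the dominant direction of variation. The feature matrix below is synthetic, not SDSS data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "galaxy property" matrix: 200 objects x 4 features with decreasing spread
X = rng.standard_normal((200, 4)) @ np.diag([3.0, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)                      # center before PCA

# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T                        # coordinates in the PCA eigenspace

# Degenerate principal curve = first principal axis; each object's
# "arc length" is its projection along that axis, giving a 1-D ranking
arc_length = scores[:, 0]
order = np.argsort(arc_length)
```

A true principal curve bends through the data cloud rather than staying straight, which is what lets it capture the "W"-shaped structure described above.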

Taghizadeh-Popp, M.; Heinis, S.; Szalay, A. S.

2012-08-01

224

Large-scale data analysis using the Wigner function

NASA Astrophysics Data System (ADS)

Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
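A minimal discrete (pseudo) Wigner-Ville sketch illustrates the joint time-frequency representation the abstract refers to. The kernel and lag convention here are standard textbook choices, not necessarily the authors'; with the integer-lag form used below, a pure tone appears at twice its frequency on the FFT bin axis:

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution W[f, t] of a signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for t in range(N):
        taumax = min(t, N - 1 - t)              # lags available at this time
        tau = np.arange(-taumax, taumax + 1)
        acf = np.zeros(N, dtype=complex)        # instantaneous autocorrelation
        acf[tau % N] = x[t + tau] * np.conj(x[t - tau])
        W[:, t] = np.fft.fft(acf).real          # FFT over the lag variable
    return W

# A pure tone at 0.125 cycles/sample concentrates near bin 2 * 0.125 * N = 16
N = 64
n = np.arange(N)
x = np.exp(2j * np.pi * 0.125 * n)
W = wigner_ville(x)
```

The 'frequency variable' information emphasized in the abstract is exactly what the second axis of W carries beyond a plain time-series view.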

Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

2012-04-01

225

Confirmatory Factor Analysis of the Recent Exposure to Violence Scale

ERIC Educational Resources Information Center

The purpose of the current study is to advance the psychometric properties of the child-administered 22-item Recent Exposure to Violence Scale (REVS) using confirmatory factor analysis (CFA) across three large and ethnically diverse samples of children ranging in age from middle childhood through adolescence. Results of the CFA suggest that a…

van Dulmen, Manfred H. M.; Belliston, Lara M.; Flannery, Daniel J.; Singer, Mark

2008-01-01

226

Exploratory Factor Analysis of African Self-Consciousness Scale Scores

ERIC Educational Resources Information Center

This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…

Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.

2012-01-01

227

Analyzing Test Content Using Cluster Analysis and Multidimensional Scaling.

ERIC Educational Resources Information Center

A new method for evaluating the content representation of a test is illustrated. Item similarity ratings were obtained from three content domain experts to assess whether ratings corresponded to item groupings specified in the test blueprint. Multidimensional scaling and cluster analysis provided substantial information about the test's content…

Sireci, Stephen G.; Geisinger, Kurt F.

1992-01-01

228

A Multidimensional Scaling Analysis of the Development of Animal Names

ERIC Educational Resources Information Center

Dissimilarity judgments of all possible pairs of 10 animal names were obtained from first-, third-, and sixth-grade children, and college students. Analysis with multidimensional scaling procedures revealed a semantic space consisting of the features of size, domesticity, and predativity. (Author/JMB)

Howard, Darlene V.; Howard, James H. Jr.

1977-01-01

229

Large and small scale sensitivity analysis of optimum estimation algorithms

This paper presents the derivation and evaluation of algorithms for error analysis, large and small scale sensitivity of optimum filtering and fixed interval smoothing solutions to linear estimation problems. Model errors as well as ignorance of plant and measurement noise covariance matrices are considered. Results are presented for a simple scalar problem and for the problem of state estimation in

R. Griffin; A. Sage

1968-01-01

230

The Multidimensional Fear of Death Scale: An independent analysis

The factor structure and subscale reliabilities of the Multidimensional Fear of Death Scale were examined using the responses of 256 New Zealanders, predominantly undergraduates. Comparison with the results of a US study by J. W. Hoelter showed that both the subscale reliabilities and the factor structure were almost perfectly reproduced in the present analysis. Hoelter's claim of 8 effectively independent

Frank H. Walkey

1982-01-01

231

Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth

Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth, Information and Computer Science. For Statistical Methods in Medical Research, September 2000. Abstract: Modern data mining has evolved largely as a result of efforts by computer scientists to address the needs of "data owners" in extracting useful

Smyth, Padhraic

232

DEVELOPMENT OF MOTIVATION SCALE - CLINICAL VALIDATION WITH ALCOHOL DEPENDENTS

This study focuses on the development of a comprehensive multi-dimensional scale for assessing motivation for change in the alcohol-dependent population. After establishing face validity, the items evolved were administered to a normal sample of 600 male subjects in whom psychiatric illness was ruled out. The data thus obtained were subjected to factor analysis. Six factors were obtained, which accounted for 55.2% of the variance. These together formed an 80-item five-point scale, and norms were established on a sample of 600 normal subjects. Further clinical validation was established on 30 alcohol-dependent subjects and 30 normals. The status of motivation was found to be inadequate in alcohol-dependent individuals as compared to the normals. Split-half reliability was carried out and the tool was found to be highly reliable. PMID:21743674

Neeliyara, Teresa; Nagalakshmi, S.V.

1994-01-01

233

High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

NASA Technical Reports Server (NTRS)

We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

2002-01-01

234

NASA Astrophysics Data System (ADS)

This paper provides an efficient analytical tool for solving the heat conduction equation in a multi-dimensional composite slab subject to generally time-dependent boundary conditions. A temporal Laplace transformation and novel separation of variables are applied to the heat equation. The time-dependent boundary conditions are approximated with Fourier series. Taking advantage of the periodic properties of Fourier series, the corresponding analytical solution is obtained and expressed explicitly through employing variable transformations. For such conduction problems, nearly all the published works necessitate numerical work such as computing residues or searching for eigenvalues even for a one-dimensional composite slab. In this paper, the proposed method involves no numerical iteration. The final closed form solution is straightforward; hence, the physical parameters are clearly shown in the formula. The accuracy of the developed analytical method is demonstrated by comparison with numerical calculations.
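The boundary-condition treatment described above amounts to the standard truncated Fourier approximation of a periodic (or periodized) boundary temperature. In generic notation, which is not necessarily the paper's:

```latex
% Time-dependent boundary temperature T_b(t) approximated over a period P:
T_b(t) \approx \frac{a_0}{2}
  + \sum_{n=1}^{N}\left[a_n\cos\frac{2\pi n t}{P} + b_n\sin\frac{2\pi n t}{P}\right],
\qquad
a_n = \frac{2}{P}\int_0^{P} T_b(t)\cos\frac{2\pi n t}{P}\,dt,
\quad
b_n = \frac{2}{P}\int_0^{P} T_b(t)\sin\frac{2\pi n t}{P}\,dt .
```

The periodicity of each harmonic is what allows the Laplace-transformed solution to be written in closed form without numerical eigenvalue searches.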

Lu, X.; Tervola, P.; Viljanen, M.

2005-09-01

235

The requirement that individual cells be able to communicate with one another over a range of length scales is a fundamental prerequisite for the evolution of multicellular organisms. Often diffusible chemical molecules ...

Amadi, Ovid Charles

2013-01-01

236

SINEX: SCALE shielding analysis GUI for X-Windows

SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI) that is being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module, PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides a graphical representation on screen as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.

Browman, S.M.; Barnett, D.L.

1997-12-01

237

Golden-section search method; Newton's method. Multi-dimensional unconstrained optimization: analytical and search techniques. Question 13.3 (5th edition): solve for the value of x that maximizes f(x) in Prob. 13.2 using the golden-section search. Employ initial guesses of xl = 0 and xu = 2.
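The exercise's method can be sketched directly. Since the f(x) of Prob. 13.2 is not given here, the objective below is an invented unimodal stand-in bracketed by the stated initial guesses xl = 0 and xu = 2:

```python
import math

def golden_max(f, xl, xu, tol=1e-6):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    R = (math.sqrt(5) - 1) / 2          # golden ratio conjugate, ~0.618
    x1 = xl + R * (xu - xl)             # right interior point
    x2 = xu - R * (xu - xl)             # left interior point
    f1, f2 = f(x1), f(x2)
    while (xu - xl) > tol:
        if f1 > f2:                     # maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1     # reuse old x1 as the new left point
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                           # maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2     # reuse old x2 as the new right point
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
    return 0.5 * (xl + xu)

# Hypothetical objective with its peak at x = 1.2 inside [0, 2]
xmax = golden_max(lambda x: -(x - 1.2) ** 2 + 3.0, 0.0, 2.0)
```

Each iteration shrinks the bracket by the factor R while reusing one interior function evaluation, which is the method's advantage over naive trisection.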

Wu, Xiaolin

238

In this paper, a statistical electromagnetic (EM) interference study is constructed mainly from a theoretical viewpoint on the basis of the N-dimensional random walk problem in multi-dimensional signal space. First, a characteristic function of the Hankel transform type matched to this EM environmental study is introduced, especially in an extended form of D. Middleton's basic result. Then, the probability density

M. Ohta; Y. Mitani; N. Nakasako

1998-01-01

239

New Criticality Safety Analysis Capabilities in SCALE 5.1

Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.

Bowman, Stephen M [ORNL; DeHart, Mark D [ORNL; Dunn, Michael E [ORNL; Goluoglu, Sedat [ORNL; Horwedel, James E [ORNL; Petrie Jr, Lester M [ORNL; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

2007-01-01

240

Scaling Laws in Canopy Flows: A Wind-Tunnel Analysis

NASA Astrophysics Data System (ADS)

An analysis of velocity statistics and spectra measured above a wind-tunnel forest model is reported. Several measurement stations downstream of the forest edge have been investigated and it is observed that, while the mean velocity profile adjusts quickly to the new canopy boundary condition, the turbulence lags behind and shows a continuous penetration towards the free stream along the canopy model. The statistical profiles illustrate this growth and do not collapse when plotted as a function of the vertical coordinate. However, when the statistics are plotted as a function of the local mean velocity (normalized with a characteristic velocity scale), they do collapse, independently of the streamwise position and freestream velocity. A new scaling for the spectra of all three velocity components is proposed based on the velocity variance and integral time scale. This normalization improves the collapse of the spectra compared to existing scalings adopted in atmospheric measurements, and allows the determination of a universal function that provides the velocity spectrum. Furthermore, a comparison of the proposed scaling laws for two different canopy densities is shown, demonstrating that the vertical velocity variance is the statistical quantity most sensitive to the characteristics of the canopy roughness.

Segalini, Antonio; Fransson, Jens H. M.; Alfredsson, P. Henrik

2013-08-01

241

Microbial community analysis of a full-scale DEMON bioreactor.

Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor due to alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium oxidizing bacteria, anammox bacteria and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of the microorganisms that play a key role in the performance of bioreactors. In this way, pyrosequencing has proven to be a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398

Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus

2014-09-23

242

Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders

ERIC Educational Resources Information Center

Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…

Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.

2013-01-01

243

Multi-dimensional SLA-based Resource Allocation for Multi-tier Cloud Computing Systems

[Fragmentary abstract: addresses SLA-based resource allocation in multi-tier cloud computing systems in which clients have Service Level Agreements (SLAs), treating total system profit, memory requirements, and communication resources as the three dimensions of the optimization.]

Pedram, Massoud

244

A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory

NASA Technical Reports Server (NTRS)

The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.

Prozan, R. J.

1982-01-01

245

On Multi-dimensional Steady Subsonic Flows Determined by Physical Boundary Conditions

NASA Astrophysics Data System (ADS)

In this thesis, we investigate an inflow-outflow problem for subsonic gas flows in a nozzle of finite length, aiming at finding intrinsic (physically acceptable) boundary conditions upstream and downstream. We first characterize a set of physical boundary conditions that ensure the existence and uniqueness of a subsonic irrotational flow in a rectangle. Our results show that if we prescribe the horizontal incoming flow angle at the inlet and an appropriate pressure at the exit, there exist two positive constants m0 and m1 with m0 < m1, such that a global subsonic irrotational flow exists uniquely in the nozzle, provided that the incoming mass flux m ∈ [m0, m1). The maximum speed approaches the sonic speed as the mass flux m tends to m1. The new difficulties arise from the nonlocal term involved in the mass flux and the pressure condition at the exit. We first introduce an auxiliary problem with the Bernoulli constant as a parameter to localize the nonlocal term and then establish a monotonic relation between the mass flux and the Bernoulli constant to recover the original problem. To deal with the loss of obliqueness induced by the pressure condition at the exit, we employ a formulation in terms of the angular velocity and the density. A Moser iteration is applied to obtain the L∞ estimate of the angular velocity, which guarantees that the flow possesses a positive horizontal velocity in the whole nozzle. As a continuation, we investigate the influence of the incoming flow angle and the geometry of the nozzle walls on subsonic flows in a finitely long curved nozzle. It turns out, interestingly, that the incoming flow angle and the angles of inclination of the nozzle walls play the same role as the end pressure, and that the curvatures of the nozzle walls play an important role. We also extend our results to subsonic Euler flows in the 2-D and 3-D asymmetric cases.
Then it comes to the most interesting and difficult case: the 3-D subsonic Euler flow in a bounded nozzle, which is also the essential part of this thesis. The boundary conditions imposed in the 2-D case have a natural extension in 3-D. These important clues help us develop a new formulation that yields insight into the coupling structure between the hyperbolic and elliptic modes in the Euler equations. The key idea in our new formulation is to use the Bernoulli law to reduce the dimension of the velocity field by defining the new variables (1, b2 = u2/u1, b3 = u3/u1) and replacing u1 by the Bernoulli function B through u1^2 = 2(B − h(ρ))/(1 + b2^2 + b3^2). In this way, we can explore the role of the Bernoulli law in greater depth and hope to simplify the Euler equations somewhat. We find a new conserved quantity for flows with a constant Bernoulli function, which behaves like the scaled vorticity in the 2-D case. More surprisingly, a system of new conservation laws can be derived, which had never been observed before, even in the two-dimensional case. We employ this formulation to construct a smooth subsonic Euler flow in a rectangular cylinder by assigning the incoming flow angles and the Bernoulli function at the inlet and the end pressure at the exit, which is also required to be adjacent to some special subsonic states. The same idea can be applied to obtain similar information for the incompressible Euler equations, the self-similar Euler equations, the steady Euler equations with damping, the steady Euler-Poisson equations and the steady Euler-Maxwell equations. Last, we are concerned with the structural stability of some steady subsonic solutions for the Euler-Poisson system.
A steady subsonic solution with subsonic background charge is proven to be structurally stable with respect to small perturbations of the background charge, the incoming flow angles and the end pressure, provided the background solution has a low Mach number and a small electric field. The new ingredient in our mathematical analysis is the solvability of a new second-order elliptic system supplemented with oblique derivative conditions.

Weng, Shangkun

246

Multi-scale analysis of bias correction of soil moisture

NASA Astrophysics Data System (ADS)

Remote sensing, in situ networks and models are now providing unprecedented information for environmental monitoring. To conjunctively use multi-source data nominally representing an identical variable, one must resolve the biases between these disparate sources, and the characteristics of those biases can be non-trivial owing to the spatio-temporal variability of the target variable and inter-sensor differences in measurement support. Soil moisture (SM) monitoring is one such example. Triple collocation (TC) based bias correction is a powerful statistical method that is increasingly being used to address this issue, but it is only applicable in the linear regime, whereas the non-linear method of statistical moment matching is susceptible to unintended biases originating from measurement error. Since different physical processes that influence SM dynamics may be distinguishable by their characteristic spatio-temporal scales, we propose a multi-timescale linear bias model in the framework of a wavelet-based multi-resolution analysis (MRA). The joint MRA-TC analysis was applied to demonstrate scale-dependent biases between in situ, remotely sensed and modelled SM, the influence of various prospective bias correction schemes on these biases, and lastly to enable multi-scale bias correction and data-adaptive, non-linear de-noising via wavelet thresholding.
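The idea of a multi-timescale bias model can be illustrated with a simplified stand-in: the paper uses a wavelet MRA, but the sketch below splits a series into timescale bands with moving averages instead, then linearly rescales each band of a target series against the matching band of a reference. All function names are ours, and this is not the authors' method, only a toy of the per-scale rescaling concept:

```python
import numpy as np

def band_split(x, windows=(4, 16, 64)):
    """Split a series into components at successively coarser timescales
    using moving averages (a crude stand-in for a wavelet MRA).
    The components sum back to the original series."""
    comps, residual = [], x.astype(float)
    for w in windows:
        kernel = np.ones(w) / w
        smooth = np.convolve(residual, kernel, mode="same")
        comps.append(residual - smooth)   # detail at this scale
        residual = smooth
    comps.append(residual)                # coarsest approximation
    return comps

def multiscale_bias_correct(target, reference, windows=(4, 16, 64)):
    """Rescale each timescale component of `target` to match the mean and
    standard deviation of the matching component of `reference`."""
    corrected = np.zeros_like(target, dtype=float)
    for t_c, r_c in zip(band_split(target, windows),
                        band_split(reference, windows)):
        std = np.std(t_c)
        scale = np.std(r_c) / std if std > 0 else 1.0
        corrected += (t_c - np.mean(t_c)) * scale + np.mean(r_c)
    return corrected

# Demo: a target series with multiplicative and additive bias
rng = np.random.default_rng(1)
t = np.arange(2000.0)
reference = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(2000)
target = 0.5 * reference + 2.0 + 0.1 * rng.standard_normal(2000)
corrected = multiscale_bias_correct(target, reference)
```

Because each band is matched separately, a bias confined to (say) the seasonal band can be removed without disturbing short-timescale variability, which is the motivation for the scale-dependent approach described above.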

Su, C.-H.; Ryu, D.

2015-01-01

247

Multi-Scale Fractal Analysis of Image Texture and Pattern

NASA Technical Reports Server (NTRS)

Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
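A standard way to estimate the fractal dimension of image features is box counting; the sketch below is a minimal numpy illustration (not the ICAMS implementation, which uses other estimators as well):

```python
import numpy as np

def box_counting_dimension(mask, sizes):
    """Estimate the fractal dimension of a binary image via box counting:
    count occupied s-by-s boxes for several s, then fit the log-log slope."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiled = mask[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one "on" pixel
        counts.append(np.count_nonzero(tiled.any(axis=(1, 3))))
    # Dimension = -slope of log(count) versus log(box size)
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# Sanity check: a filled square should give a dimension near 2
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
d = box_counting_dimension(img, [2, 4, 8, 16, 32])
```

For a genuinely monofractal pattern the fitted slope is scale-independent; for real imagery, as the abstract notes, the estimate typically varies with the range of box sizes used.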

Emerson, Charles W.

1998-01-01

248

ECG scaling properties of cardiac arrhythmias using detrended fluctuation analysis.

We applied detrended fluctuation analysis to the raw electrocardiogram (ECG) waveform at very short time scales during episodes of cardiac arrhythmias, aiming to get a global insight into its dynamical behaviour in patients who experienced sudden death. We found that in 15 recordings involving different types of arrhythmias (taken from PhysioNet's Sudden Cardiac Death Holter Database), the ECG waveform, besides showing less-random dynamics, becomes more regular during bigeminy, ventricular tachycardia or even atrial fibrillation and ventricular fibrillation. The scaling properties of the ECG waveform thus suggest that reduced complexity dominates the underlying mechanisms of arrhythmias. Among other explanations, this may result from short-circuited or restricted (i.e. less diverse) pathways of conduction of the electrical activity within the ventricles. PMID:18843162
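For reference, detrended fluctuation analysis itself is short to state in code. The following is a bare-bones sketch (ours, not the authors' implementation): integrate the mean-removed signal, detrend it in windows of size n, and read the scaling exponent from the slope of log F(n) against log n.

```python
import numpy as np

def dfa(signal, scales, order=1):
    """Detrended fluctuation analysis: return F(n) for each window size n."""
    # Integrated (cumulative-sum) profile of the mean-removed signal
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for n in scales:
        # Split the profile into non-overlapping windows of length n
        n_windows = len(profile) // n
        windows = profile[:n_windows * n].reshape(n_windows, n)
        x = np.arange(n)
        # Detrend each window with a least-squares polynomial fit
        ms = []
        for w in windows:
            trend = np.polyval(np.polyfit(x, w, order), x)
            ms.append(np.mean((w - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(ms)))
    return np.array(fluctuations)

# The scaling exponent alpha is the slope of log F(n) vs log n;
# for white noise it should be close to 0.5.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = np.array([8, 16, 32, 64, 128, 256])
flucts = dfa(white, scales)
alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

Smaller alpha over the short-scale range corresponds to the "more regular, less complex" dynamics the study reports during arrhythmic episodes.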

Rodriguez, E; Lerma, C; Echeverria, J C; Alvarez-Ramirez, J

2008-11-01

249

Reactor Physics Methods and Analysis Capabilities in SCALE

The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.

DeHart, Mark D [ORNL]; Bowman, Stephen M [ORNL]

2011-01-01

250

Two-Field Analysis of No-Scale Supergravity Inflation

Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing $r$ to very small values $\ll 0.1$. We also calculate the non-Gaussianity measure $f_{\rm NL}$, finding that it is well below the current experimental sensitivity.

John Ellis; Marcos A. G. Garcia; Dimitri V. Nanopoulos; Keith A. Olive

2014-09-29

251

SCALE 6: Comprehensive Nuclear Safety Analysis Code System

Version 6 of the Standardized Computer Analyses for Licensing Evaluation (SCALE) computer software system developed at Oak Ridge National Laboratory, released in February 2009, contains significant new capabilities and data for nuclear safety analysis and marks an important update for this software package, which is used worldwide. This paper highlights the capabilities of the SCALE system, including continuous-energy flux calculations for processing multigroup problem-dependent cross sections, ENDF/B-VII continuous-energy and multigroup nuclear cross-section data, continuous-energy Monte Carlo criticality safety calculations, Monte Carlo radiation shielding analyses with automated three-dimensional variance reduction techniques, one- and three-dimensional sensitivity and uncertainty analyses for criticality safety evaluations, two- and three-dimensional lattice physics depletion analyses, fast and accurate source terms and decay heat calculations, automated burnup credit analyses with loading curve search, and integrated three-dimensional criticality accident alarm system analyses using coupled Monte Carlo criticality and shielding calculations.

Bowman, Stephen M [ORNL]

2011-01-01

252

Two-field analysis of no-scale supergravity inflation

NASA Astrophysics Data System (ADS)

Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index ns and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure fNL, finding that it is well below the current experimental sensitivity.

Ellis, John; García, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

2015-01-01

253

Underground tank vitrification: Field scale experiments and computational analysis

In situ vitrification (ISV) is a thermal waste remediation process developed by researchers at Pacific Northwest Laboratory (PNL) for stabilization and treatment of soils contaminated with hazardous, radioactive or mixed wastes. Many underground tanks containing radioactive and hazardous chemical wastes at US Department of Energy (DOE) sites will soon require remediation. Recent development activities have been pursued to determine whether the ISV process is applicable to underground storage tanks. As envisioned, ISV will convert the tank, tank contents, and associated contaminated soil to a glass and crystalline block. Development activities include testing and demonstration on three scales, along with computational modeling and evaluation. A description of engineering solutions implemented at the field scale to mitigate unique problems posed by ISV of a confined underground structure, along with the associated computational analysis, is given in the paper.

Tixier, J.S.; Jeffs, J.T.; Thompson, L.E.

1992-06-01

254

Reactor Physics Methods and Analysis Capabilities in SCALE

The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.

Mark D. DeHart; Stephen M. Bowman

2011-05-01

255

Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model

NASA Technical Reports Server (NTRS)

We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.

Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)

1999-01-01

256

Scaling and dimensional analysis of acoustic streaming jets

NASA Astrophysics Data System (ADS)

This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.

Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.; Garandet, J.-P.

2014-09-01

257

An exploration into the multi-dimensionality of the 'sacred' in tourism

[Only table-of-contents fragments survive for this record: Ontology of the Selected Paradigm: Constructivism; Epistemological Standing; Methodological Orientation; Selected Method; Data Analysis and Interpretation; Vine Deloria's Categories of Sacred Sites; Tenets of Constructivism over Positivism in Terms of This Study on the Sacred in Tourism; Contrasts Between Postpositivist, Critical Theory, and Constructivist Paradigms.]

Weiser, Monica Leigh

2012-06-07

258

Nonlinearity analysis of model-scale jet noise

NASA Astrophysics Data System (ADS)

This paper describes the use of a spectrally-based "nonlinearity indicator" to complement ordinary spectral analysis of jet noise propagation data. The indicator, which involves the cross spectrum between the temporal acoustic pressure and the square of the acoustic pressure, stems directly from ensemble averaging the generalized Burgers equation. The indicator is applied to unheated model-scale jet noise from subsonic and supersonic nozzles. The results demonstrate how the indicator can be used to interpret the evolution of power spectra in the transition from the geometric near to far field. Geometric near-field and nonlinear effects can be distinguished from one another, thus lending additional physical insight into the propagation.
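The indicator described here, a cross-spectrum between the acoustic pressure and its square, can be estimated with segment-averaged FFTs. The sketch below is our own illustration of that estimator (windowing and normalization conventions are ours, not necessarily those of the paper):

```python
import numpy as np

def quadspectral_indicator(p, fs, nfft=1024):
    """Segment-averaged cross-spectrum between a pressure waveform p
    and its square p**2 (Welch-style averaging, Hann window)."""
    n_seg = len(p) // nfft
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    window = np.hanning(nfft)
    for k in range(n_seg):
        seg = p[k * nfft:(k + 1) * nfft]
        P = np.fft.rfft(seg * window)            # spectrum of p
        P2 = np.fft.rfft((seg ** 2) * window)    # spectrum of p^2
        acc += np.conj(P) * P2                   # cross-spectral accumulation
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, acc / n_seg

# Demo on a pure tone; for nonlinear propagation data the indicator
# tracks energy transfer between frequency bands.
fs = 10000.0
t = np.arange(8192) / fs
p = np.sin(2 * np.pi * 500 * t)
freqs, Q = quadspectral_indicator(p, fs, nfft=1024)
```

In the paper's usage, trends of this quantity along the propagation path help separate nonlinear steepening from geometric near-field effects in the evolution of the power spectra.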

Gee, Kent L.; Atchley, Anthony A.; Falco, Lauren E.; Shepherd, Micah R.

2012-09-01

259

Automated Sholl Analysis of Digitized Neuronal Morphology at Multiple Scales

Neuronal morphology plays a significant role in determining how neurons function and communicate [1-3]. Specifically, it affects the ability of neurons to receive inputs from other cells [2] and contributes to the propagation of action potentials [4,5]. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network [6,7]. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease [8], schizophrenia [9,10], and mental retardation [11]. The ability both to understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential to the understanding of nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is both time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed on both the neuron as a whole and on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally.
Thus, the resulting arbor morphology is a result of specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations [12]. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1. PMID:21113115
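The core computation in a Sholl analysis, counting neurite crossings of concentric circles centred on the soma, reduces to a sign test on segment endpoint distances. The following is a minimal sketch of that step (ours, not the Bonfire code):

```python
import numpy as np

def sholl_intersections(segments, soma, radii):
    """Count neurite crossings of concentric circles centred on the soma.

    segments: array of shape (N, 2, 2) of traced straight segments, each a
    pair of (x, y) endpoints. A circle of radius r is crossed whenever a
    segment's endpoints lie on opposite sides of it.
    """
    d0 = np.linalg.norm(segments[:, 0] - soma, axis=1)  # distances of first endpoints
    d1 = np.linalg.norm(segments[:, 1] - soma, axis=1)  # distances of second endpoints
    counts = []
    for r in radii:
        counts.append(int(np.sum((d0 - r) * (d1 - r) < 0)))
    return counts

# Demo: a single straight neurite traced from the soma out to radius 50,
# sampled as 50 unit-length segments; each circle is crossed exactly once.
pts = np.linspace([0.0, 0.0], [50.0, 0.0], 51)
segs = np.stack([pts[:-1], pts[1:]], axis=1)
counts = sholl_intersections(segs, np.array([0.0, 0.0]), [10.5, 20.5, 30.5, 40.5])
```

Running the same counting routine on a subset of segments (primary branches only, terminal branches only, and so on) gives the per-subregion Sholl profiles described above.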

Luo, Vincent; Lakdawala, Hersh; Firestein, Bonnie L.

2010-01-01

260

The Effects of Multi-Dimensional Competition on Education Market Outcomes

[Only front-matter fragments survive for this record: an abbreviation list (BLS: U.S. Bureau of Labor Statistics; CBSA: Core Based Statistical Area; CCD: Common Core of Data, National Center for Education Statistics; CWI: comparable wage index; DEA: data envelopment analysis; f.o.b.: free-on-board; GMM) and table-of-contents entries covering monopsony in education labor markets, models of education personnel wage determination, and wages in an oligopsony model.]

Karakaplan, Mustafa

2012-10-19

261

Knowledge discovery in urban environments from fused multi-dimensional imagery

With all the exciting advances in sensor fusion and data interpretation technologies in recent years, including co-registration, 3-D surface reconstruction, object recognition, spatial reasoning, and more, high-quality, detailed, and precise segmentation of remote sensing spectral images remains a much-needed component in the comprehensive analysis and understanding of surfaces. Urban surfaces are no exception. In fact, urban surfaces can

Erzsebet Merenyi; Beata Csatho; Kadim Tasdemir

2007-01-01

262

Multi-dimensional radiative transfer to analyze the Hanle effect in the Ca II K line at 3933 Å

Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on the line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magneto-hydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 Å using multi-D polarized RT with the Hanle effect and partial frequency redistribution in line scattering. We use an atmosphere constructed from a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the...

Anusha, L S

2013-01-01

263

Many difficult problems arise in the numerical simulation of fluid flow processes within porous media in petroleum reservoir simulation and in subsurface contaminant transport and remediation. The authors develop a family of Eulerian-Lagrangian localized adjoint methods for the solution of the initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes, which are fully mass conservative, naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary conditions. Moreover, they have regularly structured, well-conditioned, symmetric, and positive-definite coefficient matrices, which can be efficiently solved by the conjugate gradient method in an optimal order number of iterations without any preconditioning needed. Numerical results are presented to compare the performance of the ELLAM schemes with many well studied and widely used methods, including the upwind finite difference method, the Galerkin and the Petrov-Galerkin finite element methods with backward-Euler or Crank-Nicolson temporal discretization, the streamline diffusion finite element methods, the monotonic upstream-centered scheme for conservation laws (MUSCL), and the Minmod scheme.

Wang, H.; Man, S. [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mathematics]; Ewing, R.E. [Texas A and M Univ., College Station, TX (United States). Inst. for Scientific Computation]; Qin, G.; Lyons, S.L. [Mobil Technology Co., Dallas, TX (United States). Upstream Strategic Research Center]; Al-Lawatia, M. [Sultan Qaboos Univ., Muscat (Oman). Dept. of Mathematics and Statistics]

1999-06-10
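The symmetric positive-definite systems highlighted in the abstract above are exactly the setting where unpreconditioned conjugate gradient iteration works well. A minimal sketch on a synthetic SPD matrix (illustrative only, not an actual ELLAM discretization):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate direction update
        rs_old = rs_new
    return x

# Synthetic, well-conditioned SPD test matrix (not an ELLAM matrix).
n = 50
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
x = conjugate_gradient(A, b)
```

For a well-conditioned system like this, the iteration converges to near machine precision in far fewer than `n` iterations, which is the behavior the abstract attributes to the ELLAM matrices.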

264

In the previous paper of this series, we presented a formulation of the polarized radiative transfer equation for resonance scattering with partial frequency redistribution (PRD) in multi-dimensional media for a two-level atom model with unpolarized ground level, using the irreducible spherical tensors T^K_Q(i, Ω) for polarimetry. We also presented a polarized approximate lambda iteration method to solve this equation using the Jacobi iteration scheme. The formal solution used was based on a simple finite volume technique. In this paper, we develop a faster and more efficient method which uses projection techniques applied to the radiative transfer equation (the Stabilized Preconditioned Bi-Conjugate Gradient method). We now use a more accurate formal solver, namely the well-known two-dimensional (2D) short characteristics method. Using the numerical method developed in Paper I, we could consider only simpler cases of finite 2D slabs due to computational limitations. Using the method developed in this paper, we can compute PRD solutions in 2D media in the more difficult context of semi-infinite 2D slabs as well. We present several solutions which may serve as benchmarks in future studies in this area.

Anusha, L. S.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bengaluru 560 034 (India); Paletou, F. [Laboratoire d'Astrophysique de Toulouse-Tarbes, Universite de Toulouse, CNRS, 14 Avenue E. Belin, F-31400 Toulouse (France)

2011-01-10

265

NASA Astrophysics Data System (ADS)

We develop a family of Eulerian-Lagrangian localized adjoint methods for the solution of the initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes, which are fully mass conservative, naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary conditions. Moreover, they have regularly structured, well-conditioned, symmetric, and positive-definite coefficient matrices, which can be efficiently solved by the conjugate gradient method in an optimal order number of iterations without any preconditioning needed. Numerical results are presented to compare the performance of the ELLAM schemes with many well studied and widely used methods, including the upwind finite difference method, the Galerkin and the Petrov-Galerkin finite element methods with backward-Euler or Crank-Nicolson temporal discretization, the streamline diffusion finite element methods, the monotonic upstream-centered scheme for conservation laws (MUSCL), and the Minmod scheme.

Wang, Hong; Ewing, Richard E.; Qin, Guan; Lyons, Stephen L.; Al-Lawatia, Mohamed; Man, Shushuang

1999-06-01

266

OBJECTIVE. The goal of this study was to develop unbiased risk-assessment models to be used for paying health plans on the basis of enrollee health status and use propensity. We explored the risk structure of adult employed HMO members using self-reported morbidities, functional status, perceived health status, and demographic characteristics. DATA SOURCES/STUDY SETTING. Data were collected on a random sample of members of a large, federally qualified, prepaid group practice, hospital-based HMO located in the Pacific Northwest. STUDY DESIGN. Multivariate linear nonparametric techniques were used to estimate risk weights on demographic, morbidity, and health status factors at the individual level. The dependent variable was annual real total health plan expense for covered services for the year following the survey. Repeated random split-sample validation techniques minimized outlier influences and avoided inappropriate distributional assumptions required by parametric techniques. DATA COLLECTION/EXTRACTION METHODS. A mail questionnaire containing an abbreviated medical history and the RAND-36 Health Survey was administered to a 5 percent sample of adult subscribers and their spouses in 1990 and 1991, with an overall 44 percent response rate. Utilization data were extracted from HMO automated information systems. Annual expenses were computed by weighting all utilization elements by standard unit costs for the HMO. PRINCIPAL FINDINGS. Prevalence of such major chronic diseases as heart disease, diabetes, depression, and asthma improves prediction of future medical expense; functional health status and morbidities are each better than simple demographic factors alone; functional and perceived health status as well as demographic characteristics and diagnoses together yield the best prediction performance and reduce opportunities for selection bias.
We also found evidence of important interaction effects between functional/perceived health status scales and disease classes. CONCLUSIONS. Self-reported morbidities and functional health status are useful risk measures for adults. Risk-assessment research should focus on combining clinical information with social survey techniques to capitalize on the strengths of both approaches. Disease-specific functional health status scales should be developed and tested to capture the most information for prediction. PMID:8698586

Hornbrook, M C; Goodman, M J

1996-01-01
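The repeated random split-sample estimation described above can be illustrated in toy form: weights are fit on random half-samples and averaged across splits. Ordinary least squares here is a simplified stand-in for the study's nonparametric technique, and the data, coefficients, and sample sizes are entirely synthetic:

```python
import numpy as np

def split_sample_weights(X, y, n_splits=50, frac=0.5, seed=0):
    """Average least-squares risk weights over repeated random split samples
    (simplified stand-in for the paper's nonparametric procedure)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    weights = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        train = idx[: int(frac * n)]          # random half-sample
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        weights.append(w)
    return np.mean(weights, axis=0)           # averaged risk weights

# Synthetic "risk" data: annual expense driven by an intercept plus two
# binary morbidity indicators, with noise (all values hypothetical).
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(400), rng.integers(0, 2, size=(400, 2))])
y = X @ np.array([100.0, 250.0, 400.0]) + rng.normal(0, 50, 400)
w_avg = split_sample_weights(X, y)
```

Averaging over many random splits damps the influence of outlying observations in any single split, which is the motivation the abstract gives for the repeated split-sample design.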

267

Crater ejecta scaling laws: fundamental forms based on dimensional analysis

A model of crater ejecta is constructed using dimensional analysis and a recently developed theory of energy and momentum coupling in cratering events. General relations are derived that provide a rationale for scaling laboratory measurements of ejecta to larger events. Specific expressions are presented for ejection velocities and ejecta blanket profiles in two limiting regimes of crater formation: the so-called gravity and strength regimes. In the gravity regime, ejecta velocities at geometrically similar launch points within craters vary as the square root of the product of crater radius and gravity. This relation implies geometric similarity of ejecta blankets. That is, the thickness of an ejecta blanket as a function of distance from the crater center is the same for all sizes of craters if the thickness and range are expressed in terms of crater radii. In the strength regime, ejecta velocities are independent of crater size. Consequently, ejecta blankets are not geometrically similar in this regime. For points away from the crater rim the expressions for ejecta velocities and thickness take the form of power laws. The exponents in these power laws are functions of an exponent, α, that appears in crater radius scaling relations. Thus experimental studies of the dependence of crater radius on impact conditions determine scaling relations for ejecta. Predicted ejection velocities and ejecta-blanket profiles, based on measured values of α, are compared to existing measurements of velocities and debris profiles.

Housen, K.R.; Schmidt, R.M.; Holsapple, K.A.

1983-03-10
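The gravity-regime relation above (ejection speed varying as the square root of crater radius times gravity) translates directly into code; the prefactor `C` below is a placeholder for an experimentally determined dimensionless constant, not a value from the paper:

```python
import numpy as np

def ejecta_velocity_gravity(R, g, C=1.0):
    """Gravity-regime scaling: ejection speed at geometrically similar
    launch points ~ C * sqrt(g * R). C is a hypothetical constant."""
    return C * np.sqrt(g * R)

# Doubling the crater radius at fixed gravity raises the launch speed
# by a factor of sqrt(2), as the scaling relation implies.
v1 = ejecta_velocity_gravity(R=1000.0, g=9.81)
v2 = ejecta_velocity_gravity(R=2000.0, g=9.81)
ratio = v2 / v1
```

Because the relation is a pure power law, ratios between events are independent of `C`, which is why laboratory measurements can be scaled to larger craters.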

268

A Multi-scale Approach to Urban Thermal Analysis

NASA Technical Reports Server (NTRS)

An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.

Gluch, Renne; Quattrochi, Dale A.

2005-01-01

269

We demonstrate how spectral shaping in coherent multidimensional spectroscopy can isolate specific signal pathways and directly access quantitative details. By selectively exciting pathways involving a coherent superposition of exciton states we are able to identify, isolate and analyse weak coherent coupling between spatially separated excitons in an asymmetric double quantum well. Analysis of the isolated signal elucidates details of the coherent interactions between the spatially separated excitons. With a dynamic range exceeding 10^4 in electric field amplitude, this approach facilitates quantitative comparisons of different signal pathways and a comprehensive description of the electronic states and their interactions. PMID:24664021

Tollerud, Jonathan O; Hall, Christopher R; Davis, Jeffrey A

2014-03-24

270

Investigation of Biogrout processes by numerical analysis at pore scale

NASA Astrophysics Data System (ADS)

Biogrout is a soil improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil, using substrates that can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost-effective and environmentally friendly process. The goal of this research is to improve understanding of the process by numerical analysis so that it may be refined and applied properly to varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process many phases are present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Because of the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To untangle this complexity, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, and crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next biofilms and lastly gas bubbles may be added to the model.
From the results of these pore scale models the behaviour of the system may be studied and eventually observations may be extrapolated to a larger continuum scale.

Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.

2013-04-01

271

Objective To evaluate the psychometric properties and clinical utility of the Chinese Multidimensional Health Assessment Questionnaire (MDHAQ-C) in patients with rheumatoid arthritis (RA) in China. Methods 162 RA patients were recruited in the evaluation process. The reliability of the questionnaire was tested by internal consistency and item analysis. Convergent validity was assessed by correlations of MDHAQ-C with the Health Assessment Questionnaire (HAQ), the 36-item Short-Form Health Survey (SF-36) and the Hospital Anxiety and Depression Scale (HAD). Discriminant validity was tested in groups of patients with varied disease activities and functional classes. To evaluate the clinical values, correlations were calculated between MDHAQ-C and indices of clinical relevance and disease activity. Agreement with the Disease Activity Score (DAS28) and Clinical Disease Activity Index (CDAI) was estimated. Results The Cronbach's alpha was 0.944 in the Function scale (FN) and 0.768 in the scale of psychological status (PS). The item analysis indicated all the items of FN and PS are correlated at an acceptable level. MDHAQ-C correlated with the questionnaires significantly in most scales, and scores of scales differed significantly in groups of different disease activity and functional status. MDHAQ-C has moderate to high correlation with most clinical indices and high correlation with a Spearman coefficient of 0.701 for DAS28 and 0.843 for CDAI. The overall agreement of categories was satisfactory. Conclusion MDHAQ-C is a reliable, valid instrument for functional measurement and a feasible, informative quantitative index for busy clinical settings in Chinese RA patients. PMID:24848431

Song, Yang; Zhu, Li-an; Wang, Su-li; Leng, Lin; Bucala, Richard; Lu, Liang-Jing

2014-01-01

272

NASA Astrophysics Data System (ADS)

ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections, a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude, delay and physical modeling-based, real-time spatialization approaches for enhanced expressivity in the virtual sound environment that was developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.

West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman

2009-02-01

273

Technical note: a multi-dimensional description of knee laxity using radial basis functions.

The net laxity of the knee is a product of individual ligament structures that provide constraint for multiple degrees of freedom (DOF). Clinical laxity assessments are commonly performed along a single axis of motion, and lack analyses of primary and coupled motions in terms of translations and rotations of the knee. Radial basis functions (RBFs) allow multiple DOF to be incorporated into a single method that accounts for all DOF equally. To evaluate this method, tibiofemoral kinematics were experimentally collected from a single cadaveric specimen during a manual laxity assessment. A radial basis function (RBF) analysis was used to approximate new points over a uniform grid space. The normalized root mean square errors of the approximated points were below 4% for all DOF. This method provides a unique approach to describing joint laxity that incorporates multiple DOF in a single model. PMID:25115564

Cyr, Adam J; Maletsky, Lorin P

2015-11-01
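The RBF approximation described above can be sketched with a Gaussian kernel; the kernel choice, shape parameter, and toy 2-DOF "laxity" surface below are illustrative assumptions, not the study's data or exact method. Weights are solved so the interpolant reproduces the sampled values at the measurement points:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Solve for Gaussian RBF weights so the interpolant matches the samples."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)          # Gaussian kernel matrix
    return np.linalg.solve(Phi, values)

def rbf_eval(x, centers, w, eps=1.0):
    """Evaluate the fitted RBF model at query points x."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

# Toy 2-DOF surface sampled at scattered points (synthetic, not knee data).
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(40, 2))
vals = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
w = rbf_fit(pts, vals)

# Normalized root mean square error at the sample points.
approx = rbf_eval(pts, pts, w)
nrmse = np.sqrt(np.mean((approx - vals) ** 2)) / (vals.max() - vals.min())
```

Because every degree of freedom enters the distance computation identically, the same code handles translations and rotations together, which is the property the abstract emphasizes.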

274

A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

NASA Technical Reports Server (NTRS)

The solution of the two-dimensional Euler equations is based on the two-dimensional linear convection equation and the Euler-equation decomposition developed by Hirsch et al. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative, and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.

Powell, Kenneth G.; Van Leer, Bram

1989-01-01

275

NASA Technical Reports Server (NTRS)

The nonlinear stability of compact schemes for shock calculations is investigated. In recent years compact schemes were used in various numerical simulations including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total-variation stable (1D) or maximum-norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, a preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.

Shu, Chi-Wang

1992-01-01

276

Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

NASA Astrophysics Data System (ADS)

amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

Bartsch, H.; Erlebacher, G.

2003-12-01

277

Multi-dimensional models of circumstellar shells around evolved massive stars

NASA Astrophysics Data System (ADS)

Context. Massive stars shape their surrounding medium through the force of their stellar winds, which collide with the circumstellar medium. Because the characteristics of these stellar winds vary over the course of the evolution of the star, the circumstellar matter becomes a reflection of the stellar evolution and can be used to determine the characteristics of the progenitor star. In particular, whenever a fast wind phase follows a slow wind phase, the fast wind sweeps up its predecessor in a shell, which is observed as a circumstellar nebula. Aims: We make 2D and 3D numerical simulations of fast stellar winds sweeping up their slow predecessors to investigate whether numerical models of these shells have to be 3D, or whether 2D models are sufficient to reproduce the shells correctly. Methods: We use the MPI-AMRVAC code, using hydrodynamics with optically thin radiative losses included, to make numerical models of circumstellar shells around massive stars in 2D and 3D and compare the results. We focus on those situations where a fast Wolf-Rayet star wind sweeps up the slower wind emitted by its predecessor, being either a red supergiant or a luminous blue variable. Results: As the fast Wolf-Rayet wind expands, it creates a dense shell of swept up material that expands outward, driven by the high pressure of the shocked Wolf-Rayet wind. These shells are subject to a fair variety of hydrodynamic-radiative instabilities. If the Wolf-Rayet wind is expanding into the wind of a luminous blue variable phase, the instabilities will tend to form a fairly small-scale, regular filamentary lattice with thin filaments connecting knotty features. If the Wolf-Rayet wind is sweeping up a red supergiant wind, the instabilities will form larger interconnected structures with less regularity. The numerical resolution must be high enough to resolve the compressed, swept-up shell and the evolving instabilities, which otherwise may not even form. 
Conclusions: Our results show that 3D models, when translated to observed morphologies, give realistic results that can be compared directly to observations. The 3D structure of the nebula will help to distinguish different progenitor scenarios.

van Marle, A. J.; Keppens, R.

2012-11-01

278

Observation and Analysis of Small-scale Solar Magnetic Structure

NASA Astrophysics Data System (ADS)

Solar magnetic flux elements on spatial scales below 350 km (0″.5) are analyzed using G-band 4305 Å, Ca II K-line, and 4686 Å continuum filtergrams as well as Fe I 6302 Å and 5250 Å magnetograms acquired nearly simultaneously at the Swedish Solar Vacuum Telescope on La Palma. Spatial resolution is below 0″.3 in the majority of images. Phase-diversity image restoration is applied to yield a 180-frame (78 minute) image set in which nearly every frame exhibits 0″.2 spatial resolution. Image processing algorithms are developed which successfully segment the magnetic elements from the surrounding granulation for analysis. The FWHM of magnetic elements demarcated by G-band bright points in disk-center plage is log-normally distributed with a modal value of 220 km and an average value of 250 km. Average disk-center contrast of magnetic elements in the G-band is 31% with maximum values frequently exceeding 70% relative to the quiet-Sun average. Simultaneous 4686 Å continuum contrast is 2 to 3 times lower. The average G-band contrast of magnetic elements shows no size dependency over a range of 150-600 km in diameter. G-band bright points occur without exception on sites of isolated magnetic flux concentrations or peninsular concentrations extending from larger concentrations of flux; isolated magnetic flux concentrations are found without associated G-band bright points. Magnetic elements demarcated by G-band bright points occupy no more than 1-2% of plage and active network regions by area at any one time. Magnetic elements move in the intergranular flowfield at speeds from 0.5 to 5 km s⁻¹. The RMS speed is 2.4 km s⁻¹ over an average range of 2100 km (3″). Continual fragmentation and merging of magnetic elements is the normal evolutionary mode for small-scale magnetic elements.
The time scale for the dynamics is approximately 6-8 minutes, but significant morphological changes occur on time scales as short as 100 seconds. Analysis of the tracks of individual elements yields a diffusion coefficient of 224.8 ± 0.2 km² s⁻¹. Indications of anomalous diffusivity consistent with diffusion on a fractal geometry are found. This research was supported by the SOI-MDI NASA contract NAG5-3077 at Stanford University and NASA contract NAS8-39747 and independent research funds at Lockheed-Martin.

Berger, T.

1996-05-01
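A diffusion coefficient like the one quoted above is commonly estimated from element tracks via the mean squared displacement, which for ordinary 2D diffusion grows as MSD(t) = 4Dt. A sketch on synthetic Brownian tracks (arbitrary units, not the solar data, and ignoring the anomalous-diffusion refinement the abstract mentions):

```python
import numpy as np

def diffusion_coefficient(tracks, dt):
    """Estimate D from 2D tracks via the mean squared displacement,
    using MSD(t) = 4 D t for ordinary diffusion in two dimensions."""
    # tracks: array of shape (n_tracks, n_steps, 2)
    disp = tracks - tracks[:, :1, :]                    # displacement from start
    msd = np.mean(np.sum(disp ** 2, axis=-1), axis=0)   # average over tracks
    t = np.arange(tracks.shape[1]) * dt
    slope = np.sum(t * msd) / np.sum(t * t)             # least squares through origin
    return slope / 4.0

# Synthetic Brownian tracks with a known diffusion coefficient.
rng = np.random.default_rng(2)
D_true, dt, n_tracks, n_steps = 5.0, 1.0, 2000, 200
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt),
                   size=(n_tracks, n_steps - 1, 2))     # per-axis variance 2*D*dt
tracks = np.concatenate([np.zeros((n_tracks, 1, 2)),
                         np.cumsum(steps, axis=1)], axis=1)
D_est = diffusion_coefficient(tracks, dt)
```

On a fractal geometry the MSD grows with a non-unit power of time instead, which is how the anomalous diffusivity mentioned in the abstract would show up in such an analysis.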

279

Multi-dimensional combustor flowfield analyses in gas-gas rocket engine

NASA Technical Reports Server (NTRS)

The objectives of the present research are to improve design capabilities for low-thrust rocket engines through understanding of the detailed mixing and combustion processes. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an on-going experimental program at NASA LeRC. Detailed computational modeling requires the application of the full three-dimensional Navier-Stokes equations, coupled with species diffusion equations. The numerical procedure employs both time-marching and time-accurate algorithms, using an LU approximate factorization in time and flux-split upwind differencing in space. The emphasis in this paper is on using numerical analysis to understand detailed combustor flowfields, including the shear-layer dynamics created between the fuel film cooling and the core gas in the vicinity of the combustor wall; the integrity and effectiveness of the coolant film; three-dimensional fuel jet injection/mixing/combustion characteristics; and their impacts on global engine performance.

Tsuei, Hsin-Hua; Merkle, Charles L.

1994-01-01

280

Analysis of Large-scale Grid-based Monte Carlo Applications

Li, Yaohang; Mascagni, Michael

281

NASA Astrophysics Data System (ADS)

The method of anchored distributions (MAD, Rubin et al., Water Resour. Res., 2010) is a Bayesian inversion technique that combines geostatistical concepts with a strategy for localization of data that is indirectly related to the target variables, using anchors. Anchors are statistical distributions of the target variables (e.g., the hydraulic conductivity) at specific locations. The variable field is described by the statistical distributions of structural parameters that characterize global features and by anchor distributions that are intended to capture local effects. The posterior distributions of structural and anchor parameter sets are used to update the approximate spatial distribution of the target variable and are generated by re-sampling the parameter sets using their normalized likelihood estimates as the probability of being selected. Increasing the dimension of the data, to include additional information in the likelihood estimate, increases the computational burden. Two measures are taken to accommodate the advantageous additional data without spurious side effects: (1) partitioning parameter sets into hypercubes, based upon the similarity of the structural parameter values; and (2) principal component analysis, to reduce the dimensionality by discarding a certain percentage of principal components. As an additional feature for large sample sets, or faster calculation, a ‘bundling’ regime can be implemented. Bundling is employed immediately after partitioning the parameter sets into hypercubes. Bundling identifies spatial patterns amongst the realizations generated from the distributions defining the anchor parameters. The added organizational step allows data with reduced sample sizes to be passed to the PCA algorithm. The division of the data set allows for simple parallelization of the computation, and our case study achieved a three-fold dimension reduction.
Because of the high dimension involved in the calculation, without absurdly large sample sizes, it is reasonable to assume that the data sparsely populates the hyperspace. In order to avoid using an interpolation scheme that would average and smooth the likelihood distribution over extensive regions of unpopulated hyperspace, the data is scanned for clusters using the HOPACH algorithm authored by M. Van der Laan. The density is estimated, over the clusters, non-parametrically. The cluster approximations are summed up using a mixture model to achieve the final likelihood estimate.

Over, M. W.; Murakami, H.; Hahn, M. S.; Yang, Y.; Rubin, Y.

2010-12-01
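The PCA dimension-reduction step described above can be sketched with a plain SVD: keep only the leading components that retain a chosen fraction of total variance and discard the rest. The variance threshold and synthetic data below are illustrative, not from the case study:

```python
import numpy as np

def pca_reduce(X, var_fraction=0.999):
    """Project data onto the leading principal components that retain
    a given fraction of total variance (threshold is illustrative)."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)                 # variance per component
    k = int(np.searchsorted(np.cumsum(var), var_fraction)) + 1
    return Xc @ Vt[:k].T, k                       # scores and component count

# Data with only 3 strong directions embedded in 12 dimensions.
rng = np.random.default_rng(3)
latent = rng.standard_normal((500, 3))
mix = rng.standard_normal((3, 12))
X = latent @ mix + 0.01 * rng.standard_normal((500, 12))
Z, k = pca_reduce(X)
```

Here 12 nominal dimensions collapse to the 3 that carry nearly all the variance, the same kind of several-fold reduction the abstract reports for its parameter sets.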

282

NASA Technical Reports Server (NTRS)

Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.

Krishnamurthy, Thiagarajan

2010-01-01

283

INUNDATION PATTERNS AND FATALITY ANALYSIS ON LARGE-SCALE FLOOD

NASA Astrophysics Data System (ADS)

In order to enhance emergency preparedness for large-scale floods of the Ara River, we categorized the inundation patterns and calculated fatality estimates. We devised an effective continuous embankment elevation estimation method employing light detection and ranging (LiDAR) data analysis. Drainage pump capabilities, in terms of operable inundation depth and operable duration limited by fuel supply logistics, were modeled from pump station data of each site along the rivers. Fatality reduction effects due to the enhancement of the drainage capabilities were calculated. We found that proper operation of the drainage facilities can decrease the number of estimated fatalities considerably in some cases. We also estimated the difference in risk between floods with a 200-year return period and those with a 1000-year return period. In some of the 1000-year return period cases, we found that the estimated fatalities jumped up, whereas the populations in inundated areas changed only a little.

Ikeuchi, Koji; Ochi, Shigeo; Yasuda, Goro; Okamura, Jiro; Aono, Masashi

284

Large-Scale Quantitative Analysis of Painting Arts

NASA Astrophysics Data System (ADS)

Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of paintings has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.

Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

2014-12-01

285

Multidimensional Scaling Analysis of the Dynamics of a Country Economy

This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, identifying similarities and dissimilarities among periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
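The MDS mapping this abstract relies on can be illustrated with classical (Torgerson) MDS, which embeds items in a low-dimensional map from a dissimilarity matrix. This is a generic sketch of the technique, not the paper's own data or implementation; the toy matrix below is hypothetical:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions from a
    symmetric n x n distance matrix D via the double-centered Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # k largest eigenvalues
    L = np.sqrt(np.maximum(vals[idx], 0.0))    # clip tiny negatives
    return vecs[:, idx] * L

# Toy dissimilarity matrix between four hypothetical "historical periods";
# it is exactly Euclidean (points on a line at 0, 1, 4, 5), so the
# embedding reproduces the distances.
D = np.array([[0., 1., 4., 5.],
              [1., 0., 3., 4.],
              [4., 3., 0., 1.],
              [5., 4., 1., 0.]])
X = classical_mds(D)
```

Similar periods land close together in the map, which is exactly how the clusters in the paper's MDS maps are read.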

Mata, Maria Eugénia

2013-01-01

286

It is well established that non-uniform sampling (NUS) allows acquisition of multi-dimensional NMR spectra at a resolution that cannot be obtained with traditional uniform acquisition through the indirect dimensions. However, the impact of NUS on the signal-to-noise ratio (SNR) and sensitivity is less well documented. SNR and sensitivity are essential aspects of NMR experiments as they define the quality and extent of data that can be obtained. This is particularly important for spectroscopy with low-concentration samples of biological macromolecules. There are different ways of defining the SNR depending on how the noise is measured, and the distinction between SNR and sensitivity is often not clear. While there are defined procedures for measuring sensitivity with high-concentration NMR standards, such as sucrose, there is no clear or generally accepted definition of sensitivity when comparing different acquisition and processing methods for spectra of biological macromolecules with many weak signals close to the level of noise. Here we propose tools for estimating the SNR and sensitivity of NUS spectra with respect to sampling schedule and reconstruction method. We compare uniformly acquired spectra with NUS spectra obtained in the same total measuring time. The time saving obtained when only 1/k of the Nyquist grid points are sampled is used to measure k-fold more scans per increment. We show that judiciously chosen NUS schedules together with suitable reconstruction methods can yield a significant increase of the SNR within the same total measurement time. Furthermore, we propose to define the sensitivity as the probability of detecting weak peaks and show that time-equivalent NUS can significantly increase this detection sensitivity. The sensitivity gain increases with the number of NUS indirect dimensions.
Thus, well-chosen NUS schedules and reconstruction methods can significantly increase the information content of multidimensional NMR spectra of challenging biological macromolecules. PMID:23274692
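The SNR bookkeeping behind the k-fold-more-scans argument can be sketched numerically: averaging k scans reduces the noise level by roughly sqrt(k) at fixed signal. This is only an illustration of the SNR definition (peak height over rms of a signal-free region), not the paper's reconstruction methods; all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(spectrum, signal_idx, noise_slice):
    """One common SNR definition: peak height divided by the rms of a
    signal-free noise region."""
    return spectrum[signal_idx] / spectrum[noise_slice].std()

n = 512
signal = np.zeros(n)
signal[100] = 1.0                      # a single synthetic peak

# One scan vs. k-fold averaging: the time saved by sampling only 1/k of
# the Nyquist grid is spent on k more scans per increment.
k = 4
one_scan = signal + 0.05 * rng.standard_normal(n)
k_scans = signal + 0.05 * rng.standard_normal((k, n)).mean(axis=0)

snr_1 = snr(one_scan, 100, slice(200, 500))
snr_k = snr(k_scans, 100, slice(200, 500))
# Averaging k scans improves the noise level by roughly sqrt(k).
```

Whether that raw-data gain survives into the final spectrum depends on the NUS schedule and reconstruction method, which is the question the paper addresses.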

Hyberts, Sven G.; Robson, Scott A.; Wagner, Gerhard

2013-01-01

287

NASA Astrophysics Data System (ADS)

Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and probability-based procedures have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous extensive evaluations of these newly developed diagrams have indicated their successful application in determining the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though there are some inconsistent inferences of tectonic setting by some of the diagrams.
The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of Precambrian rocks.

Pandarinath, Kailasa

2014-12-01

288

Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

ERIC Educational Resources Information Center

International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…

Chow, Kui Foon; Kennedy, Kerry John

2014-01-01

289

Metal Analysis of Scales Taken from Arctic Grayling

This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca,

A. P. Farrell; A. H. Hodaly; S. Wang

290

We give an outlook on how to realize the ideas of complex scaling of Grothaus, Vogel and Streit for phase space path integrals in the framework of White Noise Analysis. The idea of this scaling method goes back to Doss. We therefore extend the concept of complex scaling to scaling with suitable bounded operators.

Wolfgang Bock

2014-01-21

291

A confirmatory factor analysis of scores on the teacher efficacy scale

Validity studies on the Teacher Efficacy Scale provide us with neither clear evidence nor clear solutions to the factorial structure or the theoretical concepts underlying the scale’s items. This study tests different factor structures of the Teacher Efficacy Scale as found in the literature using confirmatory factor analysis on data from a sample of 540 practicing teachers. Four factorial models

André Brouwers; Welko Tomic; Sjef Stijnen

2002-01-01

292

Observation and Analysis of Small-Scale Solar Magnetic Structure

NASA Astrophysics Data System (ADS)

Properties of small-scale magnetic structures in the photosphere are analyzed in multi-spectral time-series image sets obtained at the 50 cm Swedish Solar Vacuum Telescope (SVST) on the island of La Palma, Spain. Several of the images are among the highest resolution images of the solar photosphere yet obtained. Sub-arcsecond-scale magnetic 'elements' are identified, segmented, and tracked using bright points found in very high spatial resolution G-band 4305 A filtergrams. Simultaneous images including Ca II K-line filtergrams, Fe I 6302 A magnetograms, and 4686 A broadband continuum filtergrams allow cross-wavelength comparison of properties. Angular resolution of the filtergrams is typically 0.25'' and temporal resolution is in the range of 20-100 sec; magnetogram resolution approaches 0.3'' in some images and is generally below 0.5''. To above an 84% statistical confidence level, G-band bright points occur exclusively at sites of kilogauss, sub-arcsecond, magnetic flux concentrations in the photosphere; magnetic flux concentration is a necessary but not sufficient condition for the occurrence of G-band bright points. The measured distribution of magnetic element diameters in active region network is log-normal with a modal value of 220 km (0.3''). The smallest elements observed are 120 km (0.17'') in diameter; the largest are about 600 km (0.7'') in diameter. The average contrast with respect to quiet Sun of magnetic elements in the G-band is 30%: 2-3 times higher than the average continuum contrast. Magnetic element contrast does not vary with size within the size range of G-band bright point measurements. Average contrast increases with limbward heliocentric angle to a peak of about 80% at μ = cos θ = 0.3; there is evidence of a decrease with further increase in angle. Magnetic elements undergo a continual fragmentation/merging evolution driven by the granular convective flowfield of the photosphere; morphological time scales are on the order of 100 seconds.
Velocities of individual elements range from 1-5 km s-1 with an RMS value of 2.4 km s-1. The range of motion is typically on granular and mesogranular scales (1000-2500 km) with an average value of 2100 km. Individual fragments from clusters have a characteristic lifetime on the order of the granulation correlation time (6-8 minutes). The lifetime of clusters associated with persistent sinks in the granular flowfield is on the order of hours. Classical statistical analysis of displacement versus time yields a diffusion coefficient for network magnetic elements of 224.8 ± 0.2 km2 s-1. In general, the results are inconsistent with the idea of small-scale magnetic flux in the photosphere being contained in stable, isolated 'flux tubes' and emphasize the need for better understanding of the formation and the thermal (and non-thermal) heating of magnetic regions in the photosphere.
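The displacement-versus-time analysis used to extract a diffusion coefficient follows the classical random-walk relation: in two dimensions the mean squared displacement grows as <r^2> = 4Dt, and D is the fitted slope divided by 4. A synthetic sketch (all parameter values hypothetical, not the thesis data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D random walk of "magnetic elements"; for normal diffusion
# the mean squared displacement obeys <r^2>(t) = 4 * D * t.
D_true = 100.0     # km^2/s, illustrative
dt = 10.0          # s per step
steps = 200
walkers = 2000

# Each step: Gaussian displacement with variance 2*D*dt per axis.
xy = np.cumsum(rng.standard_normal((walkers, steps, 2))
               * np.sqrt(2.0 * D_true * dt), axis=1)
msd = (xy ** 2).sum(axis=2).mean(axis=0)     # <r^2>(t) over walkers
t = dt * np.arange(1, steps + 1)

D_est = np.polyfit(t, msd, 1)[0] / 4.0       # slope / 4 recovers D
```

The fitted D_est recovers D_true to within a few percent, which is the essence of the "classical statistical analysis of displacement versus time" mentioned above.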

Berger, Thomas Edward

1997-05-01

293

Metal Analysis of Scales Taken from Arctic Grayling

This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn,

A. P. Farrell; A. H. Hodaly; S. Wang

2000-01-01

294

Dream Intensity Scale: Factors in the Phenomenological Analysis of Dreams

The present study aimed to develop a comprehensive assessment tool for measuring subjective dream intensity by revising the original probes and response scales of the Dream Intensity Inventory and incorporating new variables. The factor analyses suggested that 18 items of the new instrument, Dream Intensity Scale, could form four scales and six subscales. The revision of the probes and response

Calvin Kai-Ching Yu

2010-01-01

295

of FLOW model. We start from the Longuet-Higgins (1970b) one-dimensional longshore momentum balance, with the wave forcing represented by an energy decay based on a monochromatic wave breaking on a planar beach (16). Herein the O(1) problem is identical to the situation expressed in Longuet-Higgins (1970a,b); the solution is V = B1 X^p1 + A1 X for 0 <= X <= 1 (17) and V = B2 X^p2 for X >= 1 (18).
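The Longuet-Higgins (1970b) similarity solution referenced in this excerpt can be sketched numerically. The exponents and matching constants below are the ones commonly quoted in the coastal-engineering literature for the lateral-mixing parameter P, obtained by requiring continuity of V and dV/dX at the breaker line X = 1; this is a generic sketch, not the report's implementation:

```python
import math

def longshore_profile(P, X):
    """Nondimensional longshore current V(X) for mixing parameter P
    (0 < P, P != 2/5), as commonly quoted for Longuet-Higgins (1970b):
      V = B1*X**p1 + A*X  for 0 <= X <= 1,
      V = B2*X**p2        for X >= 1.
    """
    A = 1.0 / (1.0 - 2.5 * P)
    root = math.sqrt(9.0 / 16.0 + 1.0 / P)
    p1 = -0.75 + root          # positive exponent (inner solution)
    p2 = -0.75 - root          # negative exponent (outer solution)
    # Matching V and dV/dX at X = 1:
    #   B1 + A = B2   and   p1*B1 + A = p2*B2
    B1 = A * (p2 - 1.0) / (p1 - p2)
    B2 = A * (p1 - 1.0) / (p1 - p2)
    if X <= 1.0:
        return B1 * X ** p1 + A * X
    return B2 * X ** p2
```

The profile vanishes at the shoreline, peaks inside the surf zone, and decays seaward of the breaker line, which is the structure the excerpt's equations (17)-(18) describe.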

Jiang, Boyang

2012-02-14

296

Analysis of small-scale rotor hover performance data

NASA Technical Reports Server (NTRS)

Rotor hover-performance data from a 1/6-scale helicopter rotor are analyzed and the data sets compared for the effects of ambient wind, test stand configuration, differing test facilities, and scaling. The data are also compared to full-scale hover data. The data exhibited high scatter, not entirely due to ambient wind conditions. Effects of download on the test stand proved to be the most significant influence on the measured data. Small-scale data correlated reasonably well with full-scale data; the correlation did not improve with Reynolds number corrections.

Kitaplioglu, Cahit

1990-01-01

297

Large-scale fault kinematic analysis in Noctis Labyrinthus (Mars)

NASA Astrophysics Data System (ADS)

Noctis Labyrinthus (Mars) is characterized by many tectonic features, which represent brittle deformation of the crust. This tectonic setting was analysed by remote sensing of the Viking Mars Digital Image Model (MDIM) mosaic and Mars Orbiter Camera (MOC) global mosaic, in order to identify deformational events. The main features are normal faults producing horst-graben structures, strike-slip faults, and related en-echelon and pull-apart basins. Using the criterion of cross-cutting relationships and analysis of secondary structures, to infer sense of movement of faults, two deformational phases were identified in the Noctis Labyrinthus area. The first, D1, located mainly in the northern part, is characterized by transtensional faults (Noachian). The second, D2, recorded in the southern part of the Noctis Labyrinthus by an orthorhombic extensional fault pattern along NNE and WNW trends, is related to the Valles Marineris formation (Late Noachian-Early Hesperian). A third tectonic event, D3, represented by the partly known dextral NW strike-slip faults cross-cutting the Valles Marineris Canyon System (Late Hesperian?-Amazonian?), was not found in Noctis Labyrinthus at the scale and resolution considered.

Bistacchi, Nicola; Massironi, Matteo; Baggio, Paolo

2004-01-01

298

Acoustic modal analysis of a full-scale annular combustor

NASA Technical Reports Server (NTRS)

An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine. Previously announced in STAR as N83-21896
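The least-squares decomposition idea can be illustrated in a simplified form: with random, statistically independent, equal-amplitude clockwise and counter-clockwise waves, the cross-spectrum between two wall transducers separated by an angle dtheta is a linear combination of cos(m*dtheta) over the azimuthal modes m, so the modal powers follow from a least-squares fit. This is a hedged sketch of the general technique, not the report's exact formulation; all amplitudes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical modal powers P_m for azimuthal modes m = 0..3.
m_modes = np.arange(0, 4)
P_true = np.array([1.0, 0.2, 2.5, 0.4])

# Cross-spectra measured at several transducer separations dtheta:
#   G(dtheta) = sum_m P_m * cos(m * dtheta)
dtheta = np.linspace(0.0, np.pi, 20)
A = np.cos(np.outer(dtheta, m_modes))        # design matrix
G_noisy = A @ P_true + 0.01 * rng.standard_normal(dtheta.size)

# Least-squares recovery of the modal powers from the noisy cross-spectra.
P_fit, *_ = np.linalg.lstsq(A, G_noisy, rcond=None)
```

The fit recovers the dominant mode (here m = 2) from cross-spectral data alone, mirroring how the modal content of the combustor pressure field is extracted.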

Karchmer, A. M.

1983-01-01

299

Acoustic modal analysis of a full-scale annular combustor

NASA Technical Reports Server (NTRS)

An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine.

Karchmer, A. M.

1982-01-01

300

MicroScale Thermophoresis: Interaction analysis and beyond

NASA Astrophysics Data System (ADS)

MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During a MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications - such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
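Kd determination from a dose-response series is commonly done by fitting a 1:1 binding isotherm; a minimal sketch of that model (generic, not taken from the review) is:

```python
def fraction_bound(ligand_conc, kd):
    """1:1 binding isotherm commonly fitted to dose-response curves:
    fraction bound = [L] / (Kd + [L]).  Concentrations and Kd must be
    in the same units; values here are illustrative."""
    return ligand_conc / (kd + ligand_conc)

# At [L] = Kd, half of the target is bound; well above Kd, binding saturates.
half = fraction_bound(5.0, 5.0)       # [L] equal to Kd
near_sat = fraction_bound(500.0, 5.0) # [L] 100x above Kd
```

In an MST titration, the measured thermophoresis signal plays the role of fraction bound, and Kd is the fitted parameter.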

Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis

2014-12-01

301

MIXREGLS: A Program for Mixed-Effects Location Scale Analysis

MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured on many occasions and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062
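The between-cluster and within-cluster variances that MIXREGLS models can be illustrated with a minimal method-of-moments sketch on simulated clustered data. This is not the program's ML/EM estimator, only an illustration of the two variance components; all parameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated clustered data: 50 clusters, 20 observations per cluster.
clusters, per = 50, 20
u = rng.normal(0.0, 2.0, clusters)                    # cluster effects, var 4
y = u[:, None] + rng.normal(0.0, 1.0, (clusters, per))  # within-cluster var 1

# Method-of-moments estimates of the two variance components:
within = y.var(axis=1, ddof=1).mean()                 # ~ sigma_w^2 = 1
between = y.mean(axis=1).var(ddof=1) - within / per   # ~ sigma_b^2 = 4
```

A location scale model goes further by letting covariates shift both components (e.g., log-linear models for the variances), which is what MIXREGLS estimates by maximum likelihood.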

Hedeker, Donald; Nordgren, Rachel

2013-01-01

302

This paper introduces the architecture and algorithms of TCMiner: a high performance data mining system for multi-dimensional data analysis of Traditional Chinese Medicine prescriptions. The system has the following competing advantages: (1) High Performance; (2) Multi-dimensional Data Analysis Capability; (3) High Flexibility; (4) Powerful Interoperability; and (5) Special Optimization for TCM. This data mining system can work as a powerful assistant

Chuan Li; Changjie Tang; Jing Peng; Jianjun Hu; Lingming Zeng; Xiaoxiong Yin; Yongguang Jiang; Juan Liu

2004-01-01

303

Background and Objective Gingival crevicular fluid (GCF) has been of major interest for many decades as a valuable body fluid that may serve as a source of biomarkers for both periodontal and systemic diseases. Because of its very small sample size, sub-µl level, identification of its protein composition by classical biochemical methods has been limited. The advent of highly sensitive mass spectrometric technology has permitted large-scale identification of protein components of many biological samples. This technology has been employed to identify the protein composition of GCF from inflamed and periodontal sites. In this report we present a proteome dataset of GCF from healthy periodontium sites. Methods A combination of the periopaper collection method with application of multidimensional protein separation and mass spectrometric (MS) technology led to a large-scale documentation of the proteome of GCF from healthy periodontium sites. Results The approaches utilized have culminated in identification of 199 proteins in GCF of periodontally healthy sites. The current GCF proteome from healthy sites was compared and contrasted with those proteomes of GCF from inflamed and periodontal sites as well as serum. The cross-correlation of the GCF and plasma proteomes permitted dissociation of the 199 identified GCF proteins into 105 proteins (57%) that can be identified in plasma and 94 proteins (43%) which are distinct and unique to the GCF microenvironment. Such analysis also revealed distinctions in protein functional categories between serum proteins and those specific to the GCF microenvironment. Conclusion Firstly, the data presented herein provide the proteome of GCF from periodontally healthy sites through establishment of innovative analytical approaches for effective analysis of GCF from periopapers, both at the level of complete elution and removal of abundant albumin, which restricts identification of low-abundance proteins.
Secondly, it adds significantly to the knowledge of GCF composition and highlights new groups of proteins specific to GCF microenvironment. PMID:22029670

Carneiro, Leandro G.; Venuleo, Caterina; Oppenheim, Frank G.; Salih, Erdjan

2011-01-01

304

Finite Element Analysis of Small Scale Continuous Calving

NASA Astrophysics Data System (ADS)

Ice shelves are floating ice masses, which are sensitive to climate changes. The main mechanisms for the mass loss of ice shelves around Antarctica are basal melting and calving. For an understanding of the mechanisms of calving, the influence of environmental parameters needs to be investigated. We use a fracture mechanical approach to examine the nature and frequency of calving events. Ice responds to load in two ways: on long time scales ice reacts like a viscous fluid, and on short time scales like an elastic solid. As calving is a representation of the solid nature of ice, the elastic response is important and linear elastic fracture mechanics can be applied. However, gravity remains a long-time load and hence a viscous component needs to be taken into account as well. Therefore, we use a Kelvin-Voigt model for analyzing the transient response of an ice shelf to a calving event. In a simplified 2D model the ice shelf is treated as a rectangular block, in which the gravity force is the only load in a first analysis. The stresses on the surface in the vicinity of the calving front are computed with the finite element software COMSOL. The boundary conditions are the water pressure at the front and bottom of the ice shelf and a constant displacement at the inflow. A stationary state reappears until eventually the subsequent calving event occurs; the termination time is around 175 days. Based on this time interval and the flow velocity of the ice shelf we estimate the calving rate. Different parameter studies reveal the influence of geometry and material parameters on the stresses for an elastic material model. The literature and measurements at the Ekstroem Ice Shelf, East Antarctica, provide the relevant parameter range. Due to the depth-dependent water pressure at the ice front, a bell-shaped distribution of stresses on the surface is found.
For this reason the location of the maximal stress denotes the most likely position for a calving event and lies between 0.65H and 0.85H, with H the thickness at the ice front. The results of these studies are compared to the results for two cross-sections of measured geometries of the Ekstroem Ice Shelf.
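The Kelvin-Voigt idealization invoked in this abstract has a textbook creep response under a constant load: strain approaches sigma/E exponentially with time constant tau = eta/E. A minimal sketch with illustrative parameter values (not the ice-shelf parameters from the study):

```python
import math

def kv_strain(t, sigma=1.0e5, E=9.0e9, eta=1.0e14):
    """Kelvin-Voigt creep under constant stress sigma:
       eps(t) = (sigma / E) * (1 - exp(-t / tau)),  tau = eta / E.
    E (Pa), eta (Pa s) and sigma (Pa) are illustrative values only."""
    tau = eta / E
    return (sigma / E) * (1.0 - math.exp(-t / tau))

# Strain starts at zero (instantaneous response damped by the dashpot)
# and relaxes toward the purely elastic value sigma / E.
eps_late = kv_strain(1.0e9)
```

This captures the combination described above: elastic behavior sets the long-time limit, while the viscous element controls the transient after a calving event.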

Christmann, Julia; Müller, Ralf; Humbert, Angelika; Gross, Dietmar

2013-04-01

305

Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)

ERIC Educational Resources Information Center

Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…

Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.

2004-01-01

306

A priori and a posteriori error analysis for a large-scale ocean circulation finite element

Different models have been proposed for large-scale horizontal ocean dynamics. We present a priori and a posteriori error analysis for the finite element solution of the stream function-vorticity formulation of a large-scale ocean circulation model. First, we

RodrÃguez, Rodolfo

307

Scaling Analysis of Nanowire PhaseChange Memory

This letter analyzes the scaling property of nanowire (NW) phase-change memory (PCM) using analytic and numerical methods. The scaling scenarios of the three widely used NW PCM operation schemes (i.e., constant electric field, voltage, and current) are studied and compared. It is shown that if the device size is downscaled by a factor of 1/k (k > 1), the

Jie Liu; Bin Yu; M. P. Anantram

2011-01-01

308

Confirmatory Factor Analysis of the Geriatric Depression Scale

ERIC Educational Resources Information Center

Purpose: The Geriatric Depression Scale (GDS) is widely used in clinical and research settings to screen older adults for depressive symptoms. Although several exploratory factor analytic structures have been proposed for the scale, no independent confirmation has been made available that would enable investigators to confidently identify scores…

Adams, Kathryn Betts; Matto, Holly C.; Sanders, Sara

2004-01-01

309

Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive species. The inversion method is based on Bayesian Monte Carlo analysis applied to a regional-scale chemistry transport model. Weights are attributed to individual Monte Carlo simulations by comparing them with observations from the AIRPARIF network.

Menut, Laurent

310

Large-Scale Sentiment Analysis for News and Blogs (System Demonstration)

Sentiment cues can provide a surprisingly meaningful sense of how the latest news impacts important entities. Here we demonstrate our large-scale sentiment analysis system for news and blog entities built

Namrata Godbole; Manjunath

311

Examined sensory perceptions of dimensionally simple stimuli composed of 1 or 2 objectively measurable attributes of flavor (sweet, sour) and color (red) using multidimensional scaling analysis. In 3 experiments, each with 20-30 Ss aged 19-36 yrs, high correlations were found between the objectively measurable physical characteristics of the stimuli and the perceptual spaces obtained by multidimensional scaling analysis of expressed

James M. McCullough; Charlene S. Martinsen; Reza Moinpour

1978-01-01

312

QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web

This paper presents the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. To support extraction of QA-Pagelets from the Deep Web, the Thor framework is presented. Two unique features of the Thor framework are (1) the novel page clustering

Caverlee, James

313

QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web

This paper presents the QA-Pagelet as a fundamental data preparation technique for large scale data analysis of the Deep Web. To support extraction of QA-Pagelets from the Deep Web, the Thor framework is presented. Two unique features of the Thor framework are (1) the novel page clustering for grouping

Liu, Ling

314

Scaling parameters for PFBC cyclone separator system analysis

Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have allowed developing the knowledge of fluidized bed hydrodynamics. In order for the results of the research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but up to now no attention has been paid to developing the knowledge of cyclone separator systems. CIRCE has worked on the development of scaling parameters to enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and an experimental comparison of a cyclone separator system and a cold flow model constructed and based on those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurements of flow patterns in cyclones and diplegs, will allow characterizing the performance of the full-scale ash removal system, establishing safe limits of operation and testing design improvements.

Gil, A.; Romeo, L.M.; Cortes, C.

1999-07-01

315

Differential rotation and cloud texture: Analysis using generalized scale invariance

The standard picture of atmospheric dynamics is that of an isotropic two-dimensional large scale and an isotropic three-dimensional small scale, the two separated by a dimensional transition called the "mesoscale gap." Evidence now suggests that, on the contrary, atmospheric fields, while strongly anisotropic, are nonetheless scale invariant right through the mesoscale. Using visible and infrared satellite cloud images and the formalism of generalized scale invariance (GSI), the authors attempt to quantify the anisotropy for cloud radiance fields in the range 1-1000 km. To do this, the statistical translational invariance of the fields is exploited by studying the anisotropic scaling of lines of constant Fourier amplitude. This allows the investigation of the change in shape and orientation of average structures with scale. For the three texturally and meteorologically very different images analyzed, three different generators of anisotropy are found that generally reproduce well the Fourier space anisotropy. Although three cases are a small number from which to infer ensemble-averaged properties, the authors conclude that while cloud radiances are not isotropic (self-similar), they are nonetheless scaling. Since elsewhere (with the help of simulations) it is shown that the generator of the anisotropy is related to the texture, it is argued here that GSI could potentially provide a quantitative basis for cloud classification and modeling. 59 refs., 21 figs., 2 tabs.

Pflug, K.; Lovejoy, S. (McGill Univ., Montreal, Quebec (Canada)); Schertzer, D. (Universite Pierre et Marie Curie, Paris (France))

1993-02-14

316

Multidimensional analysis of the large-scale segregation of luminosity

The multidimensional or multifractal formalism has been applied to analyze the CfA catalog. The spectrum of scaling indices and the generalized dimensions D(q) have been found to be scale-invariant in certain scaling regions. This invariance means that the multidimensional formalism is a good tool to characterize galaxy distributions. By means of this formalism it has been found that CfA galaxies brighter than about M(c) = -20 (H0 = 100 km/s/Mpc) are more clustered than fainter galaxies. 42 refs.
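The generalized dimensions D(q) referred to above are usually estimated by box counting: cover the point set with boxes of size eps, form the partition function Z(q, eps) = sum_i p_i^q over occupied boxes, and take D(q) as the slope of log Z against log eps divided by (q - 1). A minimal sketch (the uniform point set, box sizes, and q below are illustrative, not from the CfA analysis):

```python
import numpy as np

def generalized_dimension(points, q, eps_list):
    """Estimate D(q) from the scaling of the partition function
    Z(q, eps) = sum_i p_i^q, via a least-squares fit of
    log Z against log eps (q != 1)."""
    logs = []
    for eps in eps_list:
        bins = int(np.ceil(1.0 / eps))
        H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                 bins=bins, range=[[0, 1], [0, 1]])
        p = H[H > 0] / H.sum()          # occupancy probabilities of boxes
        logs.append((np.log(eps), np.log(np.sum(p ** q))))
    x, y = np.array(logs).T
    slope = np.polyfit(x, y, 1)[0]
    return slope / (q - 1)

rng = np.random.default_rng(1)
pts = rng.random((20000, 2))            # uniform points in the unit square
eps = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
print(f"D(2) estimate: {generalized_dimension(pts, q=2.0, eps_list=eps):.2f}")
```

For a uniformly filled unit square the estimate approaches D(q) = 2 for every q; a genuinely multifractal set instead yields a q-dependent spectrum of dimensions.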

Dominguez-Tenreiro, R.; Martinez, V.J.

1989-04-01

317

NASA Astrophysics Data System (ADS)

We have developed multi-dimensional constrained covariant density functional theories (MDC-CDFT) for finite nuclei in which the shape degrees of freedom βλμ with even μ, e.g., β20, β22, β30, β32, β40, etc., can be described simultaneously. The functional can be one of the following four forms: the meson exchange or point-coupling nucleon interactions combined with the non-linear or density-dependent couplings. For the pp channel, either the BCS approach or the Bogoliubov transformation is implemented. The MDC-CDFTs with the BCS approach for the pairing (in the following labelled as MDC-RMF models with RMF standing for "relativistic mean field") have been applied to investigate multi-dimensional potential energy surfaces and the non-axial octupole Y32-correlations in N = 150 isotones. In this contribution we present briefly the formalism of MDC-RMF models and some results from these models. The potential energy surfaces with and without triaxial deformations are compared and it is found that triaxiality plays an important role in the second fission barriers of actinide nuclei. In the study of Y32-correlations in N = 150 isotones, it is found that, for 248Cf and 260Fm, β32 > 0.03 and the energy is lowered by the β32 distortion by more than 300 keV; while for 246Cm and 252No, the pocket with respect to β32 is quite shallow.

Lu, Bing-Nan; Zhao, Jie; Zhao, En-Guang; Zhou, Shan-Gui

2014-03-01

318

Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer

NASA Astrophysics Data System (ADS)

The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near surface turbulence. Near surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger scale cold streaks that are consistent with the footprints of Langmuir Circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic spatial scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2], turbulent cell sizes were shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution, a new model to estimate the heat flux is applied, based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed.
In this contribution, we will quantify the effect of small scale processes on interfacial transport and relate it to gas transfer. References [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys., 13:11073-11087, 2013. [2] J. Schnieders, C. S. Garbe, W. L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.

Schnieders, Jana; Garbe, Christoph

2014-05-01

319

Data mining techniques for large-scale gene expression analysis

Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...

Palmer, Nathan Patrick

2011-01-01

320

The WAIS and Wechsler Memory Scale subtest scores of 256 neurologic and nonneurologic subjects were factor analyzed. The results supported the construct validity of the Wechsler Memory Scale as a measure of verbal learning and memory, attention and concentration, and orientation. Construct validity was not demonstrated for the Visual Reproduction subtest as a measure of visual memory. Suggestions are offered

Glenn J. Larrabee; Robert L. Kane; John R. Schuck

1983-01-01

321

An item response theory analysis of the Olweus Bullying scale.

In the present article, we used IRT (graded response) modeling as a useful technology for a detailed and refined study of the psychometric properties of the various items of the Olweus Bullying scale and the scale itself. The sample consisted of a very large number of Norwegian 4th-10th grade students (n = 48,926). The IRT analyses revealed that the scale was essentially unidimensional and had excellent reliability in the upper ranges of the latent bullying tendency trait, as intended and desired. Gender DIF effects were identified with regard to girls' use of indirect bullying by social exclusion and boys' use of physical bullying by hitting and kicking, but these effects were small and worked in opposite directions, having negligible effects at the scale level. Also, scale scores adjusted for DIF effects differed very little from non-adjusted scores. In conclusion, the empirical data were well characterized by the chosen IRT model and the Olweus Bullying scale was considered well suited for the conduct of fair and reliable comparisons involving different gender-age groups. Aggr. Behav., 2014. © 2014 Wiley Periodicals, Inc. PMID:25460720
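The graded response model used in the analysis specifies cumulative category curves P(X >= k | theta) = logistic(a(theta - b_k)) and obtains the probability of each ordered response category as the difference of adjacent cumulative curves. A minimal sketch (the item parameters below are hypothetical, not the Olweus scale estimates):

```python
import numpy as np

def grm_probs(theta, a, b):
    """Graded response model: cumulative curves P(X >= k | theta)
    are logistic in a*(theta - b_k); category probabilities are
    differences of adjacent cumulative curves."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 1), ..., P(X >= K-1)
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower                            # P(X = 0), ..., P(X = K-1)

# Hypothetical 4-category item: discrimination a and ordered thresholds b_k
p = grm_probs(theta=0.5, a=1.8, b=[-0.5, 0.4, 1.3])
print(p.round(3), p.sum())
```

By construction the category probabilities telescope to 1, and shifting theta moves probability mass toward higher categories.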

Breivik, Kyrre; Olweus, Dan

2014-12-01

322

ERIC Educational Resources Information Center

The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…

Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.

2010-01-01

323

ERIC Educational Resources Information Center

The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…

Schaffhauser, Dian

2009-01-01

324

Development of a scale down cell culture model using multivariate analysis as a qualification tool.

In characterizing a cell culture process to support regulatory activities such as process validation and Quality by Design, developing a representative scale down model for design space definition is of great importance. The manufacturing bioreactor should ideally reproduce bench scale performance with respect to all measurable parameters. However, due to intrinsic geometric differences between scales, process performance at manufacturing scale often varies from bench scale performance, typically exhibiting differences in parameters such as cell growth, protein productivity, and/or dissolved carbon dioxide concentration. Here, we describe a case study in which a bench scale cell culture process model is developed to mimic historical manufacturing scale performance for a late stage CHO-based monoclonal antibody program. Using multivariate analysis (MVA) as the primary data analysis tool, in addition to traditional univariate analysis techniques, to identify gaps between scales, process adjustments were implemented at bench scale, resulting in an improved scale down cell culture process model. Finally, we propose an approach for small scale model qualification including three main aspects: MVA, comparison of key physiological rates, and comparison of product quality attributes. PMID:24124180

Tsang, Valerie Liu; Wang, Angela X; Yusuf-Makagiansar, Helena; Ryll, Thomas

2014-01-01

325

In the first part of this study, daylighting levels in an actual classroom are compared to scale model measurements and to computer program predictions. Secondly, the daylighting effects in the building atrium are examined through the studies...

Kim, K. S.; Boyer, L. L.; Degelman, L. O.

1985-01-01

326

Scale Free Analysis and the Prime Number Theorem

We present an elementary proof of the prime number theorem. The relative error follows a golden ratio scaling law and respects the bound obtained from the Riemann hypothesis. The proof is derived in the framework of a scale free nonarchimedean extension of the real number system, exploiting the concept of relative infinitesimals introduced recently in connection with ultrametric models of Cantor sets. The extended real number system is realized as a completion of the field of rational numbers $Q$ under a new nonarchimedean absolute value, which treats arbitrarily small and large numbers separately from a finite real number.

Dhurjati Prasad Datta; Anuja Roy Choudhuri

2010-01-10

327

Analysis of deep inelastic scattering with $z$-dependent scale

Evolution of the parton densities at NLO in $\\alpha_S$ using $\\tilde W^2 = Q^2 (1-z)/z$ instead of the usual $Q^2$ for the scale of the running coupling $\\alpha_S$ is investigated. While this renormalisation scale change was originally proposed as the relevant one for $x \\to 1$, we explore the consequences for all $x$ with this choice. While it leads to no improvement to the description of DIS data, the nature of the gluon at low $x$, low $Q^2$ is different, avoiding the need for a `valence-like' gluon.
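At one loop, the proposed scale change amounts to evaluating the running coupling at W̃² = Q²(1−z)/z instead of Q²: as z → 1 the effective scale drops and the coupling grows, while at small z it rises above Q². A small numerical sketch (the values Λ = 0.2 GeV and n_f = 4 are assumed for illustration, and the one-loop formula is a simplification of the NLO evolution discussed in the abstract):

```python
import math

def alpha_s(Q2, Lambda2=0.04, nf=4):
    """One-loop running coupling alpha_s(Q^2) = 12*pi / ((33 - 2*nf) * ln(Q^2/Lambda^2)).
    Lambda2 = 0.04 GeV^2 corresponds to an assumed Lambda = 0.2 GeV."""
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(Q2 / Lambda2))

Q2 = 10.0  # GeV^2, illustrative
for z in (0.1, 0.5, 0.9):
    W2 = Q2 * (1.0 - z) / z    # the alternative scale Q^2 (1-z)/z
    print(f"z={z}: alpha_s(Q^2)={alpha_s(Q2):.3f}  alpha_s(W^2)={alpha_s(W2):.3f}")
```

The printout makes the qualitative point of the abstract visible: the coupling, and hence the evolution, is enhanced in the large-z region when the z-dependent scale is used.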

R. G. Roberts

1999-04-13

328

SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS

The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.

MICHAEL T. ITAMUA AND CLIFFORD K. HO

1998-06-04

329

Field-aligned currents' scale analysis performed with the Swarm constellation

NASA Astrophysics Data System (ADS)

We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.

Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter

2015-01-01

330

A Multidimensional Scaling Analysis of Students' Attitudes about Science Careers

ERIC Educational Resources Information Center

To encourage students to seek careers in Science, Technology, Engineering and Mathematics (STEM) fields, it is important to gauge students' implicit and explicit attitudes towards scientific professions. We asked high school and college students to rate the similarity of pairs of occupations, and then used multidimensional scaling (MDS) to create…

Masnick, Amy M.; Valenti, S. Stavros; Cox, Brian D.; Osman, Christopher J.

2010-01-01

331

Introducing Scale Analysis by Way of a Pendulum

ERIC Educational Resources Information Center

Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…

Lira, Ignacio

2007-01-01

332

Psychometric Analysis of Computer Science Help-Seeking Scales

ERIC Educational Resources Information Center

The purpose of this study was to develop scales to assess instrumental help seeking, executive help seeking, perceived benefits of help seeking, and avoidance of help seeking and to examine their psychometric properties by conducting factor and reliability analyses. As this is the first attempt to examine the latent structures underlying the…

Pajares, Frank; Cheong, Yuk Fai; Oberman, Paul

2004-01-01

333

Exergy analysis of domestic-scale solar water heaters

The solar water heater is the most popular means of solar energy utilization because of its technological feasibility and economic attraction compared with other kinds of solar energy utilization. Earlier assessments of domestic-scale solar water heaters were based on the first law of thermodynamics. However, this kind of assessment cannot perfectly describe the performance of solar water heaters, since the essence of energy

Wang Xiaowu; Hua Ben

2005-01-01

334

Bohr model and dimensional scaling analysis of atoms and molecules

It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here we review recent developments of the Bohr model that connect it with dimensional scaling procedures adapted from quantum chromodynamics. This approach treats electrons as point particles whose positions are determined by

Anatoly Svidzinsky; Goong Chen; Siu Chin; Moochan Kim; Dongxia Ma; Robert Murawski; Alexei Sergeev; Marlan Scully; Dudley Herschbach

2008-01-01

335

Analysis of Large-Scale Traveling Ionospheric Disturbances

It is shown that an acoustic-gravity wave interpretation of Chan and Villard's observations of large-scale traveling ionospheric disturbances is consistent with theory. By using the dispersion curves and kinetic energy profiles for atmospheric gravity waves recently computed by Pfeifer and Zarichny, Pfeifer and Gersten, and earlier by Press and Harkrider, it is possible to assign a mode, period, and height

Arthur F. Wickersham, Jr.

1964-01-01

336

An Exploratory Factor Analysis of the Differential Ability Scales.

ERIC Educational Resources Information Center

The primary goal of this study was to investigate the underlying structure of the Differential Ability Scales (DAS) using Exploratory Principal Axis Factoring (PAF) with 62 nonclinical preschoolers. While previous factor analyses of the DAS Core subtests revealed the derivation of two distinct factors, the current results revealed only one factor,…

Dunham, Mardis D.; McIntosh, David E.

337

The Multidimensional Fear of Death Scale: An Independent Analysis.

ERIC Educational Resources Information Center

Examined the factor structure and subscale reliabilities of an eight-dimensional measure of fear of death (the Multidimensional Fear of Death Scale) using a New Zealand sample. Comparison with the results of a United States study showed that both the subscale reliabilities and the factor structure were almost perfectly reproduced. (Author)

Walkey, Frank H.

1982-01-01

338

A Rasch Analysis of the Teachers Music Confidence Scale

ERIC Educational Resources Information Center

This article presents a new measure of teachers' confidence to conduct musical activities with young children; Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…

Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria

2007-01-01

339

Analysis of Small-Scale Hydraulic Actuation Jicheng Xia

Small-scale hydraulic systems can attain high levels of force and power while at the same time being relatively lightweight compared to an equivalent electromechanical system comprised of off-the-shelf components. Calculation results revealed that high operating pressures are needed for small-scale hydraulics to be lighter than the equivalent electromechanical system.

Durfee, William K.

340

Analysis of Small-Scale Hydraulic Systems Jicheng Xia

System power density was analyzed with simple physics models and compared to an equivalent electromechanical system. High operating pressures are needed for small-scale hydraulic power systems to attain these levels of force and power while being lighter than the equivalent electromechanical system.

Durfee, William K.

341

A Reliability Analysis of Goal Attainment Scaling (GAS) Weights

ERIC Educational Resources Information Center

Goal attainment scaling (GAS) has been considered to be one of the most versatile and appealing evaluation protocols available for human services. Aspects of the protocol that make the method so appealing to practitioners--that is, collaboratively working with individual clients to identify and assign weights to goals they will work to…

Marson, Stephen M.; Wei, Guo; Wasserman, Deborah

2009-01-01

342

Inclusive electron scattering from nuclei at low momentum transfer (corresponding to x>1) and moderate Q^2 is dominated by quasifree scattering from nucleons. In the impulse approximation, the cross section can be directly connected to the nucleon momentum distribution via the scaling function F(y). The breakdown of the y-scaling assumptions in certain kinematic regions has prevented extraction of nucleon momentum distributions from such a scaling analysis. With a slight modification to the y-scaling assumptions, it is found that scaling functions can be extracted which are consistent with the expectations for the nucleon momentum distributions.

J. Arrington

2003-06-13

343

Considering general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 solar masses, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the Vertex-CoCoNuT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies of electron antineutrinos and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M>10 M_sun as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of the electron antineutrino mean energy with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such "SASI neutrino chirps" reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.

B. Müller; H. -Th. Janka

2014-06-26

344

Large-scale computations in analysis of structures

Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancement of nonlinear analysis techniques and characterization of nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.

McCallen, D.B.; Goudreau, G.L.

1993-09-01

345

NASA Astrophysics Data System (ADS)

A scalable parallel and block-adaptive cubed-sphere grid simulation framework is described for solution of hyperbolic conservation laws in domains between two concentric spheres. In particular, the Euler and ideal magnetohydrodynamics (MHD) equations are considered. Compared to existing cubed-sphere grid algorithms, a novelty of the proposed approach involves the use of a fully multi-dimensional finite-volume method. This leads to important advantages when the treatment of boundaries and corners of the six sectors of the cubed-sphere grid is considered. Most existing finite-volume approaches use dimension-by-dimension differencing and require special interpolation or reconstruction procedures at ghost cells adjacent to sector boundaries in order to achieve an order of solution accuracy higher than unity. In contrast, in our multi-dimensional approach, solution blocks adjacent to sector boundaries can directly use physical cells from the adjacent sector as ghost cells while maintaining uniform second-order accuracy. This leads to important advantages in terms of simplicity of implementation for both parallelism and adaptivity at sector boundaries. Crucial elements of the proposed scheme are: unstructured connectivity of the six grid root blocks that correspond to the six sectors of the cubed-sphere grid, multi-dimensional k-exact reconstruction that automatically takes into account information from neighbouring cells isotropically and is able to automatically handle varying stencil size, and adaptive division of the solution blocks into smaller blocks of varying spatial resolution that are all treated exactly equally for inter-block communication, flux calculation, adaptivity and parallelization. 
The proposed approach is fully three-dimensional, whereas previous studies on cubed-sphere grids have been either restricted to two-dimensional geometries on the sphere or have grids and solution methods with limited capabilities in the third dimension in terms of adaptivity and parallelism. Numerical results for several problems, including systematic grid convergence studies, MHD bow-shock flows, and global modelling of solar wind flow are discussed to demonstrate the accuracy and efficiency of the proposed solution procedure, along with assessment of parallel computing scalability for up to thousands of computing cores.

Ivan, Lucian; De Sterck, Hans; Northrup, Scott A.; Groth, Clinton P. T.

2013-12-01

346

NASA Technical Reports Server (NTRS)

Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
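The two layers of constraints can be stated compactly: FBA imposes the steady-state stoichiometric balance S v = 0, and EBA additionally requires every nonzero flux to run down its free-energy gradient (v_i ΔG_i <= 0). A toy feasibility check (the 3-reaction network and ΔG values are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical toy network: A -> B (v1), B -> C (v2), A -> C (v3),
# with A and C exchanged across the boundary. Only the internal
# metabolite B must balance at steady state: v1 - v2 = 0.
S = np.array([[1.0, -1.0, 0.0]])

dG = np.array([-5.0, -3.0, -8.0])  # assumed reaction free-energy changes (kJ/mol)

def feasible(v, S, dG, tol=1e-9):
    """FBA constraint: S v = 0 (steady state).
    EBA-style constraint: each flux dissipates free energy, v_i * dG_i <= 0."""
    steady = np.allclose(S @ v, 0.0, atol=tol)
    thermo = np.all(v * dG <= tol)
    return bool(steady and thermo)

print(feasible(np.array([1.0, 1.0, 0.5]), S, dG))  # -> True: balanced, downhill
print(feasible(np.array([1.0, 2.0, 0.0]), S, dG))  # -> False: violates steady state
```

In practice FBA selects among the stoichiometrically feasible flux vectors by optimizing an objective (e.g. growth), and the EBA conditions then prune solutions that would require a reaction to run against its thermodynamic driving force.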

Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)

2001-01-01

347

An analysis of a large scale habitat monitoring application

Habitat and environmental monitoring is a driving application for wireless sensor networks. We present an analysis of data from a second generation sensor networks deployed during the summer and autumn of 2003. During a 4 month deployment, these networks, consisting of 150 devices, produced unique datasets for both systems and biological analysis. This paper focuses on nodal and network performance,

Robert Szewczyk; Alan M. Mainwaring; Joseph Polastre; John Anderson; David E. Culler

2004-01-01

348

Asset-based poverty analysis in rural Bangladesh: A comparison of principal component analysis and

A comparison of principal component analysis and other methods for asset-based poverty analysis in rural Bangladesh, reflecting the trend towards multi-dimensional poverty assessment. (The views expressed should not be regarded as those of SRI or The University of Leeds.)

Mound, Jon

349

Survival Analysis for a Large-Scale Forest Health Issue: Missouri Oak Decline

Survival analysis methodologies provide novel approaches for forest mortality analysis that may aid in detecting, monitoring, and mitigating large-scale forest health issues. This study examined survival analysis for evaluating a regional forest health issue – Missouri oak decline. With a statewide Missouri forest inventory, log-rank tests of the effects of covariates on the survivor function and equality of the

C. W. Woodall; P. L. Grambsch; W. Thomas; W. K. Moser

2005-01-01

350

Interactive Exploration and Analysis of Large Scale Simulations Using Topology-based Data...

Advanced visualization and data analysis are becoming an integral part of understanding scientific and engineering phenomena; such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological...

Tierny, Julien

351

ADVANCES IN MODAL ANALYSIS USING A ROBUST AND MULTI-SCALE METHOD

Cécile Picard, Christian Frisson. A modal analysis approach that efficiently extracts modal parameters for plausible sound synthesis. Modal analysis can be expensive for large complex systems; for this reason, it is performed in a preprocessing step.

Paris-Sud XI, Université de

352

Global-scale analysis of the carbon cycle sensitivity to drought on annual scales

NASA Astrophysics Data System (ADS)

We have optimized the Carnegie-Ames-Stanford-Approach (CASA) global biogeochemical model using FLUXNET data to better predict regional carbon budgets, with a strong focus on carbon release due to drought. Correlations between net primary productivity (NPP) derived from CASA driven by MODIS FPAR and FLUXNET gross primary productivity (GPP) were high in most European and North American sites, and the NPP produced by our model shows high correlation with GPP measurements, with an NPP-to-GPP ratio of around 0.5 for most sites. We tested all 60 FLUXNET sites for carbon cycle response to seasonal or annual scale drought. In particular, we focused on the drought event in Europe in 2003, which led to a substantial reduction in primary productivity. NPP generated by CASA was able to capture most of the FLUXNET signal and showed significant reduction during these drought events, illustrating that drought conditions reduced carbon uptake significantly, although, importantly, warmer springs and autumns may extend the growing season. After testing CASA's ability to model drought effects on terrestrial ecosystems for single sites, including the response in heterotrophic respiration, we ran CASA on a global scale. Our results show that primary productivity reduced by drought contributes a large share of the variance at annual scales. We also included better constraints and more physical realism in the soil moisture model, which impacts both NPP and heterotrophic respiration in CASA. This significantly improved the predicted net ecosystem exchange (NEE) patterns in Europe and North America. Improvements were most significant at grassland sites, confirming that grassland ecosystems may be more sensitive to soil moisture variability than other vegetation types.

Chen, Tiexi

2010-05-01

353

Multi-resolution analysis for ENO schemes

NASA Technical Reports Server (NTRS)

Given a function, u(x), which is represented by its cell-averages in cells which are formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
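The decomposition can be illustrated in 1-D with a Haar-type hierarchy of cell averages: each level stores the coarse averages plus the details needed to recover the finer grid exactly, and the details are small wherever the coarser grid already resolves u(x) to the prescribed accuracy. A minimal sketch on a uniform dyadic grid (not Harten's full unstructured-grid algorithm):

```python
import numpy as np

def coarsen(cell_avgs):
    """One coarsening level: the average over a coarse cell is the mean
    of the averages over its two child cells (uniform 1-D grid)."""
    return 0.5 * (cell_avgs[0::2] + cell_avgs[1::2])

def decompose(u, levels):
    """Multi-resolution decomposition of cell averages: keep the coarsest
    averages plus per-level details that make the transform invertible."""
    details = []
    for _ in range(levels):
        coarse = coarsen(u)
        pred = np.repeat(coarse, 2)   # piecewise-constant prediction of fine averages
        details.append(u - pred)      # small where u is locally smooth
        u = coarse
    return u, details

def reconstruct(coarse, details):
    u = coarse
    for d in reversed(details):
        u = np.repeat(u, 2) + d
    return u

x = np.linspace(0, 1, 64, endpoint=False) + 0.5 / 64   # cell centers
u = np.sin(2 * np.pi * x)                # stand-in for cell averages of a smooth field
c, d = decompose(u, 3)
assert np.allclose(reconstruct(c, d), u)  # the decomposition is exactly invertible
```

In an ENO setting one would threshold the small details and perform the expensive flux computations only on the coarse representation wherever the details fall below the prescribed tolerance.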

Harten, Ami

1991-01-01

354

Musical scales are built from notes that sound good together when sounded simultaneously (chords). The pleasing sound of harmony arises when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C). The result is the left brain meeting the right brain: a Pythagorean interval of overlapping notes. This synergy suggests less difference between the workings of the right brain and the left brain than common wisdom would dictate.

Murray Gibson

2007-04-27

355

Analysis plan for 1985 large-scale tests. Technical report

The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

McMullan, F.W.

1983-01-01

356

Wavelet multiscale analysis for Hedge Funds: Scaling and strategies

NASA Astrophysics Data System (ADS)

The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds by the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated from monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult-to-price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of scaling properties on the risk profile of a portfolio made up of Hedge Funds are studied using correlation matrices calculated over different time horizons.
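
The scale-by-scale correlation idea can be sketched with a Haar-filter stand-in for the MODWT: at level j the Haar detail is half the difference between a series and its circular shift by 2**(j-1) samples, and correlating the detail series of two assets gives a correlation per time scale. The paper's analysis would use proper MODWT filters (such as LA(8)) and real return series; the series below are synthetic stand-ins.

```python
import numpy as np

def haar_modwt_details(x, levels):
    """Haar MODWT-style detail coefficients by circular differencing.

    A minimal stand-in for the Maximum Overlap DWT: at level j the
    Haar detail is half the difference between the series and its
    circular shift by 2**(j-1) samples.  Production analyses would
    use a longer filter such as LA(8).
    """
    x = np.asarray(x, dtype=float)
    return [0.5 * (x - np.roll(x, 2 ** (j - 1)))
            for j in range(1, levels + 1)]

def scale_correlations(x, y, levels):
    """Correlation between two return series at each wavelet scale."""
    dx = haar_modwt_details(x, levels)
    dy = haar_modwt_details(y, levels)
    return [float(np.corrcoef(a, b)[0, 1]) for a, b in zip(dx, dy)]

rng = np.random.default_rng(0)
market = rng.standard_normal(1024)                      # stand-in for index returns
fund = 0.6 * market + 0.8 * rng.standard_normal(1024)   # hypothetical fund returns
corrs = scale_correlations(fund, market, levels=4)
# corrs holds one correlation per scale (2, 4, 8, 16 periods).
```

For these synthetic series the correlation is roughly the same (about 0.6) at every scale; in the paper's empirical data the interesting finding is precisely that it is not.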

Conlon, T.; Crane, M.; Ruskin, H. J.

2008-09-01

357

To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.

Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL); Bremer, Peer-Timo [Lawrence Livermore National Laboratory (LLNL); Doutriaux, Charles [Lawrence Livermore National Laboratory (LLNL); Patchett, John [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Shipman, Galen M [ORNL; Miller, Ross G [ORNL; Pugmire, Dave [ORNL; Smith, Brian E [ORNL; Steed, Chad A [ORNL; Bethel, E Wes [Lawrence Berkeley National Laboratory (LBNL); Childs, Hank [Lawrence Berkeley National Laboratory (LBNL); Krishnan, Harinarayan [Lawrence Berkeley National Laboratory (LBNL); Silva, Claudio T. [New York University, Center for Urban Sciences; Santos, Emanuele [Universidade Federal do Ceara, Ceara, Brazil; Koop, David [New York University; Ellqvist, Tommy [New York University; Poco, Jorge [Polytechnic Institute of New York University; Geveci, Berk [Kitware; Chaudhary, Aashish [Kitware; Bauer, Andy [Kitware; Pletzer, Alexander [Tech-X Corporation; Kindig, Dave [Tech-X Corporation; Potter, Gerald [National Aeronautics and Space Administration (NASA); Maxwell, Thomas P. [National Aeronautics and Space Administration (NASA)

2013-01-01

358

To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in interactive visual analysis of distributed large-scale data for the climate community.

Doutriaux, Charles [Lawrence Livermore National Laboratory (LLNL); Patchett, John [Los Alamos National Laboratory (LANL); Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL); Miller, Ross G [ORNL; Steed, Chad A [ORNL; Krishnan, Harinarayan [Lawrence Berkeley National Laboratory (LBNL); Silva, Claudio T. [New York University, Center for Urban Sciences; Chaudhary, Aashish [Kitware; Bremer, Peer-Timo [Lawrence Livermore National Laboratory (LLNL); Pugmire, Dave [ORNL; Bethel, E Wes [Lawrence Berkeley National Laboratory (LBNL); Childs, Hank [Lawrence Berkeley National Laboratory (LBNL); Prabhat, [Lawrence Berkeley National Laboratory (LBNL); Geveci, Berk [Kitware; Bauer, Andy [Kitware; Pletzer, Alexander [Tech-X Corporation; Poco, Jorge [Polytechnic Institute of New York University; Ellqvist, Tommy [New York University; Santos, Emanuele [Universidade Federal do Ceara, Ceara, Brazil; Potter, Gerald [National Aeronautics and Space Administration (NASA); Smith, Brian E [ORNL; Maxwell, Thomas P. [National Aeronautics and Space Administration (NASA); Kindig, Dave [Tech-X Corporation; Koop, David [New York University

2013-01-01

359

A two-scale finite element formulation for the dynamic analysis of heterogeneous materials

In the analysis of heterogeneous materials using a two-scale Finite Element Method (FEM), the usual assumption is that the Representative Volume Element (RVE) of the micro-scale is much smaller than the finite element discretization of the macro-scale. However, there are situations in which the RVE becomes comparable with, or even bigger than, the finite element. These situations are considered in this article from the perspective of a two-scale FEM dynamic analysis. Using the principle of virtual power, new equations for the fluctuating fields are developed in terms of velocities rather than displacements. To allow more flexibility in the analysis, a scaling deformation tensor is introduced together with a procedure for its determination. Numerical examples using the new approach are presented.

Ionita, Axinte [Los Alamos National Laboratory

2008-01-01

360

Development and Initial Analysis of Multiple Sclerosis Self-Management Scale

This article describes the development and initial psychometric analysis of the Multiple Sclerosis Self-Management Scale (MSSM). The scale was developed to provide a comprehensive and psychometrically sound assessment of self-management knowledge and practices among adults with multiple sclerosis (MS). Items were developed based on a review of the MS and self-management literature and professional consultation. The scale…

Malachy Bishop; Michael Frain

361

Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

ERIC Educational Resources Information Center

Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

2011-01-01

362

Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists

The use of DNA arrays to monitor gene expression at a genome-wide scale constitutes a fundamental advance in biology. In particular, the expression pattern of all genes in Saccharomyces cerevisiae can be interrogated using…

Michael Q…

363

Multi-scale Complexity Analysis on the Sequence of E. coli Complete Genome

Jin Wang, Qidong Zhang… We analyze the multi-scale density distribution of nucleotides from the complete Escherichia coli genome by applying the newly… density distribution of bases from this genome was obtained. In particular, we have discovered that G, C…

Ren, Kui

364

Rasch Rating Scale Analysis of Quality Indicators of Elementary and Secondary School Performance.

ERIC Educational Resources Information Center

Types of quality indicators (QIs) for elementary schools and secondary schools in Texas, the selection of indicators by district superintendents in Texas, and the subsequent rating scale analysis using Rasch measurement procedures were studied. QIs were scaled from 1 to 7, with 1 representing "not important", and 7 representing "very important".…

Schumacker, Randall E.

365

ERIC Educational Resources Information Center

A taxometric analysis of 3 factor scales extracted from the Health Problem Overstatement (HPO) scale of the Psychological Screening Inventory (PSI; R. I. Lanyon, 1970, 1978) was performed on the data from 1,240 forensic and psychiatric patients. Mean above minus below a cut, maximum covariance, and latent-mode factor analyses produced results…

Walters, Glenn D.; Berry, David T. R.; Lanyon, Richard I.; Murphy, Michael P.

2009-01-01

366

Large-scale genome-wide association analysis of bipolar disorder (Nat. Genet., author manuscript)

Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4. Pamela Sklar, Stephan Ripke, Laura J…

Boyer, Edmond

367

Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii

This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind project configuration that has been referred to as the 'Big Wind' project.

Not Available

2012-03-01

368

Brief Psychometric Analysis of the Self-Efficacy Teacher Report Scale

ERIC Educational Resources Information Center

This study provides preliminary analysis of reliability and validity of scores on the Self-Efficacy Teacher Report Scale, which was designed to assess teacher perceptions of self-efficacy of students aged 8 to 17 years. (Contains 3 tables.)

Erford, Bradley T.; Duncan, Kelly; Savin-Murphy, Janet

2010-01-01

369

Analysis of Heating Systems and Scale of Natural Gas-Condensing Water Boilers in Northern Zones

ICEBO2006, Shenzhen, China. Heating Technologies for Energy Efficiency, Vol. III-1-5. Yuanyuan Wu, Suilin Wang, Shuyuan Pan, Yongzheng Shi…

Wu, Y.; Wang, S.; Pan, S.; Shi, Y.

2006-01-01

370

Enabling Large-Scale Biomedical Analysis in the Cloud

Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. These planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an attractive solution because it jointly addresses large-scale storage and high-performance computing. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications will help biomedical research make its vast and diverse data meaningful and usable. PMID:24288665

Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

2013-01-01

371

A tree swaying in a turbulent wind: a scaling analysis.

A tentative scaling theory is presented of a tree swaying in a turbulent wind. It is argued that the turbulence of the air within the crown is in the inertial regime. An eddy causes a dynamic bending response of the branches according to a time criterion. The resulting expression for the penetration depth of the wind yields an exponent which appears to be consistent with that pertaining to the morphology of the tree branches. An energy criterion shows that the dynamics of the branches is basically passive. The possibility of hydrodynamic screening by the leaves is discussed. PMID:25169247

Odijk, Theo

2015-01-01

372

SCALE DEPENDENCIES IN STRUCTURAL ANALYSIS AS ILLUSTRATED BY CHEVRON FOLDS ALONG THE BEARTOOTH FRONT, WYOMING. A thesis by Robert Annan Cook, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1972. Major Subject: Geology.

Cook, Robert Annan

1972-01-01

373

Small-Scale Smart Grid Construction and Analysis

NASA Astrophysics Data System (ADS)

The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale smart grid (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show data acquisition to be three times more expensive than the grid itself, mainly because roughly 70% of data-acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investing in pure-sine-wave inverters in future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportions between the SSSG and previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage can also be researched on future iterations of the SSSG.

Surface, Nicholas James

374

Large-scale temporal analysis of computer and information science

NASA Astrophysics Data System (ADS)

The main aim of the project reported in this paper was twofold. One of the primary goals was to produce an extensive source of network data for bibliometric analyses of field dynamics in the case of Computer and Information Science. To this end, we rendered the raw material of the DBLP computer and infoscience bibliography into a comprehensive collection of dynamic network data, readily available for further statistical analysis. The other goal was to demonstrate the value of our data source via its use in mapping Computer and Information Science (CIS). An analysis of the evolution of CIS was performed in terms of collaboration (co-authorship) network dynamics. Dynamic network analysis covered more than three-quarters of a century (76 years, from 1936 to date). Network evolution was described both at the macro level and at the meso level (in terms of community characteristics). Results show that the development of CIS followed what appears to be a universal pattern of growing into a "mature" discipline.

Soos, Sandor; Kampis, George; Gulyás, László

2013-09-01

375

Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales

Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot-scale (10 MW/year) and full-scale (250 MW/year) production levels, and benchmarks these against estimated anode-supported cell costs at the 250 MW scale. This analysis shows that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.

Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault

2011-05-01

376

Psychometric analysis of the empathy quotient (EQ) scale

The psychometric properties of the empathy quotient (EQ) scale introduced by Baron-Cohen (2003) are examined. In particular, confirmatory factor analyses comparing a unifactorial structure and a three-correlated-factor structure suggest that the three-factor structure proposed by Lawrence et al. (2004) is a better fit. Exploratory analysis using modification indices suggests that it might be possible to measure the three…

Steven J. Muncer; Jonathan Ling

2006-01-01

377

Meta-Analysis of Scale Reliability Using Latent Variable Modeling

ERIC Educational Resources Information Center

A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…

Raykov, Tenko; Marcoulides, George A.

2013-01-01

378

Landscape structure analysis of Kansas at three scales

Recent research in landscape ecology has sought to define the underlying structure of landscape pattern as quantified by landscape pattern metrics. One method used by researchers to address this question involves statistical data reduction techniques. In this study, principal components analysis (PCA) was performed on 27 landscape pattern metrics derived from a Kansas land cover data base at three spatial
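
The data-reduction step described above, principal components analysis over many correlated pattern metrics, can be sketched with plain NumPy via the singular value decomposition. The 27-metric data set below is a synthetic stand-in, not the Kansas land-cover data.

```python
import numpy as np

# PCA via SVD on standardized metrics: a generic sketch of reducing many
# correlated landscape pattern metrics to a few components.  Random data
# with 3 underlying factors stand in for the 27 Kansas metrics.
rng = np.random.default_rng(1)
latent = rng.standard_normal((200, 3))              # 3 underlying factors
loadings = rng.standard_normal((3, 27))
X = latent @ loadings + 0.1 * rng.standard_normal((200, 27))

Xs = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each metric
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)                 # variance share per component
scores = Xs @ Vt[:3].T                              # observations in PC space
# By construction the first three components capture nearly all variance,
# mirroring the data-reduction idea of the study.
```

Standardizing before the SVD matters here: pattern metrics live on very different scales, and without it the components would simply track the largest-magnitude metrics.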

Jerry A. Griffith; Edward A. Martinko; Kevin P. Price

2000-01-01

379

Image processing and analysis techniques for reading kinetheodolite film scales

This report describes a series of techniques for processing and analyzing images. Although developed for a specific purpose, namely the automatic reading of angular information on Askania kinetheodolite film, most of the techniques are quite general, and potentially applicable to a wide variety of problems. The processes described include real time binarisation of a television signal, production and analysis of
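
The binarisation step mentioned above can be sketched as a simple software threshold; the report's real-time version thresholds the television signal in hardware, and the tiny "frame" below is a made-up example.

```python
import numpy as np

def binarise(frame, threshold=128):
    """Threshold an 8-bit grayscale frame to a binary image.

    A minimal software sketch of the binarisation step; the report's
    hardware implementation thresholds the analogue television signal
    in real time, which this only approximates.
    """
    return (np.asarray(frame) >= threshold).astype(np.uint8)

frame = np.array([[10, 200],
                  [130, 90]], dtype=np.uint8)
binary = binarise(frame)
# binary == [[0, 1], [1, 0]]
```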

A. M. Bagot

1982-01-01

380

Development of a statistical sampling method for uncertainty analysis with SCALE

A new statistical sampling sequence called Sampler has been developed for the SCALE code system. Random values for the input multigroup cross sections are determined by using the XSUSA program to sample uncertainty data provided in the SCALE covariance library. Using these samples, Sampler computes perturbed self-shielded cross sections and propagates the perturbed nuclear data through any specified SCALE analysis sequence, including those for criticality safety, lattice physics with depletion, and shielding calculations. Statistical analysis of the output distributions provides uncertainties and correlations in the desired responses. (authors)
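
The Sampler workflow described above (draw correlated samples of the input data from a covariance library, propagate each sample through the analysis sequence, then analyze the output distribution) can be sketched generically. The three input parameters, their covariance matrix, and the response function below are toy stand-ins, not SCALE data or SCALE calls.

```python
import numpy as np

# Toy stand-in for a statistical sampling sequence: draw correlated input
# samples from a covariance matrix, propagate each through a model, and
# read off the output mean, standard deviation, and input-output
# correlations.  All numbers here are hypothetical.
rng = np.random.default_rng(42)

mean = np.array([1.0, 2.0, 0.5])               # nominal input parameters
cov = np.array([[0.010, 0.002, 0.000],
                [0.002, 0.020, 0.001],
                [0.000, 0.001, 0.005]])        # assumed input covariance

def response(p):
    """Hypothetical analysis sequence: some nonlinear response k(p)."""
    return p[0] * np.sqrt(p[1]) / (1.0 + p[2])

samples = rng.multivariate_normal(mean, cov, size=5000)
k = np.array([response(p) for p in samples])

k_mean, k_std = k.mean(), k.std(ddof=1)        # output uncertainty
# Correlation of the response with each input, for sensitivity ranking.
sens = [float(np.corrcoef(samples[:, i], k)[0, 1]) for i in range(3)]
```

The same three-step shape (sample, propagate, analyze) applies regardless of how expensive the middle step is; in SCALE the propagation is a full criticality, depletion, or shielding calculation per sample.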

Williams, M.; Wiarda, D.; Smith, H.; Jessee, M. A.; Rearden, B. T. [Oak Ridge National Laboratory, P.O Box 2008, Oak Ridge, TN 37831-6354 (United States); Zwermann, W.; Klein, M.; Pautz, A.; Krzykacz-Hausmann, B.; Gallner, L. [Gesellschaft fuer Anlagen- und Reaktorsicherheit GRS, Forschungszentrum, Boltzmannstrasse 14, 85748 Garching (Germany)

2012-07-01

381

Development of a Statistical Sampling Method for Uncertainty Analysis with SCALE

A new statistical sampling sequence called Sampler has been developed for the SCALE code system. Random values for the input multigroup cross sections are determined by using the XSUSA program to sample uncertainty data provided in the SCALE covariance library. Using these samples, Sampler computes perturbed self-shielded cross sections and propagates the perturbed nuclear data through any specified SCALE analysis sequence, including those for criticality safety, lattice physics with depletion, and shielding calculations. Statistical analysis of the output distributions provides uncertainties and correlations in the desired responses.

Williams, Mark L [ORNL; Wiarda, Dorothea [ORNL; Smith, Harold J [ORNL; Jessee, Matthew Anderson [ORNL; Rearden, Bradley T [ORNL; Klein, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS); Zwermann, W. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS); Pautz, Andreas [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS)

2012-01-01

382

Computational solutions to large-scale data management and analysis

Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155

Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.

2011-01-01

383

Three-dimensional analysis of free-electron laser performance using brightness scaled variables

A three-dimensional analysis of radiation generation in a free-electron laser (FEL) is performed in the small signal regime. The analysis includes beam conditioning, harmonic generation, flat beams, and a new scaling of the FEL equations using the six-dimensional beam brightness. The six-dimensional beam brightness is an invariant under Liouvillian flow; therefore, any nondissipative manipulation of the phase-space, performed, for example, in order to optimize FEL performance, must conserve this brightness. This scaling is more natural than the commonly-used scaling with the one-dimensional growth rate. The brightness-scaled equations allow for the succinct characterization of the optimal FEL performance under various additional constraints. The analysis allows for the simple evaluation of gain enhancement schemes based on beam phase space manipulations such as emittance exchange and conditioning. An example comparing the gain in the first and third harmonics of round or flat and conditioned or unconditioned beams is presented.

Penn, Gregory; Gullans, M.; Penn, G.; Wurtele, J.S.; Zolotorev, M.

2008-06-11

384

Construct Validation of the Translated Version of the Work-Family Conflict Scale for Use in Korea

ERIC Educational Resources Information Center

Recently, the stress of work-family conflict has been a critical workplace issue for Asian countries, especially within those cultures experiencing rapid economic development. Our research purpose is to translate and establish construct validity of a Korean-language version of the Multi-Dimensional Work-Family Conflict (WFC) scale used in the U.S.…

Lim, Doo Hun; Morris, Michael Lane; McMillan, Heather S.

2011-01-01

385

Allometric scaling of marbofloxacin pharmacokinetics: a retrospective analysis.

The association between physiologically dependent pharmacokinetic parameters (CL(B), T1/2beta, Vd(ss)) of marbofloxacin and body weight was studied in eight animal species based on the allometric equation Y = a·W^b, where 'Y' is the pharmacokinetic parameter, 'W' is body weight, 'a' is the allometric coefficient (intercept) and 'b' is the exponent describing the relation between the pharmacokinetic parameter and body weight. The body clearance of marbofloxacin showed a significant (P < 0.0001) relation with size (body weight) across species, whereas half-life and volume of distribution did not. Although half-life and volume of distribution were not well correlated with body weight, the statistically significant association between body clearance and body weight supports the validity of allometric scaling for predicting pharmacokinetic parameters of marbofloxacin in animal species that have not yet been studied. However, further study considering larger sample sizes and other factors influencing the pharmacokinetics of marbofloxacin is recommended. PMID:24724476
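
The allometric fit reduces to ordinary linear regression after a log transform, since Y = a·W^b implies log Y = log a + b·log W. The body weights and clearance values below are fabricated to obey an exact power law, purely to illustrate the estimator.

```python
import numpy as np

# Fitting Y = a * W**b by linear regression on log-transformed data.
# The species weights and clearances are hypothetical illustration data.
W = np.array([0.25, 3.0, 35.0, 70.0, 450.0])   # body weight, kg (made up)
Y = 0.8 * W ** 0.75                            # synthetic clearance with b = 0.75

b, log_a = np.polyfit(np.log(W), np.log(Y), 1)  # slope = b, intercept = log a
a = np.exp(log_a)
# a and b recover the generating values; with real data, the r-squared of
# this same regression indicates how well body weight explains the parameter.
```

This is why the paper can validate scaling for clearance but not half-life: the regression is only meaningful when the log-log relation is actually linear.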

Yohannes, S; Hossain, Md Akil; Kim, J Y; Lee, S J; Kwak, D M; Suh, J W; Park, S C

2014-01-01

386

Analysis of Anxiety Scale and Related Elements in Endodontic Patients

INTRODUCTION: Patient anxiety is one of the problems in dentistry that has received attention in recent years; it can keep patients from receiving treatment free of stress. This study was conducted to determine the prevalence of anxiety and its related factors among endodontic patients. MATERIALS AND METHODS: The study was conducted on 150 patients referred to the Endodontic department of the dental school of Islamic Azad University, using a cross-sectional descriptive method in 2006. Patients were classified by the background characteristics of age, sex, and education, and by related factors such as previous dental visits, unfavorable experiences in the dental office, and the most common reason for visiting the dentist. The Dental Fear Survey (DFS) questionnaire was used, and patients were divided into three anxiety levels. The results were analyzed using Chi-square and Fisher exact tests. RESULTS: The highest anxiety scores were observed among patients aged 20-30, women, and those with less than diploma-level education (P<0.05). CONCLUSION: Dentists should improve their knowledge of the causes of anxiety and its preventive methods, and should provide treatment without annoyance and trauma. PMID:24348655

Akhavan, Hengameh; Mehrvarzfar, Payman; Sheikholeslami, Mahshid; Dibaj, Masomeh; Eslami, Shahrooz

2007-01-01

387

Physical Analysis and Scaling of a Jet and Vortex Actuator

NASA Technical Reports Server (NTRS)

Our previous studies have shown that the Jet and Vortex Actuator generates free-jet, wall-jet, and near-wall vortex flow fields. That is, the actuator can be operated in different modes by simply varying the driving frequency and/or amplitude. For this study, variations are made in the actuator plate and wide-slot widths and in sine/asymmetrical actuator plate input forcing (drivers) to further study the actuator-induced flow fields. Laser sheet flow visualization, particle-image velocimetry, and laser velocimetry are used to measure and characterize the actuator-induced flow fields. Laser velocimetry measurements indicate that the vortex strength increases with the driver repetition rate for a fixed actuator geometry (wide-slot and plate width). For a given driver repetition rate, the vortex strength increases as the plate width decreases, provided the wide-slot to plate-width ratio is fixed. Using an asymmetric plate driver, a stronger vortex is generated for the same actuator geometry and a given driver repetition rate. The nondimensional scaling provides the approximate ranges for operating the actuator in the free-jet, wall-jet, or vortex flow regimes. Finally, phase-locked velocity measurements from particle-image velocimetry indicate that the vortex structure is stationary, confirming previous computations. As expected, both the computations and the particle-image velocimetry measurements show unsteadiness near the wide-slot opening, which is indicative of mass ejection from the actuator.

Lachowicz, Jason T.; Yao, Chung-Sheng; Joslin, Ronald D.

2004-01-01

388

Exploratory and Confirmatory Factor Analysis of the Metacognition Scale for Primary School Students

ERIC Educational Resources Information Center

The purpose of this study is to develop the Metacognition Scale (MS) which is designed for primary school students. The sample of the study consisted of 426 primary school students in Izmir, Turkey. In order to examine the construct validity of the MS, exploratory factor analysis and confirmatory factor analysis were performed. For the validity of…

Yildiz, Eylem; Akpinar, Ercan; Tatar, Nilgun; Ergin, Omer

2009-01-01

389

A Large-Scale Sentiment Analysis for Yahoo! Answers

Onur Kucuktunc, The Ohio State University; Hakan Ferhatosmanoglu, Bilkent University, Ankara, Turkey (hakan@cs.bilkent.edu.tr). ABSTRACT: Sentiment extraction from online web documents has… By sentiment analysis, we refer to the problem of assigning a quantitative positive/negative mood to a short…

Ferhatosmanoglu, Hakan

390

A new framework for positioning a moving target is introduced by utilizing time differences of arrival (TDOA) and frequency differences of arrival (FDOA) measurements collected using an array of passive sensors. It exploits multidimensional scaling (MDS) analysis, which has been developed for data analysis in fields such as physics, geography, and biology. In particular, we present an accurate and…
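
The MDS step the framework relies on can be illustrated with classical (Torgerson) multidimensional scaling, which embeds points from a distance matrix by double-centering and eigendecomposition. This is a generic sketch: the four planar points below are made up, and the paper's own formulation builds a specific scalar-product matrix from TDOA/FDOA measurements rather than plain distances.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points from a pairwise distance
    matrix.  Double-center the squared distances into a Gram matrix B,
    then take coordinates from its leading eigenpairs.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]         # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Recover a 2-D geometry (up to rotation/reflection) from its distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, dim=2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
# D_rec matches D: the embedding preserves all pairwise distances.
```

The recovered coordinates differ from the originals by a rigid motion, which is exactly the ambiguity a TDOA/FDOA localizer must resolve with anchor (sensor) positions.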

He-Wen Wei; Rong Peng; Qun Wan; Zhang-Xin Chen; Shang-Fu Ye

2010-01-01

391

An analysis and validation pipeline for large-scale RNAi-based screens

Michael Plank, Guang Hu… a pipeline to prioritize these candidates incorporating effect sizes, functional enrichment analysis… associated with oxidative stress resistance; as a proof-of-concept of our pipeline we demonstrate…

de MagalhÃ£es, JoÃ£o Pedro

392

Adaptive Online Transmission of 3D TexMesh Using Scale-Space Analysis

Efficient online visualization of 3D meshes and photo-realistic texture is essential for a variety of applications, such as museum exhibits and medical images. In these applications, synthetic texture and a predefined set of views are not an option. We propose a mesh simplification algorithm based on scale-space analysis of the feature point distribution, combined with an associated analysis…

L. Irene Cheng; Pierre Boulanger

2004-01-01

393

Interface positions measured at the lattice constant scale by Auger analysis on chemical bevels

Interface positions measured at the lattice-constant scale by Auger analysis on chemical bevels… (the crystalline interfaces) are well resolved. Experimental results of Auger profiles obtained on chemical… is the chemical bevelling, where a thin-film analysis by in-depth profiling is obtained as a result of a surface…

Université Paris-Sud XI

394

Listening Factors: A Large-Scale Principal Components Analysis of Long-Term Music Listening

There are about as many strategies for listening to music as there are music enthusiasts… an empirical analysis of long-term music listening histories from the last.fm web service. It gives insight…

395

Multi-Scale Fractal Analysis of Image Texture and Pattern

NASA Technical Reports Server (NTRS)

Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.

Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.

1997-01-01
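The fractal dimension D discussed above is commonly estimated by box counting. The following is a generic sketch of that estimator on a binary image grid, not the authors' NDVI-specific procedure:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary image:
    count occupied boxes N(s) at several box sizes s, then fit
    log N(s) ~ -D log s and return D."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # trim so the grid divides evenly, then count non-empty s x s boxes
        m = mask[:h - h % s, :w - w % s]
        boxes = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square is a 2-D object, so D should be 2
img = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(img)
```

For real imagery one thresholds or level-slices the data first; the slope-versus-resolution behavior the abstract describes corresponds to repeating such an estimate at different pixel sizes.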

396

Brazilian version of the Jefferson Scale of Empathy: psychometric properties and factor analysis

Background Empathy is a central characteristic of medical professionalism and has recently gained attention in medical education research. The Jefferson Scale of Empathy is the most commonly used measure of empathy worldwide, and to date it has been translated into 39 languages. This study aimed to adapt the Jefferson Scale of Empathy to the Brazilian culture and to test its reliability and validity among Brazilian medical students. Methods The Portuguese version of the Jefferson Scale of Empathy was adapted to Brazil using back-translation techniques. This version was pretested among 39 fifth-year medical students in September 2010. During the final fifth- and sixth-year Objective Structured Clinical Examination (October 2011), 319 students were invited to respond to the scale anonymously. Cronbach’s alpha, exploratory factor analysis, item-total correlation, and gender comparisons were performed to check the reliability and validity of the scale. Results The student response rate was 93.7% (299 students). Cronbach’s alpha coefficient for the scale was 0.84. A principal component analysis confirmed the construct validity of the scale for three main factors: Compassionate Care (first factor), Ability to Stand in the Patient’s Shoes (second factor), and Perspective Taking (third factor). Gender comparisons did not reveal differences in the scores between female and male students. Conclusions The adapted Brazilian version of the Jefferson Scale of Empathy proved to be a valid, reliable instrument for use in national and cross-cultural studies in medical education. PMID:22873730

2012-01-01

397

Multi-Scale Fractal Analysis of Image Texture and Pattern

NASA Technical Reports Server (NTRS)

Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

1999-01-01

399

Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing carbon nanometer-scale targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction, and its predictions have been validated for the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh, and its particle advance (which typically dominates computation) has been optimized extensively for many different supercomputers. Laser-driven ions lead to the realization of promising applications: ion-based fast ignition, active interrogation, and hadron therapy.

Wu, Hui-Chun [Los Alamos National Laboratory; Hegelich, B.M. [Los Alamos National Laboratory; Fernandez, J.C. [Los Alamos National Laboratory; Shah, R.C. [Los Alamos National Laboratory; Palaniyappan, S. [Los Alamos National Laboratory; Jung, D. [Los Alamos National Laboratory; Yin, L [Los Alamos National Laboratory; Albright, B.J. [Los Alamos National Laboratory; Bowers, K. [Guest Scientist of XCP-6; Huang, C. [Los Alamos National Laboratory; Kwan, T.J. [Los Alamos National Laboratory

2012-06-19

400

and reliable measure. The application of the measure indicated that disaster resilience is an important predictor of flood property damage and flood related deaths in the U.S. Gulf coast region. Also, the findings indicated that Florida counties are the most...

Mayunga, Joseph S.

2010-07-14

401

Analysis of Aspergillus nidulans metabolism at the genome-scale

Background Aspergillus nidulans is a member of a diverse group of filamentous fungi, sharing many of the properties of its close relatives with significance in the fields of medicine, agriculture and industry. Furthermore, A. nidulans has been a classical model organism for studies of developmental biology and gene regulation, and thus it has become one of the best-characterized filamentous fungi. It was the first Aspergillus species to have its genome sequenced, and automated gene prediction tools predicted 9,451 open reading frames (ORFs) in the genome, of which less than 10% were assigned a function. Results In this work, we have manually assigned functions to 472 orphan genes in the metabolism of A. nidulans, by using a pathway-driven approach and by employing comparative genomics tools based on sequence similarity. The central metabolism of A. nidulans, as well as biosynthetic pathways of relevant secondary metabolites, was reconstructed based on detailed metabolic reconstructions available for A. niger and Saccharomyces cerevisiae, and information on the genetics, biochemistry and physiology of A. nidulans. Thereby, it was possible to identify metabolic functions without a gene associated, and to look for candidate ORFs in the genome of A. nidulans by comparing its sequence to sequences of well-characterized genes in other species encoding the function of interest. A classification system, based on defined criteria, was developed for evaluating and selecting the ORFs among the candidates, in an objective and systematic manner. The functional assignments served as a basis to develop a mathematical model, linking 666 genes (both previously and newly annotated) to metabolic roles.
The model was used to simulate metabolic behavior and additionally to integrate, analyze and interpret large-scale gene expression data concerning a study on glucose repression, thereby providing a means of upgrading the information content of experimental data and getting further insight into this phenomenon in A. nidulans. Conclusion We demonstrate how pathway modeling of A. nidulans can be used as an approach to improve the functional annotation of the genome of this organism. Furthermore we show how the metabolic model establishes functional links between genes, enabling the upgrade of the information content of transcriptome data. PMID:18405346

David, Helga; Özçelik, İlknur Ş.; Hofmann, Gerald; Nielsen, Jens

2008-01-01

402

Scaling analysis applied to the NORVEX code development and thermal energy flight experiment

NASA Technical Reports Server (NTRS)

A scaling analysis is used to study the dominant flow processes that occur in molten phase change material (PCM) under 1 g and microgravity conditions. Results of the scaling analysis are applied to the development of the NORVEX (NASA Oak Ridge Void Experiment) computer program and the preparation of the Thermal Energy Storage (TES) flight experiment. The NORVEX computer program, which is being developed to predict melting and freezing with void formation of the PCM in a 1 g or microgravity environment, is described. NORVEX predictions are compared with the scaling and similarity results. The approach to be used to validate NORVEX with TES flight data is also discussed. Similarity and scaling show that the inertial terms must be included as part of the momentum equation in either the 1 g or microgravity environment (a creeping flow assumption is invalid). A 10^-4 g environment was found to be a suitable microgravity environment for the proposed PCM.

Skarda, J. R. L.; Namkoong, David; Darling, Douglas

1991-01-01

403

Large scale air monitoring: lichen vs. air particulate matter analysis.

Biological indicator organisms have been widely used for monitoring and banking purposes for many years. Although the complexity of the interactions between organisms and their environment is generally not easily comprehensible, environmental quality assessment using the bioindicator approach offers some convincing advantages compared to direct analysis of soil, water, or air. Measurement of air particulates is restricted to experienced laboratories with access to expensive sampling equipment. Additionally, the amount of material collected generally is just enough for one determination per sampling, and no multidimensional characterization might be possible. Further, fluctuations in air masses have a pronounced effect on the results from air filter sampling. Combining the integrating property of bioindicators with the worldwide availability and particular matrix characteristics of air particulate matter as a prerequisite for global monitoring of air pollution is discussed. A new approach for sampling urban dust using large volume filtering devices installed in air conditioners of large hotel buildings is assessed. A first experiment was initiated to collect air particulates (300-500 g each) from a number of hotels during a period of 3-4 months by successive vacuum cleaning of used inlet filters from high volume air conditioning installations, reflecting average concentrations per 3 months in different large cities. This approach is expected to be upgraded and applied for global monitoring. Highly positively correlated elements were found in lichens, such as K/S, Zn/P, and the rare earth elements (REE), and a significant negative correlation between Hg and Cu was observed in these samples. The ratio of concentrations of elements in dust and Usnea spp. is highest for Cz, Zn and Fe (400-200) and lowest for elements such as Ca, Rb, and Sr (20-10). PMID:10474261

Rossbach, M; Jayasekera, R; Kniewald, G; Thang, N H

1999-07-15

404

Based on a new definition of dilation, a scale-discrete version of spherical multiresolution is described, starting from a scale-discrete wavelet transform on the sphere. Depending on the type of application, different families of wavelets are chosen. In particular, spherical Shannon wavelets are constructed that form an orthogonal multiresolution analysis. Finally, fully discrete wavelet approximation is discussed

Willi Freeden; Michael Schreiner

1998-01-01

405

Spatial scaling: Its analysis and effects on animal movements in semiarid landscape mosaics

The research conducted under this agreement focused in general on the effects of environmental heterogeneity on movements of animals and materials in semiarid grassland landscapes, on the form of scale-dependency of ecological patterns and processes, and on approaches to extrapolating among spatial scales. The findings are summarized in a series of published and unpublished papers that are included as the main body of this report. We demonstrated the value of "experimental model systems" employing observations and experiments conducted in small-scale microlandscapes to test concepts relating to flows of individuals and materials through complex, heterogeneous mosaics. We used fractal analysis extensively in this research, and showed how fractal measures can produce insights and lead to questions that do not emerge from more traditional scale-dependent measures. We developed new concepts and theory to deal with scale-dependency in ecological systems and with integrating individual movement patterns into considerations of population and ecosystem dynamics.

Wiens, J.A.

1992-09-01

406

Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant contributions of higher modes by considering elastic deformation of the second-mode SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.

Kalkan, Erol; Chopra, Anil K.

2010-01-01

407

MULTI-SCALE MORPHOLOGICAL ANALYSIS OF SDSS DR5 SURVEY USING THE METRIC SPACE TECHNIQUE

Following the novel development and adaptation of the Metric Space Technique (MST), a multi-scale morphological analysis of the Sloan Digital Sky Survey (SDSS) Data Release 5 (DR5) was performed. The technique was adapted to perform a space-scale morphological analysis by filtering the galaxy point distributions with a smoothing Gaussian function, thus giving quantitative structural information on all size scales between 5 and 250 Mpc. The analysis was performed on a dozen slices of a volume of space containing many newly measured galaxies from the SDSS DR5 survey. Using the MST, observational data were compared to galaxy samples taken from N-body simulations with current best estimates of cosmological parameters and from random catalogs. By using the maximal ranking method among MST output functions, we also develop a way to quantify the overall similarity of the observed samples with the simulated samples.

Wu Yongfeng; Batuski, David J. [Department of Physics and Astronomy, University of Maine, Orono, ME 04469 (United States); Khalil, Andre, E-mail: yongfeng.wu@umit.maine.ed [Department of Mathematics and Statistics and Institute for Molecular Biophysics, University of Maine, Orono, ME 04469 (United States)

2009-12-20
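The space-scale idea above, filtering a point distribution with Gaussians of increasing width, can be sketched generically with FFT convolution on a periodic grid. This is an illustration only, not the Metric Space Technique implementation:

```python
import numpy as np

def gaussian_smooth(field, sigma):
    """Smooth a 1-D density field with a periodic Gaussian filter of
    width sigma (in grid units) via FFT convolution."""
    n = len(field)
    k = 2 * np.pi * np.fft.fftfreq(n)
    kernel = np.exp(-0.5 * (k * sigma) ** 2)     # Fourier-space Gaussian
    return np.fft.ifft(np.fft.fft(field) * kernel).real

# Bin a random point distribution onto a grid, then examine it at
# several smoothing scales (hypothetical scales, for illustration)
rng = np.random.default_rng(1)
counts, _ = np.histogram(rng.uniform(0, 1, 500), bins=256, range=(0, 1))
scales = [2, 8, 32]
smoothed = [gaussian_smooth(counts.astype(float), s) for s in scales]
```

Because the kernel has unit response at zero frequency, the total count is conserved at every scale, while fluctuations (and hence structural detail) shrink as the smoothing scale grows.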

408

Studying the time scale dependence of environmental variables predictability using fractal analysis.

Prediction of meteorological and air quality variables motivates a lot of research in the atmospheric sciences and exposure assessment communities. An interesting related issue regards the relative predictive power that can be expected at different time scales, and whether it vanishes altogether at certain ranges. An improved understanding of our predictive powers enables better environmental management and more efficient decision making processes. Fractal analysis is commonly used to characterize the self-affinity of time series. This work introduces the Continuous Wavelet Transform (CWT) fractal analysis method as a tool for assessing environmental time series predictability. The high temporal scale resolution of the CWT enables detailed information about the Hurst parameter, a common temporal fractality measure, and thus about time scale variations in predictability. We analyzed a few years' records of half-hourly air pollution and meteorological time series from which the trivial seasonal and daily cycles were removed. We encountered a general trend of decreasing Hurst values from about 1.4 (good autocorrelation and predictability) in the sub-daily time scale to 0.5 (which implies complete randomness) in the monthly to seasonal scales. The air pollutants' predictability follows that of the meteorological variables in the short time scales but is better at longer scales. PMID:20465249

Yuval; Broday, David M

2010-06-15
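The entry above estimates the Hurst parameter via the CWT; that estimator is not reproduced here. As a simpler classical alternative for illustration, the rescaled-range (R/S) statistic recovers H as a log-log slope:

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent:
    since E[R/S] ~ c * n**H, H is the slope of log(R/S) vs log(n)."""
    x = np.asarray(x, dtype=float)
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())          # demeaned cumulative sum
            r = z.max() - z.min()                # range of the cumulative sum
            s = w.std()                          # window standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return H

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # uncorrelated noise: H should sit near 0.5
H = hurst_rs(white)
```

Values above 0.5 indicate persistence (better autocorrelation-based predictability), values near 0.5 indicate randomness, matching the abstract's interpretation of its scale-dependent Hurst estimates.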

409

Intrinsic multi-scale analysis: a multi-variate empirical mode decomposition framework

A novel multi-scale approach for quantifying both inter- and intra-component dependence of a complex system is introduced. This is achieved using empirical mode decomposition (EMD), which, unlike conventional scale-estimation methods, obtains a set of scales reflecting the underlying oscillations at the intrinsic scale level. This enables the data-driven operation of several standard data-association measures (intrinsic correlation, intrinsic sample entropy (SE), intrinsic phase synchrony) and, at the same time, preserves the physical meaning of the analysis. The utility of multi-variate extensions of EMD is highlighted, both in terms of robust scale alignment between system components, a pre-requisite for inter-component measures, and in the estimation of feature relevance. We also illuminate that the properties of EMD scales can be used to decouple amplitude and phase information, a necessary step in order to accurately quantify signal dynamics through correlation and SE analysis which are otherwise not possible. Finally, the proposed multi-scale framework is applied to detect directionality, and higher order features such as coupling and regularity, in both synthetic and biological systems. PMID:25568621

Looney, David; Hemakom, Apit; Mandic, Danilo P.

2015-01-01

410

A new approach for modeling and analysis of molten salt reactors using SCALE

The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

2013-07-01

411

HyFinBall: a two-handed, hybrid 2D/3D desktop VR interface for multi-dimensional visualization

NASA Astrophysics Data System (ADS)

This paper presents the concept, working prototype and design space of a two-handed, hybrid spatial user interface for minimally immersive desktop VR targeted at multi-dimensional visualizations. The user interface supports dual button balls (6DOF isotonic controllers with multiple buttons) which automatically switch between 6DOF mode (xyz + yaw, pitch, roll) and planar-3DOF mode (xy + yaw) upon contacting the desktop. The mode switch automatically switches a button ball's visual representation between a 3D cursor and a mouse-like 2D cursor while also switching the available user interaction techniques (ITs) between 3D and 2D ITs. Further, the small form factor of the button ball allows the user to engage in 2D multi-touch or 3D gestures without releasing and re-acquiring the device. We call the device and hybrid interface the HyFinBall interface, an abbreviation for 'Hybrid Finger Ball.' We describe the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations and interactions.

Cho, Isaac; Wang, Xiaoyu; Wartell, Zachary J.

2013-12-01

412

A multi-dimensional fractionation and characterization scheme was developed for fast acquisition of the relevant molecular properties for protein separation from crude biological feedstocks by ion-exchange chromatography (IEX), hydrophobic interaction chromatography (HIC), and size-exclusion chromatography. In this approach, the linear IEX isotherm parameters were estimated from multiple linear salt-gradient IEX data, while the nonlinear IEX parameters as well as the HIC isotherm parameters were obtained by the inverse method under column overloading conditions. Collected chromatographic fractions were analyzed by gel electrophoresis for estimation of molecular mass, followed by mass spectrometry for protein identification. The usefulness of the generated molecular properties data for rational decision-making during downstream process development was equally demonstrated. Monoclonal antibody purification from crude hybridoma cell culture supernatant was used as case study. The obtained chromatographic parameters only apply to the employed stationary phases and operating conditions, hence prior high throughput screening of different chromatographic resins and mobile phase conditions is still a prerequisite. Nevertheless, it provides a quick, knowledge-based approach for rationally synthesizing purification cascades prior to more detailed process optimization and evaluation. PMID:22688729

Nfor, Beckley K; Ahamed, Tangir; Pinkse, Martijn W H; van der Wielen, Luuk A M; Verhaert, Peter D E M; van Dedem, Gijs W K; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel

2012-12-01

413

NASA Astrophysics Data System (ADS)

In this paper, we study the simulation of nonlinear Schrödinger equation in one, two and three dimensions. The proposed method is based on a time-splitting method that decomposes the original problem into two parts, a linear equation and a nonlinear equation. The linear equation in one dimension is approximated with the Chebyshev pseudo-spectral collocation method in space variable and the Crank-Nicolson method in time; while the nonlinear equation with constant coefficients can be solved exactly. As the goal of the present paper is to study the nonlinear Schrödinger equation in the large finite domain, we propose a domain decomposition method. In comparison with the single-domain, the multi-domain methods can produce a sparse differentiation matrix with fewer memory space and less computations. In this study, we choose an overlapping multi-domain scheme. By applying the alternating direction implicit technique, we extend this efficient method to solve the nonlinear Schrödinger equation both in two and three dimensions, while for the solution at each time step, it only needs to solve a sequence of linear partial differential equations in one dimension, respectively. Several examples for one- and multi-dimensional nonlinear Schrödinger equations are presented to demonstrate high accuracy and capability of the proposed method. Some numerical experiments are reported which show that this scheme preserves the conservation laws of charge and energy.

Taleei, Ameneh; Dehghan, Mehdi

2014-06-01
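The time-splitting idea above (the linear substep advanced in a spectral basis, the nonlinear substep solved exactly pointwise) can be sketched with a 1-D Fourier split-step scheme. This is a generic illustration, not the paper's Chebyshev multi-domain method; a built-in check is that each substep is unitary, so the discrete charge (L2 norm) is conserved:

```python
import numpy as np

def split_step_nls(u0, x, dt, steps, kappa=1.0):
    """Strang split-step Fourier solver for the 1-D nonlinear Schroedinger
    equation i u_t = -(1/2) u_xx - kappa |u|^2 u on a periodic grid."""
    n = len(x)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    lin = np.exp(-0.5j * k**2 * dt)              # exact linear step in Fourier space
    u = u0.astype(complex)
    for _ in range(steps):
        u = u * np.exp(1j * kappa * np.abs(u)**2 * dt / 2)   # half nonlinear step
        u = np.fft.ifft(lin * np.fft.fft(u))                 # full linear step
        u = u * np.exp(1j * kappa * np.abs(u)**2 * dt / 2)   # half nonlinear step
    return u

x = np.linspace(-20, 20, 256, endpoint=False)
u0 = 1 / np.cosh(x)                              # sech pulse initial condition
u = split_step_nls(u0, x, dt=0.01, steps=200)
# Charge (discrete L2 norm) before and after evolution
charge0 = np.sum(np.abs(u0)**2)
charge1 = np.sum(np.abs(u)**2)
```

The linear multiplier has unit modulus and the nonlinear substep is a pure phase rotation, so charge is conserved to machine precision, mirroring the conservation-law checks reported in the abstract.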

414

NASA Astrophysics Data System (ADS)

Many significant problems in rock engineering require consideration of fluid flow through natural fractures in rock or the mechanical response of a fractured rock mass. Accurate prediction of flow volumes, rates and mass transport through natural fracture systems, and their mechanical response, is critical for design and licensing of nuclear waste repositories, optimization of recovery from many petroleum reservoirs, and also in solution mining, groundwater resource development and protection, and hardrock civil engineering projects. Many of these projects require a large-scale, 3D numerical model for flow, transport or mechanical simulation or visualization. A fundamental problem in constructing these models is that fracture data from wells, boreholes, geophysical profiles or surface outcrops represents a small portion of the rock volume under consideration. Not only does the data represent a very small proportion of the reservoir or rock mass, it also represents fracturing in very restricted size scales. Thus scaling analysis is critical to accurately constructing a fracture model from the data. This paper describes, through two case examples, how scaling analysis techniques have been used to develop models of natural fracturing to support the design and licensing of a high level nuclear waste repository in Finland, and for optimization of a tertiary recovery project in an aging oil field in the US. A new technique for scaling fracture sizes is presented. Together, these two examples illustrate the importance of the scaling analyses, pitfalls in carrying out the analyses, and new methods to improve the 3D characterization of naturally fractured rock masses.

La Pointe, P. R.

2001-12-01

415

Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is attained. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

2014-01-01
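One standard way to give ordinal levels an interval-like scale, in the spirit of the ordinal-to-interval conversion mentioned above, is ridit scoring. The abstract does not specify this particular conversion, so treat it as an illustration:

```python
import numpy as np

def ridit_scores(x):
    """Ridit scoring: map each ordinal level to the cumulative proportion
    of observations below it plus half the proportion at it, giving an
    interval-like score in (0, 1) whose sample mean is exactly 0.5."""
    x = np.asarray(x)
    levels, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    score = {lv: b + pi / 2 for lv, b, pi in zip(levels, below, p)}
    return np.array([score[v] for v in x])

# Hypothetical 10-level ordinal ratings (the WBCD variables use 10 levels)
x = np.array([1, 1, 1, 2, 2, 3, 10])
r = ridit_scores(x)   # e.g. level 1 -> 3/14, level 10 -> 13/14
```

The resulting scores can then be fed to interval-scale methods (distance-based clustering, correlation), which is the practical point of such a conversion.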

416

Large-scale nonlinear structural analysis simulation on CRAY parallel/vector supercomputers

Two large-scale nonlinear structural analysis simulations of a nine degree model of the Space Shuttle Redesigned Solid Rocket Motor (RSRM) factory joint are described. The first analysis simulates a burst pressure test where a pressure load was incrementally applied from 0 to 1500 p.s.i. This simulation was used to assist in evaluating and defining an error criterion based on correlation

Eugene Poole; John Bauer; Troy Stratton; Tom Weidner

1993-01-01

417

SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems

Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196, issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.

Mueller, Don [ORNL]; Rearden, Bradley T. [ORNL]

2009-01-01

418

A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions as a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

Lopez, C.; Koski, J.A.; Razani, A.

2000-01-06

419

Motor skill acquisition across short and long time scales: a meta-analysis of neuroimaging data.

In this systematic review and meta-analysis, we explore how the time scale of practice affects patterns of brain activity associated with motor skill acquisition. Fifty-eight studies that involved skill learning with healthy participants (117 contrasts) met inclusion criteria. Two meta-contrasts were coded: decreases: peak coordinates that showed decreases in brain activity over time; increases: peak coordinates that showed increases in activity over time. Studies were grouped by practice time scale: short (≤1 h; 25 studies), medium (>1 and ≤24 h; 18 studies), and long (>24 h to 5 weeks; 17 studies). Coordinates were analyzed using Activation Likelihood Estimation to show brain areas that were consistently activated for each contrast. Across time scales, consistent decreases in activity were shown in prefrontal and premotor cortex, the inferior parietal lobules, and the cerebellar cortex. Across the short and medium time scales there were consistent increases in supplementary and primary motor cortex and dentate nucleus. At the long time scale, increases were seen in posterior cingulate gyrus, primary motor cortex, putamen, and globus pallidus. Comparisons between time scales showed that increased activity in M1 at medium time scales was more spatially consistent across studies than increased activity in M1 at long time scales. Further, activity in the striatum (viz. putamen and globus pallidus) was consistently more rostral in the medium time scale and consistently more caudal in the long time scale. These data support neurophysiological models that posit that both a cortico-cerebellar system and a cortico-striatal system are active, but at different time points, during motor learning, and suggest there are associative/premotor and sensorimotor networks active within each system. PMID:24831923
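The Activation Likelihood Estimation step used above can be sketched in one dimension (the coordinates, kernel width, and grid are illustrative; real ALE works on 3-D foci with empirically derived smoothing kernels):

```python
import numpy as np

# Each reported peak is blurred into a probability map; the maps are then
# combined with the probabilistic union 1 - prod(1 - p), so locations where
# peaks from many studies converge receive high ALE scores.
def gaussian_map(center, grid, sigma=5.0):
    return np.exp(-0.5 * ((grid - center) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

grid = np.arange(0.0, 100.0)
peaks = [40, 42, 41, 80]              # hypothetical peak coordinates (mm)
maps = [gaussian_map(c, grid) for c in peaks]
ale = 1.0 - np.prod([1.0 - m for m in maps], axis=0)

# Three convergent peaks near x = 41 outscore the single isolated peak at 80
print(ale[41] > ale[80])
```

In the real procedure the resulting ALE map is then thresholded against a null distribution of randomly placed foci to find the consistently activated clusters.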

Lohse, K R; Wadden, K; Boyd, L A; Hodges, N J

2014-07-01

420

The Use of Weighted Graphs for Large-Scale Genome Analysis

There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
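The core idea, replacing many pair-wise genome comparisons with a single weighted graph, can be sketched with toy data (the genome and enzyme names are hypothetical, and a plain co-occurrence count stands in for the paper's richer taxonomic, isoenzymatic, and sequence-similarity weights):

```python
# One weighted graph summarizes all genomes at once: each edge joins two
# enzymes, weighted by the number of genomes in which they co-occur.
from itertools import combinations
from collections import Counter

genomes = {                      # hypothetical genome -> enzyme sets
    "g1": {"E1", "E2", "E3"},
    "g2": {"E1", "E2"},
    "g3": {"E2", "E3"},
}

weights = Counter()
for enzymes in genomes.values():
    for pair in combinations(sorted(enzymes), 2):
        weights[pair] += 1       # edge weight = co-occurrence count

print(weights[("E1", "E2")])  # → 2
```

Because the graph is built in a single pass over the genomes, adding a new genome costs one update rather than one comparison against every existing genome.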

Zhou, Fang; Toivonen, Hannu; King, Ross D.

2014-01-01

421

Confirmatory factor analysis of the Brunel Mood Scale for use with water-skiing competition.

The purpose of the present study was to investigate the factorial validity of the Brunel Mood Scale, which measures anger, confusion, depression, fatigue, tension, and vigor, for water-skiers. Participants were 345 water-skiers (age range 16 to 39 years, men: n=311, women: n=34) who completed the scale approximately 1 hour before a water-skiing competition. Confirmatory factor analysis indicated support for the validity of the 6-factor model, with a Comparative Fit Index of .90 and Root Mean Squared Error of Approximation of .07. Internal consistency coefficients were above the .70 criterion. It is suggested that the Brunel Mood Scale shows factorial validity for use with water-skiers and that researchers should continue to assess validation of the Brunel Mood Scale with other measures and with specific appropriate samples. PMID:14620257
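The two fit indices reported above are simple functions of the model and baseline chi-square statistics. A sketch follows; the chi-square values are hypothetical, chosen only so the indices land near the reported .90 and .07, and are not the study's actual statistics:

```python
from math import sqrt

def cfi(chi2_m, df_m, chi2_0, df_0):
    """Comparative Fit Index from model and null-model chi-squares."""
    d_m = max(chi2_m - df_m, 0.0)
    d_0 = max(chi2_0 - df_0, d_m, 0.0)
    return 1.0 - d_m / d_0 if d_0 > 0 else 1.0

def rmsea(chi2_m, df_m, n):
    """Root Mean Squared Error of Approximation for sample size n."""
    return sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# e.g. a model tested on N = 345 respondents (as in the study)
print(round(cfi(700.0, 260, 4700.0, 300), 2),
      round(rmsea(700.0, 260, 345), 3))
```

CFI compares the model's misfit to that of an independence baseline, while RMSEA penalizes misfit per degree of freedom, which is why the two are usually reported together.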

Fazackerley, Richie; Lane, Andrew M; Mahoney, Craig

2003-10-01

422

Automatic scaling of F layer from ionograms based on image processing and analysis

NASA Astrophysics Data System (ADS)

This paper presents a novel method for automatic scaling of the F layer from ionograms based on image processing and analysis techniques. The proposed method converts ionospheric vertical sounding data to a binary image. By extracting the F layer trace through segmentation of the F layer image, the ordinary and extraordinary traces used to scale ionospheric parameters can be separated automatically. We applied the method to ionograms recorded by the digital ionosonde developed at the China Research Institute of Radiowave Propagation, in which the ordinary and extraordinary modes are recorded together. Tests were performed on random ionograms of different qualities obtained at three ionospheric stations in different seasons and at different times, and the results were compared with those scaled by the standard manual method. The experiments show that the scaled parameters are valid and our method is feasible.
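The first step, converting sounding data to a binary image, can be sketched as follows (the paper does not specify its thresholding rule; Otsu's method and the synthetic "ionogram" here are assumptions for illustration):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximizing between-class variance of the intensity histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

rng = np.random.default_rng(0)
# Synthetic "ionogram": weak background noise plus a strong echo trace
img = rng.normal(10, 2, (64, 64))
img[20:24, :] += 40
binary = img > otsu_threshold(img)
print(binary[20:24].mean() > 0.9, binary.mean() < 0.2)
```

Segmentation and trace extraction then operate on this binary image, which is what allows the ordinary and extraordinary traces to be separated.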

Zheng, Haiyong; Ji, Guangrong; Wang, Guoyu; Zhao, Zhenwei; He, Shaohong

2013-12-01

423

ERIC Educational Resources Information Center

The Antisocial Features (ANT) scale of the Personality Assessment Inventory (PAI) was subjected to taxometric analysis in a group of 2,135 federal prison inmates. Scores on the three ANT subscales--Antisocial Behaviors (ANT-A), Egocentricity (ANT-E), and Stimulus Seeking (ANT-S)--served as indicators in this study and were evaluated using the…

Walters, Glenn D.; Diamond, Pamela M.; Magaletta, Philip R.; Geyer, Matthew D.; Duncan, Scott A.

2007-01-01

424

ERIC Educational Resources Information Center

This study focuses on the analysis of the behavior of unbound aggregates to offset wheel loads. Test data from full-scale aircraft gear loading conducted at the National Airport Pavement Test Facility (NAPTF) by the Federal Aviation Administration (FAA) are used to investigate the effects of wander (offset loads) on the deformation behavior of…

Donovan, Phillip Raymond

2009-01-01

425

Comparative Analysis of Balanced Winnow and SVM in Large Scale Patent Categorization

Using a collection of 1.2 million patent applications, a classifier for large-scale patent categorization is built and evaluated. Contrary to SVM, Balanced Winnow is frequently applied in today's patent categorization systems, and the two learning techniques are compared on this collection.
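Balanced Winnow itself is simple to sketch (binary bag-of-words features; the promotion/demotion rates, threshold, and toy data below are illustrative, not the paper's configuration):

```python
# Hedged sketch of the Balanced Winnow update rule: two positive weight
# vectors u and v, multiplicative updates on mistakes only.
def train_balanced_winnow(data, n_features, alpha=1.5, beta=0.5,
                          theta=1.0, epochs=10):
    u = [1.0] * n_features   # "positive" weights
    v = [1.0] * n_features   # "negative" weights
    for _ in range(epochs):
        for x, label in data:             # x: set of active feature indices
            pred = sum(u[i] - v[i] for i in x) >= theta
            if pred != label:
                for i in x:               # multiplicative promotion/demotion
                    if label:             # false negative: promote
                        u[i] *= alpha; v[i] *= beta
                    else:                 # false positive: demote
                        u[i] *= beta; v[i] *= alpha
    return u, v

# Toy "patent" data: feature 0 marks the positive class, feature 2 is noise
data = [({0, 2}, True), ({1, 2}, False), ({0}, True), ({1}, False)]
u, v = train_balanced_winnow(data, n_features=3)

def predict(x):
    return sum(u[i] - v[i] for i in x) >= 1.0

print([predict({0}), predict({1})])  # → [True, False]
```

The mistake-driven multiplicative updates are what make Winnow attractive at patent scale: only the features active in misclassified documents are ever touched.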

Steels, Luc

426

Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution.

Peer-Timo Bremer; Gunther H. Weber; Julien Tierny; Valerio Pascucci; Marcus S. Day; John B. Bell

2011-01-01

427

Multi-Scale Analysis of Skin Hyper-Pigmentation Evolution

A method is presented to quantify the evolution of skin hyper-pigmentation lesions under treatment, based on statistical inference over multi-spectral images. Keywords: statistical inference, multi-spectral image, skin, hyper-pigmentation.

Boyer, Edmond

428

Generalized singular value decomposition for comparative analysis of genome-scale expression

A framework for comparative analysis of genome-scale expression data sets from two different organisms using the generalized singular value decomposition, illustrated with a comparison of yeast and human cell-cycle expression data sets. Keywords: DNA microarrays, cell cycle, yeast, Saccharomyces.

Utah, University of

429

Cluster Analysis of the Wechsler Preschool and Primary Scale of Intelligence

The intercorrelations among the 11 subtests of the WPPSI were analyzed for each of the six age groups in the standardization sample, and clusters were found that clearly corresponded to the Verbal and Performance Scales. The results were in general agreement with those of previous research in which factor analysis was employed with the same data. Alternative interpretations are offered.

A. B. Silverstein

1986-01-01

430

Knowledge-guided multi-scale independent component analysis for biomarker identification

BACKGROUND: Many statistical methods have been proposed to identify disease biomarkers from gene expression profiles. However, from gene expression profile data alone, statistical methods often fail to identify biologically meaningful biomarkers related to a specific disease under study. In this paper, we develop a novel strategy, namely knowledge-guided multi-scale independent component analysis (ICA), to first infer regulatory signals and then

Li Chen; Jianhua Xuan; Chen Wang; Ie-ming Shih; Yue Wang; Zhen Zhang; Eric Hoffman; Robert Clarke

2008-01-01

431

A Computational Pipeline for Protein Structure Prediction and Analysis at Genome Scale

Computational protein structure prediction methods have matured to the point that they can complement the existing experimental techniques. In this paper, we present an automated pipeline for protein structure prediction and analysis at genome scale. The centerpiece of the pipeline is a threading-based protein structure prediction method.

432

Systems and Trans-System Level Analysis Identifies Conserved Iron Deficiency Responses

We surveyed the iron nutrition-responsive transcriptome of Chlamydomonas, which includes oxidative stress response pathways among hundreds of other genes.

Clarke, Steven

433

Data Mining: Data Analysis on a Grand Scale?

Data mining has evolved largely as a result of efforts by computer scientists to address the needs of large-scale data analysis. In this historical context, data mining to date has largely focused on computational and algorithmic issues rather than on statistical ones.

Smyth, Padhraic

434

Analysis and Design of Large-scale Online Classes

Massive Open Online Courses (MOOCs), an emergent paradigm of large-scale knowledge distribution, took off in 2012 and attracted wide attention. Keywords: asymptotic analysis, signal processing, MOOCs, learning analytics.

Vetterli, Martin

2014-06-26

435

ERIC Educational Resources Information Center

Multilevel confirmatory factor analysis was used to evaluate the factor structure underlying the 12-item, three-factor "Interagency Collaboration Activities Scale" (ICAS) at the informant level and at the agency level. Results from 378 professionals (104 administrators, 201 service providers, and 73 case managers) from 32 children's mental health…

Dedrick, Robert F.; Greenbaum, Paul E.

2011-01-01

436

The Flow Experience: A Rasch Analysis of Jackson's Flow State Scale.

ERIC Educational Resources Information Center

Studied the validity of the Flow State Scale (S. Jackson and H. Marsh, 1996) by subjecting data from the original sample of 394 young athletes and a sample of 398 older athletes to Rasch analysis. Rasch analyses show that flow dimensions may be conceptualized as a continuum and support the validity and generalizability of the measure. (SLD)

Tenenbaum, Gershon; Fogarty, Gerald J.; Jackson, Susan A.

1999-01-01

437

Earthquake scaling for inelastic dynamic analysis of reinforced concrete ductile framed structures

2004 NZSEE Conference. An earthquake record (or a suite of records) is needed when carrying out inelastic dynamic analysis of a reinforced concrete ductile framed structure. The record should be scaled so that its spectral accelerations match the elastic design acceleration spectrum for the structure, and hence the elastic code design base shear implied in the New Zealand loadings standard.
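The scaling operation amounts to choosing a single multiplier for the record. A least-squares sketch over a set of periods follows (the spectral ordinates are hypothetical; New Zealand practice prescribes specific period ranges and weighting, which are omitted here):

```python
# Choose scale factor k so the record's spectral accelerations best match
# the design spectrum at a set of periods (least-squares fit).
def scale_factor(sa_record, sa_design):
    """k minimizing sum over periods of (k*Sa_record - Sa_design)^2."""
    num = sum(r * d for r, d in zip(sa_record, sa_design))
    den = sum(r * r for r in sa_record)
    return num / den

# Hypothetical spectral ordinates at matching periods (in g)
sa_record = [0.9, 0.7, 0.5, 0.3]
sa_design = [1.08, 0.84, 0.60, 0.36]
k = scale_factor(sa_record, sa_design)
print(round(k, 2))  # → 1.2
```

The scaled record k times the ground motion then delivers, at least elastically, the code design base shear before the inelastic dynamic analysis is run.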

P. Dong; A. J. Carr; P. J. Moss

2004-01-01

438

Multi-modal Analysis of Music: A large-scale Evaluation

Music inherits different types of content modalities: audio at its core, text in the form of lyrics, images by means of album covers, and video in the form of music videos. This paper presents a large-scale evaluation of multi-modal music analysis across these modalities.

439

Global meta-analysis reveals no net change in local-scale plant biodiversity over time

Biodiversity is in decline. This is of concern for aesthetic and ethical reasons, but possibly also because the expectation that ecosystem functions will decline due to biodiversity loss in the real world rests on the untested assumption that local-scale biodiversity is itself declining.

Vellend, Mark

440

Until recently, economists have had no good tools to measure the returns to scale of individual corporations in an industry. Data envelopment analysis (DEA) is a linear programming technique for determining the efficiency frontier (the envelope) to the inputs and outputs of a collection of individual corporations or other productive units. While DEA offers an avenue for calculating the
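An illustrative special case of the DEA idea: with one input and one output under constant returns to scale, a unit's efficiency reduces to its output/input ratio divided by the best ratio in the set (the general multi-input, multi-output case requires solving a linear program per unit; the firms below are hypothetical):

```python
# Single-input, single-output DEA under constant returns to scale:
# the frontier is the steepest output/input ray through the origin.
units = {                      # hypothetical firms: (input, output)
    "A": (10.0, 20.0),
    "B": (15.0, 24.0),
    "C": (8.0, 20.0),
}
best = max(y / x for x, y in units.values())
eff = {name: (y / x) / best for name, (x, y) in units.items()}
print(eff)  # unit C defines the frontier with efficiency 1.0
```

Units on the frontier score 1.0; the others score the fraction of frontier productivity they achieve, which is the quantity DEA generalizes to many inputs and outputs.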

Sten Thore

1996-01-01

441

Maggie's Day: A Small-Scale Analysis of English Education Policy

ERIC Educational Resources Information Center

Policy sociologists typically research at large scale. This paper presents an example of a policy analysis which illuminates how policy is embedded in single incidents, lives and places. The case in point concerns the policy fetish for "closing the gap and raising the bar". This rhetoric is taken to mean improving the learning of all students,…

Thomson, Pat; Hall, Christine; Jones, Ken

2010-01-01

442

Large-scale estimation of arterial traffic and structural analysis of traffic patterns using probe data

Estimating and analyzing traffic conditions on large arterial networks is an inherently difficult task. The first goal of this article is to demonstrate how arterial traffic conditions can be estimated using sparsely sampled GPS probe data.

443

Data Envelopment Analysis (DEA), the detailed quantitative comparison of alternative economic systems, is used to compare the technical efficiency of the large-scale power systems needed to meet the growing energy needs of terrestrial society. The Lunar Power System (LPS) captures sunlight on the moon, converts it to microwaves, and beams the power to receivers on earth that output electricity.

David R. Criswell; Russell G. Thompson

1992-01-01

444

ERIC Educational Resources Information Center

Objective: The objective of this study was to test the factor structure of the "Nurturant Fathering Scale" (NFS) among an African American sample in the mid-Atlantic region that have neither Caribbean heritage nor immigration experiences but who do have diverse family structures (N = 212). Method: A confirmatory factor analysis (CFA) was conducted…

Doyle, Otima; Pecukonis, Edward; Harrington, Donna

2011-01-01

445

Object-based image analysis through nonlinear scale-space filtering

NASA Astrophysics Data System (ADS)

In this research, an object-oriented image classification framework was developed which incorporates nonlinear scale-space filtering into the multi-scale segmentation and classification procedures. Morphological levelings, which possess a number of desired spatial and spectral properties, were associated with anisotropically diffused markers towards the construction of nonlinear scale spaces. Image objects were computed at various scales and were connected to a kernel-based learning machine for the classification of various earth-observation data from both active and passive remote sensing sensors. Unlike previous object-based image analysis approaches, the scale hierarchy is implicitly derived from scale-space representation properties. The developed approach does not require the tuning of any of the parameters that control the multi-scale segmentation and object extraction procedure, such as shape, color, and texture. The developed object-oriented image classification framework was applied to a number of remote sensing data from different airborne and spaceborne sensors, including SAR images and high and very high resolution panchromatic and multispectral aerial and satellite datasets. The very promising experimental results, along with the performed qualitative and quantitative evaluation, demonstrate the potential of the proposed approach.
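The nonlinear scale-space idea can be illustrated with the classic Perona-Malik diffusion, a simpler relative of the morphological levelings used in the paper (all parameters and the synthetic image below are illustrative):

```python
import numpy as np

def perona_malik(img, iters=20, kappa=10.0, lam=0.2):
    """Edge-preserving smoothing: diffusion is suppressed across strong edges."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # Finite differences to the four neighbours (periodic borders for brevity)
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        # Edge-stopping conductance: near zero where the gradient is large
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(n)*n + g(s)*s + g(e)*e + g(w)*w)
    return u

rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[:, 16:] = 100.0   # step edge
noisy = img + rng.normal(0, 2, img.shape)
smooth = perona_malik(noisy)

# Noise variance drops on the flat region while the step edge survives
print(smooth[:, :14].std() < noisy[:, :14].std())
```

Iterating with increasing amounts of diffusion yields the stack of progressively simplified images, i.e. the scale space, from which objects at different scales can be segmented.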

Tzotsos, Angelos; Karantzalos, Konstantinos; Argialas, Demetre

446

NASA Astrophysics Data System (ADS)

For the discrimination of four tectonic settings of island arc, continental arc, within-plate (continental rift and ocean island together), and collision, we present three sets of new diagrams obtained from linear discriminant analysis of natural logarithm transformed ratios of major elements, immobile major and trace elements, and immobile trace elements in acid magmas. The use of discordant outlier-free samples prior to linear discriminant analysis had improved the success rates by about 3% on the average. Success rates of these new diagrams were acceptably high (about 69% to 97% for the first set, about 69% to 99% for the second set, and about 60% to 96% for the third set). Testing of these diagrams for acid rock samples (not used for constructing them) from known tectonic settings confirmed their overall good performance. Application of these new diagrams to Precambrian case studies provided the following generally consistent results: a continental arc setting for the Caribou greenstone belt (Canada) at about 3000 Ma, São Francisco craton (Brazil) at about 3085-2983 Ma, Penakacherla greenstone terrane (Dharwar craton, India) at about 2700 Ma, and Adola (Ethiopia) at about 885-765 Ma; a transitional continental arc to collision setting for the Rio Maria terrane (Brazil) at about 2870 Ma and Eastern felsic volcanic terrain (India) at about 2700 Ma; a collision setting for the Kolar suture zone (India) at about 2610 Ma and Korpo area (Finland) at about 1852 Ma; and a within-plate (likely a continental rift) setting for Malani igneous suite (India) at about 745-700 Ma. These applications suggest utility of the new discrimination diagrams for all four tectonic settings. In fact, all three sets of diagrams were shown to be robust against post-emplacement compositional changes caused by analytical errors, element mobility related to low or high temperature alteration, or Fe-oxidation caused by weathering.

Verma, Surendra P.; Pandarinath, Kailasa; Verma, Sanjeet K.; Agrawal, Salil

2013-05-01

447

Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512
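The partial credit model at the heart of the analysis can be sketched directly (the ability and threshold values below are hypothetical; RUMM2020 estimates these from the response data):

```python
from math import exp

def pcm_probs(theta, thresholds):
    """Rasch partial credit model: category probabilities for one item.

    theta: person ability; thresholds: item step difficulties delta_1..delta_m.
    """
    psi = [0.0]                        # cumulative logit for category 0
    for d in thresholds:
        psi.append(psi[-1] + (theta - d))
    expo = [exp(p) for p in psi]
    total = sum(expo)
    return [e / total for e in expo]

# A DASS-21 item is scored 0-3, so it has three thresholds (hypothetical)
probs = pcm_probs(theta=0.5, thresholds=[-1.0, 0.0, 1.5])
print(round(sum(probs), 6), probs.index(max(probs)))
```

Fit assessment then compares these model-implied category probabilities with the observed response frequencies; items whose observed patterns deviate are the candidates for removal, as happened with one item per subscale above.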

Shea, Tracey L; Tennant, Alan; Pallant, Julie F

2009-01-01

448

A systematic review with meta-analysis was completed to determine the capacity of risk assessment scales and nurses' clinical judgment to predict pressure ulcer (PU) development. Electronic databases were searched for prospective studies on the validity and predictive capacity of PU risk assessment scales published between 1962 and 2010 in English, Spanish, Portuguese, Korean, German, and Greek. We excluded gray literature sources, integrative review articles, and retrospective or cross-sectional studies. The methodological quality of the studies was assessed according to the guidelines of the Critical Appraisal Skills Program. Predictive capacity was measured as relative risk (RR) with 95% confidence intervals. When 2 or more valid original studies were found, a meta-analysis was conducted using a random-effect model and sensitivity analysis. We identified 57 studies, including 31 that included a validation study. We also retrieved 4 studies that tested clinical judgment as a risk prediction factor. Meta-analysis produced the following pooled predictive capacity indicators: Braden (RR = 4.26); Norton (RR = 3.69); Waterlow (RR = 2.66); Cubbin-Jackson (RR = 8.63); EMINA (RR = 6.17); Pressure Sore Predictor Scale (RR = 21.4); and clinical judgment (RR = 1.89). Pooled analysis of 11 studies found adequate risk prediction capacity in various clinical settings; the Braden, Norton, EMINA (mEntal state, Mobility, Incontinence, Nutrition, Activity), Waterlow, and Cubbin-Jackson scales showed the highest predictive capacity. The clinical judgment of nurses was found to achieve inadequate predictive capacity when used alone, and should be used in combination with a validated scale. PMID:24280770
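Random-effects pooling of relative risks, as used for the indicators above, can be sketched with the DerSimonian-Laird estimator (the per-study RRs are loosely in the range of the pooled values quoted above, and the standard errors are made up for illustration):

```python
from math import log, exp, sqrt

def pooled_rr(rrs, ses):
    """DerSimonian-Laird random-effects pooled RR with a 95% CI.

    rrs: per-study relative risks; ses: standard errors of log(RR).
    """
    y = [log(r) for r in rrs]
    w = [1.0 / s**2 for s in ses]                 # fixed-effect weights
    ybar = sum(wi*yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi*(yi - ybar)**2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = [1.0 / (s**2 + tau2) for s in ses]   # random-effects weights
    mu = sum(wi*yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    return exp(mu), (exp(mu - 1.96*se), exp(mu + 1.96*se))

rr, (lo, hi) = pooled_rr([4.3, 3.7, 2.7], [0.30, 0.25, 0.40])
print(lo < rr < hi)
```

Pooling on the log scale keeps the RR ratio-symmetric, and the tau-squared term widens the interval whenever the studies disagree more than sampling error alone would explain.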

García-Fernández, Francisco Pedro; Pancorbo-Hidalgo, Pedro L; Agreda, J Javier Soldevilla

2014-01-01

449

Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 3-Surry Unit 1 Cycle 2

The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using selected critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations in this report is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and to provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of two reactor critical configurations for Surry Unit 1 Cycle 2. This unit and cycle were chosen for a previous analysis using a different methodology because detailed isotopics from multidimensional reactor calculations were available from the Virginia Power Company. These data permitted a direct comparison of criticality calculations using the utility-calculated isotopics with those using the isotopics generated by the SCALE-4 SAS2H sequence. These reactor critical benchmarks have been reanalyzed using the methodology described above. The two benchmark critical calculations were the beginning-of-cycle (BOC) startup at hot, zero-power (HZP) and an end-of-cycle (EOC) critical at hot, full-power (HFP) conditions. These calculations were used to check for consistency in the calculated results for different burnup, downtime, temperature, xenon, and boron conditions. The k{sub eff} results were 1.0014 and 1.0113, respectively, with a standard deviation of 0.0005.

Bowman, S.M.

1995-01-01

450

NASA Astrophysics Data System (ADS)

Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. For a quantitative analysis of stored charge, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the modification of force-distance curves induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex; the contribution of the cantilever to the force is neglected here. A model of the force curve has been developed using the Finite Volume Method, based on the Polynomial Reconstruction Operator (PRO) scheme. First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.

Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.

2014-08-01

451

We investigate the performance of pressure retarded osmosis (PRO) at the module scale, accounting for the detrimental effects of reverse salt flux, internal concentration polarization, and external concentration polarization. Our analysis offers insights on optimization of three critical operation and design parameters--applied hydraulic pressure, initial feed flow rate fraction, and membrane area--to maximize the specific energy and power density extractable in the system. For co- and counter-current flow modules, we determine that appropriate selection of the membrane area is critical to obtain a high specific energy. Furthermore, we find that the optimal operating conditions in a realistic module can be reasonably approximated using established optima for an ideal system (i.e., an applied hydraulic pressure equal to approximately half the osmotic pressure difference and an initial feed flow rate fraction that provides equal amounts of feed and draw solutions). For a system in counter-current operation with a river water (0.015 M NaCl) and seawater (0.6 M NaCl) solution pairing, the maximum specific energy obtainable using performance properties of commercially available membranes was determined to be 0.147 kWh per m³ of total mixed solution, which is 57% of the Gibbs free energy of mixing. Operating to obtain a high specific energy, however, results in very low power densities (less than 2 W/m²), indicating that the trade-off between power density and specific energy is an inherent challenge to full-scale PRO systems. Finally, we quantify additional losses and energetic costs in the PRO system, which further reduce the net specific energy and indicate serious challenges in extracting net energy in PRO with river water and seawater solution pairings. PMID:25222561
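The "half the osmotic pressure difference" heuristic can be checked back-of-envelope with the van 't Hoff relation (an ideal-solution approximation; the paper's model additionally accounts for non-idealities and module-scale effects):

```python
# Osmotic pressures of the river-water / seawater pairing via van 't Hoff,
# and the ideal-system optimum hydraulic pressure of half the difference.
R, T = 8.314, 298.0            # gas constant J/(mol K), temperature K
i = 2                          # NaCl dissociates into two ions

def osmotic_pressure_bar(molar):
    return i * molar * R * T * 1000 / 1e5   # mol/L -> mol/m^3; Pa -> bar

pi_draw = osmotic_pressure_bar(0.6)    # seawater, 0.6 M NaCl
pi_feed = osmotic_pressure_bar(0.015)  # river water, 0.015 M NaCl
dp_opt = (pi_draw - pi_feed) / 2       # ideal optimum: half of delta-pi
print(round(pi_draw, 1), round(dp_opt, 1))
```

This puts the ideal operating pressure near 14.5 bar for the pairing above, which is the scale against which the module-level losses quantified in the paper should be read.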

Straub, Anthony P; Lin, Shihong; Elimelech, Menachem

2014-10-21