Multi-Scale Multi-Dimensional Ion Battery Performance Model
Energy Science and Technology Software Center (ESTSC)
2007-05-07
The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of the thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different-scale physics, achieving much higher computational efficiency than the single-domain approach. It solves a one-dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transport in a macro grid system.
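The separation of domains described above can be sketched as a toy coupling: a macro-scale heat grid whose source term at each node comes from an independently evaluated micro-scale sub-model. All names, equations, and parameter values below are illustrative stand-ins, not the MSMD model's actual physics.

```python
# Toy sketch of the MSMD idea: a macro heat-transport grid where each node
# queries a micro-scale sub-model (stubbed here as Joule heating) for its
# heat source. Hypothetical parameters throughout.

def micro_heat_source(temperature_k, current_a=2.0, resistance_ohm=0.05):
    """Stand-in for the 1-D electrochemistry sub-model: heat generation (W)
    at one macro node, with a mild assumed temperature dependence."""
    r = resistance_ohm * (1.0 + 0.002 * (temperature_k - 298.0))
    return current_a ** 2 * r

def step_macro_grid(temps, dt=1.0, alpha=0.1, heat_capacity=50.0):
    """One explicit step of macro-scale heat diffusion plus micro sources."""
    n = len(temps)
    new = temps[:]
    for i in range(n):
        left = temps[i - 1] if i > 0 else temps[i]
        right = temps[i + 1] if i < n - 1 else temps[i]
        diffusion = alpha * (left - 2 * temps[i] + right)
        source = micro_heat_source(temps[i]) / heat_capacity
        new[i] = temps[i] + dt * (diffusion + source)
    return new

temps = [298.0] * 5          # uniform initial cell temperature (K)
for _ in range(10):
    temps = step_macro_grid(temps)
```

With a uniform initial state the grid heats up uniformly; the point of the structure is that `micro_heat_source` can be replaced by an arbitrarily detailed sub-grid solver without touching the macro loop.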
M&Ms4Graphs: Multi-scale, Multi-dimensional Graph Analytics Tools for Cyber-Security
M&Ms4Graphs: Multi-scale, Multi-dimensional Graph Analytics Tools for Cyber-Security. The algorithms in the software framework include multi-scale graph modeling, spectral analysis, role mining, and very fast computation of essential security postures and cost/benefit metrics. By accounting for both…
Development of a Multi-Dimensional Scale for PDD and ADHD
ERIC Educational Resources Information Center
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…
Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat,; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng
2010-06-08
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
Grey-Scale Measurements in Multi-Dimensional Digitized Images
van Vliet, Lucas J.
How Fitch-Margoliash Algorithm can Benefit from Multi Dimensional Scaling
Lespinats, Sylvain; Grando, Delphine; Maréchal, Eric; Hakimi, Mohamed-Ali; Tenaillon, Olivier; Bastien, Olivier
2011-01-01
Whatever the phylogenetic method, genetic sequences are often described as strings of characters, so molecular sequences can be viewed as elements of a multi-dimensional space. As a consequence, studying motion in this space (i.e., the evolutionary process) must deal with the striking features of high-dimensional spaces, such as the concentration of measure phenomenon. To study how these features might influence phylogeny reconstructions, we examined a particularly popular method: the Fitch-Margoliash algorithm, which belongs to the Least Squares methods. We show that the Least Squares methods are closely related to Multi Dimensional Scaling. Indeed, the criteria for Fitch-Margoliash and Sammon's mapping are somewhat similar. However, the prolific research in Multi Dimensional Scaling has produced methods that clearly outperform Sammon's mapping, and Least Squares methods for tree reconstruction can now take advantage of these improvements. However, "false neighborhoods" and "tears" are the two main risks in the dimensionality reduction field: a "false neighborhood" corresponds to widely separated data in the original space that are found close in the representation space, while neighboring data that are displayed in remote positions constitute a "tear". To address this problem, we took advantage of the concepts of "continuity" and "trustworthiness" in the tree reconstruction field, which limit the risks of "false neighborhoods" and "tears". We also point out the concentration of measure phenomenon as a source of error and introduce new criteria to build phylogenies with improved preservation of distances and robustness. The authors and the Evolutionary Bioinformatics Journal dedicate this article to the memory of Professor W.M. Fitch (1929–2011). PMID:21697992
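The kinship the abstract describes is easy to see in code: both the Fitch-Margoliash criterion and Sammon's stress are weighted least squares on pairwise distances, differing only in their weights. A minimal sketch (standard textbook forms of both criteria):

```python
# Both criteria penalise squared distance errors; Fitch-Margoliash weights
# by 1/d^2, Sammon's mapping by 1/d with a global normalisation.

def fitch_margoliash(d_obs, d_tree):
    """Sum over pairs of ((d_obs - d_tree) / d_obs)^2, i.e. weight 1/d^2."""
    return sum(((o - t) / o) ** 2 for o, t in zip(d_obs, d_tree))

def sammon_stress(d_obs, d_emb):
    """Sammon's stress: weight 1/d, normalised by the total distance."""
    total = sum(d_obs)
    return sum((o - e) ** 2 / o for o, e in zip(d_obs, d_emb)) / total

# Identical distances give zero under both criteria; both weightings
# emphasise errors on short distances over equal errors on long ones.
d = [1.0, 2.0, 3.0]
```

Because of the 1/d² weight, a 0.1 error on a distance of 1 costs Fitch-Margoliash a hundred times more than the same error on a distance of 10, which is exactly the behaviour that links it to distance-preserving embeddings.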
ERIC Educational Resources Information Center
Chiou, Guo-Li; Anderson, O. Roger
2010-01-01
This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…
Nguyen, Lan K.; Degasperi, Andrea; Cotter, Philip; Kholodenko, Boris N.
2015-01-01
Biochemical networks are dynamic and multi-dimensional systems, consisting of tens or hundreds of molecular components. Diseases such as cancer commonly arise due to changes in the dynamics of signalling and gene regulatory networks caused by genetic alterations. Elucidating the network dynamics in health and disease is crucial to better understand the disease mechanisms and derive effective therapeutic strategies. However, current approaches to analysing and visualising systems dynamics often provide only low-dimensional projections of the network dynamics, which do not present the full multi-dimensional picture of the system behaviour. More efficient and reliable methods for multi-dimensional systems analysis and visualisation are thus required. To address this issue, we present an integrated analysis and visualisation framework for high-dimensional network behaviour which exploits the advantages of parallel coordinates graphs. We demonstrate the applicability of the framework, named “Dynamics Visualisation based on Parallel Coordinates” (DYVIPAC), to a variety of signalling networks with a range of topological wirings and dynamic properties. The framework proved useful in acquiring an integrated understanding of systems behaviour. PMID:26220783
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system that requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature-stepped desorption technique for the detection of explosive mixtures.
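Moment-based peak characterization as described above reduces to computing low-order statistical moments of each peak. The sketch below scores a one-dimensional peak; the "Peclet-like" sharpness ratio is an illustrative choice of score, not necessarily the patented method's definition.

```python
import numpy as np

# Characterise a signal peak by its statistical moments: area (0th),
# centroid (1st), squared width (2nd central), asymmetry (3rd standardised).

def peak_moments(x, y):
    dx = x[1] - x[0]                              # uniform grid assumed
    area = y.sum() * dx                           # 0th moment: peak area
    mu = (x * y).sum() * dx / area                # 1st moment: centroid
    var = (((x - mu) ** 2) * y).sum() * dx / area # 2nd central: width^2
    skew = (((x - mu) ** 3) * y).sum() * dx / area / var ** 1.5
    return area, mu, var, skew

x = np.linspace(0.0, 10.0, 2001)
y = np.exp(-0.5 * ((x - 4.0) / 0.5) ** 2)         # symmetric Gaussian peak
area, mu, var, skew = peak_moments(x, y)
sharpness = mu ** 2 / var                         # Peclet-like score (assumed)
```

A symmetric Gaussian at 4.0 with width 0.5 yields centroid 4.0, variance 0.25, near-zero skewness, and a sharpness score of 64; a tailing peak would show up immediately in the skewness term.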
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
2012-05-01
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Multi-dimensional PARAFAC2 component analysis of multi-channel EEG data including temporal tracking.
Weis, Martin; Jannek, Dunja; Roemer, Florian; Guenther, Thomas; Haardt, Martin; Husar, Peter
2010-01-01
The identification of signal components in electroencephalographic (EEG) data originating from neural activities is a long standing problem in neuroscience. This area has regained new attention due to the possibilities of multi-dimensional signal processing. In this work we analyze measured visual-evoked potentials on the basis of the time-varying spectrum for each channel. Recently, parallel factor (PARAFAC) analysis has been used to identify the signal components in the space-time-frequency domain. However, the PARAFAC decomposition is not able to cope with components appearing time-shifted over the different channels. Furthermore, it is not possible to track PARAFAC components over time. In this contribution we derive how to overcome these problems by using the PARAFAC2 model, which renders it an attractive approach for processing EEG data with highly dynamic (moving) sources. PMID:21096263
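The space-time-frequency array that PARAFAC-type decompositions operate on is built by computing a time-varying spectrum per channel, as the abstract describes. A minimal sketch of that preprocessing step (window length, hop, and the synthetic two-channel signal are illustrative assumptions):

```python
import numpy as np

# Build the channel x time x frequency tensor from per-channel short-time
# Fourier power spectrograms. Sampling rate and window sizes are made up.

def stft_power(signal, win=64, hop=32):
    """Power spectrogram (time-frames x frequency-bins) of one channel."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)) ** 2)
    return np.array(frames)

fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
# two synthetic "EEG channels": 10 Hz and 24 Hz oscillations
channels = [np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 24 * t)]
tensor = np.stack([stft_power(ch) for ch in channels])  # ch x time x freq
```

With a 64-sample window at 256 Hz the bins are 4 Hz wide, so the 24 Hz channel peaks exactly in bin 6; a PARAFAC or PARAFAC2 model would then factor this tensor into spatial, temporal, and spectral signatures.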
NASA Astrophysics Data System (ADS)
Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue
2011-12-01
Inspired by nature's strategy for preparing collagen, we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from building blocks of pristine titanate nanotubes (TNTs) around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets and 3D core-shell hybrid crystals. The amyloses work as glue molecules to drive and direct the hierarchical self-assembly process, extending from the microscopic to the macroscopic scale. The whole self-assembly process, as well as the self-assembled structures, was carefully characterized by a combination of 1H NMR, CD, Hr-SEM, AFM, Hr-TEM, SAED pattern and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: Characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
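One simple reading of the "structure" idea above: describe each data point by the shape of its parallel-coordinates polyline, i.e. the sequence of segment slopes across normalised axes, and compare shapes rather than raw positions. The metric below is an illustrative interpretation, not the authors' exact formulation.

```python
import numpy as np

# Compare points by the shape of their polylines across normalised axes.

def polyline_structure(row, mins, maxs):
    axes = (row - mins) / (maxs - mins)   # normalise each axis to [0, 1]
    return np.diff(axes)                  # slopes between adjacent axes

def structure_distance(a, b, mins, maxs):
    return np.linalg.norm(polyline_structure(a, mins, maxs)
                          - polyline_structure(b, mins, maxs))

data = np.array([[0.0, 1.0, 0.0, 1.0],    # zig-zag profile
                 [0.1, 1.1, 0.1, 1.1],    # same shape, shifted
                 [1.0, 0.0, 1.0, 0.0]])   # opposite shape
mins, maxs = data.min(axis=0), data.max(axis=0)
```

Under plain Euclidean distance the shifted twin and the opposite profile can look comparably far from the first point; under the structural metric the same-shape pair is nearly identical while the inverted profile is maximally distant, which is the behaviour the bi-scale MDS framework exploits at its coarse scale.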
Jarraya, Mohamed; Guermazi, Ali; Niu, Jingbo; Duryea, Jeffrey; Lynch, John A; Roemer, Frank W
2015-11-01
The aim of this study was to test reproducibility of fractal signature analysis (FSA) in a young, active patient population taking into account several parameters including intra- and inter-reader placement of regions of interest (ROIs) as well as various aspects of projection geometry. In total, 685 patients were included (135 athletes and 550 non-athletes, 18-36 years old). Regions of interest (ROI) were situated beneath the medial tibial plateau. The reproducibility of texture parameters was evaluated using intraclass correlation coefficients (ICC). Multi-dimensional assessment included: (1) anterior-posterior (A.P.) vs. posterior-anterior (P.A.) (Lyon-Schuss technique) views on 102 knees; (2) unilateral (single knee) vs. bilateral (both knees) acquisition on 27 knees (acquisition technique otherwise identical; same A.P. or P.A. view); (3) repetition of the same image acquisition on 46 knees (same A.P. or P.A. view, and same unilateral or bilateral acquisition); and (4) intra- and inter-reader reliability with repeated placement of the ROIs in the subchondral bone area on 99 randomly chosen knees. ICC values on the reproducibility of texture parameters for A.P. vs. P.A. image acquisitions for horizontal and vertical dimensions combined were 0.72 (95% confidence interval (CI) 0.70-0.74), ranging from 0.47 to 0.81 for the different dimensions. For unilateral vs. bilateral image acquisitions, the ICCs were 0.79 (95% CI 0.76-0.82), ranging from 0.55 to 0.88. For the repetition of the identical view, the ICCs were 0.82 (95% CI 0.80-0.84), ranging from 0.67 to 0.85. Intra-reader reliability was 0.93 (95% CI 0.92-0.94) and inter-observer reliability was 0.96 (95% CI 0.88-0.99). A decrease in reliability was observed with increasing voxel sizes. Our study confirms excellent intra- and inter-reader reliability for FSA; however, results seem to be affected by acquisition technique, which has not been previously recognized. PMID:26343866
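The ICC values reported above come from a standard family of reliability estimators. A minimal sketch of the simplest member, the one-way random-effects ICC(1); the paper does not state which ICC variant it used, so this is purely illustrative:

```python
import numpy as np

# One-way random-effects ICC: (MSB - MSW) / (MSB + (k-1) * MSW),
# where MSB/MSW are the between- and within-subject mean squares.

def icc_oneway(ratings):
    """ratings: subjects x raters array. Returns ICC(1)."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    ss_between = k * ((subj_means - grand) ** 2).sum()
    ss_within = ((ratings - subj_means[:, None]) ** 2).sum()
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
noisy = np.array([[1.0, 3.0], [2.0, 1.0], [3.0, 4.0], [4.0, 2.0]])
```

Two raters who agree exactly give ICC = 1; discordant ratings pull the estimate toward 0, which is how values like 0.72 vs. 0.96 in the abstract rank the acquisition conditions.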
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng
2014-01-01
Recommender systems are designed to assist individual users in navigating the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has found wide application in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
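The MDS step itself is standard: classical (Torgerson) scaling double-centres the squared distance matrix and embeds via its top eigenvectors. A self-contained sketch of that computation (the toy distance matrix stands in for the paper's network-derived distances):

```python
import numpy as np

# Classical MDS: B = -1/2 * J D^2 J, embed with the largest eigenpairs.

def classical_mds(dist, dims=2):
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    b = -0.5 * j @ (dist ** 2) @ j             # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dims]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# four corners of a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(dist)
redist = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

For genuinely Euclidean distances the embedding reproduces all pairwise distances exactly (up to rotation and reflection); for noisy network distances the truncation to a few dimensions is what "smooths out noise" in the abstract's sense.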
Large-Scale Multi-Dimensional Document Clustering on GPU Clusters
Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E
2010-01-01
Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with large numbers of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.
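The quadratic cost the abstract mentions comes from every document-boid reacting to every other. A toy reading of one flocking step (attraction to similar neighbours, repulsion from dissimilar ones on a 2-D canvas); this is an illustrative caricature, not the paper's GPU algorithm:

```python
import numpy as np

# Each "boid" is a document on a 2-D canvas; one update pulls it toward
# the centroid of similar documents and pushes it away from the rest.

rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 1, 1, 1])            # document similarity groups
pos = rng.uniform(0, 10, size=(6, 2))            # random initial layout

def flock_step(pos, labels, step=0.2):
    new = pos.copy()
    for i in range(len(pos)):                    # O(n^2) overall: the cost
        same = pos[labels == labels[i]].mean(axis=0)    # the paper moves
        other = pos[labels != labels[i]].mean(axis=0)   # onto the GPU
        new[i] += step * (same - pos[i]) - step * 0.5 * (other - pos[i])
    return new

def mean_intra_dist(pos, labels):
    d = 0.0
    for g in np.unique(labels):
        p = pos[labels == g]
        d += np.linalg.norm(p[:, None] - p[None, :], axis=-1).mean()
    return d

before = mean_intra_dist(pos, labels)
for _ in range(20):
    pos = flock_step(pos, labels)
after = mean_intra_dist(pos, labels)
```

After a few iterations, documents of the same group contract into spatial clusters regardless of the random start, which is the initial-state insensitivity the abstract credits flocking with.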
Nitrogen deposition and multi-dimensional plant diversity at the landscape scale
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat
2015-01-01
Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1 km² plots covering most habitat types of Central Europe and an elevational range of 2900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale. PMID:26064640
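Phylogenetic diversity, the measure that responded most strongly above, is commonly computed as Faith's PD: the total branch length of the smallest subtree spanning the observed species. A sketch on a toy four-species tree (tree topology and branch lengths are made up for illustration):

```python
# Faith's PD on a small rooted tree given as parent pointers; length[n] is
# the length of the branch above node n. Root "r" has no entry in parent.

parent = {"A": "x", "B": "x", "C": "y", "D": "y", "x": "r", "y": "r"}
length = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0, "x": 2.0, "y": 2.0}

def faith_pd(species):
    """Sum branch lengths on all root-ward paths from the observed species."""
    used = set()
    for s in species:
        node = s
        while node in parent:          # walk up to the root
            used.add(node)
            node = parent[node]
    return sum(length[n] for n in used)
```

Two close relatives (A, B) share their deep branch and score PD = 4, while two distant species (A, C) score 6: losing distantly related lineages costs more PD than losing close ones, which is why PD can drop 19% while raw species richness barely moves.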
J. I. Brand; M. S. Hallbeck; S. M. Ryan
2001-01-18
Multi-dimensional meta-analysis (MDMA) is an innovative technique for investigating complex scientific problems influenced by "external" factors, such as social, medical, economic, political or climatic trends. MDMA extends traditional meta-analysis by identifying significant data from diverse and independent disciplines ("orthogonal dimensions") and incorporating truth tables and non-parametric analysis methods in the interpretation protocol. In this paper, we outline the methodology of MDMA. We then demonstrate how to apply the method to a specific problem: the relationship between asthma and air particulates. The conclusions from the example show that further reduction of atmospheric particulate levels is not necessarily the answer to the increasing incidence of asthma. This example also demonstrates the strength of this method of analysis for complex problems.
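The two ingredients named above, truth tables and non-parametric methods, can be sketched concretely: a truth table that combines findings across independent disciplines, and a rank-based (Spearman) association test. Both pieces below are illustrative stand-ins, not the paper's actual protocol; the discipline names are invented.

```python
# Rank-based (non-parametric) association: Spearman rho as the Pearson
# correlation of ranks. No tie handling in this sketch.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = float(pos + 1)
    return r

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Truth table over "orthogonal dimensions": does each discipline's data
# support the hypothesis? Here, require agreement of all dimensions.
findings = {"epidemiology": True, "air_chemistry": False, "economics": True}
supported = all(findings.values())
```

Spearman's rho scores any monotonic trend as ±1 regardless of its shape, which suits cross-disciplinary data with incomparable units; the truth table then exposes exactly which dimension breaks the consensus, as air chemistry does in this toy example.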
Facile multi-dimensional profiling of chemical gradients at the millimetre scale.
Chen, Chih-Lin; Hsieh, Kai-Ta; Hsu, Ching-Fong; Urban, Pawel L
2016-01-01
A vast number of conventional physicochemical methods are suitable for the analysis of homogeneous samples. However, in various cases, the samples exhibit intrinsic heterogeneity. Tomography allows one to record approximate distributions of chemical species in three-dimensional space. Here we develop a simple optical tomography system which enables performing scans of non-homogeneous samples at different wavelengths. It takes advantage of inexpensive open-source electronics and simple algorithms. The analysed samples are illuminated by a miniature LCD/LED screen which emits light at three wavelengths (598, 547 and 455 nm, corresponding to the R, G, and B channels, respectively). For each wavelength, the sample vial is rotated by approximately 180°, and videoed at 30 frames per second. The RGB values of pixels in the obtained digital snapshots are subsequently collated, and processed to produce sinograms. Following the inverse Radon transform, approximate quasi-three-dimensional images are reconstructed for each wavelength. Sample components with distinct visible light absorption spectra (myoglobin, methylene blue) can be resolved. The system was used to follow dynamic changes in non-homogeneous samples in real time, to visualize binary mixtures, to reconstruct reaction-diffusion fronts formed during the reduction of 2,6-dichlorophenolindophenol by ascorbic acid, and to visualize the distribution of fungal mycelium grown in a semi-solid medium. PMID:26541202
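The sinogram the abstract produces is just the stack of line-integral projections of the sample at each rotation angle. A dependency-free sketch of that forward step (nearest-neighbour rotation of a square phantom; the inverse Radon reconstruction itself is omitted):

```python
import numpy as np

# Build a sinogram: for each view angle, rotate the image and sum along
# one axis to get the line integrals (one projection per angle).

def sinogram(img, angles):
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    rows = []
    for th in angles:
        ct, st = np.cos(th), np.sin(th)
        xr = ct * (xs - c) - st * (ys - c) + c   # rotate sampling grid
        yr = st * (xs - c) + ct * (ys - c) + c   # about the image centre
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        rows.append(img[yi, xi].sum(axis=0))     # column-wise line sums
    return np.array(rows)

img = np.zeros((65, 65))
img[28:37, 28:37] = 1.0                          # central square phantom
angles = np.linspace(0.0, np.pi, 60, endpoint=False)
sino = sinogram(img, angles)
```

Each sinogram row approximately conserves the total absorbance of the phantom (exactly so at 0°; nearest-neighbour resampling introduces small errors at oblique angles), which is the consistency property filtered back-projection relies on.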
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever increasing amount of data and processing capability, following the well-known Moore's law, is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often become cluttered as more layers are added. Moreover, the introduction of time as a fourth analysis dimension to allow analysis of time-dependent phenomena such as meteorological or climate models is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected through the human brain's integration of moving images. Magellium has been involved in high performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.
Merritt, Cullen
2014-05-31
This study specifies and tests a multi-dimensional model of publicness, building upon extant literature in this area. Publicness represents the degree to which an organization has "public" ties. An organization's degree ...
NASA Astrophysics Data System (ADS)
Carkin, Susan
The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. 
A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being relatively less marked for information density, abstraction, and non-overt argumentation. In addition to the findings mentioned here, numerous other relationships among the texts exhibit complex patterns of variation related to a number of situational variables. Pedagogical implications are discussed in relation to General Education courses, differing student populations, and the reading and listening demands which students encounter in large introductory classes in the university.
Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol
2015-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states. PMID:26046669
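The integration step described above is plain set arithmetic over the ion lists from the two ionisation modes. The sketch below reproduces the abstract's counts with made-up ion identifiers chosen so the overlaps match:

```python
# Integer IDs stand in for detected ions; ranges are chosen so that the
# overlap matches the abstract (426 common, 206 TOF-only, 533 MetA-only).

tof = set(range(632))                    # 632 ions seen by TOF-SIMS
meta = set(range(206, 206 + 959))        # 959 ions seen by MetA-SIMS

common = tof & meta                      # detected by both methods
tof_only = tof - meta                    # unique to TOF-SIMS
meta_only = meta - tof                   # unique to MetA-SIMS
all_ions = tof | meta                    # integrated MD-SIMS ion list
```

The integrated list carries 632 + 959 − 426 = 1165 distinct ions, which is the enlarged candidate pool from which the discriminatory ions are then selected.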
Garrett, Aaron
…a neural network architecture to fuse four spectral bands from Landsat 5 data… Our results demonstrate the natural appearance of the fused imagery and a natural means of conveying imaging and non-imaging information to both experts and non-experts…
NASA Astrophysics Data System (ADS)
Merritt, Elizabeth; Doss, Forrest; Loomis, Eric; Flippo, Kirk; Devolder, Barbara; Welser-Sherrill, Leslie; Fincke, James; Kline, John
2014-10-01
The counter-propagating shear campaign is examining instability growth and its transition to turbulence relevant to mix in ICF capsules. Experimental platforms on both OMEGA and NIF use anti-symmetric flows about a shear interface to examine isolated Kelvin-Helmholtz instability growth. Measurements of interface (an Al or Ti tracer layer) dynamics are used to benchmark the LANL RAGE hydrocode with the BHR turbulence model. The tracer layer does not expand uniformly, but breaks up into multi-dimensional structures that are initially quasi-2D due to the target geometry. We are developing techniques to analyze the multi-D structure growth along the tracer surface with a focus on characterizing the time-dependent structures' spectrum of scales in order to appraise a transition to turbulence in the system and potentially provide tighter constraints on initialization schemes for the BHR model. To this end, we use a wavelet-based analysis to diagnose single-time radiographs of the tracer layer surface (with low and amplified roughness for random-noise seeding) with observed spatially non-repetitive features, in order to identify spatial and temporal trends in radiographs taken at different times across several experimental shots. This work conducted under the auspices of the U.S. Department of Energy by LANL under Contract DE-AC52-06NA25396.
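A wavelet-based scale analysis of the kind described above convolves a lineout of the radiograph with wavelets of increasing width and reads off the scale with the strongest response. A minimal sketch using a Ricker (Mexican-hat) wavelet; the synthetic "profile" and the scale grid are illustrative stand-ins for the radiograph data:

```python
import numpy as np

# Scale spectrum via Ricker-wavelet convolution: the scale whose wavelet
# responds most strongly estimates the dominant structure width.

def ricker(n, a):
    """L2-normalised Ricker (Mexican-hat) wavelet of width parameter a."""
    t = np.arange(n) - (n - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def scale_spectrum(signal, scales):
    """Peak absolute wavelet response at each scale."""
    return np.array([np.max(np.abs(np.convolve(signal, ricker(101, a),
                                               mode='same')))
                     for a in scales])

x = np.arange(512)
bump = np.exp(-0.5 * ((x - 256) / 8.0) ** 2)     # feature of width ~8
scales = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
spectrum = scale_spectrum(bump, scales)
best = scales[np.argmax(spectrum)]
```

For a Gaussian feature of width σ the normalised Ricker response peaks near a ≈ √5 σ, so the width-8 bump selects the 16 scale on this coarse grid; applied frame by frame, such spectra track how the structure-scale distribution broadens as the layer transitions toward turbulence.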
Gordon, Scott M; Deng, Jingyuan; Tomann, Alex B; Shah, Amy S; Lu, L Jason; Davidson, W Sean
2013-11-01
The distribution of circulating lipoprotein particles affects the risk for cardiovascular disease (CVD) in humans. Lipoproteins are historically defined by their density, with low-density lipoproteins positively and high-density lipoproteins (HDLs) negatively associated with CVD risk in large populations. However, these broad definitions tend to obscure the remarkable heterogeneity within each class. Evidence indicates that each class is composed of physically (size, density, charge) and compositionally (protein and lipid) distinct subclasses exhibiting unique functionalities and differing effects on disease. HDLs in particular contain upward of 85 proteins of widely varying function that are differentially distributed across a broad range of particle diameters. We hypothesized that the plasma lipoproteins, particularly HDL, represent a continuum of phospholipid platforms that facilitate specific protein-protein interactions. To test this idea, we separated normal human plasma using three techniques that exploit different lipoprotein physicochemical properties (gel filtration chromatography, ionic exchange chromatography, and preparative isoelectric focusing). We then tracked the co-separation of 76 lipid-associated proteins via mass spectrometry and applied a summed correlation analysis to identify protein pairs that may co-reside on individual lipoproteins. The analysis produced 2701 pairing scores, with the top hits representing previously known protein-protein interactions as well as numerous unknown pairings. A network analysis revealed clusters of proteins with related functions, particularly lipid transport and complement regulation. The specific co-separation of protein pairs or clusters suggests the existence of stable lipoprotein subspecies that may carry out distinct functions. Further characterization of the composition and function of these subspecies may point to better targeted therapeutics aimed at CVD or other diseases. PMID:23882025
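The summed correlation analysis can be sketched as follows. This is a simplified illustration of the idea, not the authors' pipeline: proteins that co-reside on a lipoprotein subspecies should show correlated elution profiles across every separation technique, so summing per-technique Pearson correlations yields a pairing score. The data here are synthetic.

```python
import numpy as np

def pairing_scores(profiles_by_method):
    """Sum Pearson correlations of protein elution profiles across separation methods.

    profiles_by_method: list of (n_proteins, n_fractions) arrays, one per technique.
    Returns an (n_proteins, n_proteins) summed-correlation score matrix.
    """
    n = profiles_by_method[0].shape[0]
    total = np.zeros((n, n))
    for prof in profiles_by_method:
        total += np.corrcoef(prof)   # Pearson correlation between row profiles
    return total

rng = np.random.default_rng(0)
base = rng.random((1, 12))
# proteins 0 and 1 co-elute (same profile plus noise); protein 2 is independent
method = np.vstack([base + 0.01 * rng.random((1, 12)),
                    base + 0.01 * rng.random((1, 12)),
                    rng.random((1, 12))])
scores = pairing_scores([method, method])
```

Top-scoring pairs would then be candidates for stable protein-protein co-residence, as in the network analysis described above.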
Multi-dimensional analysis of combustion instabilities in liquid rocket motors
NASA Technical Reports Server (NTRS)
Grenda, Jeffrey M.; Venkateswaran, Sankaran; Merkle, Charles L.
1992-01-01
A three-dimensional analysis of combustion instabilities in liquid rocket engines is presented based on a mixed finite difference/spectral solution methodology for the gas phase and a discrete droplet tracking formulation for the liquid phase. Vaporization is treated by a simplified model based on an infinite thermal conductivity assumption for spherical liquid droplets of fuel in a convective environment undergoing transient heating. A simple two-parameter phenomenological combustion response model is employed for validation of the results in the small amplitude regime. The computational procedure is demonstrated to capture the phenomena of wave propagation within the combustion chamber accurately. Results demonstrate excellent amplitude and phase agreement with analytical solutions for properly selected grid resolutions under both stable and unstable operating conditions. Computations utilizing the simplified droplet model demonstrate stable response to arbitrary pulsing. This is possibly due to the assumption of uniform droplet temperature, which removes the thermal inertia time-lag response of the vaporization process. The mixed-character scheme is sufficiently efficient to allow solutions on workstations at a modest increase in computational time over that required for two-dimensional solutions.
NASA Astrophysics Data System (ADS)
De Masi, A.
2015-09-01
The paper describes reading criteria for the documentation of important buildings in Milan, Italy, as a case study of research on integrating new technologies to obtain 3D multi-scale representations of architecture. In addition, it affords an overview of the current optical 3D measurement sensors and techniques used for surveying, mapping, digital documentation and 3D modeling applications in the Cultural Heritage field. Today new opportunities for integrated management of data are given by multiresolution models, which can be employed for different scales of representation. The goal of multi-scale representations is to provide several representations, each adapted to a different information density with several degrees of detail. The Digital Representation Platform, along with the 3D City Model, is meant to be particularly useful to heritage managers who are developing recording, documentation, and information management strategies appropriate to territories, sites and monuments. The Digital Representation Platform and 3D City Model are central to the decision-making process for heritage conservation management and several urban-related problems. This research investigates the integration of the different levels of detail of a 3D City Model into one consistent 4D data model, with the creation of levels of detail using algorithms from a GIS perspective. In particular, the project is based on open source smart systems, and conceptualizes a personalized and contextualized exploration of Cultural Heritage through an experiential analysis of the territory.
Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis
Jeong, Myong K; Kong, Seong G; Omitaomu, Olufemi A
2008-09-01
Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.
Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine
2013-06-01
Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic Development Scale (KEDS) designed to assess some of the core affective, cognitive and behavioural components of empathy concurrently. The KEDS assesses responses to picture scenarios depicting a range of individual and interpersonal situations differing in social complexity. Results from 220 children indicate the KEDS measures three related but distinct aspects of empathy that are also related to existing measures of empathy and cognitive development. Scores on the KEDS show age and some gender-related differences in the expected direction. PMID:23659893
Liu, Baizhan; Chen, Chaoying; Wu, Da; Su, Qingde
2008-04-01
A fully automated multi-dimensional gas chromatography (MDGC) system with a megabore precolumn and cyclodextrin-based analytical column was developed to analyze the enantiomeric compositions of anatabine, nornicotine and anabasine in commercial tobacco. The enantiomer abundances of anatabine and nornicotine varied among different tobacco types. S-(-)-anatabine, as a proportion of total anatabine, was 86.6% for flue-cured, 86.0% for burley and 77.5% for oriental tobacco. S-(-)-nornicotine, as a proportion of total nornicotine, was 90.8% in oriental tobacco and higher than in burley (69.4%) and flue-cured (58.7%) tobacco. S-(-)-anabasine, as a proportion of total anabasine, was relatively constant for flue-cured (60.1%), burley (65.1%) and oriental (61.7%) tobacco. A simple solvent extraction with dichloromethane followed by derivatisation with trifluoroacetic anhydride gave relative standard deviations of less than 1.5% for the determination of the S-(-)-isomers of all three alkaloids. The study also indicated that a higher proportion of S-(-)-nornicotine is related to the more active nicotine demethylation in the leaf. PMID:18342587
Kobayashi, Naohiro
2014-01-01
Superpositioning of atoms in an ensemble of biomolecules is a common task in a variety of fields in structural biology. Although several automated tools exist based on previously established methods, manual operations to define the atoms in the ordered regions are usually preferred. The task is difficult and lacks output efficiency for multi-core proteins having complicated folding topology. The new method presented here can systematically and quantitatively achieve the identification of ordered cores even for molecules containing multiple cores linked with flexible loops. In contrast to established methods, this method treats the variance of inter-atomic distances in an ensemble as information content using a non-linear (NL) function, and then subjects it to multi-dimensional scaling (MDS) to embed the row vectors of the inter-atomic distance variance matrix into a lower-dimensional matrix. The plots of the identified atom groups in a one- or two-dimensional map enable users to visually and intuitively infer well-ordered atoms in an ensemble, as well as to automatically identify them by standard clustering methods. The performance of the NL-MDS method has been examined for a number of structure ensembles studied by nuclear magnetic resonance, demonstrating that the method can be more suitable for structural analysis of multi-core proteins in comparison to previously established methods. PMID:24384868
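The pipeline described above (distance variance across the ensemble, a non-linear kernel, then an MDS embedding) can be sketched with NumPy. This is a hedged illustration, not the published NL-MDS algorithm: the Gaussian kernel, the dissimilarity definition, and all parameters are assumptions, and classical MDS stands in for whatever embedding the authors used.

```python
import numpy as np

def nl_mds(ensemble, sigma=1.0, ndim=2):
    """Embed atoms by the variance of their inter-atomic distances across an ensemble.

    ensemble: (n_models, n_atoms, 3) coordinates. A non-linear kernel turns
    distance variances into similarities, embedded with classical MDS.
    """
    n_models, n_atoms, _ = ensemble.shape
    diff = ensemble[:, :, None, :] - ensemble[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)     # (n_models, n_atoms, n_atoms)
    var = dist.var(axis=0)                   # variance across the ensemble
    info = np.exp(-var / sigma)              # non-linear information content
    d2 = (1.0 - info) ** 2                   # squared dissimilarities
    j = np.eye(n_atoms) - np.ones((n_atoms, n_atoms)) / n_atoms
    b = -0.5 * j @ d2 @ j                    # double centering (classical MDS)
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:ndim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(1)
core = rng.random((1, 6, 3)) * 5                     # rigid core, fixed geometry
models = np.repeat(core, 20, axis=0)
models += 0.01 * rng.standard_normal(models.shape)   # small jitter on ordered atoms
tail = 5 + 5 * rng.random((20, 3, 3))                # 3 disordered tail atoms
ensemble = np.concatenate([models, tail], axis=1)    # (20, 9, 3)
coords = nl_mds(ensemble)
```

In the 2D map the six ordered atoms collapse into one tight cluster while the disordered tail atoms scatter, which is the visual cue the method exploits.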
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
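For orientation, the classical first-order Lax-Friedrichs scheme for a 1D Hamilton-Jacobi equation phi_t + H(phi_x) = 0 is a simpler relative of the central schemes described above; it is not the paper's scheme, just a minimal sketch of the central-differencing-plus-numerical-viscosity idea.

```python
import numpy as np

def lf_step(phi, H, alpha, dx, dt):
    """One first-order Lax-Friedrichs step for phi_t + H(phi_x) = 0 (periodic grid).

    alpha must bound |H'(phi_x)|; it sets the numerical viscosity.
    """
    pp = (np.roll(phi, -1) - phi) / dx   # forward difference
    pm = (phi - np.roll(phi, 1)) / dx    # backward difference
    return phi - dt * (H(0.5 * (pp + pm)) - 0.5 * alpha * (pp - pm))

# evolve phi_t + 0.5 * phi_x^2 = 0 from a smooth initial condition
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
phi = np.cos(2 * np.pi * x)
alpha = 2 * np.pi               # bound on |phi_x| for the initial data
dt = 0.5 * dx / alpha           # CFL condition
for _ in range(100):
    phi = lf_step(phi, lambda p: 0.5 * p ** 2, alpha, dx, dt)
```

Because the scheme is monotone under the CFL condition and H is non-negative here, the solution's maximum does not grow.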
A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system
NASA Astrophysics Data System (ADS)
Mix, Ken
The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor to streamflow to the Rio Grande Basin, upstream of the confluence of the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852, and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding changes in the URG-SLV that led to this event and the interactions among various drivers of change in the URG-SLV is a difficult task. One reason is that complex social-ecological systems are adaptive, contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspect) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and presence of feedback cycles. Agriculture was identified as the driver signal.
Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979--2007. Changes in streamflow, water allocations and water policy were observed in all agriculture periods. Cross-scale linkages were also evident between climate and streamflow; policy and water rights; and agriculture, groundwater pumping and streamflow.
ERIC Educational Resources Information Center
Papay, John P.; Willett, John B.; Murnane, Richard J.
2011-01-01
We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them,…
Multi-dimensional sparse time series: feature extraction
Franciosi, Marco
2008-01-01
We show an analysis of multi-dimensional time series via entropy and statistical linguistic techniques. We define three markers encoding the behavior of the series, after it has been translated into a multi-dimensional symbolic sequence. The leading component and the trend of the series with respect to a mobile window analysis result from the entropy analysis and label the dynamical evolution of the series. The diversification formalizes the differentiation in the use of recurrent patterns, from a Zipf law point of view. These markers are the starting point of further analysis such as classification or clustering of large databases of multi-dimensional time series, prediction of future behavior and attribution of new data. We also present an application to economic data. We deal with measurements of money investments of some business companies in the advertising market for different media sources.
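A minimal sketch of the symbolization, mobile-window entropy, and Zipf-style diversification steps might look like the following. Quantile binning, the window handling, and the marker definitions are assumptions for illustration, not the authors' exact constructions.

```python
import numpy as np
from collections import Counter

def symbolize(series, n_bins=4):
    """Translate each channel of a multi-dimensional series into symbols by quantile binning."""
    sym = []
    for channel in series.T:
        edges = np.quantile(channel, np.linspace(0, 1, n_bins + 1)[1:-1])
        sym.append(np.digitize(channel, edges))
    return np.stack(sym, axis=1)   # (time, channels) of integer symbols

def window_entropy(symbols, window=50):
    """Shannon entropy of the joint symbols over consecutive windows (one value per window)."""
    joint = [tuple(row) for row in symbols]
    ents = []
    for start in range(0, len(joint) - window + 1, window):
        counts = Counter(joint[start:start + window])
        p = np.array(list(counts.values())) / window
        ents.append(-np.sum(p * np.log2(p)))
    return np.array(ents)

def zipf_diversification(symbols):
    """Rank-frequency profile of recurrent symbol patterns (Zipf-style)."""
    counts = Counter(tuple(row) for row in symbols)
    return np.array(sorted(counts.values(), reverse=True))
```

A constant series collapses to a single symbol (zero entropy, one recurrent pattern), while a noisy series spreads over many patterns; the markers quantify where a real series sits between those extremes.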
NASA Astrophysics Data System (ADS)
Wilmarth de Leonardi, Tauna Lea
2000-10-01
The development of a reliable containment cooling system is one of the key areas in advanced nuclear reactor development. There are two categories of containment cooling: active and passive. Active containment cooling usually consists of systems that require active participation in their use and have, in the past, relied on the supply of electrical power. This has instigated worldwide efforts to develop passive containment cooling systems that are safer, more reliable, and simpler to use. The passive containment cooling system's performance is deteriorated by noncondensable gases that come from the containment and from the gases produced by cladding/steam interaction during a severe accident. These noncondensable gases degrade the heat transfer capabilities of the condensers in passive containment cooling systems since they provide a heat transfer resistance to the condensation process. Some work has been done on modeling condensation heat transfer with noncondensable gases, but little has been done to apply that work to integral facilities. It is important to fully understand the heat transfer capabilities of passive systems so a detailed assessment of the long-term cooling capabilities can be performed. The existing correlations and models are for the through-flow of the mixture of steam and noncondensable gases. This type of analysis may not be applicable to passive containment cooling systems, where there is no clear passage for the steam to escape. This allows the steam to accumulate in the lower header and tubes, where all of the steam condenses. The objective of this work was to develop a condensation heat transfer model for the downward cocurrent flow of a steam/air mixture through a condenser tube, taking into account the atypical characteristics of the passive containment cooling system.
An empirical model was developed that depends solely on the inlet conditions to the condenser system, including the mixture Reynolds number and noncondensable gas concentration. This empirical model is applicable to the condensation heat transfer of the passive containment cooling system. This study was also used to characterize the local heat transfer coefficient with a noncondensable gas present.
Multi-dimensional laser radars
NASA Astrophysics Data System (ADS)
Molebny, Vasyl; Steinvall, Ove
2014-06-01
We introduce the term "multi-dimensional laser radar", where the dimensions mean not only the coordinates of the object in space, but also its velocity and orientation, and parameters of the medium: scattering, refraction, temperature, humidity, wind velocity, etc. The parameters can change in time and can be combined. For example, rendezvous and docking missions and autonomous planetary landings are expected to carry 3D ladar imaging aboard, along with laser ranging, laser altimetry and laser Doppler velocimetry. Operating in combination, these provide more accurate and safer navigation, docking or landing, and hazard avoidance capabilities. Combination with Doppler-based measurements provides more accurate navigation for both space and cruise missile applications. Critical is the information identifying snipers based on a combination of polarization and fluctuation parameters with data from other sources. A combination of thermal imaging and vibrometry can unveil the functionality of detected targets. Hyperspectral probing with laser reveals even more parameters. Different algorithms and architectures for ladar-based target acquisition, reconstruction of 3D images from point clouds, information fusion and display are discussed, with special attention to the technologies of flash illumination and single-photon focal-plane-array detection.
Vargas, Sara E; Fava, Joseph L; Severy, Lawrence; Rosen, Rochelle K; Salomon, Liz; Shulman, Lawrence; Guthrie, Kate Morrow
2016-02-01
Currently available risk perception scales tend to focus on risk behaviors and overall risk (vs partner-specific risk). While these types of assessments may be useful in clinical contexts, they may be inadequate for understanding the relationship between sexual risk and motivations to engage in safer sex or one's willingness to use prevention products during a specific sexual encounter. We present the psychometric evaluation and validation of a scale that includes both general and specific dimensions of sexual risk perception. A one-time, audio computer-assisted self-interview was administered to 531 women aged 18-55 years. Items assessing sexual risk perceptions, both in general and in regards to a specific partner, were examined in the context of a larger study of willingness to use HIV/STD prevention products and preferences for specific product characteristics. Exploratory and confirmatory factor analyses yielded two subscales: general perceived risk and partner-specific perceived risk. Validity analyses demonstrated that the two subscales were related to many sociodemographic and relationship factors. We suggest that this risk perception scale may be useful in research settings where the outcomes of interest are related to motivations to use HIV and STD prevention products and/or product acceptability. Further, we provide specific guidance on how this risk perception scale might be utilized to understand such motivations with one or more specific partners. PMID:26621151
Hlebowitsh, Paul Gerardus
2012-01-01
A stable multi-dimensional magnetic levitator was characterized and implemented. This thesis contains a full analysis of the feedback specifications, a short summary of the circuits used in the design of the setup, and ...
Multi-dimensional shock capturing
NASA Astrophysics Data System (ADS)
Slemrod, Marshall
The author has worked on two aspects of multidimensional shock capturing. The first project has been a multifaceted effort to understand dynamic liquid-vapor interface propagation from a kinetic point of view. The phenomenon was modeled via a Boltzmann like cluster dynamics model. Clusters represent groupings of molecules of various cluster sizes which can collide elastically and inelastically. The inelastic collisions can produce coagulation of clusters or fragmentation of a cluster. A fluid made of only small cluster sizes would represent a dilute vapor while one containing very large cluster sizes would be a metastable supersaturated vapor. The model via various scaling limits gives sets of equations describing vapor flow in various transition regimes. Numerical experiments were performed modeling vapor to saturated vapor phase change encountered when a dilute vapor encounters a rigid wall.
Extended Darknet: Multi-Dimensional Internet Threat Monitoring System
NASA Astrophysics Data System (ADS)
Shimoda, Akihiro; Mori, Tatsuya; Goto, Shigeki
Internet threats caused by botnets/worms are one of the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is an inefficient way for monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, light-weight malicious packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP syn/ack responses in the traffic. We locate syn/ack delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments for the evaluation of our system. We verify that our system can cover 89% of the IP addresses while darknet-based monitoring only covers 46%. On our campus network, our system monitors twice as many IP addresses as darknet.
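The policy-based classification and forwarding step can be sketched abstractly. This is a hypothetical illustration of the decision logic, not the deployed system: the rule names, the port set, and the delay threshold below are all assumed values.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    dst_ip: str
    dst_port: int
    synack_delay_ms: float   # delay between the inbound SYN and the host's SYN/ACK

def classify(flow, delay_threshold_ms=500.0,
             monitored_ports=frozenset({23, 445, 3389})):
    """Policy-based classification of a monitored flow.

    A long SYN/ACK delay suggests the destination address/port is effectively
    inactive (darknet-like), so probes to it are forwarded for deeper analysis.
    """
    if flow.synack_delay_ms >= delay_threshold_ms:
        return "forward_to_honeypot"
    if flow.dst_port in monitored_ports:
        return "forward_to_sensor"
    return "pass_through"
```

Legitimate traffic falls through untouched, which is how the approach extends monitoring onto active address space without disrupting it.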
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
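As a concrete illustration of why the choice of one-dimensionalization matters, here is a hedged sketch (not from the report) contrasting two common averaging techniques, plain area-weighted and mass-flux-weighted, on a duct profile where velocity and temperature are correlated. The profile data are synthetic.

```python
import numpy as np

def area_average(q, dA):
    """Plain area-weighted average of quantity q over cells of area dA."""
    return np.sum(q * dA) / np.sum(dA)

def mass_flux_average(q, rho, u, dA):
    """Mass-flux-weighted average: weights each cell by its mass flow rho*u*dA."""
    mdot = rho * u * dA
    return np.sum(q * mdot) / np.sum(mdot)

# a duct cross-section where velocity and temperature are correlated
n = 100
dA = np.full(n, 1.0 / n)          # equal-area cells
u = np.linspace(10.0, 100.0, n)   # slow near the wall, fast in the core
rho = np.full(n, 1.0)             # uniform density for simplicity
T = 300.0 + 2.0 * u               # hotter where faster
```

Because the flux weighting emphasizes the fast, hot core, the two "one-dimensional temperatures" disagree, exactly the approach-dependent discrepancy the report examines.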
8 Multi-dimensional diagnostics in space and time Clemens F. Kaminski and Marshall B. Long
Long, Marshall B.
…Hundreds of chemical species may react simultaneously on spatial and temporal scales spanning… progress in the construction of models has come from the application of planar laser imaging techniques…
T. Downar
2009-03-31
The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
A Brief Multi-Dimensional Children's Level-of-Functioning Tool.
ERIC Educational Resources Information Center
Srebnik, Debra
This paper discusses the results of a study that investigated the validity and reliability of the Ecology Rating Scale (ERS). The ERS is a brief, multi-dimensional level-of-functioning instrument that can be rated by parents or clinicians. The ERS is comprised of seven domains of youth functioning: family, school, emotional, legal/justice,…
Kwon, T.S.; Yun, B.J.; Euh, D.J.; Chu, I.C.; Song, C.H.
2002-07-01
Multi-dimensional thermal-hydraulic behavior in the downcomer annulus of a pressurized water reactor vessel with a Direct Vessel Injection (DVI) mode is presented based on experimental observation in the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) steam-water test facility. From the steady-state test results simulating the late reflood phase of a Large Break Loss-of-Coolant Accident (LBLOCA), isothermal lines show the multi-dimensional phenomena of phasic interaction between steam and water in the downcomer annulus very well. MIDAS is a steam-water separate effect test facility, a 1/4.93 linearly scaled-down model of a 1400 MWe PWR-type nuclear reactor, focused on understanding multi-dimensional thermal-hydraulic phenomena in the downcomer annulus with various types of safety injection during the refill or reflood phase of a LBLOCA. The initial and boundary conditions are scaled from the pre-test analysis based on a preliminary calculation using the TRAC code. The superheated steam, with a superheating degree of 80 K at a given downcomer pressure of 180 kPa, is injected equally through three intact cold legs into the downcomer. (authors)
Vlasov multi-dimensional model dispersion relation
Lushnikov, Pavel M.; Rose, Harvey A.; Silantyev, Denis A.; Vladimirova, Natalia
2014-07-15
A hybrid model of the Vlasov equation in multiple spatial dimensions D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.
PackLib2: An integrated library of multi-dimensional packing problems
Fekete, Sándor P.
Keywords: packing and cutting; benchmark library; multi-dimensional packing; open problems; XML.
Star-ND (Multi-Dimensional Star-Identification)
Spratling, Benjamin
2012-07-16
In order to perform star-identification with lower processing requirements, multi-dimensional techniques are implemented in this research as a database search as well as to create star pattern parameters. New star pattern parameters are presented...
High-value energy storage for the grid: a multi-dimensional look
Culver, Walter J.
2010-12-15
The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment. (author)
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
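The orthogonality-metric idea can be illustrated with a modal-assurance-criterion (MAC)-style comparison of shape sets; MAC is a standard structural-dynamics measure, but this generic sketch is not NASA Langley's specific calibration metric.

```python
import numpy as np

def mac(phi_test, phi_model):
    """MAC-style orthogonality between two shape sets.

    Columns are shapes (e.g., impact shapes from test and from analysis).
    Entry (i, j) is near 1 when test shape i and model shape j are correlated,
    and near 0 when they are orthogonal.
    """
    num = np.abs(phi_test.T @ phi_model) ** 2
    den = np.outer(np.sum(phi_test ** 2, axis=0),
                   np.sum(phi_model ** 2, axis=0))
    return num / den
```

A well-calibrated model would show a near-identity MAC matrix against the test-derived impact shapes; off-diagonal leakage flags spatial mismatch that single-point time histories can miss.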
Dimensionality Reduction on Multi-Dimensional Transfer Functions for Multi-Channel Volume Data Sets
Kim, Han Suk; Schulze, Jürgen P.; Cone, Angela C.; Sosinsky, Gina E.; Martone, Maryann E.
2011-01-01
The design of transfer functions for volume rendering is a non-trivial task. This is particularly true for multi-channel data sets, where multiple data values exist for each voxel, which requires multi-dimensional transfer functions. In this paper, we propose a new method for multi-dimensional transfer function design. Our new method provides a framework to combine multiple computational approaches and pushes the boundary of gradient-based multi-dimensional transfer functions to multiple channels, while keeping the dimensionality of transfer functions at a manageable level, i.e., a maximum of three dimensions, which can be displayed visually in a straightforward way. Our approach utilizes channel intensity, gradient, curvature and texture properties of each voxel. Applying recently developed nonlinear dimensionality reduction algorithms reduces the high-dimensional data of the domain. In this paper, we use Isomap and Locally Linear Embedding as well as a traditional algorithm, Principal Component Analysis. Our results show that these dimensionality reduction algorithms significantly improve the transfer function design process without compromising visualization accuracy. We demonstrate the effectiveness of our new dimensionality reduction algorithms with two volumetric confocal microscopy data sets. PMID:21841914
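As a hedged illustration of the dimensionality-reduction step described above, the sketch below applies Principal Component Analysis (the linear baseline the abstract mentions alongside Isomap and Locally Linear Embedding) to toy per-voxel feature vectors. The feature extraction and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pca_reduce(features, n_components=3):
    """Project per-voxel feature vectors onto their top principal
    components, keeping the transfer-function domain at <= 3 dims."""
    X = features - features.mean(axis=0)     # center each feature
    # SVD of the centered data matrix yields the principal axes in Vt
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T           # coordinates in reduced space

# Toy stand-in for per-voxel intensity/gradient/curvature/texture features
rng = np.random.default_rng(0)
voxels = rng.normal(size=(1000, 8))          # 1000 voxels, 8 raw features
reduced = pca_reduce(voxels)                 # shape (1000, 3)
```

The reduced three coordinates could then serve as the axes of a displayable transfer-function domain.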
Synchronous Circuit Optimization via Multi-Dimensional Retiming
Passos, Nelson Luiz; Sha, Edwin
The goal of multi-dimensional retiming is to improve circuit performance by achieving full parallelism among all operations in the circuit, producing a circuit capable of executing all of its operations in parallel.
Image matrix processor for fast multi-dimensional computations
Roberson, George P. (Tracy, CA); Skeate, Michael F. (Livermore, CA)
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Multi-Dimensional Recurrent Neural Networks
Graves, Alex; Fernández, Santiago
Results are provided for two image segmentation tasks. The introduction situates recurrent neural networks (RNNs) relative to the extension of convolution networks [10] to image processing tasks such as digit recognition [14], and notes one disadvantage of the latter.
Multi-Dimensional Range Query over Encrypted Data
Shi, Elaine; Bethencourt, John; Chan, T-H. Hubert; Song, Dawn
Keywords: range query, searchable encryption, anonymous identity-based encryption. We design an encryption scheme for multi-dimensional range query over encrypted data (MRQED), in which a message is encrypted with a set of attributes, as in the network audit log application.
A Fourth Order Central WENO Scheme for Multi-Dimensional Hyperbolic Systems of Conservation Laws
Puppo, Gabriella
Keywords: central difference schemes, high-order accuracy, non-oscillatory schemes, WENO reconstruction, CWENO. The work builds on Essentially Non-Oscillatory (ENO) schemes [7], [32] and, more recently, the Weighted ENO (WENO) schemes [26], [8].
The Multi-Dimensional Demands of Reading in the Disciplines
ERIC Educational Resources Information Center
Lee, Carol D.
2014-01-01
This commentary addresses the complexities of reading comprehension with an explicit focus on reading in the disciplines. The author proposes reading as entailing multi-dimensional demands of the reader and posing complex challenges for teachers. These challenges are intensified by restrictive conceptions of relevant prior knowledge and experience…
Application of Multi-Dimensional Sensing Technologies in Production Systems
NASA Astrophysics Data System (ADS)
Shibuya, Hisae; Kimachi, Akira; Suwa, Masaki; Niwakawa, Makoto; Okuda, Haruhisa; Hashimoto, Manabu
Multi-dimensional sensing has been used for various purposes in the field of production systems. The members of the IEEJ MDS committee investigated the trends in sensing technologies and their applications. In this paper, the results of investigations of auto-guided vehicles, cell manufacturing robots, safety, maintenance, worker monitoring, and sensor networks are discussed.
Existence and asymptotic behavior of a multi-dimensional quantum hydrodynamic model
Markowich, Peter A.
The quantum fluid equation is described in view of a nonlinear geometric optics (WKB) ansatz of the wave function. The paper studies the quantum hydrodynamic equations for the electron particle density and the current density.
Multi-dimensional dynamical decoupling based quantum sensing
Wen-Long Ma; Ren-Bao Liu
2015-12-14
Nuclear magnetic resonance (NMR) has enormous applications. Multi-dimensional NMR has been an essential technique for characterizing correlations between nuclei and hence molecular structures. Multi-dimensional spectroscopy has also been extended to optics to study correlations in molecules and many-body effects in semiconductors. Towards the ultimate goal of single-molecule NMR, dynamical decoupling (DD) enhanced diamond quantum sensing has enabled detection of single nuclear spins and nanoscale NMR. However, there is still a lack of a standard method in DD-based quantum sensing to characterize correlations between nuclear spins in single molecules. Here we present a scheme of multi-dimensional DD-based quantum sensing as a universal method for correlation spectroscopy of single molecules. We design multi-dimensional DD sequences composed of multiple sets of periodic DD sequences with different periods, which can be independently set to match different transition frequencies for resonant DD. We find that under resonant DD the sensor coherence patterns, as functions of multiple DD pulse numbers, can differentiate different types of correlations between nuclear spin transitions. This work offers a standard approach to correlation spectroscopy for single-molecule NMR.
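The multi-dimensional DD sequences described above compose multiple periodic pulse trains with independently chosen periods. The sketch below is a minimal, hypothetical illustration of such scheduling (evenly spaced pi pulses per train, trains concatenated); it is not the authors' actual sequence design, and all names are invented for illustration.

```python
def periodic_dd_pulses(period, n_pulses, offset=0.0):
    """Pulse times of one periodic (CPMG-style) decoupling train:
    pi pulses at offset + period/2, offset + 3*period/2, ..."""
    return [offset + period * (k + 0.5) for k in range(n_pulses)]

def multi_dimensional_dd(periods, counts):
    """Concatenate sets of periodic DD trains with different periods,
    each period tunable to a different target transition frequency."""
    t, schedule = 0.0, []
    for period, n in zip(periods, counts):
        pulses = periodic_dd_pulses(period, n, offset=t)
        schedule.append(pulses)
        t = pulses[-1] + period / 2    # start the next train afterwards
    return schedule

# Two trains: period 1.0 matched to one transition, 0.4 to another
trains = multi_dimensional_dd(periods=[1.0, 0.4], counts=[8, 20])
```

Sweeping the pulse numbers of each train independently is what produces the multi-dimensional coherence pattern the abstract refers to.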
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy indefinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not provide semantic interoperability: a function is identified only by its name and parameters, while the semantics is encoded in the (only human-readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning a la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which was adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. 
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol, which makes WCPS an OGC Web Processing Service (WPS) Profile, is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an associated identifier, the CRSs supported, and, for each range field (aka band, channel), the applicable interpolation methods. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ). The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, so as to maintain safe evaluation. As both the syntax and semantics of any WCPS expression are well defined, the language is Semantic Web ready: clients can construct WCPS requests on the fly, and servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready. 
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
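The NDVI request quoted in the abstract above performs simple per-pixel arithmetic under a binary mask. A minimal Python sketch of the same computation follows (illustrative only; WCPS itself evaluates this server-side over coverages of any dimensionality):

```python
def ndvi(nir, red, mask):
    """Per-pixel NDVI = (nir - red) / (nir + red), evaluated only where
    the binary mask is set (mirroring the WCPS NDVI request above)."""
    out = []
    for n, r, m in zip(nir, red, mask):
        # Guard against division by zero; masked-out pixels get 0.0
        out.append((n - r) / (n + r) if m and (n + r) != 0 else 0.0)
    return out

values = ndvi(nir=[0.8, 0.6, 0.5], red=[0.2, 0.3, 0.5], mask=[1, 1, 0])
```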
Developing a new multi-dimensional depression assessment scale
Cheung, Ho Nam
2010-07-01
In Study 1, an 85-item questionnaire containing all the possible depressive symptoms was distributed to 87 participants from mental health professions. Based on their clinical experience and knowledge, they rated how typical each symptom was on a 5-point...
Multi-dimensional Indoor Location Information Model
NASA Astrophysics Data System (ADS)
Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.
2013-11-01
Aiming at the increasing requirements of seamless indoor and outdoor navigation and location services, a Chinese standard, the Multidimensional Indoor Location Information Model, is being developed, which defines an ontology of indoor location. The model is complementary to 3D concepts like CityGML and IndoorGML. The goal of the model is to provide an exchange GML-based format for the location information needed for indoor routing and navigation. A detailed user-requirements analysis and an investigation of state-of-the-art technology for expressing indoor location, both domestic and international, were completed to identify how humans specify location. The ultimate goal is to provide an ontology that allows absolute and relative specification of location, such as "in room 321" and "on the second floor", as well as "two meters from the second window" and "12 steps from the door".
Portable laser synthesizer for high-speed multi-dimensional spectroscopy
Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)
2012-05-29
Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.
Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kwak, Dochan (Technical Monitor)
1995-01-01
The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus
Weizhen Mao; Jie Chen; William Watson
2005-11-30
Processor allocation in a mesh or torus connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of more than three dimensions, which are used to solve complex problems arising in scientific computing, it becomes pressing to study the problem of allocating processors in the configuration of a torus in a multi-dimensional torus connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them into commonly used FCFS and backfilling scheduling policies and conducting simulations using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean bounded job slowdowns comparable to those in a fully connected multicomputer.
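As a hedged sketch of the Equal Partition idea described above (splitting each dimension of a torus into equal segments to enumerate sub-semitori), the code below assumes evenly divisible dimensions; it is an illustration of the partitioning concept, not the paper's exact EP algorithm.

```python
from itertools import product

def equal_partition(torus_dims, factor):
    """Split each dimension of a torus into `factor` equal segments and
    enumerate the (start, length) ranges of every resulting sub-semitorus."""
    axis_ranges = []
    for size in torus_dims:
        assert size % factor == 0, "dimension must divide evenly"
        seg = size // factor
        axis_ranges.append([(i * seg, seg) for i in range(factor)])
    # Cartesian product over dimensions enumerates all sub-semitori
    return list(product(*axis_ranges))

# An 8 x 8 x 4 torus split 2 ways per dimension -> 2^3 = 8 sub-semitori
subtori = equal_partition((8, 8, 4), factor=2)
```

An allocator could then assign incoming jobs to free sub-semitori from this list.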
A multi-dimensional sampling method for locating small scatterers
NASA Astrophysics Data System (ADS)
Song, Rencheng; Zhong, Yu; Chen, Xudong
2012-11-01
A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method.
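The indicator described above is built from the signal/noise subspace split of the multi-static response matrix. The sketch below shows the generic MUSIC ingredient (SVD, then projection of test vectors onto the noise subspace) on a toy rank-1 matrix; the MDSM's specific combinatorial sampling nodes and stability selection are not reproduced, and the vectors here are invented for illustration.

```python
import numpy as np

def music_indicator(K, test_vectors, n_scatterers):
    """MUSIC-style indicator: SVD of the multi-static response matrix K,
    then 1 / ||P_noise g|| for each test (Green's-function-like) vector g.
    Large values flag likely scatterer positions."""
    U, s, _ = np.linalg.svd(K)
    noise = U[:, n_scatterers:]                  # noise subspace basis
    vals = []
    for g in test_vectors:
        g = g / np.linalg.norm(g)
        vals.append(1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12))
    return vals

# Toy rank-1 response matrix from a single scatterer with signature g0
g0 = np.array([1.0, 2.0, 1.0, 0.5]); g0 /= np.linalg.norm(g0)
K = np.outer(g0, g0)
g_off = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2.0)
scores = music_indicator(K, [g0, g_off], n_scatterers=1)
```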
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2013-01-01
In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the…
Development of a Scale Measuring Trait Anxiety in Physical Education
ERIC Educational Resources Information Center
Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos
2012-01-01
The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…
Fourier transform assisted deconvolution of skewed peaks in complex multi-dimensional chromatograms.
Hanke, Alexander T; Verhaert, Peter D E M; van der Wielen, Luuk A M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2015-05-15
Lower order peak moments of individual peaks in heavily fused peak clusters can be determined by fitting peak models to the experimental data. The success of such an approach depends on two main aspects: the generation of meaningful initial estimates of the number and position of the peaks, and the choice of a suitable peak model. For the detection of meaningful peaks in multi-dimensional chromatograms, a fast data scanning algorithm was combined with prior resolution enhancement through the reduction of column and system broadening effects with the help of two-dimensional fast Fourier transforms. To capture the shape of skewed peaks in multi-dimensional chromatograms, a formalism for the accurate calculation of exponentially modified Gaussian peaks, one of the most popular models for skewed peaks, was extended for direct fitting of two-dimensional data. The method is demonstrated to successfully identify and deconvolute peaks hidden in strongly fused peak clusters. Incorporation of automatic analysis and reporting of the statistics of the fitted peak parameters and calculated properties makes it easy to identify in which regions of the chromatograms additional resolution is required for robust quantification. PMID:25841612
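A common closed form for the exponentially modified Gaussian peak model mentioned above uses the complementary error function. The 1-D sketch below (the paper extends the formalism to direct 2-D fitting) assumes a unit-area peak with Gaussian width sigma and exponential time constant tau; the parameter values are illustrative.

```python
import math

def emg(x, mu, sigma, tau):
    """Exponentially modified Gaussian (unit area): a Gaussian peak
    (mu, sigma) convolved with an exponential decay of time constant tau.
    Standard closed form via the complementary error function."""
    arg = (mu - x) / tau + sigma**2 / (2.0 * tau**2)
    z = (mu - x) / (math.sqrt(2.0) * sigma) + sigma / (math.sqrt(2.0) * tau)
    return math.exp(arg) * math.erfc(z) / (2.0 * tau)

# The exponential tail drags the apex to the right of mu
ys = [emg(x * 0.1, mu=5.0, sigma=0.5, tau=1.5) for x in range(150)]
```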
Flexible multi-dimensional modulation method for elastic optical networks
NASA Astrophysics Data System (ADS)
He, Zilong; Liu, Wentao; Shi, Sheping; Shen, Bailin; Chen, Xue; Gao, Xiqing; Zhang, Qi; Shang, Dongdong; Ji, Yongning; Liu, Yingfeng
2016-01-01
We demonstrate a flexible multi-dimensional modulation method for elastic optical networks. We compare the flexible multi-dimensional modulation formats PM-kSC-mQAM with traditional modulation formats PM-mQAM using numerical simulations in back-to-back and wavelength division multiplexed (WDM) transmission (50 GHz-spaced) scenarios at the same symbol rate of 32 Gbaud. The simulation results show that PM-kSC-QPSK and PM-kSC-16QAM can achieve obvious back-to-back sensitivity gains with respect to PM-QPSK and PM-16QAM at the expense of spectral efficiency reduction. The WDM transmission simulation results show that PM-2SC-QPSK can achieve a 57.5% increase in transmission reach compared to PM-QPSK, and a 48.5% increase for PM-2SC-16QAM over PM-16QAM. Furthermore, we also experimentally investigate the back-to-back performance of PM-2SC-QPSK, PM-4SC-QPSK, PM-2SC-16QAM and PM-3SC-16QAM, and the experimental results agree well with the numerical simulations.
Pauly, Anne; Wolf, Carolin; Mayr, Andreas; Lenz, Bernd; Kornhuber, Johannes; Friedland, Kristina
2015-01-01
Background In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. Methods 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the “Medication Adherence Report Scale” (MARS) and the “Drug Attitude Inventory” (DAI). Results The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73–1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15–2.72) more for intervention patients. Conclusion These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. Trial Registration German Clinical Trials Register DRKS00006358 PMID:26437449
Multi-Dimensional Damage Detection for Surfaces and Structures
NASA Technical Reports Server (NTRS)
Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah
2013-01-01
Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, two-dimensional thin-film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers as are necessary for the structure construction or to afford the detection detail level required. 
The damage is detected using a detector or sensory system, which may include a time domain reflectometer, resistivity monitoring hardware, or other resistance-based systems. To begin, a layered composite consisting of thin-film damage detection layers separated by non-damage detection layers is fabricated. The damage detection layers are attached to a detector that provides details regarding the physical health of each detection layer individually. If damage occurs to any of the detection layers, a change in the electrical properties of the damaged detection layers occurs, and a response is generated. Real-time analysis of these responses will provide details regarding the depth, location, and size estimation of the damage. Multiple damage sites can be detected, and the extent (depth) of the damage can be used to generate prognostic information related to the expected lifetime of the layered composite system. The detection system can be fabricated very easily using off-the-shelf equipment, and the detection algorithms can be written and updated (as needed) to provide the level of detail needed based on the system being monitored. Connecting to the thin film detection layers is very easy as well. The truly unique feature of the system is its flexibility; the system can be designed to gather as much (or as little) information as the end user feels necessary. Individual detection layers can be turned on or off as necessary, and algorithms can be used to optimize performance. The system can be used to generate both diagnostic and prognostic information related to the health of layer composite structures, which will be essential if such systems are utilized for space exploration. The technology is also applicable to other in-situ health monitoring systems for structure integrity.
2011-01-01
Background The concept of resilience has captured the imagination of researchers and policy makers over the past two decades. However, despite the ever growing body of resilience research, there is a paucity of relevant, comprehensive measurement tools. In this article, the development of a theoretically based, comprehensive multi-dimensional measure of resilience in adolescents is described. Methods Extensive literature review and focus groups with young people living with chronic illness informed the conceptual development of scales and items. Two sequential rounds of factor and scale analyses were undertaken to revise the conceptually developed scales using data collected from young people living with a chronic illness and a general population sample. Results The revised Adolescent Resilience Questionnaire comprises 93 items and 12 scales measuring resilience factors in the domains of self, family, peer, school and community. All scales have acceptable alpha coefficients. Revised scales closely reflect conceptually developed scales. Conclusions It is proposed that, with further psychometric testing, this new measure of resilience will provide researchers and clinicians with a comprehensive and developmentally appropriate instrument to measure a young person's capacity to achieve positive outcomes despite life stressors. PMID:21970409
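The "acceptable alpha coefficients" reported above refer to Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of that computation follows (the data are invented for illustration, not from the questionnaire):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one scale: `items` is a list of per-item
    score lists, one score per respondent, all lists the same length."""
    k = len(items)
    n = len(items[0])
    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Total scale score per respondent
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly correlated items give the maximum alpha of 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```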
A Multi-Dimensional Classification Model for Scientific Workflow Characteristics
Ramakrishnan, Lavanya; Plale, Beth
2010-04-05
Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows are increasingly used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedical, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.
Balance Properties of Multi-Dimensional Words
Berthé, Valérie; Tijdeman, Robert
A word u is called 1-balanced if, for any two factors v and w of u of equal length, the numbers of occurrences of a given letter in v and w differ by at most 1. The aim of this paper is to extend the notion of balance to multi-dimensional words.
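For intuition, the 1-balance property for ordinary (one-dimensional) binary words can be checked directly by brute force; the sketch below is illustrative only and does not implement the paper's multi-dimensional extension.

```python
def is_1_balanced(word):
    """A binary word is 1-balanced if any two factors (substrings) of
    equal length differ in their number of 1s by at most 1."""
    n = len(word)
    for length in range(1, n + 1):
        # Count the 1s in every factor of this length
        counts = {sum(int(c) for c in word[i:i + length])
                  for i in range(n - length + 1)}
        if max(counts) - min(counts) > 1:
            return False
    return True

# A prefix of the Fibonacci word (Sturmian) is 1-balanced; "11000011" is not
balanced = is_1_balanced("0100101001001")
unbalanced = is_1_balanced("11000011")
```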
Bootstrapping for Significance of Compact Clusters in Multi-Dimensional Datasets
Maitra, Ranjan
A bootstrap procedure is developed for assessing the significance of compact clusters in multi-dimensional datasets. The developed procedure compares two models and declares the more appropriate one. The procedure is illustrated on two well-known classification datasets and comprehensively evaluated.
Multi-Dimensional Analysis of Dynamic Human Information Interaction
ERIC Educational Resources Information Center
Park, Minsoo
2013-01-01
Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected into databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code can trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
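As a hedged illustration of the non-equilibrium chemistry sub-step such a code performs per cell and per hydro timestep, the sketch below integrates a toy two-body reaction A + A -> B with forward Euler; the rate constant is invented, not taken from the actual reaction-rate databases.

```python
def integrate_chemistry(n_a, n_b, rate, dt, steps):
    """Forward-Euler integration of a toy two-body reaction A + A -> B:
    dn_A/dt = -2 k n_A^2,  dn_B/dt = +k n_A^2.
    In a hydro code this sub-step would run per cell, per timestep."""
    for _ in range(steps):
        r = rate * n_a * n_a       # reaction rate per unit volume
        n_a += -2.0 * r * dt       # two A consumed per reaction
        n_b += r * dt              # one B produced per reaction
    return n_a, n_b

n_a, n_b = integrate_chemistry(n_a=1.0, n_b=0.0, rate=0.1, dt=0.01, steps=1000)
```

Note the scheme conserves n_A + 2 n_B exactly, which is a useful invariant to test chemistry solvers against.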
Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory
NASA Astrophysics Data System (ADS)
Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.
2014-01-01
Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. 
The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy
2011-01-01
Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across the gender differences of medical students. Conclusions The results supported the scale's multi-dimensionality. The 20-item JSPE provides a valid and reliable scale for measuring empathy not only in undergraduate and graduate medical education programmes but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268
Multi-dimensional ultra-high frequency passive radio frequency identification tag antenna designs
Delichatsios, Stefanie Alkistis
2006-01-01
In this thesis, we present the design, simulation, and empirical evaluation of two novel multi-dimensional ultra-high frequency (UHF) passive radio frequency identification (RFID) tag antennas, the Albano-Dipole antenna ...
Kang, InHan
2006-01-01
In this thesis, we present a system for visualizing hierarchical, multi-dimensional, memory-intensive datasets. Specifically, we designed an interactive system to visualize data collected by high-throughput microscopy and ...
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA is a more recent method that quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis), using the DFA, LSDFA, and DCCA methods. We find that stock correlations are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China with the world after the crisis. The cross-correlations differ across the six stock markets; among the three Chinese stocks, the cross-correlations are weakest during the global financial crisis.
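The core DFA computation summarized above (integrate the mean-subtracted series, detrend it in windows of size s, and measure the RMS fluctuation F(s) versus s) can be sketched generically; this is an illustrative implementation, not the authors' code, and the scale range and first-order (linear) detrending are assumed choices.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the fluctuation F(s) per scale s."""
    # Step 1: build the integrated profile of the mean-subtracted series.
    y = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            # Step 2: remove the least-squares linear trend in each window.
            coef = np.polyfit(t, seg, 1)
            detrended = seg - np.polyval(coef, t)
            rms.append(np.mean(detrended ** 2))
        # Step 3: RMS fluctuation at this scale.
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# The scaling exponent alpha is the slope of log F(s) versus log s;
# white noise should give alpha near 0.5, long-range correlated data > 0.5.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
F = dfa(white, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

In LSDFA, the same fit would instead be performed over local ranges of scales to obtain scale-dependent local exponents.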
A Comment on "On Some Contradictory Computations in Multi-dimensional Mathematics"
E. Capelas de Oliveira; W. A. Rodrigues Jr
2006-03-27
In this paper we analyze the status of some `unbelievable results' presented in the paper `On Some Contradictory Computations in Multi-Dimensional Mathematics' [1], published in Nonlinear Analysis, a journal indexed in the Science Citation Index. Among the unbelievable results `proved' in the paper we find statements such as: (i) a linear transformation which is a rotation in R^2 with rotation angle theta different from n*pi/2 is inconsistent with arithmetic; (ii) complex number theory is inconsistent. Besides these `results' of a mathematical nature, [1] also offers a `proof' that Special Relativity is inconsistent. We are thus left with only two options: (a) the results of [1] are correct, in which case we need a revolution in Mathematics (and also in Physics), or (b) the paper is a potpourri of nonsense. We show that option (b) is the correct one. All `proofs' appearing in [1] are trivially wrong, being based on a poor knowledge of advanced calculus notions. There are many examples (some of them discussed in [2,3,4,5,6]) of completely wrong papers using non sequitur Mathematics in the Physics literature. Taking into account that a paper like [1] appeared in a Mathematics journal, we think it is time for editors and referees of scientific journals to become more careful in order to avoid the dissemination of nonsense.
Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling
ERIC Educational Resources Information Center
Kim, Se-Kang; Davison, Mark L.; Frisby, Craig L.
2007-01-01
This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). Profile Analysis via Multidimensional Scaling (PAMS) is an exploratory method for identifying major…
The Multi-Dimensional Character and Mechanisms of Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Burrows, Adam; Dessart, Luc; Livne, Eli
2007-10-01
On this twentieth anniversary of the epiphany of SN1987A, we summarize various proposed explosion mechanisms for the generic core-collapse supernova. Whether the agency is neutrinos, acoustic power, magnetohydrodynamics, or some hybrid combination of these three, both multi-dimensional simulations and constraining astronomical measurements point to a key role for asphericities and instabilities in collapse and explosion dynamics. Moreover, different progenitors may explode in different ways, and have different observational signatures. Whatever these are, the complex phenomena being revealed through modern multi-dimensional numerical simulations manifest a richness that was little anticipated in the early years of theoretical supernova research, a richness that continues to challenge us today.
ERIC Educational Resources Information Center
Andreev, Valentin I.
2014-01-01
The main aim of this research is to disclose the essence of students' multi-dimensional thinking, also to reveal the rating of factors which stimulate the raising of effectiveness of self-development of students' multi-dimensional thinking in terms of subject-oriented teaching. Subject-oriented learning is characterized as a type of learning where…
Reshape scale method: A novel multi scale entropic analysis approach
NASA Astrophysics Data System (ADS)
Zandiyeh, P.; von Tscharner, V.
2013-12-01
The Reshape Scale (RS) method is introduced in this article as a novel approach to performing the multi-scale transition of sample entropy. The method quantifies the orderliness of a signal by determining the distance over which subsequent data points remain affiliated with one another. The Entropic Half Life (EnHL) is introduced to characterize such affiliation. The method was tested on 1/f^a processes for different values of the exponent a. Furthermore, the dependency of the multi-scale entropy analysis developed by Costa et al. (2002) [6] on the probability density function and the standard deviation of autoregressive signals was studied and discussed.
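The Costa et al. multi-scale entropy analysis that the article examines combines coarse-graining with sample entropy. A minimal sketch follows; the parameters m = 2 and r = 0.2, and the convention of fixing the tolerance from the original series' standard deviation, are assumptions for illustration, not the article's exact setup.

```python
import numpy as np

def sample_entropy(x, m, tol):
    """Sample entropy: -log of the conditional probability that runs of m
    points matching within tol (Chebyshev distance) also match at m+1."""
    x = np.asarray(x, dtype=float)
    def count(mm):
        # All overlapping templates of length mm.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    return -np.log(count(m + 1) / count(m))

def coarse_grain(x, scale):
    """Coarse-graining: averages over non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
tol = 0.2 * np.std(x)          # tolerance fixed from the original series
e1 = sample_entropy(x, 2, tol)
e5 = sample_entropy(coarse_grain(x, 5), 2, tol)
# For white noise, sample entropy falls as the coarse-graining scale grows,
# which is the scale dependence the RS method's EnHL seeks to characterize.
```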
Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity
Lavi, Ron
…mechanism design, where the machines are the strategic players. This is a multi-dimensional scheduling domain… …mechanisms [22, 20]. We study a well-motivated special case of this problem, where the processing time…
Multi-dimensional Network Security Game: How do attacker and defender battle on parallel targets?
Lui, John C.S.
Xu, Yuedong (Department of Electronic Engineering, Fudan University, China, ydxu@fudan.edu.cn); Lui, John C.S. (Department of Computer Science & Engineering, The Chinese University of Hong Kong, China)
Stability of shock waves for multi-dimensional hyperbolic-parabolic conservation laws
NASA Astrophysics Data System (ADS)
Li, Dening
1988-01-01
The uniform linear stability of shock waves is considered for quasilinear hyperbolic-parabolic coupled conservation laws in multi-dimensional space. As an example, the stability condition and its dynamic meaning for an isothermal shock wave in radiative hydrodynamics are analyzed.
Efficient Quantile Retrieval on Multi-Dimensional Data
Yiu, Man Lung
…, Nikos Mamoulis, and Yufei… …that the company is planning to launch a new plan corresponding to the black dot q. To evaluate the potential… …areas, and the black dots represent supermarkets. Each dashed polygon is the "influence region" [21]…
Evidencing Learning Outcomes: A Multi-Level, Multi-Dimensional Course Alignment Model
ERIC Educational Resources Information Center
Sridharan, Bhavani; Leitch, Shona; Watty, Kim
2015-01-01
This conceptual framework proposes a multi-level, multi-dimensional course alignment model to implement a contextualised constructive alignment of rubric design that authentically evidences and assesses learning outcomes. By embedding quality control mechanisms at each level for each dimension, this model facilitates the development of an aligned…
Multi-dimensional SLA-based Resource Allocation for Multi-tier Cloud Computing Systems
Pedram, Massoud
…Hadi… …for multi-tier applications in cloud computing is considered. An upper bound on the total profit… …on multi-tier architectures [6]. Each tier provides a defined service to the next tiers and uses services…
A Relevance-Extended Multi-dimensional Model for a Data Warehouse Contextualized with Documents
Song, Il-Yeol
…{…,berlanga,aramburu}@uji.es; Torben Bach Pedersen, Aalborg University, tbp@cs.aau.dk. Abstract: Current data warehouse and OLAP… …warehouse with a document warehouse, resulting in a contextualized warehouse. Thus, contextualized…
Efficient Addressing of Multi-dimensional Signal Constellations Using a Lookup Table
Kabal, Peter
A. K… …a lookup table for the addressing of an optimally shaped constellation. The method is based on partitioning… …constellation. 1 Introduction: In shaping, one tries to reduce the average energy of a signal constellation…
A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system
Asadzadeh, M.; Bartoszek, K.
2011-05-20
We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system, based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.
Address Decomposition for the Shaping of Multi-dimensional Signal Constellations
Kabal, Peter
A. K… …constellation. This scheme, called the address decomposition, is based on decomposing the addressing… …This is called a signal constellation. The constellation points are usually selected as a finite subset…
Dredze, Mark
Experimenting with Drugs (and Topic Models): Multi-Dimensional Exploration of Recreational Drug… …of new recreational drugs and trends requires mining current information from non-traditional text… …components. The resulting model learns factors that correspond to drug type, delivery method (smoking…
Multi-dimensional simulations of helium shell flash convection
Herwig, Falk
F. Herwig, B. Freytag, R. M… …to the shallow surface convection in A-type stars studied by Freytag et al. (1996), coherently moving convective… …-d stellar evolution codes (Fig. 1) have to adopt simplifying assumptions on convection-induced mixing…
ERIC Educational Resources Information Center
Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd
2012-01-01
The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…
ERIC Educational Resources Information Center
Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen
2011-01-01
Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…
Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter
ERIC Educational Resources Information Center
Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.
2010-01-01
We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…
Scale-PC shielding analysis sequences
Bowman, S.M.
1996-05-01
The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.
Reply to Adams: Multi-Dimensional Edge Interference
Eagle, Nathan N.
We completely agree with Adams that, in social network analysis, the particular research question should drive the definition of what constitutes a tie (1). However, we believe that even studies of inherently social ...
Probabilistic Models for Incomplete Multi-dimensional Arrays Yahoo! Labs.
Edinburgh, University of
…College Blvd., Santa Clara, CA, USA, chuwei@yahoo-inc.com; Zoubin Ghahramani, Dept. of Engineering, Univ. of Cambridge, Cambridge, UK, zoubin@eng.cam.ac.uk. Abstract: In multiway data, each sample is measured by multiple… …high-order tensor decomposition to integrative analysis of DNA microarray data from series…
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
NASA Astrophysics Data System (ADS)
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws
NASA Astrophysics Data System (ADS)
Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu
2013-11-01
We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, and essentially dispersionless, spectral, solution away from discontinuities, as well as mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. 
The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi-domain strategy. For the same RMI test, the hybrid computation on a single GPU, in double-precision arithmetic, displays a five- to six-fold speedup over the hybrid computation on a single CPU. The relative speedup of the hybrid computation over the WENO computation on GPUs is similar to that on CPUs, demonstrating the advantage of the hybrid-scheme technique on both CPUs and GPUs.
Publishing and sharing multi-dimensional image data with OMERO.
Burel, Jean-Marie; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Moore, Josh; Swedlow, Jason R
2015-10-01
Imaging data are used in the life and biomedical sciences to measure the molecular and structural composition and dynamics of cells, tissues, and organisms. Datasets range in size from megabytes to terabytes and usually contain a combination of binary pixel data and metadata that describe the acquisition process and any derived results. The OMERO image data management platform allows users to securely share image datasets according to specific permissions levels: data can be held privately, shared with a set of colleagues, or made available via a public URL. Users control access by assigning data to specific Groups with defined membership and access rights. OMERO's Permission system supports simple data sharing in a lab, collaborative data analysis, and even teaching environments. OMERO software is open source and released by the OME Consortium at www.openmicroscopy.org . PMID:26223880
Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose
2014-01-01
Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589
Hitchhiker's guide to multi-dimensional plant pathology.
Saunders, Diane G O
2015-02-01
Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data and only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, standard 3 × 3 window-based algorithms are used for this purpose, thus tying the scale of the resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between the scale domains of terrain information and the soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood sizes. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 × 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 × 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
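The generalize-then-correlate loop described above (focal mean at increasing neighborhood sizes, correlation against crop yield at each scale level, retain the correlation peak) can be sketched generically. This is not the SCALA workflow itself: the summed-area-table focal mean, the window list, and the synthetic curvature/yield grids are all illustrative assumptions.

```python
import numpy as np

def focal_mean(grid, w):
    """w x w moving average (focal statistics), w odd; edges handled by padding.
    Uses a summed-area table so each window sum costs O(1)."""
    pad = w // 2
    g = np.pad(grid, pad, mode='edge')
    S = np.zeros((g.shape[0] + 1, g.shape[1] + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(g, axis=0), axis=1)
    n, m = grid.shape
    return (S[w:w + n, w:w + m] - S[0:n, w:w + m]
            - S[w:w + n, 0:m] + S[0:n, 0:m]) / (w * w)

def best_scale(attr, target, windows):
    """Correlate a terrain attribute with target data at increasing focal
    scales; return (window, r) at the strongest correlation."""
    rs = [np.corrcoef(focal_mean(attr, w).ravel(), target.ravel())[0, 1]
          for w in windows]
    i = int(np.argmax(np.abs(rs)))
    return windows[i], rs[i]

# Synthetic example: the "yield" responds to curvature smoothed at an 11-cell
# scale, so the correlation should peak near that neighborhood size.
rng = np.random.default_rng(2)
curv = rng.standard_normal((80, 80))                 # stand-in for mean curvature
yield_map = focal_mean(curv, 11) + 0.1 * rng.standard_normal((80, 80))
w, r = best_scale(curv, yield_map, [1, 3, 5, 7, 9, 11, 13])
```

The article's workflow retains the first local correlation peak rather than the global maximum; with a single-peaked correlation curve, as here, the two coincide.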
Scale Free Reduced Rank Image Analysis.
ERIC Educational Resources Information Center
Horst, Paul
In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…
Giant Leaps and Minimal Branes in Multi-Dimensional Flux Landscapes
Adam R. Brown; Alex Dahlen
2011-09-14
There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'minimal' brane that wraps the internal manifold and exhibits behavior not visible in the effective theory. We discuss the implications for the cosmological constant.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot-noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes that are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent spontaneous emission, as tests of the shot-noise algorithm's correctness.
2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2001-01-01
A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.
Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.
Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen
2014-05-01
Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well as the results from its application to differently sized parks in Berlin and Salzburg. PMID:24740619
Toward a Minimum Criterion of Multi-Dimensional Instanton Formation for Condensed Matter Systems?
A. W. Beckwith
2008-02-05
Our paper generalizes techniques initially developed explicitly for charge density wave (CDW) applications to what is needed for multi-dimensional instantons forming in complex condensed matter systems. This involves necessary conditions for the formation of a soliton-antisoliton pair, assuming a minimum distance between charge centers, and discusses the prior density wave physics example as to why a Peierls gap term is added to the tilted washboard potential to ensure the formation of scalar potential fields. We state that much the same methodology is needed for higher-dimensional condensed matter systems, giving an explicit reference to two-dimensional instantons as presented by Lu, and indicating that further development in higher dimensions is warranted.
Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M.
2015-01-01
Changes in gait patterns provide important information about individuals’ health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measure of 98% (recall 98%, precision 98%) for 40 m walk tests and of 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
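The core of the segmentation step, subsequence DTW, can be sketched in a few lines: the template may start anywhere in the signal (free start row), and the best-matching span is read off the final cost row and recovered by backtracking. This is a minimal 1D illustration with a synthetic stride shape, not the authors' msDTW implementation, which operates on multi-dimensional gyroscope and accelerometer signals.

```python
import numpy as np

def subsequence_dtw(template, signal):
    """Subsequence DTW: align a short template anywhere inside a longer
    signal.  Returns (cost, start, end) of the best-matching span."""
    n, m = len(template), len(signal)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                      # free start anywhere in the signal
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(template[i - 1] - signal[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1     # best end column (1-based)
    i, j = n, end                          # backtrack to find the start
    while i > 1:
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, end], j - 1, end

template = [0.0, 1.0, 0.0, -1.0, 0.0]        # one synthetic "stride" shape
signal = [0.0] * 5 + template * 3 + [0.0] * 5  # three strides in a row
cost, start, end = subsequence_dtw(np.array(template), np.array(signal))
print(cost, start, end)
```

In practice the search is repeated over the signal to extract every stride, and the distance is computed over all sensor axes at once.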
A multi-dimensional scale for repositioning public park and recreation services
Kaczynski, Andrew Thomas
2004-09-30
-item instrument was developed that encompasses nine distinct dimensions: Preventing Youth Crime, Environmental Stewardship, Enhancing Real Estate Values, Attracting and Retaining Businesses, Attracting and Retaining Retirees, Improving Community Health...
NASA Astrophysics Data System (ADS)
Ii, Satoshi; Sugiyama, Kazuyasu; Takeuchi, Shintaro; Takagi, Shu; Matsumoto, Yoichiro; Xiao, Feng
2012-03-01
An interface capturing method with a continuous function is proposed within the framework of the volume-of-fluid (VOF) method. Being different from the traditional VOF methods that require a geometrical reconstruction and identify the interface by a discontinuous Heaviside function, the present method makes use of the hyperbolic tangent function (known as one of the sigmoid type functions) in the tangent of hyperbola interface capturing (THINC) method [F. Xiao, Y. Honma, K. Kono, A simple algebraic interface capturing scheme using hyperbolic tangent function, Int. J. Numer. Methods Fluids 48 (2005) 1023-1040] to retrieve the interface in an algebraic way from the volume-fraction data of multi-component materials. Instead of the 1D reconstruction in the original THINC method, a multi-dimensional hyperbolic tangent function is employed in the present new approach. The present scheme resolves moving interface with geometric faithfulness and compact thickness, and has at least the following advantages: (1) the geometric reconstruction is not required in constructing piecewise approximate functions; (2) besides a piecewise linear interface, curved (quadratic) surface can be easily constructed as well; and (3) the continuous multi-dimensional hyperbolic tangent function allows the direct calculations of derivatives and normal vectors. Numerical benchmark tests including transport of moving interface and incompressible interfacial flows are presented to validate the numerical accuracy for interface capturing and to show the capability for practical problems such as a stationary circular droplet, a drop oscillation, a shear-induced drop deformation and a rising bubble.
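The algebraic, geometry-free reconstruction idea can be illustrated in 1D: with the interface represented as C(x) = 0.5·(1 + tanh(β(x − x0))), the cell-averaged volume fraction determines x0 via a simple root-find, with no geometric reconstruction. A minimal sketch on a unit cell, with an assumed sharpness parameter β:

```python
import numpy as np

BETA = 3.5  # assumed interface sharpness parameter

def cell_average(x0, beta=BETA):
    """Exact average of C(x) = 0.5*(1 + tanh(beta*(x - x0))) over [0, 1],
    using the antiderivative of tanh."""
    return 0.5 + (np.log(np.cosh(beta * (1 - x0))) -
                  np.log(np.cosh(beta * x0))) / (2 * beta)

def reconstruct_interface(alpha, beta=BETA, tol=1e-12):
    """Recover the interface location x0 from the volume fraction alpha
    by bisection; cell_average is monotonically decreasing in x0."""
    lo, hi = -1.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cell_average(mid, beta) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0_true = 0.37
alpha = cell_average(x0_true)       # forward: interface -> volume fraction
x0_rec = reconstruct_interface(alpha)  # inverse: volume fraction -> interface
print(x0_rec)
```

The paper's scheme does the multi-dimensional analogue of this inversion, which is why derivatives and normal vectors can be taken directly from the continuous tanh profile.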
NASA Astrophysics Data System (ADS)
Schaerer, Joël; Fassi, Aurora; Riboldi, Marco; Cerveri, Pietro; Baroni, Guido; Sarrut, David
2012-01-01
Real-time optical surface imaging systems offer a non-invasive way to monitor intra-fraction motion of a patient's thorax surface during radiotherapy treatments. Due to lack of point correspondence in dynamic surface acquisition, such systems cannot currently provide 3D motion tracking at specific surface landmarks, as available in optical technologies based on passive markers. We propose to apply deformable mesh registration to extract surface point trajectories from markerless optical imaging, thus yielding multi-dimensional breathing traces. The investigated approach is based on a non-rigid extension of the iterative closest point algorithm, using a locally affine regularization. The accuracy in tracking breathing motion was quantified in a group of healthy volunteers, by pair-wise registering the thoraco-abdominal surfaces acquired at three different respiratory phases using a clinically available optical system. The motion tracking accuracy proved to be maximal in the abdominal region, where breathing motion mostly occurs, with average errors of 1.09 mm. The results demonstrate the feasibility of recovering multi-dimensional breathing motion from markerless optical surface acquisitions by using the implemented deformable registration algorithm. The approach can potentially improve respiratory motion management in radiation therapy, including motion artefact reduction or tumour motion compensation by means of internal/external correlation models.
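The rigid core of the registration approach can be sketched as classical ICP: alternate nearest-neighbour correspondence with a least-squares (Kabsch) pose update. The paper's method extends this with a non-rigid, locally affine regularization, which is omitted in this illustrative sketch on synthetic point clouds.

```python
import numpy as np

def nn_dist(a, b):
    """RMS distance from each point of a to its nearest neighbour in b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1).mean())

def icp_rigid(src, dst, iters=30):
    """Rigid ICP: repeatedly match each source point to its nearest
    destination point, then apply the best rigid transform (Kabsch)."""
    src = src.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]          # current correspondences
        mu_s, mu_m = src.mean(0), match.mean(0)
        H = (src - mu_s).T @ (match - mu_m)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
    return src

rng = np.random.default_rng(3)
surface = rng.normal(size=(80, 3))              # "reference" surface points
Rz = np.array([[np.cos(0.3), -np.sin(0.3), 0.0],
               [np.sin(0.3),  np.cos(0.3), 0.0],
               [0.0, 0.0, 1.0]])
moved = surface @ Rz.T + np.array([0.2, -0.1, 0.05])  # another phase
before = nn_dist(surface, moved)
aligned = icp_rigid(surface, moved)
after = nn_dist(aligned, moved)
print(before, after)
```

The non-rigid extension replaces the single global transform with one affine transform per vertex, tied together by a smoothness penalty, which is what allows breathing deformation to be tracked point by point.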
Multi-Dimensional Validation Impact Tests on PZT 95/5 and ALOX
NASA Astrophysics Data System (ADS)
Furnish, M. D.; Robbins, J.; Trott, W. M.; Chhabildas, L. C.; Lawrence, R. J.; Montgomery, S. T.
2002-07-01
Multi-dimensional impact tests were conducted on the ferroelectric ceramic PZT 95/5 and alumina-loaded epoxy (ALOX) encapsulants, with the purpose of providing benchmarks for material models in the ALEGRA wavecode. Diagnostics included line-imaging VISAR (velocity interferometry), a key diagnostic for such tests. Results from four tests conducted with ALOX cylinders impacted by nonplanar copper projectiles were compared with ALEGRA simulations. The simulations produced approximately correct attenuations and divergence, but somewhat higher wave velocities. Several sets of tests conducted using PZT rods (length:diameter ratio = 5:1) encapsulated in ALOX, and diagnosed with line-imaging and point VISAR, were modeled as well. Significant improvement in the agreement of wave arrival times and waveforms for the two-material multi-dimensional experiments was achieved by simultaneous optimization of multiple parameters against multiple one-dimensional experiments. Additionally, a variable-friction interface was studied in these calculations. We conclude that further parameter optimization is required for both material models.
Evaluation of an Expanded Disability Status Scale (EDSS) modeling strategy in multiple sclerosis.
Cao, Hua; Peyrodie, Laurent; Agnani, Olivier; Cavillon, Fabrice; Hautecoeur, Patrick; Donzé, Cécile
2015-11-01
The Expanded Disability Status Scale (EDSS) is the most widely used scale to evaluate the degree of neurological impairment in multiple sclerosis (MS). In this paper, we report on the evaluation of an EDSS modeling strategy based on recurrence quantification analysis (RQA) of posturographic data (i.e., center of pressure, COP). A total of 133 volunteers with EDSS scores ranging from 0 to 4.5 participated in this study; posturographic data were recorded with eyes closed. After selection of the time delay (τ), the embedding dimension (m) and the threshold (radius, r) used to identify recurrent points, several RQA measures were calculated for each COP position and velocity signal in the mono- and multi-dimensional RQAs. Estimation results led to the selection of the recurrence rate (RR) of the COP position as the most pertinent RQA measure. The performance of the models against raw and noisy data was higher in the mono-dimensional analysis than in the multi-dimensional one. This study suggests that mono-dimensional RQA of the posturographic signal is a more pertinent method to quantify disability in MS than multi-dimensional RQA. PMID:26345244
Singh, Brajesh K.; Srivastava, Vineet K.
2015-01-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
Akhter, T.; Hossain, M. M.; Mamun, A. A.
2012-09-15
Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing the results of this method with those of other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
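The basic photon-bundle machinery underlying such Monte Carlo formulations can be sketched for the much simpler case of a gray, purely absorbing 1D slab with cold black walls (the paper's correlated nongray treatment is far more involved): sample an emission point and an isotropic direction, draw an exponentially distributed optical path, and tally whether the bundle escapes to a wall.

```python
import numpy as np

rng = np.random.default_rng(0)
tau0 = 1.0          # optical thickness of the slab
n = 200_000         # photon bundles

tau_emit = rng.uniform(0.0, tau0, n)   # uniform emission inside the slab
mu = rng.uniform(-1.0, 1.0, n)         # isotropic direction cosine
s = -np.log(rng.uniform(size=n))       # optical path before absorption

# optical distance from the emission point to the wall along the ray
to_wall = np.where(mu > 0, (tau0 - tau_emit) / mu, tau_emit / -mu)
escaped = s > to_wall                  # bundle reaches a wall unabsorbed
print(escaped.mean())                  # escape fraction of emitted energy
```

The correlated nongray formulations replace the single gray extinction law with spectrally correlated transmittances along each path, but the emit/trace/tally structure is the same.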
Dimensional Analysis, Scaling, and Similarity 1. Systems of units
Hunter, John K.
Lecture notes (Lecture 2) on dimensional analysis, scaling, and similarity, beginning with systems of units. The notes discuss cases where it is convenient to use a logarithmic scale of units instead of a linear scale (such as the Richter scale), and observe that the freedom in the choice of units corresponds to a scale-invariance of the model.
Markov Chain Analysis for Large-Scale Grid Systems
NISTIR 7566, Christopher Dabrowski and Fern Hunt. Abstract: In large
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
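The routing logic described in the claim can be sketched as dimension-ordered hops toward the root node: advance along one dimension until the packet reaches the root's coordinate for that dimension, then switch to the next dimension. This is a simplified model; the subrectangle edge checks, link hardware, and contention handling of the patent are omitted.

```python
def route_to_root(start, root):
    """Dimension-ordered routing sketch: returns the list of nodes
    visited while routing a packet from start to root, one dimension
    at a time, one hop per step."""
    path = [tuple(start)]
    node = list(start)
    for dim in range(len(root)):
        while node[dim] != root[dim]:
            node[dim] += 1 if node[dim] < root[dim] else -1   # one hop
            path.append(tuple(node))
    return path

hops = route_to_root((2, 0, 3), (0, 1, 1))
print(hops[-1], len(hops) - 1)   # final node and total hop count
```

The hop count equals the Manhattan distance between the two nodes, which is the minimum possible for this kind of grid routing.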
Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae
Tanaka, Masaomi; Maeda, Keiichi; Mazzali, Paolo A.; Nomoto, Ken'ichi
2008-05-21
We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.
NASA Astrophysics Data System (ADS)
Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai
2014-03-01
RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of reducing experimental time as compared to data collection via two separate experiments, but also facilitates the unambiguous sequential linking of backbone amino acid residues. The potential of the sequential 1H data acquisition procedure in the study of RNA is also demonstrated here.
Multi-dimensional single-spin nano-optomechanics with a levitated nanodiamond
NASA Astrophysics Data System (ADS)
Neukirch, Levi P.; von Haartman, Eva; Rosenholm, Jessica M.; Nick Vamivakas, A.
2015-10-01
Considerable advances made in the development of nanomechanical and nano-optomechanical devices have enabled the observation of quantum effects, improved sensitivity to minute forces, and provided avenues to probe fundamental physics at the nanoscale. Concurrently, solid-state quantum emitters with optically accessible spin degrees of freedom have been pursued in applications ranging from quantum information science to nanoscale sensing. Here, we demonstrate a hybrid nano-optomechanical system composed of a nanodiamond (containing a single nitrogen-vacancy centre) that is levitated in an optical dipole trap. The mechanical state of the diamond is controlled by modulation of the optical trapping potential. We demonstrate the ability to imprint the multi-dimensional mechanical motion of the cavity-free mechanical oscillator into the nitrogen-vacancy centre fluorescence and manipulate the mechanical system's intrinsic spin. This result represents the first step towards a hybrid quantum system based on levitating nanoparticles that simultaneously engages optical, phononic and spin degrees of freedom.
Giant Leaps and Monkey Branes in Multi-Dimensional Flux Landscapes
Brown, Adam R
2010-01-01
There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'monkey brane' that wraps the internal manifold and exhibits behavior not visible in the effective theory. The implications for the Bousso-Polchinski landscape are discussed.
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
Two-photon speckle as a probe of multi-dimensional entanglement
C. W. J. Beenakker; J. W. F. Venderbos; M. P. van Exter
2009-01-15
We calculate the statistical distribution P_2(I_2) of the speckle pattern produced by a photon pair current I_2 transmitted through a random medium, and compare with the single-photon speckle distribution P_1(I_1). We show that the purity Tr rho^2 of a two-photon density matrix rho can be directly extracted from the first two moments of P_1 and P_2. A one-to-one relationship is derived between P_1 and P_2 if the photon pair is in an M-dimensional entangled pure state. For M>>1 the single-photon speckle disappears, while the two-photon speckle acquires an exponential distribution. The exponential distribution transforms into a Gaussian if the quantum entanglement is degraded to a classical correlation of M>>1 two-photon states. Two-photon speckle can therefore discriminate between multi-dimensional quantum and classical correlations.
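The single-photon reference statistics P_1 invoked here are the classical Rayleigh speckle statistics: the intensity of a coherent sum of many random phasors is exponentially distributed, so its mean equals its standard deviation and the normalized second moment ⟨I²⟩/⟨I⟩² equals 2. A quick numerical check of that baseline (an illustration of the classical limit, not of the two-photon calculation):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_scatterers = 20_000, 100

# Each sample: coherent sum of many unit phasors with random phases.
# The intensity I = |E|^2 is then exponentially distributed with mean 1.
phases = rng.uniform(0, 2 * np.pi, (n_samples, n_scatterers))
E = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
I = np.abs(E) ** 2

print(I.mean(), I.std(), (I ** 2).mean() / I.mean() ** 2)
```

It is deviations of the measured moments of P_1 and P_2 from these classical values that, per the abstract, encode the purity of the two-photon state.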
Multi-Dimensional, Non-Contact Metrology using Trilateration and High Resolution FMCW Ladar
Mateo, Ana Baselga
2015-01-01
Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multi-dimensional, non-contact length metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration based design provides 3D resolution inherently independent of stand-off range and allows self-calibration to provide flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly-stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision was limited by laser speckle issues caused by diffuse scattering from the targets.
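The trilateration step can be sketched as a linear least-squares solve: subtracting one range equation from the others eliminates the quadratic term in the unknown position, leaving a linear system in the coordinates. A minimal 3D sketch with made-up emitter positions and noise-free distances:

```python
import numpy as np

def trilaterate(emitters, dists):
    """Recover a 3D point from distance measurements to known emitter
    positions.  Linearize |x - p_i|^2 = d_i^2 against the first emitter:
    2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (d_i^2 - d_0^2)."""
    p = np.asarray(emitters, float)
    d = np.asarray(dists, float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

emitters = [(0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 3)]   # assumed geometry
target = np.array([1.0, 1.2, 0.7])
dists = [np.linalg.norm(target - np.array(e)) for e in emitters]
estimate = trilaterate(emitters, dists)
print(estimate)
```

With real FMCW length measurements, the same least-squares structure absorbs redundant emitters and measurement noise, which is what makes the self-calibrating field setup described above possible.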
Racial-ethnic self-schemas: Multi-dimensional identity-based motivation
Oyserman, Daphna
2008-01-01
Prior self-schema research focuses on benefits of being schematic vs. aschematic in stereotyped domains. The current studies build on this work, examining racial-ethnic self-schemas as multi-dimensional, containing multiple, conflicting, and non-integrated images. A multidimensional perspective captures complexity; examining net effects of dimensions predicts within-group differences in academic engagement and well-being. When racial-ethnicity self-schemas focus attention on membership in both in-group and broader society, engagement with school should increase since school is not seen as out-group defining. When racial-ethnicity self-schemas focus attention on inclusion (not obstacles to inclusion) in broader society, risk of depressive symptoms should decrease. Support for these hypotheses was found in two separate samples (8th graders, n = 213, 9th graders followed to 12th grade n = 141). PMID:19122837
Han, Xianlin; Yang, Kui; Gross, Richard W.
2011-01-01
Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525
Ionizing shocks in argon. Part II: Transient and multi-dimensional effects
Kapper, M. G.; Cambier, J.-L.
2011-06-01
We extend the computations of ionizing shocks in argon to the unsteady and multi-dimensional, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.
Scaling analysis of negative differential thermal resistance
NASA Astrophysics Data System (ADS)
Chan, Ho-Kei; He, Dahai; Hu, Bambi
2014-05-01
Negative differential thermal resistance (NDTR) can be generated for any one-dimensional heat flow with a temperature-dependent thermal conductivity. In a system-independent scaling analysis, the general condition for the occurrence of NDTR is found to be an inequality involving three scaling exponents: n1·n2 < -(1 + n3), where n1 ∈ (-∞, +∞) describes a particular way of varying the temperature difference, and n2 and n3 describe, respectively, the dependence of the thermal conductivity on the average temperature and on the temperature difference. For cases with a temperature-dependent thermal conductivity, i.e. n2 ≠ 0, NDTR can always be generated with a suitable choice of n1 such that this inequality is satisfied. The results explain the illusory absence of an NDTR regime in certain lattices and predict new ways of generating NDTR, where such predictions have been verified numerically. The analysis will provide insights for the design of thermal devices and for the manipulation of heat flow in experimental systems, such as nanotubes.
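The inequality can be made concrete with a power-law parameterization (our illustrative assumption, consistent with the stated roles of the exponents): take κ = T̄^n2 · ΔT^n3 and vary the mean temperature as T̄ ∝ ΔT^n1. The flux J = κ·ΔT then scales as ΔT^(n1·n2 + n3 + 1), which decreases with growing ΔT exactly when n1·n2 < -(1 + n3).

```python
import numpy as np

def flux(dT, n1, n2, n3, c=1.0):
    """Heat flux J = k * dT with conductivity k = Tbar**n2 * dT**n3 and
    mean temperature varied along the path Tbar = c * dT**n1
    (illustrative power-law parameterization of the three exponents)."""
    Tbar = c * dT ** n1
    return Tbar ** n2 * dT ** n3 * dT

def has_ndtr(n1, n2, n3):
    """Numerically: does the flux decrease as dT grows (NDTR)?"""
    dT = np.linspace(1.0, 2.0, 100)
    J = flux(dT, n1, n2, n3)
    return bool(np.all(np.diff(J) < 0))

results = []
for n1, n2, n3 in [(-3.0, 1.0, 0.0), (1.0, 1.0, 0.0)]:
    results.append((n1 * n2 < -(1 + n3), has_ndtr(n1, n2, n3)))
print(results)
```

The first exponent triple satisfies the inequality and shows NDTR; the second violates it and the flux grows monotonically, matching the scaling prediction.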
Optimal sensor configuration for flexible structures with multi-dimensional mode shapes
NASA Astrophysics Data System (ADS)
Chang, Minwoo; Pakzad, Shamim N.
2015-05-01
A framework for deciding the optimal sensor configuration is implemented for civil structures with multi-dimensional mode shapes, which enhances the applicability of structural health monitoring for existing structures. Optimal sensor placement (OSP) algorithms are used to determine the best sensor configuration for structures with a priori knowledge of modal information. The signal strength at each node is evaluated by effective independence and modified variance methods. Euclidean norm of signal strength indices associated with each node is used to expand OSP applicability into flexible structures. The number of sensors for each method is determined using the threshold for modal assurance criterion (MAC) between estimated (from a set of observations) and target mode shapes. Kriging is utilized to infer the modal estimates for unobserved locations with a weighted sum of known neighbors. A Kriging model can be expressed as a sum of linear regression and random error which is assumed as the realization of a stochastic process. This study presents the effects of Kriging parameters for the accurate estimation of mode shapes and the minimum number of sensors. The feasible ranges to satisfy MAC criteria are investigated and used to suggest the adequate searching bounds for associated parameters. The finite element model of a tall building is used to demonstrate the application of optimal sensor configuration. The dynamic modes of flexible structure at centroid are appropriately interpreted into the outermost sensor locations when OSP methods are implemented. Kriging is successfully used to interpolate the mode shapes from a set of sensors and to monitor structures associated with multi-dimensional mode shapes.
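The effective independence (EI) method referenced above can be sketched as iterative deletion: at each step, discard the candidate node whose row contributes least to the linear independence of the target mode shapes (the diagonal of the projection matrix built from the mode-shape matrix). A minimal sketch with two assumed analytic mode shapes standing in for the finite element modes:

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """EI sensor placement sketch: iteratively remove the node with the
    smallest effective-independence index, the diagonal of
    P (P^T P)^{-1} P^T for the retained rows P of the mode-shape matrix."""
    idx = list(range(phi.shape[0]))
    while len(idx) > n_sensors:
        P = phi[idx]
        ed = np.einsum('ij,ji->i', P @ np.linalg.inv(P.T @ P), P.T)
        idx.pop(int(np.argmin(ed)))      # drop least-contributing node
    return idx

nodes = np.linspace(0, 1, 20)
# two assumed analytic "mode shapes" over the candidate nodes (illustrative)
phi = np.column_stack([np.sin(np.pi * nodes / 2),
                       np.sin(3 * np.pi * nodes / 2)])
sensors = effective_independence(phi, 4)
print(sensors)
```

The retained rows keep the mode-shape matrix full rank, which is the property the MAC threshold in the abstract is checking; the Kriging step then interpolates the modes back onto the discarded nodes.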
TWO-DIMENSIONAL CORE-COLLAPSE SUPERNOVA MODELS WITH MULTI-DIMENSIONAL TRANSPORT
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun (E-mail: burrows@astro.princeton.edu)
2015-02-10
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying ''ray-by-ray'' approach employed by all other groups may be compromising their results. We show that ''ray-by-ray'' calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.
Bengtsson, Henrik; Hössjer, Ola
2006-01-01
Background Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, and they have a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. Results A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods, such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, quantile normalization, and dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. Conclusion We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and ensures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails.
All methods are made available in the aroma package, which is a platform-independent package for R. PMID:16509971
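The affine idea above can be sketched in a few lines. This is a minimal illustration assuming the model y_c = a_c + b_c*x per channel c, with crude offset/scale estimators standing in for the aroma package's robust method:

```python
import numpy as np

def affine_normalize(Y):
    """Toy affine normalization of an (n_spots, n_channels) intensity
    matrix. Assumes each observed channel follows y_c = a_c + b_c * x
    and removes the channel-specific offset a_c and scale b_c using
    crude robust estimates (placeholders for the paper's estimator)."""
    Y = np.asarray(Y, dtype=float)
    a = np.percentile(Y, 1, axis=0)     # crude per-channel background offset
    centered = Y - a
    b = np.median(centered, axis=0)     # crude per-channel scale
    b = b / b.mean()                    # keep the overall intensity level
    return centered / b
```

After such a normalization, the channels of a non-differentially-expressed spot carry equal signal, so its log-ratio is centered at zero, which is the symmetry property the abstract emphasizes.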
Clement, Prabhakar
Multi-species, bio-reactive or radioactive transport problems involving a sequential first-order decay reaction chain with distinct retardation factors. Cristhian R. Quezada, T. Prabhakar Clement. (Fragment: multi-dimensional, multi-species transport problems coupled with a first-order reaction network.)
ERIC Educational Resources Information Center
Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu
2012-01-01
Cognitive development of the learners is the prime task of each and every stage of our school education, and its importance, especially at the elementary stage, is quite worth mentioning. The present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…
NASA Astrophysics Data System (ADS)
Kuznetsova, T. F.; Eremenko, S. I.
2015-07-01
Samples of multi-dimensional nanoporous aluminosilicate with the composition (25% Al2O3-75% SiO2) are synthesized using the template effect of supramolecular cetylpyridinium chloride. The samples are studied by means of low-temperature nitrogen static adsorption-desorption, X-ray diffraction, scanning electron microscopy, and FT-IR spectroscopy. Changes in the specific surface area, volume, and DFT distribution of mesopores are shown to depend on the template concentration, annealing temperature, and sample training temperature prior to analysis. When using 5.0% of the template, we observe the formation of an aluminosilicate mesophase with a three-dimensional MCM-48 cubic pore system, homogeneous mesoporosity, and the excellent textural characteristics typical of a well-organized cellular structure.
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread option.
Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
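The two-asset Kirk (1995) building block that the thesis generalizes to basket-spreads can be sketched as follows. This is the standard published formula; the forward-based parameterization and function names here are ours:

```python
import math

def black_N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk (1995) approximation for a European call on the spread
    F1 - F2 with strike K, written on forwards with discount rate r."""
    a = F2 / (F2 + K)                 # weight of the second asset
    sig = math.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * a + (sigma2 * a)**2)
    d1 = (math.log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return math.exp(-r * T) * (F1 * black_N(d1) - (F2 + K) * black_N(d2))
```

Setting F2 = 0 and sigma2 = 0 recovers the Black formula on F1 with strike K, a convenient sanity check; lowering the correlation rho widens the effective spread volatility and raises the price.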
Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings
NASA Astrophysics Data System (ADS)
Ma, Peifeng; Lin, Hui; Lan, Hengxing; Chen, Fulong
2015-08-01
Deformation often occurs in buildings at early ages, and the constant inspection of deformation is of significant importance for discovering possible cracking and avoiding wall failure. This paper exploits the multi-dimensional SAR tomography technique to monitor the deformation behavior of two newly built buildings (B1 and B2), with a special focus on the effects of concrete creep and shrinkage. To separate the nonlinear thermal expansion from the total deformations, the extended 4-D SAR technique is exploited. The thermal map estimated from 44 TerraSAR-X images demonstrates that the derived thermal amplitude is highly related to the building height due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that B1 is subject to settlement during the construction period; in addition, the creep and shrinkage of B1 lead to wall shortening, a height-dependent movement in the downward direction, while the asymmetrical creep of B2 triggers wall deflection, a height-dependent movement in the deflection direction. It is also validated that the extended 4-D SAR can rectify the bias of the wall shortening and wall deflection estimated by 4-D SAR.
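A minimal sketch of the kind of decomposition described above, assuming the commonly used per-pixel model d(t) = v*t + a_T*T(t) + c; the function and the synthetic data are illustrative, not the extended 4-D SAR estimator itself:

```python
import numpy as np

def separate_deformation(t, temp, d):
    """Least-squares fit of the per-pixel model
    d(t) = v*t + a_T*T(t) + c, returning the linear deformation
    velocity v and the thermal-expansion amplitude a_T."""
    A = np.column_stack([t, temp, np.ones_like(t)])
    (v, a_T, c), *_ = np.linalg.lstsq(A, d, rcond=None)
    return v, a_T
```

Because temperature oscillates while time increases monotonically, the two regressors are well separated and the fit cleanly splits thermal expansion from the linear deformation trend.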
NASA Astrophysics Data System (ADS)
Cheng, Z.; Hsu, T. J.; Calantoni, J.
2014-12-01
In the past decade, researchers have clearly been making progress in predicting coastal erosion/recovery; however, the evidence is also clear that existing coastal evolution models cannot predict coastal responses to extreme storm events. In this study, we investigate the dynamics of momentary bed failure driven by large horizontal pressure gradients, which may be the dominant sediment transport mechanism under intense storm conditions. Recently, a multi-dimensional two-phase Eulerian sediment transport model has been developed and disseminated to the research community as an open-source code. The numerical model is based on extending an open-source CFD library of solvers, OpenFOAM. Model results were validated with published sediment concentration and velocity data measured in steady and oscillatory flow. The 2DV Reynolds-averaged model showed wave-like bed instabilities when the criterion for momentary bed failure was exceeded. These bed instabilities were responsible for the large transport rate observed during plug flow, and the onset of the instabilities was associated with a large erosion depth. To better resolve the onset of bed instabilities, the subsequent energy cascade, and the resulting large sediment transport rate and sediment pickup flux, 3D turbulence-resolving simulations were also carried out. Detailed validation of the 3D turbulence-resolving Eulerian two-phase model will be presented along with an expanded investigation of the dynamics of momentary bed failure.
Multi-dimensional permutation-modulation format for coherent optical communications.
Ishimura, Shota; Kikuchi, Kazuro
2015-06-15
We introduce the multi-dimensional permutation-modulation format in coherent optical communication systems and analyze its performance, focusing on the power efficiency and the spectral efficiency. In the case of four-dimensional (4D) modulation, the polarization-switched quadrature phase-shift keying (PS-QPSK) modulation format and the polarization quadrature-amplitude modulation (POL-QAM) format can be classified into the permutation-modulation format. Beyond these well-known modulation formats, we find novel modulation formats that trade off power efficiency against spectral efficiency. As the dimension increases, the spectral efficiency can more closely approach the channel capacity predicted by Shannon's theory. We verify these theoretical characteristics through computer simulations of the symbol-error-rate (SER) and bit-error-rate (BER) performance. For example, the newly found eight-dimensional (8D) permutation-modulation format can improve the spectral efficiency up to 2.75 bit/s/Hz/pol/channel, while the power penalty against QPSK is about 1 dB at BER = 10^-3. PMID:26193538
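As a hedged illustration of how permutation-modulation codebooks are enumerated and their bits per symbol counted (the initial vector below is illustrative; the paper's exact 4D and 8D constructions may differ):

```python
import itertools
import math

def perm_mod_codebook(init):
    """Variant-I permutation modulation: all distinct permutations of
    the initial vector (Slepian's construction)."""
    return sorted(set(itertools.permutations(init)))

def perm_mod_with_signs(init):
    """Variant-II permutation modulation: distinct permutations of the
    initial vector combined with every sign flip of its nonzero entries."""
    words = set()
    for p in set(itertools.permutations(init)):
        nonzero = [i for i, v in enumerate(p) if v != 0]
        for signs in itertools.product((1, -1), repeat=len(nonzero)):
            w = list(p)
            for i, s in zip(nonzero, signs):
                w[i] *= s
            words.add(tuple(w))
    return sorted(words)

# Example: a 4D initial vector with two nonzero components.
cb = perm_mod_with_signs((1, 1, 0, 0))               # 6 patterns x 4 sign choices
bits_per_4d_symbol = math.floor(math.log2(len(cb)))  # usable bits per 4D symbol
```

Denser initial vectors enlarge the codebook (higher spectral efficiency) at the cost of smaller minimum distance per unit energy (lower power efficiency), which is exactly the trade-off the abstract explores.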
Vogt, Stefan; Ralle, Martina
2012-01-01
Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer's disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis for better understanding disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications in X-ray microscopic methodologies. PMID:23079951
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This research study aims to verify the effect on the motivation of physical education students of a multi-dimensional programme in dance teaching sessions. This programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. An initial and a final measurement were taken in both groups, and the results revealed that the students from the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. Accordingly, we highlight the programme's usefulness in increasing the students' motivation towards this content, which is so complicated for teachers of this area to develop. PMID:24454831
Operationalising the Sustainable Knowledge Society Concept through a Multi-dimensional Scorecard
NASA Astrophysics Data System (ADS)
Dragomirescu, Horatiu; Sharma, Ravi S.
Since the early 21st century, building a Knowledge Society has represented an aspiration not only for developed countries but for developing ones too. There is an increasing concern worldwide for rendering this process manageable towards a sustainable, equitable and ethically sound societal system. As proper management, including at the societal level, requires both wisdom and measurement, the operationalisation of the Knowledge Society concept encompasses a qualitative side, related to vision-building, and a quantitative one, pertaining to designing and using dedicated metrics. The endeavour of enabling policy-makers to map, steer and monitor the sustainable development of the Knowledge Society at the national level, in a world increasingly based on creativity, learning and open communication, has led researchers to devise a wide range of composite indexes. However, as such indexes are generated through weighting and aggregation, their usefulness is limited to retrospectively assessing and comparing levels and states already attained; therefore, to better serve policy-making purposes, composite indexes should be complemented by other instruments. Complexification, inspired by the systemic paradigm, allows obtaining "rich pictures" of the Knowledge Society; to this end, a multi-dimensional scorecard of Knowledge Society development is hereby suggested, one that seeks a more contextual orientation towards sustainability. It is assumed that, in the case of the Knowledge Society, the sustainability condition goes well beyond the "greening" desideratum and should be of a higher order, relying upon the conversion of natural and productive life-cycles into virtuous circles of self-sustainability.
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
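For reference, one common statement of the density-gradient equation of state is sketched below. Sign and factor conventions vary across the literature, so this should be read as a representative form rather than the paper's exact one:

```latex
n = n_i \exp\!\left[\frac{q\,(\psi + \Lambda - \phi_n)}{kT}\right],
\qquad
\Lambda = 2\,b_n\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
\qquad
b_n = \frac{\hbar^{2}}{12\,q\,m_n^{*}}
```

Here ψ is the electrostatic potential, φ_n the electron quasi-Fermi potential, and Λ the quantum correction potential; setting b_n = 0 recovers the classical drift-diffusion equation of state.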
A SECOND-ORDER GODUNOV METHOD FOR MULTI-DIMENSIONAL RELATIVISTIC MAGNETOHYDRODYNAMICS
Beckwith, Kris; Stone, James M. E-mail: jstone@astro.princeton.edu
2011-03-15
We describe a new Godunov algorithm for relativistic magnetohydrodynamics (RMHD) that combines a simple, unsplit second-order accurate integrator with the constrained transport (CT) method for enforcing the solenoidal constraint on the magnetic field. A variety of approximate Riemann solvers are implemented to compute the fluxes of the conserved variables. The methods are tested with a comprehensive suite of multi-dimensional problems. These tests have helped us develop a hierarchy of correction steps that are applied when the integration algorithm predicts unphysical states due to errors in the fluxes, or errors in the inversion between conserved and primitive variables. Although used exceedingly rarely, these corrections dramatically improve the stability of the algorithm. We present preliminary results from the application of these algorithms to two problems in RMHD: the propagation of supersonic magnetized jets and the amplification of magnetic field by turbulence driven by the relativistic Kelvin-Helmholtz instability (KHI). Both of these applications reveal important differences between the results computed with Riemann solvers that adopt different approximations for the fluxes. For example, we show that the use of Riemann solvers that include both contact and rotational discontinuities can increase the strength of the magnetic field within the cocoon by a factor of 10 in simulations of RMHD jets and can increase the spectral resolution of three-dimensional RMHD turbulence driven by the KHI by a factor of two. This increase in accuracy far outweighs the associated increase in computational cost. Our RMHD scheme is publicly available as part of the Athena code.
EL-Shamy, E. F.
2014-08-15
The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) in magnetoplasmas consisting of electron-positron-ion plasma with high-energy (superthermal) electrons and positrons are investigated. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The study of our model is thus helpful for explaining the propagation and the instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
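For reference, the Zakharov-Kuznetsov equation obtained by reductive perturbation typically takes the form below, with the magnetic field along ξ; the coefficients A and B absorb the dependence on the superthermality parameter, the positron-to-ion density ratio, and the temperature ratio (the paper's exact notation may differ):

```latex
\frac{\partial \phi}{\partial \tau}
+ A\,\phi\,\frac{\partial \phi}{\partial \xi}
+ B\,\frac{\partial}{\partial \xi}\!\left(
    \frac{\partial^{2} \phi}{\partial \xi^{2}}
  + \frac{\partial^{2} \phi}{\partial \eta^{2}}
  + \frac{\partial^{2} \phi}{\partial \zeta^{2}}\right) = 0
```

The transverse derivatives in η and ζ are what make the equation genuinely multi-dimensional and give rise to the obliquely propagating instability analyzed in the abstract.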
Two-Photon Speckle as a Probe of Multi-Dimensional Entanglement
Beenakker, C. W. J.; van Exter, Martin
(Fragment: … of the speckle pattern produced by a photon-pair current I2 transmitted through a random medium, and compare it with the single-photon speckle distribution P1(I1). We show that the purity of a two-photon density matrix can …)
Otis-Green, Shirley; Sidhu, Rupinder K.; Ferraro, Catherine Del; Ferrell, Betty
2014-01-01
Lung cancer patients and their family caregivers face a wide range of potentially distressing symptoms across the four domains of quality of life. A multi-dimensional approach to addressing these complex concerns with early integration of palliative care has proven beneficial. This article highlights opportunities to integrate social work using a comprehensive quality of life model and a composite patient scenario from a large lung cancer educational intervention National Cancer Institute-funded program project grant. PMID:24797998
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Confirmatory Factor Analysis of the Cancer Locus of Control Scale.
ERIC Educational Resources Information Center
Henderson, Jessica W.; Donatelle, Rebecca J.; Acock, Alan C.
2002-01-01
Conducted a confirmatory factor analysis of the Cancer Locus of Control scale (M. Watson and others, 1990), administered to 543 women with a history of breast cancer. Results support a three-factor model of the scale and support use of the scale to assess control dimensions. (SLD)
Humeau-Heurtier, Anne; Mahe, Guillaume; Abraham, Pierre
2015-10-01
Laser speckle contrast imaging (LSCI) is a noninvasive full-field optical technique for analyzing the dynamics of microvascular blood flow. LSCI has attracted attention because it is able to image blood flow in different kinds of tissue with high spatial and temporal resolution. Additionally, it is simple and requires only low-cost devices. However, the physiological information that can be extracted directly from the images has not yet been completely determined. In this work, a novel multi-dimensional complete ensemble empirical mode decomposition with adaptive noise (MCEEMDAN) is introduced and applied to LSCI data recorded in three physiological conditions (rest, vascular occlusion and post-occlusive reactive hyperaemia). MCEEMDAN relies on the improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), and our algorithm is specifically designed to analyze multi-dimensional data (such as images). Compared with the recent multi-dimensional ensemble empirical mode decomposition (MEEMD), MCEEMDAN has the advantage of leading to an exact reconstruction of the original data. The results show that MCEEMDAN leads to intrinsic mode functions and a residue that reveal hidden patterns in LSCI data. Moreover, these patterns differ with physiological state. MCEEMDAN appears to be a promising way to extract features from LSCI data for improved image understanding. PMID:25850087
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractal analysis is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
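A minimal stand-in for the scaling-identification idea: fit two straight lines to log F(s) versus log s and take the breakpoint that minimizes the total squared error. This is illustrative only; the paper's model adds rigorous inference and confidence intervals on top of this kind of fit:

```python
import numpy as np

def find_crossover(log_s, log_F):
    """Locate a single crossover scale by fitting two straight lines to
    log F(s) vs log s and choosing the breakpoint index that minimises
    the total squared error across both segments."""
    best_sse, best_k = np.inf, None
    for k in range(3, len(log_s) - 3):           # at least 3 points per segment
        sse = 0.0
        for seg in (slice(None, k), slice(k, None)):
            x, y = log_s[seg], log_F[seg]
            coef = np.polyfit(x, y, 1)           # straight-line fit per segment
            sse += float(np.sum((np.polyval(coef, x) - y) ** 2))
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_k
```

On a fluctuation function with two clean power-law regimes, the recovered breakpoint coincides with the true crossover; with noise, the residual profile around the minimum is what a statistical test would then assess.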
Independent and complementary methods for large-scale structural analysis
Yuan, Guo-Cheng "GC"
Independent and complementary methods for large-scale structural analysis of mammalian chromatin. Richter, Daniel G. Peterson, Oliver J. Rando, William S. Noble, and Robert E. Kingston. (Fragment: … large-scale analysis. We validated these assays using the known positions of nucleosomes on the mouse …)
Dynamical scaling analysis of plant callus growth
NASA Astrophysics Data System (ADS)
Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.
2003-07-01
We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.
Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David
2002-01-01
A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in the development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this
MULTI-DIMENSIONAL FEATURES OF NEUTRINO TRANSFER IN CORE-COLLAPSE SUPERNOVAE
Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S. E-mail: takiwaki.tomoya@nao.ac.jp E-mail: shoichi@heap.phys.waseda.ac.jp
2015-01-01
We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ∼20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (∼2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.
DiMattina, Christopher; Zhang, Kechen
2015-09-01
Numerous psychophysical studies have considered how subjects combine multiple sensory cues to make perceptual decisions, or how contextual information influences the perception of a target stimulus. In cases where cues interact in a linear manner, it is sufficient to characterize an observer's sensitivity along each individual feature dimension to predict perceptual decisions when multiple cues are varied simultaneously. However, in many situations sensory cues interact non-linearly, and therefore quantitatively characterizing subject behavior requires estimating a complex non-linear psychometric model which may contain numerous parameters. In this computational methods study, we analyze three efficient implementations of the well-studied PSI procedure (Kontsevich & Tyler, 1999) for adaptive psychophysical data collection which generalize well to psychometric models defined in multi-dimensional stimulus spaces where the standard implementation is intractable. Using generic multivariate logistic regression models as a test bed for our algorithms, we present two novel implementations of the PSI procedure which offer substantial speed-up compared to previously proposed implementations: (1) A look-up table method where optimal stimulus placements are pre-computed for various values of the (unknown) true model parameters and (2) A Laplace approximation method using a continuous Gaussian approximation to the evolving posterior density. We demonstrate the utility of these novel methods for quickly and accurately estimating the parameters of hypothetical nonlinear cue combination models in 2- and 3-dimensional stimulus spaces. In addition to these generic examples, we further illustrate our methods using a biologically derived model of how stimulus contrast influences orientation discrimination thresholds. Finally, we consider strategies for further speeding up experiments and extensions to models defined in dozens of dimensions. 
This work is potentially of great significance to investigators who are interested in quantitatively modeling the perceptual representations of complex naturalistic stimuli like textures and occlusion contours which are defined by multiple feature dimensions. Meeting abstract presented at VSS 2015. PMID:26326165
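The PSI selection rule that the three implementations above accelerate can be sketched directly. Below is a minimal grid-based sketch in Python (the 2AFC logistic model, the parameter grids, and the trial count are illustrative assumptions; the look-up-table and Laplace speed-ups from the paper are not shown):

```python
import numpy as np

def psychometric(x, a, b):
    # Hypothetical 2AFC logistic model: guess rate 0.5, threshold a, slope b
    return 0.5 + 0.5 / (1.0 + np.exp(-b * (x - a)))

def select_stimulus(prior, A, B, x_grid):
    # PSI rule: choose the stimulus minimizing the expected posterior entropy
    best_x, best_h = x_grid[0], np.inf
    for x in x_grid:
        p = psychometric(x, A, B)
        p_yes = float(np.sum(prior * p))
        h = 0.0
        for lik, pm in ((p, p_yes), (1.0 - p, 1.0 - p_yes)):
            post = prior * lik / max(pm, 1e-12)
            post /= post.sum()
            h += pm * -np.sum(post * np.log(post + 1e-300))
        if h < best_h:
            best_x, best_h = x, h
    return best_x

def update_posterior(prior, A, B, x, correct):
    lik = psychometric(x, A, B) if correct else 1.0 - psychometric(x, A, B)
    post = prior * lik
    return post / post.sum()

# Demo: adaptively estimate a simulated observer's threshold
rng = np.random.default_rng(1)
a_grid = np.linspace(-2.0, 2.0, 41)
b_grid = np.linspace(0.5, 4.0, 15)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
prior = np.full(A.shape, 1.0 / A.size)
x_grid = np.linspace(-3.0, 3.0, 25)
a_true, b_true = 0.5, 2.0
for _ in range(60):
    x = select_stimulus(prior, A, B, x_grid)
    correct = rng.random() < psychometric(x, a_true, b_true)
    prior = update_posterior(prior, A, B, x, correct)
a_hat = float(np.sum(prior * A))  # posterior mean of the threshold
```

In a multi-dimensional stimulus space the inner loop over `x_grid` is exactly what becomes intractable, which is what motivates the pre-computed look-up table and the Gaussian (Laplace) posterior approximation.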
Multi-dimensional Conjunctive Operation Rule for the Water Supply System
NASA Astrophysics Data System (ADS)
Chiu, Y.; Tan, C. A.; Chen, Y.; Tung, C.
2011-12-01
In recent years, floods and droughts have increased in both number and intensity: floods have become more severe during the wet season and droughts more serious during the dry season. In order to reduce their impact on agriculture, industry, and even human beings, the conjunctive use of surface water and groundwater has received much attention and has become a new direction for future research. Traditionally, reservoir operation follows the operation rule curve to satisfy water demand and considers only the water levels at the reservoirs and the time series. The strategy used in the conjunctive-use management model is that the water demand is first satisfied by the reservoirs operated according to the rule curves, and the deficit between demand and supply, if one exists, is provided by groundwater. In this study, we propose a new operation rule, named the multi-dimensional conjunctive operation rule curve (MCORC), which extends the concept of the reservoir operation rule curve. The MCORC is a three-dimensional curve and is applied to both surface water and groundwater. Three sets of parameters are considered simultaneously in the curve: the water levels and supply percentage at the reservoirs, the groundwater levels and supply percentage, and the time series. The zonation method and a heuristic algorithm are applied to optimize the curve subject to the constraints of the reservoir operation rules and the safe yield of the groundwater. The proposed conjunctive operation rule was applied to a water supply system analogous to an area in northern Taiwan. The results showed that the MCORC could increase the efficiency of water use and reduce the risk of serious water deficits.
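The rule-curve idea can be illustrated with a toy allocation function (the zone supply fractions and the simple two-zone hedging below are invented for illustration; they are not the optimized three-dimensional curve from the study):

```python
def allocate(demand, res_level, gw_level, month, res_curve, gw_safe_level):
    """Toy conjunctive-use rule: the surface-water fraction depends on the
    reservoir's rule-curve zone for the month; any deficit is met from
    groundwater only while the aquifer stays above its safe-yield level.
    The zone fractions (1.0 / 0.7 / 0.4) are illustrative assumptions."""
    upper, lower = res_curve[month]     # rule-curve bounds for this month
    if res_level >= upper:
        sw_frac = 1.0                   # normal zone: surface water meets full demand
    elif res_level >= lower:
        sw_frac = 0.7                   # alert zone: partial hedging
    else:
        sw_frac = 0.4                   # restriction zone: heavy hedging
    surface = demand * sw_frac
    ground = demand - surface if gw_level > gw_safe_level else 0.0
    return surface, ground
```

For example, with a June rule curve of (75, 50), a reservoir level of 60 puts the system in the alert zone, so `allocate(100.0, 60.0, 12.0, 6, {6: (75.0, 50.0)}, 10.0)` supplies 70 units from surface water and 30 from groundwater.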
Mechem, David B.; Ovtchinnikov, Mikhail; Kogan, Y. L.; Davis, Anthony B; Cahalan, Robert F.; Takara, Ezra E.; Ellingson, Robert G.
2002-06-03
In order to address the interactive and evolutionary nature of the cloud-radiation interaction, we have coupled to a Large Eddy Simulation (LES) model the sophisticated multi-dimensional radiative transfer (MDRT) scheme of Evans (Spherical Harmonics Discrete Ordinate Method; 1998). Because of computational expense, we are at this time only able to run 2D experiments. Preliminary runs consider only the broadband longwave component, in large part because IR cloud top cooling is the significant forcing mechanism for marine stratocumulus. Little difference is noted in the evolution of unbroken stratocumulus between three-hour runs using MDRT and independent pixel approximation (IPA) for 2D domains of 50 km in the horizontal and 1.5 km in the vertical. Local heating rates differ slightly near undulating regions of cloud top, and a slight bias in mean heating rate from 1 to 3 h is present, yet the differences are never strong enough to result in a pronounced evolutionary bias in typical boundary layer metrics (e.g. inversion height, vertical velocity variance, TKE). Longer integration times may eventually produce a physical response to the bias in radiative cooling rates. A low-CCN case, designed to produce significant drizzle and induce cloud breakup does show subtle differences between MDRT and IPA. Over the course of the 6 hour simulations, entrainment is slightly less in the MDRT case, and the transition to the surface-based trade cumulus regime is delayed. Mean cooling rates appear systematically weaker in the MDRT case, indicative of a less energetic PBL and reflected in profiles of vertical velocity variance and TKE.
TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients.
Loorak, Mona Hosseinkhani; Perin, Charles; Kamal, Noreen; Hill, Michael; Carpendale, Sheelagh
2016-01-01
We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency of stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time; ultimately, to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks which a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan incorporates factors from stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan. PMID:26390482
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
PMID:25505405
Multi-dimensional analysis of HDL: an approach to understanding atherogenic HDL
Johnson, Jr., Jeffery Devoyne
2009-05-15
the early onset of coronary artery disease (CAD). The research presented here focuses on the pairing of DGU with post-separatory techniques including matrix-assisted laser desorption mass spectrometry (MALDI-MS), liquid chromatography mass spectrometry (LC-MS...
Pupil Performance, Absenteeism and School Drop-out: A Multi-dimensional Analysis.
ERIC Educational Resources Information Center
Smyth, Emer
1999-01-01
Assesses whether second-level schools in Ireland are equally effective regarding examination performance, absenteeism, and potential dropouts, using multivariate analyses of data from 15- and 16-year-olds in 116 schools. Absenteeism and potential dropout rates are lower in schools that enhance pupils' academic progress. (Contains 22 references.)…
Detection and analysis of multi-dimensional pulse wave based on optical coherence tomography
NASA Astrophysics Data System (ADS)
Shen, Yihui; Li, Zhifang; Li, Hui; Chen, Haiyu
2014-11-01
Pulse diagnosis is an important method of traditional Chinese medicine (TCM). Doctors diagnose patients' physiological and pathological statuses through palpation of the radial artery. Optical coherence tomography (OCT) is a useful tool for medical optical research. Conventional diagnostic devices function only as pressure sensors to detect the pulse wave, which captures just part of what doctors feel and loses a large amount of useful information. In this paper, the microscopic changes of the surface skin above the radial artery were studied in the form of images based on OCT. The deformation of the surface skin over a cardiac cycle, caused by the arterial pulse, is detected by OCT, and the patient's pulse wave is calculated through image processing. The result is in good agreement with that obtained by a pulse analyzer, and the patient's physiological and pathological statuses can be monitored in real time. This research provides a new method for the pulse diagnosis of traditional Chinese medicine.
Graph OLAP: a multi-dimensional framework for graph data analysis
Chen, Chen; Yan, Xifeng; Zhu, Feida; Han, Jiawei; Yu, Philip S.
2009-01-01
of databases and data warehouse systems to handle graph databases, and bioinformatics. He investigates models and algorithms for managing and mining complex graphs and graph cube's base cuboid; without any difficulty, we can also aggregate DB, DM and IR into a broad Database …
Correlation network analysis for multi-dimensional data in stocks market
NASA Astrophysics Data System (ADS)
Kazemilari, Mansooreh; Djauhari, Maman Abdurachman
2015-07-01
This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in a stock network. The motivation of this paper is (i) to apply the RV coefficient to define the network among stocks, where each stock is represented by a multivariate time series; (ii) to analyze that network in terms of the topological structure of the stocks across all minimum spanning trees; and (iii) to compare the network topology between the univariate correlation network based on r and the multivariate correlation network based on the RV coefficient.
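The RV coefficient used in step (i) is straightforward to compute. A minimal sketch follows; the sqrt(2(1 - RV)) distance used to feed a minimum-spanning-tree algorithm is our assumption, by analogy with the usual univariate correlation metric, not a formula taken from the paper:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two data matrices (n observations x p and
    n x q variables): a multivariate generalization of the squared
    correlation coefficient, ranging from 0 to 1."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace((X.T @ X) @ (X.T @ X)) *
                  np.trace((Y.T @ Y) @ (Y.T @ Y)))
    return float(num / den)

def rv_distance(X, Y):
    # Assumed MST metric, analogous to Mantegna's sqrt(2(1 - correlation))
    return float(np.sqrt(max(0.0, 2.0 * (1.0 - rv_coefficient(X, Y)))))
```

Building the network then amounts to evaluating `rv_distance` for every pair of stocks (each stock a matrix of, say, open/high/low/close series) and extracting the minimum spanning tree of the resulting distance matrix.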
Convective scale weather analysis and forecasting
NASA Technical Reports Server (NTRS)
Purdom, J. F. W.
1984-01-01
How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated, with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to frequently observe the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis
Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.
2003-01-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of a real WTP, the diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract the knowledge contained in the multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
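The Kohonen SOM at the heart of KSOFM can be sketched in a few lines of numpy (the grid size, decay schedule, and data below are illustrative assumptions, not the configuration used in the paper):

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Online Kohonen SOM with exponentially decaying learning rate and
    Gaussian neighborhood; returns the (gy, gx, d) codebook. Each slice
    w[:, :, k] is the 'component plane' of process variable k."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    gy, gx = grid
    w = data.mean(0) + 0.1 * rng.standard_normal((gy, gx, d))
    coords = np.stack(np.meshgrid(np.arange(gy), np.arange(gx),
                                  indexing="ij"), axis=-1)
    steps, t = epochs * n, 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            lr = lr0 * np.exp(-t / steps)
            sigma = sigma0 * np.exp(-t / steps)
            # best-matching unit (BMU) and Gaussian neighborhood update
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (gy, gx))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
            w += lr * h * (x - w)
            t += 1
    return w
```

Inspecting the component planes `w[:, :, k]` side by side is what reveals the local inter-relationships between process variables that the paper describes.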
Scale-Specific Multifractal Medical Image Analysis
Braverman, Boris
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. ...
Wavelet analysis of wall turbulence to study large-scale modulation of small scales
NASA Astrophysics Data System (ADS)
Baars, W. J.; Talluru, K. M.; Hutchins, N.; Marusic, I.
2015-10-01
Wavelet analysis is employed to examine amplitude and frequency modulations in broadband signals. Of particular interest are the streamwise velocity fluctuations encountered in wall-bounded turbulent flows. Recent studies have shown that an important feature of the near-wall dynamics is the modulation of small scales by large-scale motions. Small- and large-scale components of the velocity time series are constructed by employing a spectral separation scale. Wavelet analysis of the small-scale component decomposes the energy in joint time-frequency space. The concept is to construct a low-dimensional representation of the small-scale time-varying spectrum via two new time series: the instantaneous amplitude of the small-scale energy and the instantaneous frequency. Having the latter in a time-continuous representation allows a more thorough analysis of frequency modulation. By correlating the large-scale velocity with the concurrent small-scale amplitude and frequency realizations, both amplitude and frequency modulations are studied. In addition, conditional averages of the small-scale amplitude and frequency realizations depict unique features of the scale interaction. For both modulation phenomena, the much studied time shifts, associated with peak correlations between the large-scale velocity and small-scale amplitude and frequency traces, are addressed. We confirm that the small-scale amplitude signal leads the large-scale fluctuation close to the wall. It is revealed that the time shift in frequency modulation is smaller than that in amplitude modulation. The current findings are described in the context of a conceptual mechanism of the near-wall modulation phenomena.
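The amplitude-modulation part of this analysis can be sketched with a Hilbert-transform envelope standing in for the paper's wavelet machinery (the filter order, cutoff frequency, and synthetic signal are assumptions made for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulation_coeff(u, fs, f_cut):
    """Correlation between the large-scale velocity component and the
    instantaneous amplitude of the small-scale component (a Hilbert-envelope
    stand-in for the wavelet-based amplitude time series)."""
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")
    u_large = filtfilt(b, a, u)          # spectral separation at f_cut
    u_small = u - u_large
    env = np.abs(hilbert(u_small))       # instantaneous small-scale amplitude
    uL, eS = u_large - u_large.mean(), env - env.mean()
    return float(np.dot(uL, eS) / (np.linalg.norm(uL) * np.linalg.norm(eS)))

# Demo: a 50 Hz "small-scale" carrier whose amplitude is modulated by a
# 2 Hz "large-scale" wave; the coefficient should come out strongly positive.
fs = 1000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
large = np.sin(2 * np.pi * 2.0 * t)
u = large + (1.0 + 0.5 * large) * np.sin(2 * np.pi * 50.0 * t)
R = amplitude_modulation_coeff(u, fs, f_cut=10.0)
```

Repeating the correlation at a range of time shifts between `u_large` and `env` recovers the lead/lag structure discussed in the abstract; frequency modulation would additionally require an instantaneous-frequency estimate.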
Cho, H.K.; Park, G.C.; Yun, B.J.; Kwon, T.S.; Song, C.H.
2002-07-01
From the two-dimensional two-fluid model, a new scaling methodology, named the 'modified linear scaling', is suggested for the scientific design of a scaled-down experimental facility and for data analysis of direct ECC bypass during the LBLOCA reflood phase. The characteristics of the scaling law are that the velocity is scaled by a Wallis-type parameter and that the aspect ratio of the experimental facility preserves that of the prototype. For experimental validation of the proposed scaling law, air-water tests for direct ECC bypass were performed in 1/4.0- and 1/7.3-scale UPTF downcomer test sections. The obtained data are compared with those of UPTF Test 21-D. It is found that the modified linear scaling methodology is appropriate for preserving multi-dimensional flow phenomena in the downcomer annulus, such as direct ECC bypass. (authors)
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
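Steps 1-4 of the LV method can be sketched in numpy (naive block-mean upscaling stands in here for the resampling and segmentation used in the paper, which is an assumption of this sketch):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance(surface_param):
    """Step 2: mean standard deviation within a 3x3 moving window."""
    windows = sliding_window_view(surface_param, (3, 3))
    return float(windows.std(axis=(-2, -1)).mean())

def lv_profile(surface_param, factors):
    """Steps 1 and 3: LV at each scale level (block-mean upscaling by each
    factor) and its rate of change, ROC-LV, as a percentage."""
    lv = []
    for f in factors:
        n0 = (surface_param.shape[0] // f) * f
        n1 = (surface_param.shape[1] // f) * f
        coarse = (surface_param[:n0, :n1]
                  .reshape(n0 // f, f, n1 // f, f).mean(axis=(1, 3)))
        lv.append(local_variance(coarse))
    lv = np.asarray(lv)
    roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]
    return lv, roc
```

Step 4 is then just plotting `roc` against `factors`; peaks in that curve mark candidate characteristic scales of the land-surface parameter.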
NASA Astrophysics Data System (ADS)
Wochner, Mark
A computational acoustic propagation model based upon the Navier-Stokes equations is created that is able to simulate the effects of absorption and dispersion due to shear viscosity, bulk viscosity, thermal conductivity and molecular relaxation of nitrogen and oxygen in one or two dimensions. The model uses a fully nonlinear constitutive equation set that is closed using a thermodynamic entropy relation and a van der Waals equation of state. The use of the total variables in the equations rather than the perturbed (acoustical) variables allows for the extension of the model to include wind, temperature profiles, and other frequency independent conditions. The method of including sources in the model also allows for the incorporation of multiple spatially and temporally complex sources. Two numerical methods are used for the solution of the constitutive equations: a dispersion relation preserving scheme, which is shown to be efficient and accurate but unsuitable for shock propagation; and a weighted essentially non-oscillatory scheme, which is shown to be able to stably propagate shocks but at considerable computational cost. Both of these algorithms are utilized in this investigation because their individual strengths are appropriate for different situations. It is shown that these models are able to accurately recreate many acoustical phenomena. Wave steepening in a lossless and thermoviscous medium is compared to the Fubini solution and Mendousse's solution to the Burgers equation, respectively, and the Fourier component amplitudes of the first harmonics are shown to differ from these solutions by at most 0.21%. Nonlinear amplification factors upon rigid boundaries for high incident pressures are compared to the Pfriem solution and shown to differ by at most 0.015%. Modified classical absorption, nitrogen relaxation absorption, and oxygen relaxation absorption are shown to differ from the analytical solutions by at most 1%.
Finally, the dispersion due to nitrogen relaxation and oxygen relaxation is also shown to differ from the analytical solutions by at most 1%. It is believed that higher resolution grids would decrease the error in all of these simulations. A number of simulations that do not have explicit analytical solutions are then discussed. To demonstrate the model's ability to propagate multi-dimensional shocks in two dimensions, the formation of a Mach stem is simulated. (Abstract shortened by UMI.)
Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming
Lovejoy, S.
Although current global warming may have a large anthropogenic component, its quantification relies … and emission histories. We statistically formulate the hypothesis of warming through natural variability …
An Analysis of a Large Scale Habitat Monitoring Application
Maróti, Miklós
Szewczyk, Robert; Mainwaring, Alan
Berkeley, California 94704; Bar Harbor, ME 04609
ABSTRACT: Habitat and environmental monitoring is a driving … Keywords: Networks, Habitat Monitoring, Microclimate Monitoring, Network Architecture, Long-Lived Systems
Rasch Analysis of the Geriatric Depression Scale--Short Form
ERIC Educational Resources Information Center
Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.
2009-01-01
Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…
SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION
John Crepeau
2011-03-01
Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.
NASA Astrophysics Data System (ADS)
Jang, Kyoung Won; Cho, Dong Hyun; Shin, Sang Hun; Lee, Bongsoo; Chung, Soon-Cheol; Tack, Gye-Rae; Yi, Jeong Han; Kim, Sin; Cho, Hyosung
In this study, we have fabricated multi-dimensional fiber-optic radiation detectors with organic scintillators, plastic optical fibers, and photo-detectors such as a photodiode array and a charge-coupled device. To measure the X-ray dose distributions of a clinical linear accelerator in a tissue-equivalent medium, we fabricated polymethylmethacrylate phantoms with one-dimensional and two-dimensional fiber-optic detector arrays inside. The one-dimensional and two-dimensional detector arrays can be used to measure percent depth doses and surface dose distributions, respectively, of high-energy X-rays in the phantom.
Kaethner, Christian; Ahlborg, Mandy; Buzug, Thorsten M.; Knopp, Tobias; Sattel, Timo F.
2014-01-28
Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve reasonable image quality. Therefore, a power optimization of the coil configuration is essential. In order to realize a multi-dimensional, efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.
NASA Astrophysics Data System (ADS)
Masum Haider, M.; Akter, Suraya; Duha, Syed S.; Mamun, Abdullah A.
2012-10-01
The basic features and multi-dimensional instability of electrostatic (EA) solitary waves propagating in an ultra-relativistic degenerate dense magnetized plasma (containing inertia-less electrons, inertia-less positrons, and inertial ions) have been theoretically investigated by the reductive perturbation method and the small-k perturbation expansion technique. The Zakharov-Kuznetsov (ZK) equation has been derived, and its numerical solutions for some special cases have been analyzed to identify the basic features (viz. amplitude, width, instability, etc.) of these electrostatic solitary structures. The implications of our results in some compact astrophysical objects, particularly white dwarfs and neutron stars, are briefly discussed.
Component Cost Analysis of Large Scale Systems
NASA Technical Reports Server (NTRS)
Skelton, R. E.; Yousuff, A.
1982-01-01
The idea of cost decomposition is summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these ideas naturally lead to a theory of cost-equivalent realizations.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multi-core parallel supercomputers, next-generation numerical simulations of flow physics are focusing on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can deliver. This paper focuses on the development of high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Design and rigorous analysis of transformation-optics scaling devices.
Jiang, Wei Xiang; Xu, Bai Bing; Cheng, Qiang; Cui, Tie Jun; Yu, Guan Xia
2013-08-01
Scaling devices that can shrink or enlarge an object are designed using transformation optics. The electromagnetic scattering properties of such scaling devices with anisotropic parameters are rigorously analyzed using the eigenmode expansion method. If the radius of the virtual object is smaller than that of the real object, it is a shrinking device with positive material parameters; if the radius of the virtual object is larger than the real one, it is an enlarging device with positive or negative material parameters. Hence, a scaling device can make a dielectric or metallic object look smaller or larger. The rigorous analysis shows that the scattering coefficients of the scaling devices are the same as those of the equivalent virtual objects. When the radius of the virtual object approaches zero, the scaling device will be an invisibility cloak. In such a case, the scattering effect of the scaling device will be sensitive to material parameters of the device. PMID:24323231
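For background, a radial scaling device of this kind is commonly specified by a map r = g(r') from the physical radius r' back to the virtual radius r; the standard cylindrical transformation-medium parameters then read as follows (a generic textbook construction, not necessarily the authors' specific design):

```latex
% Transformation-medium parameters for a radial map r = g(r')
% (cylindrical geometry). A shrinking device maps the physical object
% radius to a smaller virtual radius; an enlarging device to a larger
% one (with negative parameters if the map folds space, g'(r') < 0).
\epsilon_{r'} = \mu_{r'} = \frac{g(r')}{r'\,g'(r')}, \qquad
\epsilon_{\varphi'} = \mu_{\varphi'} = \frac{r'\,g'(r')}{g(r')}, \qquad
\epsilon_{z} = \mu_{z} = \frac{g(r')\,g'(r')}{r'}
```

When g maps the inner physical radius to zero, these parameters reduce to those of an invisibility cloak, consistent with the limiting case described in the abstract.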
Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia
ERIC Educational Resources Information Center
Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal
2014-01-01
A comprehensive psychometric analysis of Rizzo et al.'s (1970) Role Conflict and Ambiguity (RCA) scales was performed after their distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis included calculation of Cronbach alpha coefficients and inter-item statistics, whereas validity was determined by…
Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR
Ishii, M.; Ravankar, S.T.; Dowlati, R.
1996-04-01
The scaled facility design was based on the three-level scaling method. The first level is based on the well-established approach obtained from the integral response function, namely integral scaling; this level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level scales the boundary flows of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4-height and 1/100-area ratio scaling; this corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time runs twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
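The quoted ratios are mutually consistent, as the following illustrative arithmetic shows (the time ratio is taken as the square root of the height ratio, the usual integral-scaling result for gravity-driven natural circulation; that relation is an assumption here, not spelled out in the abstract):

```python
# Consistency check of the PUMA scaling ratios (model/prototype).
import math

height_ratio = 1.0 / 4.0            # 1/4-height facility
area_ratio = 1.0 / 100.0            # 1/100 flow-area ratio
volume_ratio = height_ratio * area_ratio   # geometric volume scale
power_ratio = 1.0 / 200.0           # stated integral-scaling result

# Power density ratio: power scale divided by volume scale.
power_density_ratio = power_ratio / volume_ratio

# Time scale t_R = sqrt(l_R): model time runs twice as fast.
time_ratio = math.sqrt(height_ratio)

print(volume_ratio, power_density_ratio, time_ratio)
```

The volume scale comes out as 1/400 and the time ratio as 1/2, matching the abstract; the power density in the model is twice that of the prototype.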
NASA Astrophysics Data System (ADS)
Allu, S.; Velamur Asokan, B.; Shelton, W. A.; Philip, B.; Pannala, S.
2014-06-01
A generalized three dimensional computational model based on unified formulation of electrode-electrolyte system of an electric double layer supercapacitor has been developed. This model accounts for charge transport across the electrode-electrolyte system. It is based on volume averaging, a widely used technique in multiphase flow modeling ([1,2]) and is analogous to porous media theory employed for electrochemical systems [3-5]. A single-domain approach is considered in the formulation where there is no need to model the interfacial boundary conditions explicitly as done in prior literature ([6]). Spatio-temporal variations, anisotropic physical properties, and upscaled parameters from lower length-scale simulations and experiments can be easily introduced in the formulation. Model complexities like irregular geometric configuration, porous electrodes, charge transport and related performance characteristics of the supercapacitor can be effectively captured in higher dimensions. This generalized model also provides insight into the applicability of 1D models ([6]) and where multidimensional effects need to be considered. A sensitivity analysis is presented to ascertain the dependence of the charge and discharge processes on key model parameters. Finally, application of the formulation to non-planar supercapacitors is presented.
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A; Philip, Bobby; Pannala, Sreekanth
2014-01-01
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. The formulation is based on volume averaging, a widely used concept for multiphase flow equations ([28], [36]), and is analogous to the porous media theory typically employed for electrochemical systems ([22], [39], [12]). This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Even with irregular geometric configurations, including porous electrodes, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.
Scaling analysis of the galaxy distribution in the SSRS catalog
NASA Astrophysics Data System (ADS)
Campos, A.; Dominguez-Tenreiro, R.; Yepes, G.
1994-12-01
A detailed analysis of the galaxy distribution in the Southern Sky Redshift Survey (SSRS) by means of the multifractal or scaling formalism is presented. It is shown that galaxies cluster in different ways according to their morphological type as well as their size. Elliptical galaxies are more clustered than spirals, even at scales up to 15 h^{-1} Mpc, whereas no clear segregation between early and late spirals is found. It is also shown that smaller galaxies distribute more homogeneously than larger galaxies.
Scaling Analysis of the Galaxy Distribution in the SSRS Catalog
A. Campos; R. Dominguez-Tenreiro; G. Yepes
1994-07-14
A detailed analysis of the galaxy distribution in the Southern Sky Redshift Survey (SSRS) by means of the multifractal or scaling formalism is presented. It is shown that galaxies cluster in different ways according to their morphological type as well as their size. Ellipticals are more clustered than spirals, even at scales up to 15 h$^{-1}$ Mpc, whereas no clear segregation between early and late spirals is found. It is also shown that smaller galaxies distribute more homogeneously than larger galaxies.
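The multifractal (scaling) formalism used above characterizes clustering through a spectrum of generalized dimensions. The simplest of these, the correlation dimension D2, can be estimated from pair counts; the following toy sketch does this for a uniform 2D point set (illustrative only, with an assumed two-scale finite difference; the paper works with 3D galaxy positions and the full multifractal spectrum):

```python
# Toy estimate of the correlation dimension D2 from the correlation
# integral C(r) ~ r^D2, for a homogeneous 2D point set.
import math
import random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(600)]

def corr_integral(r):
    """Fraction of point pairs closer than r."""
    n = len(pts)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < r * r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# D2 = d log C(r) / d log r, estimated by a two-scale finite difference.
r1, r2 = 0.02, 0.08
D2 = (math.log(corr_integral(r2)) - math.log(corr_integral(r1))) / math.log(r2 / r1)
print(D2)  # near 2 for a homogeneous 2D distribution (edge effects lower it slightly)
```

A clustered distribution would yield D2 below the embedding dimension at these scales, which is the kind of segregation signal the paper measures between ellipticals and spirals.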
Proteomics beyond large-scale protein expression analysis.
Boersema, Paul J; Kahraman, Abdullah; Picotti, Paola
2015-08-01
Proteomics is commonly referred to as the application of high-throughput approaches to protein expression analysis. Typical results of proteomics studies are inventories of the protein content of a sample or lists of differentially expressed proteins across multiple conditions. Recently, however, an explosion of novel proteomics workflows has significantly expanded proteomics beyond the analysis of protein expression. Targeted proteomics methods, for example, enable the analysis of the fine dynamics of protein systems, such as a specific pathway or a network of interacting proteins, and the determination of protein complex stoichiometries. Structural proteomics tools allow extraction of restraints for structural modeling and identification of structurally altered proteins on a proteome-wide scale. Other variations of the proteomic workflow can be applied to the large-scale analysis of protein activity, location, degradation and turnover. These exciting developments provide new tools for multi-level 'omics' analysis and for the modeling of biological networks in the context of systems biology studies. PMID:25636126
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
Geographical Scale Effects on the Analysis of Leptospirosis Determinants
Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães
2014-01-01
Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide, but its pattern of transmission varies depending on where it happens. Climate change may increase the number of cases, especially in developing countries such as Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in the studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the amount of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536
Full-scale system impact analysis: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
The Digital Document Storage (DDS) Full Scale System can provide cost-effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this impact analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate recent working assumptions upon which the success of the DDS Full Scale System depends.
NASA Astrophysics Data System (ADS)
Taghizadeh-Popp, M.; Heinis, S.; Szalay, A. S.
2012-08-01
We propose to describe the variety of galaxies from the Sloan Digital Sky Survey by using only one affine parameter. To this aim, we construct the principal curve (P-curve) passing through the spine of the data point cloud, considering the eigenspace derived from Principal Component Analysis (PCA) of morphological, physical, and photometric galaxy properties. Thus, galaxies can be labeled, ranked, and classified by a single arc-length value of the curve, measured at the unique closest projection of the data points on the P-curve. We find that the P-curve has a "W" letter shape with three turning points, defining four branches that represent distinct galaxy populations. This behavior is controlled mainly by two properties, namely u - r and star formation rate (from blue young at low arc length to red old at high arc length), while most other properties correlate well with these two. We further present the variations of several important galaxy properties as a function of arc length. Luminosity functions vary from steep Schechter fits at low arc length to double power law and ending in lognormal fits at high arc length. Galaxy clustering shows increasing autocorrelation power at large scales as arc length increases. Cross correlation of galaxies with different arc lengths shows that the probability of two galaxies belonging to the same halo decreases as their distance in arc length increases. PCA analysis allows us to find peculiar galaxy populations located apart from the main cloud of data points, such as small red galaxies dominated by a disk, of relatively high stellar mass-to-light ratio and surface mass density. On the other hand, the P-curve helped us understand the average trends, encoding 75% of the available information in the data. 
The P-curve allows not only dimensionality reduction but also provides supporting evidence for the following relevant physical models and scenarios in extragalactic astronomy: (1) The hierarchical merging scenario in the formation of a selected group of red massive galaxies. These galaxies present a lognormal r-band luminosity function, which might arise from multiplicative processes involved in this scenario. (2) A connection between the onset of active galactic nucleus activity and star formation quenching as mentioned in Martin et al., which appears in green galaxies transitioning from blue to red populations.
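The P-curve construction above begins with a PCA of the galaxy-property space. A minimal stdlib sketch of that first step, extracting the leading principal component by power iteration on hypothetical toy data (not SDSS), and ranking points by their projection on it, the 1D analogue of labeling galaxies by arc length:

```python
# Leading principal component by power iteration on the covariance
# matrix, for correlated 2D toy data (an illustrative stand-in).
import math
import random

random.seed(1)
# Toy data: y ~ 0.5*x + noise, so the first PC should tilt toward x.
data = []
for _ in range(500):
    x = random.gauss(0.0, 2.0)
    y = 0.5 * x + random.gauss(0.0, 0.3)
    data.append((x, y))

# Sample means and 2x2 covariance matrix.
mx = sum(p[0] for p in data) / len(data)
my = sum(p[1] for p in data) / len(data)
cxx = sum((p[0] - mx) ** 2 for p in data) / len(data)
cyy = sum((p[1] - my) ** 2 for p in data) / len(data)
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / len(data)

# Power iteration converges to the leading eigenvector.
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(w[0], w[1])
    v = (w[0] / norm, w[1] / norm)

# Rank each point by its projection on the first PC.
scores = [(p[0] - mx) * v[0] + (p[1] - my) * v[1] for p in data]
print(v)
```

The P-curve generalizes this linear ranking to a curve threaded through the point cloud, so the single arc-length parameter can follow nonlinear structure such as the "W" shape described above.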
Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
Bing-Nan Lu; Jie Zhao; En-Guang Zhao; Shan-Gui Zhou
2013-03-04
Multi-dimensional constrained covariant density functional theories (CDFTs) were developed recently. In these theories, all shape degrees of freedom \beta_{\lambda\mu} with even \mu are allowed, e.g., \beta_{20}, \beta_{22}, \beta_{30}, \beta_{32}, \beta_{40}, \beta_{42}, \beta_{44}, and so on, and the CDFT functional can take one of the following four forms: meson-exchange or point-coupling nucleon interactions combined with non-linear or density-dependent couplings. In this contribution, some applications of these theories are presented. The potential energy surfaces of actinide nuclei in the (\beta_{20}, \beta_{22}, \beta_{30}) deformation space are investigated. It is found that, besides the octupole deformation, triaxiality also plays an important role in the second fission barriers. The non-axial reflection-asymmetric \beta_{32} shape in some transfermium nuclei with N = 150, namely 246Cm, 248Cf, 250Fm, and 252No, is studied.
Enrico Gerlach; Siegfried Eggl; Charalampos Skokos
2011-04-15
We study the problem of efficient integration of variational equations in multi-dimensional Hamiltonian systems. For this purpose, we consider a Runge-Kutta-type integrator, a Taylor series expansion method and the so-called `Tangent Map' (TM) technique based on symplectic integration schemes, and apply them to the Fermi-Pasta-Ulam $\beta$ (FPU-$\beta$) lattice of $N$ nonlinearly coupled oscillators, with $N$ ranging from 4 to 20. The fast and accurate reproduction of well-known behaviors of the Generalized Alignment Index (GALI) chaos detection technique is used as an indicator for the efficiency of the tested integration schemes. Implementing the TM technique--which shows the best performance among the tested algorithms--and exploiting the advantages of the GALI method, we successfully trace the location of low-dimensional tori.
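The tangent-map idea, evolving deviation vectors with the Jacobian of the map along the orbit, can be sketched on a much simpler system. The block below uses the 2D standard map as an illustrative stand-in (an assumption; the paper's system is the FPU-β lattice and its symplectic flow) and estimates the largest Lyapunov exponent from the average log-growth of a renormalized deviation vector:

```python
# Tangent-map evolution of a deviation vector along an orbit of the
# Chirikov standard map: p' = p + K sin(x), x' = x + p' (mod 2*pi).
# The average log-growth of the renormalized vector estimates the
# maximal Lyapunov exponent (mLE).
import math

K = 7.0                  # strongly chaotic regime
p, x = 0.3, 0.2          # orbit initial condition
w = (1.0, 0.0)           # deviation (tangent) vector
lyap_sum = 0.0
steps = 20000

for _ in range(steps):
    c = K * math.cos(x)                # Jacobian entry at the current point
    # Tangent map: dp' = dp + c*dx ; dx' = dp + (1 + c)*dx
    wp = w[0] + c * w[1]
    wx = w[0] + (1.0 + c) * w[1]
    # Orbit update (after the Jacobian is evaluated at the old x).
    p = p + K * math.sin(x)
    x = (x + p) % (2.0 * math.pi)
    # Renormalize and accumulate the growth factor.
    norm = math.hypot(wp, wx)
    lyap_sum += math.log(norm)
    w = (wp / norm, wx / norm)

mle = lyap_sum / steps
print(mle)  # roughly ln(K/2) for large K
```

The GALI indices of the paper generalize this by following several deviation vectors at once and monitoring their mutual alignment, which is what distinguishes regular tori from chaotic orbits.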
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two, and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity, and examine the connection between this method and that in [2].
NASA Astrophysics Data System (ADS)
Wu, Dapeng; He, Jinjin; Zhang, Shuo; Cao, Kun; Gao, Zhiyong; Xu, Fang; Jiang, Kai
2015-05-01
Multi-dimensional TiO2 hierarchical structures (MD-THS) assembled from mesoporous nanoribbons consisting of oriented, aligned nanocrystals are prepared via thermal decomposition of a Ti-containing gelatin-like precursor. A unique bridge-linking mechanism is proposed to illustrate the formation process of the precursor. Moreover, the as-prepared MD-THS possesses a high surface area of ~106 cm2 g-1, a broad pore size distribution from several nanometers to ~100 nm, and oriented assembly of the primary nanocrystals, which gives rise to a high CdS/CdSe quantum dot loading amount and inhibits carrier recombination in the photoanode. Thanks to these structural advantages, the cell derived from MD-THS demonstrates a power conversion efficiency (PCE) of 4.15%, representing a ~36% improvement over the nanocrystal-based cell, which suggests the promising application of MD-THS as a photoanode material in quantum-dot-sensitized solar cells.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
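High-order central-upwind schemes refine a monotone building block. As a much simpler illustration of the semi-discrete idea (first order, one dimension, and the Lax-Friedrichs numerical Hamiltonian, not the fifth-order method of the paper), consider u_t + H(u_x) = 0 with H(q) = q^2/2:

```python
# First-order Lax-Friedrichs scheme for the 1D Hamilton-Jacobi
# equation u_t + H(u_x) = 0, H(q) = q^2/2, on a periodic grid.
# Hhat(qm, qp) = H((qm+qp)/2) - (alpha/2)(qp - qm) is monotone
# provided alpha bounds |H'| and the CFL condition dt*alpha/dx <= 1 holds.
import math

N = 200
dx = 2.0 * math.pi / N
dt = 0.4 * dx                      # CFL-limited time step
alpha = 1.5                        # bound on |H'(q)| = |q| for this data

u = [math.sin(i * dx) for i in range(N)]   # periodic initial data

def H(q):
    return 0.5 * q * q

for _ in range(200):
    unew = u[:]
    for i in range(N):
        qm = (u[i] - u[i - 1]) / dx              # backward difference
        qp = (u[(i + 1) % N] - u[i]) / dx        # forward difference
        Hhat = H(0.5 * (qm + qp)) - 0.5 * alpha * (qp - qm)
        unew[i] = u[i] - dt * Hhat
    u = unew

# Since H >= 0, the viscosity solution is non-increasing in time;
# the maximum decays while the minimum stays bounded below by -1.
print(round(min(u), 3), round(max(u), 3))
```

The central-upwind fluxes of the paper replace the crude global bound alpha with one-sided local speeds, and the fifth-order WENO reconstruction replaces the first-order differences qm, qp.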
Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth
Smyth, Padhraic
Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth, Information and Computer Science. … information from massive observational datasets. Because of this historical context, data mining to date has largely focused on computational … a brief review of the origins of data mining as well as discussing some of the primary themes in current
Large-Scale Quantitative Analysis of Painting Arts
Jeong, Hawoong
Large-Scale Quantitative Analysis of Painting Arts. Daniel Kim, Seung-Woo Son & Hawoong Jeong. … has made rapid progress, researchers have come to a point where it is possible to perform statistical … digital image processing techniques, we investigate three quantitative measures of images … the usage
Exploratory Factor Analysis of African Self-Consciousness Scale Scores
ERIC Educational Resources Information Center
Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.
2012-01-01
This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…
Galaxy: A platform for interactive large-scale genome analysis
Miller, Webb
Galaxy: A platform for interactive large-scale genome analysis. Belinda Giardine, Cathy Riemer, … and functional data pose a challenge for biomedical researchers. Here we describe an interactive system, Galaxy … of Galaxy is a flexible history system that stores the queries from each user; performs operations
Confirmatory Factor Analysis of the Recent Exposure to Violence Scale
ERIC Educational Resources Information Center
van Dulmen, Manfred H. M.; Belliston, Lara M.; Flannery, Daniel J.; Singer, Mark
2008-01-01
The purpose of the current study is to advance the psychometric properties of the child-administered 22-item Recent Exposure to Violence Scale (REVS) using confirmatory factor analysis (CFA) across three large and ethnically diverse samples of children ranging in age from middle childhood through adolescence. Results of the CFA suggest that a…
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei; Gu, Chunwei
2015-07-01
This paper presents a high-order discontinuous Galerkin (DG) method based on a multi-dimensional gas kinetic evolution model for viscous flow computations. Generally, DG methods for equations with higher-order derivatives must transform the equations into a first-order system in order to avoid the so-called "non-conforming problem". In the traditional DG framework, the inviscid and viscous fluxes are treated differently in the numerics. Unlike traditional DG approaches, the current method adopts a kinetic evolution model for both inviscid and viscous flux evaluations uniformly. By using a multi-dimensional gas kinetic formulation, we can obtain a spatially and temporally dependent gas distribution function for the flux integration inside the cell and at the cell interface, which distinguishes it from the Gaussian quadrature point flux evaluation in the traditional DG method. Besides the initial higher-order non-equilibrium states inside each control volume, a linear least squares (LLS) method is used for the reconstruction of smooth distributions of macroscopic flow variables around each cell interface in order to construct the corresponding equilibrium state. Instead of separating the space and time integrations and using a multistage Runge-Kutta time stepping method for time accuracy, the current method integrates the flux function in space and time analytically, which saves computational time. Many test cases in two and three dimensions, including high Mach number compressible viscous and heat-conducting flows and low-speed, high Reynolds number laminar flows, are presented to demonstrate the performance of the current scheme.
A Distributed Architecture for Multi-dimensional Indexing and Data Retrieval
Koziris, Nectarios G.
throughput to scale and effectively copes with flash crowds. 1 Introduction. Due to the explosion of network … computing is the Service Oriented Architecture (SOA), applied pragmatically in Grid computing. A Grid
Amadi, Ovid Charles
2013-01-01
The requirement that individual cells be able to communicate with one another over a range of length scales is a fundamental prerequisite for the evolution of multicellular organisms. Often diffusible chemical molecules ...
Order Analysis: An Inferential Model of Dimensional Analysis and Scaling
ERIC Educational Resources Information Center
Krus, David J.
1977-01-01
Order analysis is discussed as a method for description of formal structures in multidimensional space. Its algorithm was derived using a combination of psychometric theory, formal logic theory, information theory, and graph theory concepts. The model provides for adjustment of its sensitivity to random variation. (Author/JKS)
Diskless Checkpointing on Super-scale Architectures
Engelmann, Christian
…generation peta-scale systems will have 50,000-100,000 processors. They will be deployed in the next 5 years. … -dimensional mesh; multi-dimensional torus. Experimental network configurations: grid positions, nearest … for checkpoints with no communication since last or since start. Coordination techniques: global synchronization
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-01-01
Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two…
NASA Astrophysics Data System (ADS)
Xu, Fuyi; Yuan, Jia
2015-10-01
This paper is dedicated to the study of the Cauchy problem for a multi-dimensional (N ≥ 2) compressible viscous liquid-gas two-phase flow model. We prove the local well-posedness of the system for large data in critical Besov spaces based on the L^p framework, under the sole assumption that the initial liquid mass is bounded away from zero.
SCALE system cross-section validation for criticality safety analysis
Hathout, A.M.; Westfall, R.M.; Dodds, H.L. Jr.
1980-01-01
The purpose of this study is to test selected data from three cross-section libraries for use in the criticality safety analysis of UO2 fuel rod lattices. The libraries, which are distributed with the SCALE system, are used to analyze potential criticality problems which could arise in the industrial fuel cycle for PWR and BWR reactors. Fuel lattice criticality problems could occur in pool storage, dry storage with accidental moderation, shearing and dissolution of irradiated elements, and in fuel transport and storage due to inadequate packing and shipping cask design. The data were tested by using the SCALE system to analyze 25 recently performed critical experiments.
Efficient Processing of Top-k Dominating Queries on Multi-Dimensional Data Man Lung Yiu
Yiu, Man Lung
in the 2D space, where the dimensions correspond to (preference) attribute values; traveling time … by users, and (iii) the result is independent of the scales at different dimensions. Despite … the parameter k). On the other hand, it might not always be easy for the user to specify an appropriate ranking
Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders
ERIC Educational Resources Information Center
Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.
2013-01-01
Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…
Scaled-particle theory analysis of cylindrical cavities in solution.
Ashbaugh, Henry S
2015-04-01
The solvation of hard spherocylindrical solutes is analyzed within the context of scaled-particle theory, which takes the view that the free energy of solvating an empty cavitylike solute is equal to the pressure-volume work required to inflate a solute from nothing to the desired size and shape within the solvent. Based on our analysis, an end cap approximation is proposed to predict the solvation free energy as a function of the spherocylinder length from knowledge regarding only the solvent density in contact with a spherical solute. The framework developed is applied to extend Reiss's classic implementation of scaled-particle theory and a previously developed revised scaled-particle theory to spherocylindrical solutes. To test the theoretical descriptions developed, molecular simulations of the solvation of infinitely long cylindrical solutes are performed. In hard-sphere solvents classic scaled-particle theory is shown to provide a reasonably accurate description of the solvent contact correlation and resulting solvation free energy per unit length of cylinders, while the revised scaled-particle theory fitted to measured values of the contact correlation provides a quantitative free energy. Applied to the Lennard-Jones solvent at a state-point along the liquid-vapor coexistence curve, however, classic scaled-particle theory fails to correctly capture the dependence of the contact correlation. Revised scaled-particle theory, on the other hand, provides a quantitative description of cylinder solvation in the Lennard-Jones solvent with a fitted interfacial free energy in good agreement with that determined for purely spherical solutes. The breakdown of classical scaled-particle theory does not result from the failure of the end cap approximation, however, but is indicative of neglected higher-order curvature dependences on the solvation free energy. PMID:25974499
Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale
Nykanen, V.; Raines, G.L.
2006-01-01
A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
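The weights-of-evidence contrast used above as the measure of predictive power follows a standard log-odds construction. The sketch below is a minimal, generic illustration with made-up counts, not the study's data or code:

```python
import math

def woe_contrast(n_cells, n_deposits, n_anomaly_cells, n_deposits_on_anomaly):
    """Weights-of-evidence contrast C = W+ - W- for a binary evidence layer.

    n_cells: total unit cells in the study area
    n_deposits: training deposits (e.g. known gold occurrences)
    n_anomaly_cells: cells flagged by the evidence layer (e.g. a magnetic low)
    n_deposits_on_anomaly: deposits that fall on flagged cells
    """
    # P(evidence | deposit) and P(evidence | no deposit)
    p_b_d = n_deposits_on_anomaly / n_deposits
    p_b_nd = (n_anomaly_cells - n_deposits_on_anomaly) / (n_cells - n_deposits)
    w_plus = math.log(p_b_d / p_b_nd)
    # Same log-odds for the complementary (unflagged) pattern
    w_minus = math.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus - w_minus

# A strongly predictive layer yields a large positive contrast;
# degrading the data (e.g. higher flight height) shrinks it toward zero.
c = woe_contrast(n_cells=10000, n_deposits=50,
                 n_anomaly_cells=500, n_deposits_on_anomaly=30)
```

Upward continuation blurs the evidence layer, so fewer deposits coincide with the flagged cells and the contrast computed this way decreases, which is the behavior the study reports.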
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
Bridgman crystal growth in low gravity - A scaling analysis
NASA Technical Reports Server (NTRS)
Alexander, J. I. D.; Rosenberger, Franz
1990-01-01
The results of an order-of-magnitude or scaling analysis are compared with those of numerical simulations of the effects of steady low gravity on compositional nonuniformity in crystals grown by the Bridgman-Stockbarger technique. In particular, the results are examined of numerical simulations of the effect of steady residual acceleration on the transport of solute in a gallium-doped germanium melt during directional solidification under low-gravity conditions. The results are interpreted in terms of the relevant dimensionless groups associated with the process, and scaling techniques are evaluated by comparing their predictions with the numerical results. It is demonstrated that, when convective transport is comparable with diffusive transport, some specific knowledge of the behavior of the system is required before scaling arguments can be used to make reasonable predictions.
New Criticality Safety Analysis Capabilities in SCALE 5.1
Bowman, Stephen M; DeHart, Mark D; Dunn, Michael E; Goluoglu, Sedat; Horwedel, James E; Petrie Jr, Lester M; Rearden, Bradley T; Williams, Mark L
2007-01-01
Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.
Multiple scales analysis of interface dynamics in ^4He.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Ranjan; Prasad, Anoop; Weichman, Peter B.; Miller, Jonathan
1997-03-01
We describe theoretically the slow dynamics of the superfluid-normal interface that develops when a uniform heat current is passed through near-critical ^4He (R.V. Duncan, G. Ahlers and V. Steinberg, Phys. Rev. Lett. 60, 1522 (1988), and references therein). Using a multiple scales analysis, along with microscopically derived matching conditions that determine how heat transport is converted from conduction to superfluid counterflow as the interface is crossed, we derive an effective two-dimensional phase equation, resembling somewhat the KPZ equation, for the interface response to internal thermal and external vibrational noise sources, focusing especially on the question of large-scale wandering and roughness. We also compare this work with a linear stability analysis which we have carried out. The results are relevant to the proposed NASA microgravity DYNAMX project (Czech. J. Phys. 46, Suppl. 1, 87 (1996)). We acknowledge financial support from the DYNAMX project.
Robust Topology-Based Analysis of Large Scale Data
Pascucci, Valerio
[Slide fragments: classical definitions of regular points and simulation of differentiability in 1D/2D/3D; 2D and 3D multi-saddles; topological analysis of Rayleigh-Taylor instability; multi-scale time tracking of combustion particles.]
Empirical analysis of scaling and fractal characteristics of outpatients
NASA Astrophysics Data System (ADS)
Zhang, Li-Jiang; Liu, Zi-Xian; Guo, Jin-Li
2014-01-01
The paper uses power-law frequency distribution, power spectrum analysis, detrended fluctuation analysis, and surrogate data testing to evaluate outpatient registration data of two hospitals in China and to investigate the human dynamics of systems that use the “first come, first served” protocols. The research results reveal that outpatient behavior follows scaling laws. The results also suggest that the time series of inter-arrival times exhibit 1/f noise and have positive long-range correlation. Our research may contribute to operational optimization and resource allocation in hospitals based on FCFS admission protocols.
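Detrended fluctuation analysis, one of the methods listed above, can be sketched in a few lines. This is a minimal generic DFA-1 implementation (the window sizes and the synthetic white-noise input are illustrative, not the hospital data):

```python
import numpy as np

def dfa_exponent(x, window_sizes):
    """Detrended fluctuation analysis (DFA-1) scaling exponent alpha.

    alpha ~ 0.5 for uncorrelated noise; alpha > 0.5 indicates positive
    long-range correlation (alpha ~ 1.0 corresponds to 1/f noise).
    """
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluctuations = []
    for n in window_sizes:
        sq = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))
    # Slope of log F(n) versus log n gives the exponent
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(0)
alpha_white = dfa_exponent(rng.standard_normal(4096), [16, 32, 64, 128, 256])
```

For uncorrelated input the estimated exponent clusters around 0.5; an inter-arrival series with positive long-range correlation, as reported in the paper, would yield an exponent noticeably above 0.5.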
Microbial community analysis of a full-scale DEMON bioreactor.
Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus
2015-03-01
Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor due to alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium oxidizing bacteria, anammox bacteria and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of microorganisms, which play a key role in the performance of bioreactors. In this way, pyrosequencing has been proven a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398
Swan Jr., Colby Corson
[Slide fragments: multi-scale unit-cell analysis of textile composites, finite deformation stiffness characteristics; length scales considered: fiber diameter, yarn diameter, textile composite, structure.]
Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)
1999-01-01
We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
Multi-dimensional cosmological radiative transfer with a Variable Eddington Tensor formalism
NASA Astrophysics Data System (ADS)
Gnedin, Nickolay Y.; Abel, Tom
2001-10-01
We present a new approach to numerically model continuum radiative transfer based on the Optically Thin Variable Eddington Tensor (OTVET) approximation. Our method ensures the exact conservation of the photon number and flux (in the explicit formulation) and automatically switches from the optically thick to the optically thin regime. It scales as N log N with the number of hydrodynamic resolution elements and is independent of the number of sources of ionizing radiation (i.e. works equally fast for an arbitrary source function). We also describe an implementation of the algorithm in a Softened Lagrangian Hydrodynamics (SLH) code and a multi-frequency approach appropriate for hydrogen and helium continuum opacities. We present extensive tests of our method for single and multiple sources in homogeneous and inhomogeneous density distributions, as well as a realistic simulation of cosmological reionization.
NASA Astrophysics Data System (ADS)
Matsuki, Yoh; Nakamura, Shinji; Fukui, Shigeo; Suematsu, Hiroto; Fujiwara, Toshimichi
2015-10-01
Magic-angle spinning (MAS) NMR is a powerful tool for studying molecular structure and dynamics, but suffers from its low sensitivity. Here, we developed a novel helium-cooling MAS NMR probe system adopting a closed-loop gas recirculation mechanism. In addition to the sensitivity gain due to low temperature, the present system has enabled highly stable MAS (vR = 4-12 kHz) at cryogenic temperatures (T = 35-120 K) for over a week without consuming helium at a cost for electricity of 16 kW/h. High-resolution 1D and 2D data were recorded for a crystalline tri-peptide sample at T = 40 K and B0 = 16.4 T, where an order of magnitude of sensitivity gain was demonstrated versus room temperature measurement. The low-cost and long-term stable MAS strongly promotes broader application of the brute-force sensitivity-enhanced multi-dimensional MAS NMR, as well as dynamic nuclear polarization (DNP)-enhanced NMR in a temperature range lower than 100 K.
Schoenberg, Poppy L A; Speckens, Anne E M
2015-02-01
To illuminate candidate neural working mechanisms of Mindfulness-Based Cognitive Therapy (MBCT) in the treatment of recurrent depressive disorder, parallel to the potential interplays between modulations in electro-cortical dynamics and depressive symptom severity and self-compassionate experience. Linear and nonlinear α and γ EEG oscillatory dynamics were examined concomitant to an affective Go/NoGo paradigm, pre-to-post MBCT or natural wait-list, in 51 recurrent depressive patients. Specific EEG variables investigated were: (1) induced event-related (de-)synchronisation (ERD/ERS), (2) evoked power, and (3) inter-/intra-hemispheric coherence. Secondary clinical measures included depressive severity and experiences of self-compassion. MBCT significantly downregulated α and γ power, reflecting increased cortical excitability. Enhanced α-desynchronisation/ERD was observed for negative material, opposed to attenuated α-ERD towards positively valenced stimuli, suggesting activation of neural networks usually hypoactive in depression, related to positive emotion regulation. MBCT-related increase in left-intra-hemispheric α-coherence of the fronto-parietal circuit aligned with these synchronisation dynamics. Ameliorated depressive severity and increased self-compassionate experience pre-to-post MBCT correlated with α-ERD change. The multi-dimensional neural mechanisms of MBCT pertain to task-specific linear and non-linear neural synchronisation and connectivity network dynamics. We propose that MBCT-related modulations in differing cortical oscillatory bands have discrete excitatory (enacting positive emotionality) and inhibitory (disengaging from negative material) effects, where mediation in the α and γ bands relates to the former. PMID:26052359
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
Barrett, Louise; Henzi, S. Peter; Lusseau, David
2012-01-01
Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural ‘knock-outs’ in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054
ITQ-54: a multi-dimensional extra-large pore zeolite with 20 × 14 × 12-ring channels
Jiang, Jiuxing; Yun, Yifeng; Zou, Xiaodong; Jorda, Jose Luis; Corma, Avelino
2015-01-01
A multi-dimensional extra-large pore silicogermanate zeolite, named ITQ-54, has been synthesised by in situ decomposition of the N,N-dicyclohexylisoindolinium cation into the N-cyclohexylisoindolinium cation. Its structure was solved by 3D rotation electron diffraction (RED) from crystals of ca. 1 μm in size. The structure of ITQ-54 contains straight intersecting 20 × 14 × 12-ring channels along the three crystallographic axes and it is one of the few zeolites with extra-large channels in more than one direction. ITQ-54 has a framework density of 11.1 T atoms per 1000 Å³, which is one of the lowest among the known zeolites. ITQ-54 was obtained together with GeO₂ as an impurity. A heavy liquid separation method was developed and successfully applied to remove this impurity from the zeolite. ITQ-54 is stable up to 600 °C and exhibits permanent porosity. The structure was further refined using powder X-ray diffraction (PXRD) data for both as-made and calcined samples.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
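A common way to estimate the fractal dimension described above is box counting; the slope of log N(s) versus log(1/s) over a range of box sizes s gives the dimension. The sketch below is a generic illustration of that idea, not the ICAMS software itself (which uses its own estimators):

```python
import numpy as np

def box_counting_dimension(mask, box_sizes):
    """Estimate the box-counting (fractal) dimension of a binary image mask.

    Counts occupied boxes N(s) at each box size s; the dimension is the
    slope of log N(s) versus log(1/s). Constant slope across scales is
    the self-similarity property discussed above.
    """
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        # Trim so an s-by-s grid tiles the image exactly, then count
        # boxes that contain at least one "on" pixel.
        trimmed = mask[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log([1.0 / s for s in box_sizes]),
                          np.log(counts), 1)
    return slope

# Sanity check: a filled square is a 2-D set, so its dimension is ~2.
filled = np.ones((256, 256), dtype=bool)
d = box_counting_dimension(filled, [2, 4, 8, 16, 32])
```

A real remotely sensed surface would be thresholded (or treated with a surface-based estimator) and would typically yield a non-integer slope that varies with land cover, which is the effect the ICAMS study measures.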
Reactor Physics Methods and Analysis Capabilities in SCALE
DeHart, Mark D; Bowman, Stephen M
2011-01-01
The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.
Two-field analysis of no-scale supergravity inflation
Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.
2015-01-01
Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.
Bayesian large-scale structure inference and cosmic web analysis
Leclercq, Florent
2015-01-01
Surveys of the cosmic large-scale structure carry opportunities for building and testing cosmological theories about the origin and evolution of the Universe. This endeavor requires appropriate data assimilation tools, for establishing the contact between survey catalogs and models of structure formation. In this thesis, we present an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the cosmic web: the BORG algorithm infers the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. The method, based on Bayesian probability theory, provides accurate means of uncertainty quantification. We demonstrate the application of BORG to the Sloan Digital Sky Survey data and describe the primordial and late-time large-scale structure in the observed volume. We show how the approach has led to the fi...
Scaling and dimensional analysis of acoustic streaming jets
Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.
2014-09-15
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Dehazing method through polarimetric imaging and multi-scale analysis
NASA Astrophysics Data System (ADS)
Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin
2015-05-01
An approach to haze removal that combines polarimetric imaging with multi-scale analysis has been developed to address the problem that haze weakens the interpretation of remote sensing imagery through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and of the object radiance in the imaging procedure have been considered. On the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized, so multi-scale analysis through the wavelet transform has been employed so that the low-frequency components dominated by haze and the high-frequency coefficients occupied by image details and edges can be processed separately. Following the measurement of the polarization feature by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in haze weather, from which the best polarized image I_min and the worst one I_max can be synthesized. Afterwards, these two haze-contaminated polarized images were decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed with a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The final haze-free image is then reconstructed by inverse wavelet transform. Experimental results verify that the proposed dehazing method strongly improves image visibility and increases detection distance through haze for imaging warning and remote sensing systems.
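The band-splitting pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it uses a hand-rolled single-level 2D Haar transform, a heavily simplified polarization-difference dehazing step on the low-frequency band, and an illustrative power-law boost on the detail bands. The parameters `p_air` (degree of polarization of the airlight) and `detail_gain` are hypothetical placeholders.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: returns (LL, (LH, HL, HH))."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Exact inverse of haar2d."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def dehaze_split(i_max, i_min, p_air=0.3, detail_gain=1.5):
    """Sketch of the pipeline: polarization dehazing on the
    low-frequency band, a nonlinear stretch on the detail bands.
    p_air and detail_gain are illustrative, not from the paper."""
    ll_max, _ = haar2d(i_max)
    ll_min, det_min = haar2d(i_min)
    airlight = (ll_max - ll_min) / p_air            # estimated haze veil
    a_inf = airlight.max() + 1e-6                   # airlight at infinity
    t = np.maximum(1.0 - airlight / a_inf, 0.1)     # transmission, clamped
    ll_clear = (ll_max + ll_min - airlight) / t     # dehazed low-frequency band
    det = tuple(np.sign(d) * np.abs(d) ** 0.9 * detail_gain
                for d in det_min)                   # nonlinear detail boost
    return ihaar2d(ll_clear, det)
```

The Haar pair is exactly invertible, so all of the dehazing behavior comes from how the two bands are modified before reconstruction.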
Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R
2015-07-01
Focusing on a set of 3 multidimensional measures of conceptually related but different aspects of masculinity, we use factor analytic techniques to address 2 issues: (a) whether psychological constructs that are theoretically distinct but require fairly subtle discriminations by survey respondents can be accurately captured by self-report measures, and (b) how to better understand sources of variance in subscale and total scores developed from such measures. The specific measures investigated were the: (a) Male Role Norms Inventory-Short Form (MRNI-SF); (b) Conformity to Masculine Norms Inventory-46 (CMNI-46); and (c) Gender Role Conflict Scale-Short Form (GRCS-SF). Data (N = 444) were from community-dwelling and college men who responded to an online survey. EFA results demonstrated the discriminant validity of the 20 subscales comprising the 3 instruments, thus indicating that relatively subtle distinctions between norms, conformity, and conflict can be captured with self-report measures. CFA was used to compare 2 different methods of modeling a broad/general factor for each of the 3 instruments. For the CMNI-46 and MRNI-SF, a bifactor model fit the data significantly better than did a hierarchical factor model. In contrast, the hierarchical model fit better for the GRCS-SF. The discussion addresses implications of these specific findings for use of the measures in research studies, as well as broader implications for measurement development and assessment in other research domains of counseling psychology which also rely on multidimensional self-report instruments. PMID:26167651
Three decades of multi-dimensional change in global leaf phenology
NASA Astrophysics Data System (ADS)
Buitenwerf, Robert; Rose, Laura; Higgins, Steven I.
2015-04-01
Changes in the phenology of vegetation activity may accelerate or dampen rates of climate change by altering energy exchanges between the land surface and the atmosphere and can threaten species with synchronized life cycles. Current knowledge of long-term changes in vegetation activity is regional, or restricted to highly integrated measures of change such as net primary productivity, which mask details that are relevant for Earth system dynamics. Such details can be revealed by measuring changes in the phenology of vegetation activity. Here we undertake a comprehensive global assessment of changes in vegetation phenology. We show that the phenology of vegetation activity changed severely (by more than 2 standard deviations in one or more dimensions of phenological change) on 54% of the global land surface between 1981 and 2012. Our analysis confirms previously detected changes in the boreal and northern temperate regions. The adverse consequences of these northern phenological shifts for land-surface-climate feedbacks, ecosystems and species are well known. Our study reveals equally severe phenological changes in the southern hemisphere, where consequences for the energy budget and the likelihood of phenological mismatches are unknown. Our analysis provides a sensitive and direct measurement of ecosystem functioning, making it useful both for monitoring change and for testing the reliability of early warning signals of change.
NASA Astrophysics Data System (ADS)
Rice, Stuart A.; Toda, Mikito; Komatsuzaki, Tamiki; Konishi, Tetsuro; Berry, R. Stephen
2005-01-01
Edited by Nobel Prize winner Ilya Prigogine and renowned authority Stuart A. Rice, the Advances in Chemical Physics series provides a forum for critical, authoritative evaluations in every area of the discipline. In a format that encourages the expression of individual points of view, experts in the field present comprehensive analyses of subjects of interest. Advances in Chemical Physics remains the premier venue for presentations of new findings in its field. Volume 130 consists of three parts: Part I: Phase Space Geometry of Multi-dimensional Dynamical Systems and Reaction Processes; Part II: Complex Dynamical Behavior in Clusters and Proteins, and Data Mining to Extract Information on Dynamics; and Part III: New Directions in Multi-Dimensional Chaos and Evolutionary Reactions.
NASA Astrophysics Data System (ADS)
Kiyan, D.; Jones, A. G.; Fullea, J.; Ledo, J.; Siniscalchi, A.; Romano, G.
2014-12-01
The PICASSO (Program to Investigate Convective Alboran Sea System Overturn) project and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROSCORES TOPO-EUROPE project) project were designed to collect high-resolution, multi-disciplinary lithospheric-scale data in order to understand the tectonic evolution and lithospheric structure of the western Mediterranean. The over-arching objectives of the magnetotelluric (MT) component of the projects are (i) to provide new electrical conductivity constraints on the crustal and lithospheric structure of the Atlas Mountains, and (ii) to test the hypotheses for explaining the purported lithospheric cavity beneath the Middle and High Atlas inferred from potential-field lithospheric modeling. We present the results of an MT experiment we carried out in Morocco along two profiles: an approximately N-S oriented profile crossing the Middle Atlas, the High Atlas and the eastern Anti-Atlas to the east (called the MEK profile, for Meknes) and an NE-SW oriented profile through the western High Atlas to the west (called the MAR profile, for Marrakech). Our results are derived from three-dimensional (3-D) MT inversion of the MT data set employing the parallel version of the Modular system for Electromagnetic inversion (ModEM) code. The distinct conductivity difference between the Middle-High Atlas (conductive) and the Anti-Atlas (resistive) correlates with the South Atlas Front fault, the depth extent of which appears to be limited to the uppermost mantle (approx. 60 km). In all inverse solutions, the crust and the upper mantle show resistive signatures (approx. 1,000 Ωm) beneath the Anti-Atlas, which is part of the stable West African Craton.
Partial melt and/or exotic fluids enriched in volatiles produced by the melt can account for the high middle to lower crustal and uppermost mantle conductivity in the Folded Middle Atlas, the High Moulouya Plain and the central High Atlas.
NASA Astrophysics Data System (ADS)
West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman
2009-02-01
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects the expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay- and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment, developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
Conservative-variable average states for equilibrium gas multi-dimensional fluxes
NASA Technical Reports Server (NTRS)
Iannelli, G. S.
1992-01-01
Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
Philip E. Wannamaker
2007-12-31
The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.), who supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered-grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver, and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area supported by the DOE and the Navy. Inversion of these data on a 2 Gb desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion which used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or, less likely, cryptic acid-sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.
Multi-scale analysis and simulation of powder blending in pharmaceutical manufacturing
Ngai, Samuel S. H
2005-01-01
A Multi-Scale Analysis methodology was developed and carried out for gaining fundamental understanding of the pharmaceutical powder blending process. Through experiment, analysis and computer simulations, microscopic ...
Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai
2015-12-21
A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode. PMID:26581017
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total variation stable (1D) or maximum norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, a preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of ENO finite difference method with spectral method in two space dimension is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
Irregularities and scaling in signal and image processing: multifractal analysis
NASA Astrophysics Data System (ADS)
Abry, Patrice; Jaffard, Stéphane; Wendt, Herwig
2015-03-01
B. Mandelbrot gave a new birth to the notions of scale invariance, self-similarity and non-integer dimensions, gathering them as the founding corner-stones used to build up fractal geometry. The first purpose of the present contribution is to review and relate together these key notions, explore their interplay and show that they are different facets of a single intuition. Second, we will explain how these notions lead to the derivation of the mathematical tools underlying multifractal analysis. Third, we will reformulate these theoretical tools into a wavelet framework, hence enabling their better theoretical understanding as well as their efficient practical implementation. B. Mandelbrot used his concept of fractal geometry to analyze real-world applications of very different natures. As a tribute to his work, applications of various origins, and where multifractal analysis proved fruitful, are revisited to illustrate the theoretical developments proposed here.
A Multi-scale Approach to Urban Thermal Analysis
NASA Technical Reports Server (NTRS)
Gluch, Renne; Quattrochi, Dale A.
2005-01-01
An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.
Psychometric analysis of the Ten-Item Perceived Stress Scale.
Taylor, John M
2015-03-01
Although the 10-item Perceived Stress Scale (PSS-10) is a popular measure, a review of the literature reveals 3 significant gaps: (a) There is some debate as to whether a 1- or a 2-factor model best describes the relationships among the PSS-10 items, (b) little information is available on the performance of the items on the scale, and (c) it is unclear whether PSS-10 scores are subject to gender bias. These gaps were addressed in this study using a sample of 1,236 adults from the National Survey of Midlife Development in the United States II. Based on self-identification, participants were 56.31% female, 77% White, 17.31% Black and/or African American, and the average age was 54.48 years (SD = 11.69). Findings from an ordinal confirmatory factor analysis suggested the relationships among the items are best described by an oblique 2-factor model. Item analysis using the graded response model provided no evidence of item misfit and indicated both subscales have a wide estimation range. Although t tests revealed a significant difference between the means of males and females on the Perceived Helplessness Subscale (t = 4.001, df = 1234, p < .001), measurement invariance tests suggest that PSS-10 scores may not be substantially affected by gender bias. Overall, the findings suggest that inferences made using PSS-10 scores are valid. However, this study calls into question inferences where the multidimensionality of the PSS-10 is ignored. PMID:25346996
Instrumentation development for multi-dimensional two-phase flow modeling
Kirouac, G.J.; Trabold, T.A.; Vassallo, P.F.; Moore, W.E.; Kumar, R.
1999-06-01
A multi-faceted instrumentation approach is described which has played a significant role in obtaining fundamental data for two-phase flow model development. This experimental work supports the development of a three-dimensional, two-fluid, four field computational analysis capability. The goal of this development is to utilize mechanistic models and fundamental understanding rather than rely on empirical correlations to describe the interactions in two-phase flows. The four fields (two dispersed and two continuous) provide a means for predicting the flow topology and the local variables over the full range of flow regimes. The fidelity of the model development can be verified by comparisons of the three-dimensional predictions with local measurements of the flow variables. Both invasive and non-invasive instrumentation techniques and their strengths and limitations are discussed. A critical aspect of this instrumentation development has been the use of a low pressure/temperature modeling fluid (R-134a) in a vertical duct which permits full optical access to visualize the flow fields in all two-phase flow regimes. The modeling fluid accurately simulates boiling steam-water systems. Particular attention is focused on the use of a gamma densitometer to obtain line-averaged and cross-sectional averaged void fractions. Hot-film anemometer probes provide data on local void fraction, interfacial frequency, bubble and droplet size, as well as information on the behavior of the liquid-vapor interface in annular flows. A laser Doppler velocimeter is used to measure the velocity of liquid-vapor interfaces in bubbly, slug and annular flows. Flow visualization techniques are also used to obtain a qualitative understanding of the two-phase flow structure, and to obtain supporting quantitative data on bubble size. Examples of data obtained with these various measurement methods are shown.
Tera-scale astronomical data analysis and visualization
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.; Kilborn, V. A.
2013-03-01
We present a high-performance, graphics processing unit (GPU) based framework for the efficient analysis and visualization of (nearly) terabyte (TB) sized 3D images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image (1) volume rendering using an arbitrary transfer function at 7-10 frames per second, (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s, (3) evaluation of the image histogram in 4 s and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching 1 teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows that the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
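The global statistics listed in this abstract (mean, standard deviation, histogram, approximate median) are all single-pass reductions. The following CPU sketch, not the authors' GPU implementation, shows the same reductions computed over an image supplied in chunks, with the median recovered approximately from the accumulated histogram; the bin count and value range are illustrative assumptions.

```python
import numpy as np

def chunked_stats(chunks, bins=256, vmin=0.0, vmax=1.0):
    """Single-pass reductions over an image supplied as chunks:
    mean, standard deviation, histogram, and a histogram-based
    (approximate) median."""
    n = 0
    s = 0.0       # running sum
    s2 = 0.0      # running sum of squares
    hist = np.zeros(bins, dtype=np.int64)
    edges = np.linspace(vmin, vmax, bins + 1)
    for c in chunks:
        c = np.asarray(c, dtype=np.float64).ravel()
        n += c.size
        s += c.sum()
        s2 += (c * c).sum()
        hist += np.histogram(c, bins=edges)[0]
    mean = s / n
    std = np.sqrt(s2 / n - mean * mean)
    cum = np.cumsum(hist)
    k = np.searchsorted(cum, n / 2)             # first bin reaching half the voxels
    median = 0.5 * (edges[k] + edges[k + 1])    # bin-midpoint approximation
    return mean, std, hist, median
```

The median is accurate only to one histogram bin width, which is why an exact distributed median (item 4 in the abstract) is so much more expensive than the other statistics.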
MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A
Anusha, L. S.; Nagendra, K. N. E-mail: knn@iiap.res.in
2013-04-20
Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 Å using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, whereas in the line core only the line-center polarization can be reproduced using the Hanle effect.
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.
Tera-scale Astronomical Data Analysis and Visualization
Hassan, A H; Barnes, D G; Kilborn, V A
2012-01-01
We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7-10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes.
Scaled models in the analysis of fire-structure interaction
NASA Astrophysics Data System (ADS)
Andreozzi, A.; Bianco, N.; Musto, M.; Rotondo, G.
2015-11-01
A fire problem has been scaled in terms of geometry, boundary conditions, and material thermophysical properties by means of dimensionless parameters. Both the full-scale and the scaled model have been solved numerically for two different fire power values. Results obtained with the full-scale model and the scaled one are compared in terms of velocity and temperature profiles in order to assess the reliability of the scaled model in representing the behavior of the full-scale one.
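The abstract does not list the dimensionless groups the authors used, but a common convention in fire-model scaling is to preserve the Froude number, which gives Q ~ L^{5/2} for heat release rate and u, t ~ L^{1/2} for velocity and time. The helper below encodes that standard assumption for illustration only; it should not be read as the authors' exact scheme.

```python
def froude_scaled(scale, Q_full, t_full=None, u_full=None):
    """Froude-number-preserving scaling commonly used for fire plumes:
    Q ~ L^{5/2}, u ~ L^{1/2}, t ~ L^{1/2}. 'scale' is the model/full
    length ratio; this is a standard assumption, not the paper's scheme."""
    out = {"Q": Q_full * scale ** 2.5}
    if t_full is not None:
        out["t"] = t_full * scale ** 0.5
    if u_full is not None:
        out["u"] = u_full * scale ** 0.5
    return out
```

For example, a 1:4 model (scale = 0.25) of a 1000 kW fire would burn at about 31 kW under this convention.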
Large-scale dimension densities for heart rate variability analysis
NASA Astrophysics Data System (ADS)
Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen
2006-04-01
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF group (dimension density 0.97±0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13; EH, 0.54±0.05; YH, 0.57±0.05; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in the dimension density (day, 0.65±0.13; night, 0.66±0.12; n.s.), in contrast to healthy controls (day, 0.54±0.05; night, 0.61±0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
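The quantity underlying the method above is the Grassberger-Procaccia correlation sum C(r) and its logarithmic slope. The sketch below computes both for a delay-embedded scalar series; the LASDID normalization and boundary correction described in the abstract are not reproduced, and the embedding parameters are illustrative.

```python
import numpy as np

def correlation_sum(x, m=2, tau=1, radii=None):
    """Correlation sums C(r) for a scalar series embedded in m
    dimensions with delay tau (Grassberger-Procaccia). The paper's
    LASDID normalization is not implemented here."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    # pairwise max-norm distances between embedded points
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    dist = d[np.triu_indices(n, k=1)]
    if radii is None:
        # probe the *large* scales, as LASDID does
        radii = np.percentile(dist, [10, 25, 50, 75, 90])
    c = np.array([(dist <= r).mean() for r in radii])
    return radii, c

def dimension_density(radii, c):
    """Local slope d log C / d log r, whose large-scale behavior the
    LASDID method evaluates."""
    return np.diff(np.log(c)) / np.diff(np.log(radii))
```

The full pairwise distance matrix is O(n^2) in memory, which is acceptable here precisely because the method targets short records such as HRV segments.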
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
Global Mapping Analysis: Stochastic Gradient Algorithm in Multidimensional Scaling
NASA Astrophysics Data System (ADS)
Matsuda, Yoshitatsu; Yamaguchi, Kazunori
In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named “global mapping analysis” (GMA), which applies stochastic approximation to minimizing MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear one (e.g., ALSCAL), provided the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once in each iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can uncover various interesting structures from massive document collections.
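As a point of comparison for the stochastic approach above, plain stochastic gradient descent on the raw MDS stress looks as follows. This is a generic baseline, not GMA: it does not perform GMA's separation into local and global polynomial factors, and the learning-rate schedule is an illustrative choice.

```python
import numpy as np

def sgd_mds(D, dim=2, lr=0.05, epochs=300, seed=0):
    """Stochastic-gradient minimization of the raw MDS stress
    sum_{i<j} (||x_i - x_j|| - D_ij)^2, updating one pair at a time.
    A baseline sketch, not the GMA algorithm of the paper."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.normal(scale=0.1, size=(n, dim))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for epoch in range(epochs):
        rng.shuffle(pairs)
        step = lr / (1.0 + 0.02 * epoch)        # simple annealing schedule
        for i, j in pairs:
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) + 1e-12
            g = 2.0 * (dist - D[i, j]) * diff / dist   # gradient w.r.t. X[i]
            X[i] -= step * g
            X[j] += step * g
    return X
```

Each epoch touches all O(n^2) pairs, which is exactly the cost GMA's once-per-iteration global factors are designed to avoid.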
Large-Scale Quantitative Analysis of Painting Arts
NASA Astrophysics Data System (ADS)
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
Multidimensional scaling analysis of the dynamics of a country economy.
Tenreiro Machado, J A; Mata, Maria Eugénia
2013-01-01
This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process. PMID:24294132
Anomaly Detection in Multiple Scale for Insider Threat Analysis
Kim, Yoohwan; Sheldon, Frederick T; Hively, Lee M
2012-01-01
We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided with semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to the more significant and suspicious events. Then we build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where the low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally, the subjects are ranked by their anomaly scores to help the analysts focus on high-scored subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we show the framework of the proposed system and the operational algorithms.
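The profile-comparison step can be illustrated with a toy z-score deviation in Python (illustrative names and scoring only; the paper's semantic scoring rules and graph aggregation are not reproduced):

```python
import numpy as np

def anomaly_ranking(history, current):
    """history: dict subject -> array of past daily score vectors.
    current: dict subject -> today's score vector.
    Rank subjects by mean absolute z-score deviation of today's
    profile from that subject's own historical profile."""
    ranked = []
    for subj, past in history.items():
        mu = past.mean(axis=0)
        sigma = past.std(axis=0) + 1e-9   # avoid division by zero
        z = np.abs((current[subj] - mu) / sigma)
        ranked.append((subj, float(z.mean())))
    ranked.sort(key=lambda t: t[1], reverse=True)
    return ranked

history = {
    "alice": np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]),
    "bob": np.array([[5.0, 1.0], [4.8, 1.2], [5.2, 0.9]]),
}
current = {"alice": np.array([1.1, 2.0]), "bob": np.array([9.0, 4.0])}
ranking = anomaly_ranking(history, current)
```

Here "bob", whose current scores deviate sharply from his historical profile, ranks above "alice", whose activity is consistent with her past.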
Parallel Index and Query for Large Scale Data Analysis
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
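The core idea behind FastBit-style bitmap indexing can be sketched with equality-encoded bitmaps in Python (greatly simplified: no compression, and the equal-width binning is an illustrative choice, not FastBit's actual encoding):

```python
import numpy as np

# Build one bit vector per discretized bin; a range query is then
# answered by OR-ing the bitmaps of the bins the range covers.
rng = np.random.default_rng(7)
data = rng.uniform(0, 100, size=10_000)

edges = np.linspace(0, 100, 11)        # 10 equal-width bins of width 10
bins = np.digitize(data, edges[1:-1])  # bin id (0..9) for each record
bitmaps = [(bins == b) for b in range(10)]

# query: values in [30, 60) -> exactly bins 3, 4, and 5
hits = bitmaps[3] | bitmaps[4] | bitmaps[5]
exact = (data >= 30) & (data < 60)     # brute-force check
```

Because the query boundaries align with bin edges here, the bitmap answer matches the brute-force scan exactly; real systems handle misaligned boundaries with candidate checks on the edge bins.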
13C metabolic flux analysis at a genome-scale.
Gopalakrishnan, Saratram; Maranas, Costas D
2015-11-01
Metabolic models used in 13C metabolic flux analysis generally include a limited number of reactions primarily from central metabolism. They typically omit degradation pathways, complete cofactor balances, and atom transition contributions for reactions outside central metabolism. This study addresses the impact on prediction fidelity of scaling-up mapping models to a genome-scale. The core mapping model employed in this study accounts for 75 reactions and 65 metabolites, primarily from central metabolism. The genome-scale metabolic mapping model (GSMM) (697 reactions and 595 metabolites) is constructed using as a basis the iAF1260 model, upon eliminating reactions guaranteed not to carry flux based on growth and fermentation data for a minimal glucose growth medium. Labeling data for 17 amino acid fragments obtained from cells fed with glucose labeled at the second carbon were used to obtain fluxes and ranges. Metabolic fluxes and confidence intervals are estimated, for both core and genome-scale mapping models, by minimizing the sum of squares of differences between predicted and experimentally measured labeling patterns using the EMU decomposition algorithm. Overall, we find that both the topology and the estimated values of the metabolic fluxes remain largely consistent between the core and GSM models. Stepping up to a genome-scale mapping model leads to wider flux inference ranges for 20 key reactions present in the core model. The glycolysis flux range doubles due to the possibility of active gluconeogenesis, the TCA flux range expands by 80% due to the availability of a bypass through arginine consistent with labeling data, and the transhydrogenase reaction flux is essentially unresolved due to the presence of as many as five routes for the inter-conversion of NADPH to NADH afforded by the genome-scale model. By globally accounting for ATP demands in the GSMM, the unused ATP decreases drastically, with the lower bound matching the maintenance ATP requirement.
A non-zero flux for the arginine degradation pathway was identified to meet biomass precursor demands as detailed in the iAF1260 model. Inferred ranges for 81% of the reactions in the genome-scale metabolic (GSM) model varied less than one-tenth of the basis glucose uptake rate (95% confidence test). This is because as many as 411 reactions in the GSM are growth coupled meaning that the single measurement of biomass formation rate locks the reaction flux values. This implies that accurate biomass formation rate and composition are critical for resolving metabolic fluxes away from central metabolism and suggests the importance of biomass composition (re)assessment under different genetic and environmental backgrounds. In addition, the loss of information associated with mapping fluxes from MFA on a core model to a GSM model is quantified. PMID:26358840
Confirmatory Factor Analysis of the Educators' Attitudes toward Educational Research Scale
ERIC Educational Resources Information Center
Ozturk, Mehmet Ali
2011-01-01
This article reports results of a confirmatory factor analysis performed to cross-validate the factor structure of the Educators' Attitudes Toward Educational Research Scale. The original scale had been developed by the author and revised based on the results of an exploratory factor analysis. In the present study, the revised scale was given to…
Lahey, Jr., Richard T.; Jansen, Kenneth E.; Nagrath, Sunitha
2002-12-02
A new adaptive grid, 3-D FEM hydrodynamic shock (i.e., HYDRO) code called PHASTA-2C has been developed and used to investigate bubble implosion phenomena leading to ultra-high temperatures and pressures. In particular, it was shown that nearly spherical bubble compressions occur during bubble implosions and the predicted conditions associated with a recent ORNL Bubble Fusion experiment [Taleyarkhan et al, Science, March, 2002] are consistent with the occurrence of D/D fusion.
Jiang, Boyang
2012-02-14
parameters (bottom friction, eddy viscosity, etc.); errors in input fields and errors in the specification of boundary information (lateral boundary conditions, etc.). Errors in input parameters can be addressed with fairly straightforward parameter...
Not Available
1982-01-01
The Department of Energy, Morgantown Energy Technology Center, has been supporting the development of flow models for Devonian shale gas reservoirs. The broad objectives of this modeling program are: (1) To develop and validate a mathematical model which describes gas flow through Devonian shales. (2) To determine the sensitive parameters that affect deliverability and recovery of gas from Devonian shales. (3) To recommend laboratory and field measurements for determination of those parameters critical to the productivity and timely recovery of gas from the Devonian shales. (4) To analyze pressure and rate transient data from observation and production gas wells to determine reservoir parameters and well performance. (5) To study and determine the overall performance of Devonian shale reservoirs in terms of well stimulation, well spacing, and resource recovery as a function of gross reservoir properties such as anisotropy, porosity and thickness variations, and boundary effects. The flow equations that are the mathematical basis of the two-dimensional model are presented. It is assumed that gas transport to producing wells in Devonian shale reservoirs occurs through a natural fracture system into which matrix blocks of contrasting physical properties deliver contained gas. That is, the matrix acts as a uniformly distributed gas source in a fracture medium. Gas desorption from pore walls is treated as a uniformly distributed source within the matrix blocks. 24 references.
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft³/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough.
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.
The Effect of Data Scaling on Dual Prices and Sensitivity Analysis in Linear Programs
ERIC Educational Resources Information Center
Adlakha, V. G.; Vemuganti, R. R.
2007-01-01
In many practical situations scaling the data is necessary to solve linear programs. This note explores the relationships in translating the sensitivity analysis between the original and the scaled problems.
Large-scale dynamics of the mesosphere and lower thermosphere: an analysis
Wirosoetisno, Djoko
An analysis of the large-scale dynamics of the mesosphere and lower thermosphere using the extended Canadian Middle Atmosphere Model.
Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.
ERIC Educational Resources Information Center
Faulstich, Michael E.; And Others
1985-01-01
Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)
Revision and Factor Analysis of a Death Anxiety Scale.
ERIC Educational Resources Information Center
Thorson, James A.; Powell, F. C.
Earlier research on death anxiety using the 34-item scale developed by Nehrke-Templer-Boyar (NTB) indicated that females and younger persons have significantly higher death anxiety. To simplify a death anxiety scale for use with different age groups, and to determine the conceptual factors actually measured by the scale, a revised 25-item…
Numerical Simulation and Scaling Analysis of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2011-11-01
Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of such stress is very short, which helps explain why many cells survive the cell printing process. These studies also reveal that the cell membrane can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
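The scaling argument can be made concrete with order-of-magnitude estimates (representative inkjet-scale values assumed purely for illustration; these are not the paper's parameters):

```python
# Dimensionless numbers and stress/time scales for a droplet impact.
rho = 1000.0    # droplet density, kg/m^3 (water-like, assumed)
mu = 1.0e-3     # droplet viscosity, Pa*s (assumed)
sigma = 0.07    # surface tension, N/m (assumed)
v = 5.0         # impact velocity, m/s (typical inkjet order)
d = 50e-6       # droplet diameter, m (comparable to a cell)

We = rho * v**2 * d / sigma   # Weber: inertia vs surface tension
Re = rho * v * d / mu         # Reynolds: inertia vs viscosity
stress = rho * v**2           # inertial stress scale, Pa (~25 kPa)
duration = d / v              # impact time scale, s (~10 microseconds)
```

The stress scale is substantial, but the impact time scale d/v is on the order of microseconds, which is consistent with the abstract's explanation of why many cells survive printing.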
Large-scale fault kinematic analysis in Noctis Labyrinthus (Mars)
NASA Astrophysics Data System (ADS)
Bistacchi, Nicola; Massironi, Matteo; Baggio, Paolo
2004-01-01
Noctis Labyrinthus (Mars) is characterized by many tectonic features, which represent brittle deformation of the crust. This tectonic setting was analysed by remote sensing of the Viking Mars Digital Image Model (MDIM) mosaic and Mars Orbiter Camera (MOC) global mosaic, in order to identify deformational events. The main features are normal faults producing horst-graben structures, strike-slip faults, and related en-echelon and pull-apart basins. Using the criterion of cross-cutting relationships and analysis of secondary structures, to infer sense of movement of faults, two deformational phases were identified in the Noctis Labyrinthus area. The first, D1, located mainly in the northern part, is characterized by transtensional faults (Noachian). The second, D2, recorded in the southern part of the Noctis Labyrinthus by an orthorhombic extensional fault pattern along NNE and WNW trends, is related to the Valles Marineris formation (Late Noachian-Early Hesperian). A third tectonic event, D3, represented by the partly known dextral NW strike-slip faults cross-cutting the Valles Marineris Canyon System (Late Hesperian?-Amazonian?), was not found in Noctis Labyrinthus at the scale and resolution considered.
MicroScale Thermophoresis: Interaction analysis and beyond
NASA Astrophysics Data System (ADS)
Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis
2014-12-01
MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During an MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications, such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
ERIC Educational Resources Information Center
Redfield, Joel
1978-01-01
TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
Systematic analysis of scaling properties in deep inelastic scattering
Beuf, Guillaume; Peschanski, Robi; Royon, Christophe; Salek, David
2008-10-01
Using the 'quality factor' method, we analyze the scaling properties of deep inelastic processes at the accelerator HERA and fixed target experiments for x ≤ 10⁻². We look for scaling formulas of the form σ^(γ*p)(τ), where τ(L = log Q², Y) is a scaling variable suggested by the asymptotic properties of QCD evolution equations with rapidity Y. We consider four cases: 'fixed coupling', corresponding to the original geometric scaling proposal and motivated by the asymptotic properties of the Balitsky-Kovchegov equation with fixed QCD coupling constant; two versions, 'running coupling I, II', of the scaling suggested by the Balitsky-Kovchegov equation with running coupling; and 'diffusive scaling' suggested by the QCD evolution equation with Pomeron loops. The quality factors, quantifying the phenomenological validity of the candidate scaling variables, are fitted on the total and deeply virtual Compton scattering cross-section data from HERA, and predictions are made for the elastic vector meson and for the diffractive cross sections at fixed small x_P or β. The first three scaling formulas have comparably good quality factors, while the fourth one is disfavored. Adjusting initial conditions gives a significant improvement of the running coupling II scaling.
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization, which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual-mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
Using Markov Chain Analysis to Study Dynamic Behaviour in Large-Scale Grid Systems
Piece-wise homogeneous Discrete Time Markov chains are used to provide rapid, potentially scalable simulation of large-scale grid systems. In this approach, a Markov chain model of a grid system is first represented in a reduced…
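The basic operation such chain-based simulation relies on, evolving a state distribution under a transition matrix, can be sketched as follows (a toy 3-state chain, not the paper's grid model):

```python
import numpy as np

# Evolve the state distribution of a discrete-time Markov chain,
# p_{t+1} = p_t P, until it settles to the stationary distribution.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])   # row-stochastic transition matrix
p = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty

for _ in range(200):
    p = p @ P

# p now satisfies p = p P to high accuracy (stationary distribution)
```

For an irreducible, aperiodic chain like this one, repeated application of P converges geometrically, which is what makes chain models a cheap surrogate for simulating long-run system behaviour.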
Fine-Scale Analysis Reveals Cryptic Landscape Genetic Structure in Desert Tortoises
Latch, Emily K.
The fine-scale effect of the landscape was examined in the desert tortoise (Gopherus agassizii), using 859 tortoises genotyped at 16 microsatellite loci. Desert tortoises exhibit weak genetic structure at a local scale, and we identified two…
Regional Scale Analysis of Extremes in an SRM Geoengineering Simulation
NASA Astrophysics Data System (ADS)
Muthyala, R.; Bala, G.
2014-12-01
Only a few studies in the past have investigated the statistics of extreme events under geoengineering. In this study, a global climate model is used to investigate the impact of solar radiation management on extreme precipitation events on regional scale. Solar constant was reduced by 2.25% to counteract the global mean surface temperature change caused by a doubling of CO2 (2XCO2) from its preindustrial control value. Using daily precipitation rates, extreme events are defined as those which exceed 99.9th percentile precipitation threshold. Extremes are substantially reduced in geoengineering simulation: the magnitude of change is much smaller than those that occur in a simulation with doubled CO2. Regional analysis over 22 Giorgi land regions is also performed. Doubling of CO2 leads to an increase in intensity of extreme (99.9th percentile) precipitation by 17.7% on global-mean basis with maximum increase in intensity over South Asian region by 37%. In the geoengineering simulation, there is a global-mean reduction in intensity of 3.8%, with a maximum reduction over Tropical Ocean by 8.9%. Further, we find that the doubled CO2 simulation shows an increase in the frequency of extremes (>50 mm/day) by 50-200% with a global mean increase of 80%. In contrast, in geo-engineering climate there is a decrease in frequency of extreme events by 20% globally with a larger decrease over Tropical Ocean by 30%. In both the climate states (2XCO2 and geo-engineering) change in "extremes" is always greater than change in "means" over large domains. We conclude that changes in precipitation extremes are larger in 2XCO2 scenario compared to preindustrial climate while extremes decline slightly in the geoengineered climate. We are also investigating the changes in extreme statistics for daily maximum and minimum temperature, evapotranspiration and vegetation productivity. Results will be presented at the meeting.
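The 99.9th-percentile thresholding used to define extreme events can be sketched in Python (synthetic gamma-distributed daily rates, purely illustrative; not model output):

```python
import numpy as np

# Define extremes as daily precipitation above the 99.9th percentile
# of a control run, then compare their frequency in a perturbed run.
rng = np.random.default_rng(42)
control = rng.gamma(shape=0.5, scale=4.0, size=100_000)    # mm/day
perturbed = rng.gamma(shape=0.5, scale=4.8, size=100_000)  # heavier tail

threshold = np.percentile(control, 99.9)
freq_control = (control > threshold).mean()      # ~0.001 by construction
freq_perturbed = (perturbed > threshold).mean()
change_pct = 100 * (freq_perturbed / freq_control - 1)
```

Because the threshold is fixed from the control climate, even a modest shift in the tail of the perturbed distribution produces a large relative change in extreme-event frequency, the same effect the abstract reports for 2XCO2 versus the geoengineered climate.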
GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY
Lee, S
2008-05-28
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other is for a negative temperature gradient between the roof inner and top grout surface temperatures, intended as a potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by a Bénard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed.
A scaling approach for high-frequency vibration analysis of line-coupled plates
NASA Astrophysics Data System (ADS)
Li, Xianhui
2013-09-01
A scaling approach is proposed for the vibration analysis of line-coupled plates at high frequencies. It extends earlier scaling approaches for an isolated system to coupled systems. Based on the power flow balance in the plate assembly, a general scaling law is derived and a scaled model is built accordingly. Due to the dynamic similitude in a statistical sense, the scaled model is able to simulate the dynamics of the original system at high frequencies. Numerical examples validate the efficiency of the approach and suggest the application of finite element methods in the high-frequency vibration analysis.
Analysis of Hydrogen Depletion Using a Scaled Passive Autocatalytic Recombiner
Blanchat, T.K.; Malliakos, A.
1998-10-28
Hydrogen depletion tests of a scaled passive autocatalytic recombiner (PAR) were performed in the Surtsey test vessel at Sandia National Laboratories (SNL). The experiments were used to determine the hydrogen depletion rate of a PAR in the presence of steam and also to evaluate the effect of scale (number of cartridges) on the PAR performance at both low and high hydrogen concentrations.
Confirmatory Factor Analysis of the Geriatric Depression Scale
ERIC Educational Resources Information Center
Adams, Kathryn Betts; Matto, Holly C.; Sanders, Sara
2004-01-01
Purpose: The Geriatric Depression Scale (GDS) is widely used in clinical and research settings to screen older adults for depressive symptoms. Although several exploratory factor analytic structures have been proposed for the scale, no independent confirmation has been made available that would enable investigators to confidently identify scores…
Large-Scale Cancer Genomics Data Analysis - David Haussler, TCGA Scientific Symposium 2011
QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web
Caverlee, James
The QA-Pagelet is presented as a fundamental data preparation technique for large-scale data analysis of the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering…
Scaling parameters for PFBC cyclone separator system analysis
Gil, A.; Romeo, L.M.; Cortes, C.
1999-07-01
Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have advanced the understanding of fluidized bed hydrodynamics. In order for the results of the research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but until now little attention has been paid to cyclone separator systems. CIRCE has worked on the development of scaling parameters that enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and an experimental comparison of a cyclone separator system with a cold flow model constructed on the basis of those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room-temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost-effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurement of flow patterns in cyclones and diplegs, will allow the performance of the full-scale ash removal system to be characterized, safe limits of operation to be established, and design improvements to be tested.
Time and scale Hurst exponent analysis for financial markets
NASA Astrophysics Data System (ADS)
Matos, José A. O.; Gama, Sílvio M. A.; Ruskin, Heather J.; Sharkasi, Adel Al; Crane, Martin
2008-06-01
We use a new method of studying the Hurst exponent with time and scale dependency. This new approach allows us to recover the major events affecting worldwide markets (such as the September 11th terrorist attack) and analyze the way those effects propagate through the different scales. The time-scale dependence of the referred measures demonstrates the relevance of entropy measures in distinguishing the several characteristics of market indices: “effects” include early awareness, patterns of evolution as well as comparative behaviour distinctions in emergent/established markets.
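A rolling rescaled-range (R/S) estimator conveys the flavor of the scale-dependent Hurst analysis described in this abstract. The window sizes and the synthetic white-noise input below are illustrative assumptions, not the authors' method or data:

```python
import numpy as np

def hurst_rs(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in scales:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        # mean-adjusted cumulative deviations within each chunk
        z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = z.max(axis=1) - z.min(axis=1)   # range of the cumulative deviate
        s = chunks.std(axis=1)              # within-chunk standard deviation
        valid = s > 0
        if valid.any():
            log_n.append(np.log(n))
            log_rs.append(np.log((r[valid] / s[valid]).mean()))
    # slope of log(R/S) against log(n) estimates the Hurst exponent
    h, _ = np.polyfit(log_n, log_rs, 1)
    return h

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)   # uncorrelated increments: H should be near 0.5
h = hurst_rs(noise)
```

Applying the estimator inside a sliding window, as the paper does, yields H as a function of both time and scale.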
Analysis and measurement about atmospheric turbulent outer scale
NASA Astrophysics Data System (ADS)
Zong, Fei; Hu, Yuehong; Chang, Jinyong; Feng, Shuanglian; Qiang, Xiwen
2015-10-01
The objective of this work is to introduce an optical method of measuring the atmospheric turbulent outer scale. The method utilizes the ratio between the correlation functions of the wandering in two perpendicular planes. A simple relationship for obtaining the outer scale from the measured correlation functions is established for a particular model of turbulence, the modified von Karman model. Based on this result, an implementation plan for measuring the atmospheric turbulent outer scale with the optical method is designed, and the layout of the experiment system is given. By simplifying the model, the calculation program for the measurement is also written.
Data mining techniques for large-scale gene expression analysis
Palmer, Nathan Patrick
2011-01-01
Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...
Analysis of Large-Scale Asynchronous Switched Dynamical Systems
Lee, Kooktae
2015-08-13
This dissertation addresses research problems related to the switched system as well as its application to large-scale asynchronous dynamical systems. For decades, this switched system has been widely studied in depth, ...
Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer
NASA Astrophysics Data System (ADS)
Schnieders, Jana; Garbe, Christoph
2014-05-01
The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these types of measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic spatial scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2], turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution, a new model to estimate the heat flux is applied which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed.
In this contribution, we will quantify the effect of small scale processes on interfacial transport and relate it to gas transfer. References [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys. , 13:11073-11087, 2013. [2] J Schnieders, C. S. Garbe, W.L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.
An item response theory analysis of the Olweus Bullying scale.
Breivik, Kyrre; Olweus, Dan
2014-12-01
In the present article, we used IRT (graded response) modeling as a useful technology for a detailed and refined study of the psychometric properties of the various items of the Olweus Bullying scale and the scale itself. The sample consisted of a very large number of Norwegian 4th-10th grade students (n = 48 926). The IRT analyses revealed that the scale was essentially unidimensional and had excellent reliability in the upper ranges of the latent bullying tendency trait, as intended and desired. Gender DIF effects were identified with regard to girls' use of indirect bullying by social exclusion and boys' use of physical bullying by hitting and kicking, but these effects were small and worked in opposite directions, having negligible effects at the scale level. Also, scale scores adjusted for DIF effects differed very little from non-adjusted scores. In conclusion, the empirical data were well characterized by the chosen IRT model and the Olweus Bullying scale was considered well suited for the conduct of fair and reliable comparisons involving different gender-age groups. Aggr. Behav. 9999:XX-XX, 2014. © 2014 Wiley Periodicals, Inc. PMID:25460720
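The graded response model used in that analysis assigns each ordered response category a probability formed from differences of cumulative logistic curves. A minimal sketch, with invented parameter values rather than the calibrated Olweus-scale ones:

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima graded-response model: probability of each ordered response
    category for latent trait theta, discrimination a, and threshold vector b."""
    b = np.asarray(b, dtype=float)
    # cumulative probabilities P(X >= k) for k = 1..K-1
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    upper = np.concatenate(([1.0], cum))   # P(X >= 0) = 1
    lower = np.concatenate((cum, [0.0]))   # P(X >= K) = 0
    return upper - lower                   # per-category probabilities

# illustrative item: discrimination 1.7, three thresholds -> four categories
p = grm_category_probs(theta=0.5, a=1.7, b=[-1.0, 0.0, 1.5])
```

The category probabilities always sum to one, and fitting a and b per item is what reveals difficulty ordering and DIF.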
Asset-based poverty analysis in rural Bangladesh: A comparison of principal component analysis and
Mound, Jon
Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de
2014-06-10
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos, and even their crossing during the accretion phase for stars with M ≳ 10 M⊙, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
Given a function, u(x), represented by its cell averages on some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
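The decomposition can be sketched for the simplest case: a 1-D dyadic grid with piecewise-constant prediction. Harten's actual scheme uses higher-order interpolation and a truncation tolerance, both of which this toy omits:

```python
import numpy as np

def mr_decompose(cell_avg, levels):
    """Harten-style multi-resolution decomposition of 1-D cell averages on
    nested dyadic grids: returns coarsest averages plus per-level details."""
    details = []
    u = np.asarray(cell_avg, dtype=float)
    for _ in range(levels):
        coarse = 0.5 * (u[0::2] + u[1::2])   # exact coarse-cell averages
        predicted = np.repeat(coarse, 2)     # piecewise-constant prediction
        details.append(u - predicted)        # what the coarse grid misses
        u = coarse
    return u, details

def mr_reconstruct(coarse, details):
    """Invert the decomposition exactly by re-adding details level by level."""
    u = coarse
    for d in reversed(details):
        u = np.repeat(u, 2) + d
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
ubar = np.sin(2.0 * np.pi * x)               # smooth data: details shrink with refinement
c, ds = mr_decompose(ubar, 3)
u2 = mr_reconstruct(c, ds)
```

In the ENO application, details below a tolerance are dropped and the computation proceeds on the locally coarsest adequate grid.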
Initial length scale estimate for waveguides with some random singular potentials
D. I. Borisov; R. Kh. Karimov; T. F. Sharapov
2015-07-08
In this work we consider three examples of random singular perturbations in multi-dimensional models of waveguides. These perturbations are described by a large potential supported on a set of small measure, by a compactly supported fast-oscillating potential, and by a delta-potential. In all cases we prove an initial length scale estimate.
Refining a self-assessment of informatics competency scale using Mokken scaling analysis.
Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne
2015-11-01
Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions. PMID:26652630
ERIC Educational Resources Information Center
Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.
2010-01-01
The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…
Kim, K. S.; Boyer, L. L.; Degelman, L. O.
1985-01-01
In the first part of this study, daylighting levels in an actual classroom are compared to scale model measurements and to computer program predictions. Secondly, the daylighting effects in the building atrium are examined through the studies...
Three-dimensional bridging scale analysis of dynamic fracture.
Liu, Wing Kam Northwestern University, Evanston, IL); Park, Harold S.; Karpov, Eduard G.; Klein, Patrick A.
2004-12-01
This paper presents a three-dimensional generalization of the bridging scale concurrent method, a finite temperature multiple scale method that couples molecular dynamics (MD) to finite elements (FE). The generalizations include the numerical calculation of the boundary condition acting upon the reduced MD region, as such boundary conditions are analytically intractable for realistic three-dimensional crystal structures. The formulation retains key advantages emphasized in previous papers, particularly the compact size of the resulting time history kernel matrix. The coupled FE and reduced MD equations of motion are used to analyze dynamic fracture in a three-dimensional FCC lattice, where interesting physical phenomena such as crack branching are seen. The multiple scale results are further compared to benchmark MD simulations for verification purposes.
Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling
Wu, Hua; Li, Zhao-Liang
2009-01-01
With the development of quantitative remote sensing, scale issues have attracted more and more attention from scientists. Research now suffers from a severe scale discrepancy between data sources and the models used. Consequently, both data interpretation and model application become difficult due to these scale issues. Therefore, effectively scaling remotely sensed information across different scales has already become one of the most important research focuses of remote sensing. The aim of this paper is to demonstrate scale issues from the points of view of analysis, processing and modeling, and to provide technical assistance when facing scale issues in remote sensing. The definition of scale and relevant terminologies are given in the first part of this paper. Then, the main causes of scale effects and the scaling effects on measurements, retrieval models and products are reviewed and discussed. Ways to describe the scale threshold and scale domain are briefly discussed. Finally, the general scaling methods, in particular up-scaling methods, are compared and summarized in detail. PMID:22573986
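Scale effects of the kind reviewed here arise because nonlinear retrieval models do not commute with spatial aggregation. A minimal numpy sketch, using an assumed Stefan-Boltzmann-like model and synthetic temperatures rather than real imagery:

```python
import numpy as np

def upscale(field, factor):
    """Aggregate a 2-D field to coarser resolution by block (pixel) averaging."""
    h, w = field.shape
    assert h % factor == 0 and w % factor == 0
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
temp = 280.0 + 15.0 * rng.random((64, 64))   # synthetic surface temperatures, K

# Scale effect: applying a nonlinear model (T**4) before aggregation differs
# from applying it after aggregation (Jensen's inequality for convex models).
sigma = 5.67e-8
radiance_fine = upscale(sigma * temp**4, 8)    # model first, then up-scale
radiance_coarse = sigma * upscale(temp, 8)**4  # up-scale first, then model
bias = radiance_fine - radiance_coarse         # non-negative, and nonzero where blocks vary
```

The `bias` field is exactly the kind of product discrepancy that up-scaling methods aim to correct.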
Scaling Analysis of Repository Heat Load for Reduced Dimensionality Models
Itamura, Michael T.; Ho, Clifford K.
1998-06-04
The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.
NASA Astrophysics Data System (ADS)
Shahmansouri, M.; Mamun, A. A.
2015-07-01
The effects of strong electrostatic interaction among highly charged dust grains on the multi-dimensional instability of dust-acoustic (DA) solitary waves in a magnetized strongly coupled dusty plasma have been investigated by the small-k perturbation expansion method. We found that a Zakharov-Kuznetsov equation governs the evolution of obliquely propagating small amplitude DA solitary waves in such a strongly coupled dusty plasma. The parametric regimes for which the obliquely propagating DA solitary waves become unstable are identified. The basic properties, viz., amplitude, width, instability criterion, and growth rate, of these obliquely propagating DA solitary structures are found to be significantly modified by the effects of different physical strongly coupled dusty plasma parameters. The implications of our results in some space/astrophysical plasmas and some future laboratory experiments are briefly discussed.
Introducing Scale Analysis by Way of a Pendulum
ERIC Educational Resources Information Center
Lira, Ignacio
2007-01-01
Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…
Mental Models of Text and Film: A Multidimensional Scaling Analysis.
ERIC Educational Resources Information Center
Rowell, Jack A.; Moss, Peter D.
1986-01-01
Reports results of experiment to determine whether mental models are constructed of interrelationships and cross-relationships of character attributions drawn in themes of novels and films. The study used "Animal Farm" in print and cartoon forms. Results demonstrated validity of multidimensional scaling for representing both media. Proposes use of…
Translation Analysis at the Genome Scale by Ribosome Profiling.
Baudin-Baillieu, Agnès; Hatin, Isabelle; Legendre, Rachel; Namy, Olivier
2016-01-01
Ribosome profiling is an emerging approach that uses deep sequencing of the mRNA fragments protected by the ribosome to study protein synthesis at the genome scale. This approach provides new insights into gene regulation at the translational level. In this review we describe the protocol for preparing polysomes and extracting ribosome-protected fragments prior to deep sequencing. PMID:26483019
The Contraceptive Self-Efficacy Scale: Analysis in Four Samples.
ERIC Educational Resources Information Center
Levinson, Ruth Andrea; Wan, Choi K.; Beamer, LuAnn J.
1998-01-01
The relationship of the Contraceptive Self-Efficacy Scale to contraceptive behavior was explored in four female samples: (1) 258 California adolescents, (2) 259 Chicago (Illinois) adolescents, (3) 231 Montreal (Canada) high school students, and (4) 148 college students. Results are discussed in terms of use in research and clinical settings. (SLD)
Analysis of Small-Scale Hydraulic Actuation Jicheng Xia
Durfee, William K.
Small-scale hydraulic systems can attain high levels of force and power while remaining relatively lightweight compared to an equivalent electromechanical system comprised of off-the-shelf components. Calculation results revealed that high operating pressures are needed for small-scale hydraulics to be lighter than the equivalent electromechanical system.
Analysis of Small-Scale Hydraulic Systems Jicheng Xia
Durfee, William K.
System power density was analyzed with simple physics models and compared to an equivalent electromechanical system. High operating pressures are needed for small-scale hydraulic power systems to attain these levels of force and power while being lighter than the equivalent electromechanical system.
A Reliability Analysis of Goal Attainment Scaling (GAS) Weights
ERIC Educational Resources Information Center
Marson, Stephen M.; Wei, Guo; Wasserman, Deborah
2009-01-01
Goal attainment scaling (GAS) has been considered to be one of the most versatile and appealing evaluation protocols available for human services. Aspects of the protocol that make the method so appealing to practitioners--that is, collaboratively working with individual clients to identify and assign weights to goals they will work to…
A Factor Analysis of the Research Self-Efficacy Scale.
ERIC Educational Resources Information Center
Bieschke, Kathleen J.; And Others
Counseling professionals' and counseling psychology students' interest in performing research seems to be waning. Identifying the impediments to graduate students' interest and participation in research is important if systematic efforts to engage them in research are to succeed. The Research Self-Efficacy Scale (RSES) was designed to measure…
THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS
Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...
Large-scale computations in analysis of structures
McCallen, D.B.; Goudreau, G.L.
1993-09-01
Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancing nonlinear analysis techniques and characterizing nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.
Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis
Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.
2015-01-01
Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are on a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of diffusing biomolecules of interest or photochemical properties of the fluorescent tag, such as mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to the best of our knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading, and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
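A common normalization pipeline of the kind analyzed here divides the bleach-ROI signal by a whole-cell reference (photofading correction) and then rescales by the pre-bleach mean. The synthetic curve and mobile-fraction estimate below are illustrative assumptions, not the paper's data or its exact workflow:

```python
import numpy as np

def normalize_frap(roi, ref, n_pre):
    """Double normalization of a FRAP curve: divide the bleach-ROI intensity by
    a whole-cell reference to correct photofading, then rescale so the
    pre-bleach mean equals 1."""
    roi = np.asarray(roi, dtype=float)
    ref = np.asarray(ref, dtype=float)
    corrected = roi / ref                         # photofading correction
    return corrected / corrected[:n_pre].mean()   # pre-bleach normalization

# synthetic example (invented numbers): 5 pre-bleach frames, exponential recovery
t = np.arange(50, dtype=float)
fade = np.exp(-0.002 * t)                                          # acquisition photofading
recovery = np.where(t < 5, 1.0, 0.8 - 0.4 * np.exp(-0.2 * (t - 5)))
roi = recovery * fade                                              # what the camera records
ref = fade                                                         # whole-cell reference signal

norm = normalize_frap(roi, ref, n_pre=5)
# mobile fraction from the plateau relative to the post-bleach dip
mobile_fraction = (norm[-1] - norm[5]) / (1.0 - norm[5])
```

Note the order matters: fading must be divided out before the pre-bleach rescaling, which is exactly the sequencing issue the paper examines.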
Co-designing the Failure Analysis and Monitoring of Large-Scale Systems
Minnesota, University of
To perform failure analysis in such a system, we need to co-design the monitoring system with the failure analysis system. Unlike existing approaches, the tools themselves need to go beyond simple statistical analysis of failure events in isolation to serve as an effective…
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting the behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
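FBA itself reduces to a linear program: maximize an objective flux subject to steady-state stoichiometry S·v = 0 and flux bounds (EBA then adds thermodynamic constraints on top). A toy three-reaction sketch, assuming scipy's `linprog`; the network is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1-R3)
# R1: uptake -> A,  R2: A -> B,  R3: B -> export (the objective flux)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 100), (0, 100)]   # uptake capacity limits the whole pathway
c = np.array([0.0, 0.0, -1.0])           # linprog minimizes, so negate the objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
fluxes = res.x   # optimal steady-state flux distribution
```

With the uptake bound at 10, mass balance forces every reaction in the chain to carry flux 10 at the optimum; EBA would additionally reject any flux pattern implying a thermodynamically infeasible loop.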
Williams, Dean; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Steed, Chad; Krishnan, Harinarayan; Silva, Claudio; Chaudhary, Aashish; Bremer, Peer-Timo; Pugmire, David; Bethel, E. Wes; Childs, Hank; Prabhat, Mr.; Geveci, Berk; Bauer, Andrew; Pletzer, Alexander; Poco, Jorge; Ellqvist, Tommy; Santos, Emanuele; Potter, Gerald; Smith, Brian; Maxwell, Thomas; Kindig, David; Koop, David
2013-05-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…
NASA Astrophysics Data System (ADS)
Ratto, Gustavo; Maronna, Ricardo; Berri, Guillermo
2010-12-01
Knowledge of frequency wind patterns is very important for air pollution modelling, especially in a city like La Plata (approximately 850,000 inhabitants) with high vehicular and industrial activities and no air monitoring network. An hourly wind analysis was carried out on data from two local weather stations (points A and J). An initial result was that, in spite of differences in data quality, the local weather stations' observations were consistent with local and regional National Meteorological Service (NMS) monthly based observations. Two non-conventional multivariate statistical methods were employed to further analyse hourly data at points A and J. Hierarchical clustering proved a good summarising tool for visualising prevailing hourly winds. Resultant vectors emerging from the clustering process showed good similarity between sites and seasons; this allowed a further visualisation of the average diurnal wind development. Multidimensional scaling (MDS) permitted a pairwise comparison of a large number of hourly wind roses. These wind roses were more similar to each other in colder seasons and at site A (the one that is closer to the river) than in warmer seasons and at site J. Most of the observed variations regarding seasons and sites revealed by cluster and MDS analysis are explained in terms of the sea-land breeze circulations. The methodology applied proved useful for simplifying the analysis of high-dimensional data with numerous observations.
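Classical (Torgerson) MDS, one standard variant of the technique applied to the wind roses here, can be sketched in a few lines of numpy. The toy 8-sector histograms and the L2 distance are illustrative assumptions, not the paper's data or its exact MDS flavour:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling: embed n points in k
    dimensions from a symmetric n-by-n matrix of pairwise distances."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered squared distances
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]              # keep the k largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# toy 'wind roses': frequency histograms over 8 direction sectors (invented)
rng = np.random.default_rng(2)
roses = rng.random((6, 8))
roses /= roses.sum(axis=1, keepdims=True)
d = np.linalg.norm(roses[:, None, :] - roses[None, :, :], axis=2)  # pairwise L2
coords = classical_mds(d, k=2)   # 2-D map: nearby points = similar wind roses
```

Plotting `coords` gives exactly the kind of similarity map the paper uses to compare sites and seasons at a glance.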
A two-scale finite element formulation for the dynamic analysis of heterogeneous materials
Ionita, Axinte
2008-01-01
In the analysis of heterogeneous materials using a two-scale Finite Element Method (FEM) the usual assumption is that the Representative Volume Element (RVE) of the micro-scale is much smaller than the finite element discretization of the macro-scale. However there are situations in which the RVE becomes comparable with, or even bigger than the finite element. These situations are considered in this article from the perspective of a two-scale FEM dynamic analysis. Using the principle of virtual power, new equations for the fluctuating fields are developed in terms of velocities rather than displacements. To allow more flexibility in the analysis, a scaling deformation tensor is introduced together with a procedure for its determination. Numerical examples using the new approach are presented.
ERIC Educational Resources Information Center
Viney, Linda L.; Caputi, Peter
2005-01-01
Content analysis scales apply rigorous measurement to verbal communications and make possible the quantification of text in counseling research. The limitations of the Origin and Pawn Scales (M. T. Westbrook & L. L. Viney, 1980), the Positive Affect Scale (M. T. Westbrook, 1976), the Content Analysis Scales of Psychosocial Maturity (CASPM; L. L.…
An Analysis Approach to Large-scale Vehicular Network Simulations
Perumalla, Kalyan S; Beckerman, Martin
2007-01-01
Advances in parallel simulation capabilities are now enabling the possibility of simulating multiple scenarios of large problem configurations. In emergency management applications, for example, it is now conceivable to consider simulating phenomena in large (city- or state-scale) vehicular networks. However, an informed understanding of simulation results is needed for real-time decision support tools that make use of a number of simulation runs. Of special interest are insights into trade-offs between accuracy and confidence bounds of simulation results, such as in the quality of predicted evacuation time in emergencies. In some of our emergency management projects, we are exploring approaches that not only aid in making statistically significant interpretations of simulation results but also provide a basis for presenting the inherent qualitative properties of the results to the decision makers. We provide experimental results that demonstrate the possibility of applying our approach to large-scale vehicular network simulations for emergency planning and management.
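One simple way to attach confidence bounds to a predicted evacuation time across replicated runs is a normal-approximation interval on the mean. This sketch assumes i.i.d. replications and invented numbers, not the authors' simulator output:

```python
import numpy as np

def mean_ci(samples, z=1.96):
    """95% normal-approximation confidence interval for the mean of
    replicated simulation outputs (e.g., predicted evacuation times)."""
    samples = np.asarray(samples, dtype=float)
    m = samples.mean()
    half = z * samples.std(ddof=1) / np.sqrt(len(samples))
    return m - half, m + half

rng = np.random.default_rng(3)
evac_minutes = 90.0 + 10.0 * rng.standard_normal(30)  # 30 illustrative replications
lo, hi = mean_ci(evac_minutes)
```

The interval width shrinks like 1/sqrt(runs), which makes the accuracy-versus-compute trade-off mentioned above explicit: quadrupling the number of runs halves the confidence bound.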
Bootstrapped discrete scale invariance analysis of geomagnetic dipole intensity
NASA Astrophysics Data System (ADS)
Jonkers, Art R. T.
2007-05-01
The technique of bootstrapped discrete scale invariance allows multiple time-series of different observables to be normalized in terms of observed and predicted characteristic timescales. A case study is presented using the SINT2000 time-series of virtual axial dipole moment, which spans the past 2 Myr. It is shown that this sequence not only bears a clear signature of a preferred timescale of about 55.6 ka, but additionally predicts similar features (of shorter and longer duration) that are actually observed on the timescales of historical secular variation and dipole reversals, respectively. In turn, the latter two empirical sources both predict the characteristic timescale found in the dipole intensity sequence. These communal scaling characteristics suggest that a single underlying process could be driving dynamo fluctuations across all three observed timescales, from years to millions of years.
Analysis plan for 1985 large-scale tests. Technical report
McMullan, F.W.
1983-01-01
The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.
Wavelet multiscale analysis for Hedge Funds: Scaling and strategies
NASA Astrophysics Data System (ADS)
Conlon, T.; Crane, M.; Ruskin, H. J.
2008-09-01
The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds due to the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated using monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult to price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of scaling properties on the risk profile of a portfolio made up of Hedge Funds are studied using correlation matrices calculated over different time horizons.
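The scale-by-scale correlation idea can be sketched with a simplified Haar-based stand-in for the MODWT; this is an illustrative approximation, not the paper's implementation, and the series and scales below are toy data.

```python
import numpy as np

def haar_details(x, scale):
    """Haar-like wavelet detail coefficients at the given scale (in samples):
    difference of adjacent block means, evaluated at every shift (undecimated)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 2 * scale + 1
    c = np.cumsum(np.concatenate(([0.0], x)))            # prefix sums
    left = (c[scale:scale + n] - c[:n]) / scale          # mean of x[t : t+scale]
    right = (c[2 * scale:2 * scale + n] - c[scale:scale + n]) / scale
    return right - left

def scale_correlation(x, y, scales):
    """Pearson correlation between the wavelet details of x and y, per scale."""
    return {s: float(np.corrcoef(haar_details(x, s), haar_details(y, s))[0, 1])
            for s in scales}

rng = np.random.default_rng(0)
a = rng.standard_normal(512).cumsum()    # toy "index" price path
b = a + 0.1 * rng.standard_normal(512)   # closely related toy "fund" path
corrs = scale_correlation(a, b, scales=[1, 2, 4, 8])
```

The dictionary gives one correlation per scale, mirroring how the paper reports correlation as a function of the time scale examined.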
Two scale analysis applied to low permeability sandstones
NASA Astrophysics Data System (ADS)
Davy, Catherine; Song, Yang; Nguyen Kim, Thang; Adler, Pierre
2015-04-01
Low-permeability materials are often composed of several pore structures of various scales, superposed one on another, and it is often impossible to measure and determine the macroscopic properties in one step. In the low-permeability sandstones that we consider, the pore space is essentially made of micro-cracks between grains. These fissures are two-dimensional structures whose aperture is roughly on the order of one micron. On the grain scale, i.e., on the scale of 1 mm, the fissures form a network. These two structures can be measured using two different tools [1]. The density of the fissure networks is estimated by trace measurements on two-dimensional images provided by classical 2D Scanning Electron Microscopy (SEM) with a pixel size of 2.2 μm. The three-dimensional geometry of the fissures is measured by laboratory X-ray micro-tomography (micro-CT) with a voxel size of 0.6 × 0.6 × 0.6 μm³. The macroscopic permeability is calculated in two steps. On the small scale, the fracture transmissivity is calculated by solving the Stokes equation on several portions of the fissures measured by micro-CT. On the large scale, the density of the fissures is estimated by three different means, based on the number of intersections with scanlines, on the surface density of fissures, and on the number of intersections between fissures per unit surface. These three means show that the network is relatively isotropic, and they provide very close estimates of the density. Then, a general formula derived from systematic numerical computations [2] is used to obtain the macroscopic dimensionless permeability, which is proportional to the fracture transmissivity. The combination of the two previous results yields the dimensional macroscopic permeability, which is found to be in acceptable agreement with the experimental measurements. Some extensions of this preliminary work will be presented as a tentative conclusion.
References: [1] Z. Duan, C. A. Davy, F. Agostini, L. Jeannin, D. Troadec, F. Skoczylas, Hydraulic cut-off and gas recovery potential of sandstones from Tight Gas Reservoirs: a laboratory investigation, International Journal of Rock Mechanics and Mining Science, Vol. 65, pp. 75-85, 2014. [2] P. M. Adler, J.-F. Thovert, V. V. Mourzenko, Fractured Porous Media, Oxford University Press, 2012.
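For context on the small-scale step, an ideal parallel-plate fissure has a transmissivity given by the classical cubic law; this sketch is an order-of-magnitude illustration under that idealization, not the paper's Stokes solution (real rough fissures fall below the parallel-plate bound).

```python
def parallel_plate_transmissivity(aperture_m):
    """Geometric transmissivity a^3/12 of an ideal parallel-plate fracture
    (the 'cubic law'); flow per unit width is q = -(a^3 / (12 mu)) dp/dx."""
    return aperture_m ** 3 / 12.0

# A one-micron aperture, the order quoted for the sandstone micro-cracks:
sigma = parallel_plate_transmissivity(1e-6)   # in m^3, ~8.3e-20
```

The strong cubing of aperture is why small changes in crack opening dominate the macroscopic permeability of such materials.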
Performance analysis of large-scale resource-bound computer systems
Pourranjbar, Alireza
2015-06-29
We present an analysis framework for performance evaluation of large-scale resource-bound (LSRB) computer systems. LSRB systems are those whose resources are continually in demand to serve resource users, who appear in ...
Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii
Not Available
2012-03-01
This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind configuration project that has been referred to as the 'Big Wind' project.
Brief Psychometric Analysis of the Self-Efficacy Teacher Report Scale
ERIC Educational Resources Information Center
Erford, Bradley T.; Duncan, Kelly; Savin-Murphy, Janet
2010-01-01
This study provides preliminary analysis of reliability and validity of scores on the Self-Efficacy Teacher Report Scale, which was designed to assess teacher perceptions of self-efficacy of students aged 8 to 17 years. (Contains 3 tables.)
Supplementary Information: Integrative genome-scale metabolic analysis of Vibrio vulnificus
Supplementary material for an integrative genome-scale metabolic analysis of Vibrio vulnificus, covering the analysis of conserved genes in the Vibrio vulnificus and Vibrio parahaemolyticus genomes and supplementary tables of reactions.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.
2012-01-01
The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…
Large-scale analysis of phylogenetic search behavior
Park, Hyun Jung
2009-05-15
Phylogenetic analysis, the inference of evolutionary trees, is used in all branches of biology. Applications include designing more effective drugs, tracing the transmission of deadly viruses, and guiding conservation and biodiversity efforts. Most...
A Multi-Time-Scale Analysis of Chemical Reaction Networks : II. Stochastic Systems
Ciocan-Fontanine, Ionut
We consider stochastic descriptions of chemical reaction networks in which there are both fast and slow reactions, and for which the time scales are widely separated. We develop a computational algorithm…
A Philosophical Item Analysis of the Right-Wing Authoritarianism Scale.
ERIC Educational Resources Information Center
Eigenberger, Marty
Items of Altemeyer's 1986 version of the "Right-Wing Authoritarianism Scale" (RWA Scale) were analyzed as philosophical propositions in an effort to establish each item's suggestive connotation and denotation. The guiding principle of the analysis was the way in which the statements reflected authoritarianism's defining characteristics of…
Analysis of a compressed thin film bonded to a compliant substrate: the energy scaling law
Robert V.
We study the scaling law of the minimum energy with respect to the physical parameters of the problem, and we prove it by examining several candidate patterns, determining for each the associated scaling law. The one with the best law was a periodic "Miura" pattern.
An analysis of flow-simulation scales and seismic response P. L. Stoffa*
Bangerth, Wolfgang
P. L. Stoffa, M. K. Sen, and R…
Flow simulation and seismic modeling is one of the cornerstones of reliable time-lapse (4D) seismic monitoring. However, the question of which scales arise during production, and whether these scales are resolvable in seismic data, is an open one. The answer impacts…
Multi-scale Complexity Analysis on the Sequence of E. coli Complete Genome
Ren, Kui
Jin Wang, Qidong Zhang
We study the multi-scale density distribution of nucleotides from the complete Escherichia coli genome by applying the newly…
Analysis and algorithms design for the partition of large-scale adaptive mobile wireless networks
Xiao, Bin
Available online 28 February 2007. Abstract: In a large-scale adaptive mobile wireless network, mobile units… the task is then to design efficient algorithms for adaptively assigning mobile units to mobile servers.
MultiScale Wavelet p-Leader based Heart Rate Variability Analysis for
Herwig, Wendt
Heart rate variability (HRV) is used to discriminate CHF patients; HRV is characterized by scaling properties, 1/f spectra, non… Do multiscale wavelet p-leaders have discriminative power for CHF? The analysis uses a cohort of 108 CHF patients (Fujita Health University Hospital, 2000). [Wendt et al., MultiScale p-Leader HRV analysis, EMBC'14, 28 Aug.]
DISCLOSURE: Detecting Botnet Command and Control Servers Through Large-Scale NetFlow Analysis
Kruegel, Christopher
Botnets continue to be a significant problem, and a great deal of research has focused on detecting and mitigating the effects of botnets. Two of the primary factors preventing the development of effective large-scale, wide…
Rasch Rating Scale Analysis of Quality Indicators of Elementary and Secondary School Performance.
ERIC Educational Resources Information Center
Schumacker, Randall E.
Types of quality indicators (QIs) for elementary schools and secondary schools in Texas, the selection of indicators by district superintendents in Texas, and the subsequent rating scale analysis using Rasch measurement procedures were studied. QIs were scaled from 1 to 7, with 1 representing "not important", and 7 representing "very important".…
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-scale Air Pollution Model
Dimov, Ivan
A sensitivity analysis of the contribution of input parameters to the output variability of a large-scale air pollution model. This model simulates the transport of air pollutants and has been developed by Dr. Z. Zlatev and his…
Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists
Michael Q.
The use of arrays to monitor gene expression at a genome-wide scale constitutes a fundamental advance in biology. In particular, the expression pattern of all genes in Saccharomyces cerevisiae can be interrogated using…
Enabling Large-Scale Biomedical Analysis in the Cloud
Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen
2013-01-01
Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. Data at this scale bring serious challenges to storage and computing technologies. Cloud computing is a promising alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make its vast and diverse data meaningful and usable. PMID:24288665
Analysis of World Economic Variables Using Multidimensional Scaling
Machado, J.A. Tenreiro; Mata, Maria Eugénia
2015-01-01
Waves of globalization reflect the historical technical progress and modern economic growth. The dynamics of this process are here approached using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and education tertiary enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare and human development of the world economy from 1977 up to 2012. The polarization dance of countries enlightens the convergence paths, potential warfare and present-day rivalries in the global geopolitical scene. PMID:25811177
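Classical (Torgerson) MDS is one standard variant of the methodology named above; a minimal sketch follows, using a toy distance matrix rather than the paper's economic indicators.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling: embed n points in k
    dimensions from an n x n matrix of pairwise Euclidean distances d."""
    d2 = np.asarray(d, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Recover a 2-D configuration from the distances among four toy "countries"
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(dist, k=2)
```

Because the input distances are exactly Euclidean in 2-D, the embedding reproduces them up to rotation and reflection, which is the sense in which MDS "maps" the joint evolution of indicators.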
Large-scale temporal analysis of computer and information science
NASA Astrophysics Data System (ADS)
Soos, Sandor; Kampis, George; Gulyás, László
2013-09-01
The main aim of the project reported in this paper was twofold. One primary goal was to produce an extensive source of network data for bibliometric analyses of field dynamics in the case of Computer and Information Science (CIS). To this end, we rendered the raw material of the DBLP computer and information science bibliography into a comprehensive collection of dynamic network data, readily available for further statistical analysis. The other goal was to demonstrate the value of our data source by using it to map CIS. An analysis of the evolution of CIS was performed in terms of collaboration (co-authorship) network dynamics. The dynamic network analysis covered roughly three quarters of a century (76 years, from 1936 to date). Network evolution was described both at the macro level and at the meso level (in terms of community characteristics). Results show that the development of CIS followed what appears to be a universal pattern of growing into a "mature" discipline.
ERIC Educational Resources Information Center
Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J.
2015-01-01
This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory…
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental, and engineering research by applying statistical knowledge and established theories to develop simulations, without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show data acquisition to be three times more expensive than the grid itself, largely because 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave hardware for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes for future iterations are recommended to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations, and pumped hydroelectric storage could also be researched on future iterations of the SSSG.
Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales
Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault
2011-05-01
Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode supported cell costs at the 250 MW scale. This analysis will show that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.
TIME-MASS SCALING IN SOIL TEXTURE ANALYSIS
Technology Transfer Automated Retrieval System (TEKTRAN)
Data on texture are used in the majority of inferences about soil functioning and use. The model of fractal fragmentation has attracted attention as a possible source of minimum set of parameters to describe observed particle size distributions. Popular techniques of textural analysis employ the rel...
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
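One common reliability coefficient that such a synthesis might combine is McDonald's omega for a congeneric measurement model; this sketch uses hypothetical factor loadings and error variances, not values from the article.

```python
def composite_reliability(loadings, error_variances):
    """McDonald's omega for a congeneric model with unit factor variance:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_variances))

# Hypothetical four-item scale (loadings and residual variances are made up)
omega = composite_reliability([0.7, 0.8, 0.6, 0.75], [0.51, 0.36, 0.64, 0.4375])
```

Meta-analyzing such coefficients across studies is what the latent variable approach above formalizes, rather than averaging raw alphas.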
Combinatorial Motif Analysis and Hypothesis Generation on a Genomic Scale
Kibler, Dennis F.
Keywords: motif, gene regulation, machine learning. Motivation: Computer-assisted methods are essential for the analysis of biosequences. Gene activity is regulated in part by the binding… extended to less well-understood genes. It is likely, given the number of genes for which no function…
Murray Gibson
2007-04-27
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
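The integer-ratio claim above can be checked numerically, assuming C4 ≈ 261.63 Hz as the reference pitch and just-intonation ratios for the fifth (3/2) and major third (5/4):

```python
C = 261.63                      # C4 in Hz (assumed reference pitch)
G = C * 3 / 2                   # perfect fifth above C, just intonation
E = C * 5 / 4                   # major third above C, just intonation

# Harmonics of a tone are integer multiples of its fundamental frequency
c_harmonics = {round(C * n, 2) for n in range(1, 7)}
g_harmonics = {round(G * n, 2) for n in range(1, 7)}

# C and G share a harmonic because 3*C == 2*G (the 3/2 ratio)
shared = c_harmonics & g_harmonics
```

The shared value near 784.89 Hz (the third harmonic of C and the second of G) is the "common harmonic" the entry describes as the source of consonance.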
NASA Astrophysics Data System (ADS)
Suteanu, Cristian
2015-04-01
Many natural objects and processes have been shown to enjoy scale symmetry, i.e. they are characterized by invariance to change in scale; it is also well-known that for real-world features, scaling aspects are only valid over limited scale intervals. At the same time, many natural patterns also enjoy other symmetry properties. This paper presents an approach to natural patterns based on the coupling of scale symmetry with three other forms of symmetry: translation, reflection, and rotation. The first one is assessed using isopersistence diagrams based on multiscale time series analysis (detrended fluctuation analysis and Haar wavelet analysis), the second evaluates time series temporal irreversibility as a function of scale, while the third one considers the impact of rotation on scaling properties found in data from vector fields. The paper shows that the characterization of the way and the extent to which these three forms of symmetry are coupled to scale symmetry can effectively support the evaluation of strongly variable natural patterns. The methodology is illustrated with a wide range of application examples, including air temperature, wind speed and direction, river discharge, and earthquakes.
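A minimal sketch of detrended fluctuation analysis, one of the multiscale tools named above, is shown here on toy white noise (for which the scaling exponent should come out near 0.5); this is an illustrative simplification, not the paper's pipeline.

```python
import numpy as np

def dfa(x, window_sizes):
    """Detrended fluctuation analysis: RMS fluctuation F(n) of the integrated
    series around per-window linear trends, for each window size n."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    fluct = []
    for n in window_sizes:
        m = len(y) // n
        t = np.arange(n)
        f2 = []
        for i in range(m):
            seg = y[i * n:(i + 1) * n]
            coef = np.polyfit(t, seg, 1)                 # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)
sizes = np.array([8, 16, 32, 64, 128])
f = dfa(noise, sizes)
alpha = np.polyfit(np.log(sizes), np.log(f), 1)[0]   # scaling exponent
```

Plotting log F(n) against log n and reading off the slope over a limited scale interval is exactly the kind of scale-restricted invariance the abstract emphasizes.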
Check list and zoogeographic analysis of the scale insect fauna (Hemiptera: Coccomorpha) of Greece.
Pellizzari, Giuseppina; Chadzidimitriou, Evangelia; Milonas, Panagiotis; Stathas, George J; Kozár, Ferenc
2015-01-01
This paper presents an updated checklist of the Greek scale insect fauna and the results of the first zoogeographic analysis of that fauna. According to the latest data, the scale insect fauna of the whole Greek territory includes 207 species, of which 187 species are recorded from mainland Greece and the minor islands, whereas only 87 species are known from Crete. The most species-rich families are the Diaspididae (86 species), followed by the Coccidae (35 species) and the Pseudococcidae (34 species). The results of a zoogeographic analysis of the scale insect fauna of mainland Greece and Crete are also presented. Five species, four from mainland Greece and one from Crete, are considered endemic. A comparison with the scale insect fauna of other countries is provided. PMID:26623845
Molecular scale analysis of dry sliding copper asperities
NASA Astrophysics Data System (ADS)
Vadgama, Bhavin N.; Jackson, Robert L.; Harris, Daniel K.
2015-04-01
A fundamental characterization of friction requires an accurate understanding of how the surfaces in contact interact at the nano or atomic scales. In this work, molecular dynamics simulations are used to study friction and deformation in the dry sliding interaction of two hemispherical asperities. The material simulated is copper, and the atomic interactions are defined by the embedded atom method potential. The effects of interference δ, relative sliding velocity v, asperity size R, lattice orientation θ, and temperature control on the friction characteristics are investigated. Extensive plastic deformation and material transfer between the asperities were observed. The sliding process was dominated by adhesion and resulted in high effective friction coefficient values. The friction force and the effective friction coefficient increased with interference and asperity size but showed no significant change with an increase in sliding velocity or with temperature control. The friction characteristics varied strongly with lattice orientation, and an average effective friction coefficient was calculated that compared quantitatively with existing measurements.
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
Construct Validation of the Translated Version of the Work-Family Conflict Scale for Use in Korea
ERIC Educational Resources Information Center
Lim, Doo Hun; Morris, Michael Lane; McMillan, Heather S.
2011-01-01
Recently, the stress of work-family conflict has been a critical workplace issue for Asian countries, especially within those cultures experiencing rapid economic development. Our research purpose is to translate and establish construct validity of a Korean-language version of the Multi-Dimensional Work-Family Conflict (WFC) scale used in the U.S.…
Large-Scale Identification and Analysis of Suppressive Drug Interactions
Cokol, Murat; Weinstein, Zohar B.; Yilancioglu, Kaan; Tasan, Murat; Doak, Allison; Cansever, Dilay; Mutlu, Beste; Li, Siyang; Rodriguez-Esteban, Raul; Akhmedov, Murodzhon; Guvenek, Aysegul; Cokol, Melike; Cetiner, Selim; Giaever, Guri; Iossifov, Ivan; Nislow, Corey; Shoichet, Brian; Roth, Frederick P.
2014-01-01
One drug may suppress the effects of another. Although knowledge of drug suppression is vital to avoid efficacy-reducing drug interactions or discover countermeasures for chemical toxins, drug-drug suppression relationships have not been systematically mapped. Here, we analyze the growth response of Saccharomyces cerevisiae to anti-fungal compound ("drug") pairs. Among 440 ordered drug pairs, we identified 94 suppressive drug interactions. Using only pairs not selected on the basis of their suppression behavior, we provide an estimate of the prevalence of suppressive interactions between anti-fungal compounds as 17%. Analysis of the drug suppression network suggested that Bromopyruvate is a frequently suppressive drug and Staurosporine is a frequently suppressed drug. We investigated potential explanations for suppressive drug interactions, including chemogenomic analysis, coaggregation, and pH effects, allowing us to explain the interaction tendencies of Bromopyruvate. PMID:24704506
Exploratory and Confirmatory Factor Analysis of the Metacognition Scale for Primary School Students
ERIC Educational Resources Information Center
Yildiz, Eylem; Akpinar, Ercan; Tatar, Nilgun; Ergin, Omer
2009-01-01
The purpose of this study is to develop the Metacognition Scale (MS) which is designed for primary school students. The sample of the study consisted of 426 primary school students in Izmir, Turkey. In order to examine the construct validity of the MS, exploratory factor analysis and confirmatory factor analysis were performed. For the validity of…
Stress analysis of 27% scale model of AH-64 main rotor hub
NASA Technical Reports Server (NTRS)
Hodges, R. V.
1985-01-01
Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress checked for future application.
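The margin-of-safety bookkeeping behind such a stress check can be sketched as follows; the allowable and applied stresses here are hypothetical, not values from the report.

```python
def margin_of_safety(allowable_stress, applied_stress):
    """Margin of safety as conventionally used in airframe stress checks:
    MS = allowable / applied - 1; MS >= 0 means the component passes."""
    return allowable_stress / applied_stress - 1.0

# Hypothetical hub component: 62 ksi allowable vs 41 ksi applied
ms = margin_of_safety(62.0, 41.0)
```

A "positive margin in all components checked," as reported above, means every such ratio exceeded one.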
EARs in the Wild: Large-Scale Analysis of Execution After Redirect Vulnerabilities
Kruegel, Christopher
Pierre Payet
…a research paper that incorrectly modeled the redirect semantics, causing their static analysis to miss EAR vulnerabilities. To understand the breadth and scope of EARs in the real world, we performed a large…
Large-Scale Network Traffic Monitoring with DBStream, a System for Rolling Big Data Analysis
Giaccone, Paolo
Arian…
A core challenge of Network Traffic Monitoring and Analysis (NTMA) is to process big, heterogeneous, and high-speed data. Network monitoring data… The approach can be generalized to other big data problems with high volume and velocity.
Physical Analysis and Scaling of a Jet and Vortex Actuator
NASA Technical Reports Server (NTRS)
Lachowicz, Jason T.; Yao, Chung-Sheng; Joslin, Ronald D.
2004-01-01
Our previous studies have shown that the Jet and Vortex Actuator generates free-jet, wall-jet, and near-wall vortex flow fields. That is, the actuator can be operated in different modes by simply varying the driving frequency and/or amplitude. For this study, variations are made in the actuator plate and wide-slot widths and in sine/asymmetrical actuator-plate input forcing (drivers) to further study the actuator-induced flow fields. Laser-sheet flow visualization, particle-image velocimetry, and laser velocimetry are used to measure and characterize the actuator-induced flow fields. Laser velocimetry measurements indicate that the vortex strength increases with the driver repetition rate for a fixed actuator geometry (wide-slot and plate width). For a given driver repetition rate, the vortex strength increases as the plate width decreases, provided the wide-slot to plate-width ratio is fixed. Using an asymmetric plate driver, a stronger vortex is generated for the same actuator geometry and a given driver repetition rate. Nondimensional scaling provides approximate ranges for operating the actuator in the free-jet, wall-jet, or vortex flow regimes. Finally, phase-locked velocity measurements from particle-image velocimetry indicate that the vortex structure is stationary, confirming previous computations. Both the computations and the particle-image velocimetry measurements show, as expected, unsteadiness near the wide-slot opening, which is indicative of mass ejection from the actuator.
ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK
Lee, S; Richard Dimenna, R; Robert Leishear, R; David Stefanko, D
2007-03-28
Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.
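The velocity criterion used to extend the model can be illustrated with the textbook far-field decay law for a round turbulent free jet, u_c ≈ B·u0·d/x with B ≈ 6; all numbers below are hypothetical and are not Tank 18 values or the report's actual CFD criterion.

```python
def jet_centerline_velocity(u0, d, x, B=6.0):
    """Centerline velocity of a round turbulent free jet: u_c = B*u0*d/x in
    the far field (textbook decay law, B ~ 6); no decay inside the core."""
    return B * u0 * d / x if x > B * d else u0

def suspends_sludge(u0, d, x, u_crit, B=6.0):
    """Velocity criterion: sludge at distance x is suspended if the local
    jet centerline velocity still exceeds a critical suspension velocity."""
    return jet_centerline_velocity(u0, d, x, B) >= u_crit

# Hypothetical numbers: 20 m/s jet from a 0.1 m nozzle, 2 m/s criterion
reach = suspends_sludge(20.0, 0.1, x=5.0, u_crit=2.0)
```

Sweeping x until the criterion first fails gives an effective cleaning radius for the pump, which is the kind of prediction the velocity-criterion extension supports.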
Quantitative nonlinearity analysis of model-scale jet noise
NASA Astrophysics Data System (ADS)
Miller, Kyle G.; Reichman, Brent O.; Gee, Kent L.; Neilsen, Tracianne B.; Atchley, Anthony A.
2015-10-01
The effects of nonlinearity on the power spectrum of jet noise can be directly compared with those of atmospheric absorption and geometric spreading through an ensemble-averaged, frequency-domain version of the generalized Burgers equation (GBE) [B. O. Reichman et al., J. Acoust. Soc. Am. 136, 2102 (2014)]. The rate of change in the sound pressure level due to nonlinearity, in decibels per jet nozzle diameter, is calculated using a dimensionless form of the quadspectrum of the pressure and the squared-pressure waveforms. In this paper, this formulation is applied to atmospheric propagation of a spherically spreading, initially sinusoidal signal and to unheated model-scale supersonic (Mach 2.0) jet data. The rate of change in level due to nonlinearity is calculated and compared with estimated effects due to absorption and geometric spreading. Comparing these losses with the change predicted due to nonlinearity shows that absorption and nonlinearity are of similar magnitude in the geometric far field, where shocks are present, which causes the high-frequency spectral shape to remain unchanged.
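The quadspectrum of the pressure and squared-pressure waveforms that drives this calculation can be sketched with an FFT. Below is a minimal numpy illustration, assuming the quadspectrum is taken as the imaginary part of the cross-spectrum between p and p²; the function name and the sinusoidal test signal are our own choices, not from the paper.

```python
import numpy as np

def quadspectrum(p, fs):
    """Imaginary part of the cross-spectrum between a pressure
    waveform p and its square, the quantity used as the nonlinearity
    indicator in the frequency-domain generalized Burgers equation."""
    P = np.fft.rfft(p)
    P2 = np.fft.rfft(p * p)
    freqs = np.fft.rfftfreq(p.size, d=1.0 / fs)
    # Cross-spectrum conj(P) * P2; the quadspectrum is its imaginary part.
    Q = np.imag(np.conj(P) * P2) / p.size
    return freqs, Q

# A pure sinusoid has not yet generated harmonics, so p and p^2 share
# no frequency bins and the quadspectrum is essentially zero.
fs = 10000.0
t = np.arange(0, 1.0, 1.0 / fs)
p = np.sin(2 * np.pi * 100 * t)
freqs, Q = quadspectrum(p, fs)
```

Once the waveform steepens and harmonics appear, the bins overlap and the quadspectrum becomes nonzero, quantifying the nonlinear energy transfer.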
Global analysis of large-scale chemical and biological experiments
Root, David E; Kelley, Brian P
2005-01-01
Research in the life sciences is increasingly dominated by high-throughput data collection methods that benefit from a global approach to data analysis. Recent innovations that facilitate such comprehensive analyses are highlighted. Several developments enable the study of the relationships between newly derived experimental information, such as biological activity in chemical screens or gene expression studies, and prior information, such as physical descriptors for small molecules or functional annotation for genes. The way in which global analyses can be applied to both chemical screens and transcription profiling experiments using a set of common machine learning tools is discussed. PMID:12058610
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale model is recommended because of the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.
Wu, Hui-Chun; Hegelich, B.M.; Fernandez, J.C.; Shah, R.C.; Palaniyappan, S.; Jung, D.; Yin, L; Albright, B.J.; Bowers, K.; Huang, C.; Kwan, T.J.
2012-06-19
Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing carbon nanometer-scale targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC have been validated, including the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions bring promising applications closer to realization: ion-based fast ignition, active interrogation, and hadron therapy.
Mayunga, Joseph S.
2010-07-14
Over the past decades, coastal areas in the United States have experienced exponential increases in economic losses due to flooding, hurricanes, and tropical storms. This in part is due to increasing concentrations of human ...
Clement, T. P.; Johnson, Christian D.; Sun, Y.; Klecka, Gary M.; Bartlett, Craig
2000-03-31
A multi-dimensional, multi-species reactive transport model was developed to aid in the analysis of natural attenuation design at chlorinated solvent sites. The model can simulate several simultaneously occurring attenuation processes including aerobic and anaerobic biological degradation processes. The developed model was applied to analyze field-scale transport and biodegradation processes occurring at the Area-6 site in Dover Air Force Base, Delaware. The model was calibrated to field data collected at this site. The calibrated model reproduced the general groundwater flow patterns, and also successfully recreated the observed distribution of PCE, TCE, DCE, VC and chloride plumes. Field-scale decay rates of these contaminant plumes were also estimated. The decay rates are within the range of values that were previously estimated based on lab-scale microcosm and field-scale transect analyses. Model simulation results indicated that the anaerobic degradation rate of TCE, source loading rate, and groundwater transport rate are the important model parameters. Sensitivity analysis of the model indicated that the shape and extent of the predicted TCE plume is most sensitive to transmissivity values. The total mass of the predicted TCE plume is most sensitive to TCE anaerobic degradation rates. The numerical model developed in this study is a useful engineering tool for integrating field-scale natural attenuation data within a rational modeling framework. The model results can be used for quantifying the relative importance of various simultaneously occurring natural attenuation processes.
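The sequential reductive dechlorination chain PCE → TCE → DCE → VC that such models simulate can be illustrated as coupled first-order ODEs. The sketch below uses illustrative rate constants, not the calibrated Area-6 values, and ignores transport to show only the reaction network.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order sequential dechlorination chain PCE -> TCE -> DCE -> VC.
# Rate constants (per day) are hypothetical placeholders, not site values.
k = {"PCE": 0.005, "TCE": 0.004, "DCE": 0.003, "VC": 0.002}

def chain(t, c):
    """Each species decays at its own first-order rate and is fed by
    the decay of its parent species."""
    pce, tce, dce, vc = c
    return [-k["PCE"] * pce,
            k["PCE"] * pce - k["TCE"] * tce,
            k["TCE"] * tce - k["DCE"] * dce,
            k["DCE"] * dce - k["VC"] * vc]

# Start with unit PCE concentration and integrate for 1000 days.
sol = solve_ivp(chain, (0, 1000), [1.0, 0.0, 0.0, 0.0])
c_end = sol.y[:, -1]
```

In a full reactive transport model these source terms would be added to an advection-dispersion equation for each species; here the parent PCE simply follows exp(-kt) while daughter products grow and then decay.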
Multi-Scale Scratch Analysis in Qinghai-Tibet Plateau and its Geological Implications
NASA Astrophysics Data System (ADS)
Sun, Yanyun; Yang, Wencai; Yu, Changqing
2015-08-01
Multi-scale scratch analysis on a regional gravity field is a new data processing system for depicting three-dimensional density structures and tectonic features. It comprises four modules including the spectral analysis of potential fields, multi-scale wavelet analysis, density distribution inversion, and scratch analysis. The multi-scale scratch analysis method was applied to regional gravity data to extract information about the deformation belts in the Qinghai-Tibet Plateau, which can help reveal variations of the deformation belts and plane distribution features from the upper crust to the lower crust, provide evidence for the study of three-dimensional crustal structures, and define distribution of deformation belts and mass movement. Results show the variation of deformation belts from the upper crust to the lower crust. The deformation belts vary from dense and thin in the upper crust to coarse and thick in the lower crust, demonstrating that vertical distribution of deformation belts resembles a tree with a coarse and thick trunk in the lower part and dense and thin branches at the top. The dense and thin deformation areas in the upper crust correspond to crustal shortening areas, while the thick and continuous deformation belts in the lower crust indicate the structural framework of the plateau. Additionally, the lower crustal deformation belts recognized by the multi-scale scratch analysis coincide approximately with the crustal deformation belts recognized using single-scale scratch analysis. However, deformation belts recognized by the latter are somewhat rough while multi-scale scratch analysis can provide more detailed and accurate results.
NASA Astrophysics Data System (ADS)
Kim, M. G.; Jang, H.; Kim, H.; Cho, S.
2013-04-01
To obtain design sensitivity in molecular dynamics (MD), finite differencing is impractical from the viewpoint of efficiency and accuracy since MD problems could include highly nonlinear design parameters and generally require a lot of computation time. In this paper, using a bridging scale decomposition method, we develop a multiscale adjoint design sensitivity analysis (DSA) method for the coarse-scale performance of atomistic-continuum dynamic systems considering fine scale effects. Due to the decomposition of the total solution into fine and coarse scales using a mass-weighted projection operator that possesses the orthogonal property of the complementary projector to the mass matrix, each scale can be independently considered in both response and sensitivity analyses. To reduce computing costs, using a generalized Langevin equation and lattice mechanics, the fine scale MD region is confined locally while the coarse-scale finite element analysis is utilized in the whole domain. The multiscale adjoint sensitivity includes only explicitly design-dependent terms together with the original and adjoint responses, so that one additional time integration process is sufficient to evaluate the design sensitivity. Numerical examples demonstrate the accuracy of the developed DSA method for various design variables in coarse and fine scales.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.
1997-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight on the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.
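The fractal dimension D can be estimated in several ways; the sketch below uses box counting on a binary image as a generic illustration (the estimator actually applied to NDVI surfaces in the study may differ, and the function name and test image are ours).

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension D of a binary image by box
    counting: count occupied boxes N(s) at each box size s and fit
    log N(s) = -D log s + const."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        # Tile the image into s-by-s blocks and mark occupied blocks.
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled square should give D close to 2.
img = np.ones((64, 64), dtype=bool)
D = box_counting_dimension(img)
```

Repeating the estimate after resampling the image to coarser pixel sizes traces out the fractal dimension-resolution relationship discussed in the abstract.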
Scale invariance analysis of the premature ECG signals
NASA Astrophysics Data System (ADS)
Wang, Jun; Cheng, Keqiang
2012-06-01
The multifractal detrended fluctuation analysis and detrending moving average algorithms were introduced in detail and applied to the study of the multifractal characteristics of normal signals, atrial premature beat (APB) signals and premature ventricular contraction (PVC) signals. By analyzing the generalized Hurst exponents, Renyi exponents and multifractal spectrum, and comparing the h(q) relation for the original signals and their shuffled time series, the results indicated that the three signals have multifractality and present long-range correlation in a certain range. According to the mean value of the multifractal spectrum width Δα, we found that the strength of the multifractality varies: the PVC signals are the strongest and the normal signals the weakest. This is useful in clinical practice for distinguishing APB signals from PVC signals.
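As a rough illustration of the first method named above, the following sketch estimates the generalized Hurst exponent h(q) by order-1 multifractal detrended fluctuation analysis. The function name, scale choices, and white-noise test signal are our own; the paper's implementation details may differ.

```python
import numpy as np

def mfdfa_hurst(x, q, scales):
    """Generalized Hurst exponent h(q) via MFDFA with linear
    detrending: slope of log F_q(s) versus log s."""
    y = np.cumsum(x - np.mean(x))          # profile of the signal
    logF = []
    for s in scales:
        n = y.size // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)    # order-1 local trend
            res = seg - np.polyval(coef, t)
            f2.append(np.mean(res ** 2))
        f2 = np.asarray(f2)
        if q == 0:
            Fq = np.exp(0.5 * np.mean(np.log(f2)))
        else:
            Fq = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
        logF.append(np.log(Fq))
    h, _ = np.polyfit(np.log(scales), logF, 1)
    return h

# Uncorrelated white noise should give h(2) near 0.5; long-range
# correlated signals such as the ECG series give larger exponents.
rng = np.random.default_rng(0)
white = rng.standard_normal(20000)
h2 = mfdfa_hurst(white, q=2, scales=[16, 32, 64, 128, 256])
```

A q-dependent h(q) over a range of moments is the signature of multifractality that the abstract reports for all three signal classes.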
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Automated Large-Scale Shoreline Variability Analysis From Video
NASA Astrophysics Data System (ADS)
Pearre, N. S.
2006-12-01
Land-based video has been used to quantify changes in nearshore conditions for over twenty years. By combining the ability to track rapid, short-term shoreline change and changes associated with longer term or seasonal processes, video has proved to be a cost effective and versatile tool for coastal science. Previous video-based studies of shoreline change have typically examined the position of the shoreline along a small number of cross-shore lines as a proxy for the continuous coast. The goal of this study is twofold: (1) to further develop automated shoreline extraction algorithms for continuous shorelines, and (2) to track the evolution of a nourishment project at Rehoboth Beach, DE that was concluded in June 2005. Seven cameras are situated approximately 30 meters above mean sea level and 70 meters from the shoreline. Time exposure and variance images are captured hourly during daylight and transferred to a local processing computer. After correcting for lens distortion and geo-rectifying to a shore-normal coordinate system, the images are merged to form a composite planform image of 6 km of coast. Automated extraction algorithms establish shoreline and breaker positions throughout a tidal cycle on a daily basis. Short and long term variability in the daily shoreline will be characterized using empirical orthogonal function (EOF) analysis. Periodic sediment volume information will be extracted by incorporating the results of monthly ground-based LIDAR surveys and by correlating the hourly shorelines to the corresponding tide level under conditions with minimal wave activity. The Delaware coast in the area downdrift of the nourishment site is intermittently interrupted by short groins. An Even/Odd analysis of the shoreline response around these groins will be performed. The impact of groins on the sediment volume transport along the coast during periods of accretive and erosive conditions will be discussed. 
[This work is being supported by DNREC and the Henlopen Hotel
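The empirical orthogonal function (EOF) analysis planned for the daily shorelines can be sketched via the singular value decomposition of the demeaned data matrix. The synthetic data, names, and mode structure below are illustrative only.

```python
import numpy as np

def eof_analysis(data):
    """EOFs of a (time x alongshore) matrix of shoreline positions:
    remove the time mean at each location, take the SVD; the rows of
    vt are spatial EOF modes and the singular values give each
    mode's fraction of the total variance."""
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    return vt, variance_fraction

# Synthetic shoreline record: one dominant alongshore mode plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 50)          # alongshore coordinate
amp = rng.standard_normal((200, 1))        # daily amplitude of the mode
data = 5.0 + amp * np.sin(x) + 0.1 * rng.standard_normal((200, 50))
eofs, frac = eof_analysis(data)
```

The leading mode captures nearly all the variance of the synthetic record; applied to real shorelines, the leading EOFs separate seasonal and storm-driven variability from noise.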
Wavelet-based cross-correlation analysis of structure scaling in turbulent clouds
NASA Astrophysics Data System (ADS)
Arshakian, Tigran G.; Ossenkopf, Volker
2016-01-01
Aims: We propose a statistical tool to compare the scaling behaviour of turbulence in pairs of molecular cloud maps. Using artificial maps with well-defined spatial properties, we calibrate the method and test its limitations to apply it ultimately to a set of observed maps. Methods: We develop the wavelet-based weighted cross-correlation (WWCC) method to study the relative contribution of structures of different sizes and their degree of correlation in two maps as a function of spatial scale, and the mutual displacement of structures in the molecular cloud maps. Results: We test the WWCC for circular structures having a single prominent scale and fractal structures showing a self-similar behaviour without prominent scales. Observational noise and a finite map size limit the scales on which the cross-correlation coefficients and displacement vectors can be reliably measured. For fractal maps containing many structures on all scales, the limitation from observational noise is negligible for signal-to-noise ratios ≥5. We propose an approach for the identification of correlated structures in the maps, which allows us to localize individual correlated structures and recognize their shapes and suggest a recipe for recovering enhanced scales in self-similar structures. The application of the WWCC to the observed line maps of the giant molecular cloud G 333 allows us to add specific scale information to the results obtained earlier using the principal component analysis. The WWCC confirms the chemical and excitation similarity of 13CO and C18O on all scales, but shows a deviation of HCN at scales of up to 7 pc. This can be interpreted as a chemical transition scale. The largest structures also show a systematic offset along the filament, probably due to a large-scale density gradient.
Conclusions: The WWCC can compare correlated structures in different maps of molecular clouds identifying scales that represent structural changes, such as chemical and phase transitions and prominent or enhanced dimensions.
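The scale-dependent cross-correlation idea can be approximated with a simple band-pass-and-correlate sketch. This uses a difference-of-Gaussians filter as a stand-in for the paper's wavelet kernel, and it omits the weighting and displacement-vector machinery of the full WWCC; all names and test maps are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_correlation(map1, map2, scales):
    """Cross-correlation of two maps as a function of spatial scale:
    band-pass each map with a difference-of-Gaussians filter at each
    scale, then take the Pearson correlation of the filtered maps."""
    coeffs = []
    for s in scales:
        f1 = gaussian_filter(map1, s) - gaussian_filter(map1, 2 * s)
        f2 = gaussian_filter(map2, s) - gaussian_filter(map2, 2 * s)
        c = np.corrcoef(f1.ravel(), f2.ravel())[0, 1]
        coeffs.append(c)
    return np.array(coeffs)

# Two maps sharing the same underlying structure, one with added noise,
# stay highly correlated on all resolved scales.
rng = np.random.default_rng(2)
base = gaussian_filter(rng.standard_normal((128, 128)), 4)
noisy = base + 0.05 * rng.standard_normal((128, 128))
cc = scale_correlation(base, noisy, scales=[2, 4, 8])
```

A scale at which the correlation drops, as for HCN near 7 pc in the abstract, marks a structural (e.g. chemical) transition between the two tracers.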
SU-E-T-472: A Multi-Dimensional Measurements Comparison to Analyze a 3D Patient Specific QA Tool
Ashmeg, S; Jackson, J; Zhang, Y; Oldham, M; Yin, F; Ren, L
2014-06-01
Purpose: To quantitatively evaluate a 3D patient specific QA tool using 2D film and 3D Presage dosimetry. Methods: A brain IMRT case was delivered to Delta4, EBT2 film and a Presage plastic dosimeter. The film was inserted in the solid water slabs at 7.5 cm depth for measurement. The Presage dosimeter was inserted into a head phantom for 3D dose measurement. Delta4's Anatomy software was used to calculate the corresponding dose to the film in solid water slabs and to Presage in the head phantom. The results from Anatomy were compared to both calculated results from Eclipse and measured dose from film and Presage to evaluate its accuracy. Using RIT software, we compared the “Anatomy” dose to the EBT2 film measurement and the film measurement to the ECLIPSE calculation. For 3D analysis, the DICOM file of “Anatomy” was extracted and imported to CERR software, which was used to compare the Presage dose to both the “Anatomy” calculation and the ECLIPSE calculation. Gamma criteria of 3%/3 mm and 5%/5 mm were used for comparison. Results: Gamma passing rates of film vs “Anatomy”, “Anatomy” vs ECLIPSE and film vs ECLIPSE were 82.8%, 70.9% and 87.6%, respectively, with the 3%/3 mm criterion. When the criterion was changed to 5%/5 mm, the passing rates became 87.8%, 76.3% and 90.8%, respectively. For 3D analysis, Anatomy vs ECLIPSE showed gamma passing rates of 86.4% and 93.3% for 3%/3 mm and 5%/5 mm, respectively. The rate is 77.0% for the Presage vs ECLIPSE analysis. The Anatomy vs ECLIPSE comparisons were absolute dose comparisons, whereas the film and Presage analyses were relative comparisons. Conclusion: The results show higher passing rates in 3D than in 2D in the “Anatomy” software. This could be due to the higher degrees of freedom in 3D than in 2D for gamma analysis.
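The gamma analysis behind these passing rates can be sketched in one dimension. This is a simplified global-gamma illustration with a brute-force search and a hypothetical Gaussian profile, not the RIT or CERR implementation.

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx, dose_tol, dist_tol):
    """1-D global gamma analysis: for each measured point, gamma is
    the minimum over reference points of
    sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2);
    a point passes when gamma <= 1. Dose tolerance is a fraction of
    the reference maximum (global normalization)."""
    x = np.arange(ref.size) * dx
    dmax = ref.max()
    gammas = np.empty(meas.size)
    for i in range(meas.size):
        dd = (meas[i] - ref) / (dose_tol * dmax)   # dose-difference term
        dr = (x[i] - x) / dist_tol                 # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dr ** 2).min()
    return 100.0 * np.mean(gammas <= 1.0)

# Identical profiles should pass everywhere at 3%/3 mm.
ref = np.exp(-((np.arange(100) - 50) ** 2) / 200.0)
rate = gamma_pass_rate(ref, ref.copy(), dx=1.0, dose_tol=0.03, dist_tol=3.0)
```

In 2D or 3D the search runs over pixels or voxels instead of points, which is why the 3D comparisons in the abstract have more freedom to find an agreeing neighbor and tend to show higher passing rates.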
Scaling analysis applied to the NORVEX code development and thermal energy flight experiment
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; Namkoong, David; Darling, Douglas
1991-01-01
A scaling analysis is used to study the dominant flow processes that occur in molten phase change material (PCM) under 1 g and microgravity conditions. Results of the scaling analysis are applied to the development of the NORVEX (NASA Oak Ridge Void Experiment) computer program and the preparation of the Thermal Energy Storage (TES) flight experiment. The NORVEX computer program, which is being developed to predict melting and freezing of the PCM with void formation in a 1 g or microgravity environment, is described. NORVEX predictions are compared with the scaling and similarity results. The approach to be used to validate NORVEX with TES flight data is also discussed. Similarity and scaling show that the inertial terms must be included as part of the momentum equation in either the 1 g or microgravity environment (a creeping flow assumption is invalid). A 10^-4 g environment was found to be a suitable microgravity environment for the proposed PCM.
Scaling Analysis for the Direct Reactor Auxiliary Cooling System for AHTRs
Yoder, Graydon L., Jr.; Wilson, Dane F.; Wang, X.; Lv, Q.; Sun, X.; Christensen, R. N.; Blue, T. E.; Subharwall, Piyush
2011-01-01
The Direct Reactor Auxiliary Cooling System (DRACS), shown in Fig. 1 [1], is a passive heat removal system proposed for the Advanced High-Temperature Reactor (AHTR). It features three coupled natural circulation/convection loops relying entirely on buoyancy as the driving force. A prototypic design of the DRACS employed in a 20-MWth AHTR has been discussed in our previous work [2]. The total height of the DRACS is usually more than 10 m, and the required heating power will be large (on the order of 200 kW), both of which make a full-scale experiment infeasible in our laboratory. This motivates us to perform a scaling analysis for the DRACS to obtain a scaled-down model. In this paper, the theory and methodology for such a scaling analysis are presented.
An analysis of space scales for sea ice drift
Carrieres, T.
1994-12-31
Sea ice presents a hazard to navigation off Canada's east coast from January to June. The Ice Centre Environment Canada (ICEC), which is part of the Atmospheric Environment Service, monitors ice conditions in order to assist safe and efficient operations through or around the ice. The ice program depends on an advanced data acquisition, analysis and forecasting effort. Support for the latter is provided by kinematic models as well as a fairly simple dynamic sea ice model. In order to improve ICEC's forecasting capabilities, the Department of Fisheries and Oceans (DFO) conducts ice modelling research and regular field experiments. The experiments provide a better understanding of the ice and also allow models to be validated and refined. The Bedford Institute of Oceanography (BIO, part of DFO) regularly deploys beacons on ice floes off the Labrador and Newfoundland coasts. These beacons provide environmental as well as location information through Service ARGOS. The accuracy and specifications of the sensors are documented in Prinsenberg, 1993. The beacon locations are used here to infer a relatively unbiased representation of sea ice drift.
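Inferring drift from successive beacon fixes reduces to a great-circle distance over the elapsed time. A haversine sketch follows; the function name and fix values are illustrative, not from the beacon dataset.

```python
import math

def drift_velocity(lat1, lon1, t1, lat2, lon2, t2):
    """Mean drift speed (m/s) between two beacon position fixes,
    using the haversine great-circle distance; times in seconds."""
    R = 6371000.0  # mean Earth radius, m
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))
    return dist / (t2 - t1)

# A floe moving 0.01 degree of latitude (about 1.1 km) in 3 hours
# drifts at roughly 0.1 m/s.
v = drift_velocity(50.0, -55.0, 0.0, 50.01, -55.0, 3 * 3600.0)
```

Differencing a whole ARGOS track this way yields the drift time series from which spatial scales of sea ice motion can be analyzed.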
Adapting and Validating a Scale to Measure Sexual Stigma among Lesbian, Bisexual and Queer Women
Logie, Carmen H.; Earnshaw, Valerie
2015-01-01
Lesbian, bisexual and queer (LBQ) women experience pervasive sexual stigma that harms wellbeing. Stigma is a multi-dimensional construct and includes perceived stigma, awareness of negative attitudes towards one’s group, and enacted stigma, overt experiences of discrimination. Despite its complexity, sexual stigma research has generally explored singular forms of sexual stigma among LBQ women. The study objective was to develop a scale to assess perceived and enacted sexual stigma among LBQ women. We adapted a sexual stigma scale for use with LBQ women. The validation process involved 3 phases. First, we held a focus group where we engaged a purposively selected group of key informants in cognitive interviewing techniques to modify the survey items to enhance relevance to LBQ women. Second, we implemented an internet-based, cross-sectional survey with LBQ women (n=466) in Toronto, Canada. Third, we administered an internet-based survey at baseline and 6-week follow-up with LBQ women in Toronto (n=24) and Calgary (n=20). We conducted an exploratory factor analysis using principal components analysis and descriptive statistics to explore health and demographic correlates of the sexual stigma scale. Analyses yielded one scale with two factors: perceived and enacted sexual stigma. The total scale and subscales demonstrated adequate internal reliability (total scale alpha coefficient: 0.78; perceived sub-scale: 0.70; enacted sub-scale: 0.72), test-retest reliability, and construct validity. Perceived and enacted sexual stigma were associated with higher rates of depressive symptoms and lower self-esteem, social support, and self-rated health scores. Results suggest this sexual stigma scale adapted for LBQ women has good psychometric properties and addresses enacted and perceived stigma dimensions. The overwhelming majority of participants reported experiences of perceived sexual stigma. 
This underscores the importance of moving beyond a singular focus on discrimination to explore perceptions of social judgment, negative attitudes and social norms. PMID:25679391
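The internal-reliability figures quoted above (alpha coefficients of 0.70 to 0.78) are Cronbach's alpha, which is straightforward to compute from an item-score matrix. The sketch below uses synthetic data; in the degenerate case of perfectly parallel items, alpha equals 1.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the
    total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Three identical items measuring the same latent trait: alpha = 1.
rng = np.random.default_rng(3)
trait = rng.standard_normal(100)
perfect = np.column_stack([trait, trait, trait])
a = cronbach_alpha(perfect)
```

Real scale items carry item-specific error, pulling alpha below 1; values around 0.7 to 0.8, as reported for the perceived and enacted subscales, are conventionally considered adequate.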
Spatial scaling: Its analysis and effects on animal movements in semiarid landscape mosaics
Wiens, J.A.
1992-09-01
The research conducted under this agreement focused in general on the effects of environmental heterogeneity on movements of animals and materials in semiarid grassland landscapes, on the form of scale-dependency of ecological patterns and processes, and on approaches to extrapolating among spatial scales. The findings are summarized in a series of published and unpublished papers that are included as the main body of this report. We demonstrated the value of "experimental model systems" employing observations and experiments conducted in small-scale microlandscapes to test concepts relating to flows of individuals and materials through complex, heterogeneous mosaics. We used fractal analysis extensively in this research, and showed how fractal measures can produce insights and lead to questions that do not emerge from more traditional scale-dependent measures. We developed new concepts and theory to deal with scale-dependency in ecological systems and with integrating individual movement patterns into considerations of population and ecosystem dynamics.
NASA Astrophysics Data System (ADS)
Norman, Matthew R.
2014-10-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
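As a toy counterpart to the positivity-preserving finite-volume transport described here, the sketch below advects a square pulse with the first-order upwind flux, the low-order building block that FCT-style limiters blend with a high-order flux. It is not the ADER-DT scheme itself; names and parameters are illustrative.

```python
import numpy as np

def upwind_advect(q, u, dx, dt, steps):
    """First-order finite-volume upwind advection with periodic
    boundaries. Under the CFL condition each update is a convex
    combination of neighboring cell averages, so the scheme is
    conservative and strictly positivity-preserving."""
    c = u * dt / dx
    assert 0.0 <= c <= 1.0, "CFL condition violated"
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))
    return q

# Advect a square pulse halfway around a periodic domain.
q0 = np.zeros(100)
q0[40:60] = 1.0
q1 = upwind_advect(q0.copy(), u=1.0, dx=1.0, dt=0.5, steps=50)
```

The upwind solution is diffusive; schemes like the one in the abstract recover high-order accuracy while limiting back toward this flux only where needed to keep the solution positive.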
NASA Astrophysics Data System (ADS)
Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I.
2015-08-01
We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
Kato, Takahiro A; Watabe, Motoki; Kanba, Shigenobu
2013-01-01
Neurons and synapses have long been the dominant focus of neuroscience, and thus the pathophysiology of psychiatric disorders has come to be understood within the neuronal doctrine. However, the majority of cells in the brain are not neurons but glial cells, including astrocytes, oligodendrocytes, and microglia. Traditionally, neuroscientists regarded glial functions as simply providing physical support and maintenance for neurons; in this limited role, glia were long ignored. Recently, glial functions have been gradually investigated, and increasing evidence has suggested that glial cells perform important roles in various brain functions. Uncovering glial functions, further understanding these crucial cells, and studying the interaction between neurons and glia may shed new light on many unknown aspects, including the mind-brain gap and conscious-unconscious relationships. We briefly review the current situation of glial research in the field, and propose a novel translational research with a multi-dimensional model, combining various experimental approaches such as animal studies, in vitro & in vivo neuron-glia studies, a variety of human brain imaging investigations, and psychometric assessments. PMID:24155727
A new approach for modeling and analysis of molten salt reactors using SCALE
Powers, J. J.; Harrison, T. J.; Gehin, J. C.
2013-07-01
The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options is sorted into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the evaluation criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)
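The iterate-to-equilibrium idea behind continuous feed and removal can be illustrated with a deliberately simplified one-nuclide balance (hypothetical rates and names; this is a sketch of the concept, not SCALE/TRITON's actual algorithm): march dn/dt = feed − (λ + removal)·n forward in steps until the inventory stops changing.

```python
import math

def step(n, dt, lam, feed, removal):
    """Advance one depletion step: exact solution of
    dn/dt = feed - (lam + removal) * n over a step of length dt."""
    k = lam + removal
    return n * math.exp(-k * dt) + (feed / k) * (1.0 - math.exp(-k * dt))

def equilibrium(n0, dt, lam, feed, removal, tol=1e-10, max_steps=100_000):
    """Fixed-point iteration: step until the inventory change falls below tol."""
    n = n0
    for _ in range(max_steps):
        n_next = step(n, dt, lam, feed, removal)
        if abs(n_next - n) < tol:
            return n_next
        n = n_next
    return n
```

For this linear toy model the converged value matches the analytic equilibrium feed / (λ + removal); in a real MSR analysis each "step" would be a full TRITON depletion calculation and the state a vector of nuclide inventories.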
NASA Astrophysics Data System (ADS)
Jain, G.; Sharma, S.; Vyas, A.; Rajawat, A. S.
2014-11-01
This study attempts to measure and characterize urban sprawl along its multiple dimensions in Jamnagar city, India. The study utilized multi-date satellite images acquired by the CORONA, IRS 1D PAN & LISS-3, IRS P6 LISS-4 and Resourcesat-2 LISS-4 sensors. The extent of urban growth in the study area was mapped at 1:25,000 scale for the years 1965, 2000, 2005 and 2011. The growth of urban areas was further categorized into infill growth, expansion and leapfrog development. The city's urban area grew at 1.60% per annum during 2000-2011, whereas population growth over the same period was below 1.0% per annum. New development during 2000-2005 comprised 22% infill development, 60% extension of the peripheral urbanized areas, and 18% leapfrog development. During 2005-2011, however, the proportion of leapfrog development increased to 28%, whereas, owing to the decreasing availability of developable area within the city, infill development declined to 9%. The urban sprawl in Jamnagar city was further characterized on the basis of five dimensions of sprawl, viz. population density, continuity, clustering, concentration and centrality, by integrating the population data with sprawl for the years 2001 and 2011. The study characterized the growth of Jamnagar as low-density, low-concentration outward expansion.
Dynamics of a lamellar system with diffusion and reaction: Scaling analysis and global kinetics
NASA Astrophysics Data System (ADS)
Muzzio, F. J.; Ottino, J. M.
1989-12-01
The evolution of a one-dimensional array of reactive lamellae with distributed striation thickness is studied by means of simulations, scaling analysis, and space-averaged kinetics. An infinitely fast, diffusion-controlled reaction A+B-->2P occurs at the interfaces between striations. As time increases, thin striations are eaten by thicker neighbors resulting in a modification of the striation thickness distribution (STD). Scaling analysis suggests that the STD evolves into a universal form and that the behavior of the system at short and long times is characterized by two different kinetic regimes. These predictions are confirmed by means of a novel numerical algorithm.
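The "thin striations are eaten by thicker neighbors" dynamics can be sketched schematically (our toy discretization, assuming every striation loses reactant at the same rate at both interfaces; this is not the authors' algorithm): when a striation in the alternating A/B array vanishes, its two same-species neighbors merge, coarsening the striation thickness distribution.

```python
def coarsen(thicknesses, dt):
    """One step of a toy lamellar coarsening model for an alternating
    A/B/A/B... array: every striation is consumed by the interfacial
    reaction on both sides; a striation that vanishes lets its two
    (same-species) neighbours merge into one thicker striation."""
    shrunk = [t - 2.0 * dt for t in thicknesses]
    out = []
    merge_next = False
    for t in shrunk:
        if t <= 0.0:
            merge_next = True          # this striation was eaten
        elif merge_next and out:
            out[-1] += t               # merge across the vanished striation
            merge_next = False
        else:
            out.append(t)
            merge_next = False
    return out
```

Iterating `coarsen` reproduces the qualitative STD evolution described above: the number of striations decreases while the survivors thicken.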
Compte, Emilio J; Sepúlveda, Ana R; de Pellegrin, Yolanda; Blanco, Miriam
2015-06-01
Several studies have demonstrated that men express body dissatisfaction differently than women. Although specific instruments that address body dissatisfaction in men have been developed, only a few have been validated in Latin-American male populations. The aim of this study was to reassess the factor structure of the Spanish versions of the Drive for Muscularity Scale (DMS-S) and the Male Body Attitudes Scale (MBAS-S) in an Argentinian sample. A cross-sectional study was conducted among 423 male students to examine: the factorial structure (confirmatory factor analysis), the internal consistency reliability, and the concurrent, convergent and discriminant validity of both scales. Results replicated the two factor structures for the DMS-S and MBAS-S. Both scales showed excellent levels of internal consistency, and various measures of construct validity indicated that the DMS-S and MBAS-S were acceptable and valid instruments to assess body dissatisfaction in Argentinian males. PMID:25828841
Tescione, Lia; Lambropoulos, James; Paranandi, Madhava Ram; Makagiansar, Helena; Ryll, Thomas
2015-01-01
A bench scale cell culture model representative of manufacturing scale (2,000 L) was developed based on oxygen mass transfer principles, for a CHO-based process producing a recombinant human protein. Cell culture performance differences across scales are characterized most often by sub-optimal performance in manufacturing scale bioreactors. By contrast, in this study reduced growth rates were observed at bench scale during the initial model development. Bioreactor models based on power per unit volume (P/V), volumetric mass transfer coefficient (kLa), and oxygen transfer rate (OTR) were evaluated to address this scale performance difference. Lower viable cell densities observed for the P/V model were attributed to higher sparge rates and reduced oxygen mass transfer efficiency (kLa) of the small scale hole spargers. Increasing the sparger kLa by decreasing the pore size resulted in a further decrease in growth at bench scale. Due to the sensitivity of the cell line to gas sparge rate and bubble size revealed by the P/V and kLa models, an OTR model based on oxygen enrichment and increased P/V was selected that generated endpoint sparge rates representative of the 2,000 L scale. This final bench scale model generated growth rates similar to manufacturing. In order to take into account other routinely monitored process parameters besides growth, a multivariate statistical approach was applied to demonstrate the validity of the small scale model. After the model was selected based on univariate and multivariate analysis, product quality data were generated and verified to fall within the 95% confidence limit of the multivariate model. PMID:25042258
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Pandarinath, Kailasa; Verma, Sanjeet K.; Agrawal, Salil
2013-05-01
For the discrimination of four tectonic settings of island arc, continental arc, within-plate (continental rift and ocean island together), and collision, we present three sets of new diagrams obtained from linear discriminant analysis of natural-logarithm-transformed ratios of major elements, immobile major and trace elements, and immobile trace elements in acid magmas. The use of discordant outlier-free samples prior to linear discriminant analysis improved the success rates by about 3% on average. Success rates of these new diagrams were acceptably high (about 69% to 97% for the first set, about 69% to 99% for the second set, and about 60% to 96% for the third set). Testing of these diagrams for acid rock samples (not used for constructing them) from known tectonic settings confirmed their overall good performance. Application of these new diagrams to Precambrian case studies provided the following generally consistent results: a continental arc setting for the Caribou greenstone belt (Canada) at about 3000 Ma, São Francisco craton (Brazil) at about 3085-2983 Ma, Penakacherla greenstone terrane (Dharwar craton, India) at about 2700 Ma, and Adola (Ethiopia) at about 885-765 Ma; a transitional continental arc to collision setting for the Rio Maria terrane (Brazil) at about 2870 Ma and the Eastern felsic volcanic terrain (India) at about 2700 Ma; a collision setting for the Kolar suture zone (India) at about 2610 Ma and the Korpo area (Finland) at about 1852 Ma; and a within-plate (likely continental rift) setting for the Malani igneous suite (India) at about 745-700 Ma. These applications suggest the utility of the new discrimination diagrams for all four tectonic settings. In fact, all three sets of diagrams were shown to be robust against post-emplacement compositional changes caused by analytical errors, element mobility related to low- or high-temperature alteration, or Fe oxidation caused by weathering.
NASA Astrophysics Data System (ADS)
Temple, Brian Allen
Conservative macroscopic governing equations for a plasma in the magnetohydrodynamic (MHD) approximation were derived from the symmetry properties of the microscopic action integral. Invariance of the action integral under translation and rotation symmetries was verified. Noether's Theorem was used to express the invariance equations as two sets of microscopic equations that represent the Euler-Lagrange (E-L) equations for the plasma and an equation in divergence form, called a conservation law, that is the first integral of the Euler-Lagrange equations. The specific forms of the conservation laws were determined from the invariance properties admitted by the action integral. Invariance under translations gave the conservation law for the translational momentum balance, while invariance under rotations gave the conservation law for the angular momentum balance. The ensemble average of the microscopic equations was taken to give the kinetic representation of the equations. The momentum integrals in the kinetic equation were evaluated to give the fluid representation of the system. The fluid representation was then expressed in the MHD limit to give the one-fluid representation for the plasma. The total derivatives in the conservation laws were evaluated for the kinetic and fluid representations to verify that the expressions are first integrals of the respective E-L equations. The symmetry properties of the conservation laws in auxiliary form were determined to test the system of equations for mapping properties that may allow the nonlinear conservation laws to be expressed as nonlinear or linear expressions. The results showed that no nonlinear-to-linear mapping was possible for the governing equations with charge distributions. The quasi-neutral governing equations admitted a scaling group that allows mapping from the source nonlinear equations to nonlinear target equations that contain one less independent variable.
The translation conservation laws were used in a two-dimensional computer simulation of the confined eddy problem to demonstrate an application of the equations. Because differences in the numerical input and graphics software make direct quantitative comparison impossible, the results produced by the code using the conservation-law governing equations could only be compared qualitatively with previous work; that comparison shows the results to be consistent.
Spatial-scale analysis of hydrographic data over the Texas-Louisiana continental shelf
NASA Astrophysics Data System (ADS)
Li, Yongxiang; Nowlin, Worth D.; Reid, Robert O.
1996-09-01
On the basis of hydrographic data collected by the Texas-Louisiana Shelf Circulation and Transport Processes Study (LATEX A) and on earlier cruises, we examined the energetic scales of spatial variability over the Texas-Louisiana continental shelf. Shelf-scale spatial reference fields were sought to represent the general distributions of circulation and water properties over the shelf at the time of the observations. Various methods were explored for determining such reference fields of potential temperature, salinity, and geopotential anomaly at the sea surface relative to 70 dbar. Spatial reference fields obtained from mean May fields and from polynomials fitted to individual May cruise data were compared. On the basis of those comparisons, quadratics were selected to fit property distributions from individual cruises and so to yield reference fields. Smaller-scale anomaly fields were obtained by removing the reference fields from the observed distributions. Calculation of correlation versus separation distance based on these anomaly fields then allowed estimation of spatial scales of anomaly fields for cross-shelf and along-shelf transects. The zero-crossing scale and the Gaussian decay scale are shown to be essentially the same, and the zero-crossing scale is used. The principal results for the anomaly scales are (1) cross-shelf scales over the western shelf are shorter (order 15 km) than those in the eastern and central regions (order 20 km), (2) along-shelf spatial scales are of the order of 35 km, (3) there is no significant difference in cross-shelf scales at the surface, middepth, and bottom, and (4) along-shelf scales are essentially the same over the western and eastern regions of the shelf, over the midshelf (50-m isobath) and along the shelf break (200-m isobath), and at different depths along the 200-m isobath.
The same spatial scales are found when using data with spatial resolution of 1-10 km cross shelf and 10-20 km along shelf to obtain the anomaly fields, so the data resolution used is adequate to represent the scales. The variances of the observed (shelf-wide) salinity, temperature, and geopotential anomaly are greater cross shelf than along shelf. The variance of the cross-shelf anomaly fields is around 10% of that in the shelf-wide fields; that of the along-shelf anomaly fields is about 35% of that in the shelf-wide fields. The analysis of scales when grouped by season did not show persuasive evidence of seasonal variation.
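The reference-field construction above (a quadratic surface fitted to each cruise, with anomalies as the residuals) can be sketched as follows; the function names are ours, and a real analysis would fit each property and each cruise separately:

```python
import numpy as np

def quadratic_reference(x, y, z):
    """Least-squares fit of a 2-D quadratic reference surface
    z ~ a + b*x + c*y + d*x**2 + e*x*y + f*y**2 to scattered
    observations, returning the fitted values at the data points."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return A @ coef

def anomaly_field(x, y, z):
    """Smaller-scale anomaly = observation minus smooth reference."""
    return z - quadratic_reference(x, y, z)
```

Correlation of these anomalies as a function of separation distance then yields the zero-crossing scales discussed in the abstract.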
SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems
Mueller, Don; Rearden, Bradley T
2009-01-01
Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving ²³³U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196, issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples based on ordinal variables, a more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: Using a clustering algorithm appropriate to the measurement scale of the variables in a study ensures high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
FACTOR ANALYSIS OF A SOCIAL SKILLS SCALE FOR HIGH SCHOOL STUDENTS.
Wang, H-Y; Lin, C-K
2015-10-01
The objective of this study was to develop a social skills scale for high school students in Taiwan. This study adopted stratified random sampling. A total of 1,729 high school students were included. The students ranged in age from 16 to 18 years. A Social Skills Scale was developed for this study and was designed for classroom teachers to fill out. The test-retest reliability of this scale was tested by Pearson's correlation coefficient. Exploratory factor analysis was used to determine construct validity. The Social Skills Scale had good overall test-retest reliability of .92, and the internal consistency of the five subscales was above .90. The results of the factor analysis showed that the Social Skills Scale covered the five domains of classroom learning skills, communication skills, individual initiative skills, interaction skills, and job-related social skills, and the five factors explained 68.34% of the variance. Thus, the Social Skills Scale had good reliability and validity and would be applicable to and could be promoted for use in schools. PMID:26340050
Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model
P. Tucci
2001-12-20
This Analysis/Model Report (AMR) documents an updated analysis of water-level data performed to provide the saturated-zone (SZ), site-scale flow and transport model (CRWMS M&O 2000) with the configuration of the potentiometric surface, target water-level data, and hydraulic gradients for model calibration. The previous analysis was presented in ANL-NBS-HS-000034, Rev 00 ICN 01, Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model (USGS 2001). This analysis is designed to use updated water-level data as the basis for estimating water-level altitudes and the potentiometric surface in the SZ site-scale flow and transport model domain. The objectives of this revision are to develop computer files containing (1) water-level data within the model area (DTN: GS010908312332.002), (2) a table of known vertical head differences (DTN: GS010908312332.003), and (3) a potentiometric-surface map (DTN: GS010608312332.001) using an alternate concept from that presented in ANL-NBS-HS-000034, Rev 00 ICN 01 for the area north of Yucca Mountain. The updated water-level data include data obtained from the Nye County Early Warning Drilling Program (EWDP) and data from borehole USW WT-24. In addition to being utilized by the SZ site-scale flow and transport model, the water-level data and potentiometric-surface map contained within this report will be available to other government agencies and water users for ground-water management purposes. The potentiometric surface defines an upper boundary of the site-scale flow model, as well as provides information useful to estimation of the magnitude and direction of lateral ground-water flow within the flow system. Therefore, the analysis documented in this revision is important to SZ flow and transport calculations in support of total system performance assessment.
Sipahi, Rifat
Controlling Dynamical Systems using Multiple Feedback Control, Phys. Rev. E 72, 016206 (2005) [Niculescu et al.]; curves, surfaces and points where there are imaginary eigenvalues [Gu, Niculescu, On stability…
Boularas, A. Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.
2014-08-28
Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of stored charge quantities, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the modification of force-distance curves induced by electrostatic charges. The sensor consists of a cantilever supporting a pyramidal tip terminated by a spherical apex. The contribution of the cantilever to the force is neglected here. A model of the force curve has been developed using the Finite Volume Method. The scheme is based on the Polynomial Reconstruction Operator (PRO) scheme. First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.
NASA Astrophysics Data System (ADS)
Guan, K.; Harman, C. J.; Basu, N. B.; Rao, S. S.; Sivapalan, M.; Kalita, P. K.; Packman, A. I.
2009-12-01
Agricultural watersheds are intensely managed systems, and consist of a large number of dynamic components that interact non-linearly to create emergent patterns in space and time. These systems can be conceptualized as input signals (“drivers”) that cascade through a hierarchy of non-linear “filters” to create the modulated spatial and temporal responses (“signatures”). The coupling between flow and transport (“hydrologic filter”) and transformations (“biogeochemical filter”) controls the cascading processes from precipitation through stream flow, and finally to chemical concentrations and loads, at various nested spatial and temporal scales. To detect important “signatures”, we applied spectral analysis and wavelet coherence to the 10-year dataset (at daily resolution) collected from the Little Vermillion River watershed (Illinois, USA), an agricultural watershed (~400 km²) drained by an extensive network of subsurface tiles, surface ditches, and streams. Watershed monitoring data include hydrologic measurements (flow and stage) and concentrations of chemical constituents (nitrate, phosphate, and pesticides) across different spatial scales, from tile-flow stations (drainage area ~0.05 km²) to river stations (drainage area ~400 km²). We find that a power-law scaling behavior exists in all the smoothed power spectra for precipitation, stream flow, nitrate concentration and load. The slopes of the power spectra increase from precipitation to stream flow to nitrate concentration, demonstrating the cascading effect of the filters. The spectral analysis further shows that the filters retain the major characteristics of the long-term response (annual and sub-annual cycles) but smooth (or filter) the short-term responses. Steeper slopes are observed at larger spatial scales, indicating a stronger filtering effect due to greater averaging (buffering) with increasing residence time.
Further data analysis using wavelet coherence suggests that at small spatial scales, stream flow is tightly coupled with nitrate concentration for both short and long temporal scales; while at larger spatial scales, only long-term coupling is observed. The decreased coupling with increasing spatial and temporal scale is attributed to the averaging of heterogeneities from different local tiles and ditches to the large-scale stream network outlets. It is hypothesized that this averaging and decoupling leads to the “apparent chemostatic” response that is observed at larger spatial scales, despite strong coupling and non-chemostatic behavior at smaller spatial scales.
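The power-law diagnostic used above, the slope of a log-log power spectrum, can be sketched in a few lines; the function name and the regression-over-all-frequencies choice are ours, and a production analysis would typically smooth or bin the periodogram first:

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate the power-law exponent of a signal's power spectrum by
    ordinary least squares of log-power against log-frequency
    (the zero-frequency bin is excluded)."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    slope, _intercept = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope
```

Steeper (more negative) slopes correspond to stronger low-pass filtering of the input signal, which is the cascading effect the abstract describes.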
Relaxation Mode Analysis and Scale-Dependent Energy Landscape Statistics in Liquids
NASA Astrophysics Data System (ADS)
Cai, Zhikun; Zhang, Yang
2015-03-01
In contrast to the prevailing focus on short-lived classical phonon modes in liquids, we propose a classical treatment of the relaxation modes in liquids under a framework analogous to the normal mode analysis in solids. Our relaxation mode analysis is built upon the experimentally measurable two-point density-density correlation function (e.g., using quasi-elastic and inelastic scattering experiments). We show that, in the Laplace-inverted relaxation frequency (z) domain, the eigen relaxation modes are readily decoupled. From here, important statistics of the scale-dependent activation energy in the energy landscape, as well as the scale-dependent relaxation time distribution function, can be obtained. We first demonstrate this approach in the case of supercooled liquids, where dynamic heterogeneity emerges in the landscape-influenced regime. We then show that, using this framework, we are able to extract the scale-dependent energy landscape statistics from neutron scattering measurements.
Fractal and multifractal analysis of pore-scale images of soil
NASA Astrophysics Data System (ADS)
Bird, Nigel; Díaz, M. Cruz; Saa, Antonio; Tarquis, Ana M.
2006-05-01
We examine critically the fractal and multifractal analysis of two-dimensional images of soil sections. We demonstrate that, depending on the porosity displayed in the image, both a fractal dimension and a multifractal spectrum can be extracted from such images irrespective of whether these images exhibit fractal structure or multifractal scaling of local density and porosity. We suggest ways to transform the data arising from the analysis in order to differentiate better between fractal and non-fractal images. We examine three soil images and conclude that there is no compelling evidence of scaling properties associated with mass fractal and multifractal structures. Our results point to a need for alternative methods for characterizing soil pore structures, and a need to extend our modelling of complex and multiscale porous media to cases where scaling symmetries are relaxed.
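For reference, the standard box-counting estimate that underlies such fractal analyses can be sketched as follows (a plain implementation on a square binary image; the box sizes and the plain least-squares fit are our choices):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a square binary image:
    count boxes of side s containing at least one occupied pixel,
    then fit the slope of log N(s) against log(1/s)."""
    img = np.asarray(img, dtype=bool)
    n = img.shape[0]
    counts = []
    for s in sizes:
        # number of s-by-s boxes with at least one occupied pixel
        c = sum(
            img[i:i + s, j:j + s].any()
            for i in range(0, n, s)
            for j in range(0, n, s)
        )
        counts.append(c)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

As the paper cautions, a finite slope alone does not establish fractality: a filled square yields a dimension of 2 and a straight line yields 1, so the quality of the scaling must be checked, not just the fitted exponent.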
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.
2015-05-01
Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
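Under one common DIA convention (our notation; the paper's exact definitions may differ), the object flux is recovered from the reference flux, the difference flux, and the photometric scale factor p, and independent errors propagate in quadrature:

```latex
f_{\mathrm{tot}} = f_{\mathrm{ref}} + \frac{f_{\mathrm{diff}}}{p}
\qquad\Longrightarrow\qquad
\sigma^{2}_{f_{\mathrm{tot}}}
  \approx \sigma^{2}_{f_{\mathrm{ref}}}
  + \frac{\sigma^{2}_{f_{\mathrm{diff}}}}{p^{2}}
  + \frac{f_{\mathrm{diff}}^{2}}{p^{4}}\,\sigma^{2}_{p} .
```

The last term is the contribution of the scale-factor error: it grows with the difference flux, which is why neglecting σ_p can introduce exactly the kind of systematic error the abstract warns about.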
Karahalios, Karrie G.
People Search within an Online Social Network: Large Scale Analysis of Facebook Graph Search Query… ABSTRACT: Facebook introduced its innovative Graph Search product with the goal to take the OSN search experience…
Genome-Scale Analysis of Translation Elongation with a Ribosome Flow Model
Ruppin, Eytan
Shlomi Reuveni et al. The Ribosome Flow Model (RFM) predicts fundamental features of the translation process, including translation rates, protein abundance levels, ribosomal densities and the relation between all these variables.
A Confirmatory Factor Analysis of the Academic Motivation Scale with Black College Students
ERIC Educational Resources Information Center
Cokley, Kevin
2015-01-01
The factor structure of the Academic Motivation Scale (AMS) was examined with a sample of 578 Black college students. A confirmatory factor analysis of the AMS was conducted. Results indicated that the hypothesized seven-factor model did not fit the data. Implications for future research with the AMS are discussed.
Analysis of Heating Systems and Scale of Natural Gas-Condensing Water Boilers in Northern Zones
Wu, Y.; Wang, S.; Pan, S.; Shi, Y.
2006-01-01
In this paper, various heating systems and scale of the natural gas-condensing water boiler in northern zones are discussed, based on a technical-economic analysis of the heating systems of natural gas condensing water boilers in northern zones...
Large scale estimation of arterial traffic and structural analysis of traffic patterns using probe… Analyzing traffic conditions on large arterial networks is an inherently difficult task. The first goal of this article is to demonstrate how arterial traffic conditions can be estimated using sparsely sampled GPS…
Analysis of Thermoelectric Properties of Scaled Silicon Nanowires Using an Atomistic Tight-Binding… Abstract: Low-dimensional materials provide the possibility of improved thermoelectric performance due… As a result of suppressed phonon conduction, large improvements in the thermoelectric figure of merit, ZT…
Smartphone usage in the wild: a large-scale analysis of applications and context
Gatica-Perez, Daniel
Trinh Minh Tri Do… two contextual variables that condition the use of smartphone applications, namely places and social… of applications and context, bringing out design implications for interfaces on smartphones.
Factor Analysis of Minnesota Multiphasic Personality Inventory-1 (MMPI-1) Validity Scale Items.
ERIC Educational Resources Information Center
Cloak, Nancy L.; Kirklen, Leonard E.; Strozier, Anne L.; Reed, James R.
1997-01-01
A factor analysis of 332 university counseling center students responses to Minnesota Multiphasic Personality Inventory-1 validity scale items identified four major factors: minimizing, exaggerating, cynicism, and psychological distress. Discusses implications for the development and interpretation of tests used in college counseling centers. (RJM)
ERIC Educational Resources Information Center
Donovan, Phillip Raymond
2009-01-01
This study focuses on the analysis of the behavior of unbound aggregates to offset wheel loads. Test data from full-scale aircraft gear loading conducted at the National Airport Pavement Test Facility (NAPTF) by the Federal Aviation Administration (FAA) are used to investigate the effects of wander (offset loads) on the deformation behavior of…
LARGE-SCALE BIOLOGY ARTICLE: Systems and Trans-System Level Analysis Identifies…
Clarke, Steven
…thaliana in low iron. INTRODUCTION: Iron is an essential nutrient for virtually all life forms. We surveyed the iron nutrition-responsive transcriptome of Chlamydomonas…
Analysis and Simulation of a Meso-scale Model of Diffusive Resistance of Bacterial Biofilms to
Demaret, Laurent
Most bacteria live in biofilm communities, which offer protection against harmful external impacts. This makes treatment of biofilm-borne bacterial infections with antibiotics difficult. We discuss a dynamic…
Analysis of Peptide MS/MS Spectra from Large-Scale Proteomics Experiments Using
Noble, William Stafford
…and facilitating high-throughput proteomics; however, there remains room for improvement. First, database search… A widespread proteomics procedure for characterizing a complex mixture of proteins…
An application of latent topic document analysis to large-scale proteomics databases
Hinneburg, Alexander
Sebastian Klie et al. …repositories, including the Global Proteome Machine (GPM) [2] and the Proteomics Identifications Database (PRIDE)… Abstract: Since the advent of public data repositories for proteomics data, readily accessible results from…
SCALE-DEPENDENT ANALYSIS OF IONOSPHERE FLUCTUATIONS Stephane G. Roux, Patrice Abry
Roux, Stephane
SCALE-DEPENDENT ANALYSIS OF IONOSPHERE FLUCTUATIONS. Stéphane G. Roux, Patrice Abry; Institute of Atmospheric Physics, ASCR, Czech Republic. ABSTRACT: …such ionospheric variations are well described by the superimposition of well-defined long-term cycles…
A Polytomous Item Response Theory Analysis of Social Physique Anxiety Scale
ERIC Educational Resources Information Center
Fletcher, Richard B.; Crocker, Peter
2014-01-01
The present study investigated the social physique anxiety scale's factor structure and item properties using confirmatory factor analysis and item response theory. An additional aim was to identify differences in response patterns between groups (gender). A large sample of high school students aged 11-15 years (N = 1,529) consisting of n =…
Multi-Scale Entropy Analysis of Different Spontaneous Motor Unit Discharge Patterns
Zhang, Xu; Chen, Xiang; Barkhaus, Paul E.; Zhou, Ping
2013-01-01
This study explores a novel application of multi-scale entropy (MSE) analysis for characterizing different patterns of spontaneous electromyogram (EMG) signals including sporadic, tonic and repetitive spontaneous motor unit discharges, and normal surface EMG baseline. Two algorithms for MSE analysis, namely the standard MSE and the intrinsic mode entropy (IMEn) (based on the recently developed multivariate empirical mode decomposition (MEMD) method), were applied to different patterns of spontaneous EMG. Significant differences were observed in multiple scales of the standard MSE and IMEn analyses (p < 0.001) for any two of the spontaneous EMG patterns, while such significance may not be observed from the single scale entropy analysis. Compared to the standard MSE, the IMEn analysis facilitates usage of a relatively low scale number to discern entropy difference among various patterns of spontaneous EMG signals. The findings from this study contribute to our understanding of the nonlinear dynamic properties of different spontaneous EMG patterns, which may be related to spinal motoneuron or motor unit health. PMID:24235117
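The standard MSE procedure summarized above (coarse-grain the signal at each scale, then compute sample entropy) can be sketched in a few lines. This is a generic illustration, not the authors' code: the IMEn/MEMD variant is not reproduced, and as a simplification the tolerance r is recomputed from each coarse-grained series rather than fixed from the original signal.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()          # simplification: r per (coarse) series
    n = len(x)

    def count_matches(mm):
        # embed the signal into overlapping templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Standard MSE: coarse-grain by non-overlapping averaging, then SampEn."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)   # stand-in for an EMG baseline segment
mse = multiscale_entropy(white)
```

For white noise the entropy values stay positive and finite across these scales; distinguishing EMG discharge patterns would compare such curves between signal classes.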
ERIC Educational Resources Information Center
Walters, Glenn D.; Diamond, Pamela M.; Magaletta, Philip R.; Geyer, Matthew D.; Duncan, Scott A.
2007-01-01
The Antisocial Features (ANT) scale of the Personality Assessment Inventory (PAI) was subjected to taxometric analysis in a group of 2,135 federal prison inmates. Scores on the three ANT subscales--Antisocial Behaviors (ANT-A), Egocentricity (ANT-E), and Stimulus Seeking (ANT-S)--served as indicators in this study and were evaluated using the…
Multiscale analysis of depth images from natural scenes: Scaling in the depth of the woods
Chapeau-Blondeau, François
Multiscale analysis of depth images from natural scenes: Scaling in the depth of the woods. We analyze an ensemble of images from outdoor natural scenes, consisting of pairs of a standard gray-level luminance image associated with a depth image of the same scene, delivered…
Lui, John C.S.
Stochastic Modeling of Large-Scale Solid-State Storage Systems: Analysis, Design Tradeoffs and Optimization. Yongkun Li, Patrick P. C. Lee, John C. S. Lui, The Chinese University of Hong Kong. However, garbage collection (GC) poses additional writes that hinder I/O performance, while SSD blocks can only endure…
Children's Understanding of Large-Scale Mapping Tasks: An Analysis of Talk, Drawings, and Gesture
ERIC Educational Resources Information Center
Kotsopoulos, Donna; Cordy, Michelle; Langemeyer, Melanie
2015-01-01
This research examined how children represent motion in large-scale mapping tasks that we referred to as "motion maps". The underlying mathematical content was transformational geometry. In total, 19 children, 8- to 10-year-old, created motion maps and captured their motion maps with accompanying verbal description digitally. Analysis of…
LARGE SCALE DISASTER ANALYSIS AND MANAGEMENT: SYSTEM LEVEL STUDY ON AN INTEGRATED MODEL
The increasing intensity and scale of human activity across the globe, leading to severe depletion and deterioration of the Earth's natural resources, has meant that sustainability has emerged as a new paradigm of analysis and management. Sustainability, conceptually defined by the...
ERIC Educational Resources Information Center
Tucker-Drob, Elliot M.; Salthouse, Timothy A.
2009-01-01
Although factor analysis is the most commonly-used method for examining the structure of cognitive variable interrelations, multidimensional scaling (MDS) can provide visual representations highlighting the continuous nature of interrelations among variables. Using data (N = 8,813; ages 17-97 years) aggregated across 38 separate studies, MDS was…
ERIC Educational Resources Information Center
Davison, Mark L.; Kim, Se-Kang; Ding, Shuai
A model for test scores called the profile analysis via multidimensional scaling (PAMS) model is described. The model reparameterizes the linear latent variable model in such a way that the latent variables can be interpreted in terms of profile patterns, rather than factors. The model can serve as the basis for exploratory multidimensional…
Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth
Smyth, Padhraic
Data mining has evolved largely as a result of efforts by computer scientists to address the needs of… In this historical context, data mining to date has largely focused on computational and algorithmic issues rather…
Menut, Laurent
Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive… These models combine mathematical… and are used not only for photochemical air pollution episodes but also for multiyear simulations: (1) to estimate how pollution can be reduced [e.g., Vautard et al., 2005]; (2) to forecast air…
Analysis of image versus position, scale and direction reveals pattern texture anisotropy
NASA Astrophysics Data System (ADS)
Lehoucq, Roland; Weiss, Jerome; Dubrulle, Berengere; Amon, Axelle; Le Bouil, Antoine; Crassous, Jerome; Amitrano, David; Graner, Francois
2014-12-01
Pattern heterogeneities and anisotropies often carry significant physical information. We provide a toolbox which: (i) cumulates analysis in terms of position, direction and scale; (ii) is as general as possible; (iii) is simple and fast to understand, implement, execute and exploit. It consists of dividing the image into analysis boxes at a chosen scale; in each box an ellipse (the inertia tensor) is fitted to the signal and thus determines the direction in which the signal is more present. This tensor can be averaged in position and/or be used to study the dependence with scale. This choice is formally linked with Leray transforms and anisotropic wavelet analysis. Such a protocol is intuitively interpreted and consistent with what the eye detects: relevant scales, local variations in space, privileged directions. It is fast and parallelizable. Its several variants are adaptable to the user's data and needs. It is useful to statistically characterize anisotropies of 2D or 3D patterns in which individual objects are not easily distinguished, with only minimal pre-processing of the raw image, and more generally applies to data in higher dimensions. It is less sensitive to edge effects, and thus better adapted for a multiscale analysis down to small-scale boxes, than the pair correlation function or Fourier transform. Easy to understand and implement, it complements more sophisticated methods such as the Hough transform or diffusion tensor imaging. We use it on various fracture patterns (sea ice cover, thin sections of granite, granular materials) to pinpoint the maximal anisotropy scales. The results are robust to noise and to user choices. This toolbox could also prove useful for granular materials, hard condensed matter, geophysics, thin films, statistical mechanics, characterisation of networks, fluctuating amorphous systems, inhomogeneous and disordered systems, or medical imaging, among others.
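As an illustration of the box-plus-inertia-tensor protocol, the sketch below divides an image into boxes, fits the intensity-weighted second-moment (inertia) tensor in each, and reports the dominant direction and an anisotropy ratio. The box size, the synthetic striped test pattern, and the eigenvalue-ratio measure are our own illustrative choices, not the authors' toolbox.

```python
import numpy as np

def box_inertia(image, box=16):
    """For each box, fit the intensity-weighted inertia (second-moment)
    tensor; return dominant directions (radians) and anisotropy ratios."""
    h, w = image.shape
    dirs, ratios = [], []
    for i in range(0, h - box + 1, box):
        for j in range(0, w - box + 1, box):
            patch = image[i:i + box, j:j + box].astype(float)
            total = patch.sum()
            if total <= 0:
                continue                      # empty box: no direction
            ys, xs = np.mgrid[0:box, 0:box]
            yc = (patch * ys).sum() / total   # intensity centroid
            xc = (patch * xs).sum() / total
            dy, dx = ys - yc, xs - xc
            # second-moment (inertia) tensor of the signal in the box
            T = np.array([[(patch * dx * dx).sum(), (patch * dx * dy).sum()],
                          [(patch * dx * dy).sum(), (patch * dy * dy).sum()]]) / total
            evals, evecs = np.linalg.eigh(T)  # eigenvalues ascending
            major = evecs[:, 1]               # direction of largest spread
            dirs.append(np.arctan2(major[1], major[0]))
            ratios.append(evals[1] / max(evals[0], 1e-12))
    return np.array(dirs), np.array(ratios)

# synthetic test image: one bright horizontal line per 16x16 box, so every
# box should report a horizontal dominant direction and strong anisotropy
img = np.zeros((64, 64))
img[8::16, :] = 1.0
angles, anisotropy = box_inertia(img, box=16)
```

Averaging `angles` over position, or sweeping `box`, reproduces the position- and scale-dependence described in the abstract.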
NASA Astrophysics Data System (ADS)
Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas
2012-09-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M☉ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer, with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism at a level no less important than hydrodynamical differences between different dimensions.
NASA Astrophysics Data System (ADS)
Günther, Uwe; Zhuk, Alexander; Bezerra, Valdir B.; Romero, Carlos
2005-08-01
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R-1 and R4. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R-1 model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R4 model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R4 model.
NASA Astrophysics Data System (ADS)
Mutter, J. C.; Deraniyagala, S.; Mara, V.; Marinova, S.
2011-12-01
The study of the socio-economic impacts of natural disasters is still in its infancy. Social scientists have historically regarded natural disasters as exogenous or essentially random perturbations. More recent scholarship treats disaster shocks as endogenous, with pre-existing social, economic and political conditions determining the form and magnitude of disaster impacts. One apparently robust conclusion is that direct economic losses from natural disasters, like human losses, are larger (in relative terms) the poorer a country is; yet cross-country regressions show that disasters may accrue economic benefits due to new investments in productive infrastructure, especially if the investment is funded by externally provided capital (World Bank assistance, private donations, etc.) and does not deplete national savings or incur a debt burden. Some econometric studies also show that the quality of a country's institutions can mitigate the mortality effects of a disaster. The effects on income inequality are such that the poor suffer greater 'asset shocks' and may never recover from a disaster, leading to a widening of existing disparities. Natural disasters affect women more adversely than men in terms of life expectancy at birth. On average they kill more women than men, or kill women at a younger age than men, and the more so the stronger the disaster. The extent to which women are more likely to die than men, or to die at a younger age, from the immediate disaster impact or from post-disaster events depends not only on disaster strength itself but also on the socioeconomic status of women in the affected country. Existing research on the economic effects of disasters focuses almost exclusively on the impact on economic growth - the growth rate of GDP. GDP, however, is only a partial indicator of welfare, especially for countries that are in the lower ranks of development status.
Very poor communities are typically involved in subsistence-level activities or in the informal economy and will not register disaster setbacks in GDP accounts. The alterations to their lives can include loss of livelihood, loss of key assets such as livestock, loss of property and loss of savings, reduced life expectancy among survivors, increased poverty rates, increased inequality, greater subsequent maternal and child mortality (due to destruction of health care facilities), reduced educational attainment (lack of school buildings), increased gender-based violence and psychological ailments. Our study enhances this literature in two ways. Firstly, it examines the effects of disasters on human development and poverty using cross-country econometric analysis with indicators of welfare that go beyond GDP; we examine the impact of disasters on human development and absolute poverty. Secondly, we use Peak Ground Acceleration for earthquakes, a modified Palmer Drought Severity Index, and hurricane energy, rather than disaster event occurrence, to account for the severity of the disaster.
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation.
Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
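A minimal 1-D analogue of the advertised scheme (explicit fourth-order Runge-Kutta in time, second-order central differences in space) can be written in plain NumPy. This is a sketch of the numerical method only, not the distributed CUDA/MEX implementation; the focusing bright-soliton test case is our own choice.

```python
import numpy as np

def nlse_rhs(psi, dx):
    """RHS of i psi_t = -0.5 psi_xx - |psi|^2 psi (focusing cubic NLSE),
    second-order central differences, zero Dirichlet boundaries."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return 1j * (0.5 * lap + np.abs(psi)**2 * psi)

def rk4_step(psi, dt, dx):
    """Classic explicit fourth-order Runge-Kutta step."""
    k1 = nlse_rhs(psi, dx)
    k2 = nlse_rhs(psi + 0.5 * dt * k1, dx)
    k3 = nlse_rhs(psi + 0.5 * dt * k2, dx)
    k4 = nlse_rhs(psi + dt * k3, dx)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# bright soliton psi(x,0) = sech(x); the exact evolution is
# sech(x) * exp(i t / 2), so |psi| should stay put
x = np.linspace(-20, 20, 801)
dx = x[1] - x[0]
psi = (1.0 / np.cosh(x)).astype(complex)
dt = 0.2 * dx**2               # explicit scheme: dt limited by dx^2
for _ in range(500):
    psi = rk4_step(psi, dt, dx)
mass = dx * np.sum(np.abs(psi)**2)   # conserved L2 norm, exact value 2
```

Mass conservation and a stationary soliton profile are the standard sanity checks for such an integrator.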
Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 3-Surry Unit 1 Cycle 2
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using selected critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations in this report is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and to provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k_eff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of two reactor critical configurations for Surry Unit 1 Cycle 2. This unit and cycle were chosen for a previous analysis using a different methodology because detailed isotopics from multidimensional reactor calculations were available from the Virginia Power Company.
These data permitted a direct comparison of criticality calculations using the utility-calculated isotopics with those using the isotopics generated by the SCALE-4 SAS2H sequence. These reactor critical benchmarks have been reanalyzed using the methodology described above. The two benchmark critical calculations were the beginning-of-cycle (BOC) startup at hot, zero-power (HZP) conditions and an end-of-cycle (EOC) critical at hot, full-power (HFP) conditions. These calculations were used to check for consistency in the calculated results for different burnup, downtime, temperature, xenon, and boron conditions. The k_eff results were 1.0014 and 1.0113, respectively, with a standard deviation of 0.0005.
Wang, C.Y.; Ku, J.L.; Zeuch, W.R.
1984-03-01
This paper describes the ALICE-II analysis of and comparison with complex vessel experiments. Tests SM-2 through SM-5 were performed by SRI International in 1978 in studying the structural response of 1/20 scale models of the Clinch River Breeder Reactor to a simulated hypothetical core-disruptive accident. These experiments provided quality data for validating treatments of the nonlinear fluid-structure interactions and many complex excursion phenomena, such as flow through perforated structures, large material distortions, multi-dimensional sliding interfaces, flow around sharp corners, and highly contorted fluid boundaries. Correlations of the predicted pressures with the test results of all gauges are made. Wave characteristics and arrival times are also compared. Results show that the ALICE-II code predicts the pressure profile well. Despite the complexity, the code gave good results for the SM-5 test.
Scaling analysis for the direct reactor auxiliary cooling system for FHRs
Lv, Q.; Kim, I. H.; Sun, X.; Christensen, R. N.; Blue, T. E.; Yoder, G.; Wilson, D.; Sabharwall, P.
2015-04-01
The Direct Reactor Auxiliary Cooling System (DRACS) is a passive residual heat removal system proposed for the Fluoride-salt-cooled High-temperature Reactor (FHR) that combines the coated particle fuel and graphite moderator with a liquid fluoride salt as the coolant. The DRACS features three natural circulation/convection loops that rely on buoyancy as the driving force and are coupled via two heat exchangers, namely, the DRACS heat exchanger and the natural draft heat exchanger. A fluidic diode is employed to minimize the parasitic flow into the DRACS primary loop and correspondingly the heat loss to the DRACS during reactor normal operation, and to activate the DRACS in accidents when the reactor is shut down. While the DRACS concept has been proposed, there are no actual prototypic DRACS systems for FHRs built or tested in the literature. In this paper, a detailed scaling analysis for the DRACS is performed, which will provide guidance for the design of scaled-down DRACS test facilities. Based on the Boussinesq assumption and one-dimensional flow formulation, the governing equations are non-dimensionalized by introducing appropriate dimensionless parameters. The key dimensionless numbers that characterize the DRACS system are obtained from the non-dimensional governing equations. Based on the dimensionless numbers and non-dimensional governing equations, similarity laws are proposed. In addition, a scaling methodology has been developed, which consists of a core scaling and a loop scaling. The consistency between the core and loop scaling is examined via the reference volume ratio, which can be obtained from both the core and loop scaling processes. The scaling methodology and similarity laws have been applied to obtain a scientific design of a scaled-down high-temperature DRACS test facility.
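One similarity condition of the kind such a scaling analysis produces can be illustrated with the Richardson number of a single natural-circulation loop under the Boussinesq assumption. All numerical values below are made-up placeholders, not DRACS design data.

```python
# Illustrative nondimensionalization for one natural-circulation loop.
# Every number here is an assumed placeholder, not a DRACS parameter.
g = 9.81          # m/s^2, gravitational acceleration
beta = 2.0e-4     # 1/K, thermal expansion coefficient of the salt (assumed)
dT = 50.0         # K, hot-to-cold temperature difference (assumed)
L = 3.0           # m, elevation difference between thermal centers (assumed)
u = 0.2           # m/s, reference loop velocity (assumed)

# Richardson number: ratio of the buoyancy driving head to inertia.
# Matching Ri (among other dimensionless groups) between the prototype
# and a scaled-down facility is one similarity condition.
Ri = g * beta * dT * L / u**2
```

A full scaling analysis would match several such groups simultaneously (friction, heat-exchanger, and time-ratio numbers) across the coupled loops.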
New seismic attribute: Fractal scaling exponent based on gray detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Huang, Ya-Ping; Geng, Jian-Hua; Guo, Tong-Lou
2015-09-01
Seismic attributes have been widely used in oil and gas exploration and development. However, owing to the complexity of seismic wave propagation in subsurface media, the limitations of the seismic data acquisition system, and noise interference, seismic attributes used for seismic data interpretation have uncertainties. In particular, the antinoise ability of seismic attributes directly affects the reliability of seismic interpretations. Gray system theory is used in time series to minimize data randomness and increase data regularity. Detrended fluctuation analysis (DFA) can effectively reduce extrinsic data tendencies. In this study, by combining gray system theory and DFA, we propose a new method called gray detrended fluctuation analysis (GDFA) for calculating the fractal scaling exponent. We consider nonlinear time series generated by the Weierstrass function and add random noise to actual seismic data. Moreover, we discuss the antinoise ability of the fractal scaling exponent based on GDFA. The results suggest that the fractal scaling exponent calculated using the proposed method has good antinoise ability. We apply the proposed method to 3D poststack migration seismic data from southern China and compare fractal scaling exponents calculated using DFA and GDFA. The results suggest that the use of the GDFA-calculated fractal scaling exponent as a seismic attribute can match the known distribution of sedimentary facies.
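The DFA half of the method can be sketched generically (the gray-system preprocessing that distinguishes GDFA is not reproduced here): integrate the series, detrend linearly in windows at each scale, and read the fractal scaling exponent off a log-log fit.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Standard DFA-1: integrate the series, detrend linearly in windows
    of each scale, and fit the log-log slope of fluctuation vs scale."""
    y = np.cumsum(x - np.mean(x))          # the "profile"
    flucts = []
    for s in scales:
        n = len(y) // s
        f2 = []
        for k in range(n):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t))**2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
alpha_white = dfa_exponent(rng.standard_normal(4096))        # expect ~0.5
alpha_walk = dfa_exponent(np.cumsum(rng.standard_normal(4096)))  # expect ~1.5
```

The exponent separates uncorrelated noise (≈0.5) from strongly persistent signals (≈1.5), which is the property exploited when it is mapped as a seismic attribute.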
Straub, Anthony P; Lin, Shihong; Elimelech, Menachem
2014-10-21
We investigate the performance of pressure retarded osmosis (PRO) at the module scale, accounting for the detrimental effects of reverse salt flux, internal concentration polarization, and external concentration polarization. Our analysis offers insights on optimization of three critical operation and design parameters--applied hydraulic pressure, initial feed flow rate fraction, and membrane area--to maximize the specific energy and power density extractable in the system. For co- and counter-current flow modules, we determine that appropriate selection of the membrane area is critical to obtain a high specific energy. Furthermore, we find that the optimal operating conditions in a realistic module can be reasonably approximated using established optima for an ideal system (i.e., an applied hydraulic pressure equal to approximately half the osmotic pressure difference and an initial feed flow rate fraction that provides equal amounts of feed and draw solutions). For a system in counter-current operation with a river water (0.015 M NaCl) and seawater (0.6 M NaCl) solution pairing, the maximum specific energy obtainable using performance properties of commercially available membranes was determined to be 0.147 kWh per m(3) of total mixed solution, which is 57% of the Gibbs free energy of mixing. Operating to obtain a high specific energy, however, results in very low power densities (less than 2 W/m(2)), indicating that the trade-off between power density and specific energy is an inherent challenge to full-scale PRO systems. Finally, we quantify additional losses and energetic costs in the PRO system, which further reduce the net specific energy and indicate serious challenges in extracting net energy in PRO with river water and seawater solution pairings. PMID:25222561
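The ideal-system rules of thumb quoted above can be made concrete with a van 't Hoff estimate for the 0.015 M / 0.6 M pairing. The ideal-solution assumption and T = 298 K are ours, so the numbers are indicative only.

```python
# van 't Hoff estimate of the ideal PRO operating point for the
# river water (0.015 M NaCl) / seawater (0.6 M NaCl) pairing.
# Assumed: pi = i * c * R * T with i = 2 for NaCl, T = 298 K.
R = 8.314      # J/(mol K), gas constant
T = 298.0      # K, assumed temperature
i = 2.0        # van 't Hoff factor for fully dissociated NaCl

def osmotic_pressure_bar(c_molar):
    # c in mol/L -> mol/m^3; pressure in Pa -> bar
    return i * (c_molar * 1000.0) * R * T / 1e5

pi_draw = osmotic_pressure_bar(0.6)    # seawater, ~30 bar
pi_feed = osmotic_pressure_bar(0.015)  # river water, <1 bar
# ideal-system optimum cited in the abstract: applied hydraulic pressure
# of roughly half the osmotic pressure difference
dP_opt = 0.5 * (pi_draw - pi_feed)
```

The resulting ~14.5 bar is only the ideal-case starting point; the module-scale analysis in the paper shifts the optimum once reverse salt flux and concentration polarization are included.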
Image analysis of contact lens grading scales for objective grade assignment of ocular complications
NASA Astrophysics Data System (ADS)
Perez-Cabre, Elisabet; Millan, Maria S.; Abril, Hector C.; Valencia, Edison
2005-06-01
Ocular complications in contact lens wearers are usually graded by specialists using visual inspection and comparing with a standard reference. The standard grading scales consist of either a set of illustrations or photographs ordered from a normal situation to a severe complication. Usually, visual inspection based on comparison with standards yields results that may differ from one specialist to another due to contour conditions or personal appreciation, causing a lack of objectiveness in the assessment of an ocular disorder. We aim to develop a method for an objective assessment of two contact lens wear complications: conjunctiva hyperemia and papillary conjunctivitis. In this work, we start by applying different image processing techniques to two standard grading scales (Efron and Cornea and Contact Lens Research Unit-CCLRU grading scales). Given a set of standard illustrations or pictures, image pre-processing is needed to compare equivalent areas. Histogram analysis allows segmenting vessel and background pixel populations, which are used to determine features, such as total area of vessels and vessel length, in the measurement of contact lens effects. In some cases, the colour content of standard series can be crucial to obtain a correct assessment. Thus, colour image analysis techniques are used to extract the most relevant features. The procedure to obtain an automatic grading method by digital image analysis of standard grading scales is described.
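As a sketch of the step "histogram analysis allows segmenting vessel and background pixel populations," the following applies Otsu's between-class-variance threshold to a synthetic patch. Otsu is our assumed stand-in, since the abstract does not name the segmentation rule, and the vessel/background gray levels are invented.

```python
import numpy as np

def otsu_threshold(gray):
    """Histogram-based (Otsu) split of vessel vs background pixels
    for intensities in [0, 1]."""
    hist, edges = np.histogram(gray.ravel(), bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    bins = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for k in range(1, 256):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * bins[:k]).sum() / w0   # class means
        m1 = (p[k:] * bins[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1)**2         # between-class variance
        if var > best_var:
            best_var, best_t = var, bins[k]
    return best_t

# synthetic grading-scale patch: a dark vessel (0.2) on a light
# background (0.8); both gray levels are invented for illustration
img = np.full((64, 64), 0.8)
img[30:34, :] = 0.2                  # a horizontal "vessel"
t = otsu_threshold(img)
vessel_area = np.mean(img < t)       # fraction of vessel pixels
```

From the segmented mask, features such as total vessel area and vessel length can then be measured and mapped onto the grading scale.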
ERIC Educational Resources Information Center
Bredemeier, Keith; Spielberg, Jeffery M.; Silton, Rebecca Levin; Berenbaum, Howard; Heller, Wendy; Miller, Gregory A.
2010-01-01
The present study examined the utility of the anhedonic depression scale from the Mood and Anxiety Symptoms Questionnaire (MASQ-AD scale) as a way to screen for depressive disorders. Using receiver-operating characteristic analysis, we examined the sensitivity and specificity of the full 22-item MASQ-AD scale, as well as the 8- and 14-item…
ERIC Educational Resources Information Center
Durham, Thomas W.
1982-01-01
Administered the Hopelessness Scale to criminal psychiatric inpatients, general psychiatric inpatients, and college students. Both psychiatric groups endorsed significantly more items in the hopeless direction. Found the scale more reliable with the psychiatric patients. Item analysis of the Hopelessness Scale suggests that three items were not…
PWR core and spent fuel pool analysis using scale and nestle
Murphy, J. E.; Maldonado, G. I.; St Clair, R.; Orr, D.
2012-07-01
The SCALE nuclear analysis code system [SCALE, 2011], developed and maintained at Oak Ridge National Laboratory (ORNL), is widely recognized as high quality software for analyzing nuclear systems. The SCALE code system is composed of several validated computer codes and methods with standard control sequences, such as the TRITON/NEWT lattice physics sequence, which supplies dependable and accurate analyses for industry, regulators, and academia. Although TRITON generates energy-collapsed and space-homogenized few group cross sections, SCALE does not include a full-core nodal neutron diffusion simulation module. However, in the past few years, the open-source NESTLE core simulator [NESTLE, 2003], originally developed at North Carolina State Univ. (NCSU), has been updated and upgraded via collaboration between ORNL and the Univ. of Tennessee (UT), so it now has an increasingly seamless coupling to the TRITON/NEWT lattice physics [Galloway, 2010]. This study presents the methodology used to couple lattice physics data between TRITON and NESTLE in order to perform a three-dimensional full-core analysis employing a 'real-life' Duke Energy PWR as the test bed. The focus for this step was to compare the key parameters of core reactivity and radial power distribution versus plant data. Following the core analysis of a three-cycle burn, a spent fuel pool analysis was performed using information generated from NESTLE for the discharged bundles and was compared to Duke Energy spent fuel pool models. The KENO control module from SCALE was employed for this latter stage of the project. (authors)
Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2015-02-01
We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows effects for a pair of variables to be distinguished from different temporal perspectives. The latter features make the method a significant improvement over standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied to selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between the variables of interest varies strongly across scales.
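The core of this framework can be sketched compactly: at each scale s, detrended variances and covariances of the integrated profiles are accumulated window by window, and their ratio gives a scale-specific regression slope. The pure-Python sketch below (names and the simple non-overlapping windowing are illustrative assumptions, not the paper's exact estimator) shows the idea at a single scale.

```python
import random

def _profile(x):
    """Integrated (cumulative) profile of the demeaned series."""
    m = sum(x) / len(x)
    out, c = [], 0.0
    for v in x:
        c += v - m
        out.append(c)
    return out

def _detrend(w):
    """Residuals of a least-squares linear fit to window w."""
    n = len(w)
    tbar = (n - 1) / 2.0
    wbar = sum(w) / n
    denom = sum((t - tbar) ** 2 for t in range(n))
    slope = sum((t - tbar) * (w[t] - wbar) for t in range(n)) / denom
    return [w[t] - (wbar + slope * (t - tbar)) for t in range(n)]

def dfa_beta(x, y, s):
    """Scale-specific regression slope beta(s) = F2_xy(s) / F2_xx(s),
    built from window-wise detrended variances and covariances."""
    X, Y = _profile(x), _profile(y)
    f_xx = f_xy = 0.0
    for start in range(0, len(X) - s + 1, s):
        rx = _detrend(X[start:start + s])
        ry = _detrend(Y[start:start + s])
        f_xx += sum(a * a for a in rx) / s
        f_xy += sum(a * b for a, b in zip(rx, ry)) / s
    return f_xy / f_xx

# Illustrative check: y is (noisily) 2*x, so beta(s) should be close to 2.
rng = random.Random(7)
x = [rng.gauss(0, 1) for _ in range(600)]
y = [2.0 * xi + 0.2 * rng.gauss(0, 1) for xi in x]
beta = dfa_beta(x, y, 20)
```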
Multifractal and cumulant analysis of small scale turbulent structures in MODIS reflectance
NASA Astrophysics Data System (ADS)
Schmitt, F. G.; Loisel, H.
2011-12-01
For a better understanding of turbulence effects on physical and biological oceanic processes, we analyze the spatial-scale variability of satellite ocean color and sea surface temperature remote sensing data collected by the MODIS-Aqua sensor. We use level 2 GAC (Global Area Coverage) products with a nominal 1.1 km × 1.1 km resolution. The main objective of this study is to characterize the scaling properties of the spatial small-scale variability of the chlorophyll concentration, Chl, and sea surface temperature, SST. Different oceanic areas covering various environmental conditions have been selected for that purpose: the Eastern Mediterranean Sea (oligotrophic), the trophic Argentinian coast, and the Mauritanian upwelling area with the highest chlorophyll concentration. We first performed power spectral analysis and selected images for which scaling was found in the Fourier power spectra. We then estimated spatial structure functions of various moment orders and considered the multifractal moment functions that can be extracted for each parameter, using structure functions as well as the Empirical Mode Decomposition approach. We also used cumulant approaches to assess the statistics of fluctuations. A passive-scalar intermittent turbulence theoretical framework is used for comparison. The aim is ultimately to use such a framework to estimate the scales at which chlorophyll data behave as a passive scalar versus the scales at which biological activity is dominant.
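Structure-function scaling analysis of this kind is easy to sketch: the q-th order structure function is the mean of |increments|^q at a given lag, and the scaling exponent ζ(q) is the log-log slope over a range of lags. A minimal pure-Python illustration (names assumed, not the authors' code):

```python
import math, random

def structure_function(x, lag, q):
    """q-th order structure function S_q(l) = mean(|x(i+l) - x(i)|**q)."""
    diffs = [abs(x[i + lag] - x[i]) ** q for i in range(len(x) - lag)]
    return sum(diffs) / len(diffs)

def scaling_exponent(x, q, lags):
    """Estimate zeta(q) as the log-log slope of S_q(l) versus the lag l."""
    pts = [(math.log(l), math.log(structure_function(x, l, q))) for l in lags]
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    num = sum((a - mx) * (b - my) for a, b in pts)
    den = sum((a - mx) ** 2 for a, _ in pts)
    return num / den

# Illustrative check on Brownian motion, where zeta(q) = q/2, so zeta(2) = 1.
rng = random.Random(3)
walk, pos = [], 0.0
for _ in range(2000):
    pos += rng.gauss(0, 1)
    walk.append(pos)
zeta2 = scaling_exponent(walk, 2, [1, 2, 4, 8, 16])
```

Departure of ζ(q) from a straight line in q is the signature of multifractality that the moment functions quantify.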
Challenges of Cardiac Image Analysis in Large-Scale Population-Based Studies
Medrano-Gracia, Pau; Cowan, Brett R.; Suinesiaputra, Avan
2015-01-01
Large-scale population-based imaging studies of preclinical and clinical heart disease are becoming possible due to the advent of standardized robust non-invasive imaging methods and infrastructure for big data analysis. This gives an exciting opportunity to gain new information about the development and progression of heart disease across population groups. However, the large amount of image data and prohibitive time required for image analysis present challenges for obtaining useful derived data from the images. Automated analysis tools for cardiac image analysis are only now becoming available. This paper reviews the challenges and possible solutions to the analysis of big imaging data in population studies. We also highlight the potential of recent large epidemiological studies using cardiac imaging to discover new knowledge on heart health and well-being. PMID:25648627
Hilbert-Huang Transform and Scaling Analysis of Various Geoscience data
NASA Astrophysics Data System (ADS)
Schmitt, Francois; Huang, Yongxiang
2014-05-01
In geoscience, field observation data are generally nonlinear and nonstationary. They also show multiscale properties, since different spatial and temporal scales are involved. Traditional methodologies, e.g., Fourier spectral analysis and structure-function analysis, are strongly influenced by nonlinear or nonstationary events. The Hilbert-Huang Transform (the combination of Empirical Mode Decomposition and Hilbert spectral analysis) handles both nonlinearity and nonstationarity efficiently. In this talk, we apply this Hilbert-based methodology to various geoscience data collected from different field observations to characterize their scaling properties. The collected data include daily river discharge, sea level, copepod abundance, wind energy, and coastal environmental data (temperature, dissolved oxygen). Scaling properties of these processes are then characterized in the framework of Hilbert spectral analysis. References: 1. Huang Y., Schmitt F., Lu Z. and Liu Y. 2008 Europhys. Lett. 84, 40010. 2. Huang Y., Schmitt F., Lu Z. and Liu Y. 2009 J. Hydrol. 373, 103-111. 3. Schmitt F.G., Huang Y., Lu Z., Liu Y. and Fernandez N. J. Mar. Sys., 2009, 77, 473-481. 4. Schmitt F.G., Huang Y., Lu Z., Zongo S.B., Molinero J.C. and Liu Y. Nonlinear Dynamics in Geosciences, edited by A. Tsonis and J. Elsner, Springer, 2007, 261-280. 5. Huang Y. and Schmitt F.G. J. Mar. Sys., 2014, 130, 90-100. 6. Calif R., Schmitt F.G. and Huang Y. Physica A, 2013, 392, 4106-4120.
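The Empirical Mode Decomposition step of the Hilbert-Huang Transform can be illustrated with a single sifting iteration: local maxima and minima are joined by envelopes, and the envelope mean is subtracted from the signal. The sketch below deliberately simplifies the standard algorithm, using piecewise-linear envelopes instead of cubic splines and a fixed sifting count instead of a stopping criterion; all names are illustrative.

```python
import math

def _interp(anchors, n):
    """Piecewise-linear interpolation through (index, value) anchors onto 0..n-1."""
    out, j = [], 0
    for i in range(n):
        while j < len(anchors) - 2 and anchors[j + 1][0] <= i:
            j += 1
        (i0, v0), (i1, v1) = anchors[j], anchors[j + 1]
        t = (i - i0) / (i1 - i0) if i1 != i0 else 0.0
        out.append(v0 + t * (v1 - v0))
    return out

def sift_once(x):
    """One EMD sifting step: subtract the mean of the max/min envelopes."""
    n = len(x)
    maxima = [(i, x[i]) for i in range(1, n - 1) if x[i - 1] < x[i] >= x[i + 1]]
    minima = [(i, x[i]) for i in range(1, n - 1) if x[i - 1] > x[i] <= x[i + 1]]
    upper = _interp([(0, x[0])] + maxima + [(n - 1, x[-1])], n)
    lower = _interp([(0, x[0])] + minima + [(n - 1, x[-1])], n)
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]

# A fast oscillation riding on a slow trend: repeated sifting isolates the
# oscillation (the first intrinsic mode function) and removes the trend.
n = 256
x = [math.sin(2 * math.pi * 8 * i / n) + 0.5 * i / n for i in range(n)]
imf = x
for _ in range(6):
    imf = sift_once(imf)
```

In the full method, successive IMFs are extracted and each is passed through a Hilbert transform to obtain instantaneous amplitudes and frequencies, from which the Hilbert marginal spectrum and its scaling are estimated.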
Real-time Visualisation and Analysis of Tera-scale Datasets
NASA Astrophysics Data System (ADS)
Fluke, Christopher J.
2015-03-01
As we move ever closer to the Square Kilometre Array era, support for real-time, interactive visualisation and analysis of tera-scale (and beyond) data cubes will be crucial for on-going knowledge discovery. However, the data-on-the-desktop approach to analysis and visualisation that most astronomers are comfortable with will no longer be feasible: tera-scale data volumes exceed the memory and processing capabilities of standard desktop computing environments. Instead, there will be an increasing need for astronomers to utilise remote high performance computing (HPC) resources. In recent years, the graphics processing unit (GPU) has emerged as a credible, low cost option for HPC. A growing number of supercomputing centres are now investing heavily in GPU technologies to provide O(100) Teraflop/s processing. I describe how a GPU-powered computing cluster allows us to overcome the analysis and visualisation challenges of tera-scale data. With a GPU-based architecture, we have moved the bottleneck from processing-limited to bandwidth-limited, achieving exceptional real-time performance for common visualisation and data analysis tasks.
Analysis of 2H-Evaporator Scale Pot Bottom Sample [HTF-13-11-28H]
Oji, L. N.
2013-07-15
Savannah River Remediation (SRR) is planning to remove a buildup of sodium aluminosilicate scale from the 2H-evaporator pot by loading and soaking the pot with heated 1.5 M nitric acid solution. Sampling and analysis of the scale material from the 2H evaporator has been performed so that the evaporator can be chemically cleaned beginning in July 2013. Historically, since the start of operation of the Defense Waste Processing Facility (DWPF), silicon in the DWPF recycle stream has combined with aluminum in the typical tank farm supernate to form sodium aluminosilicate scale mineral deposits in the 2H-evaporator pot and gravity drain line. The 2H-evaporator scale samples analyzed by Savannah River National Laboratory (SRNL) came from the bottom cone sections of the 2H-evaporator pot. The sample holder from the 2H-evaporator wall was virtually empty and was not included in the analysis. It is worth noting that after the delivery of these 2H-evaporator scale samples to SRNL for analysis, the plant customer determined that the 2H evaporator could be operated for an additional period before requiring cleaning. Therefore, there was no need for the expedited sample analysis presented in the Technical Task Request. However, a second set of 2H-evaporator scale samples was expected in May 2013, which would need expedited sample analysis. X-ray diffraction (XRD) analysis confirmed the bottom cone section sample from the 2H-evaporator pot consisted of nitrated cancrinite (a crystalline sodium aluminosilicate solid), clarkeite, and uranium oxide. There were also mercury compound XRD peaks which could not be matched, and further X-ray fluorescence (XRF) analysis of the sample confirmed the presence of elemental mercury or mercuric oxide. On an 'as received' basis, the scale contained an average of 7.09E+00 wt % total uranium (n = 3; st. dev. = 8.31E-01 wt %) with a U-235 enrichment of 5.80E-01 % (n = 3; st. dev. = 3.96E-02 %).
The measured U-238 concentration was 7.05E+00 wt % (n = 3; st. dev. = 8.25E-01 wt %). Analysis results for Pu-238, Pu-239, and Pu-241 are 7.06E-05 ± 7.63E-06 wt %, 9.45E-04 ± 3.52E-05 wt %, and <2.24E-06 wt %, respectively. These results are provided so that SRR can calculate the equivalent uranium-235 concentrations for the NCSA. Because this 2H-evaporator pot bottom scale sample contained a significant amount of elemental mercury (11.7 wt % average), it is recommended that analysis for mercury be included in future Technical Task Requests on 2H-evaporator sample analysis at SRNL. Results confirm that the uranium contained in the scale remains depleted with respect to natural uranium. SRNL did not calculate an equivalent U-235 enrichment, which would take into account the other fissionable isotopes U-233, Pu-239, and Pu-241.
YMD: A microarray database for large-scale gene expression analysis
Cheung, Kei-Hoi; Gerstein, Mark
The use of microarray technology permits parallel analysis of the expression patterns of a large number of genes, such as with the Affymetrix GeneChip technology. Similar large-scale microarray database efforts are underway at other universities.
Extraction and analysis of the width, gray scale and radian in Chinese signature handwriting.
Chen, Xiaohong
2015-10-01
Forensic handwriting examination is a relevant identification process in forensic science. This research drew its ideas from the feature detection and analysis process used in forensic handwriting examination. A Chinese signature database was developed, comprising original signatures, freehand imitation forgeries, random forgeries, and tracing imitation forgeries. The features of width, gray scale, and radian, combined with stroke orders, were automatically extracted after image processing. A correlation coefficient was used to precisely characterize and express the similarities between signatures. To validate the differences between writers, a multivariate analysis of variance was employed. Canonical discriminant analysis was performed between the original and non-original signatures; cross-validation estimated the discriminating power of the width, gray scale, and radian data. The results suggest that the extraction and analysis of these properties in Chinese signatures is reasonable, and that forensic handwriting examination using the quantitative feature extraction and statistical analysis methods of this research performs satisfactorily in discriminant analysis. PMID:26209129
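The correlation-coefficient similarity measure described here is the ordinary Pearson coefficient applied to aligned feature sequences. A minimal sketch (the feature values below are invented purely for illustration):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length feature sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# A genuine repeat of a signature should correlate far more strongly with
# the original than a forgery does (illustrative stroke-width sequences).
original = [4.0, 5.5, 3.2, 6.1, 2.8]
repeat   = [4.2, 5.3, 3.4, 6.0, 3.0]
forgery  = [6.0, 3.1, 5.2, 2.9, 4.4]
```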
Item response analysis of a shortened German version of the morningness-eveningness scale.
Jordan, Pascal; Terschüren, Claudia; Harth, Volker
2015-11-01
A shortened version of the German adaptation of the morningness-eveningness scale of Horne and Östberg is analysed within a large sample of 994 physicians with respect to dimensionality, reliability, gender differences and validity. The psychometric analysis, which incorporates a highly robust method to check for unidimensionality, shows departures from unidimensionality and highlights three misfitting items. In addition, hypothesis testing indicates the presence of differential item functioning (DIF) with respect to gender, which could be caused by differences in response formats. Although reliability estimates are satisfactory, an overall lack of adequate psychometric properties of the scale within the population of physicians has to be reported. We derive suggestions for improvement of the original morningness-eveningness questionnaire (MEQ) scale and provide general comments on how to check for unidimensionality without imposing a restrictive response model. PMID:26375194
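A standard reliability estimate of the kind referred to above is Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch (the scores are invented for illustration; the paper does not state which coefficient it used):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items[i][j] is person j's score on item i."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Total score per person, summed over the k items.
    totals = [sum(it[j] for it in items) for j in range(len(items[0]))]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly consistent items: reliability should come out as 1.0.
scores = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(scores)
```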
High-throughput generation, optimization and analysis of genome-scale metabolic models.
Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.
2010-09-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking ~48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.
NASA Astrophysics Data System (ADS)
Priour, D. J.
2014-01-01
The percolation threshold for flow or conduction through voids surrounding randomly placed spheres is calculated. With large-scale Monte Carlo simulations, we give a rigorous continuum treatment to the geometry of the impenetrable spheres and the spaces between them. To properly exploit finite-size scaling, we examine multiple systems of differing sizes, with suitable averaging over disorder, and extrapolate to the thermodynamic limit. An order parameter based on the statistical sampling of stochastically driven dynamical excursions and amenable to finite-size scaling analysis is defined, calculated for various system sizes, and used to determine the critical volume fraction φc = 0.0317 ± 0.0004 and the correlation length exponent ν = 0.92 ± 0.05.
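The Monte Carlo side of such a calculation can be illustrated on a much simpler system than the paper's continuum void percolation: site percolation on a square lattice, where a union-find structure detects a spanning cluster. The sketch below estimates the spanning probability at a given occupation probability; for the square lattice the threshold is known to be near p ≈ 0.5927, and spanning-probability curves for different L cross near it, which is the finite-size scaling idea used in the abstract.

```python
import random

def spanning_prob(L, p, trials, seed=1):
    """Monte Carlo probability that occupied sites span top to bottom on an
    L x L square lattice (site percolation, detected with union-find)."""
    rng = random.Random(seed)
    TOP, BOT = L * L, L * L + 1   # virtual nodes wired to top/bottom rows
    hits = 0
    for _ in range(trials):
        parent = list(range(L * L + 2))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path compression
                a = parent[a]
            return a
        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
        occ = [rng.random() < p for _ in range(L * L)]
        for i in range(L * L):
            if not occ[i]:
                continue
            r, c = divmod(i, L)
            if r == 0:
                union(i, TOP)
            if r == L - 1:
                union(i, BOT)
            if c + 1 < L and occ[i + 1]:
                union(i, i + 1)
            if r + 1 < L and occ[i + L]:
                union(i, i + L)
        if find(TOP) == find(BOT):
            hits += 1
    return hits / trials

# Spanning is rare well below the threshold and almost certain well above it.
low = spanning_prob(16, 0.45, 100)
high = spanning_prob(16, 0.75, 100)
```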
NASA Astrophysics Data System (ADS)
Lin, Aijing; Ma, Hui; Shang, Pengjian
2015-10-01
Here we propose a new method, DH-MMA, based on multiscale multifractal detrended fluctuation analysis (MMA), to investigate scaling properties in stock markets. It is demonstrated that our approach provides a more stable and faithful description of scaling properties over a comprehensive range of scales, rather than fixing the window length and slide length, and that it allows the assessment of more universal and subtle scaling characteristics. We illustrate DH-MMA on power-law artificial data sets and six stock markets from the US and China. The US stocks exhibit very strong multifractality for positive values of q, whereas the Chinese stocks show stronger multifractality for negative q than for positive q. In general, the US stock markets show similar behaviors to one another, but the Chinese stock markets display distinguishing characteristics.
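The multifractal DFA that MMA generalizes across window lengths can be sketched as follows: detrended variances are computed in windows of size s, combined into a q-th order fluctuation function F_q(s), and the generalized Hurst exponent h(q) is the log-log slope of F_q(s) versus s; q-dependence of h(q) signals multifractality. This pure-Python sketch uses simple non-overlapping windows and is illustrative, not the authors' DH-MMA implementation.

```python
import math, random

def _profile(x):
    """Integrated (cumulative) profile of the demeaned series."""
    m = sum(x) / len(x)
    out, c = [], 0.0
    for v in x:
        c += v - m
        out.append(c)
    return out

def _detrend(w):
    """Residuals of a least-squares linear fit to window w."""
    n = len(w)
    tbar = (n - 1) / 2.0
    wbar = sum(w) / n
    denom = sum((t - tbar) ** 2 for t in range(n))
    slope = sum((t - tbar) * (w[t] - wbar) for t in range(n)) / denom
    return [w[t] - (wbar + slope * (t - tbar)) for t in range(n)]

def fluctuation(x, s, q):
    """q-th order MFDFA fluctuation function F_q(s), non-overlapping windows."""
    X = _profile(x)
    f2 = []
    for start in range(0, len(X) - s + 1, s):
        res = _detrend(X[start:start + s])
        f2.append(sum(r * r for r in res) / s)
    if q == 0:   # q = 0 is defined via a logarithmic average
        return math.exp(0.5 * sum(math.log(v) for v in f2) / len(f2))
    return (sum(v ** (q / 2.0) for v in f2) / len(f2)) ** (1.0 / q)

def hurst(x, q, scales):
    """Generalized Hurst exponent h(q): log-log slope of F_q(s) versus s."""
    pts = [(math.log(s), math.log(fluctuation(x, s, q))) for s in scales]
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    return sum((a - mx) * (b - my) for a, b in pts) / sum((a - mx) ** 2 for a, _ in pts)

# Illustrative check: uncorrelated noise has h(2) near 0.5.
rng = random.Random(11)
noise = [rng.gauss(0, 1) for _ in range(2048)]
h2 = hurst(noise, 2, [8, 16, 32, 64])
```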
A Boundary-Layer Scaling Analysis Comparing Complex And Flat Terrain
NASA Astrophysics Data System (ADS)
Fitton, George; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Lovejoy, Shaun
2013-04-01
A comparison of two boundary-layer (at approximately 50 m) wind datasets shows the existence of reproducible scaling behaviour at two very topographically different sites. The first test site was in Corsica, an island in the south of France, subject to both orographic and convective effects due to its mountainous terrain and close proximity to the sea, respectively. The data recorded in Corsica consisted of 10 Hz sonic anemometer velocities measured over a six-month period. The second site consists of measurements from the Growian experiment. The testing site for this experiment was also in close proximity to the sea; however, the surrounding terrain is very flat. The data in this experiment were recorded using propeller anemometers at 2.5 Hz. Although the resolution of the sonic anemometers was better, we found in both cases, using spectral methods, that the data were unusable at time-scales below one second. The scales that we discuss therefore range from one second to fourteen hours. In both cases three scaling subranges are observed. Starting from the lower frequencies, both datasets have a spectral exponent of approximately two from six hours to fourteen hours. Our first scaling analyses were done only on the Corsica dataset, and thus we proposed that this change in scaling was due to the orography: the steep slope of the hill on which the mast was positioned was causing the wind's orientation to be directed vertically. This implied that the vertical shears of the horizontal wind may scale as Bolgiano-Obukhov's 11/5 power law. Further analysis of the second (Growian) dataset revealed the same behaviour over the same time-scales. Since the Growian experiment was performed over nearly homogeneous terrain, our first hypothesis is questionable. Alternatively, we propose that for time-scales above six hours Taylor's hypothesis is no longer valid.
This implies that in order to observe the scaling properties of structures with eddy turnover times larger than six hours, direct measurements in space are necessary. Again in both cases, for time-scales from six hours down to an hour, we observed a scaling power law that resembled something between Kolmogorov's 5/3 and a -1 energy-production power law (a spectral exponent of 1.3). Finally, from one hour down to one second, two very different scaling behaviours occurred. For the Corsica dataset we observe a (close to) purely Kolmogorov 5/3 scaling subrange, suggesting surface-layer mixing is the dominant process. For the Growian dataset we observe a scaling subrange that is close to Bolgiano-Obukhov's 11/5, suggesting temperature plays a dominant role. Additionally, for the Growian dataset we found that temperature is an active scalar for time-scales above an hour, unlike for the Corsica dataset. This suggests that orographic effects may suppress convective forces over the large scales, resulting in different small-scale shear profiles in the cascade process. Given that we can reproduce this scaling behaviour within a multifractal framework, it will be of great interest to stochastically simulate the corresponding vector fields for the two situations in order to properly understand the physical meaning of our observations.
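The spectral exponents discussed above are estimated as log-log slopes of power spectra. A minimal sketch, using a naive O(n²) periodogram for self-containment (illustrative, not the authors' analysis pipeline); for Brownian-motion-like data the log-log slope is close to -2, i.e., a spectral exponent of 2:

```python
import cmath, math, random

def periodogram(x):
    """Naive O(n^2) periodogram; adequate for a short illustrative series."""
    n = len(x)
    m = sum(x) / n
    xs = [v - m for v in x]
    power = []
    for k in range(1, n // 2):
        z = sum(xs[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(z) ** 2 / n)
    return power   # power at frequency indices k = 1 .. n//2 - 1

def spectral_slope(x):
    """Log-log slope of the power spectrum; the spectral exponent is -slope."""
    p = periodogram(x)
    pts = [(math.log(k + 1), math.log(p[k])) for k in range(len(p))]
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    return sum((a - mx) * (b - my) for a, b in pts) / sum((a - mx) ** 2 for a, _ in pts)

# Illustrative check: a random walk has a spectral exponent near 2.
rng = random.Random(5)
walk, pos = [], 0.0
for _ in range(512):
    pos += rng.gauss(0, 1)
    walk.append(pos)
slope = spectral_slope(walk)
```

In practice one fits separate slopes within each candidate subrange (e.g., one second to one hour, one to six hours) to distinguish 5/3, 11/5, and -1 regimes.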
Water-Level Data Analysis for the Saturated Zone Site-Scale Flow and Transport Model
K. Rehfeldt
2004-10-08
This report is an updated analysis of water-level data performed to provide the "Saturated Zone Site-Scale Flow Model" (BSC 2004 [DIRS 170037]) (referred to as the saturated zone (SZ) site-scale flow model or site-scale SZ flow model in this report) with the configuration of the potentiometric surface, target water-level data, and hydraulic gradients for calibration of groundwater flow models. This report also contains an expanded discussion of uncertainty in the potentiometric-surface map. The analysis of the potentiometric data presented in Revision 00 of this report (USGS 2001 [DIRS 154625]) provides the configuration of the potentiometric surface, target heads, and hydraulic gradients for the calibration of the SZ site-scale flow model (BSC 2004 [DIRS 170037]). Revision 01 of this report (USGS 2004 [DIRS 168473]) used updated water-level data for selected wells through the year 2000 as the basis for estimating water-level altitudes and the potentiometric surface in the SZ site-scale flow and transport model domain based on an alternative interpretation of perched water conditions. That revision developed computer files containing: water-level data within the model area (DTN: GS010908312332.002); a table of known vertical head differences (DTN: GS010908312332.003); and a potentiometric-surface map (DTN: GS010608312332.001) using an alternative concept from that presented by USGS (2001 [DIRS 154625]) for the area north of Yucca Mountain. The updated water-level data presented in USGS (2004 [DIRS 168473]) include data obtained from the Nye County Early Warning Drilling Program (EWDP) Phases I and II and data from Borehole USW WT-24. This document is based on Revision 01 (USGS 2004 [DIRS 168473]) and expands the discussion of uncertainty in the potentiometric-surface map. This uncertainty assessment includes an analysis of the impact of more recent water-level data and the impact of adding data from the EWDP Phases III and IV wells.
In addition to being utilized by the SZ site-scale flow model, the water-level data and potentiometric-surface map contained within this report will be available to other government agencies and water users for groundwater management purposes. The potentiometric surface defines an upper boundary of the site-scale flow model and provides information useful to estimation of the magnitude and direction of lateral groundwater flow within the flow system. Therefore, the analysis documented in this revision is important to SZ flow and transport calculations in support of total system performance assessment (TSPA).
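The magnitude and direction of lateral groundwater flow mentioned above follow from the hydraulic gradient, which can be estimated from water-level data by fitting a plane through the heads at three wells (the classical three-point problem). A minimal sketch with invented coordinates and heads, not data from this report:

```python
def hydraulic_gradient(wells):
    """Fit the plane h = a*x + b*y + c through three (x, y, head) wells and
    return the head-gradient components (a, b); flow is downgradient, -(a, b)."""
    (x1, y1, h1), (x2, y2, h2), (x3, y3, h3) = wells
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    a = ((h2 - h1) * (y3 - y1) - (h3 - h1) * (y2 - y1)) / det
    b = ((x2 - x1) * (h3 - h1) - (x3 - x1) * (h2 - h1)) / det
    return a, b

# Heads drop 1 m over 1000 m toward +x: gradient magnitude 0.001,
# lateral flow directed toward +x (coordinates in m, heads in m).
wells = [(0.0, 0.0, 100.0), (1000.0, 0.0, 99.0), (0.0, 1000.0, 100.0)]
a, b = hydraulic_gradient(wells)
```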