PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide and slope failure are three-dimensional movement phenomena, so a three-dimensional treatment makes stability easier to understand. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. The paper also presents the transverse distributive method of restraining force used for planning ground stabilization works, on the basis of an example analysis.
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
NASA Astrophysics Data System (ADS)
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
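The null-space view of the Buckingham Pi theorem that underlies this restated form can be sketched in a few lines: the exponent vectors of the dimensionless groups form a basis of the null space of the dimension matrix. The sketch below uses a simple pendulum rather than the authors' pipe-flow case, and illustrates only the theorem, not the ridge-function hypothesis test; all variable names are illustrative.

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form over the rationals; returns (matrix, pivot columns)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = M[r][c]
        M[r] = [x / inv for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return M, pivots

def pi_groups(dim_matrix):
    """Exponent vectors of the dimensionless groups: a null-space basis of the
    dimension matrix (columns = physical variables, rows = base dimensions)."""
    M = [[Fraction(x) for x in row] for row in dim_matrix]
    R, pivots = rref(M)
    cols = len(M[0])
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[free] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -R[i][free]
        basis.append(v)
    return basis

# Simple pendulum (not the paper's pipe-flow example): variables T, l, g, m.
# Each column lists the exponents of the base dimensions M, L, T for one variable.
dims = [
    [0, 0, 0, 1],   # mass dimension
    [0, 1, 1, 0],   # length dimension
    [1, 0, -2, 0],  # time dimension
]
groups = pi_groups(dims)  # one group: exponents of T^2 * g / l
```

With one more variable than the rank of the dimension matrix, exactly one dimensionless group survives, recovering the familiar pendulum scaling.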
A Bootstrap Generalization of Modified Parallel Analysis for IRT Dimensionality Assessment
ERIC Educational Resources Information Center
Finch, Holmes; Monahan, Patrick
2008-01-01
This article introduces a bootstrap generalization to the Modified Parallel Analysis (MPA) method of test dimensionality assessment using factor analysis. This methodology, based on the use of Marginal Maximum Likelihood nonlinear factor analysis, provides for the calculation of a test statistic based on a parametric bootstrap using the MPA…
Kim, Hak-Jin; Kim, Bong Chul; Kim, Jin-Geun; Zhengguo, Piao; Kang, Sang Hoon; Lee, Sang-Hwy
2014-03-01
The objective of this study was to determine a reliable midsagittal (MS) reference plane, in practical ways, for three-dimensional craniofacial analysis on three-dimensional computed tomography images. Five normal human dry skulls and 20 normal subjects without any dysmorphoses or asymmetries were used. The accuracy and stability of repeated plane construction for almost every possible candidate MS plane based on skull-base structures were examined by comparing the discrepancies in distances and orientations from the reference points and planes of the skull base and facial bones on three-dimensional computed tomography images. The following reference points of these planes were stable, and their distribution was balanced: nasion and foramen cecum at the anterior part of the skull base, sella at the middle part, and basion and opisthion at the posterior part. The candidate reference planes constructed using the aforementioned reference points were thought to be reliable for use as an MS reference plane for the three-dimensional analysis of maxillofacial dysmorphosis.
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
A variable-order laminated plate theory based on the variational-asymptotical method
NASA Technical Reports Server (NTRS)
Lee, Bok W.; Sutyrin, Vladislav G.; Hodges, Dewey H.
1993-01-01
The variational-asymptotical method is a mathematical technique by which the three-dimensional analysis of laminated plate deformation can be split into a linear, one-dimensional, through-the-thickness analysis and a nonlinear, two-dimensional, plate analysis. The elastic constants used in the plate analysis are obtained from the through-the-thickness analysis, along with approximate, closed-form three-dimensional distributions of displacement, strain, and stress. In this paper, a theory based on this technique is developed which is capable of approximating three-dimensional elasticity to any accuracy desired. The asymptotical method allows for the approximation of the through-the-thickness behavior in terms of the eigenfunctions of a certain Sturm-Liouville problem associated with the thickness coordinate. These eigenfunctions contain all the necessary information about the nonhomogeneities along the thickness coordinate of the plate and thus possess the appropriate discontinuities in the derivatives of displacement. The theory is presented in this paper along with numerical results for the eigenfunctions of various laminated plates.
NASA Astrophysics Data System (ADS)
Fan, Qingju; Wu, Yonghong
2015-08-01
In this paper, we develop a new method for the multifractal characterization of two-dimensional nonstationary signals, based on detrended fluctuation analysis (DFA). By applying it to two artificially generated signals, a two-component ARFIMA process and a binomial multifractal model, we show that the new method can reliably determine the multifractal scaling behavior of two-dimensional signals. We also illustrate applications of the method in finance and physiology. The results show that the two-dimensional signals under investigation exhibit power-law correlations; the electricity-market signal, consisting of electricity price and trading volume, is multifractal, while the two-dimensional EEG signal recorded during sleep for a single patient is weakly multifractal. The new method based on detrended fluctuation analysis may add diagnostic power to existing statistical methods.
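As a rough illustration of the one-dimensional building block behind this approach, the sketch below implements standard DFA in plain Python: cumulative profile, piecewise linear detrending in non-overlapping windows, and the log-log slope of the fluctuation function. It is not the authors' two-dimensional or multifractal algorithm; the series length, scales, and white-noise test signal are illustrative choices.

```python
import math
import random

def dfa_alpha(x, scales):
    """Estimate the DFA scaling exponent alpha of a 1-D series."""
    mean = sum(x) / len(x)
    # profile: cumulative sum of the mean-removed series
    y, s = [], 0.0
    for v in x:
        s += v - mean
        y.append(s)
    log_n, log_f = [], []
    for n in scales:
        sse, count = 0.0, 0
        for start in range(0, len(y) - n + 1, n):
            seg = y[start:start + n]
            t = list(range(n))
            # closed-form least-squares linear detrend of the segment
            tm, sm = sum(t) / n, sum(seg) / n
            num = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
            den = sum((ti - tm) ** 2 for ti in t)
            b = num / den
            a = sm - b * tm
            sse += sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg))
            count += n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sse / count)))
    # alpha = slope of log F(n) versus log n
    lm, fm = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    return (sum((l - lm) * (f - fm) for l, f in zip(log_n, log_f))
            / sum((l - lm) ** 2 for l in log_n))

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128])  # near 0.5 for uncorrelated noise
```

For uncorrelated noise the exponent should sit near 0.5; long-range correlated signals push it toward 1 and beyond, which is the behavior the two-dimensional extension quantifies.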
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral
2002-01-01
An aeroelastic analysis system for flutter and forced response analysis of turbomachines, based on a two-dimensional linearized unsteady Euler solver, has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two-dimensional aerodynamic model with a three-dimensional structural model. The code was modified to include forced response capability, and the formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D, is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE (ASTROP2 using Linearized Euler aerodynamics), is validated by comparing its predictions with those obtained using linear unsteady aerodynamic solutions.
ERIC Educational Resources Information Center
Svetina, Dubravka
2013-01-01
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in noncompensatory multidimensional item response models using dimensionality assessment procedures based on DETECT (dimensionality evaluation to enumerate contributing traits) and NOHARM (normal ogive harmonic analysis robust method). Five…
Dimensionality Analysis of "CBAL"™ Writing Tests. Research Report. ETS RR-13-10
ERIC Educational Resources Information Center
Fu, Jianbin; Chung, Seunghee; Wise, Maxwell
2013-01-01
The Cognitively Based Assessment of, for, and as Learning ("CBAL"™) research initiative is aimed at developing an innovative approach to K-12 assessment based on cognitive competency models. Because the choice of scoring and equating approaches depends on test dimensionality, the dimensional structure of CBAL tests must be understood.…
NASA Astrophysics Data System (ADS)
Dong, S.; Yan, Q.; Xu, Y.; Bai, J.
2018-04-01
In order to promote the construction of a digital geospatial framework in China and to accelerate the development of an informatized mapping system, the three-dimensional geographic information model has emerged. Models based on oblique photogrammetry offer higher accuracy, shorter production periods, and lower cost than traditional methods, and reflect the elevation, position, and appearance of features more directly. Production technology for such models is developing rapidly; market demand and model deliverables have grown substantially, and the associated quality-inspection needs are growing accordingly. A review of the literature shows extensive research on the basic principles and technical characteristics of oblique photogrammetry, but relatively little on quality inspection and analysis. After summarizing the basic principles and technical characteristics of the technology, this paper introduces the inspection contents and inspection methods for three-dimensional geographic information models based on oblique photogrammetry. Drawing on actual inspection work, it summarizes the quality problems of such models, analyzes the causes of the problems, and puts forward quality-control measures. It thereby provides technical guidance for the quality inspection of these data products in China and technical support for the continued development of oblique-photogrammetry-based three-dimensional geographic information models.
Reaction-Infiltration Instabilities in Fractured and Porous Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Anthony
In this project we are developing a multiscale analysis of the evolution of fracture permeability, using numerical simulations and linear stability analysis. Our simulations include fully three-dimensional simulations of the fracture topography, fluid flow, and reactant transport, two-dimensional simulations based on aperture models, and linear stability analysis.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
A frequency-based window width optimized two-dimensional S-Transform profilometry
NASA Astrophysics Data System (ADS)
Zhong, Min; Chen, Feng; Xiao, Chao
2017-11-01
A new scheme, a frequency-based window-width-optimized two-dimensional S-transform profilometry, is proposed, in which parameters pu and pv control the width of a two-dimensional Gaussian window. Unlike the standard two-dimensional S-transform, whose Gaussian window width is proportional to the reciprocal of the local frequency of the tested signal, the optimized two-dimensional S-transform has a window width that varies with the pu-th (pv-th) power of the reciprocal local frequency fx (fy) in the x (y) direction. The paper gives a detailed theoretical analysis of the optimized two-dimensional S-transform in fringe analysis, as well as the characteristics of the modified Gaussian window. Simulations are applied to evaluate the proposed scheme; the results show that the new scheme has better noise-reduction ability and can extract phase distributions more precisely than the standard two-dimensional S-transform, even when the surface of the measured object varies sharply. Finally, the proposed scheme is demonstrated on three-dimensional surface reconstruction of a complex plastic cat mask to show its effectiveness.
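A minimal one-dimensional analogue of the windowing idea can be sketched as follows: a direct-evaluation discrete S-transform whose Gaussian width scales as 1/f^p, with p playing the role of the paper's pu/pv. The O(N^2) evaluation, the test frequency, and all parameter values are illustrative assumptions, not the paper's two-dimensional implementation.

```python
import cmath
import math

def stransform_point(h, tau, k, p=1.0):
    """|S(tau, f_k)| of a generalized discrete S-transform whose Gaussian
    window width is proportional to 1 / f^p (p = 1 recovers the standard ST)."""
    N = len(h)
    f = k / N                        # frequency in cycles per sample
    sigma = 1.0 / (f ** p)           # frequency-dependent window width
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    acc = 0.0 + 0.0j
    for t in range(N):
        w = norm * math.exp(-((tau - t) ** 2) / (2.0 * sigma ** 2))
        acc += h[t] * w * cmath.exp(-2j * math.pi * f * t)
    return abs(acc)

# single-tone test signal at 10 cycles per record
N = 128
h = [math.cos(2.0 * math.pi * 10 * t / N) for t in range(N)]
# local spectrum at the record center: magnitude peaks at the tone frequency
spectrum = [stransform_point(h, tau=64, k=k) for k in range(1, 30)]
peak_k = 1 + max(range(len(spectrum)), key=spectrum.__getitem__)
```

Raising p widens the window faster at low frequencies, trading time resolution for frequency resolution, which is the trade-off the pu/pv parameters tune per direction in the 2-D scheme.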
Impact of comprehensive two-dimensional gas chromatography with mass spectrometry on food analysis.
Tranchida, Peter Q; Purcaro, Giorgia; Maimone, Mariarosa; Mondello, Luigi
2016-01-01
Comprehensive two-dimensional gas chromatography with mass spectrometry has been on the separation-science scene for about 15 years. This three-dimensional method has made a great positive impact on various fields of research, and among these, that related to food analysis is certainly at the forefront. The present critical review is based on the use of comprehensive two-dimensional gas chromatography with mass spectrometry in the untargeted (general qualitative profiling and fingerprinting) and targeted analysis of food volatiles; attention is focused not only on its potential in such applications, but also on how recent advances in the technique will potentially be important for food analysis. Additionally, emphasis is devoted to the many instances in which straightforward gas chromatography with mass spectrometry is a sufficiently powerful analytical tool. Finally, possible future scenarios for comprehensive two-dimensional gas chromatography with mass spectrometry in food analysis are discussed.
NASA Technical Reports Server (NTRS)
Love, Eugene S.
1957-01-01
An analysis has been made of available experimental data to show the effects of most of the variables that are predominant in determining base pressure at supersonic speeds. The analysis covers base pressures for two-dimensional airfoils and for bodies of revolution with and without stabilizing fins, and is restricted to turbulent boundary layers. The present status of available experimental information is summarized, as are the existing methods for predicting base pressure. A simple semiempirical method is presented for estimating base pressure. For two-dimensional bases, this method stems from an analogy established between the base-pressure phenomena and the peak pressure rise associated with the separation of the boundary layer. An analysis made for axially symmetric flow indicates that the base pressure for bodies of revolution is subject to the same analogy. Based upon the methods presented, estimations are made of such effects as Mach number, angle of attack, boattailing, fineness ratio, and fins. These estimations give fair predictions of experimental results.
A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package
ERIC Educational Resources Information Center
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.
2013-01-01
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or infinite-dimensional, as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
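The alpha-cut machinery mentioned above can be illustrated with a deliberately small sketch: a triangular fuzzy parameter is cut at several membership levels and pushed through a response function by endpoint evaluation. This endpoint shortcut is valid only for monotone responses, and the paper's HDMR metamodel is not reproduced here; the stiffness/mass example and all numbers are hypothetical.

```python
import math

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def propagate(tri, func, alphas):
    """Propagate a fuzzy parameter through a monotone response function by
    evaluating the alpha-cut interval endpoints (vertex method, 1 variable)."""
    out = {}
    for al in alphas:
        lo, hi = alpha_cut(tri, al)
        y1, y2 = func(lo), func(hi)
        out[al] = (min(y1, y2), max(y1, y2))
    return out

# hypothetical fuzzy stiffness k ~ (900, 1000, 1100) N/m, crisp mass m = 1 kg;
# response: natural frequency omega = sqrt(k / m), monotone in k
m = 1.0
freq = lambda k: math.sqrt(k / m)
cuts = propagate((900.0, 1000.0, 1100.0), freq, [0.0, 0.5, 1.0])
```

At alpha = 1 the interval collapses to the crisp peak value; lower alpha levels widen it, building up the fuzzy membership function of the output frequency level by level.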
Yu, L; Batlle, F
2011-12-01
Limited space for accommodating the ever-increasing mounds of municipal solid waste (MSW) demands that the capacity of MSW landfills be maximized by building landfills to greater heights with steeper slopes. This situation has raised concerns regarding the stability of high MSW landfills. A hybrid method for quasi-three-dimensional slope stability analysis based on finite element stress analysis was applied in a case study at an MSW landfill in north-east Spain. Potential slides can be assumed to be located within the waste mass because of the lack of weak foundation soils and geosynthetic membranes at the landfill base. The triggering factors of deep-seated slope failure are the elevated leachate level and the relatively high, steep front slope. The valley-shaped geometry and layered construction procedure at the site make three-dimensional slope stability analyses necessary for this landfill. In the finite element stress analysis, variations of the leachate level during construction and continuous settlement of the landfill were taken into account. The "equivalent" three-dimensional factor of safety (FoS) was computed from the individual results of two-dimensional analyses for a series of evenly spaced cross sections within the potential sliding body. Results indicate that the hybrid method for quasi-three-dimensional slope stability analysis adopted in this paper is capable of roughly locating the spatial position of the potential sliding mass. This easy-to-use method can serve as an engineering tool for a preliminary estimate of the FoS as well as of the approximate position and extent of the potential sliding mass. The finding that the FoS obtained from three-dimensional analysis is as much as 50% higher than that from two-dimensional analysis underlines the significance of the three-dimensional effect for this case study.
Influences of shear parameters, time elapsed after landfill closure, leachate level, and unit weight of waste on the FoS were also investigated. These sensitivity analyses serve as guidelines for construction practices and operating procedures at the MSW landfill under study.
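The "equivalent" three-dimensional FoS described above aggregates two-dimensional results over evenly spaced sections. One common aggregation, a weighted average of the 2-D safety factors with section weights proportional to the sliding-mass cross-sectional area times the section spacing, can be sketched as below; the exact weighting used in the paper may differ, and all numbers are hypothetical.

```python
def equivalent_3d_fos(sections):
    """'Equivalent' 3-D factor of safety from evenly spaced 2-D cross sections.
    Each entry is (fos_2d, weight); a typical weight is the sliding-mass
    cross-sectional area of the section times the section spacing."""
    num = sum(fos * w for fos, w in sections)
    den = sum(w for _, w in sections)
    return num / den

# hypothetical sections through a valley-shaped sliding body:
# small flank sections, one large central section
sections = [(1.10, 40.0), (1.25, 120.0), (1.60, 60.0)]
fos_3d = equivalent_3d_fos(sections)
```

Because the flanks of a valley-shaped slide contribute extra resistance relative to the critical central section, the weighted 3-D value typically exceeds the worst single 2-D value, consistent with the roughly 50% increase reported in the abstract.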
Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés
2016-07-15
Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and the spectral domain, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, approaches to feature selection or feature dimensionality reduction should be considered for improving the performance of MRA-based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different classifier structures. They are evaluated by comparison with baseline methods using sparse representation of features or no feature selection. Statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values obtained on the test patterns for each approach, demonstrates some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
Chen Peng; Ao Li
2017-01-01
The emergence of multi-dimensional data offers opportunities for more comprehensive analysis of the molecular characteristics of human diseases, and therefore for improving diagnosis, treatment, and prevention. In this study, we propose a heterogeneous-network-based method that integrates multi-dimensional data (HNMD) to identify GBM-related genes. The novelty of the method lies in combining the multi-dimensional GBM data from the TCGA dataset, which provide comprehensive information about genes, with protein-protein interactions to construct a weighted heterogeneous network that reflects both the general and the disease-specific relationships between genes. In addition, a propagation algorithm with resistance is introduced to precisely score and rank GBM-related genes. A comprehensive performance evaluation shows that the proposed method significantly outperforms network-based methods using single-dimensional data as well as other existing approaches. Subsequent analysis of the top-ranked genes suggests that they may be functionally implicated in GBM, further corroborating the superiority of the proposed method. The source code and the results of HNMD can be downloaded from the following URL: http://bioinformatics.ustc.edu.cn/hnmd/ .
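The abstract does not specify the "propagation algorithm with resistance"; a closely related and widely used scheme is network propagation with restart, where seed scores repeatedly diffuse over a normalized network while a restart term pulls probability mass back to the seeds. The sketch below implements that stand-in on a toy symmetric network; the adjacency matrix, restart rate, and gene labels are hypothetical.

```python
def propagate_scores(adj, seeds, restart=0.3, iters=200):
    """Network propagation with restart:
    p <- (1 - restart) * W_norm @ p + restart * p0, W column-normalized."""
    n = len(adj)
    # column-normalize the adjacency matrix so each column sums to 1
    col_sums = [sum(adj[i][j] for i in range(n)) for j in range(n)]
    W = [[(adj[i][j] / col_sums[j]) if col_sums[j] else 0.0 for j in range(n)]
         for i in range(n)]
    # uniform probability mass on the seed nodes
    p0 = [1.0 if i in seeds else 0.0 for i in range(n)]
    total = sum(p0)
    p0 = [v / total for v in p0]
    p = p0[:]
    for _ in range(iters):
        p = [(1.0 - restart) * sum(W[i][j] * p[j] for j in range(n))
             + restart * p0[i] for i in range(n)]
    return p

# toy 4-node network: node 0 is the seed; node 3 is reachable only via node 2
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
scores = propagate_scores(adj, seeds={0})
```

Because the normalized matrix is column-stochastic, total probability mass is conserved, and nodes closer to the seeds accumulate higher scores, which is the ranking principle behind such gene-prioritization schemes.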
Analysis of Aerospike Plume Induced Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1998-01-01
Computational analysis is conducted to study the effect of an aerospike engine plume on the X-33 base-heating environment during ascent flight. To properly account for the effects of the forebody and aftbody flowfield, such as shocks, and to allow for potential plume-induced flow separation, the thermo-flowfield at trajectory points is computed. The computational methodology is based on a three-dimensional finite-difference, viscous-flow, chemically reacting, pressure-based computational fluid dynamics formulation, and a three-dimensional, finite-volume, spectral-line-based weighted-sum-of-gray-gases radiation absorption model for the computational heat transfer formulation. The predicted convective and radiative base-heat fluxes are presented.
Three-dimensional magnetophotonic crystals based on artificial opals
NASA Astrophysics Data System (ADS)
Baryshev, A. V.; Kodama, T.; Nishimura, K.; Uchida, H.; Inoue, M.
2004-06-01
We fabricated and experimentally investigated three-dimensional magnetophotonic crystals (3D MPCs) based on artificial opals. Opal samples with three-dimensional dielectric lattices were impregnated with different types of magnetic material. Magnetic and structural properties of 3D MPCs were studied by field emission scanning electron microscopy, x-ray diffraction analysis, and vibrating sample magnetometer. We have shown that magnetic materials synthesized in voids of opal lattices and the composites obtained have typical magnetic properties.
Schiek, Richard [Albuquerque, NM]
2006-06-20
A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.
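The reduction step described above, collapsing a layered three-dimensional model to its unique cross sections plus a run-length topology along the build direction, can be sketched as follows. The boolean-slice representation and the toy two-layer geometry are illustrative assumptions, not the method's actual data model.

```python
def unique_cross_sections(layers):
    """Reduce a layered 3-D model (list of 2-D boolean slices, bottom to top)
    to its unique consecutive cross sections and the run lengths along z.
    The unique sections are the candidate 2-D masks."""
    unique, runs = [], []
    for layer in layers:
        key = tuple(tuple(row) for row in layer)   # hashable slice signature
        if not unique or key != unique[-1]:
            unique.append(key)                     # a new candidate mask
            runs.append(1)
        else:
            runs[-1] += 1                          # same cross section continues
    return unique, runs

# toy MEMS-like stack: two identical full layers, then a narrower post layer
square = [[1, 1], [1, 1]]
post   = [[1, 0], [0, 0]]
masks, runs = unique_cross_sections([square, square, post])
```

A real flow would then reconcile these candidate masks against process-specific mask requirements (materials, minimum features, etch order), which this sketch deliberately omits.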
NASA Technical Reports Server (NTRS)
Watanabe, M.; Actor, G.; Gatos, H. C.
1977-01-01
Quantitative analysis of the electron-beam-induced current in conjunction with high-resolution scanning makes it possible to evaluate the minority-carrier lifetime three-dimensionally in the bulk and the surface recombination velocity two-dimensionally, with high spatial resolution. The analysis is based on the concept of the effective excitation strength of the carriers, which takes into consideration all possible recombination sources. Two-dimensional mapping of the surface recombination velocity of phosphorus-diffused silicon diodes is presented, as well as three-dimensional mapping of the changes in the minority-carrier lifetime in ion-implanted silicon.
NASA Technical Reports Server (NTRS)
Chan, S. T. K.; Lee, C. H.; Brashears, M. R.
1975-01-01
A finite element algorithm for solving unsteady, three-dimensional high velocity impact problems is presented. A computer program was developed based on the Eulerian hydroelasto-viscoplastic formulation and the utilization of the theorem of weak solutions. The equations solved consist of conservation of mass, momentum, and energy, equation of state, and appropriate constitutive equations. The solution technique is a time-dependent finite element analysis utilizing three-dimensional isoparametric elements, in conjunction with a generalized two-step time integration scheme. The developed code was demonstrated by solving one-dimensional as well as three-dimensional impact problems for both the inviscid hydrodynamic model and the hydroelasto-viscoplastic model.
Lü, Gui-Cai; Zhao, Wei-Hong; Wang, Jiang-Tao
2011-01-01
Identification techniques for 10 species of red tide algae often found in the coastal areas of China were developed by combining the three-dimensional fluorescence spectra of fluorescent dissolved organic matter (FDOM) from cultured red tide algae with principal component analysis. Based on the results of the principal component analysis, the first principal-component loading spectrum of the three-dimensional fluorescence spectrum was chosen as the identification characteristic spectrum for red tide algae, and the phytoplankton fluorescence characteristic spectrum band was established. The 10 algal species were then tested using Bayesian discriminant analysis, with a correct identification rate of more than 92% for Pyrrophyta at the species level and more than 75% for Bacillariophyta at the genus level, within which the correct identification rates were more than 90% for Phaeodactylum and Chaetoceros. The results show that identification techniques for the 10 species of red tide algae based on the three-dimensional fluorescence spectra of FDOM from cultured red tide algae and principal component analysis can work well.
Purnell, Mark; Seehausen, Ole; Galis, Frietson
2012-01-01
Resource polymorphisms and competition for resources are significant factors in speciation. Many examples come from fishes, and cichlids are of particular importance because of their role as model organisms at the interface of ecology, development, genetics and evolution. However, analysis of trophic resource use in fishes can be difficult and time-consuming, and for fossil fish species it is particularly problematic. Here, we present evidence from cichlids that analysis of tooth microwear based on high-resolution (sub-micrometre scale) three-dimensional data and new ISO standards for quantification of surface textures provides a powerful tool for dietary discrimination and investigation of trophic resource exploitation. Our results suggest that three-dimensional approaches to analysis offer significant advantages over two-dimensional operator-scored methods of microwear analysis, including applicability to rough tooth surfaces that lack distinct scratches and pits. Tooth microwear textures develop over a longer period of time than is represented by stomach contents, and analyses based on textures are less prone to biases introduced by opportunistic feeding. They are more sensitive to subtle dietary differences than isotopic analysis. Quantitative textural analysis of tooth microwear has a useful role to play, complementing existing approaches, in trophic analysis of fishes—both extant and extinct. PMID:22491979
Robust video copy detection approach based on local tangent space alignment
NASA Astrophysics Data System (ADS)
Nie, Xiushan; Qiao, Qianping
2012-04-01
We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the fact that video content is becoming richer and its dimensionality higher, and this high dimensionality offers no natural tools for video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA and then generates video fingerprints in the low-dimensional space for copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the approach has good robustness and discrimination.
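As a rough sketch of the reduction step only (not the authors' fingerprinting pipeline), per-frame descriptors can be embedded with an off-the-shelf LTSA implementation; the feature construction and every parameter below are assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)

# Hypothetical stand-in for per-frame video descriptors: 200 frames of 64-D
# features lying near a low-dimensional manifold, plus a little noise.
t = rng.uniform(0, 2 * np.pi, 200)
basis = rng.standard_normal((2, 64))
X = np.column_stack([np.cos(t), np.sin(t)]) @ basis
X += 0.01 * rng.standard_normal(X.shape)

# LTSA maps each 64-D frame descriptor to a 2-D embedding; in a copy-detection
# pipeline the low-dimensional frames would then be quantized into fingerprints.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
Z = ltsa.fit_transform(X)   # shape (200, 2)
```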
NASA Astrophysics Data System (ADS)
Tong, Xiaojun; Cui, Minggen; Wang, Zhu
2009-07-01
A new compound two-dimensional chaotic function is designed by exploiting two one-dimensional chaotic functions that switch randomly; the design is used as a chaotic sequence generator, whose chaotic behavior is proved according to Devaney's definition of chaos. The properties of compound chaotic functions are also proved rigorously. To improve robustness against differential cryptanalysis and to produce an avalanche effect, a new feedback image encryption scheme is proposed using the compound chaos, selecting one of the two one-dimensional chaotic functions at random, and a new permutation-and-substitution method for image pixels is designed in detail, with array rows and columns controlled randomly by the compound chaos. Entropy analysis, differential analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical, and brute-force attacks; moreover, it accelerates encryption speed and achieves a higher level of security. Through dynamical compound chaos and perturbation technology, the paper addresses the problem of the low computational precision of one-dimensional chaotic functions.
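The switching idea can be illustrated with a minimal sketch (not the paper's exact construction): two one-dimensional chaotic maps, here the logistic map and the tent map, are selected by a third driving sequence, yielding a compound keystream in [0, 1]. Map choices, seeds, and the driver are all assumptions.

```python
# Minimal compound chaotic sequence sketch. The map pair and the switching
# rule are illustrative; the paper proves chaos for its own compound function.
def logistic(x):
    return 4.0 * x * (1.0 - x)

def tent(x):
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def compound_sequence(x0, s0, n):
    x, s, out = x0, s0, []
    for _ in range(n):
        s = logistic(s)                       # driver decides which map runs
        x = logistic(x) if s < 0.5 else tent(x)
        out.append(x)
    return out

keystream = compound_sequence(0.37, 0.61, 1000)
```

In an encryption scheme such a keystream would control pixel permutation and substitution; here it only demonstrates the bounded, map-switching iteration.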
An Analysis of Base Pressure at Supersonic Velocities and Comparison with Experiment
NASA Technical Reports Server (NTRS)
Chapman, Dean R
1951-01-01
In the first part of the investigation an analysis is made of base pressure in an inviscid fluid, both for two-dimensional and axially symmetric flow. It is shown that for two-dimensional flow, and also for the flow over a body of revolution with a cylindrical sting attached to the base, there are an infinite number of possible solutions satisfying all necessary boundary conditions at any given free-stream Mach number. For the particular case of a body having no sting attached only one solution is possible in an inviscid flow, but it corresponds to zero base drag. Accordingly, it is concluded that a strictly inviscid-fluid theory cannot be satisfactory for practical applications. An approximate semi-empirical analysis for base pressure in a viscous fluid is developed in a second part of the investigation. The semi-empirical analysis is based partly on inviscid-flow calculations.
Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ochilov, S.; Alam, M. S.; Bal, A.
2006-05-01
The Fukunaga-Koontz transform (FKT) based technique offers attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In FKT, feature selection is performed by transforming into a new space in which the feature classes have complementary eigenvectors. The dimensionality reduction based on this complementary eigenvector analysis can be described in terms of two classes, the desired class and the background clutter, such that each basis function best represents one class while carrying the least amount of information about the other. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT based technique reduces data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested using a real-world hyperspectral dataset.
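The complementary-eigenvector property at the heart of the FKT can be shown in a small sketch (toy two-class data, not a hyperspectral cube): after whitening the summed class correlation matrices, the two classes share eigenvectors whose eigenvalues pair up as lambda and 1 - lambda.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data standing in for "desired class" vs. "background clutter" spectra.
A = rng.standard_normal((500, 6)) * np.array([3, 1, 1, 1, 1, 1])
B = rng.standard_normal((500, 6)) * np.array([1, 1, 1, 1, 1, 3])

R1 = A.T @ A / len(A)          # class-1 correlation matrix
R2 = B.T @ B / len(B)          # class-2 correlation matrix

# Whiten the summed correlation: R1 + R2 = Phi D Phi^T, W = Phi D^{-1/2}.
D, Phi = np.linalg.eigh(R1 + R2)
W = Phi @ np.diag(D ** -0.5)

# In the whitened space S1 + S2 = I, so S1 and S2 share eigenvectors and
# their eigenvalues sum to one: a direction strongly representing one class
# carries little information about the other ("complementary").
S1 = W.T @ R1 @ W
lam, V = np.linalg.eigh(S1)
# Keep the eigenvectors with the largest eigenvalues to represent class 1.
basis_class1 = (W @ V)[:, lam > 0.5]
```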
NASA Astrophysics Data System (ADS)
Sivasubramaniam, Kiruba
This thesis makes advances in three dimensional finite element analysis of electrical machines and the quantification of their parameters and performance. The principal objectives of the thesis are: (1) the development of a stable and accurate method of nonlinear three-dimensional field computation and application to electrical machinery and devices; and (2) improvement in the accuracy of determination of performance parameters, particularly forces and torque computed from finite elements. Contributions are made in two general areas: a more efficient formulation for three dimensional finite element analysis which saves time and improves accuracy, and new post-processing techniques to calculate flux density values from a given finite element solution. A novel three-dimensional magnetostatic solution based on a modified scalar potential method is implemented. This method has significant advantages over the traditional total scalar, reduced scalar or vector potential methods. The new method is applied to a 3D geometry of an iron core inductor and a permanent magnet motor. The results obtained are compared with those obtained from traditional methods, in terms of accuracy and speed of computation. A technique which has been observed to improve force computation in two dimensional analysis using a local solution of Laplace's equation in the airgap of machines is investigated and a similar method is implemented in the three dimensional analysis of electromagnetic devices. A new integral formulation to improve force calculation from a smoother flux-density profile is also explored and implemented. Comparisons are made and conclusions drawn as to how much improvement is obtained and at what cost. This thesis also demonstrates the use of finite element analysis to analyze torque ripples due to rotor eccentricity in permanent magnet BLDC motors.
A new method for analyzing torque harmonics based on data obtained from a time stepping finite element analysis of the machine is explored and implemented.
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots.
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-11
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
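A recurrence plot itself is simple to compute: a binary matrix marking which pairs of observations fall within a distance threshold. The toy 1-D series below is purely illustrative; in the RPR method the role of the recurrence matrix is played by Hi-C chromosomal contact information.

```python
import numpy as np

# Minimal recurrence plot: R[i, j] = 1 when observations i and j are within
# a distance eps of each other, 0 otherwise.
def recurrence_plot(x, eps):
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # all pairwise distances
    return (dist <= eps).astype(int)

series = np.sin(np.linspace(0, 6 * np.pi, 100))
R = recurrence_plot(series, eps=0.1)
```

By construction the matrix is symmetric with an all-ones diagonal; reconstruction methods then seek coordinates whose distances reproduce the recurrence pattern.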
[Postmortem CT examination in a case of alleged drowning--a case report].
Woźniak, Krzysztof; Urbanik, Andrzej; Rzepecka-Woźniak, Ewa; Moskała, Artur; Kłys, Małgorzata
2009-01-01
The authors present an analysis of a postmortem CT examination in a case of fresh-water drowning of a young male. The results of the conventional forensic autopsy and the radiological examination are compared. The analysis is illustrated by two-dimensional and three-dimensional reconstructions based on the DICOM files obtained during the postmortem CT examination.
Consideration of Moving Tooth Load in Gear Crack Propagation Predictions
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Handschuh, Robert F.; Spievak, Lisa E.; Wawrzynek, Paul A.; Ingraffea, Anthony R.
2001-01-01
Robust gear designs consider not only crack initiation, but crack propagation trajectories for a fail-safe design. In actual gear operation, the magnitude as well as the position of the force changes as the gear rotates through the mesh. A study to determine the effect of moving gear tooth load on crack propagation predictions was performed. Two-dimensional analysis of an involute spur gear and three-dimensional analysis of a spiral-bevel pinion gear using the finite element method and boundary element method were studied and compared to experiments. A modified theory for predicting gear crack propagation paths based on the criteria of Erdogan and Sih was investigated. Crack simulation based on calculated stress intensity factors and mixed mode crack angle prediction techniques using a simple static analysis in which the tooth load was located at the highest point of single tooth contact was validated. For three-dimensional analysis, however, the analysis was valid only as long as the crack did not approach the contact region on the tooth.
Chen, Hanchi; Abhayapala, Thushara D; Zhang, Wen
2015-11-01
Soundfield analysis based on spherical harmonic decomposition has been widely used in various applications; however, a drawback is the three-dimensional geometry of the microphone arrays. In this paper, a method to design two-dimensional planar microphone arrays that are capable of capturing three-dimensional (3D) spatial soundfields is proposed. Through the utilization of both omni-directional and first order microphones, the proposed microphone array is capable of measuring soundfield components that are undetectable to conventional planar omni-directional microphone arrays, thus providing the same functionality as 3D arrays designed for the same purpose. Simulations show that the accuracy of the planar microphone array is comparable to traditional spherical microphone arrays. Due to its compact shape, the proposed microphone array greatly increases the feasibility of 3D soundfield analysis techniques in real-world applications.
GAC: Gene Associations with Clinical, a web based application.
Zhang, Xinyan; Rupji, Manali; Kowalski, Jeanne
2017-01-01
We present GAC, a Shiny R based tool for interactive visualization of clinical associations based on high-dimensional data. The tool provides a web-based suite to perform supervised principal component analysis (SuperPC), an approach that combines high-dimensional data, such as gene expression, with clinical data to infer clinical associations. We extended the approach to address binary outcomes, in addition to continuous and time-to-event data, in our package, thereby increasing the use and flexibility of SuperPC. Additionally, the tool provides an interactive visualization summarizing results in a forest plot for both binary and time-to-event data. In summary, the GAC suite provides a one-stop shop for conducting statistical analysis to identify and visualize the association between a clinical outcome of interest and high-dimensional data types, such as genomic data. Our GAC package has been implemented in R and is available via http://shinygispa.winship.emory.edu/GAC/. The developmental repository is available at https://github.com/manalirupji/GAC.
ERIC Educational Resources Information Center
Ellis, Jennifer T.
2013-01-01
This study was designed to evaluate the effects of a proprietary software program on students' conceptual and visual understanding of dimensional analysis. The participants in the study were high school general chemistry students enrolled in two public schools with different demographics (School A and School B) in the Chattanooga, Tennessee,…
Development of a linearized unsteady Euler analysis for turbomachinery blade rows
NASA Technical Reports Server (NTRS)
Verdon, Joseph M.; Montgomery, Matthew D.; Kousen, Kenneth A.
1995-01-01
A linearized unsteady aerodynamic analysis for axial-flow turbomachinery blading is described in this report. The linearization is based on the Euler equations of fluid motion and is motivated by the need for an efficient aerodynamic analysis that can be used in predicting the aeroelastic and aeroacoustic responses of blade rows. The field equations and surface conditions required for inviscid, nonlinear and linearized, unsteady aerodynamic analyses of three-dimensional flow through a single blade row operating within a cylindrical duct are derived. An existing numerical algorithm for determining time-accurate solutions of the nonlinear unsteady flow problem is described, and a numerical model, based upon this nonlinear flow solver, is formulated for the first-harmonic linear unsteady problem. The linearized aerodynamic and numerical models have been implemented into a first-harmonic unsteady flow code, called LINFLUX. At present this code applies only to two-dimensional flows, but an extension to three dimensions is planned as future work. The three-dimensional aerodynamic and numerical formulations are described in this report. Numerical results for two-dimensional unsteady cascade flows, excited by prescribed blade motions and prescribed aerodynamic disturbances at inlet and exit, are also provided to illustrate the present capabilities of the LINFLUX analysis.
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current state-of-the-art engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g., a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher-order, physics-based analysis means that a higher-order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher-order analysis codes, cycle simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version I (0-dimensional) and higher-order 1-, 2-, and 3-dimensional analysis codes.
The NPSS Version I preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS 0-dimensional engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high-fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
Gupta, Vikas; Bustamante, Mariana; Fredriksson, Alexandru; Carlhäll, Carl-Johan; Ebbers, Tino
2018-01-01
Assessment of blood flow in the left ventricle using four-dimensional flow MRI requires accurate left ventricle segmentation that is often hampered by the low contrast between blood and the myocardium. The purpose of this work is to improve left-ventricular segmentation in four-dimensional flow MRI for reliable blood flow analysis. The left ventricle segmentations are first obtained using morphological cine-MRI with better in-plane resolution and contrast, and then aligned to four-dimensional flow MRI data. This alignment is, however, not trivial due to inter-slice misalignment errors caused by patient motion and respiratory drift during breath-hold based cine-MRI acquisition. A robust image registration based framework is proposed to mitigate such errors automatically. Data from 20 subjects, including healthy volunteers and patients, was used to evaluate its geometric accuracy and impact on blood flow analysis. High spatial correspondence was observed between manually and automatically aligned segmentations, and the improvements in alignment compared to uncorrected segmentations were significant (P < 0.01). Blood flow analysis from manual and automatically corrected segmentations did not differ significantly (P > 0.05). Our results demonstrate the efficacy of the proposed approach in improving left-ventricular segmentation in four-dimensional flow MRI, and its potential for reliable blood flow analysis. Magn Reson Med 79:554-560, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Some theorems and properties of multi-dimensional fractional Laplace transforms
NASA Astrophysics Data System (ADS)
Ahmood, Wasan Ajeel; Kiliçman, Adem
2016-06-01
The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some of its properties to the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. The study dedicates the one-dimensional fractional Laplace transform to functions of a single independent variable, together with some important theorems and properties, and develops some properties of the one-dimensional fractional Laplace transform for the multi-dimensional case. We also obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.
Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu
2017-01-01
We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To solve the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by Devaney's definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are used to test the security of the new image encryption scheme. Comparing the proposed scheme with the AES, DES, and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and achieves higher speed and higher security.
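For context, the classical two-dimensional Cat map that the paper's block variant generalizes is a short area-preserving pixel permutation; the sketch below shows the standard map only (the paper's block Cat map and its chaotic key control are not reproduced here).

```python
import numpy as np

# Standard Arnold Cat map permutation of a square image: each pixel (y, x)
# is sent through the unimodular matrix [[1, 1], [2, 1]] modulo the image
# size, which shuffles positions without losing any pixel.
def cat_map_permute(img):
    n = img.shape[0]                      # assumes a square image
    y, x = np.indices((n, n))
    return img[(x + y) % n, (x + 2 * y) % n]

img = np.arange(64 * 64).reshape(64, 64)
scrambled = cat_map_permute(img)
```

Because the map's matrix has determinant of absolute value 1, the permutation is bijective for any image size, which is why it can serve as a lossless scrambling stage.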
Three-dimensional head anthropometric analysis
NASA Astrophysics Data System (ADS)
Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James
2003-05-01
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment, and treatment of facial abnormalities in craniofacial care, but they are subject to errors of perspective and projection and lack metric and three-dimensional information. The literature describes a variety of methods for generating three-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging, and even CT; however, each of these methods has inherent limitations, and as such no system is in common clinical use. In this paper we focus on the development of indirect three-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (polygonal mesh), including linear distances currently used in anthropometry. The measurements are tested against a validated three-dimensional digitizer (MicroScribe 3DX).
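The linear-distance measurements mentioned above reduce to straight-line distances between digitized 3-D landmarks. The landmark names and coordinates below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical 3-D landmark coordinates (in mm) on a digitized head mesh.
landmarks = {
    "exocanthion_r": np.array([42.1, 55.3, 30.2]),
    "exocanthion_l": np.array([-45.0, 55.9, 30.8]),
    "nasion":        np.array([0.3, 62.0, 45.1]),
}

def linear_distance(a, b):
    """Straight-line anthropometric distance between two 3-D landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

biocular_breadth = linear_distance("exocanthion_r", "exocanthion_l")
```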
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
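The principal angles underlying PAEA measure how close two linear subspaces are; a minimal sketch (with two coordinate subspaces standing in for a gene set direction and a differential-expression direction, not real expression data) is:

```python
import numpy as np

# Principal angles between the column spans of A and B: orthonormalize,
# then the singular values of Qa^T Qb are the cosines of the angles.
def principal_angles(A, B):
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))   # radians, ascending

I = np.eye(10)
angles_same = principal_angles(I[:, :3], I[:, :3])    # identical subspaces
angles_orth = principal_angles(I[:, :3], I[:, 3:6])   # orthogonal subspaces
```

Identical subspaces give all-zero angles and orthogonal subspaces give right angles; an enrichment score can then be built from how small the angles are.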
Fischer, Claudia; Voss, Andreas
2014-01-01
Hypertensive pregnancy disorders affect 6 to 8 percent of all pregnancies and can cause severe complications for the mother and the fetus. The aim of this study was to develop a new method suitable for three-dimensional coupling analysis. To this end, three-dimensional segmented Poincaré plot analysis (SPPA3) is introduced, which extends Poincaré analysis to a cubic box model. The box representing the three-dimensional phase space is (following the SPPA method) subdivided into 12×12×12 equal cubelets according to the predefined ranges of the signals, and the probability of points occurring in each cubelet, relative to the total number of points, is calculated. From 10 healthy non-pregnant women, 66 healthy pregnant women, and 56 hypertensive pregnant women suffering from chronic hypertension, gestational hypertension, or preeclampsia, 30 minutes of beat-to-beat intervals (BBI), noninvasive blood pressure, and respiration (RESP) were continuously recorded, and couplings between the different signals were analyzed. The suitability of SPPA3 for screening was confirmed by multivariate discriminant analysis differentiating between all pregnant women and those with preeclampsia (the index BBI3_SBP9_RESP6/BBI8_SBP11_RESP4 leads to an area under the ROC curve of AUC=91.2%). In conclusion, SPPA3 could be a useful method for enhanced risk stratification in pregnant women.
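The cubelet-probability step can be sketched directly with a 3-D histogram; the simulated signals below are illustrative values, not patient data, and the real method fixes the bin ranges from predefined signal ranges.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the SPPA3 box model: three simultaneous signals (e.g. BBI, SBP,
# RESP) define points in a 3-D phase space, which is divided into 12x12x12
# cubelets; each cubelet's probability is its point count over the total.
bbi  = rng.normal(800, 50, 1800)    # beat-to-beat intervals, ms (simulated)
sbp  = rng.normal(120, 10, 1800)    # systolic blood pressure, mmHg (simulated)
resp = rng.normal(3.0, 0.5, 1800)   # respiration signal (simulated)

counts, _ = np.histogramdd((bbi, sbp, resp), bins=(12, 12, 12))
probabilities = counts / counts.sum()
```

Indices such as BBI3_SBP9_RESP6 then refer to the probability of a single cubelet at a particular row/column/depth position.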
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Sipkova, Zuzana; Lam, Fook Chang; Francis, Ian; Herold, Jim; Liu, Christopher
2013-04-01
To assess the use of serial computed tomography (CT) in the detection of osteo-odonto-lamina resorption in osteo-odonto-keratoprosthesis (OOKP) and to investigate the use of new volumetric software, Advanced Lung Analysis software (3D-ALA; GE Healthcare), for detecting changes in OOKP laminar volume. A retrospective assessment of the radiological databases and hospital records was performed for 22 OOKP patients treated at the National OOKP referral center in Brighton, United Kingdom. Three-dimensional surface reconstructions of the OOKP laminae were performed using stored CT data. For the 2-dimensional linear analysis, the linear dimensions of the reconstructed laminae were measured, compared with original measurements taken at the time of surgery, and then assigned a CT grade based on a predetermined resorption grading scale. The volumetric analysis involved calculating the laminar volumes using 3D-ALA. The effectiveness of 2-dimensional linear analysis, volumetric analysis, and clinical examination in detecting laminar resorption was compared. The mean change in laminar volume between the first and second scans was -6.67% (range, +10.13% to -24.86%). CT grades assigned to patients based on laminar dimension measurements remained the same, despite significant changes in laminar volumes. Clinical examination failed to identify 60% of patients who were found to have resorption on volumetric analysis. Currently, the detection of laminar resorption relies on clinical examination and the measurement of laminar dimensions on the 2- and 3-dimensional radiological images. Laminar volume measurement is a useful new addition to the armamentarium. It provides an objective tool that allows for a precise and reproducible assessment of laminar resorption.
Dimensional Analysis Applied to Electricity and Mechanics.
ERIC Educational Resources Information Center
Thomas, G.
1979-01-01
Suggests an alternative system of measurement to be used in engineering, which provides theoretical insight and leads to definitions of dual and analogous physical quantities. The system is based on the notion that the dimensional product of three fundamental quantities should be energy. (GA)
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
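The concordance index used to compare the methods can be computed with a short sketch: among comparable pairs, it is the fraction where the higher-risk subject fails first. The simulated data below is an assumption for demonstration, not the paper's scenarios.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal concordance index (c-index) for right-censored time-to-event data.
def concordance_index(time, event, risk):
    num, den = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                  # subject i must have an observed event
        for j in range(n):
            if time[j] > time[i]:     # j outlived i: a comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1          # higher risk failed first: concordant
                elif risk[i] == risk[j]:
                    num += 0.5        # tied risks count half
    return num / den

# Simulated check: a risk score exactly anti-ordered with survival time
# should yield a perfect c-index of 1.
t = rng.exponential(10, 200)
e = np.ones(200, dtype=bool)
cidx = concordance_index(t, e, risk=-t)
```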
Packaging consideration of two-dimensional polymer-based photonic crystals for laser beam steering
NASA Astrophysics Data System (ADS)
Dou, Xinyuan; Chen, Xiaonan; Chen, Maggie Yihong; Wang, Alan Xiaolong; Jiang, Wei; Chen, Ray T.
2009-02-01
In this paper, we report a theoretical study of polymer-based photonic crystals for laser beam steering based on the superprism effect, as well as the experimental fabrication of two-dimensional photonic crystals for laser beam steering. The superprism effect, the principle underlying beam steering, was studied in detail through equifrequency contour (EFC) analysis. Polymer-based photonic crystals were fabricated by a double-exposure holographic interference method using SU8-2007. The experimental results are also reported.
Sex determination by three-dimensional geometric morphometrics of the palate and cranial base.
Chovalopoulou, Maria-Eleni; Valakos, Efstratios D; Manolis, Sotiris K
2013-01-01
The purpose of this study is to assess sexual dimorphism in the palate and base of adult crania using three-dimensional geometric morphometric methods. The study sample consisted of 176 crania of known sex (94 males, 82 females) belonging to individuals who lived during the 20th century in Greece. The three-dimensional co-ordinates of 30 ectocranial landmarks were digitized using a MicroScribe 3DX contact digitizer. Generalized Procrustes Analysis (GPA) was used to obtain size and shape variables for statistical analysis. Three discriminant function analyses were carried out: (1) using PC scores from Procrustes shape space, (2) centroid size alone, and (3) PC scores of GPA residuals which includes InCS for analysis in Procrustes form space. Results indicate that there are shape differences between sexes. In males, the palate is deepest and more elongated; the cranial base is shortened. Sex-specific shape differences for the cross-validated data give better classification results in the cranial base (77.2%) compared with the palate (68.9%). Size alone yielded better results for cranial base (82%) in opposition to palate (63.1%). As anticipated, the classification accuracy improves when both size and shape are combined (90.4% for cranial base, and 74.8% for palate).
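Generalized Procrustes analysis superimposes landmark configurations after removing location, scale, and orientation; a minimal two-configuration (ordinary) Procrustes fit for 2D landmarks, sketched in pure Python (the study itself used 3D data and dedicated software):

```python
import math

def procrustes_2d(ref, target):
    """Align `target` (list of (x, y) landmarks) to `ref` by removing
    translation, scale, and rotation (ordinary Procrustes fit)."""
    def center(pts):
        cx = sum(x for x, _ in pts) / len(pts)
        cy = sum(y for _, y in pts) / len(pts)
        return [(x - cx, y - cy) for x, y in pts]

    def unit_size(pts):  # rescale to unit centroid size
        s = math.sqrt(sum(x * x + y * y for x, y in pts))
        return [(x / s, y / s) for x, y in pts]

    r = unit_size(center(ref))
    t = unit_size(center(target))
    # closed-form optimal rotation angle in 2D
    num = sum(yr * xt - xr * yt for (xr, yr), (xt, yt) in zip(r, t))
    den = sum(xr * xt + yr * yt for (xr, yr), (xt, yt) in zip(r, t))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in t]

ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
target = [(5.0, 5.0), (5.0, 7.0), (3.0, 5.0)]  # ref rotated, scaled, shifted
aligned = procrustes_2d(ref, target)           # recovers the ref shape
```

After superimposition, the residual shape coordinates can be fed into PCA and discriminant analysis, as in the study.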
Reaction time for trimolecular reactions in compartment-based reaction-diffusion models
NASA Astrophysics Data System (ADS)
Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang
2018-05-01
Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.
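The quantity derived in the paper (the mean time for three molecules to first meet) can also be estimated by brute-force simulation on a small periodic lattice; a toy Monte Carlo sketch under simplified dynamics (one walker hops per step; the paper's formulae are exact and derived analytically, not this way):

```python
import random

def first_collision_time(n_sites, rng, max_steps=1_000_000):
    """Steps until three independent random walkers on a 1D periodic
    lattice of `n_sites` compartments first share one compartment."""
    pos = [rng.randrange(n_sites) for _ in range(3)]
    for step in range(max_steps):
        if pos[0] == pos[1] == pos[2]:
            return step
        i = rng.randrange(3)                               # pick one walker
        pos[i] = (pos[i] + rng.choice((-1, 1))) % n_sites  # hop left or right
    raise RuntimeError("no collision within max_steps")

# Monte Carlo estimate of the mean first collision time on 8 compartments
rng = random.Random(0)
mean_t = sum(first_collision_time(8, rng) for _ in range(2000)) / 2000
```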
Inverse finite-size scaling for high-dimensional significance analysis
NASA Astrophysics Data System (ADS)
Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki
2018-06-01
We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller-scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.
TPSLVM: a dimensionality reduction algorithm based on thin plate splines.
Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming
2014-10-01
Dimensionality reduction (DR) has been considered as one of the most significant tools for data analysis. One type of DR algorithms is based on latent variable models (LVM). LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), our proposed TPSLVM is more powerful especially when the dimensionality of the latent space is low. Also, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM) as well as their combination BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.
Modelling of Heat and Moisture Loss Through NBC Ensembles
1991-11-01
the heat and moisture transport through various NBC clothing ensembles. The analysis involves simplifying the three-dimensional physical problem of... clothing on a person to that of a one-dimensional problem of flow through parallel layers of clothing and air. Body temperatures are calculated based on... prescribed work rates, ambient conditions and clothing properties. Sweat response and respiration rates are estimated based on empirical data to
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
FPCAS3D User's guide: A three dimensional full potential aeroelastic program, version 1
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.
1995-01-01
The FPCAS3D computer code has been developed for aeroelastic stability analysis of bladed disks such as those in fans, compressors, turbines, propellers, or propfans. The aerodynamic analysis used in this code is based on the unsteady three-dimensional full potential equation which is solved for a blade row. The structural analysis is based on a finite-element model for each blade. Detailed explanations of the aerodynamic analysis, the numerical algorithms, and the aeroelastic analysis are not given in this report. This guide can be used to assist in the preparation of the input data required by the FPCAS3D code. A complete description of the input data is provided in this report. In addition, six examples, including inputs and outputs, are provided.
Eigenmode Analysis of Boundary Conditions for One-Dimensional Preconditioned Euler Equations
NASA Technical Reports Server (NTRS)
Darmofal, David L.
1998-01-01
An analysis of the effect of local preconditioning on boundary conditions for the subsonic, one-dimensional Euler equations is presented. Decay rates for the eigenmodes of the initial boundary value problem are determined for different boundary conditions. Riemann invariant boundary conditions based on the unpreconditioned Euler equations are shown to be reflective with preconditioning, and, at low Mach numbers, disturbances do not decay. Other boundary conditions are investigated which are non-reflective with preconditioning and numerical results are presented confirming the analysis.
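For context, the Riemann invariants referred to are the standard characteristic quantities of the one-dimensional Euler equations (a textbook result, not reproduced from the report), where $u$ is velocity, $c$ the speed of sound, and $\gamma$ the ratio of specific heats:

```latex
R^{\pm} = u \pm \frac{2c}{\gamma - 1},
\qquad \text{constant along} \qquad \frac{dx}{dt} = u \pm c ,
```

with entropy advected along $dx/dt = u$. Preconditioning alters the characteristic speeds at low Mach number, so boundary conditions built from these invariants no longer match the preconditioned characteristics, which is the source of the reflections analyzed in the report.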
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed that are more competitive than classical LDA in terms of both classification accuracy and computational cost. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. In a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, namely uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produced better class separability than a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on seven-dimensional time-domain and time-scale feature vectors, these methods achieved classification accuracies of 95.2% and 93.2%, respectively, using a linear discriminant classifier.
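As a point of reference for the baseline method compared here, classical Fisher LDA projects features onto the direction w = Sw^-1 (m1 - m2); a minimal pure-Python sketch of the two-class, 2D case (made-up data, not the paper's EMG feature sets, which are higher-dimensional and multiclass):

```python
def lda_direction(class_a, class_b):
    """Fisher discriminant direction w = Sw^-1 (m_a - m_b) for two
    classes of 2-D feature vectors, with within-class scatter Sw."""
    def mean(pts):
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    ma, mb = mean(class_a), mean(class_b)
    # accumulate the 2x2 within-class scatter matrix over both classes
    sxx = sxy = syy = 0.0
    for pts, m in ((class_a, ma), (class_b, mb)):
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx
            sxy += dx * dy
            syy += dy * dy
    det = sxx * syy - sxy * sxy
    dmx, dmy = ma[0] - mb[0], ma[1] - mb[1]
    # w = Sw^-1 (ma - mb), using the closed-form 2x2 inverse
    return ((syy * dmx - sxy * dmy) / det,
            (-sxy * dmx + sxx * dmy) / det)

class_a = [(1.0, 1.0), (2.0, 1.0), (1.0, 2.0), (2.0, 2.0)]
class_b = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0), (6.0, 6.0)]
w = lda_direction(class_a, class_b)
scores_a = [w[0] * x + w[1] * y for x, y in class_a]  # projected features
scores_b = [w[0] * x + w[1] * y for x, y in class_b]
```

The projected scores feed a one-dimensional classifier; the extended LDA variants in the paper differ in how they orthogonalize or regularize this projection.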
Calculation of three-dimensional, inviscid, supersonic, steady flows
NASA Technical Reports Server (NTRS)
Moretti, G.
1981-01-01
A detailed description of a computational program for the evaluation of three-dimensional, supersonic, inviscid, steady flow past airplanes is presented. Emphasis is placed on how a powerful, automatic mapping technique is coupled to the fluid mechanical analysis. Each of the three constituents of the analysis (body geometry, mapping technique, and gas dynamical effects) was carefully coded and described. Results of computations based on sample geometries and discussions are also presented.
Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations
NASA Technical Reports Server (NTRS)
Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.
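The void identification step described above amounts to thresholding the smoothed density field and labeling connected under-dense regions; a minimal flood-fill sketch on a 2D grid (toy data; a production analysis would use periodic boundaries matching the simulation box):

```python
from collections import deque

def void_sizes(density, threshold):
    """Label 4-connected regions of cells below `threshold` (voids) in a
    2D density grid and return their sizes, largest first."""
    rows, cols = len(density), len(density[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or density[r][c] >= threshold:
                continue
            # breadth-first flood fill of one under-dense region
            size, queue = 0, deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and density[ny][nx] < threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            sizes.append(size)
    return sorted(sizes, reverse=True)

grid = [
    [0, 0, 9, 0],
    [9, 0, 9, 0],
    [9, 9, 9, 0],
    [0, 0, 9, 0],
]
print(void_sizes(grid, threshold=1))  # -> [4, 3, 2]
```

Percolation analysis then asks at which threshold the largest labeled structure first spans the box.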
Behavior analysis of video object in complicated background
NASA Astrophysics Data System (ADS)
Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang
2016-10-01
This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for the behavior analysis of video objects in real scenes put forward by this project have broad application prospects and important practical significance in security, counter-terrorism, military, and many other fields.
GAC: Gene Associations with Clinical, a web based application
Zhang, Xinyan; Rupji, Manali; Kowalski, Jeanne
2018-01-01
We present GAC, a Shiny R-based tool for interactive visualization of clinical associations based on high-dimensional data. The tool provides a web-based suite to perform supervised principal component analysis (SuperPC), an approach that uses high-dimensional data, such as gene expression, combined with clinical data to infer clinical associations. We extended the approach to address binary outcomes, in addition to continuous and time-to-event data, in our package, thereby increasing the use and flexibility of SuperPC. Additionally, the tool provides an interactive visualization for summarizing results based on a forest plot for both binary and time-to-event data. In summary, the GAC suite of tools provides a one-stop shop for conducting statistical analyses to identify and visualize the association between a clinical outcome of interest and high-dimensional data types, such as genomic data. Our GAC package has been implemented in R and is available via http://shinygispa.winship.emory.edu/GAC/. The developmental repository is available at https://github.com/manalirupji/GAC. PMID:29263780
Linear and volumetric dimensional changes of injection-molded PMMA denture base resins.
El Bahra, Shadi; Ludwig, Klaus; Samran, Abdulaziz; Freitag-Wolf, Sandra; Kern, Matthias
2013-11-01
The aim of this study was to evaluate the linear and volumetric dimensional changes of six denture base resins processed by their corresponding injection-molding systems at 3 time intervals of water storage. Two heat-curing (SR Ivocap Hi Impact and Lucitone 199) and four auto-curing (IvoBase Hybrid, IvoBase Hi Impact, PalaXpress, and Futura Gen) acrylic resins were used with their specific injection-molding technique to fabricate 6 specimens of each material. Linear and volumetric dimensional changes were determined by means of a digital caliper and an electronic hydrostatic balance, respectively, after water storage of 1, 30, or 90 days. Means and standard deviations of linear and volumetric dimensional changes were calculated as percentages (%). Statistical analysis was done using Student's and Welch's t tests with Bonferroni-Holm correction for multiple comparisons (α=0.05). Statistically significant differences in linear dimensional changes between resins were demonstrated at all three time intervals of water immersion (p≤0.05), with the exception of the following comparisons, which showed no significant difference: IvoBase Hi Impact/SR Ivocap Hi Impact and PalaXpress/Lucitone 199 after 1 day, Futura Gen/PalaXpress and PalaXpress/Lucitone 199 after 30 days, and IvoBase Hybrid/IvoBase Hi Impact after 90 days. Also, statistically significant differences in volumetric dimensional changes between resins were found at all three time intervals of water immersion (p≤0.05), with the exception of the comparison between PalaXpress and Futura Gen. The denture base resins processed by the new injection-molding system (IvoBase), namely IvoBase Hybrid and IvoBase Hi Impact, revealed superior dimensional precision. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jung, Joon Hee; Jang, Gang-Won; Shin, Dongil; Kim, Yoon Young
2018-03-01
This paper presents a method to analyze thin-walled beams with quadrilateral cross sections reinforced with diaphragms using a one-dimensional higher-order beam theory. The effect of a diaphragm is reflected focusing on the increase of static stiffness. The deformations on the beam-interfacing boundary of a thin diaphragm are described by using deformation modes of the beam cross section while the deformations inside the diaphragm are approximated in the form of complete cubic polynomials. By using the principle of minimum potential energy, its stiffness that significantly affects distortional deformation of a thin-walled beam can be considered in the one-dimensional beam analysis. It is shown that the accuracy of the resulting one-dimensional analysis is comparable with that by a shell element based analysis. As a means to demonstrate the usefulness of the present approach for design, position optimization problems of diaphragms for stiffness reinforcement of an automotive side frame are solved.
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
Dimensional changes of acrylic resin denture bases: conventional versus injection-molding technique.
Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam
2014-07-01
Acrylic resin denture bases undergo dimensional changes during polymerization. Injection-molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare the dimensional changes of specimens processed by conventional and injection-molding techniques. SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out in SPSS (SPSS Inc., Chicago, IL, USA) using t-tests and repeated-measures ANOVA. Statistical significance was defined at P<0.05. After each water storage period, the acrylic specimens produced by injection exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with dimensional changes decreasing as water storage time increased. Within the limitations of this study, the dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding.
Development of an Aeroelastic Analysis Including a Viscous Flow Model
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Bakhle, Milind A.
2001-01-01
Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct-access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional Euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the TURBO-V3.1 code.
NASA Astrophysics Data System (ADS)
Aigner, M.; Köpplmayr, T.; Kneidinger, C.; Miethlinger, J.
2014-05-01
Barrier screws are widely used in the plastics industry. Due to the extreme diversity of their geometries, describing the flow behavior is difficult and rarely done in practice. We present a systematic approach based on networks that uses tensor algebra and numerical methods to model and calculate selected barrier screw geometries in terms of pressure, mass flow, and residence time. In addition, we report the results of three-dimensional simulations using the commercially available ANSYS Polyflow software. The major drawbacks of three-dimensional finite-element-method (FEM) simulations are that they require vast computational power, large quantities of memory, and considerable time to create a geometric model by computer-aided design (CAD) and to complete a flow calculation. Consequently, a modified 2.5-dimensional finite volume method, termed network analysis, is preferable. The results obtained by network analysis and FEM simulations correlated well. Network analysis provides an efficient alternative to complex FEM software in terms of computing power and memory consumption. Furthermore, typical barrier screw geometries can be parameterized and used for flow calculations without time-consuming CAD constructions.
SUPIN: A Computational Tool for Supersonic Inlet Design
NASA Technical Reports Server (NTRS)
Slater, John W.
2016-01-01
A computational tool named SUPIN is being developed to design and analyze the aerodynamic performance of supersonic inlets. The inlet types available include the axisymmetric pitot, three-dimensional pitot, axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flow-field is divided into parts to provide a framework for the geometry and aerodynamic modeling. Each part of the inlet is defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick design and analysis. SUPIN provides inlet geometry in the form of coordinates, surface angles, and cross-sectional areas. SUPIN can generate inlet surface grids and three-dimensional, structured volume grids for use with higher-fidelity computational fluid dynamics (CFD) analysis. Capabilities highlighted in this paper include the design and analysis of streamline-traced external-compression inlets, modeling of porous bleed, and the design and analysis of mixed-compression inlets. CFD analyses are used to verify the SUPIN results.
Dimensionality and Psychometric Analysis of an Alcohol Protective Behavioral Strategies Scale
ERIC Educational Resources Information Center
Novik, Melinda G.; Boekeloo, Bradley O.
2011-01-01
Objective: The current study examined the dimensionality of a protective behavioral strategies (PBS) measure among undergraduate, predominantly freshmen (92.5%) college students reporting recent alcohol use (n = 320). Method: Participants completed a web-based survey assessing 22 PBS items. Factor analyses determined the underlying factor…
Three-Dimensional Extension of a Digital Library Service System
ERIC Educational Resources Information Center
Xiao, Long
2010-01-01
Purpose: The paper aims to provide an overall methodology and case study for the innovation and extension of a digital library, especially the service system. Design/methodology/approach: Based on the three-dimensional structure theory of the information service industry, this paper combines a comprehensive analysis with the practical experiences…
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
NASA Astrophysics Data System (ADS)
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
On the strain energy of laminated composite plates
NASA Technical Reports Server (NTRS)
Atilgan, Ali R.; Hodges, Dewey H.
1991-01-01
The present effort to obtain the asymptotically correct form of the strain energy in inhomogeneous laminated composite plates proceeds from the geometrically nonlinear elastic theory-based three-dimensional strain energy by decomposing the nonlinear three-dimensional problem into a linear, through-the-thickness analysis and a nonlinear, two-dimensional analysis of plate deformation. Attention is given to the case in which each lamina exhibits material symmetry about its middle surface, deriving closed-form analytical expressions for the plate elastic constants and the displacement and strain distributions through the plate's thickness. Despite the simplicity of the plate strain energy's form, there are no restrictions on the magnitudes of displacement and rotation measures.
MHOST version 4.2. Volume 1: Users' manual
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1989-01-01
This manual describes the user options available for running the MHOST finite element analysis package. MHOST is a solid and structural analysis program based on mixed finite element technology, and is specifically designed for three-dimensional inelastic analysis. A family of two- and three-dimensional continuum elements along with beam and shell structural elements can be utilized. Many options are available in the constitutive equation library, the solution algorithms and the analysis capabilities. An overview of the algorithms, a general description of the input data formats, and a discussion of input data for selecting solution algorithms are given.
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
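A concordance index of the kind used above can be computed directly. The sketch below is not the authors' code: it assumes a simple proportional-hazards simulation with a nonlinear predictor effect (hazard proportional to exp(x²)) and no censoring, purely to illustrate the metric.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs whose predicted risk ordering
    matches the observed ordering of event times (ties count 1/2)."""
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had an observed event
            # strictly before subject j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Simulated proportional-hazards data with a nonlinear predictor
# effect: hazard proportional to exp(x^2), no censoring.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
times = rng.exponential(scale=np.exp(-x**2))  # larger x^2 -> earlier event
events = np.ones(500, dtype=int)

print(round(concordance_index(times, events, x**2), 2))
```

A perfectly predictive risk score gives an index of 1.0 and random ordering gives 0.5, so values well above 0.5 for the score x² reflect the nonlinear signal being captured.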
NASA Technical Reports Server (NTRS)
Baker, A. J.; Orzechowski, J. A.
1980-01-01
A theoretical analysis is presented yielding sets of partial differential equations for determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance-order-preserving manifold learning algorithm that extends the basic mean-squared-error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original space and the low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and a proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
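For context, the baseline that this work generalizes — classical MDS, which fixes a linear relation between input and embedded distances — can be sketched in a few lines. This is the textbook eigendecomposition algorithm, not the paper's RBF-based extension:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed points so Euclidean distances approximate
    the entries of the input distance matrix D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]              # k largest eigenvalues
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

# Points on a line embed exactly in 1-D.
X = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(X - X.T)          # pairwise distances
Y = classical_mds(D, k=1)
Drec = np.abs(Y - Y.T)       # distances of the embedded points
print(np.allclose(D, Drec))
```

Because the input distances here are exactly Euclidean in one dimension, the embedding reproduces them exactly (up to reflection and translation).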
An adaptive front tracking technique for three-dimensional transient flows
NASA Astrophysics Data System (ADS)
Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.
2000-01-01
An adaptive technique, based on both surface stretching and surface curvature analysis for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although for both techniques roughly the same structures are found, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation for the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.
NASA Astrophysics Data System (ADS)
Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried
2017-02-01
We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.
Data Warehouse Design from HL7 Clinical Document Architecture Schema.
Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L
2015-01-01
This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework published in a previous work that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.
Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki
2013-01-01
This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial accelerometer and tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during upright standing, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three-dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
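The orientation-estimation step rests on integrating the quaternion kinematic equation q̇ = ½ q ⊗ (0, ω) from gyro data. The sketch below is a generic first-order integrator with renormalization, not the authors' implementation (which also uses accelerometer data for the initial orientation and a calibration trial for sensor-to-segment alignment):

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of quaternions q, r given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by body-frame angular
    velocity omega (rad/s) over time step dt, then renormalize."""
    dq = 0.5 * quat_multiply(q, np.array([0.0, *omega]))
    q = q + dq * dt
    return q / np.linalg.norm(q)

# Rotate about the z axis at 90 deg/s for one second in small steps.
q = np.array([1.0, 0.0, 0.0, 0.0])        # identity orientation
omega = np.array([0.0, 0.0, np.pi / 2])   # rad/s
for _ in range(1000):
    q = integrate_gyro(q, omega, 1e-3)

# After 1 s the quaternion encodes a 90-degree z rotation:
# (cos 45deg, 0, 0, sin 45deg) ~ (0.7071, 0, 0, 0.7071)
print(np.round(q, 3))
```

The final quaternion matches the closed-form result for a constant rotation rate, which is a convenient sanity check before feeding real gyro data through the same update.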
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease, including hypertrophy and ischemia, are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can also be found in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows
NASA Technical Reports Server (NTRS)
Montgomery, Matthew D.; Verdon, Joseph M.
1996-01-01
A three-dimensional, linearized Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected, benchmark three-dimensional, subsonic, unsteady flows, to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis, so that this analysis can be applied to unsteady flows driven by external aerodynamic excitations.
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2017-03-01
In this paper, smooth curve models of a meminductor and a memcapacitor are designed, generalized from a memristor. Based on these models, a new five-dimensional chaotic oscillator that contains a meminductor and a memcapacitor is proposed. Through dimensionality reduction, this five-dimensional system can be transformed into a three-dimensional system. The main work of this paper is to compare the five-dimensional system with its dimensionality-reduction model. To investigate the dynamical behaviors of the two systems, equilibrium points and stabilities are analyzed, and bifurcation diagrams and Lyapunov exponent spectra are used to explore their properties. In addition, digital signal processing technologies are used to realize this chaotic oscillator, and chaotic sequences are generated by the experimental device, which can be used in encryption applications.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
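The multigrid idea — damp oscillatory error with a cheap smoother on the fine grid, then remove the remaining smooth error on a coarser grid — can be illustrated on a 1-D Poisson problem. This is a minimal two-grid V-cycle with weighted-Jacobi smoothing, full-weighting restriction, and an exact coarse solve; it illustrates the concept only and is unrelated to the Proteus implementation:

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def v_cycle(u, f, h):
    """Two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    rc = np.zeros(nc)
    # full-weighting restriction of the residual to the coarse grid
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h
    # solve the coarse tridiagonal problem exactly
    A = (np.diag(2.0 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # prolong the coarse correction by linear interpolation
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return jacobi(u + e, f, h, sweeps=3)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3)
```

A full multigrid code recurses on the coarse solve instead of inverting it directly, but even this two-grid version converges in a handful of cycles where plain Jacobi would need thousands of sweeps.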
Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri
2018-07-01
This paper deals with the implementation of high-order semi-Lagrangian particle methods to handle high-dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and the paper focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high-dimensional kinetic equations by particle methods.
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Fromm, F. R.; Northouse, R. A.
1974-01-01
A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.
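The paper's own interval-count heuristic is not reproduced in the abstract; as a stand-in, a classic rule of thumb for the same decision (Sturges' rule, an assumption here, not the authors' method) looks like this:

```python
import math
import numpy as np

def sturges_bins(n_samples):
    """Sturges' rule: a classic heuristic for histogram bin count."""
    return int(math.ceil(math.log2(n_samples))) + 1

rng = np.random.default_rng(0)
data = rng.normal(size=1024)
k = sturges_bins(len(data))            # 11 bins for 1024 samples
counts, edges = np.histogram(data, bins=k)
print(k, counts.sum())
```

Any such rule trades resolution against noise in the frequency distribution, which is exactly the sensitivity a dimensional-information clustering algorithm inherits from its histograms.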
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angles for blade designs based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
Results of an Oncology Clinical Trial Nurse Role Delineation Study.
Purdom, Michelle A; Petersen, Sandra; Haas, Barbara K
2017-09-01
To evaluate the relevance of a five-dimensional model of clinical trial nursing practice in an oncology clinical trial nurse population, a web-based cross-sectional survey was conducted online via Qualtrics. The sample comprised 167 oncology nurses throughout the United States, including 41 study coordinators, 35 direct care providers, and 91 dual-role nurses who provide both direct patient care and trial coordination. Principal components analysis of the self-reported frequency of 59 activities was used to determine the dimensions of oncology clinical trial nursing practice. The results did not support the original five-dimensional model of nursing care but revealed a more multidimensional model: analysis of the frequency data yielded an eight-dimensional model of oncology research nursing, comprising care, manage study, expert, lead, prepare, data, advance science, and ethics. This evidence-based model expands understanding of the multidimensional roles of oncology nurses caring for patients with cancer enrolled in clinical trials.
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok
2012-01-01
This paper describes a finite-volume-based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions for Poiseuille flow, Couette flow, and flow in a driven cavity.
Anomalous heat conduction in a one-dimensional ideal gas.
Casati, Giulio; Prosen, Tomaz
2003-01-01
We provide firm convincing evidence that the energy transport in a one-dimensional gas of elastically colliding free particles of unequal masses is anomalous, i.e., the Fourier law does not hold. Our conclusions are confirmed by a theoretical and numerical analysis based on a Green-Kubo-type approach specialized to momentum-conserving lattices.
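The elementary event in such a gas is an elastic collision between two particles of unequal masses, which conserves both momentum and kinetic energy. A minimal sketch (not the authors' simulation code):

```python
import numpy as np

def collide(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision,
    conserving both momentum and kinetic energy."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, m2 = 1.0, 3.0
v1, v2 = 2.0, -1.0
v1p, v2p = collide(m1, v1, m2, v2)

# both conservation laws hold exactly
print(np.isclose(m1*v1 + m2*v2, m1*v1p + m2*v2p))
print(np.isclose(m1*v1**2 + m2*v2**2, m1*v1p**2 + m2*v2p**2))
```

For equal masses the particles simply exchange velocities, so the dynamics is effectively that of non-interacting particles; the mass inequality is what makes the transport problem nontrivial.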
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
Kinematics of swimming of the manta ray: three-dimensional analysis of open-water maneuverability.
Fish, Frank E; Kolpas, Allison; Crossett, Andrew; Dudas, Michael A; Moored, Keith W; Bart-Smith, Hilary
2018-03-22
For aquatic animals, turning maneuvers represent a locomotor activity that may not be confined to a single coordinate plane, making analysis difficult, particularly in the field. To measure turning performance in a three-dimensional space for the manta ray (Mobula birostris), a large open-water swimmer, scaled stereo video recordings were collected. Movements of the cephalic lobes, eye and tail base were tracked to obtain three-dimensional coordinates. A mathematical analysis was performed on the coordinate data to calculate the turning rate and curvature (1/turning radius) as a function of time by numerically estimating the derivative of manta trajectories through three-dimensional space. Principal component analysis was used to project the three-dimensional trajectory onto the two-dimensional turn. Smoothing splines were applied to these turns. These are flexible models that minimize a cost function with a parameter controlling the balance between data fidelity and regularity of the derivative. Data for 30 sequences of rays performing slow, steady turns showed that the highest 20% of turning rates and the smallest 20% of turn radii were 42.65 ± 16.66 deg s⁻¹ and 2.05 ± 1.26 m, respectively. Such turning maneuvers fall within the range of performance exhibited by swimmers with rigid bodies. © 2018. Published by The Company of Biologists Ltd.
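The turning-rate and curvature computation described above — numerically differentiating the trajectory and using κ = |v × a| / |v|³, with turning rate equal to speed times curvature — can be sketched with plain finite differences (omitting the smoothing splines and PCA projection the authors apply):

```python
import numpy as np

def turning_metrics(X, dt):
    """Speed, turning rate (deg/s), and curvature (1/m) along a
    trajectory X of shape (n, 3) sampled every dt seconds."""
    v = np.gradient(X, dt, axis=0)           # velocity
    a = np.gradient(v, dt, axis=0)           # acceleration
    speed = np.linalg.norm(v, axis=1)
    # curvature kappa = |v x a| / |v|^3
    kappa = np.linalg.norm(np.cross(v, a), axis=1) / speed**3
    turn_rate = np.degrees(kappa * speed)    # deg/s
    return speed, turn_rate, kappa

# Synthetic check: a horizontal circle of radius 2 m at 1 rad/s
# should give curvature 0.5 1/m and turning rate ~57.3 deg/s.
t = np.arange(0, 2 * np.pi, 0.01)
X = np.column_stack([2 * np.cos(t), 2 * np.sin(t), np.zeros_like(t)])
speed, turn_rate, kappa = turning_metrics(X, 0.01)
print(np.round(np.median(kappa), 3), np.round(np.median(turn_rate), 1))
```

On noisy field data the raw finite differences amplify tracking error, which is why the authors fit smoothing splines before differentiating.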
NASA Technical Reports Server (NTRS)
Moin, Parviz; Spalart, Philippe R.
1987-01-01
The use of simulation data bases for the examination of turbulent flows is an effective research tool. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers, with a limited field of view and contamination caused by time-history effects. Computed flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation data bases also provide much-needed information for phenomenological turbulence modeling. Three-dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulations does not fully address the current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation data bases in studies of the mechanics of turbulent flows.
NASA Astrophysics Data System (ADS)
Song, Huimin
In the aerospace and automotive industries, many finite element analyses use lower-dimensional finite elements such as beams, plates and shells, to simplify the modeling. These simplified models can greatly reduce the computation time and cost; however, reduced-dimensional models may introduce inaccuracies, particularly near boundaries and near portions of the structure where reduced-dimensional models may not apply. Another factor in creation of such models is that beam-like structures frequently have complex geometry, boundaries and loading conditions, which may make them unsuitable for modeling with a single type of element. The goal of this dissertation is to develop a method that can accurately and efficiently capture the response of a structure by rigorous combination of a reduced-dimensional beam finite element model with a model based on full two-dimensional (2D) or three-dimensional (3D) finite elements. The first chapter of the thesis gives the background of the present work and some related previous work. The second chapter is focused on formulating a system of equations that govern the joining of a 2D model with a beam model for planar deformation. The essential aspect of this formulation is to find the transformation matrices to achieve deflection and load continuity on the interface. Three approaches are provided to obtain the transformation matrices. An example based on joining a beam to a 2D finite element model is examined, and the accuracy of the analysis is studied by comparing joint results with the full 2D analysis. The third chapter is focused on formulating the system of equations for joining a beam to a 3D finite element model for static and free-vibration problems. The transition between the 3D elements and beam elements is achieved by use of the stress recovery technique of the variational-asymptotic method as implemented in VABS (the Variational Asymptotic Beam Section analysis). 
The formulations for an interface transformation matrix and the generalized Timoshenko beam are discussed in this chapter. VABS is also used to obtain the beam constitutive properties and warping functions for stress recovery. Several 3D-beam joint examples are presented to show the convergence and accuracy of the analysis. Accuracy is assessed by comparing the joint results with the full 3D analysis. The fourth chapter provides conclusions from the present studies and recommendations for future work.
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques have been employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The potential of the proposed methodology has been demonstrated in the analysis of large displacements during contact experiments on a soft material block. Displacement results have been successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the applicability of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.
Dou, Xinyuan; Chen, Xiaonan; Chen, Maggie Yihong; Wang, Alan Xiaolong; Jiang, Wei; Chen, Ray T
2010-03-01
In this paper, we report a theoretical study of polymer-based photonic crystals for laser beam steering, based on the superprism effect, as well as the experimental fabrication of two-dimensional photonic crystals for laser beam steering. The superprism effect, the principle behind the beam steering, was separately studied in detail through equifrequency contour (EFC) analysis. Polymer-based photonic crystals were fabricated through a double-exposure holographic interference method using SU8-2007. The experimental results showed a beam steering angle of 10 degrees for a 30 nm wavelength variation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven
During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high-dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high-dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high-dimensional information space to both two-dimensional (2-D) and three-dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
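The tf-idf document-term weighting that defines the high-dimensional point cloud can be sketched in plain Python. The toy documents, whitespace tokenization, and the log(N/df) idf variant are assumptions for illustration:

```python
import math
from collections import Counter

docs = [
    "plasma etching simulation of plasma flux",
    "document clustering and topic analysis",
    "topology based analysis of document collections",
]
tokenized = [d.split() for d in docs]

# document frequency: number of documents containing each term
df = Counter(t for doc in tokenized for t in set(doc))
N = len(docs)

def tfidf(doc):
    """tf-idf vector (as a dict) for one tokenized document."""
    tf = Counter(doc)
    return {t: (c / len(doc)) * math.log(N / df[t]) for t, c in tf.items()}

vec = tfidf(tokenized[2])
# terms unique to one document get a higher idf weight than shared terms
print(vec["topology"] > vec["analysis"])
```

Each document becomes one point whose dimension equals the vocabulary size, which is exactly why distance-based analysis struggles and a topology-based projection is attractive.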
Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua
2011-12-01
A method was established to analyse overlapped chromatographic peaks based on the chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data were first de-noised and normalized; second, the differences among, and the clustering of, the spectra at different time points were calculated; then the purity of the whole chromatographic peak was analysed and the region was sought out in which the spectra at different time points were stable. The feature spectra were extracted from the spectrum-stable region as the basic foundation. The non-negative least-squares method was chosen to separate the overlapped peaks and obtain the flow curve based on each feature spectrum. The three-dimensional resolved chromatographic-spectral peak could be obtained by matrix operations of the feature spectra with the flow curves. The results showed that this method could separate the overlapped peaks.
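The core unmixing step — non-negative least squares of the mixed spectrum against known feature spectra — can be sketched with SciPy. The two-compound spectra below are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical feature spectra of two co-eluting compounds sampled
# at 5 wavelengths (one column per compound), assumed extracted
# from spectrum-stable regions of the peak.
S = np.array([[1.0, 0.1],
              [0.8, 0.3],
              [0.4, 0.9],
              [0.2, 1.0],
              [0.1, 0.6]])

# Mixed spectrum observed at one time point:
# 2 parts compound A plus 0.5 parts compound B.
mixed = S @ np.array([2.0, 0.5])

# Non-negative least squares recovers each compound's contribution
# to the overlapped peak at this time point.
coeffs, resid = nnls(S, mixed)
print(np.round(coeffs, 3))   # approximately [2.0, 0.5]
```

Repeating this fit at every time point yields one non-negative elution ("flow") curve per compound, from which the resolved three-dimensional peaks are rebuilt by matrix multiplication.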
Two-dimensional random surface model for asperity-contact in elastohydrodynamic lubrication
NASA Technical Reports Server (NTRS)
Coy, J. J.; Sidik, S. M.
1979-01-01
Relations for the asperity-contact time function during elastohydrodynamic lubrication of a ball bearing are presented. The analysis is based on a two-dimensional random surface model, and actual profile traces of the bearing surfaces are used as statistical sample records. The results of the analysis show that the transition from 90 percent contact to 1 percent contact occurs within a dimensionless film thickness range of approximately four to five. This thickness ratio is several times larger than reported in the literature, where one-dimensional random surface models were used. It is shown that low-pass filtering of the statistical records will bring agreement between the present results and those in the literature.
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments, with their inherent invariance property, were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R²) for the training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
Fast, Exact Bootstrap Principal Component Analysis for p > 1 million
Fisher, Aaron; Caffo, Brian; Schwartz, Brian; Zipunnikov, Vadim
2015-01-01
Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the MRI dataset, our method allows for standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods. PMID:27616801
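The low-dimensional trick can be sketched in a few lines: one SVD of the full data expresses every bootstrap sample in the same n-dimensional subspace, so each bootstrap PCA only has to factor an n × n matrix. This is a minimal illustration on random data; the sizes and variable names are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 5000, 40                        # many measurements, few subjects
X = rng.standard_normal((p, n))

# One SVD of the full data; every bootstrap sample lies in span(U).
U, d, Vt = np.linalg.svd(X, full_matrices=False)   # U is p x n
R = np.diag(d) @ Vt                                # n x n low-dim coordinates

def bootstrap_pc1(R, U, rng, n_boot=200):
    """Bootstrap PC1 loadings without any p-dimensional SVD in the loop."""
    n = R.shape[1]
    pcs = np.empty((n_boot, U.shape[0]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                # resample subjects
        Ub = np.linalg.svd(R[:, idx], full_matrices=False)[0]
        v = Ub[:, 0] if Ub[0, 0] >= 0 else -Ub[:, 0]   # fix sign ambiguity
        pcs[b] = U @ v                             # rotate back to p dims
    return pcs

pcs = bootstrap_pc1(R, U, rng)
se = pcs.std(axis=0)                               # bootstrap SE per loading
```

Because `X[:, idx] = U @ R[:, idx]` and the columns of `U` are orthonormal, the components computed this way are exact, not approximate: only the rotation back to p dimensions touches the high-dimensional space.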
CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets
Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.
2017-01-01
High-dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
Analysis of absorption and reflection mechanisms in a three-dimensional plate silencer
NASA Astrophysics Data System (ADS)
Wang, Chunqi; Huang, Lixi
2008-06-01
When a segment of a rigid duct is replaced by a plate backed by a hard-walled cavity, grazing incident sound waves induce plate vibration, hence sound reflection. Based on this mechanism, a broadband plate silencer, which works effectively from low to medium frequencies, has been developed recently. A typical plate silencer consists of an expansion chamber with two side-branch cavities covered by light but extremely stiff plates. Such a configuration is two-dimensional in nature. In this paper, the numerical study is extended to three-dimensional configurations to investigate the potential improvement in sound reflection. Finite element simulation shows that the three-dimensional configurations perform better than the corresponding two-dimensional design, especially in the relatively high frequency region. Further analysis shows that the three-dimensional design gives better plate response at higher axial modes than the simple two-dimensional design. A sound absorption mechanism is also introduced to the plate silencer by adding two dissipative chambers on the two lateral sides of a two-cavity wave reflector, hence a hybrid silencer. Numerical simulation shows that the proposed hybrid silencer is able to achieve good performance over a moderate bandwidth with a much reduced total length in comparison with a pure absorption design.
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than both in terms of mean squared error, when a bias-based analysis is used.
NASA Astrophysics Data System (ADS)
Aksenova, Olesya; Pachkina, Anna
2017-11-01
The article deals with the need to transform the educational process to meet the requirements of the modern mining industry: cooperative development of new educational programs and implementation of the educational process that takes modern manufacturing capability into account. The paper argues for introducing into the training of mining professionals the study of three-dimensional models of the surface technological complex, ore reserves and the underground digging complex, as well as the creation of these models in different graphic editors and work with the information analysis model obtained on the basis of these three-dimensional models. It also covers the technological process of manless coal mining at the Polysaevskaya mine, controlled by information analysis models built on the basis of three-dimensional models of individual objects and of the technological process as a whole, which in turn requires staff able to use programs for three-dimensional positioning of miners and equipment in a global frame of reference.
On the precision of quasi steady state assumptions in stochastic dynamics
NASA Astrophysics Data System (ADS)
Agarwal, Animesh; Adams, Rhys; Castellani, Gastone C.; Shouval, Harel Z.
2012-07-01
Many biochemical networks have complex multidimensional dynamics and there is a long history of methods that have been used for dimensionality reduction for such reaction networks. Usually a deterministic mass action approach is used; however, in small volumes, there are significant fluctuations from the mean which the mass action approach cannot capture. In such cases stochastic simulation methods should be used. In this paper, we evaluate the applicability of one such dimensionality reduction method, the quasi-steady state approximation (QSSA) [L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z. 49, 333-369 (1913)], in the case of stochastic dynamics. First, the applicability of the QSSA approach is evaluated for a canonical system of enzyme reactions. Application of the QSSA to such a reaction system in a deterministic setting leads to Michaelis-Menten reduced kinetics which can be used to derive the equilibrium concentrations of the reaction species. In the case of stochastic simulations, however, the steady state is characterized by fluctuations around the mean equilibrium concentration. Our analysis shows that a QSSA-based approach for dimensionality reduction captures well the mean of the distribution as obtained from a full dimensional simulation but fails to accurately capture the distribution around that mean. Moreover, the QSSA approximation is not unique. We have then extended the analysis to a simple bistable biochemical network model proposed to account for the stability of synaptic efficacies, the substrate of learning and memory [J. E. Lisman, "A mechanism of memory storage insensitive to molecular turnover: A bistable autophosphorylating kinase," Proc. Natl. Acad. Sci. U.S.A. 82, 3055-3057 (1985)], 10.1073/pnas.82.9.3055. Our analysis shows that a QSSA-based dimensionality reduction method results in errors as big as two orders of magnitude in predicting the residence times in the two stable states.
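For the canonical enzyme system, the stochastic side of this comparison can be sketched with a minimal Gillespie simulation of E + S ⇌ ES → E + P. The rate constants and molecule counts below are illustrative assumptions, not the paper's parameters; the QSSA counterpart would replace the two binding steps by the Michaelis-Menten rate v = k_cat·E_tot·S/(K_m + S):

```python
import numpy as np

rng = np.random.default_rng(2)
k_on, k_off, k_cat = 0.01, 0.1, 0.1   # illustrative rate constants
E, S, ES, P = 10, 100, 0, 0           # initial molecule counts
t, t_end = 0.0, 2000.0

while t < t_end:
    a = np.array([k_on * E * S, k_off * ES, k_cat * ES])  # propensities
    a0 = a.sum()
    if a0 == 0:                                  # no reactions left to fire
        break
    t += rng.exponential(1.0 / a0)               # time to next reaction
    r = rng.choice(3, p=a / a0)                  # which reaction fires
    if r == 0:
        E, S, ES = E - 1, S - 1, ES + 1          # E + S -> ES
    elif r == 1:
        E, S, ES = E + 1, S + 1, ES - 1          # ES -> E + S
    else:
        E, ES, P = E + 1, ES - 1, P + 1          # ES -> E + P
```

Repeating the run many times gives the full stationary distribution whose spread, as the abstract notes, a QSSA-reduced model does not reproduce even when it matches the mean.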
NASA Astrophysics Data System (ADS)
Bo, Z.; Chen, J. H.
2010-02-01
The dimensional analysis technique is used to formulate a correlation between ozone generation rate and various parameters that are important in the design and operation of positive wire-to-plate corona discharges in indoor air. The dimensionless relation is determined by linear regression analysis based on the results from 36 laboratory-scale experiments. The derived equation is validated against experimental data and a numerical model published in the literature. Applications of the derived equation are illustrated through an example: selecting the appropriate set of operating conditions in the design and operation of a photocopier to comply with federal regulations on ozone emission. Finally, a new current-voltage characteristic equation is proposed for positive wire-to-plate corona discharges based on the derived dimensionless equation.
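The regression step behind such a correlation can be sketched in a few lines: with dimensionless groups related by a power law Π₀ = C·Π₁^a·Π₂^b, taking logarithms turns the fit into ordinary linear regression. The groups and exponents below are synthetic assumptions for illustration, not the paper's actual variables:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic dimensionless groups from 36 "experiments".
pi1 = rng.uniform(1.0, 10.0, 36)
pi2 = rng.uniform(0.1, 1.0, 36)
true_C, true_a, true_b = 2.5, 1.3, -0.7
pi0 = true_C * pi1**true_a * pi2**true_b * np.exp(0.01 * rng.standard_normal(36))

# Linear regression in log space: ln pi0 = ln C + a ln pi1 + b ln pi2
A = np.column_stack([np.ones(36), np.log(pi1), np.log(pi2)])
coef, *_ = np.linalg.lstsq(A, np.log(pi0), rcond=None)
C_fit, a_fit, b_fit = np.exp(coef[0]), coef[1], coef[2]
```

The same pattern extends to any number of dimensionless groups by adding columns to the design matrix.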
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.
2017-08-01
An inverse thermal analysis of Alloy 690 laser and hybrid laser-GMA welds is presented that uses numerical-analytical basis functions and boundary constraints based on measured solidification cross sections. In particular, the inverse analysis procedure uses three-dimensional constraint conditions such that two-dimensional projections of calculated solidification boundaries are constrained to map within experimentally measured solidification cross sections. Temperature histories calculated by this analysis are input data for computational procedures that predict solid-state phase transformations and mechanical response. These temperature histories can be used for inverse thermal analysis of welds corresponding to other welding processes whose process conditions are within similar regimes.
Skin microrelief profiles as a cutaneous aging index.
Kim, Dai Hyun; Rhyu, Yeon Seung; Ahn, Hyo Hyun; Hwang, Eenjun; Uhm, Chang Sub
2016-10-01
An objective measurement of cutaneous topographical information is important for quantifying the degree of skin aging. Our aim was to improve methods for measuring microrelief patterns using a three-dimensional analysis based on silicone replicas and scanning electron microscope (SEM). Another objective was to compare the results with those obtained using a two-dimensional analysis method based on dermoscopy. Silicone replicas were obtained from forearms, dorsum of the hands and fingers of 51 volunteers. Cutaneous profiles obtained by SEM with silicone replicas showed more consistent correlations with age than data obtained by dermoscopy. This indicates the advantage of three-dimensional topography analysis using silicone replicas and SEM over the widely used dermoscopic assessment. The cutaneous age was calculated using stepwise linear regression, and the result was 57.40-9.47 × (number of furrows on dorsum of the hand) × (width of furrows on dorsum of the hand).
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Starnes, James H., Jr.
1996-01-01
An efficient, approximate analysis for calculating complete three-dimensional stress fields near regions of geometric discontinuities in laminated composite structures is presented. An approximate three-dimensional local analysis is used to determine the detailed local response due to far-field stresses obtained from a global two-dimensional analysis. The stress results from the global analysis are used as traction boundary conditions for the local analysis. A generalized plane deformation assumption is made in the local analysis to reduce the solution domain to two dimensions. This assumption allows out-of-plane deformation to occur. The local analysis is based on the principle of minimum complementary energy and uses statically admissible stress functions that have an assumed through-the-thickness distribution. Examples are presented to illustrate the accuracy and computational efficiency of the local analysis. Comparisons of the results of the present local analysis with the corresponding results obtained from a finite element analysis and from an elasticity solution are presented. These results indicate that the present local analysis predicts the stress field accurately. Computer execution-times are also presented. The demonstrated accuracy and computational efficiency of the analysis make it well suited for parametric and design studies.
Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred
2013-01-01
Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
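The plain Granger regression underlying this methodology can be sketched as a restricted-versus-full least-squares comparison. This is a toy bivariate example with an assumed one-lag coupling, not the authors' factor-model procedure for high-dimensional series:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                      # x drives y with a one-step lag
    x[t] = 0.5 * x[t-1] + rng.standard_normal()
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + rng.standard_normal()

def rss(target, regressors):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    r = target - regressors @ beta
    return r @ r

lag = 1
Y = y[lag:]
own = np.column_stack([np.ones(T - lag), y[:-lag]])   # restricted: own past only
full = np.column_stack([own, x[:-lag]])               # full: add lagged x
rss_r, rss_f = rss(Y, own), rss(Y, full)
F = (rss_r - rss_f) / (rss_f / (T - lag - full.shape[1]))
# A large F means lagged x improves the prediction of y: x Granger-causes y.
```

The numerical problems the abstract mentions arise when this regression is run on dozens of co-moving channels at once; the factor-model step reduces that dimension before the causal test.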
CytoSPADE: high-performance analysis and visualization of high-dimensional cytometry data
Linderman, Michael D.; Simonds, Erin F.; Qiu, Peng; Bruggner, Robert V.; Sheode, Ketaki; Meng, Teresa H.; Plevritis, Sylvia K.; Nolan, Garry P.
2012-01-01
Motivation: Recent advances in flow cytometry enable simultaneous single-cell measurement of 30+ surface and intracellular proteins. CytoSPADE is a high-performance implementation of an interface for the Spanning-tree Progression Analysis of Density-normalized Events algorithm for tree-based analysis and visualization of this high-dimensional cytometry data. Availability: Source code and binaries are freely available at http://cytospade.org and via Bioconductor version 2.10 onwards for Linux, OSX and Windows. CytoSPADE is implemented in R, C++ and Java. Contact: michael.linderman@mssm.edu Supplementary Information: Additional documentation available at http://cytospade.org. PMID:22782546
Variational asymptotic modeling of composite dimensionally reducible structures
NASA Astrophysics Data System (ADS)
Yu, Wenbin
A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model being as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is adopted to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed. 
The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement proves that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin
2015-03-01
We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.
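A region-based pass of this kind, class-wise thresholding, region labeling, then morphological cleanup and per-region measurement, can be sketched on a synthetic 2-D slice with `scipy.ndimage`. The grey levels and geometry below are invented for illustration; the real analysis works on 3-D micro-CT volumes:

```python
import numpy as np
from scipy import ndimage

# Synthetic slice: two bright "bone" regions and one "implant" region at an
# intermediate grey level, where a single global threshold would fail.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0      # bone region 1
img[40:55, 40:55] = 1.0      # bone region 2
img[30:38, 5:15] = 0.5       # absorbable implant (intermediate grey level)

# Region-based step: split into candidate classes, then label connected regions.
bone_mask = img > 0.75
implant_mask = (img > 0.25) & (img <= 0.75)
bone_labels, n_bone = ndimage.label(bone_mask)
implant_labels, n_implant = ndimage.label(implant_mask)

# Morphological cleanup of the implant mask, then per-region areas
# (the 2-D analogue of the volumetric quantities VBIC and VA).
implant_clean = ndimage.binary_opening(implant_mask)
bone_areas = ndimage.sum(bone_mask, bone_labels, index=range(1, n_bone + 1))
```

In 3-D the same calls apply unchanged to a volume array, and summing labeled voxels gives the volumetric quantities directly.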
NASA Astrophysics Data System (ADS)
Ignatov, D.; Zhurbina, N.; Gerasimenko, A.
2017-01-01
3-D composites are widely used in tissue engineering. A comprehensive analysis by X-ray microtomography was conducted to study the structure of the 3-D composites, consisting of scanning, image reconstruction from shadow projections, two-dimensional and three-dimensional visualization of the reconstructed images, and quantitative analysis of the samples. Experimental samples of the composites were formed by laser vaporization of an aqueous dispersion of BSA with single-walled (SWCNTs) or multi-walled (MWCNTs) carbon nanotubes. The samples have a homogeneous structure over the entire volume; the porosity of the 3-D composites based on SWCNTs and MWCNTs is 16.44% and 28.31%, respectively, and the average pore diameters are 45 μm and 93 μm, respectively. 3-D composites based on carbon nanotubes in a bovine serum albumin matrix can be used in tissue engineering of bone and cartilage, supporting cell proliferation and blood vessel sprouting.
On-line analysis of algae in water by discrete three-dimensional fluorescence spectroscopy.
Zhao, Nanjing; Zhang, Xiaoling; Yin, Gaofang; Yang, Ruifang; Hu, Li; Chen, Shuang; Liu, Jianguo; Liu, Wenqing
2018-03-19
In view of the problem of on-line measurement for algae classification, a method of algae classification and concentration determination based on discrete three-dimensional fluorescence spectra was studied in this work. The discrete three-dimensional fluorescence spectra of twelve common species of algae belonging to five categories were analyzed, discrete three-dimensional standard spectra of the five categories were built, and recognition, classification and concentration prediction of the algae categories were realized by the discrete three-dimensional fluorescence spectra coupled with non-negative weighted least-squares linear regression analysis. The results show that similarities between the discrete three-dimensional standard spectra of different categories were reduced and the accuracies of recognition, classification and concentration prediction of the algae categories were significantly improved. Compared with the chlorophyll a fluorescence excitation spectra method, the recognition accuracy rate for pure samples by discrete three-dimensional fluorescence spectra is improved by 1.38%, and the recovery rate and classification accuracy for pure diatom samples by 34.1% and 46.8%, respectively; for mixed samples, the recognition accuracy rate is enhanced by 26.1%, the recovery rate of mixed samples with Chlorophyta by 37.8%, and the classification accuracy of mixed samples with diatoms by 54.6%.
Huang, Xiao Yan; Shan, Zhi Jie; Zhai, Hong Lin; Li, Li Na; Zhang, Xiao Yun
2011-08-22
Heat shock protein 90 (Hsp90) takes part in the development of several cancers. Novobiocin, a typical C-terminal inhibitor of Hsp90, will probably be used as an important anticancer drug in the future. In this work, we extracted valuable information and designed new novobiocin derivatives based on a three-dimensional quantitative structure-activity relationship (3D QSAR). Comparative molecular field analysis and comparative molecular similarity indices analysis models with high predictive capability were established, and their reliability is supported by the statistical parameters. Based on several important influence factors obtained from these models, six new novobiocin derivatives with higher inhibitory activities were designed and confirmed by molecular simulation with our models, which provides potential anticancer drug leads for further research.
Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection
NASA Astrophysics Data System (ADS)
Mingji, Huang; Geng, Wu; yan, Shan
2017-11-01
The traditional casting process is complex, and the mold is essential to the product: mold quality directly affects product quality. Rapid prototyping by 3D printing is used to produce the mold prototype. The wax model method has the advantages of high speed, low cost and the ability to form complex structures. Using orthogonal experiments as the main method, the factors affecting dimensional precision are analysed. The purpose is to obtain the optimal process parameters and to improve the dimensional accuracy of production based on droplet injection molding.
van Unen, Vincent; Höllt, Thomas; Pezzotti, Nicola; Li, Na; Reinders, Marcel J T; Eisemann, Elmar; Koning, Frits; Vilanova, Anna; Lelieveldt, Boudewijn P F
2017-11-23
Mass cytometry allows high-resolution dissection of the cellular composition of the immune system. However, the high dimensionality, large size, and non-linear structure of the data pose considerable challenges for the data analysis. In particular, dimensionality reduction-based techniques like t-SNE offer single-cell resolution but are limited in the number of cells that can be analyzed. Here we introduce Hierarchical Stochastic Neighbor Embedding (HSNE) for the analysis of mass cytometry data sets. HSNE constructs a hierarchy of non-linear similarities that can be interactively explored with a stepwise increase in detail up to the single-cell level. We apply HSNE to a study on gastrointestinal disorders and three other available mass cytometry data sets. We find that HSNE efficiently replicates previous observations and identifies rare cell populations that were previously missed due to downsampling. Thus, HSNE removes the scalability limit of conventional t-SNE analysis, a feature that makes it highly suitable for the analysis of massive high-dimensional data sets.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(α) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(α) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia
2014-03-01
Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed for the conventional full analytical approach. Optical experimental results are demonstrated which prove that the proposed method can effectively reconstruct three-dimensional scenes.
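The pseudo-inverse step that recovers the affine matrix between a primitive triangle and an arbitrary triangle can be sketched directly in homogeneous coordinates. The vertex coordinates below are invented; this illustrates only the mapping step, not the spectrum computation:

```python
import numpy as np

# Primitive (reference) triangle and an arbitrary target triangle in 3-D;
# each column is one vertex.
prim = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 0.0, 0.0]])
arb = np.array([[2.0, 3.5, 1.0],
                [1.0, 2.0, 4.0],
                [0.5, 0.5, 2.0]])

# Homogeneous coordinates; the pseudo-inverse gives the minimum-norm affine
# matrix that maps the primitive vertices exactly onto the arbitrary ones.
prim_h = np.vstack([prim, np.ones(3)])      # 4 x 3
A = np.linalg.pinv(prim_h)                  # 3 x 4 pseudo-inverse
affine = arb @ A                            # 3 x 4 affine transformation matrix

mapped = affine @ prim_h                    # reproduces the arbitrary triangle
```

Because three vertices under-determine a full 3-D affine transform, the pseudo-inverse picks the minimum-norm solution; any triangle congruence problem of this kind reduces to one matrix product once `A` is cached for the primitive triangle.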
3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform
NASA Astrophysics Data System (ADS)
Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul
2018-03-01
This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes: the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an analysis in which the maximum percent error obtained from the computation is approximately 1.4% for the height, and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics-based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark-quality data from two- and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost-effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. Surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity, which is used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion are demonstrated with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
Vibrational response analysis of tires using a three-dimensional flexible ring-based model
NASA Astrophysics Data System (ADS)
Matsubara, Masami; Tajiri, Daiki; Ise, Tomohiko; Kawamura, Shozo
2017-11-01
Tire vibration characteristics influence noise, vibration, and harshness. Hence, there have been many investigations of the dynamic responses of tires. In this paper, we present new formulations for the prediction of tire tread vibrations below 150 Hz using a three-dimensional flexible ring-based model. The ring represents the tread, including the belt, and the springs represent the tire sidewall stiffness. The equations of motion for lateral, longitudinal, and radial vibration of the tread are derived based on the assumption of inextensional deformation. Many of the associated numerical parameters are identified from experimental tests. Unlike most studies of flexible ring models, which mainly discuss radial and circumferential vibration, this study presents steady response functions for lateral as well as radial and circumferential vibration using the three-dimensional flexible ring-based model. The results of the impact tests described confirm the theoretical findings, showing reasonable agreement with the predictions.
NASA Technical Reports Server (NTRS)
Kradinov, V.; Madenci, E.; Ambur, D. R.
2004-01-01
Although two-dimensional methods provide accurate predictions of contact stresses and bolt load distribution in bolted composite joints with multiple bolts, they fail to capture the effect of thickness on the strength prediction. Typically, the plies close to the interface of laminates are expected to be the most highly loaded, due to bolt deformation, and they are usually the first to fail. This study presents an analysis method to account for the variation of stresses in the thickness direction by augmenting a two-dimensional analysis with a one-dimensional through the thickness analysis. The two-dimensional in-plane solution method based on the combined complex potential and variational formulation satisfies the equilibrium equations exactly, and satisfies the boundary conditions and constraints by minimizing the total potential. Under general loading conditions, this method addresses multiple bolt configurations without requiring symmetry conditions while accounting for the contact phenomenon and the interaction among the bolts explicitly. The through-the-thickness analysis is based on the model utilizing a beam on an elastic foundation. The bolt, represented as a short beam while accounting for bending and shear deformations, rests on springs, where the spring coefficients represent the resistance of the composite laminate to bolt deformation. The combined in-plane and through-the-thickness analysis produces the bolt/hole displacement in the thickness direction, as well as the stress state in each ply. The initial ply failure predicted by applying the average stress criterion is followed by a simple progressive failure. Application of the model is demonstrated by considering single- and double-lap joints of metal plates bolted to composite laminates.
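The bolt idealization above, a short beam resting on springs, is the classical beam-on-elastic-foundation problem. A minimal sketch of the textbook infinite-beam point-load solution (Hetenyi form) is given below; it neglects the shear deformation and finite bolt length that the paper's model includes, and the function and parameter names are illustrative:

```python
import numpy as np

def beam_on_foundation_deflection(P, E, I, k, x):
    """Deflection w(x) of an infinite beam on an elastic foundation under
    a point load P at x = 0 (classical Hetenyi solution):
        beta = (k / (4 E I))^(1/4)
        w(x) = (P beta / (2 k)) e^(-beta|x|) (cos(beta|x|) + sin(beta|x|))
    E I is the beam bending stiffness and k the foundation modulus, here
    representing the laminate's resistance to bolt deformation."""
    beta = (k / (4.0 * E * I)) ** 0.25
    bx = beta * np.abs(x)
    return (P * beta / (2.0 * k)) * np.exp(-bx) * (np.cos(bx) + np.sin(bx))
```

The peak deflection under the load, P*beta/(2k), is the quantity a spring-coefficient fit would target when calibrating the foundation modulus to ply-level stiffness.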
Chen, Mounter C Y; Lu, Po-Chien; Chen, James S Y; Hwang, Ned H C
2005-01-01
Coronary stents are supportive wire meshes that keep narrow coronary arteries patent, reducing the risk of restenosis. Despite the common use of coronary stents, approximately 20-35% of them fail due to restenosis. Flow phenomena adjacent to the stent may contribute to restenosis. Three-dimensional computational fluid dynamics (CFD) and reconstruction based on biplane cine angiography were used to assess coronary geometry and volumetric blood flows. A patient-specific left anterior descending (LAD) artery was reconstructed from single-plane x-ray imaging. With corresponding electrocardiographic signals, images from the same time phase were selected from the angiograms for dynamic three-dimensional reconstruction. The resultant three-dimensional LAD artery at end-diastole was adopted for detailed analysis. Both the geometries and flow fields, based on a computational model from CAE software (ANSYS and CATIA) and the full three-dimensional Navier-Stokes equations in the CFD-ACE+ software, respectively, changed dramatically after stent placement. Flow fields showed a complex three-dimensional spiral motion due to arterial tortuosity. The corresponding wall shear stresses, pressure gradient, and flow field all varied significantly after stent placement. Combined angiography and CFD techniques allow more detailed investigation of flow patterns in various segments. The implanted stent(s) may be quantitatively studied with the proposed hemodynamic modeling approach.
A 3-D turbulent flow analysis using finite elements with k-ɛ model
NASA Astrophysics Data System (ADS)
Okuda, H.; Yagawa, G.; Eguchi, Y.
1989-03-01
This paper describes the finite element turbulent flow analysis, which is suitable for three-dimensional large scale problems. The k-ɛ turbulence model as well as the conservation equations of mass and momentum are discretized in space using rather low order elements. Resulting coefficient matrices are evaluated by one-point quadrature in order to reduce the computational storage and the CPU cost. The time integration scheme based on the velocity correction method is employed to obtain steady state solutions. For the verification of this FEM program, two-dimensional plenum flow is simulated and compared with experiment. As the application to three-dimensional practical problems, the turbulent flows in the upper plenum of the fast breeder reactor are calculated for various boundary conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical Coherence Tomography is often used in medical image acquisition for diagnosing retinal changes because it is easy to use and low in price. Unfortunately, this type of examination produces only a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction, and 3-dimensional reconstruction. At the data acquisition step, Optical Coherence Tomography produced six *.jpg images for each patient, which were then extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
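The combination step, six one-dimensional arrays interpolated onto a surface grid, can be sketched as follows. Inverse-distance weighting stands in for the kriging performed in SURFER9, and the radial-scan layout, grid size, and function names are assumptions made for illustration:

```python
import numpy as np

def reconstruct_macula(profiles, angles_deg, grid_size=481):
    """Combine 1-D retinal-thickness profiles, one per radial OCT scan
    line through the macula, into a 2-D thickness map on a square grid.
    Inverse-distance weighting is a simple stand-in for the kriging
    interpolation described in the abstract (SURFER9 is not used here)."""
    n = len(profiles[0])
    r = np.linspace(-1.0, 1.0, n)            # radial position along each scan
    xs, ys, zs = [], [], []
    for profile, ang in zip(profiles, angles_deg):
        t = np.deg2rad(ang)
        xs.append(r * np.cos(t))
        ys.append(r * np.sin(t))
        zs.append(np.asarray(profile, float))
    x, y, z = map(np.concatenate, (xs, ys, zs))

    gx, gy = np.meshgrid(np.linspace(-1, 1, grid_size),
                         np.linspace(-1, 1, grid_size))
    d2 = (gx[..., None] - x) ** 2 + (gy[..., None] - y) ** 2
    w = 1.0 / (d2 + 1e-12)                   # inverse-distance weights
    return (w * z).sum(-1) / w.sum(-1)       # (grid_size, grid_size) map
```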
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Brendon A.; Marney, Luke C.; Siegler, William C.
Multi-dimensional chromatographic instrumentation produces information-rich and chemically complex data containing meaningful chemical signals and/or chemical patterns. Two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC – TOFMS) is a prominent instrumental platform that has been applied extensively for discovery-based experimentation, where samples are sufficiently volatile or amenable to derivatization. Use of GC × GC – TOFMS and associated data analysis strategies aims to uncover meaningful chemical signals or chemical patterns. However, for complex samples, meaningful chemical information is often buried in a background of less meaningful chemical signal and noise. In this report, we utilize the tile-based F-ratio software in concert with the standard addition method by spiking non-native chemicals into a diesel fuel matrix at low concentrations. While the previous work studied the concentration range of 100-1000 ppm, the current study focuses on the 0 ppm to 100 ppm analyte spike range. This study demonstrates the sensitivity and selectivity of the tile-based F-ratio software for discovery of true positives in the non-targeted analysis of a chemically complex and analytically challenging sample matrix. By exploring the low concentration spike levels, we gain a better understanding of the limit of detection (LOD) of the tile-based F-ratio software with GC × GC – TOFMS data.
The Dynamics of Development on the Dimensional Change Card Sorting Task
ERIC Educational Resources Information Center
van Bers, Bianca M. C. W.; Visser, Ingmar; van Schijndel, Tessa J. P.; Mandell, Dorothy J.; Raijmakers, Maartje E. J.
2011-01-01
A widely used paradigm to study cognitive flexibility in preschoolers is the Dimensional Change Card Sorting (DCCS) task. The developmental dynamics of DCCS performance was studied in a cross-sectional design (N = 93, 3 to 5 years of age) using a computerized version of the standard DCCS task. A model-based analysis of the data showed that…
The Two- and Three-Dimensional Models of the HK-WISC: A Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Chan, David W.; Lin, Wen-Ying
1996-01-01
Confirmatory analyses on the Hong Kong Wechsler Intelligence Scale for Children (HK-WISC) provided support for composite score interpretation based on the two- and three-dimensional models across age levels. Test sample was comprised of 1,100 children, ranging in age from 5 to 15 years at all 11 age levels specified by the HK-WISC. (KW)
NASA Astrophysics Data System (ADS)
Chen, Min; Pukhov, Alexander; Peng, Xiao-Yu; Willi, Oswald
2008-10-01
Terahertz (THz) radiation from the interaction of ultrashort laser pulses with gases is studied both by theoretical analysis and particle-in-cell (PIC) simulations. A one-dimensional THz generation model based on the transient ionization electric current mechanism is given, which explains the results of one-dimensional PIC simulations. At the same time, the relation between the final THz field and the initial transient ionization current is shown. One- and two-dimensional simulations show that for THz generation the contribution of the electric current due to ionization is much larger than the one driven by the usual ponderomotive force. Ionization current generated by different laser pulses and gases is also studied numerically. Based on the numerical results we explain the scaling laws for THz emission observed in the recent experiments performed by Xie et al. [Phys. Rev. Lett. 96, 075005 (2006)]. We also study the effective parameter region for carrier envelope phase measurement by the use of THz generation.
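The transient-ionization-current mechanism can be illustrated with a one-dimensional numerical sketch: an electron born at time t0 in the laser field keeps a drift velocity set by the remaining integral of the field, so the ionization-weighted sum of these drifts gives the residual current that radiates the THz pulse. Units and signs are simplified (e/m = 1) and the names are illustrative; this is a toy model of the mechanism, not the paper's PIC simulation:

```python
import numpy as np

def residual_current(t, E, ionization_rate):
    """Residual drift current in the transient-ionization-current picture.
    An electron born at t0 in field E(t) ends with drift velocity
        v = -(e/m) * integral_{t0}^{t_end} E dt'     (e/m set to 1),
    so the net current is the ionization-weighted average of these
    final velocities.  A minimal 1-D numerical sketch of the mechanism."""
    dt = t[1] - t[0]
    A = -np.cumsum(E) * dt                # minus the running integral of E
    weights = ionization_rate / ionization_rate.sum()
    # electron born at index i keeps final velocity A[-1] - A[i]
    return np.sum(weights * (A[-1] - A))
```

For a symmetric pulse, electrons born on the rising and falling edges cancel unless ionization is localized in time, which is the essence of the net-current (and hence THz) generation.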
NASA Astrophysics Data System (ADS)
Jiang, Li; Xuan, Jianping; Shi, Tielin
2013-12-01
Generally, the vibration signals of faulty machinery are non-stationary and nonlinear under complicated operating conditions. Therefore, it is a big challenge for machinery fault diagnosis to extract optimal features for improving classification accuracy. This paper proposes semi-supervised kernel Marginal Fisher analysis (SSKMFA) for feature extraction, which can discover the intrinsic manifold structure of dataset, and simultaneously consider the intra-class compactness and the inter-class separability. Based on SSKMFA, a novel approach to fault diagnosis is put forward and applied to fault recognition of rolling bearings. SSKMFA directly extracts the low-dimensional characteristics from the raw high-dimensional vibration signals, by exploiting the inherent manifold structure of both labeled and unlabeled samples. Subsequently, the optimal low-dimensional features are fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories and severities of bearings. The experimental results demonstrate that the proposed approach improves the fault recognition performance and outperforms the other four feature extraction methods.
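The pipeline above, dimensionality reduction followed by the simplest KNN classifier, can be sketched as follows. Plain PCA is used here as a generic stand-in for SSKMFA (which additionally exploits label information, kernels, and manifold structure); the function names are illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    """Project data onto its top-k principal components; a simple,
    unsupervised stand-in for the SSKMFA projection in the paper."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def knn_predict(train_X, train_y, test_X, k=1):
    """K-nearest-neighbour majority vote, the classifier applied to the
    reduced features in the paper."""
    preds = []
    for x in test_X:
        idx = np.argsort(((train_X - x) ** 2).sum(1))[:k]
        vals, counts = np.unique(train_y[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```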
Lessard-Phillips, Laurence
2015-01-01
In this article I explore the dimensionality of the long-term experiences of the main ethnic minority groups (their adaptation) in Britain. Using recent British data, I apply factor analysis to uncover the underlying number of factors behind variables deemed to be representative of the adaptation experience within the literature. I then attempt to assess the groupings of adaptation present in the data, to see whether a typology of adaptation exists (i.e. whether adaptation in different dimensions can be concomitant with others). The analyses provide an empirical evidence base to reflect on: (1) the extent of group differences in the adaptation process, which may cut across ethnic and generational lines; and (2) whether the uncovered dimensions of adaptation match existing theoretical views and empirical evidence. Results suggest that adaptation should be regarded as a multi-dimensional phenomenon where clear typologies of adaptation based on specific trade-offs (mostly cultural) appear to exist. PMID:28502998
Spectral analysis of magnetic anomalies in and around the Philippine Sea
NASA Astrophysics Data System (ADS)
Tanaka, A.; Ishihara, T.
2009-12-01
Regional compilations of lithospheric structure from various methods and data, and comparisons among them, are useful for understanding lithospheric structure and the processes behind its formation and evolution. We present constraints on the regional variations of the magnetic thickness in and around the Philippine Sea. We used a new global magnetic anomaly dataset [Quesnel et al., 2009], which is CM4-corrected [Comprehensive Model 4; Sabaka et al., 2004], cleaned, and leveled, to clarify the three-dimensional crustal magnetic structure of the Philippine Sea. The Philippine Sea is one of the largest marginal seas of the world. The north-south-trending Kyushu-Palau Ridge divides it into two parts: the West Philippine Basin and the Daito Ridge province in the west, and the Shikoku and Parece Vela Basins in the east. The age of the basins increases westward [Karig, 1971]. There are three ridges in the Daito Ridge province west of the Kyushu-Palau Ridge, the Oki-Daito Ridge, the Daito Ridge, and the Amami Plateau from south to north, with small basins among them. Two-dimensional spectral analysis of marine magnetic anomalies is used to estimate the centroid of magnetic sources (Zo) to constrain the lithospheric structure [Tanaka and Ishihara, 2008]. The method is based on that of Spector and Grant [1970]. The Zo distribution of the Philippine Sea shows shallow magnetic layer areas, approximately less than 10 km deep, in the Shikoku Basin. It also shows variations between deep and shallow magnetic layer areas in the Amami-Daito Province. These patterns correspond to spatial variations of the crustal thickness deduced from three-dimensional gravity modeling [Ishihara and Koda, 2007] and acoustic basement structures [Higuchi et al., 2007]. These three spatial distributions are roughly consistent with each other, although they may contain some scatter and bias due to their different characteristics and errors.
This two-dimensional spectral analysis method is based upon the assumption that the source distribution is random; therefore, when magnetic anomalies represent linear features, an analysis based on ensembles of thin prisms may produce unreliable results. In this case, one-dimensional spectral analysis based on a thin plate model composed of long bars is preferable. Makino and Okubo [1988] developed one-dimensional spectral analysis for marine linear magnetic anomalies: a linear relationship between the natural log of the power-density spectrum of the magnetic profile and the wavelength gives the centroid depth of magnetic sources. The same method is applied to this area. This analysis requires a long profile to resolve deeper structure, and sufficiently long data may not always be available. However, both methods give consistent results, and the obtained Zo distribution provides a comprehensive view of regional-scale features. The correlation between crustal thickness and Zo, and its correspondence with tectonic regime, indicate that Zo is useful for delineating regional crustal thermal structure. It is expected that Zo combined with multidisciplinary data should help to infer geophysical and geological information in less explored regions.
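The spectral estimation of the centroid depth Zo can be illustrated with a one-dimensional sketch in the Spector-and-Grant spirit: the decay rate of the log spectrum with wavenumber gives the source depth. Windowing, detrending, and the choice of the fitted wavenumber band are simplified here and are not the authors' exact procedure:

```python
import numpy as np

def centroid_depth(profile, dx):
    """Estimate the centroid depth Zo of magnetic sources from a 1-D
    magnetic-anomaly profile, using the spectral relation
        ln( sqrt(P(k)) / k ) ~ const - k * Zo,
    so the slope of a straight-line fit gives -Zo.  The low-wavenumber
    fitting band chosen below is an illustrative assumption."""
    n = len(profile)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
    P = np.abs(np.fft.rfft(profile - np.mean(profile))) ** 2
    lo, hi = 1, n // 4                      # skip DC; fit the low-k band
    y = np.log(np.sqrt(P[lo:hi]) / k[lo:hi])
    slope, _ = np.polyfit(k[lo:hi], y, 1)
    return -slope
```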
NASA Astrophysics Data System (ADS)
Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.
2007-02-01
Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic License Plate Recognition (ALPR). Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We show that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.
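A minimal sketch of a 2D-LDA projection of the kind referred to above: scatter matrices are formed directly from image matrices (no vectorization), and only the leading eigenvectors are kept, mirroring the observation that dropping the eigenvectors of lower significance helps. The regularization constant and function names are assumptions, not details from the paper:

```python
import numpy as np

def two_d_lda(images, labels, n_vecs):
    """Two-dimensional LDA: column-wise between-class (Sb) and
    within-class (Sw) scatter matrices are built directly from image
    matrices, and the projection keeps the top eigenvectors of
    Sw^{-1} Sb.  Images are projected as  img @ W."""
    images = np.asarray(images, float)      # shape (n_images, rows, cols)
    labels = np.asarray(labels)
    mean_all = images.mean(0)
    d = images.shape[2]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(labels):
        cls = images[labels == c]
        mc = cls.mean(0)
        diff = mc - mean_all
        Sb += len(cls) * diff.T @ diff
        for img in cls:
            e = img - mc
            Sw += e.T @ e
    # small ridge term (assumed) keeps Sw invertible
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_vecs]].real
```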
Transient Two-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2004-01-01
Two-dimensional planar and axisymmetric numerical investigations on the nozzle start-up side load physics were performed. The objective of this study is to develop a computational methodology to identify nozzle side load physics using simplified two-dimensional geometries, in order to come up with a computational strategy to eventually predict the three-dimensional side loads. The computational methodology is based on a multidimensional, finite-volume, viscous, chemically reacting, unstructured-grid, and pressure-based computational fluid dynamics formulation, and a transient inlet condition based on an engine system modeling. The side load physics captured in the low aspect-ratio, two-dimensional planar nozzle include the Coanda effect, afterburning wave, and the associated lip free-shock oscillation. Results of parametric studies indicate that equivalence ratio, combustion and ramp rate affect the side load physics. The side load physics inferred in the high aspect-ratio, axisymmetric nozzle study include the afterburning wave; transition from free-shock to restricted-shock separation, reverting back to free-shock separation, and transforming to restricted-shock separation again; and lip restricted-shock oscillation. The Mach disk loci and wall pressure history studies reconfirm that combustion and the associated thermodynamic properties affect the formation and duration of the asymmetric flow.
Lump and lump-soliton solutions to the (2+1) -dimensional Ito equation
NASA Astrophysics Data System (ADS)
Yang, Jin-Yun; Ma, Wen-Xiu; Qin, Zhenyun
2017-06-01
Based on the Hirota bilinear form of the (2+1) -dimensional Ito equation, one class of lump solutions and two classes of interaction solutions between lumps and line solitons are generated through analysis and symbolic computations with Maple. Analyticity is naturally guaranteed for the presented lump and interaction solutions, and the interaction solutions reduce to lumps (or line solitons) while the hyperbolic-cosine (or the quadratic function) disappears. Three-dimensional plots and contour plots are made for two specific examples of the resulting interaction solutions.
Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)
NASA Technical Reports Server (NTRS)
Fan, Mark S.; Christou, Aris; Pecht, Michael G.
1992-01-01
Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.
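The numerical core described above, a finite-difference solution of the two-dimensional heat-conduction equation, can be sketched for the steady case with fixed ambient-temperature boundaries and a uniform dissipation term. The simple Jacobi iteration and the parameter names are illustrative, not the paper's control-volume scheme:

```python
import numpy as np

def steady_temperature(nx, ny, t_ambient, q, k, h, n_iter=5000):
    """Steady 2-D heat conduction on a uniform grid by Jacobi iteration:
        T_ij = average of the four neighbours + q h^2 / (4 k),
    with boundaries held at t_ambient.  q is a uniform volumetric
    source [W/m^3], k the thermal conductivity [W/m/K], h the grid
    spacing [m].  A minimal sketch of the finite-difference approach."""
    T = np.full((ny, nx), t_ambient, float)
    src = q * h * h / (4.0 * k)
    for _ in range(n_iter):
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:]) + src
    return T
```

With q = 0 this reduces to Laplace's equation, and the effect of raising the ambient temperature or the dissipated power on the peak (junction) temperature can be read directly from the returned field.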
Strategic planning for aircraft noise route impact analysis: A three dimensional approach
NASA Technical Reports Server (NTRS)
Bragdon, C. R.; Rowan, M. J.; Ahuja, K. K.
1993-01-01
The strategic routing of aircraft through navigable and controlled airspace to minimize adverse noise impact over sensitive areas is critical in the proper management and planning of the U.S. based airport system. A major objective of this phase of research is to identify, inventory, characterize, and analyze the various environmental, land planning, and regulatory data bases, along with potential three dimensional software and hardware systems that can be potentially applied for an impact assessment of any existing or planned air route. There are eight data bases that have to be assembled and developed in order to develop three dimensional aircraft route impact methodology. These data bases which cover geographical information systems, sound metrics, land use, airspace operational control measures, federal regulations and advisories, census data, and environmental attributes have been examined and aggregated. A three dimensional format is necessary for planning, analyzing space and possible noise impact, and formulating potential resolutions. The need to develop this three dimensional approach is essential due to the finite capacity of airspace for managing and planning a route system, including airport facilities. It appears that these data bases can be integrated effectively into a strategic aircraft noise routing system which should be developed as soon as possible, as part of a proactive plan applied to our FAA controlled navigable airspace for the United States.
Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
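The linear baseline that GerDA generalizes is classical Fisher LDA; for two classes the optimal linear discriminant direction has a closed form. A minimal sketch follows (the small ridge term is an assumption added for numerical stability, not part of the classical formula):

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Classical two-class linear discriminant analysis: the direction w
    maximizing between-class over within-class scatter,
        w  is proportional to  Sw^{-1} (mu1 - mu0),
    where Sw is the pooled within-class scatter matrix.  This is the
    linear feature extractor that GerDA's deep networks generalize."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1) +
          np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-9 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting arbitrary data onto w gives the one-dimensional discriminative feature; GerDA replaces this fixed linear map with a learned nonlinear one while keeping the same scatter-based objective.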
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation; thus, the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
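After a boundary-element discretization, the total radiated power is a quadratic form in the complex control-source strengths, so the optimum magnitude and phase reduce to a linear solve. A minimal sketch under that standard quadratic-form assumption (the matrix and variable names are illustrative, not from the paper):

```python
import numpy as np

def optimal_control_source(A, b, c):
    """Minimize total radiated power  P(q) = q^H A q + 2 Re(b^H q) + c
    over the complex control-source strength vector q, where A is the
    Hermitian positive-definite power matrix of the control sources and
    b couples them to the primary source.  Setting the gradient to zero
    gives the least-squares optimum q = -A^{-1} b."""
    q = -np.linalg.solve(A, b)
    p_min = np.real(q.conj() @ A @ q + 2 * np.real(b.conj() @ q) + c)
    return q, p_min
```

The ratio p_min / c (optimal over uncontrolled power) expressed in decibels corresponds to the kind of reduction figure quoted in the abstract.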
Bayesian Analysis of High Dimensional Classification
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Subhadeep; Liang, Faming
2009-12-01
Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is considerable interest in searching for sparse models in the high-dimensional regression (or classification) setup. We first discuss two common challenges for analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, so that the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimensions we propose a novel Hierarchical Stochastic Approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and also possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.
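As a point of reference for the base classifier mentioned above, a penalized logistic regression can be fitted by plain gradient descent. The ridge penalty here is a simple stand-in for a shrinkage prior; the HSAMC sampler itself is not reproduced, and the function name and defaults are illustrative:

```python
import numpy as np

def logistic_regression_gd(X, y, steps=2000, lr=0.1, lam=1.0):
    """Ridge-penalized logistic regression fitted by gradient descent.
    The penalty lam * ||w||^2 / (2n) plays the role of a simple Gaussian
    (shrinkage) prior on the coefficients; a minimal sketch, not a
    Bayesian sampler."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))           # sigmoid probabilities
        grad = X.T @ (p - y) / n + lam * w / n       # log-loss + ridge grad
        w -= lr * grad
    return w
```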
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
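The RP transform itself is short: a matrix with independent and identically distributed Gaussian entries approximately preserves pairwise distances (the Johnson-Lindenstrauss property), and re-seeding yields a new, cancellable template. A minimal sketch; the entry scaling and the seed-based interface are illustrative choices, not the paper's exact construction:

```python
import numpy as np

def random_project(X, out_dim, seed):
    """Gaussian random projection: each entry of R is i.i.d.
    N(0, 1/out_dim), so E||R x||^2 = ||x||^2 and pairwise distances are
    approximately preserved.  A different seed produces a statistically
    independent projection, i.e. a revocable (changeable) template."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(out_dim, X.shape[1]))
    return X @ R.T
```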
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with lamination length. For more accurate predictions, however, a three-dimensional analysis is required.
An Advanced One-Dimensional Finite Element Model for Incompressible Thermally Expandable Flow
Hu, Rui
2017-03-27
Here, this paper provides an overview of a new one-dimensional finite element flow model for incompressible but thermally expandable flow. The flow model was developed for use in system analysis tools for whole-plant safety analysis of sodium fast reactors. Although the pressure-based formulation was implemented, the use of integral equations in the conservative form ensured the conservation laws of the fluid. A stabilization scheme based on streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin formulations is also introduced. The flow model and its implementation have been verified by many test problems, including density wave propagation, steep gradient problems, discharging between tanks, and the conjugate heat transfer in a heat exchanger.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program on statistical fast fracture reliability and monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
NASA Astrophysics Data System (ADS)
Hano, Mitsuo; Hotta, Masashi
A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that the multigrid solutions do not converge to the ICCG solutions. An algorithm for eliminating the constant term using a null space of the coefficient matrix is therefore also described. For three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
NASA Astrophysics Data System (ADS)
Yi, Dake; Wang, TzuChiang
2018-06-01
In this paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The procedure combines a new analytical method with highly accurate finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed to derive the three-dimensional crack-tip fields. Based on the theoretical analysis, an equation is first derived that relates the three-dimensional J-integral J(z), the stress intensity factor K(z), and the triaxial stress constraint level Tz(z). In the finite element simulations, a fine mesh of 153,360 elements is constructed to compute the stress field near the crack front, J(z), and Tz(z). Numerical results show that in planes very close to the free surface, the K-field solution remains valid for the in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.
Thermal breakage of a discrete one-dimensional string.
Lee, Chiu Fan
2009-09-01
We study the thermal breakage of a discrete one-dimensional string, with open and fixed ends, in the heavily damped regime. Basing our analysis on the multidimensional Kramers escape theory, we are able to make analytical predictions on the mean breakage rate and on the breakage propensity with respect to the breakage location on the string. We then support our predictions with numerical simulations.
NASA Astrophysics Data System (ADS)
Shen, Wei; Li, Dongsheng; Zhang, Shuaifang; Ou, Jinping
2017-07-01
This paper presents a hybrid method that combines the B-spline wavelet on the interval (BSWI) finite element method and spectral analysis based on fast Fourier transform (FFT) to study wave propagation in one-dimensional (1D) structures. BSWI scaling functions are utilized to approximate the theoretical wave solution in the spatial domain and construct a high-accuracy dynamic stiffness matrix. Dynamic reduction on the element level is applied to eliminate the interior degrees of freedom of BSWI elements and substantially reduce the size of the system matrix. The dynamic equations of the system are then transformed and solved in the frequency domain through FFT-based spectral analysis, which is especially suitable for parallel computation. A comparative analysis of four different finite element methods is conducted to demonstrate the validity and efficiency of the proposed method when utilized in high-frequency wave problems. Other numerical examples are utilized to simulate the influence of crack and delamination on wave propagation in 1D rods and beams. Finally, the errors caused by FFT and their corresponding solutions are presented.
A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid
2004-01-01
Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or the incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
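The pseudo-elastic idea above can be sketched in one dimension: treat the mesh as a chain of linear elastic elements whose stiffness (a Young's modulus analogue) varies in space, prescribe the boundary displacements, and solve for the interior nodes. This is a minimal NumPy sketch of the general approach, not the NASTRAN-based implementation described in the paper; the stiffness profile (stiff near the moving boundary) is an assumption chosen for illustration.

```python
import numpy as np

def deform_mesh_1d(x, u_left, u_right, stiffness):
    """Solve a 1-D linear-elastic 'pseudo-solid' for interior node displacements.

    x: node coordinates; stiffness: per-element Young's modulus analogue.
    Returns the deformed node coordinates x + u.
    """
    n = len(x)
    K = np.zeros((n, n))
    for e in range(n - 1):
        k = stiffness[e] / (x[e + 1] - x[e])  # element stiffness ~ E / h
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    u = np.zeros(n)
    u[0], u[-1] = u_left, u_right             # prescribed boundary motion
    interior = np.arange(1, n - 1)
    rhs = -K[np.ix_(interior, [0, n - 1])] @ np.array([u_left, u_right])
    u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], rhs)
    return x + u

x = np.linspace(0.0, 1.0, 11)
# Stiffen elements near the moving (left) boundary so they translate almost rigidly,
# letting the compliant far-field cells absorb most of the deformation.
E = 1.0 / (0.05 + np.linspace(0.0, 1.0, 10))
x_new = deform_mesh_1d(x, u_left=0.1, u_right=0.0, stiffness=E)
```

Because stiffness decays away from the moving boundary, the near-boundary spacing is nearly preserved while distant elements compress, which is the behavior the spatially varying modulus is meant to produce.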
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
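To illustrate how polynomial coefficients fitted by least-squares regression yield Sobol' indices directly, the following sketch uses a two-variable tensor Legendre basis and a toy model whose indices are known analytically (S1 = 15/16, S2 = 1/16). It is a minimal non-adaptive example, not the variance-based adaptive sparse-PDD algorithm of the paper; the model and sample size are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from itertools import product

rng = np.random.default_rng(0)

def model(x1, x2):                       # toy model with known Sobol' indices
    return x1 + 0.5 * x2**2

deg, N = 2, 400
X = rng.uniform(-1.0, 1.0, size=(N, 2))  # uniform inputs on [-1, 1]^2
y = model(X[:, 0], X[:, 1])

# Tensor Legendre basis P_i(x1) P_j(x2); coefficients by least-squares regression.
multi = list(product(range(deg + 1), repeat=2))
A = np.column_stack([
    Legendre.basis(i)(X[:, 0]) * Legendre.basis(j)(X[:, 1]) for i, j in multi
])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# E[P_n^2] under U(-1, 1) is 1/(2n+1), so the variance decomposes term by term.
gamma = lambda n: 1.0 / (2 * n + 1)
var_term = {m: c[k]**2 * gamma(m[0]) * gamma(m[1]) for k, m in enumerate(multi)}
total = sum(v for m, v in var_term.items() if m != (0, 0))
S1 = sum(v for m, v in var_term.items() if m[0] > 0 and m[1] == 0) / total
S2 = sum(v for m, v in var_term.items() if m[1] > 0 and m[0] == 0) / total
```

Since the toy model lies in the span of the basis, the regression recovers the coefficients essentially exactly; a sparse variant would retain only the terms whose variance contribution is significant.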
TAIWO, OLUWADAMILOLA O.; FINEGAN, DONAL P.; EASTWOOD, DAVID S.; FIFE, JULIE L.; BROWN, LEON D.; DARR, JAWWAD A.; LEE, PETER D.; BRETT, DANIEL J.L.
2016-01-01
Lithium-ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium-ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3-D imaging techniques, quantitative assessment of 3-D microstructures from 2-D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two-dimensional (2-D) data sets. In this study, stereological prediction and three-dimensional (3-D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium-ion battery electrodes were imaged using synchrotron-based X-ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2-D image sections generated from tomographic imaging, whereas direct 3-D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2-D image sections is bound to be associated with ambiguity and that volume-based 3-D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially-dependent parameters, such as tortuosity and pore-phase connectivity. PMID:26999804
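The contrast between 2-D stereological estimates and direct 3-D measurement can be illustrated with Delesse's principle: the mean area fraction over all sections equals the volume fraction exactly, while any single section can be badly biased. Below is a hypothetical NumPy sketch with a synthetic spherical particle, not data from the study.

```python
import numpy as np

# Binary 3-D microstructure: one spherical "particle phase" in a cube.
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
r = n * 0.3
phase = ((x - n / 2)**2 + (y - n / 2)**2 + (z - n / 2)**2) <= r**2

vol_frac_3d = phase.mean()                       # direct 3-D measurement
slice_fracs = phase.reshape(n, -1).mean(axis=1)  # area fraction per 2-D section

# Delesse: the MEAN section area fraction equals the volume fraction exactly,
# but the central slice alone badly overestimates it for this geometry.
```

For parameters that are not simple averages, such as tortuosity or connectivity, no such stereological identity exists, which is why the study argues for volume-based 3-D characterization.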
Taiwo, Oluwadamilola O; Finegan, Donal P; Eastwood, David S; Fife, Julie L; Brown, Leon D; Darr, Jawwad A; Lee, Peter D; Brett, Daniel J L; Shearing, Paul R
2016-09-01
Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe
2015-01-01
Purpose We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). Materials and Methods The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. Results VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). Conclusion It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method. PMID:25793178
DIGE Analysis of Human Tissues.
Gelfi, Cecilia; Capitanio, Daniele
2018-01-01
Two-dimensional difference gel electrophoresis (2-D DIGE) is an advanced and elegant gel electrophoretic analytical tool for comparative protein assessment. It is based on two-dimensional gel electrophoresis (2-DE) separation of fluorescently labeled protein extracts. The tagging procedures are designed to not interfere with the chemical properties of proteins with respect to their pI and electrophoretic mobility, once a proper labeling protocol is followed. The two-dye or three-dye systems can be adopted and their choice depends on specific applications. Furthermore, the use of an internal pooled standard makes 2-D DIGE a highly accurate quantitative method enabling multiple protein samples to be separated on the same two-dimensional gel. The image matching and cross-gel statistical analysis generates robust quantitative results making data validation by independent technologies successful.
NASA Technical Reports Server (NTRS)
Olson, L. E.; Dvorak, F. A.
1975-01-01
The viscous subsonic flow past two-dimensional and infinite-span swept multi-component airfoils is studied theoretically and experimentally. The computerized analysis is based on iteratively coupled boundary layer and potential flow analyses. The method, which is restricted to flows with only slight separation, gives surface pressure distribution, chordwise and spanwise boundary layer characteristics, lift, drag, and pitching moment for airfoil configurations with up to four elements. Merging confluent boundary layers are treated. Theoretical predictions are compared with an exact theoretical potential flow solution and with experimental measurements made in the Ames 40- by 80-Foot Wind Tunnel for both two-dimensional and infinite-span swept wing configurations. Section lift characteristics are accurately predicted for zero and moderate sweep angles where flow separation effects are negligible.
Development of an integrated BEM for hot fluid-structure interaction
NASA Technical Reports Server (NTRS)
Banerjee, P. K.; Dargush, G. F.
1989-01-01
The Boundary Element Method (BEM) is chosen as the basic analysis tool principally because the definition of quantities like fluxes, temperature, displacements, and velocities is very precise in a boundary-based discretization scheme. One fundamental difficulty is, of course, that the entire analysis requires a very considerable amount of analytical work which is not present in other numerical methods. During the last 18 months all of this analytical work was completed and a two-dimensional, general purpose code was written. Some of the early results are described. It is anticipated that within the next two to three months almost all two-dimensional idealizations will be examined. It should be noted that the analytical work for the three-dimensional case has also been done and numerical implementation will begin next year.
Xarray: multi-dimensional data analysis in Python
NASA Astrophysics Data System (ADS)
Hoyer, Stephan; Hamman, Joe; Maussion, Fabien
2017-04-01
xarray (http://xarray.pydata.org) is an open source project and Python package that provides a toolkit and data structures for N-dimensional labeled arrays, which are the bread and butter of modern geoscientific data analysis. Key features of the package include label-based indexing and arithmetic, interoperability with the core scientific Python packages (e.g., pandas, NumPy, Matplotlib, Cartopy), out-of-core computation on datasets that don't fit into memory, a wide range of input/output options, and advanced multi-dimensional data manipulation tools such as group-by and resampling. In this contribution we will present the key features of the library and demonstrate its great potential for a wide range of applications, from (big-)data processing on super computers to data exploration in front of a classroom.
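The label-based indexing and group-by operations described above can be sketched in a few lines; this assumes xarray (with NumPy and pandas) is installed, and all coordinate values and data are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import xarray as xr

# A labeled 3-D array: daily temperature on a tiny lat/lon grid.
times = pd.date_range("2016-01-01", periods=6)
da = xr.DataArray(
    np.random.default_rng(0).normal(15.0, 5.0, size=(6, 2, 3)),
    dims=("time", "lat", "lon"),
    coords={"time": times, "lat": [10.0, 20.0], "lon": [100.0, 110.0, 120.0]},
    name="t2m",
)

point = da.sel(lat=20.0, lon=110.0)        # label-based indexing, not positions
monthly = da.groupby("time.month").mean()  # split-apply-combine on a coordinate
zonal = da.mean(dim="lon")                 # dimensions are named, not numbered
```

Because operations refer to dimension names and coordinate labels rather than axis positions, the same code works unchanged if the array is transposed or extended, which is the core convenience xarray brings to geoscientific analysis.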
Mission Analysis and Design for Space Based Inter-Satellite Laser Power Beaming
2010-03-01
Generation Algorithm of Discrete Line in Multi-Dimensional Grids
NASA Astrophysics Data System (ADS)
Du, L.; Ben, J.; Li, Y.; Wang, R.
2017-09-01
A Discrete Global Grid System (DGGS) is a digital multi-resolution earth reference model whose structure is conducive to the integration and mining of geospatial big data. Vectors are one of the important types of spatial data; only through discretization can they be processed and analyzed within a grid system. Based on a set of constraint conditions, this paper puts forward a strict definition of discrete lines and builds a mathematical model of them by a base-vector combination method. Transforming the mesh discrete-line issue in n-dimensional grids into the issue of an optimal deviated path in n-1 dimensions using a hyperplane realizes dimension reduction in the expression of mesh discrete lines. On this basis, we designed a simple and efficient algorithm for dimension reduction and generation of the discrete lines. The experimental results show that our algorithm can be applied not only in the two-dimensional rectangular grid but also in the two-dimensional hexagonal grid and the three-dimensional cubic grid. Moreover, when our algorithm is applied in the two-dimensional rectangular grid, it yields a discrete line that is more similar to the corresponding line in Euclidean space.
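For the special case of a two-dimensional rectangular grid, the classic Bresenham algorithm is the standard baseline for generating a discrete line; the paper's contribution generalizes discrete-line generation to hexagonal and higher-dimensional grids. A sketch of that baseline:

```python
def bresenham(x0, y0, x1, y1):
    """Cells of a 2-D rectangular grid approximating the segment (x0,y0)-(x1,y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # accumulated deviation from the true line
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                   # step in x when the error allows it
            err += dy
            x0 += sx
        if e2 <= dx:                   # step in y when the error allows it
            err += dx
            y0 += sy
    return cells

print(bresenham(0, 0, 5, 2))
# -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

The error term plays the same role as the deviation minimized by the optimal-path formulation above: each step picks the neighboring cell that keeps the traced path closest to the Euclidean segment.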
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
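The Fourier side of such alignment can be illustrated with its simplest building block: translational registration by FFT cross-correlation on a uniform grid. This is a minimal sketch with a synthetic image and a known shift, not the gridding/NUFFT interpolation scheme that is the paper's contribution.

```python
import numpy as np

def register_translation(ref, img):
    """Integer-pixel shift that maps img back onto ref, via FFT cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Wrap the peak location into the symmetric shift range.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known circular shift
shift = register_translation(ref, img)
```

Rotational alignment in single particle analysis typically resamples images to polar coordinates first, and it is in that resampling step that gridding-based interpolation outperforms quadratic interpolation.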
DD-HDS: A method for visualization and exploration of high-dimensional data.
Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard
2007-09-01
Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
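DD-HDS belongs to the distance-preservation family of MDS methods; the linear starting point of that family, classical (Torgerson) MDS, can be sketched in a few lines of NumPy. This omits the concentration-of-measure weighting and force-directed placement that are the paper's contributions.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points in R^k from a pairwise Euclidean distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D**2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))                   # ground-truth 2-D configuration
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

For genuinely high-dimensional data the distances cannot all be preserved in 2-D, which is exactly where DD-HDS's weighting of short distances and tolerance of tears becomes necessary.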
The Use of Atomic Force Microscopy for 3D Analysis of Nucleic Acid Hybridization on Microarrays.
Dubrovin, E V; Presnova, G V; Rubtsova, M Yu; Egorov, A M; Grigorenko, V G; Yaminsky, I V
2015-01-01
Oligonucleotide microarrays are considered today to be one of the most efficient methods of gene diagnostics. The capability of atomic force microscopy (AFM) to characterize the three-dimensional morphology of single molecules on a surface allows one to use it as an effective tool for the 3D analysis of a microarray for the detection of nucleic acids. The high resolution of AFM offers ways to decrease the detection threshold of target DNA and increase the signal-to-noise ratio. In this work, we suggest an approach to the evaluation of the results of hybridization of gold nanoparticle-labeled nucleic acids on silicon microarrays based on an AFM analysis of the surface both in air and in liquid, which takes into account their three-dimensional structure. We suggest a quantitative measure of the hybridization results based on the fraction of the surface area occupied by the nanoparticles.
Geometrically nonlinear analysis of layered composite plates and shells
NASA Technical Reports Server (NTRS)
Chao, W. C.; Reddy, J. N.
1983-01-01
A degenerated three-dimensional finite element, based on the incremental total Lagrangian formulation of a three-dimensional layered anisotropic medium, was developed. Its use in the geometrically nonlinear, static and dynamic analysis of layered composite plates and shells is demonstrated. A two-dimensional finite element based on the Sanders shell theory with the von Karman (nonlinear) strains was also developed. It is shown that the deflections obtained by the 2D shell element deviate from those obtained by the more accurate 3D element for deep shells. The 3D degenerated element can be used to model general shells that are not necessarily doubly curved. The 3D degenerated element is computationally more demanding than the 2D shell theory element for a given problem. It is found that the 3D element is an efficient element for the analysis of layered composite plates and shells undergoing large displacements and transient motion.
A simple method of fabricating mask-free microfluidic devices for biological analysis
Yi, Xin; Kodzius, Rimantas; Gong, Xiuqing; Xiao, Kang; Wen, Weijia
2010-01-01
We report a simple, low-cost, rapid, and mask-free method to fabricate two-dimensional (2D) and three-dimensional (3D) microfluidic chips for biological analysis research. In this fabrication process, a laser system is used to cut through paper to form intricate patterns and differently configured channels for specific purposes. Bonded with cyanoacrylate-based resin, the prepared paper sheet is sandwiched between glass slides (hydrophilic) or polymer-based plates (hydrophobic) to obtain a multilayer structure. To examine the chip's biocompatibility and applicability, protein concentration was measured and DNA capillary electrophoresis was carried out, both with positive results. By using direct laser cutting and one-step gas-sacrificing techniques, the whole fabrication process for complicated 2D and 3D microfluidic devices is shortened to several minutes, which makes it a good alternative to the poly(dimethylsiloxane) microfluidic chips used in biological analysis research. PMID:20890452
Two-dimensional PCA-based human gait identification
NASA Astrophysics Data System (ADS)
Chen, Jinyan; Wu, Rongteng
2012-11-01
Automatically recognizing people through visual surveillance is essential for public security. Human gait identification aims to recognize a person automatically from walking video using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, a sequence of binary images is extracted from the surveillance video. By computing the difference between adjacent images in this gait sequence, a sequence of binary difference images is obtained; each difference image indicates the body's moving mode during walking. The temporal-space features are extracted from the difference images as follows. Projecting one difference image onto the Y axis or X axis yields two vectors; projecting every difference image in the sequence in this way yields two matrices, which together characterize one walk. 2DPCA is then used to transform these two matrices into two vectors while preserving the maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
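The 2DPCA step itself, building an image covariance matrix from the image matrices directly and projecting onto its leading eigenvectors, can be sketched as follows. The gait-specific preprocessing is replaced by random matrices for illustration, so the shapes and the value of k are assumptions.

```python
import numpy as np

def two_dpca(images, k):
    """2DPCA: project each m-by-n image onto the top-k eigenvectors of the
    n-by-n image covariance matrix, yielding m-by-k feature matrices."""
    mean = images.mean(axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    w, V = np.linalg.eigh(G)                 # eigenvalues in ascending order
    W = V[:, np.argsort(w)[::-1][:k]]        # top-k projection axes
    return np.stack([A @ W for A in images]), W

rng = np.random.default_rng(3)
imgs = rng.normal(size=(20, 32, 24))         # stand-ins for projection matrices
feats, W = two_dpca(imgs, k=4)

# Similarity between two gait patterns: Euclidean distance between features.
dist = np.linalg.norm(feats[0] - feats[1])
```

Unlike ordinary PCA, no image is flattened into a long vector, so the covariance matrix stays small (n by n) and its eigendecomposition is cheap even for large images.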
Border and surface tracing--theoretical foundations.
Brimkov, Valentin E; Klette, Reinhard
2008-04-01
In this paper we define and study digital manifolds of arbitrary dimension, and provide (in particular) a general theoretical basis for curve or surface tracing in picture analysis. The studies involve properties such as the one-dimensionality of digital curves and the (n-1)-dimensionality of digital hypersurfaces that make them discrete analogs of the corresponding notions in continuous topology. The presented approach is fully based on the concept of an adjacency relation and complements the concept of dimension as common in combinatorial topology. This work appears to be the first on digital manifolds based on a graph-theoretical definition of dimension. In particular, in the n-dimensional digital space, a digital curve is a one-dimensional object and a digital hypersurface is an (n-1)-dimensional object, as is the case for curves and hypersurfaces in Euclidean space. Relying on the obtained properties of digital hypersurfaces, we propose a uniform approach for studying good pairs defined by separations and obtain a classification of good pairs in arbitrary dimension. We also discuss possible applications of the presented definitions and results.
Analysis of rosen piezoelectric transformers with a varying cross-section.
Xue, H; Yang, J; Hu, Y
2008-07-01
We study the effects of a varying cross-section on the performance of Rosen piezoelectric transformers operating with length extensional modes of rods. A theoretical analysis is performed using an extended version of a one-dimensional model developed in a previous paper. Numerical results based on the theoretical analysis are presented.
Affective Outcomes of Schooling: Full-Information Item Factor Analysis of a Student Questionnaire.
ERIC Educational Resources Information Center
Muraki, Eiji; Engelhard, George, Jr.
Recent developments in dichotomous factor analysis based on multidimensional item response models (Bock and Aitkin, 1981; Muthen, 1978) provide an effective method for exploring the dimensionality of questionnaire items. Implemented in the TESTFACT program, this "full information" item factor analysis accounts not only for the pairwise joint…
Katwal, Santosh B; Gore, John C; Marois, Rene; Rogers, Baxter P
2013-09-01
We present novel graph-based visualizations of self-organizing maps for unsupervised functional magnetic resonance imaging (fMRI) analysis. A self-organizing map is an artificial neural network model that transforms high-dimensional data into a low-dimensional (often a 2-D) map using unsupervised learning. However, a postprocessing scheme is necessary to correctly interpret similarity between neighboring node prototypes (feature vectors) on the output map and delineate clusters and features of interest in the data. In this paper, we used graph-based visualizations to capture fMRI data features based upon 1) the distribution of data across the receptive fields of the prototypes (density-based connectivity); and 2) temporal similarities (correlations) between the prototypes (correlation-based connectivity). We applied this approach to identify task-related brain areas in an fMRI reaction time experiment involving a visuo-manual response task, and we correlated the time-to-peak of the fMRI responses in these areas with reaction time. Visualization of self-organizing maps outperformed independent component analysis and voxelwise univariate linear regression analysis in identifying and classifying relevant brain regions. We conclude that the graph-based visualizations of self-organizing maps help in advanced visualization of cluster boundaries in fMRI data enabling the separation of regions with small differences in the timings of their brain responses.
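A minimal online self-organizing map, the model underlying these visualizations, can be sketched in NumPy as follows. The graph-based connectivity post-processing that the paper proposes is omitted, and the grid size, learning schedule, and data are illustrative assumptions.

```python
import numpy as np

def train_som(data, rows=4, cols=4, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal online SOM: fit rows*cols prototypes on a 2-D grid to data."""
    rng = np.random.default_rng(seed)
    proto = rng.normal(size=(rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((proto - x) ** 2).sum(axis=1))   # best-matching unit
        frac = t / iters                                  # decaying schedules
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        proto += lr * h[:, None] * (x - proto)            # neighborhood update
    return proto

def quantization_error(data, proto):
    d = np.linalg.norm(data[:, None] - proto[None, :], axis=-1)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
proto = train_som(data)
```

After training, prototypes whose receptive fields capture many samples (density-based connectivity) or whose vectors are strongly correlated (correlation-based connectivity) could be linked into the graphs that the paper uses for cluster delineation.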
Assessment of numerical techniques for unsteady flow calculations
NASA Technical Reports Server (NTRS)
Hsieh, Kwang-Chung
1989-01-01
The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected on the basis of the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagations, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best-performing combination is the Runge-Kutta method for time integration with a sixth-order central difference for spatial discretization.
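The Fourier error analysis used to screen schemes can be illustrated with modified wavenumbers. This sketch compares the standard second-order and sixth-order central differences for the first derivative; it is a textbook illustration of the technique, not the paper's own analysis.

```python
import numpy as np

# Modified-wavenumber (Fourier) error analysis: the closer k*h is to the
# exact kh, the less dispersive the spatial discretization.
kh = np.linspace(0.01, np.pi, 200)

# Second-order central difference: k*h = sin(kh)
central2 = np.sin(kh)
# Sixth-order central difference: k*h = (45 sin kh - 9 sin 2kh + sin 3kh)/30
central6 = (45 * np.sin(kh) - 9 * np.sin(2 * kh) + np.sin(3 * kh)) / 30

err2 = np.abs(central2 - kh)
err6 = np.abs(central6 - kh)
print(err2[50] > err6[50])   # sixth order resolves more of the spectrum
```

Plotting `err2` and `err6` against `kh` reproduces the familiar result that the higher-order scheme remains accurate over a wider range of resolved wavenumbers.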
NASA Astrophysics Data System (ADS)
Dey, Pinkee; Suslov, Sergey A.
2016-12-01
A finite amplitude instability has been analysed to discover the exact mechanism leading to the appearance of stationary magnetoconvection patterns in a vertical layer of a non-conducting ferrofluid heated from the side and placed in an external magnetic field perpendicular to the walls. The physical results have been obtained using a version of a weakly nonlinear analysis that is based on the disturbance amplitude expansion. It enables a low-dimensional reduction of a full nonlinear problem in supercritical regimes away from a bifurcation point. The details of the reduction are given in comparison with traditional small-parameter expansions. It is also demonstrated that Squire's transformation can be introduced for higher-order nonlinear terms, thus reducing the full three-dimensional problem to its equivalent two-dimensional counterpart and enabling significant computational savings. The full three-dimensional instability patterns are subsequently recovered using the inverse transforms. The analysed stationary thermomagnetic instability is shown to occur as a result of a supercritical pitchfork bifurcation.
NASA Technical Reports Server (NTRS)
Vinokur, M.
1979-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
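The interior clustering idea can be made concrete with a small sketch. The sinh-based form below is the standard one-parameter interior stretching used in grid generation (whose inverse involves the inverse hyperbolic sine); the clustering location `D` and strength `beta` are assumptions for the demo, not values from the paper.

```python
import numpy as np

# Illustrative interior stretching function: map a uniform computational
# coordinate s in [0, 1] to grid points clustered near x = D.
def interior_stretch(s, D=0.5, beta=5.0):
    A = (1.0 / (2.0 * beta)) * np.log(
        (1.0 + (np.exp(beta) - 1.0) * D) /
        (1.0 + (np.exp(-beta) - 1.0) * D))
    return D * (1.0 + np.sinh(beta * (s - A)) / np.sinh(beta * A))

s = np.linspace(0.0, 1.0, 11)        # uniform computational coordinate
x = interior_stretch(s)              # physical grid, clustered near 0.5
print(np.round(x, 4))
```

The mapping keeps the endpoints fixed at 0 and 1 while concentrating grid points near the specified interior location, which is the behaviour a localized region of rapid variation requires.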
van Stee, Leo L P; Brinkman, Udo A Th
2011-10-28
A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size. Copyright © 2011 Elsevier B.V. All rights reserved.
Three-Dimensional Numerical Simulation to Mud Turbine for LWD
NASA Astrophysics Data System (ADS)
Yao, Xiaojiang; Dong, Jingxin; Shang, Jie; Zhang, Guanqi
Hydraulic performance analysis was carried out for a type of turbine used on a generator for LWD. The simulation models were built with the CFD analysis software FINE/Turbo, and a full three-dimensional numerical simulation was carried out for the impeller group. Hydraulic parameters such as power, speed and pressure drop were calculated in two kinds of media: water and mud. An experiment was conducted in a water environment; verified against it, the error of the numerical simulation was less than 6%. On this basis, rational proposals are given for choosing appropriate impellers, and rational design methods are explored.
Three dimensional radiative flow of magnetite-nanofluid with homogeneous-heterogeneous reactions
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed
2018-03-01
The present communication deals with the effects of homogeneous-heterogeneous reactions in the flow of a nanofluid over a non-linear stretching sheet. A water-based nanofluid containing magnetite nanoparticles is considered. Non-linear radiation and non-uniform heat sink/source effects are examined. The non-linear differential systems are computed by the optimal homotopy analysis method (OHAM). Convergent solutions of the nonlinear systems are established, and optimal values of the auxiliary variables are obtained. The impact of several non-dimensional parameters on the velocity components, temperature and concentration fields is examined. Graphs are plotted for the analysis of surface drag force and heat transfer rate.
NASA Technical Reports Server (NTRS)
Pathak, P. H.; Altintas, A.
1988-01-01
A high-frequency analysis of electromagnetic modal reflection and transmission coefficients is presented for waveguide discontinuities formed by joining different waveguide sections. The analysis uses an extended version of the concept of geometrical theory of diffraction based equivalent edge currents in conjunction with the reciprocity theorem to describe interior scattering effects. If the waveguide modes and their associated modal rays can be found explicitly, general two- and three-dimensional waveguide geometries can be analyzed. Expressions are developed for two-dimensional reflection and transmission coefficients. Numerical results are given for a flanged, semi-infinite parallel plate waveguide and for the junction between two linearly tapered waveguides.
Guppy-Coles, Kristyan B; Prasad, Sandhir B; Smith, Kym C; Hillier, Samuel; Lo, Ada; Atherton, John J
2015-06-01
We aimed to determine the feasibility of training cardiac nurses to evaluate left ventricular function utilising a semi-automated, workstation-based protocol on three dimensional echocardiography images. Assessment of left ventricular function by nurses is an attractive concept. Recent developments in three dimensional echocardiography coupled with border detection assistance have reduced inter- and intra-observer variability and analysis time. This could allow abbreviated training of nurses to assess cardiac function. A comparative, diagnostic accuracy study evaluating left ventricular ejection fraction assessment utilising a semi-automated, workstation-based protocol performed by echocardiography-naïve nurses on previously acquired three dimensional echocardiography images. Nine cardiac nurses underwent two brief lectures about cardiac anatomy, physiology and three dimensional left ventricular ejection fraction assessment, before a hands-on demonstration in 20 cases. We then selected 50 cases from our three dimensional echocardiography library based on optimal image quality with a broad range of left ventricular ejection fractions, which was quantified by two experienced sonographers and the average used as the comparator for the nurses. Nurses independently measured three dimensional left ventricular ejection fraction using the Auto lvq package with semi-automated border detection. The left ventricular ejection fraction range was 25-72% (70% with a left ventricular ejection fraction <55%). All nurses showed excellent agreement with the sonographers. Minimal intra-observer variability was noted on both short-term (same day) and long-term (>2 weeks later) retest. It is feasible to train nurses to measure left ventricular ejection fraction utilising a semi-automated, workstation-based protocol on previously acquired three dimensional echocardiography images. 
Further study is needed to determine the feasibility of training nurses to acquire three dimensional echocardiography images on real-world patients to measure left ventricular ejection fraction. Nurse-performed evaluation of left ventricular function could facilitate the broader application of echocardiography to allow cost-effective screening and monitoring for left ventricular dysfunction in high-risk populations. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems. The method is highly efficient in performing direct full 3D digital model construction through a tomography-like operation upon every depth plane, with the exclusion of defocused information. With the focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
Tang, Long; Wang, Ji-Jiang; Fu, Feng; Wang, Sheng-Wen; Liu, Qi-Rui
2016-02-01
With regard to crystal engineering, building block or modular assembly methodologies have shown great success in the design and construction of metal-organic coordination polymers. The critical factor for the construction of coordination polymers is the rational choice of the organic building blocks and the metal centre. The reaction of Zn(OAc)2·2H2O (OAc is acetate) with 3-nitrobenzoic acid (HNBA) and 4,4'-bipyridine (4,4'-bipy) under hydrothermal conditions produced a two-dimensional zinc(II) supramolecular architecture, catena-poly[[bis(3-nitrobenzoato-κ(2)O,O')zinc(II)]-μ-4,4'-bipyridine-κ(2)N:N'], [Zn(C7H4NO4)2(C10H8N2)]n or [Zn(NBA)2(4,4'-bipy)]n, which was characterized by elemental analysis, IR spectroscopy, thermogravimetric analysis and single-crystal X-ray diffraction analysis. The Zn(II) ions are connected by the 4,4'-bipy ligands to form a one-dimensional zigzag chain and the chains are decorated with anionic NBA ligands which interact further through aromatic π-π stacking interactions, expanding the structure into a threefold interpenetrated two-dimensional supramolecular architecture. The solid-state fluorescence analysis indicates a slight blue shift compared with pure 4,4'-bipyridine and HNBA.
NASA Astrophysics Data System (ADS)
Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping
2015-01-01
As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
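The conventional baseline named above, modelling post-exposure-bake acid diffusion as a Gaussian convolution of the 2-D exposure image, can be sketched compactly. The grid size, feature geometry and diffusion length below are assumptions for illustration, and the FFT route is just one standard way to apply the convolution.

```python
import numpy as np

# Hedged sketch: PEB diffusion as Gaussian convolution of the acid image
# (the paper's Sylvester-equation method accelerates this baseline).
n = 64
x = np.arange(n)
acid = np.zeros((n, n))
acid[24:40, 28:36] = 1.0                 # assumed exposed feature

sigma = 2.0                              # assumed diffusion length (pixels)
g = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
g /= g.sum()                             # normalized 1-D Gaussian
K = np.outer(g, g)                       # separable 2-D kernel

# Circular convolution via FFT; ifftshift moves the kernel centre to (0, 0).
spec = np.fft.fft2(acid) * np.fft.fft2(np.fft.ifftshift(K))
blurred = np.real(np.fft.ifft2(spec))
print(round(blurred.sum(), 4))           # total acid preserved by blurring
```

Because the kernel is normalized, the convolution redistributes the acid without creating or destroying it, which is the physical constraint the diffusion model must respect.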
Information extraction from multivariate images
NASA Technical Reports Server (NTRS)
Park, S. K.; Kegley, K. A.; Schiess, J. R.
1986-01-01
An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates with each pixel location a multivariate pixel value that has been scaled and quantized into a gray-level vector; bivariate statistics measure the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage, reducing its dimensionality and revealing intercomponent dependencies when some off-diagonal covariance elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
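The decorrelating property of the PCT can be demonstrated in a few lines. This is a minimal sketch with a synthetic 3-band image standing in for real multiimage data; the band count and correlation structure are assumptions.

```python
import numpy as np

# Minimal sketch of the principal component transformation (PCT) of a
# multiimage: each pixel carries a gray-level vector; the PCT
# decorrelates the component images.
rng = np.random.default_rng(1)
bands, rows, cols = 3, 32, 32
img = rng.standard_normal((bands, rows * cols))
img[1] += 0.8 * img[0]                      # introduce inter-band correlation

cov = np.cov(img)                           # band-to-band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # PCT basis (ascending eigenvalues)
pc = eigvecs.T @ (img - img.mean(axis=1, keepdims=True))

# After the transform, the covariance of the principal components is
# (numerically) diagonal: the component images are uncorrelated.
pc_cov = np.cov(pc)
print(np.round(pc_cov, 6))
```

Sorting the components by eigenvalue and inspecting how quickly the eigenvalues decay is the usual way to judge the intrinsic component dimensionality mentioned above.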
Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder
Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi
2018-01-01
Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
Towards a voxel-based geographic automata for the simulation of geospatial processes
NASA Astrophysics Data System (ADS)
Jjumba, Anthony; Dragićević, Suzana
2016-07-01
Many geographic processes evolve in a three dimensional space and time continuum. However, when they are represented with the aid of geographic information systems (GIS) or geosimulation models they are modelled in a framework of two-dimensional space with an added temporal component. The objective of this study is to propose the design and implementation of voxel-based automata as a methodological approach for representing spatial processes evolving in the four-dimensional (4D) space-time domain. Similar to geographic automata models which are developed to capture and forecast geospatial processes that change in a two-dimensional spatial framework using cells (raster geospatial data), voxel automata rely on the automata theory and use three-dimensional volumetric units (voxels). Transition rules have been developed to represent various spatial processes which range from the movement of an object in 3D to the diffusion of airborne particles and landslide simulation. In addition, the proposed 4D models demonstrate that complex processes can be readily reproduced from simple transition functions without complex methodological approaches. The voxel-based automata approach provides a unique basis to model geospatial processes in 4D for the purpose of improving representation, analysis and understanding their spatiotemporal dynamics. This study contributes to the advancement of the concepts and framework of 4D GIS.
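A voxel automaton with a local transition rule can be sketched very simply. The rule below (each voxel relaxes toward the mean of its six face neighbours, with periodic boundaries) mimics diffusion of airborne particles; it is an illustration only, not one of the transition rules developed in the study, and the grid size and rate are assumptions.

```python
import numpy as np

# Hedged sketch of a voxel-based automaton: a 3-D grid of voxels updated
# by a local transition rule approximating diffusion.
def step(grid, rate=0.1):
    neighbours = np.zeros_like(grid)
    for axis in range(3):                      # six face neighbours
        neighbours += np.roll(grid, 1, axis) + np.roll(grid, -1, axis)
    return grid + rate * (neighbours / 6.0 - grid)

grid = np.zeros((10, 10, 10))
grid[5, 5, 5] = 1.0                 # point release in the voxel volume
for _ in range(20):
    grid = step(grid)
print(round(grid.sum(), 6))         # mass is conserved by this rule  # → 1.0
```

Repeated application of the rule spreads the released quantity through the 3-D volume while conserving its total, which is the kind of simple transition function from which the paper builds more complex 4D geospatial processes.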
An integrated approach to reservoir modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donaldson, K.
1993-08-01
The purpose of this research is to evaluate the usefulness of the following procedural and analytical methods in investigating the heterogeneity of the oil reserve for the Mississippian Big Injun Sandstone of the Granny Creek field, Clay and Roane counties, West Virginia: (1) relational database, (2) two-dimensional cross sections, (3) true three-dimensional modeling, (4) geohistory analysis, (5) a rule-based expert system, and (6) geographical information systems. The large data set could not be effectively integrated and interpreted without this approach. A relational database was designed to fully integrate three- and four-dimensional data. The database provides an effective means for maintaining and manipulating the data. A two-dimensional cross section program was designed to correlate stratigraphy, depositional environments, porosity, permeability, and petrographic data. This flexible design allows for additional four-dimensional data. Dynamic Graphics™
Boundary control for a flexible manipulator based on infinite dimensional disturbance observer
NASA Astrophysics Data System (ADS)
Jiang, Tingting; Liu, Jinkun; He, Wei
2015-07-01
This paper focuses on disturbance observer and boundary control design for the flexible manipulator in presence of both boundary disturbance and spatially distributed disturbance. Taking the infinite-dimensionality of the flexural dynamics into account, this study proposes a partial differential equation (PDE) model. Since the spatially distributed disturbance is infinite dimensional, it cannot be compensated by the typical disturbance observer, which is designed by finite dimensional approach. To estimate the spatially distributed disturbance, we propose a novel infinite dimensional disturbance observer (IDDO). Applying the IDDO as a feedforward compensator, a boundary control scheme is designed to regulate the joint position and eliminate the elastic vibration simultaneously. Theoretical analysis validates the stability of both the proposed disturbance observer and the boundary controller. The performance of the closed-loop system is demonstrated by numerical simulations.
Li, Ying-Jun; Yang, Cong; Wang, Gui-Cong; Zhang, Hui; Cui, Huan-Yong; Zhang, Yong-Liang
2017-09-01
This paper presents a novel integrated piezoelectric six-dimensional force sensor which can realize dynamic measurement of multi-dimensional space loads. Firstly, the composition of the sensor, the spatial layout of the force-sensitive components, and the measurement principle are analyzed and designed. Theoretical analysis indicates no inter-dimensional interference in the piezoelectric six-dimensional force sensor. Based on the principle of actual work and deformation compatibility, this paper deduces the parallel load sharing principle of the piezoelectric six-dimensional force sensor. The main factors affecting the load sharing ratio are obtained. The finite element model of the piezoelectric six-dimensional force sensor is established. In order to verify the load sharing principle of the sensor, a load sharing test device for the piezoelectric force sensor is designed and fabricated, and the load sharing experimental platform is set up. The experimental results are in accordance with the theoretical analysis and simulation results. The experiments show that multi-dimensional, heavy-load force measurement can be realized by the parallel arrangement of the load sharing ring and the force-sensitive element in the novel integrated piezoelectric six-dimensional force sensor. The ideal load sharing effect of the sensor can be achieved with appropriate size parameters. This paper provides important guidance for the design of force measuring devices based on the load sharing mode. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hieber, Simone E.; Bikis, Christos; Khimchenko, Anna; Schulz, Georg; Deyhle, Hans; Thalmann, Peter; Chicherova, Natalia; Rack, Alexander; Zdora, Marie-Christine; Zanette, Irene; Schweighauser, Gabriel; Hench, Jürgen; Müller, Bert
2016-10-01
Cell visualization and counting play a crucial role in biological and medical research, including the study of neurodegenerative diseases. The neuronal cell loss is typically determined to measure the extent of the disease. Its characterization is challenging because the cell density and size already differ by more than three orders of magnitude in a healthy cerebellum. Cell visualization is commonly performed by histology and fluorescence microscopy. These techniques are limited in resolving complex microstructures in the third dimension. Phase-contrast tomography has been proven to provide sufficient contrast in the three-dimensional imaging of soft tissue down to the cell level and, therefore, offers the basis for three-dimensional segmentation. Within this context, a human cerebellum sample was embedded in paraffin and measured in local phase-contrast mode at the beamline ID19 (ESRF, Grenoble, France) and the Diamond Manchester Imaging Branchline I13-2 (Diamond Light Source, Didcot, UK). After the application of Frangi-based filtering, the data showed sufficient contrast to automatically identify the Purkinje cells and to quantify their density at 177 cells per mm³ within the volume of interest. Moreover, brain layers were segmented in a region of interest based on edge detection. Subsequently performed histological analysis validated the presence of the cells, which required a mapping from the two-dimensional histological slices to the three-dimensional tomogram. The methodology can also be applied to further tissue types and shows potential for computational tissue analysis in health and disease.
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUC of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
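The sparse-coding step that K-SVD alternates with its dictionary update can be sketched with orthogonal matching pursuit (OMP), selecting at most L = 2 non-zero coefficients as in the optimal setting reported above. The dictionary and signal below are synthetic stand-ins, not DCE-MRI features, and this is OMP only, not the full K-SVD dictionary update.

```python
import numpy as np

# Hedged sketch: orthogonal matching pursuit with L = 2 non-zero
# coefficients over a K = 4 atom dictionary (synthetic stand-ins).
rng = np.random.default_rng(3)
D = rng.standard_normal((20, 4))                # K = 4 dictionary atoms
D /= np.linalg.norm(D, axis=0)                  # unit-norm columns
y = 1.5 * D[:, 1] - 0.7 * D[:, 3]               # signal built from 2 atoms

support, residual = [], y.copy()
for _ in range(2):                              # L = 2 sparse coefficients
    idx = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
    support.append(idx)
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    residual = y - D[:, support] @ coef         # re-fit on chosen atoms
print(sorted(support), np.round(coef, 3))
```

In full K-SVD, this coding step and a per-atom dictionary update alternate until the sparse coefficients condense the information in the way the abstract describes.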
Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.
Latha, Indu; Reichenbach, Stephen E; Tao, Qingping
2011-09-23
Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously decreasing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. Therefore, the optimal low-dimensional features are obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.
A new hyperchaotic map and its application for image encryption
NASA Astrophysics Data System (ADS)
Natiq, Hayder; Al-Saidi, N. M. G.; Said, M. R. M.; Kilicman, Adem
2018-01-01
Based on the one-dimensional Sine map and the two-dimensional Hénon map, a new two-dimensional Sine-Hénon alteration model (2D-SHAM) is hereby proposed. Basic dynamic characteristics of 2D-SHAM are studied through the following aspects: equilibria, Jacobian eigenvalues, trajectory, bifurcation diagram, Lyapunov exponents and a sensitive dependence test. The complexity of 2D-SHAM is investigated using the Sample Entropy algorithm. Simulation results show that 2D-SHAM is hyperchaotic overall, with high complexity and high sensitivity to its initial values and control parameters. To investigate its performance in terms of security, a new 2D-SHAM-based image encryption algorithm (SHAM-IEA) is also proposed. In this algorithm, the essential requirements of confusion and diffusion are accomplished, and the stochastic 2D-SHAM is used to enhance the security of the encrypted image. The stochastic 2D-SHAM generates random values, hence SHAM-IEA can produce different encrypted images even with the same secret key. Experimental results and security analysis show that SHAM-IEA has a strong capability to withstand statistical analysis, differential attacks, and chosen-plaintext and chosen-ciphertext attacks.
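The abstract does not reproduce the 2D-SHAM equations, so as a stand-in this sketch iterates the classical two-dimensional Hénon map (one of the two building blocks named above) and illustrates the kind of sensitivity-to-initial-values test mentioned; the parameters and perturbation size are conventional choices, not values from the paper.

```python
import numpy as np

# Hedged sketch: sensitive dependence on initial values for the classical
# Hénon map, used here as a stand-in for the (unspecified) 2D-SHAM.
def henon_orbit(x, y, a=1.4, b=0.3, n=100):
    xs = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs.append(x)
    return np.array(xs)

orbit_a = henon_orbit(0.1, 0.1)
orbit_b = henon_orbit(0.1 + 1e-10, 0.1)     # tiny initial perturbation
print(np.max(np.abs(orbit_a - orbit_b)))    # trajectories diverge markedly
```

A 1e-10 perturbation grows to order one within the run, which is the property that lets a chaos-based cipher produce completely different keystreams from nearly identical keys.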
2013-01-01
Peak alignment is a critical procedure in mass spectrometry-based biomarker discovery in metabolomics. One peak alignment approach for comprehensive two-dimensional gas chromatography mass spectrometry (GC×GC-MS) data is peak-matching-based alignment. A key to peak-matching-based alignment is the calculation of mass spectral similarity scores. Various mass spectral similarity measures have been developed, mainly for compound identification, but the effect of these spectral similarity measures on the performance of peak-matching-based alignment remains unknown. Therefore, we selected five mass spectral similarity measures, cosine correlation, Pearson's correlation, Spearman's correlation, partial correlation, and part correlation, and examined their effects on peak alignment using two sets of experimental GC×GC-MS data. The results show that the spectral similarity measure does not affect the alignment accuracy significantly in analysis of data from less complex samples, while the partial correlation performs much better than the other spectral similarity measures when analyzing experimental data acquired from complex biological samples. PMID:24151524
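Two of the similarity measures compared above can be computed in a few lines. The intensity vectors below are made-up stand-ins for real GC×GC-MS spectra aligned on a common m/z axis; this shows only the cosine and Pearson scores, not the full alignment procedure.

```python
import numpy as np

# Hedged sketch: cosine correlation and Pearson's correlation between
# two mass spectra (made-up intensity vectors on a common m/z axis).
spec_a = np.array([0.0, 10.0, 55.0, 5.0, 100.0, 20.0])
spec_b = np.array([2.0, 12.0, 50.0, 4.0, 95.0, 18.0])

cosine = spec_a @ spec_b / (np.linalg.norm(spec_a) * np.linalg.norm(spec_b))
pearson = np.corrcoef(spec_a, spec_b)[0, 1]  # cosine of mean-centred spectra
print(round(cosine, 4), round(pearson, 4))
```

In peak-matching-based alignment, scores like these decide whether two peaklets from different runs are treated as the same analyte.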
Three-dimensional baroclinic instability of a Hadley cell for small Richardson number
NASA Technical Reports Server (NTRS)
Antar, B. N.; Fowlis, W. W.
1983-01-01
For the case of a baroclinic flow whose Richardson number, Ri, is of order unity, a three-dimensional linear stability analysis is conducted on the basis of a model for a thin, horizontal, rotating fluid layer which is subjected to horizontal and vertical temperature gradients. The Hadley cell basic state and stability analysis are both based on the Navier-Stokes and energy equations, and perturbations possessing zonal, meridional, and vertical structures are considered. An attempt is made to extend the previous theoretical work on three-dimensional baroclinic instability for small Ri to a more realistic model involving the Prandtl and Ekman numbers, as well as to finite growth rates and a wider range of the zonal wavenumber. In general, it is found that the symmetric modes of maximum growth are not purely symmetric, but have a weak zonal structure.
Electric potential calculation in molecular simulation of electric double layer capacitors
NASA Astrophysics Data System (ADS)
Wang, Zhenxing; Olmsted, David L.; Asta, Mark; Laird, Brian B.
2016-11-01
For the molecular simulation of electric double layer capacitors (EDLCs), a number of methods have been proposed and implemented to determine the one-dimensional electric potential profile between the two electrodes at a fixed potential difference. In this work, we compare several of these methods for a model LiClO4-acetonitrile/graphite EDLC simulated using both the traditional fixed-charge method (FCM), in which a fixed charge is assigned a priori to the electrode atoms, and the recently developed constant potential method (CPM) (2007 J. Chem. Phys. 126 084704), in which the electrode charges are allowed to fluctuate to keep the potential fixed. Based on an analysis of the full three-dimensional electric potential field, we suggest a method for determining the averaged one-dimensional electric potential profile that can be applied to both the FCM and CPM simulations. Compared to traditional methods based on numerically solving the one-dimensional Poisson's equation, this method yields better accuracy and requires no supplemental assumptions.
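As a minimal sketch of the underlying electrostatics (not the averaging method proposed in the paper), a one-dimensional potential profile can be obtained from a planar charge-density profile by integrating Poisson's equation twice; the slab width and charge profile below are toy values:

```python
import numpy as np

# Toy one-dimensional electrostatics: from rho(z), integrate Gauss's law
# dE/dz = rho/eps0 for the field, then dphi/dz = -E for the potential.
# Geometry and charge profile are illustrative, not the EDLC model.
eps0 = 8.854e-12                           # vacuum permittivity, F/m
z = np.linspace(0.0, 1e-9, 1001)           # 1 nm slab, hypothetical
rho = 1e8 * np.sin(2 * np.pi * z / 1e-9)   # overall-neutral toy density, C/m^3
dz = z[1] - z[0]

def cumtrapz(y, dz):
    # Cumulative trapezoidal integral with value 0 at the first point
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * 0.5) * dz))

E = cumtrapz(rho, dz) / eps0   # field with E(0) = 0 as boundary condition
phi = -cumtrapz(E, dz)         # potential with phi(0) = 0
print(f"E at far wall: {E[-1]:.3e} V/m (near 0 for a neutral slab)")
```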
Kimizuka, Hajime; Kurokawa, Shu; Yamaguchi, Akihiro; Sakai, Akira; Ogata, Shigenobu
2014-01-01
Predicting the equilibrium ordered structures at internal interfaces, especially in the case of nanometer-scale chemical heterogeneities, is an ongoing challenge in materials science. In this study, we established an ab-initio coarse-grained modeling technique for describing the phase-like behavior of a close-packed stacking-fault-type interface containing solute nanoclusters, which undergo a two-dimensional disorder-order transition, depending on the temperature and composition. Notably, this approach can predict the two-dimensional medium-range ordering in the nanocluster arrays realized in Mg-based alloys, in a manner consistent with scanning tunneling microscopy-based measurements. We predicted that the repulsively interacting solute-cluster system undergoes a continuous evolution into a highly ordered densely packed morphology while maintaining a high degree of six-fold orientational order, which is attributable mainly to an entropic effect. The uncovered interaction-dependent ordering properties may be useful for the design of nanostructured materials utilizing the self-organization of two-dimensional nanocluster arrays in the close-packed interfaces. PMID:25471232
Zhang, Miaomiao; Wells, William M; Golland, Polina
2017-10-01
We present an efficient probabilistic model of anatomical variability in a linear space of initial velocities of diffeomorphic transformations and demonstrate its benefits in clinical studies of brain anatomy. To overcome the computational challenges of high-dimensional deformation-based descriptors, we develop a latent variable model for principal geodesic analysis (PGA) based on a low-dimensional shape descriptor that effectively captures the intrinsic variability in a population. We define a novel shape prior that explicitly represents principal modes as a multivariate complex Gaussian distribution on the initial velocities in a bandlimited space. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than state-of-the-art methods such as tangent-space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA), which operate in the high-dimensional image space. Copyright © 2017 Elsevier B.V. All rights reserved.
Experimental study on water content detection of traditional masonry based on infrared thermal image
NASA Astrophysics Data System (ADS)
Zhang, Baoqing; Lei, Zukang
2017-10-01
Based on infrared thermal imaging of seepage tests on two kinds of brick masonry, the relationship between the one-dimensional surface temperature distribution and the one-dimensional surface moisture content was established. After seepage, the minimum-temperature zone of the masonry was determined, and a regression equation relating the highest-moisture-content point to temperature was derived, quantitatively reflecting the relationship between temperature and moisture content in brick masonry. On this basis, an initial analysis method for moisture-related damage in masonry buildings was established, allowing the infrared technique to be applied to the protection of historic buildings.
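The regression coefficients themselves are not reported in the abstract; a minimal sketch of the kind of temperature-moisture regression described, using entirely synthetic readings, might look like:

```python
import numpy as np

# Illustrative sketch only: fit a linear regression between surface
# minimum-temperature readings and measured moisture content, as the
# abstract describes. All numbers below are synthetic, not the paper's data.
temp = np.array([14.2, 15.1, 16.0, 17.3, 18.4, 19.0])    # degrees C, assumed
moisture = np.array([21.5, 18.9, 16.2, 12.8, 9.1, 7.4])  # percent, assumed

slope, intercept = np.polyfit(temp, moisture, 1)  # moisture ~ slope*T + b
r = np.corrcoef(temp, moisture)[0, 1]             # correlation coefficient
print(f"slope {slope:.2f} %/degC, r = {r:.3f}")
```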
NASA Astrophysics Data System (ADS)
Schroer, M. A.; Gutt, C.; Grübel, G.
2014-07-01
Recently, the analysis of scattering patterns by angular cross-correlation analysis (CCA) was introduced to reveal orientational order in disordered samples, with a special focus on future applications at x-ray free-electron laser facilities. We apply this CCA approach to ultra-small-angle light-scattering data obtained from two-dimensional monolayers of microspheres. The films were additionally studied by optical microscopy. This combined approach makes it possible to calculate the cross-correlations of the scattering patterns, characterized by the orientational correlation function Ψl(q), and to obtain the real-space structure of the monolayers. We show that CCA is sensitive to orientational order of the microsphere monolayers that is not directly visible in the scattering patterns. By mixing microspheres of different radii, the sizes of the ordered monolayer domains are reduced. For these samples it is shown that Ψl(q) quantitatively describes the degree of hexagonal order of the two-dimensional films. The experimental CCA results are compared with calculations based on the microscopy images. Both techniques show qualitatively similar features. Differences can be attributed to the wave-front distortion of the laser beam in the experiment. This effect is discussed by investigating the influence of different wave fronts on the cross-correlation analysis results. The characteristics of the cross-correlation analysis determined here will also be relevant for future x-ray-based studies.
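A hedged sketch of the cross-correlation step on a single q-ring (synthetic six-fold-symmetric intensity, not the experimental data): the angular autocorrelation is computed via the circular correlation theorem, and its Fourier spectrum peaks at l = 6 for hexagonal order, mirroring the role of Ψl(q) above:

```python
import numpy as np

# Synthesize one azimuthal intensity ring I(phi) with six-fold symmetry
# plus noise, then compute C(Delta) = <I(phi) I(phi + Delta)>_phi.
rng = np.random.default_rng(0)
n = 360
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
I = 1.0 + 0.5 * np.cos(6 * phi) + 0.05 * rng.standard_normal(n)

# Circular autocorrelation via FFT (correlation theorem)
F = np.fft.fft(I)
C = np.fft.ifft(F * np.conj(F)).real / n

# Angular Fourier spectrum of C; power should concentrate at l = 6
power = np.abs(np.fft.fft(C))[:20]
l_max = int(np.argmax(power[1:]) + 1)   # skip the l = 0 (mean) term
print(f"dominant angular order l = {l_max}")
```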
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.
2018-05-01
We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion.
It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.
Neuschulz, J; Schaefer, I; Scheer, M; Christ, H; Braumann, B
2013-07-01
In order to visualize and quantify the direction and extent of morphological upper-jaw changes in infants with unilateral cleft lip and palate (UCLP) during early orthodontic treatment, a three-dimensional method of cast analysis for routine application was developed. In the present investigation, this method was used to identify reaction patterns associated with specific cleft forms. The study included a cast series reflecting the upper-jaw situations of 46 infants with complete (n=27) or incomplete (n=19) UCLP during week 1 and months 3, 6, and 12 of life. Three-dimensional datasets were acquired and visualized with scanning software (DigiModel®; OrthoProof, The Netherlands). Following interactive identification of landmarks on the digitized surface relief, a defined set of representative linear parameters was measured three-dimensionally. At the same time, the three-dimensional surfaces of one patient series were superimposed based on a defined reference plane. Morphometric differences were statistically analyzed. Thanks to the user-friendly software, all landmarks could be identified quickly and reproducibly, thus allowing for simultaneous three-dimensional measurement of all defined parameters. The measured values revealed that significant morphometric differences were present in all three planes of space between the two patient groups. Patients with complete UCLP underwent significantly larger reductions in cleft width (p<0.001), and sagittal growth in the complete UCLP group exceeded that in the incomplete UCLP group by almost 50% within the first year of life. Based on patients with incomplete versus complete UCLP, different reaction patterns were identified that depended not on the apparent severity of malformation but on cleft form.
GATE: software for the analysis and visualization of high-dimensional time series expression data.
MacArthur, Ben D; Lachmann, Alexander; Lemischka, Ihor R; Ma'ayan, Avi
2010-01-01
We present Grid Analysis of Time series Expression (GATE), an integrated computational software platform for the analysis and visualization of high-dimensional biomolecular time series. GATE uses a correlation-based clustering algorithm to arrange molecular time series on a two-dimensional hexagonal array and dynamically colors individual hexagons according to the expression level of the molecular component to which they are assigned, to create animated movies of systems-level molecular regulatory dynamics. In order to infer potential regulatory control mechanisms from patterns of correlation, GATE also allows interactive interrogation of movies against a wide variety of prior knowledge datasets. GATE movies can be paused and are interactive, allowing users to reconstruct networks and perform functional enrichment analyses. Movies created with GATE can be saved in Flash format and can be inserted directly into PDF manuscript files as interactive figures. GATE is available for download and is free for academic use from http://amp.pharm.mssm.edu/maayan-lab/gate.htm
Analysis and Design of Rotors at Ultra-Low Reynolds Numbers
NASA Technical Reports Server (NTRS)
Kunz, Peter J.; Strawn, Roger C.
2003-01-01
Design tools have been developed for ultra-low Reynolds number rotors, combining enhanced actuator-ring / blade-element theory with airfoil section data based on two-dimensional Navier-Stokes calculations. This performance prediction method is coupled with an optimizer for both design and analysis applications. Performance predictions from these tools have been compared with three-dimensional Navier Stokes analyses and experimental data for a 2.5 cm diameter rotor with chord Reynolds numbers below 10,000. Comparisons among the analyses and experimental data show reasonable agreement both in the global thrust and power required, but the spanwise distributions of these quantities exhibit significant deviations. The study also reveals that three-dimensional and rotational effects significantly change local airfoil section performance. The magnitude of this issue, unique to this operating regime, may limit the applicability of blade-element type methods for detailed rotor design at ultra-low Reynolds numbers, but these methods are still useful for evaluating concept feasibility and rapidly generating initial designs for further analysis and optimization using more advanced tools.
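As a much-simplified illustration of the blade-element idea (constant chord and lift coefficient, no induced-inflow or tip-loss corrections, and made-up rotor numbers chosen so the tip chord Reynolds number stays below 10,000):

```python
import math

# Hedged blade-element sketch with assumed rotor numbers (not the paper's
# design): integrate two-dimensional section lift along the span to estimate
# thrust, ignoring induced inflow and tip losses for brevity.
rho = 1.225          # air density, kg/m^3
omega = 2000.0       # rotor speed, rad/s (assumed)
R = 0.0125           # rotor radius (2.5 cm diameter), m
chord = 0.003        # blade chord, m, assumed constant
cl = 0.5             # assumed constant section lift coefficient
n_blades = 2
n_elem = 100

thrust = 0.0
dr = R / n_elem
for i in range(n_elem):
    r = (i + 0.5) * dr          # midpoint of the spanwise element
    U = omega * r               # local section speed
    thrust += n_blades * 0.5 * rho * U * U * chord * cl * dr

nu = 1.5e-5                      # kinematic viscosity of air, m^2/s
re_tip = omega * R * chord / nu  # tip chord Reynolds number
print(f"tip Reynolds number: {re_tip:.0f}, thrust: {thrust * 1e3:.2f} mN")
```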
Three-dimensional elastic-plastic finite-element analysis of fatigue crack propagation
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Chermahini, R. G.
1985-01-01
Fatigue cracks are a major problem in designing structures subjected to cyclic loading. Cracks frequently occur in structures such as aircraft and spacecraft. The inspection intervals of many aircraft structures are based on crack-propagation lives. Therefore, improved predictions of propagation lives under flight-load conditions (variable-amplitude loading) are needed to provide more realistic design criteria for these structures. The main thrust was to develop a three-dimensional, nonlinear, elastic-plastic, finite-element program capable of extending a crack and changing boundary conditions for the model under consideration. The finite-element model is composed of 8-noded (linear-strain) isoparametric elements. In the analysis, the material is assumed to be elastic-perfectly plastic. The cyclic stress-strain curve for the material is shown. Zienkiewicz's initial-stress method, von Mises's yield criterion, and Drucker's normality condition under small-strain assumptions are used to account for plasticity. The three-dimensional analysis is capable of extending the crack and changing boundary conditions under cyclic loading.
Zhou, Xuan; Chen, Cen; Ye, Xiaolan; Song, Fenyun; Fan, Guorong; Wu, Fuhai
2017-01-01
In this paper, by coupling reversed phase liquid chromatography (RPLC) and hydrophilic interaction liquid chromatography (HILIC), a two-dimensional liquid chromatography system was developed for separation and identification of the active ingredients in Gardenia jasminoides Ellis (GJE). By applying the semi-preparative C18 column as the first dimension and the core-shell column as the second dimension, a total of 896 peaks of GJE were separated. Among the 896 peaks, 16 active ingredients including geniposide, gardenoside, gardoside, etc. were identified by mass spectrometry analysis. The results indicated that the proposed two-dimensional RPLC/HILIC system was an effective method for the analysis of GJE and might hold a high potential to become a useful tool for analysis of other complex mixtures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Stoogiometry: A Cognitive Approach to Teaching Stoichiometry.
ERIC Educational Resources Information Center
Krieger, Carla R.
1997-01-01
Describes the use of Moe's Mall, a locational device designed to be used by learners, as a simple algorithm for solving mole-based exercises efficiently and accurately using dimensional analysis. (DDR)
Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde
2016-03-01
Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with the natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm³ (0.7%) and 4.4 mm³ (1.9%) for the direct measurement, and between -1.3 mm³ (-0.6%) and 11.9 mm³ (+5.9%) for the optical scan. A surface part-comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. The current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling of Melt-Infiltrated SiC/SiC Composite Properties
NASA Technical Reports Server (NTRS)
Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.; Lang, Jerry
2009-01-01
The elastic properties of a two-dimensional five-harness melt-infiltrated silicon carbide fiber reinforced silicon carbide matrix (MI SiC/SiC) ceramic matrix composite (CMC) were predicted using several methods. Methods used in this analysis are multiscale laminate analysis, micromechanics-based woven composite analysis, a hybrid woven composite analysis, and two- and three-dimensional finite element analyses. The elastic properties predicted are in good agreement with each other as well as with the available measured data. However, the various methods differ from each other in three key areas: (1) the fidelity provided, (2) the effort required for input data preparation, and (3) the computational resources required. Results also indicate that the efficient methods are able to provide a reasonable estimate of local stress fields.
Three-dimensional passive sensing photon counting for object classification
NASA Astrophysics Data System (ADS)
Yeom, Seokwon; Javidi, Bahram; Watson, Edward
2007-04-01
In this keynote address, we discuss three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is presented for classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results show that a low level of photons is sufficient to classify out-of-plane rotated objects.
A novel four-dimensional analytical approach for analysis of complex samples.
Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J
2016-05-01
A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multiheart-cutting LC system, using a long modulation time of 4 min, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in comparison to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices such as plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation.
Sparks, Rachel; Madabhushi, Anant
2016-01-01
Content-based image retrieval (CBIR) retrieves the database images most similar to a query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low-dimensional representation of the high-dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low-dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low-dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into an ML scheme such that the low-dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03; by comparison, CBIR with principal component analysis (PCA) used to learn the low-dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
1981-08-01
meaning of dimensionality of achievement data. It also shows the importance of construct validity, even in criterion referenced testing of the cognitive ... aspect of performance, and that the traditional means of item analysis that are based on taking the variances of binary scores and content analysis
3D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about the status of cultural heritage is crucial for its interpretation, its conservation, and during restoration processes. The application of digital-imaging solutions for feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results, and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware are rapidly evolving, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper shows how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that allows interaction with a physical object for its study and analysis, using 3D tracking based augmented reality techniques.
Sautier, L P; Scherwath, A; Weis, J; Sarkar, S; Bosbach, M; Schendel, M; Ladehoff, N; Koch, U; Mehnert, A
2015-10-01
Our purpose was the psychometric evaluation of the German version of the Utrecht Work Engagement Scale-9 (UWES-9), a self-assessment tool measuring work-related resources consisting of 9 items. Based on a sample of 179 patients with hematological malignancies in in-patient and rehabilitative oncological settings, we tested the dimensional structure by confirmatory and explorative factor analysis. We further evaluated reliability, item characteristics, and construct validity of the UWES-9. The confirmatory factor analysis showed acceptable fit for both a 1-dimensional factor structure and the original 3-factor model. Based on an explorative principal component analysis, we were able to replicate the 1-dimensional factor accounting for 67% of the total variance and showing very high internal consistency (α=0.94) and high factor loads (0.73-0.88). The construct validity was further supported by significant positive correlations between work engagement and meaning of work, corporate feeling, commitment to the workplace, and job satisfaction. The German version of the UWES-9 shows good psychometric qualities in measuring dedication to work in patients with hematological malignancies in in-patient and rehabilitative oncological settings. © Georg Thieme Verlag KG Stuttgart · New York.
Optical sensor in planar configuration based on multimode interference
NASA Astrophysics Data System (ADS)
Blahut, Marek
2017-08-01
In this paper, a numerical analysis of optical sensors based on multimode interference in a planar one-dimensional step-index configuration is presented. The structure consists of single-mode input and output waveguides and a multimode waveguide that guides only a few modes. The material parameters discussed refer to an SU8 polymer waveguide on a SiO2 substrate. The optical system described is designed for the analysis of biological substances.
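The paper's actual device dimensions are not given in this abstract; as a hedged sketch, the standard self-imaging estimate for a multimode section (the Soldano-Pennings beat length of the two lowest-order modes) can be evaluated with assumed SU8-like values:

```python
# Standard multimode-interference self-imaging estimate (Soldano-Pennings):
# beat length of the two lowest-order modes, L_pi ~ 4*n_r*W_e**2 / (3*lambda0).
# All values below are assumptions, not the paper's design parameters.
n_r = 1.59      # SU8 core refractive index (approximate)
W_e = 20e-6     # effective width of the multimode section, m (assumed)
lam0 = 633e-9   # free-space wavelength, m (assumed)

L_pi = 4.0 * n_r * W_e ** 2 / (3.0 * lam0)
print(f"beat length L_pi ~ {L_pi * 1e6:.0f} um")
```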
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, classical LDA cannot be used directly in the small sample size (SSS) problem, where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA. Our proposed implementation of CLDA, which is the most efficient one, is theoretically equivalent to the existing implementations of CLDA. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on several real-world data sets demonstrate the effectiveness of the proposed CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
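A small sketch of the SSS problem the abstract refers to (synthetic data; not the CLDA/NLDA algorithms themselves): with fewer samples than features, the within-class scatter matrix is rank-deficient, so classical LDA's inverse cannot be formed:

```python
import numpy as np

# Small-sample-size (SSS) regime: n samples per class << d features,
# so the within-class scatter S_w is singular and S_w^{-1} S_b fails.
rng = np.random.default_rng(1)
n_per_class, n_features = 5, 20
X1 = rng.standard_normal((n_per_class, n_features))          # class 1
X2 = rng.standard_normal((n_per_class, n_features)) + 2.0    # class 2, shifted

def within_class_scatter(blocks):
    # S_w = sum over classes of centered Gram matrices X_c^T X_c
    S = np.zeros((blocks[0].shape[1],) * 2)
    for X in blocks:
        Xc = X - X.mean(axis=0)
        S += Xc.T @ Xc
    return S

Sw = within_class_scatter([X1, X2])
rank = np.linalg.matrix_rank(Sw)
print(f"rank(S_w) = {rank} < {n_features} features: S_w is singular")
```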
On One-Dimensional Stretching Functions for Finite-Difference Calculations
NASA Technical Reports Server (NTRS)
Vinokur, M.
1980-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. The general two-sided function has many applications in the construction of finite-difference grids.
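As an illustration of interior clustering, a standard sinh-based map of Thompson type is sketched below; it is offered in the same spirit as, but not necessarily identical to, the inverse-hyperbolic-sine function referred to above. Points uniform in the computational coordinate xi are clustered around an interior location x_c:

```python
import math

# Interior clustering map: uniform xi in [0, 1] -> x in [0, 1] with grid
# points concentrated around x_c; beta controls the clustering strength.
def interior_cluster(xi, x_c=0.3, beta=5.0):
    A = (1.0 / (2.0 * beta)) * math.log(
        (1.0 + (math.exp(beta) - 1.0) * x_c) /
        (1.0 + (math.exp(-beta) - 1.0) * x_c))
    return x_c * (1.0 + math.sinh(beta * (xi - A)) / math.sinh(beta * A))

n = 11
grid = [interior_cluster(i / (n - 1)) for i in range(n)]
print([round(x, 4) for x in grid])  # spacing tightens near x = 0.3
```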
NASA Technical Reports Server (NTRS)
Vinokur, M.
1983-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055.
Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data
Király, András; Abonyi, János
2014-01-01
During the last decade, various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers. PMID:24616651
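The core matrix-multiplication trick can be sketched in a few lines (toy transaction data; the paper's full bit-table algorithm also handles closedness and larger itemsets):

```python
import numpy as np

# For a binary transaction-by-item matrix B, the product B.T @ B counts
# pairwise item co-occurrences in a single operation; frequent item pairs
# can then be read off directly against a minimum support threshold.
B = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 1]], dtype=np.int64)

C = B.T @ B                     # C[i, j] = support of itemset {i, j}
min_support = 3
pairs = [(i, j) for i in range(C.shape[0]) for j in range(i + 1, C.shape[1])
         if C[i, j] >= min_support]
print(pairs)  # -> [(0, 1), (0, 3)]
```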
NASA Technical Reports Server (NTRS)
Gatos, H. C.; Watanabe, M.; Actor, G.
1977-01-01
Quantitative analysis of the electron beam-induced current and the dependence of the effective diffusion length of the minority carriers on the penetration depth of the electron beam were employed for the analysis of the carrier recombination characteristics in heavily doped silicon layers. The analysis is based on the concept of the effective excitation strength of the carriers, which takes into consideration all possible recombination sources. Two-dimensional mapping of the surface recombination velocity of P-diffused Si layers is presented together with three-dimensional mapping of minority carrier lifetime in ion-implanted Si. Layers heavily doped with As exhibit improved recombination characteristics compared to those of layers doped with P.
Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang
2007-11-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on this analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide development of CTP imaging technology for better quantification accuracy and lower radiation dose.
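The role of a regularization strength in deconvolution-based perfusion estimation can be illustrated with a minimal sketch of regularized SVD deconvolution; this is a generic textbook formulation, not the paper's cascaded systems framework, and `lam` is an illustrative stand-in for the regularization strength discussed above.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, lam=1e-4):
    """Recover the flow-scaled residue function from a tissue curve and
    an arterial input function (AIF) by regularized SVD deconvolution."""
    n = len(aif)
    # Causal (lower-triangular) convolution matrix built from the AIF
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    # Tikhonov-style filter: damp singular values below lam * s_max
    f = s / (s ** 2 + (lam * s[0]) ** 2)
    return Vt.T @ (f * (U.T @ tissue))

# Noiseless synthetic check: convolve a known residue function with the
# AIF, then recover it by deconvolution.
dt = 1.0
t = np.arange(40) * dt
aif = np.exp(-t / 4.0)      # illustrative AIF shape
r_true = np.exp(-t / 6.0)   # illustrative residue function
tissue = dt * np.convolve(aif, r_true)[:len(t)]
r_est = svd_deconvolve(aif, tissue, dt)
```

With noisy data, increasing `lam` trades bias for variance, which is exactly the regularization-strength dependence the paper's framework quantifies.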
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. The Monte Carlo formulations have been validated by comparing results of this method with other solutions. Extending a one-dimensional problem to a multi-dimensional problem requires some special treatment in the Monte Carlo analysis. Different assumptions yield different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.
Huang, Danyang; Li, Runze; Wang, Hansheng
2014-01-01
Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method by two empirical datasets.
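The marginal chi-square ranking idea can be sketched as follows; this is a generic implementation of screening categorical features by their Pearson chi-square statistic with the response, not necessarily the exact statistic or threshold rule of the paper, and the data below are synthetic.

```python
import numpy as np

def chi2_screen(X, y, top_k=2):
    """Rank categorical features by their Pearson chi-square statistic
    against a categorical response; keep the top_k highest-scoring ones."""
    n, p = X.shape
    scores = np.empty(p)
    y_vals = np.unique(y)
    for j in range(p):
        x_vals = np.unique(X[:, j])
        # Contingency table of feature j against the response
        obs = np.array([[np.sum((X[:, j] == xv) & (y == yv))
                         for yv in y_vals] for xv in x_vals], float)
        expected = np.outer(obs.sum(1), obs.sum(0)) / n
        scores[j] = np.sum((obs - expected) ** 2 / expected)
    return np.argsort(scores)[::-1][:top_k], scores

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = rng.integers(0, 3, (500, 5))
X[:, 2] = y + rng.integers(0, 2, 500)   # feature 2 depends on the response
kept, scores = chi2_screen(X, y, top_k=1)  # feature 2 should rank first
```

Interaction screening, as mentioned above, amounts to running the same statistic on products (cross-classifications) of candidate feature pairs.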
NASA Technical Reports Server (NTRS)
Krishnamoorthy, S.; Ramaswamy, B.; Joo, S. W.
1995-01-01
A thin film draining on an inclined plate has been studied numerically using the finite element method. The three-dimensional governing equations of continuity, momentum, and energy with a moving boundary are integrated in an arbitrary Lagrangian-Eulerian frame of reference. The kinematic equation is solved to precisely update the interface location. Rivulet formation based on an instability mechanism has been simulated using full-scale computation. Comparisons with long-wave theory are made to validate the numerical scheme. A detailed analysis of two- and three-dimensional nonlinear wave formation and of spontaneous rupture forming rivulets under the influence of combined thermocapillary and surface-wave instabilities is performed.
Chen, Xingguo; Fazal, Md. Abul; Dovichi, Norman J.
2007-01-01
Two-dimensional capillary electrophoresis was used for the separation of proteins and biogenic amines from the mouse AtT-20 cell line. The first-dimension capillary contained a TRIS-CHES-SDS-dextran buffer to perform capillary sieving electrophoresis, which separates based on the molecular weight of proteins. The second-dimension capillary contained a TRIS-CHES-SDS buffer for micellar electrokinetic capillary chromatography (MECC). After a 61-second preliminary separation, fractions from the first-dimension capillary were successively transferred to the second-dimension capillary, where they were further separated by MECC. The two-dimensional separation required 60 minutes. PMID:17637850
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of by an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expansion remain unchanged in different Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm is tested with different examples and demonstrates great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
Three-dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomenon is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
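The early stages of this pipeline (Hilbert transform to complex form, time-based covariance, SVD for the complex principal components) can be sketched as follows. The EMD filtering stage is omitted, and the synthetic space-time field, its shape, and the frequency choices are illustrative, not from the patent.

```python
import numpy as np

def analytic_signal(x):
    """Complex (analytic) representation via FFT, equivalent to adding
    i times the Hilbert transform along axis 0."""
    n = x.shape[0]
    X = np.fft.fft(x, axis=0)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h[:, None], axis=0)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
space = np.linspace(0, 1, 30)
# One dominant travelling-wave mode plus noise
field = np.sin(2 * np.pi * (5 * t[:, None] - 2 * space[None, :]))
field += 0.05 * rng.standard_normal(field.shape)

analytic = analytic_signal(field)          # complex representation
cov_t = analytic @ analytic.conj().T       # time-based covariance matrix
U, s, _ = np.linalg.svd(cov_t, hermitian=True)
explained = s / s.sum()   # eigenvalue decay guides how many CPCs to keep
```

For a single travelling wave, the first eigenvalue dominates, which is the decay behavior the patent uses to pick the first 3-10 CPCs.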
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part, and 97.25% for the whole face).
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet is developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after a high-resolution DEM of reliable quality is modeled, topographic analysis, visibility analysis, and reservoir volume computation are studied. In addition, parameters including slope, water level, and NDVI are selected to classify landslide-prone zones within the water-level fluctuation zone of the reservoir area. To establish the virtual reservoir scene, two kinds of methods are used for delivering immersion, interaction, and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphics workstation with the Open Scene Graph (OSG) virtual reality engine. The second virtual scene, with fewer details, is intended for Internet users to ensure fluent rendering speed.
Rebaudi, Alberto; Trisi, Paolo; Pagni, Giorgio; Wang, Hom-Lay
The purpose of this study was to compare microcomputed tomography (microCT) and histologic analysis outcomes of a periodontal regeneration of a human defect treated with a polylactic- and polyglycolic-acid copolymer. At 11 months following the grafting procedure, the root with the surrounding periodontal tissues was removed and analyzed using microCT and histologic techniques. The results suggest that microCT three-dimensional analysis may be used in synergy with two-dimensional histologic sections to provide additional information for studying the regeneration outcomes normally reported by histologic biopsies in humans. Additional data is needed to validate these findings.
NASA Astrophysics Data System (ADS)
Shao, Meng; Xiao, Chengsi; Sun, Jinwei; Shao, Zhuxiao; Zheng, Qiuhong
2017-12-01
The paper analyzes the hydrodynamic characteristics and the strength of a novel dot-matrix oscillating wave energy converter, in line with current research trends: high power, high efficiency, high reliability, and low cost. Based on three-dimensional potential flow theory, the paper establishes the motion control equations of the wave energy converter unit and calculates wave loads and motions. On this basis, a three-dimensional finite element model of the device is built to check its strength. The analysis confirms that the WEC is feasible, and the research results can serve as a reference for the exploration and utilization of wave energy.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
Hosseinian, Banafsheh; Rubin, Marcie S; Clouston, Sean A P; Almaidhan, Asma; Shetye, Pradip R; Cutting, Court B; Grayson, Barry H
2018-01-01
To compare 3-dimensional nasal symmetry in patients with UCLP who had either rotation advancement alone or nasoalveolar molding (NAM) followed by rotation advancement in conjunction with primary nasal repair. Pilot retrospective cohort study. Nasal casts of 23 patients with UCLP from 2 institutions were analyzed; 12 in the rotation advancement only group (Iowa) and 11 in the NAM, rotation advancement with primary nasal repair group (New York). Casts from patients aged 6 to 18 years were scanned using the 3Shape scanner, and 3-dimensional analysis of nasal symmetry was performed using 3dMD Vultus software, Version 2507, 3dMD, Atlanta, GA. Cleft and noncleft side columellar height, nasal dome height, alar base width, and nasal projection were measured linearly. Inter- and intragroup analyses were performed using t tests and paired t tests as appropriate. A statistically significant difference in mean-scaled 3-dimensional asymmetry index was found between groups, with group 1 having a larger measure of asymmetry (4.69 cm³) than group 2 (2.56 cm³; P = .02). Intergroup analysis performed on the most sensitive linear measure, alar base width, revealed significantly less asymmetry on average in group 2 than in group 1 (P = .013). This study suggests that the NAM followed by rotation advancement in conjunction with primary nasal repair approach may result in less nasal asymmetry compared to rotation advancement alone.
Baracat, Patrícia Junqueira Ferraz; de Sá Ferreira, Arthur
2013-12-01
The present study investigated the association between postural tasks and center of pressure spatial patterns of three-dimensional statokinesigrams. Young (n=35; 27.0±7.7years) and elderly (n=38; 67.3±8.7years) healthy volunteers maintained an undisturbed standing position during postural tasks characterized by combined sensory (vision/no vision) and biomechanical challenges (feet apart/together). A method for the analysis of three-dimensional statokinesigrams based on nonparametric statistics and image-processing analysis was employed. Four patterns of spatial distribution were derived from ankle and hip strategies according to the quantity (single; double; multi) and location (anteroposterior; mediolateral) of high-density regions on three-dimensional statokinesigrams. Significant associations between postural task and spatial pattern were observed (young: gamma=0.548, p<.001; elderly: gamma=0.582, p<.001). Robustness analysis revealed small changes related to parameter choices for histogram processing. MANOVA revealed multivariate main effects for postural task [Wilks' Lambda=0.245, p<.001] and age [Wilks' Lambda=0.308, p<.001], with interaction [Wilks' Lambda=0.732, p<.001]. The quantity of high-density regions was positively correlated to stabilogram and statokinesigram variables (p<.05 or lower). In conclusion, postural tasks are associated with center of pressure spatial patterns and are similar in young and elderly healthy volunteers. Single-centered patterns reflected more stable postural conditions and were more frequent with complete visual input and a wide base of support. Copyright © 2013 Elsevier B.V. All rights reserved.
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
An Evidence-Based Videotaped Running Biomechanics Analysis.
Souza, Richard B
2016-02-01
Running biomechanics play an important role in the development of injuries. Performing a running biomechanics analysis on injured runners can help to develop treatment strategies. This article provides a framework for a systematic video-based running biomechanics analysis plan based on the current evidence on running injuries, using 2-dimensional (2D) video and readily available tools. Fourteen measurements are proposed in this analysis plan from lateral and posterior video. Identifying simple 2D surrogates for 3D biomechanic variables of interest allows for widespread translation of best practices, and have the best opportunity to impact the highly prevalent problem of the injured runner. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawai, Kotaro, E-mail: s135016@stn.nagaokaut.ac.jp; Sakamoto, Moritsugu; Noda, Kohei
2016-03-28
A diffractive optical element with a three-dimensional liquid crystal (LC) alignment structure for advanced control of polarized beams was fabricated by a highly efficient one-step photoalignment method. This study is significant because different two-dimensional continuous and complex alignment patterns can be produced on two alignment films by simultaneously irradiating an empty glass cell, composed of two unaligned photocrosslinkable polymer LC films, with a three-beam polarized interference pattern. The polarization azimuth, ellipticity, and rotation direction of the beams diffracted from the resultant LC grating varied widely depending on the two-dimensional diffraction position and the polarization states of the incident beams. These polarization diffraction properties are well explained by theoretical analysis based on Jones calculus.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.
1991-01-01
Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. 
Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.
Eigenspace-based fuzzy c-means for sensing trending topics in Twitter
NASA Astrophysics Data System (ADS)
Muliawati, T.; Murfi, H.
2017-07-01
As information and communication technologies have developed, information needs can be met through social media such as Twitter. The enormous number of Internet users has triggered fast and large data flows, making manual analysis difficult or even impossible. Automated methods for data analysis are needed, one of which is topic detection and tracking. An alternative to latent Dirichlet allocation (LDA) is a soft clustering approach using Fuzzy C-Means (FCM). FCM accommodates the assumption that a document may consist of several topics. However, FCM works well on low-dimensional data but fails on high-dimensional data. Therefore, we propose an approach in which FCM works on low-dimensional data obtained by reducing the data with singular value decomposition (SVD). Our simulations show that this approach gives better accuracy in terms of topic recall than LDA for sensing trending topics in Twitter about an event.
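The SVD-then-FCM idea can be sketched with a plain fuzzy c-means on SVD-reduced data. This is a generic sketch, not the authors' pipeline: the "document" matrix, cluster count, and fuzzifier `m=2` are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns soft memberships U (n x c) and
    cluster centers, starting from random memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Two synthetic "topic" clusters in 20-dim space, reduced to 2-D by SVD
rng = np.random.default_rng(0)
A = np.vstack([rng.normal(0, 0.1, (50, 20)) + np.eye(20)[0],
               rng.normal(0, 0.1, (50, 20)) + np.eye(20)[5]])
U_svd, s, Vt = np.linalg.svd(A, full_matrices=False)
low = U_svd[:, :2] * s[:2]                 # 2-D representation
memberships, centers = fuzzy_c_means(low, c=2)
labels = memberships.argmax(axis=1)
```

The soft memberships (rows of `memberships` sum to 1) are what lets a document belong to several topics at once, which is the assumption motivating FCM above.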
A combinatorial code for pattern formation in Drosophila oogenesis.
Yakoby, Nir; Bristow, Christopher A; Gong, Danielle; Schafer, Xenia; Lembong, Jessica; Zartman, Jeremiah J; Halfon, Marc S; Schüpbach, Trudi; Shvartsman, Stanislav Y
2008-11-01
Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggests that they follow a simple combinatorial code based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of inductive signals, provided by the highly conserved epidermal growth factor receptor and bone morphogenetic protein signaling pathways. We demonstrate the validity of the code by testing it against a set of patterns obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguish 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterize their joint dynamics over four stages of oogenesis. The proposed combinatorial framework allows systematic analysis of the diversity and dynamics of two-dimensional transcriptional patterns and guides future studies of gene regulation.
NASA Astrophysics Data System (ADS)
Latif, A. Afiff; Ibrahim, M. Rasidi; Rahim, E. A.; Cheng, K.
2017-04-01
Conventional milling encounters many difficulties in machining hard and brittle materials; ultrasonic vibration assisted milling (UVAM) has therefore been proposed to overcome this problem. The objective of this research is to study the behavior of the compliance mechanism (CM), the critical part affecting the performance of the UVAM. The design of the CM was investigated with a focus on the one-dimensional case. Experimental results were obtained from a portable laser digital vibrometer, while one-dimensional values such as the safety factor, hinge deformation, and stress were obtained from finite element simulation. The findings help identify the best design, judged by the distance travelled by the piezoelectric actuators. This paper thus provides a clear picture of the behavior of the CM embedded in the UVAM, offering data that can help improve machining by reducing tool wear, lowering cutting forces, and improving workpiece surface roughness.
Observation of Quasi-Two-Dimensional Nonlinear Interactions in a Drift-Wave Streamer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, T.; Nagashima, Y.; Itoh, S.-I.
2010-11-26
A streamer, which is a bunching of drift-wave fluctuations, and its mediator, which generates the streamer by coupling with other fluctuations, have been observed in a cylindrical magnetized plasma. Their radial structures were investigated in detail using biphase analysis. Their quasi-two-dimensional structures were revealed to be equivalent to a pair of fast and slow modes predicted by a nonlinear Schroedinger equation based on the Hasegawa-Mima model.
High-speed three-dimensional shape measurement using GOBO projection
NASA Astrophysics Data System (ADS)
Heist, Stefan; Lutzke, Peter; Schmidt, Ingo; Dietrich, Patrick; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther
2016-12-01
A projector which uses a rotating slide structure to project aperiodic sinusoidal fringe patterns at high frame rates and with high radiant flux is introduced. It is used in an optical three-dimensional (3D) sensor based on coded-light projection, thus allowing the analysis of fast processes. Measurements of an inflating airbag, a rope skipper, and a soccer ball kick at a 3D frame rate of more than 1300 independent point clouds per second are presented.
Terluin, Berend; Brouwers, Evelien P M; Marchand, Miquelle A G; de Vet, Henrica C W
2018-05-01
Many paper-and-pencil (P&P) questionnaires have been migrated to electronic platforms. Differential item and test functioning (DIF and DTF) analysis constitutes a superior research design to assess measurement equivalence across modes of administration. The purpose of this study was to demonstrate an item response theory (IRT)-based DIF and DTF analysis to assess the measurement equivalence of a Web-based version and the original P&P format of the Four-Dimensional Symptom Questionnaire (4DSQ), measuring distress, depression, anxiety, and somatization. The P&P group (n = 2031) and the Web group (n = 958) consisted of primary care psychology clients. Unidimensionality and local independence of the 4DSQ scales were examined using IRT and Yen's Q3. Bifactor modeling was used to assess the scales' essential unidimensionality. Measurement equivalence was assessed using IRT-based DIF analysis using a 3-stage approach: linking on the latent mean and variance, selection of anchor items, and DIF testing using the Wald test. DTF was evaluated by comparing expected scale scores as a function of the latent trait. The 4DSQ scales proved to be essentially unidimensional in both modalities. Five items, belonging to the distress and somatization scales, displayed small amounts of DIF. DTF analysis revealed that the impact of DIF on the scale level was negligible. IRT-based DIF and DTF analysis is demonstrated as a way to assess the equivalence of Web-based and P&P questionnaire modalities. Data obtained with the Web-based 4DSQ are equivalent to data obtained with the P&P version.
Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval
NASA Technical Reports Server (NTRS)
Smith, Jeffrey Scott
2006-01-01
Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations, which cause image degradation. For phase retrieval, a type of image-based wavefront sensing, numerous two-dimensional Fast Fourier Transforms (FFTs) are used. To meet the required real-time specifications, a distributed system is needed, and thus the 2-D FFT necessitates an all-to-all communication among the computational nodes. The 1-D floating point FFT is very efficient on a digital signal processor (DSP). In this study, several architectures that address the all-to-all communication with DSPs are presented and analyzed. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
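The reason a distributed 2-D FFT needs an all-to-all exchange is that it factors into 1-D FFTs over rows, a transpose (the "corner turn" exchanged between nodes), and 1-D FFTs over the new rows. This single-process numpy sketch verifies the factorization; on the DSP cluster each node would hold a block of rows and the transpose would be the all-to-all communication.

```python
import numpy as np

def fft2_by_rows(x):
    step1 = np.fft.fft(x, axis=1)      # each node: 1-D FFT of its local rows
    step2 = step1.T                    # all-to-all: corner-turn transpose
    step3 = np.fft.fft(step2, axis=1)  # 1-D FFT of the transposed rows
    return step3.T                     # transpose back to the original layout

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))      # illustrative frame size
```

The result matches a direct 2-D FFT exactly, so the only distributed-system cost beyond the local 1-D FFTs is the transpose traffic.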
Lawry, Tristan J; Wilt, Kyle R; Scarton, Henry A; Saulnier, Gary J
2012-11-01
The linear propagation of electromagnetic and dilatational waves through a sandwiched plate piezoelectric transformer (SPPT)-based acoustic-electric transmission channel is modeled using the transfer matrix method with mixed-domain two-port ABCD parameters. This SPPT structure is of great interest because it has been explored in recent years as a mechanism for wireless transmission of electrical signals through solid metallic barriers using ultrasound. The model we present is developed to allow for accurate channel performance prediction while greatly reducing the computational complexity associated with 2- and 3-dimensional finite element analysis. As a result, the model primarily considers 1-dimensional wave propagation; however, approximate solutions for higher-dimensional phenomena (e.g., diffraction in the SPPT's metallic core layer) are also incorporated. The model is then assessed by comparing it to the measured wideband frequency response of a physical SPPT-based channel from our previous work. Very strong agreement between the modeled and measured data is observed, confirming the accuracy and utility of the presented model.
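The cascaded two-port idea can be illustrated with a minimal transfer-matrix sketch. The ABCD matrix below is the generic lossless transmission-line form, not the paper's mixed-domain piezoelectric parameters; the layer values are invented for illustration.

```python
import numpy as np

def layer_abcd(k, L, Z):
    """ABCD matrix of a lossless transmission-line section (e.g. an
    acoustic layer of thickness L, wavenumber k, impedance Z)."""
    return np.array([[np.cos(k * L), 1j * Z * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z, np.cos(k * L)]])

def cascade(*mats):
    """Chain two-ports by matrix multiplication, as in the transfer
    matrix method."""
    out = np.eye(2, dtype=complex)
    for m in mats:
        out = out @ m
    return out

# Two quarter-wave layers of equal impedance form a half-wave section,
# which is transparent up to a sign flip (ABCD = -I).
half_wave = cascade(layer_abcd(1.0, np.pi / 2, 50.0),
                    layer_abcd(1.0, np.pi / 2, 50.0))
```

Each physical layer of the channel contributes one matrix to the product, which is why the 1-D model stays cheap compared with 2-D or 3-D finite element analysis.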
Liu, Xueqin; Li, Ning; Yuan, Shuai; Xu, Ning; Shi, Wenqin; Chen, Weibin
2015-12-15
As a random event, a natural disaster has a complex occurrence mechanism. Comprehensive analysis of multiple hazard factors is important in disaster risk assessment. To improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Considering the importance and the deficiencies of multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases. Main hazard factors from the 500-hPa atmospheric circulation system, the near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. After comparing the simulation results with actual dust storm events over 54 years, we found that the two-dimensional Frank copula function showed better fitting results at the lower tail of the hazard factors, while the three-dimensional Frank copula function showed better fitting results at the middle and upper tails. However, for dust storm disasters with short return periods, three-dimensional joint return period simulation shows no obvious advantage. If the return period is longer than 10 years, it shows significant advantages in extreme value fitting. Therefore, we suggest that the multivariate analysis method be adopted in forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. Furthermore, the exploration of this method lays the foundation for the prediction and warning of other natural disasters. Copyright © 2015 Elsevier B.V. All rights reserved.
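The copula-based joint return period idea can be sketched with the two-dimensional Frank copula; this is not the authors' fitted model, and the dependence parameter `theta`, the quantiles, and the mean inter-arrival time `mu` below are illustrative.

```python
import numpy as np

def frank_copula(u, v, theta):
    """Two-dimensional Frank copula C(u, v; theta)."""
    num = (np.exp(-theta * u) - 1) * (np.exp(-theta * v) - 1)
    return -np.log(1 + num / (np.exp(-theta) - 1)) / theta

def joint_return_period_and(u, v, theta, mu=1.0):
    """Return period of the event {U > u AND V > v}, where u and v are
    marginal non-exceedance probabilities of two hazard factors and mu
    is the mean inter-arrival time of events."""
    p_exceed = 1 - u - v + frank_copula(u, v, theta)
    return mu / p_exceed

# Both hazard factors at their 0.9 quantile, positive dependence theta = 5
T = joint_return_period_and(0.9, 0.9, theta=5.0)
```

With positive dependence, the joint return period falls between the marginal return period (here 10) and the independence value (here 100), which is the tail behavior the copula fit is meant to capture.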
Graph-based analysis of kinetics on multidimensional potential-energy surfaces.
Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y
2009-09-01
The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call saddle connectivity graph, for analyzing the global topography and the dynamical properties of many-dimensional potential-energy landscapes and the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.
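A Dijkstra-type extraction of a dominant pathway over a saddle network can be sketched as below; the toy graph and edge costs stand in for the kinetically defined transition costs described in the paper.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}; returns (total_cost, path)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, c in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return float("inf"), []

# Toy landscape: minima A..D connected through saddles with transition costs.
landscape = {
    "A": [("B", 1.2), ("C", 3.0)],
    "B": [("A", 1.2), ("C", 0.5), ("D", 4.0)],
    "C": [("A", 3.0), ("B", 0.5), ("D", 1.0)],
    "D": [("B", 4.0), ("C", 1.0)],
}
cost, path = dijkstra(landscape, "A", "D")
```

On this toy graph the kinetically cheapest route is the indirect one through B and C rather than either two-hop alternative.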
Trajectory-based heating analysis for the European Space Agency/Rosetta Earth Return Vehicle
NASA Technical Reports Server (NTRS)
Henline, William D.; Tauber, Michael E.
1994-01-01
A coupled, trajectory-based flowfield and material thermal-response analysis is presented for the European Space Agency proposed Rosetta comet nucleus sample return vehicle. The probe returns to earth along a hyperbolic trajectory with an entry velocity of 16.5 km/s and requires an ablative heat shield on the forebody. Combined radiative and convective ablating flowfield analyses were performed for the significant heating portion of the shallow ballistic entry trajectory. Both quasisteady ablation and fully transient analyses were performed for a heat shield composed of carbon-phenolic ablative material. Quasisteady analysis was performed using the two-dimensional axisymmetric codes RASLE and BLIMPK. Transient computational results were obtained from the one-dimensional ablation/conduction code CMA. Results are presented for heating, temperature, and ablation rate distributions over the probe forebody for various trajectory points. Comparison of transient and quasisteady results indicates that, for the heating pulse encountered by this probe, the quasisteady approach is conservative from the standpoint of predicted surface recession.
NASA Astrophysics Data System (ADS)
Duran, Volkan; Gençten, Azmi
2016-03-01
In this research, the aim is to analyze quantum qutrit entanglement from a new perspective, in terms of an n-dimensional sphere: the set of points equidistant from a fixed central point in a three-dimensional Euclidean space with both real and imaginary dimensions, which can also be depicted as two unit spheres sharing the same center in a dome-shaped projection. To analyze qutrit entanglement: (i) a new type of n-dimensional hypersphere, an extension of the Bloch sphere to hyperspace, is defined; (ii) new operators and products in this space, such as a rotation operator and combining and gluing products, are defined; and (iii) the entangled states are analyzed in terms of these products to reach a general formula depicting qutrit entanglement, and new patterns between spheres are found that allow the entanglement of different routes to be analyzed more simply in a four-dimensional, time-independent hypersphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, Volkan, E-mail: volkan.duran8@gmail.com; Gençten, Azmi, E-mail: gencten@omu.edu.tr
In this research, the aim is to analyze quantum qutrit entanglement from a new perspective, in terms of an n-dimensional sphere: the set of points equidistant from a fixed central point in a three-dimensional Euclidean space with both real and imaginary dimensions, which can also be depicted as two unit spheres sharing the same center in a dome-shaped projection. To analyze qutrit entanglement: (i) a new type of n-dimensional hypersphere, an extension of the Bloch sphere to hyperspace, is defined; (ii) new operators and products in this space, such as a rotation operator and combining and gluing products, are defined; and (iii) the entangled states are analyzed in terms of these products to reach a general formula depicting qutrit entanglement, and new patterns between spheres are found that allow the entanglement of different routes to be analyzed more simply in a four-dimensional, time-independent hypersphere.
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct injection type is reported. This model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field, neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow, with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat transfer equation, and a mass transfer equation. This fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence, and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas-phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.
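The droplet-trajectory ingredient (Newton's law with a drag force in a known air flow) can be sketched as a minimal explicit integration; the Stokes-drag closure and the property values here are illustrative assumptions, not the report's fuel model.

```python
def step_droplet(pos, vel, air_vel, d, rho_p, mu, dt):
    """One explicit Euler step for a droplet in 2-D.
    d: droplet diameter (m), rho_p: droplet density (kg/m^3),
    mu: air dynamic viscosity (Pa*s)."""
    tau = rho_p * d ** 2 / (18.0 * mu)          # Stokes relaxation time
    ax = (air_vel[0] - vel[0]) / tau            # drag toward local air velocity
    ay = (air_vel[1] - vel[1]) / tau - 9.81     # drag plus gravity
    new_vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    new_pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return new_pos, new_vel

# A 50-micron droplet released at rest into a 10 m/s axial stream.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(1000):
    pos, vel = step_droplet(pos, vel, (10.0, 0.0), 50e-6, 800.0, 1.8e-5, 1e-5)
```

After about 1.6 relaxation times the droplet has picked up roughly 80% of the stream velocity, illustrating why small droplets track the air flow closely.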
Design and Analysis Tool for External-Compression Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.
2012-01-01
A computational tool named SUPIN has been developed to design and analyze external-compression supersonic inlets for aircraft at cruise speeds from Mach 1.6 to 2.0. The inlet types available include the axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced Busemann inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flowfield is divided into parts to provide a framework for the geometry and aerodynamic modeling, and the parts are defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick analysis. SUPIN provides inlet geometry in the form of coordinates and surface grids usable by grid generation methods for higher-fidelity computational fluid dynamics (CFD) analysis. SUPIN is demonstrated through a series of design studies, and CFD analyses were performed to verify some of the analysis results.
Factors Influencing Progressive Failure Analysis Predictions for Laminated Composite Structure
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2008-01-01
Progressive failure material modeling methods used for structural analysis including failure initiation and material degradation are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model for use with a nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details are described in the present paper. Parametric studies for laminated composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented and to demonstrate their influence on progressive failure analysis predictions.
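One iteration of such a progressive-failure scheme, a maximum-stress initiation check followed by ply discounting, can be sketched as below; the allowables and the discount factor are illustrative, not values from the paper.

```python
def max_stress_failed(s11, s22, s12, Xt, Xc, Yt, Yc, S):
    """Maximum-stress initiation criterion for a ply.
    Xt/Xc, Yt/Yc: fiber/transverse tension and compression allowables; S: shear."""
    return (s11 > Xt or s11 < -Xc or
            s22 > Yt or s22 < -Yc or
            abs(s12) > S)

def discount_ply(props, factor=1e-3):
    """Ply-discounting degradation: knock down the moduli of a failed ply."""
    return {k: (v * factor if k in ("E1", "E2", "G12") else v)
            for k, v in props.items()}

# Illustrative carbon/epoxy ply under a fiber-direction overload (Pa).
ply = {"E1": 140e9, "E2": 10e9, "G12": 5e9, "nu12": 0.3}
if max_stress_failed(1.6e9, 20e6, 30e6,
                     Xt=1.5e9, Xc=1.2e9, Yt=50e6, Yc=200e6, S=70e6):
    ply = discount_ply(ply)
```

In a full analysis this check-and-degrade step would sit inside the nonlinear equilibrium iterations, repeated until no new ply failures occur at the current load level.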
Assessment of Severity of Ovine Smoke Inhalation Injury by Analysis of Computed Tomographic Scans
2003-09-01
Computerized analysis of three-dimensional reconstructed scans was also performed, based on Hounsfield unit ranges: hyperinflated, −1,000 to −900; normal, … the interactive segmentation function of the software. The pulmonary parenchyma was separated into four regions based on the Hounsfield unit (HU) … (SII) severity. Methods: Twenty anesthetized sheep underwent graded SII: group I, no smoke; group II, 5 smoke units; group III, 10 units; and group IV
Three-Dimensional Numerical Analyses of Earth Penetration Dynamics
1979-01-31
Lagrangian formulation based on the HEMP method and has been adapted and validated for treatment of normal-incidence (axisymmetric) impact and … code, is a detailed analysis of the structural response of the EPW. This analysis is generated using a nonlinear dynamic, elastic-plastic finite element … based on the HEMP scheme. Thus, the code has the same material modeling capabilities and abilities to track large-scale motion found in the WAVE-L code
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
Quality Assessments of Long-Term Quantitative Proteomic Analysis of Breast Cancer Xenograft Tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Jian-Ying; Chen, Lijun; Zhang, Bai
The identification of protein biomarkers requires large-scale analysis of human specimens to achieve statistical significance. In this study, we evaluated the long-term reproducibility of an iTRAQ (isobaric tags for relative and absolute quantification) based quantitative proteomics strategy using one channel for universal normalization across all samples. A total of 307 liquid chromatography tandem mass spectrometric (LC-MS/MS) analyses were completed, generating 107 one-dimensional (1D) LC-MS/MS datasets and 8 offline two-dimensional (2D) LC-MS/MS datasets (25 fractions for each set) for human-in-mouse breast cancer xenograft tissues representative of basal and luminal subtypes. Such large-scale studies require the implementation of robust metrics to assess the contributions of technical and biological variability in the qualitative and quantitative data. Accordingly, we developed a quantification confidence score based on the quality of each peptide-spectrum match (PSM) to remove quantification outliers from each analysis. After combining confidence score filtering and statistical analysis, reproducible protein identification and quantitative results were achieved from LC-MS/MS datasets collected over a 16 month period.
Sanchez Sorzano, Carlos Oscar; Alvarez-Cabrera, Ana Lucia; Kazemi, Mohsen; Carazo, Jose María; Jonić, Slavica
2016-04-26
Single-particle electron microscopy (EM) has been shown to be very powerful for studying structures and associated conformational changes of macromolecular complexes. In the context of analyzing conformational changes of complexes, distinct EM density maps obtained by image analysis and three-dimensional (3D) reconstruction are usually analyzed in 3D for interpretation of structural differences. However, graphic visualization of these differences based on a quantitative analysis of elastic transformations (deformations) among density maps has not been done yet due to a lack of appropriate methods. Here, we present an approach that allows such visualization. This approach is based on statistical analysis of distances among elastically aligned pairs of EM maps (one map is deformed to fit the other map), and results in visualizing EM maps as points in a lower-dimensional distance space. The distances among points in the new space can be analyzed in terms of clusters or trajectories of points related to potential conformational changes. The results of the method are shown with synthetic and experimental EM maps at different resolutions. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
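The embedding of maps as points in a lower-dimensional distance space can be illustrated with classical multidimensional scaling (MDS) on a pairwise distance matrix; the 4 x 4 matrix below is synthetic, standing in for elastic distances among EM maps.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed n points from an n x n pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]         # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Synthetic distances: maps 0-1 are close, maps 2-3 are close, pairs far apart,
# mimicking two conformational clusters.
D = np.array([[0.0, 1.0, 8.0, 8.1],
              [1.0, 0.0, 8.2, 8.0],
              [8.0, 8.2, 0.0, 1.1],
              [8.1, 8.0, 1.1, 0.0]])
X = classical_mds(D, dim=2)   # 4 points whose Euclidean distances mimic D
```

The embedded points can then be inspected for clusters or ordered trajectories, which is the visualization the abstract describes.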
NASA Technical Reports Server (NTRS)
Andrews, E. H., Jr.; Mackley, E. A.
1976-01-01
An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation on the AIM of the mixed-compression inlet design represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results both based on the one-dimensional force-momentum theorem yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart that resulted from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) for mixture gases was proposed. The kernel function in the SVM was used to map the seriously overlapping absorption spectra into a high-dimensional space; after this transformation, the high-dimensional data could be processed in the original space, so a regression calibration model was established and then applied to analyze the concentration of each component gas. It was also shown that the SVM regression calibration model could be used for component recognition of mixture gases. The method was applied to the analysis of different data samples. Factors that affect the model, such as the scan interval, wavelength range, kernel function, and penalty coefficient C, are discussed. Experimental results show that the maximal mean absolute error of component concentration is 0.132% and that the component recognition accuracy is higher than 94%. The problems of overlapping absorption spectra, using the same method for qualitative and quantitative analysis, and a limited number of training samples were solved. The method could be used in other mixture gas infrared spectrum analyses, with promising theoretical and practical value.
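A kernel-based regression calibration in this spirit can be sketched with an RBF kernel ridge regressor (a stand-in for SVR, chosen to keep the example dependency-free); the "spectra" and concentrations below are synthetic, not the paper's gas data.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, gamma=1.0, lam=1e-4):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Synthetic "absorbance spectra" whose sum tracks component concentration.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 5))
y = X.sum(axis=1)                      # stand-in for concentration
alpha = fit(X, y, gamma=0.5)
y_hat = predict(X, alpha, X[:5], gamma=0.5)
```

The kernel plays the same role as in the SVM: overlapping, nonlinearly mixed spectra become linearly separable (or regressable) in the implicit high-dimensional feature space.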
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire.
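The core of the spectral approach, clustering eigenvectors of a normalized graph Laplacian built from item similarities, can be sketched as below; the similarity matrix is synthetic, and a simple sign split of the Fiedler vector replaces K-means to stay self-contained.

```python
import numpy as np

def spectral_two_clusters(W):
    """W: symmetric nonnegative item-similarity matrix. Returns 0/1 labels."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]        # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two item blocks with strong within-block similarity, weak between-block.
W = np.array([[0, 9, 9, 1, 1, 1],
              [9, 0, 9, 1, 1, 1],
              [9, 9, 0, 1, 1, 1],
              [1, 1, 1, 0, 9, 9],
              [1, 1, 1, 9, 0, 9],
              [1, 1, 1, 9, 9, 0]], dtype=float)
labels = spectral_two_clusters(W)
```

With more than two scales, one keeps several leading eigenvectors and runs K-means on the rows, which is the usual normalized spectral clustering recipe.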
Quasi-Two-Dimensional Magnetism in Co-Based Shandites
NASA Astrophysics Data System (ADS)
Kassem, Mohamed A.; Tabata, Yoshikazu; Waki, Takeshi; Nakamura, Hiroyuki
2016-06-01
We report quasi-two-dimensional (Q2D) itinerant electron magnetism in the layered Co-based shandites. Comprehensive magnetization measurements were performed using single crystals of Co3Sn2-xInxS2 (0 ≤ x ≤ 2) and Co3-yFeySn2S2 (0 ≤ y ≤ 0.5). The magnetic parameters of both systems, the Curie temperature TC, effective moment peff, and spontaneous moment ps, exhibit almost identical variations with the In and Fe concentrations, indicating the significance of the electron count for the magnetism in the Co-based shandites. The ferromagnetic-nonmagnetic quantum phase transition is found around xc ˜ 0.8. Analysis based on the extended Q2D spin fluctuation theory clearly reveals the highly Q2D itinerant electron character of the ferromagnetism in the Co-based shandites.
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
Palm vein recognition based on directional empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to larger scales. A DEMD-based two-directional linear discriminant analysis (2LDA) for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations is extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest-neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, demonstrating its feasibility for palm vein recognition.
Design and analysis of photonic crystal micro-cavity based optical sensor platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Amit Kumar, E-mail: amitgoyal.ceeri@gmail.com; Dutta, Hemant Sankar, E-mail: hemantdutta97@gmail.com; Pal, Suchandan, E-mail: spal@ceeri.ernet.in
2016-04-13
In this paper, the design of a two-dimensional photonic crystal micro-cavity based integrated-optic sensor platform is proposed. The behaviour of the designed cavity is analyzed using the two-dimensional finite-difference time-domain (FDTD) method. The structure is designed by deliberately inserting defects in a photonic crystal waveguide structure. The proposed structure shows a quality factor (Q) of about 1e5 and an average sensitivity of 500 nm/RIU in the wavelength range of 1450-1580 nm. The sensing technique is based on the detection of the shift in the upper-edge cut-off wavelength for a reference signal strength of -10 dB in accordance with the change in refractive index of the analyte.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marney, Luke C.; Siegler, William C.; Parsons, Brendon A.
Two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC - TOFMS) is a highly capable instrumental platform that produces complex and information-rich multi-dimensional chemical data. The complex data can be overwhelming, especially when many samples (of various sample classes) are analyzed with multiple injections for each sample. Thus, the data must be analyzed in such a way to extract the most meaningful information. The pixel-based and peak table-based algorithmic use of Fisher ratios has been used successfully in the past to reduce the multi-dimensional data down to those chemical compounds that are changing between classes relative to those that are not (i.e., chemical feature selection). We report on the initial development of a computationally fast novel tile-based Fisher-ratio software that addresses challenges due to 2D retention time misalignment without explicitly aligning the data, which is a problem for both pixel-based and peak table-based methods. Concurrently, the tile-based Fisher-ratio software maximizes the sensitivity contrast of true positives against a background of potential false positives and noise. To study this software, eight compounds, plus one internal standard, were spiked into diesel at various concentrations. The tile-based F-ratio software was able to discover all spiked analytes, within the complex diesel sample matrix with thousands of potential false positives, in each possible concentration comparison, even at the lowest absolute spiked analyte concentration ratio of 1.06.
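The underlying Fisher-ratio feature selection, the ratio of between-class to within-class variance computed per feature (e.g., per tile), can be sketched as below on synthetic data; the class distributions are illustrative, not chromatographic measurements.

```python
import numpy as np

def fisher_ratio(a, b):
    """Two-class Fisher ratio for one feature; a, b: 1-D sample arrays."""
    grand = np.concatenate([a, b]).mean()
    n_a, n_b = len(a), len(b)
    between = n_a * (a.mean() - grand) ** 2 + n_b * (b.mean() - grand) ** 2
    within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    return between / within

# Synthetic data: feature 0 differs between classes (a spiked analyte),
# feature 1 does not (matrix background).
rng = np.random.default_rng(1)
class_a = np.column_stack([rng.normal(5.0, 0.2, 20), rng.normal(1.0, 0.2, 20)])
class_b = np.column_stack([rng.normal(3.0, 0.2, 20), rng.normal(1.0, 0.2, 20)])
ratios = [fisher_ratio(class_a[:, j], class_b[:, j]) for j in range(2)]
```

Ranking features by this ratio surfaces class-discriminating compounds while suppressing features that merely vary within a class, which is the feature-selection role the abstract describes.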
[Nasolabial muscle finite-element study and clinical application].
Yin, Ningbei; Wu, Jiajun; Chen, Bo; Wang, Yongqian; Song, Tao; Ma, Hengyuan
2015-05-01
To investigate the nasolabial muscle anatomy and biomechanical characteristics. Micro-computed tomography scans were performed on 8 lip-nose specimens from spontaneously aborted fetuses to construct a three-dimensional model. The nasolabial muscle structure was analyzed using Mimics software. A three-dimensional configuration model of the nasolabial muscles was established based on local anatomy and tissue sections, and was compared with the tissue sections. Three-dimensional finite element analysis was performed on the biomechanics and surface deformation of the nasolabial muscles. Application verification was carried out in 263 cases of microform cleft lip surgery. The nasolabial muscles are closely related and constitute a muscle tension system, based on which a new cleft lip repair operation was designed and satisfactory results were achieved. The nasolabial muscles are closely related in anatomy, histology, and biomechanics; to obtain better outcomes, cleft lip repair should be performed on the basis of restoring the muscle tension system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bessho, Yasunori; Yokomizo, Osamu; Yoshimoto, Yuichiro
1997-03-01
Development and qualification results are described for a three-dimensional, time-domain core dynamics analysis program for commercial boiling water reactors (BWRs). The program allows analysis of the reactor core with a detailed mesh division, which eliminates calculational ambiguity in the nuclear-thermal-hydraulic stability analysis caused by reactor core regional division. During development, emphasis was placed on high calculational speed and large memory size as attained by the latest supercomputer technology. The program consists of six major modules, namely a core neutronics module, a fuel heat conduction/transfer module, a fuel channel thermal-hydraulic module, an upper plenum/separator module, a feedwater/recirculation flow module, and a control system module. Its core neutronics module is based on the modified one-group neutron kinetics equation with the prompt jump approximation and with six delayed neutron precursor groups. The module is used to analyze one fuel bundle of the reactor core with one mesh (region). The fuel heat conduction/transfer module solves the one-dimensional heat conduction equation in the radial direction with ten nodes in the fuel pin. The fuel channel thermal-hydraulic module is based on separated three-equation, two-phase flow equations with the drift flux correlation, and it analyzes one fuel bundle of the reactor core with one channel to evaluate flow redistribution between channels precisely. Thermal margin is evaluated by using the GEXL correlation, for example, in the module.
NASA Astrophysics Data System (ADS)
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of a three-dimensional soil column was suggested to be satisfied with the specified maximum frequency. The layered soil column was divided into multiple sub-soils with a different viscous damping matrix according to the shear velocities as the soil properties were significantly different. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
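The meshing constraint behind "satisfied with the specified maximum frequency" is commonly expressed as an element size small enough to resolve the shortest shear wavelength; the rule and the points-per-wavelength count below are a conventional assumption, not a value taken from the paper.

```python
def max_element_size(vs, f_max, points_per_wavelength=10):
    """Largest allowable element height for a soil-column mesh.
    vs: shear-wave velocity (m/s); f_max: highest frequency to resolve (Hz).
    Uses the common rule h <= Vs / (n * f_max) with n ~ 8-10 elements
    per shortest wavelength (assumed here)."""
    return vs / (points_per_wavelength * f_max)

# Example: soft clay layer with Vs = 150 m/s, resolving motions up to 25 Hz.
h = max_element_size(150.0, 25.0)
```

Because the rule scales with shear-wave velocity, soft layers demand much finer vertical discretization than stiff ones, which is why the layered column is subdivided layer by layer.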
Spanwise effects on instabilities of compressible flow over a long rectangular cavity
NASA Astrophysics Data System (ADS)
Sun, Y.; Taira, K.; Cattafesta, L. N.; Ukeiley, L. S.
2017-12-01
The stability properties of two-dimensional (2D) and three-dimensional (3D) compressible flows over a rectangular cavity with length-to-depth ratio of L/D=6 are analyzed at a free-stream Mach number of M_∞ =0.6 and depth-based Reynolds number of Re_D=502. In this study, we closely examine the influence of three-dimensionality on the wake mode that has been reported to exhibit high-amplitude fluctuations from the formation and ejection of large-scale spanwise vortices. Direct numerical simulation (DNS) and bi-global stability analysis are utilized to study the stability characteristics of the wake mode. Using the bi-global stability analysis with the time-averaged flow as the base state, we capture the global stability properties of the wake mode at a spanwise wavenumber of β =0. To uncover spanwise effects on the 2D wake mode, 3D DNS are performed with cavity width-to-depth ratio of W/D=1 and 2. We find that the 2D wake mode is not present in the 3D cavity flow with W/D=2, in which spanwise structures are observed near the rear region of the cavity. These 3D instabilities are further investigated via bi-global stability analysis for spanwise wavelengths of λ /D=0.5{-}2.0 to reveal the eigenspectra of the 3D eigenmodes. Based on the findings of 2D and 3D global stability analysis, we conclude that the absence of the wake mode in 3D rectangular cavity flows is due to the release of kinetic energy from the spanwise vortices to the streamwise vortical structures that develops from the spanwise instabilities.
High- and low-level hierarchical classification algorithm based on source separation process
NASA Astrophysics Data System (ADS)
Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber
2016-10-01
High-dimensional data applications have attracted great attention in recent years. We focus on remote sensing data analysis in high-dimensional spaces, such as hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most hierarchical algorithms associate leaves with individual clusters and start from a large number of individual classes equal to the number of pixels; in our approach, however, leaves are associated with the most relevant sources, which are represented along mutually independent axes to represent specific land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost, since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results, since the low-level agglomerative clustering is guided by the most relevant independent sources.
Then at each new step we obtain a new finer partition that will participate in the clustering process to enhance semantic capabilities and give good identification rates.
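The low-level agglomerative step described in this record is guided by independent sources; as a generic illustration of the agglomerative mechanism itself (not the authors' source-guided variant), a minimal single-linkage merge on one-dimensional feature values might look like this, with all data hypothetical:

```python
# Minimal single-linkage agglomerative clustering sketch (hypothetical data).
# Each point starts as its own cluster; the two closest clusters are merged
# until the desired number of clusters remains.

def single_linkage(points, n_clusters):
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        # Find the pair of clusters with the smallest inter-point distance.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

print(single_linkage([0.1, 0.2, 0.9, 1.0, 5.0], 2))
# -> [[0.1, 0.2, 0.9, 1.0], [5.0]]
```

In the paper's scheme, the merge criterion would additionally be constrained by the relevant independent sources rather than by raw distance alone.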
NASA Astrophysics Data System (ADS)
Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong
2018-01-01
Discrete models such as the lumped parameter model and the finite element model are widely used in solving the soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit or simulates frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of layered soil deposits based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system can exactly reproduce the natural frequencies and the hysteretic damping of soil deposits and provide more accurate results with fewer DOFs.
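The natural frequencies such a discrete model must reproduce have a classical closed-form benchmark for a uniform layer on rigid bedrock, f_n = (2n − 1)·Vs/(4H); the sketch below evaluates this textbook formula (it is a benchmark target, not the paper's MDOF formulation):

```python
# Natural frequencies of a uniform soil layer over rigid bedrock
# (classical shear-beam solution): f_n = (2n - 1) * Vs / (4 * H).
# A discrete model such as an equivalent MDOF system is typically
# judged by how well it reproduces these values.

def layer_frequencies(vs, h, n_modes):
    """vs: shear-wave velocity [m/s], h: layer thickness [m]."""
    return [(2 * n - 1) * vs / (4.0 * h) for n in range(1, n_modes + 1)]

# Hypothetical site: Vs = 200 m/s, H = 30 m; fundamental frequency ~1.67 Hz.
print(layer_frequencies(vs=200.0, h=30.0, n_modes=3))
```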
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
Velocity filtering applied to optical flow calculations
NASA Technical Reports Server (NTRS)
Barniv, Yair
1990-01-01
Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow are expanded, and experimental results are presented.
NASA Astrophysics Data System (ADS)
Yeom, Seokwon
2013-05-01
Millimeter-wave imaging is drawing increasing attention in security applications for the detection of weapons concealed under clothing. In this paper, concealed object segmentation and three-dimensional localization schemes are reviewed. A concealed object is segmented by the k-means algorithm. A feature-based stereo-matching method estimates the longitudinal distance of the concealed object. The distance is estimated from the discrepancy between the corresponding centers of the segmented objects. Experimental results are provided with an analysis of the depth resolution.
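Estimating longitudinal distance from the discrepancy between segmented object centers follows the standard stereo relation Z = f·B/d; a minimal sketch under assumed (hypothetical) camera parameters:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# (in pixels), B the baseline between the two sensors, and d the disparity
# between corresponding object centers in the left and right images.

def depth_from_disparity(focal_px, baseline_m, center_left_x, center_right_x):
    disparity = center_left_x - center_right_x
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity

# Hypothetical values: 800 px focal length, 0.5 m baseline, 40 px disparity.
print(depth_from_disparity(800.0, 0.5, 320.0, 280.0))  # -> 10.0 (metres)
```

The depth resolution analysis in the paper follows from the same relation: a fixed disparity error translates into a depth error that grows quadratically with range.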
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, through comparisons with experimental data and with solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined-cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
On Riemann boundary value problems for null solutions of the two dimensional Helmholtz equation
NASA Astrophysics Data System (ADS)
Bory Reyes, Juan; Abreu Blaya, Ricardo; Rodríguez Dagnino, Ramón Martin; Kats, Boris Aleksandrovich
2018-01-01
The Riemann boundary value problem (RBVP to shorten notation) in the complex plane, for different classes of functions and curves, is still widely used in mathematical physics and engineering: for instance, in elasticity theory, hydro- and aerodynamics, shell theory, quantum mechanics, the theory of orthogonal polynomials, and so on. In this paper, we present an appropriate hyperholomorphic approach to the RBVP associated with the two-dimensional Helmholtz equation in R^2. Our analysis is based on a suitable operator calculus.
Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2012-05-04
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico
2008-03-01
Wear curves from individual patients often show unexplained irregularities or impossible values (negative wear). We postulated that errors of two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements less sensitive to radiographic projection differences of cemented THAs. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive to projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.
Team-Based Simulations: Learning Ethical Conduct in Teacher Trainee Programs
ERIC Educational Resources Information Center
Shapira-Lishchinsky, Orly
2013-01-01
This study aimed to identify the learning aspects of team-based simulations (TBS) through the analysis of ethical incidents experienced by 50 teacher trainees. A four-dimensional model emerged: learning to make decisions in a "supportive-forgiving" environment; learning to develop standards of care; learning to reduce misconduct; and learning to…
APPLE - An aeroelastic analysis system for turbomachines and propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral
1992-01-01
This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Chen; Liu, Liu; Guo, Xu
2016-01-15
Two novel coordination polymers, namely [Ca(NCP)₂]∞ (I) and [Sr(NCP)₂]∞ (II), were synthesized under hydrothermal conditions based on 2-(4-carboxyphenyl)imidazo[4,5-f][1,10]phenanthroline (HNCP) and characterized by elemental analysis, infrared spectrometry, X-ray powder diffraction and single-crystal X-ray diffraction. The findings indicate that I and II are isomorphous and isostructural, containing the M(NCP⁻)₄ unit (M = Ca(II) and Sr(II)), from which they assemble into three-dimensional (3D) porous 4-fold interpenetrated honeycomb-shaped neutral coordination polymers (CPs). Between the adjacent lamellar structures in I and II, π–π interactions between the pyridine rings belonging to the phenanthroline of NCP⁻ stabilize the frameworks. Both I and II display strong fluorescence emissions as well as high thermal stability. - Graphical abstract: One-dimensional nanotubular channels with a cross dimension of 37.1959(20)×23.6141(11) Å² are observed in the three-dimensional honeycomb-shaped coordination network of II. The topological analysis of II indicates a typical diamond framework possessing large adamantanoid cages, which contain four cyclohexane-shaped patterns in chair conformations. - Highlights: • Two isomorphous and isostructural coordination polymers based on a flexible ligand and two alkaline-earth metal salts have been synthesized and characterized. • Structural analysis indicates that I and II assemble into 3D porous honeycomb-shaped metal-organic frameworks. • Both I and II display strong fluorescence emissions and high thermal stability.
Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B; Almon, Richard R; DuBois, Debra C; Jusko, William J; Hoffman, Eric P
2004-01-01
Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/ splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes, as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and associated analysis tools provides access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp).
Welke, Juliane Elisa; Zanus, Mauro; Lazzarotto, Marcelo; Pulgati, Fernando Hepp; Zini, Cláudia Alcaraz
2014-12-01
The main changes in the volatile profile of base wines and their corresponding sparkling wines produced by the traditional method were evaluated and investigated for the first time using headspace solid-phase microextraction combined with comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometric detection (GC×GC/TOFMS) and chemometric tools. Fisher ratios helped to find the 119 analytes that were responsible for the main differences between base and sparkling wines, and principal component analysis explained 93.1% of the total variance of the 78 selected compounds. It was also possible to observe five subclusters in base wine and four subclusters in sparkling wine samples through hierarchical cluster analysis, which seemed to have an organised distribution according to the regions where the wines came from. Twenty of the most important volatile compounds co-eluted with other components, and separation of some of them was possible due to GC×GC/TOFMS performance. Copyright © 2014. Published by Elsevier Ltd.
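The Fisher ratio used to rank analytes is, in its simplest two-class form, the squared difference of class means over the pooled within-class variance; a generic sketch with hypothetical peak-area data (not the authors' exact implementation):

```python
# Two-class Fisher ratio for one feature (e.g. one analyte's peak area
# across base wines vs. sparkling wines): large values indicate features
# that separate the two classes well.
from statistics import mean, pvariance

def fisher_ratio(class_a, class_b):
    between = (mean(class_a) - mean(class_b)) ** 2
    within = pvariance(class_a) + pvariance(class_b)
    return between / within

base = [1.0, 1.2, 0.9, 1.1]       # hypothetical peak areas, class 1
sparkling = [2.0, 2.1, 1.9, 2.2]  # hypothetical peak areas, class 2
print(fisher_ratio(base, sparkling))
```

Ranking every detected analyte by this ratio and keeping the top scorers is how a shortlist like the 119 discriminating analytes can be produced.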
Perceptual integration of kinematic components in the recognition of emotional facial expressions.
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
2018-04-01
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
3MRA UNCERTAINTY AND SENSITIVITY ANALYSIS
This presentation discusses the Multimedia, Multipathway, Multireceptor Risk Assessment (3MRA) modeling system. The outline of the presentation is: modeling system overview - 3MRA versions; 3MRA version 1.0; national-scale assessment dimensionality; SuperMUSE: windows-based super...
Verginelli, Iason; Yao, Yijun; Suuberg, Eric M.
2017-01-01
In this study we present a petroleum vapor intrusion tool implemented in Microsoft® Excel® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet. PMID:28163564
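The kind of screening calculation such a spreadsheet performs can be illustrated with the textbook one-dimensional steady-state solution for diffusion with first-order decay, C(z) = C_src·exp(−√(λ/D_eff)·z); this is a simplified stand-in, not the tool's actual two-dimensional piecewise oxygen-limited model, and all parameter values are hypothetical:

```python
# One-dimensional steady-state diffusion with first-order biodegradation:
# C(z) = C_src * exp(-sqrt(lam / d_eff) * z) for a semi-infinite soil column.
# A textbook simplification of the two-dimensional piecewise model the
# tool actually implements.
import math

def attenuated_concentration(c_src, lam, d_eff, z):
    """c_src: source vapor concentration, lam: first-order rate [1/s],
    d_eff: effective diffusion coefficient [m^2/s], z: distance [m]."""
    return c_src * math.exp(-math.sqrt(lam / d_eff) * z)

# Hypothetical parameters chosen so the decay length sqrt(d_eff/lam) is 1 m.
print(attenuated_concentration(c_src=100.0, lam=1e-6, d_eff=1e-6, z=3.0))
```

A Monte Carlo uncertainty analysis like the one in the spreadsheet amounts to re-evaluating such a forward model over sampled parameter distributions.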
Direct determination approach for the multifractal detrending moving average analysis
NASA Astrophysics Data System (ADS)
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ (q ) is related to the partition function and the multifractal spectrum f (α ) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p -model, the two-dimensional p -model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f (α ) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
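A minimal sketch of the detrending-moving-average step that underlies MF-DMA (backward moving average, a single window size, and the q-th order fluctuation function; simplified relative to the paper's canonical-measure formalism):

```python
# Detrending moving average (DMA) sketch: detrend the profile (cumulative
# sum) of a series with a backward moving average of window n, then form
# the q-th order fluctuation function F(q, n). The multifractal analysis
# proper repeats this over many window sizes and fits scaling exponents.
import random

def dma_fluctuation(series, n, q):
    # Profile (cumulative sum) of the series.
    profile = []
    total = 0.0
    for x in series:
        total += x
        profile.append(total)
    # Backward moving average and squared residuals.
    residuals2 = []
    for i in range(n - 1, len(profile)):
        ma = sum(profile[i - n + 1:i + 1]) / n
        residuals2.append((profile[i] - ma) ** 2)
    # q-th order fluctuation function (q > 0 assumed in this sketch).
    return (sum(r ** (q / 2.0) for r in residuals2) / len(residuals2)) ** (1.0 / q)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1000)]
print(dma_fluctuation(noise, n=16, q=2))
```

In the full method, log F(q, n) versus log n yields the generalized Hurst exponents, from which the mass exponent τ(q) and the spectrum f(α) follow.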
Kent, Jack W
2016-02-03
New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.
Pramanik, Brahmananda; Tadepalli, Tezeswi; Mantena, P. Raju
2012-01-01
In this study, the fractal dimensions of failure surfaces of vinyl ester based nanocomposites are estimated using two classical methods, Vertical Section Method (VSM) and Slit Island Method (SIM), based on the processing of 3D digital microscopic images. Self-affine fractal geometry has been observed in the experimentally obtained failure surfaces of graphite platelet reinforced nanocomposites subjected to quasi-static uniaxial tensile and low velocity punch-shear loading. Fracture energy and fracture toughness are estimated analytically from the surface fractal dimensionality. Sensitivity studies show an exponential dependency of fracture energy and fracture toughness on the fractal dimensionality. Contribution of fracture energy to the total energy absorption of these nanoparticle reinforced composites is demonstrated. For the graphite platelet reinforced nanocomposites investigated, surface fractal analysis has depicted the probable ductile or brittle fracture propagation mechanism, depending upon the rate of loading. PMID:28817017
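Both VSM and SIM reduce, conceptually, to estimating a slope in a log–log plot of a measure versus scale; a generic box-counting sketch for a one-dimensional profile conveys the idea (it is not the authors' image-based implementation):

```python
# Box-counting sketch for a profile: count occupied boxes N(s) at several
# box sizes s and estimate the fractal dimension D = -d(log N)/d(log s)
# by a least-squares fit.
import math

def box_count_dimension(profile, sizes):
    logs, logn = [], []
    for s in sizes:
        boxes = set()
        for i, y in enumerate(profile):
            boxes.add((i // s, int(y // s)))
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    # Least-squares slope of log N(s) versus log s.
    k = len(sizes)
    mx, my = sum(logs) / k, sum(logn) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(logs, logn)) / \
            sum((x - mx) ** 2 for x in logs)
    return -slope

# A straight-line profile has dimension 1; rough fracture profiles exceed it.
print(box_count_dimension(list(range(4096)), sizes=[4, 8, 16, 32]))  # -> ~1.0
```

The exponential sensitivity of fracture energy to dimensionality reported in the abstract makes the accuracy of this slope estimate the critical step.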
Three-dimensional ray-tracing model for the study of advanced refractive errors in keratoconus.
Schedin, Staffan; Hallberg, Per; Behndig, Anders
2016-01-20
We propose a numerical three-dimensional (3D) ray-tracing model for the analysis of advanced corneal refractive errors. The 3D modeling was based on measured corneal elevation data by means of Scheimpflug photography. A mathematical description of the measured corneal surfaces from a keratoconus (KC) patient was used for the 3D ray tracing, based on Snell's law of refraction. A model of a commercial intraocular lens (IOL) was included in the analysis. By modifying the posterior IOL surface, it was shown that the imaging quality could be significantly improved. The RMS values were reduced by approximately 50% close to the retina, both for on- and off-axis geometries. The 3D ray-tracing model can constitute a basis for simulation of customized IOLs that are able to correct the advanced, irregular refractive errors in KC.
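The refraction step applied at each corneal and IOL surface intersection follows the vector form of Snell's law; a minimal sketch (shown in 2D for brevity, with a hypothetical refractive index):

```python
# Vector form of Snell's law: given a unit incident direction d, a unit
# surface normal n pointing against d, and refractive indices n1 -> n2,
# return the unit refracted direction. Works in any dimension.
import math

def refract(d, n, n1, n2):
    r = n1 / n2
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(r * di + (r * cos_i - cos_t) * ni for di, ni in zip(d, n))

# Ray incident at 30 degrees to the normal, entering a medium with n = 1.336
# (a commonly quoted value for aqueous humour, used here as an assumption).
d = (math.sin(math.radians(30)), -math.cos(math.radians(30)))
t = refract(d, (0.0, 1.0), 1.0, 1.336)
print(t)  # first component equals sin(30 deg) / 1.336
```

A full ray-tracing model repeats this at every measured surface point, which is how the RMS spot sizes near the retina are accumulated.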
Advanced Software for Analysis of High-Speed Rolling-Element Bearings
NASA Technical Reports Server (NTRS)
Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.
2003-01-01
COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
Yamaguchi, Satoshi; Tsutsui, Kihei; Satake, Koji; Morikawa, Shigehiro; Shirai, Yoshiaki; Tanaka, Hiromi T
2014-10-01
Our goal was to develop a three-dimensional finite element model that enables dynamic analysis of needle insertion for soft materials. To demonstrate large deformation and fracture, we used the arbitrary Lagrangian-Eulerian (ALE) method for fluid analysis. We performed ALE-based finite element analysis for 3% agar gel and three types of copper needle with bevel tips. To evaluate simulation results, we compared the needle deflection and insertion force with corresponding experimental results acquired with a uniaxial manipulator. We studied the shear stress distribution of agar gel on various time scales. For 30°, 45°, and 60°, differences in deflections of each needle between both sets of results were 2.424, 2.981, and 3.737 mm, respectively. For the insertion force, there was no significant difference in mismatched-area error between simulation and experimental results (p < 0.05). Our results have the potential to be a stepping stone to develop pre-operative surgical planning to estimate an optimal needle insertion path for MR image-guided microwave coagulation therapy and for analyzing large deformation and fracture in biological tissues. Copyright © 2014 Elsevier Ltd. All rights reserved.
Interactive Visualization of DGA Data Based on Multiple Views
NASA Astrophysics Data System (ADS)
Geng, Yujie; Lin, Ying; Ma, Yan; Guo, Zhihong; Gu, Chao; Wang, Mingtao
2017-01-01
The commissioning and operation of dissolved gas analysis (DGA) online monitoring makes up for the weaknesses of the traditional DGA method. However, the volume and high dimensionality of DGA data bring a huge challenge for monitoring and analysis. In this paper, we present a novel interactive visualization model of DGA data based on multiple views. This model imitates multi-angle analysis by combining parallel coordinates, a scatter plot matrix and a data table. By offering brushing, collaborative filtering and focus + context technology, this model provides a convenient and flexible interactive way to analyze and understand DGA data.
Three-dimensional MRI perfusion maps: a step beyond volumetric analysis in mental disorders
Fabene, Paolo F; Farace, Paolo; Brambilla, Paolo; Andreone, Nicola; Cerini, Roberto; Pelizza, Luisa; Versace, Amelia; Rambaldelli, Gianluca; Birbaumer, Niels; Tansella, Michele; Sbarbati, Andrea
2007-01-01
A new type of magnetic resonance imaging analysis, based on fusion of three-dimensional reconstructions of time-to-peak parametric maps and high-resolution T1-weighted images, is proposed in order to evaluate the perfusion of selected volumes of interest. Because in recent years a wealth of data have suggested the crucial involvement of vascular alterations in mental diseases, we tested our new method on a restricted sample of schizophrenic patients and matched healthy controls. The perfusion of the whole brain was compared with that of the caudate nucleus by means of intrasubject analysis. As expected, owing to the encephalic vascular pattern, a significantly lower time-to-peak was observed in the caudate nucleus than in the whole brain in all healthy controls, indicating that the suggested method has enough sensitivity to detect subtle perfusion changes even in small volumes of interest. Interestingly, a less uniform pattern was observed in the schizophrenic patients. The latter finding needs to be replicated in an adequate number of subjects. In summary, the three-dimensional analysis method we propose has been shown to be a feasible tool for revealing subtle vascular changes both in normal subjects and in pathological conditions. PMID:17229290
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. 
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single-physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate-based UQ approach is developed, used, and compared against the performance of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assimilate information about the model's parameters: nuclear data cross-sections and thermal-hydraulic parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
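As a toy illustration of the surrogate idea behind these UQ algorithms (not the dissertation's actual reduced order models), a cheap surrogate fitted from a few runs of an "expensive" model can carry the Monte Carlo sampling; the quadratic stand-in model and one-dimensional parameter space are assumptions made only for this sketch:

```python
import random

# Surrogate-based UQ sketch: fit a cheap linear surrogate from three runs
# of an "expensive" model, then estimate the output mean by Monte Carlo
# sampling on the surrogate instead of the full model.

def expensive_model(x):
    return 3.0 * x + 0.1 * x * x    # stand-in for a coupled physics code

def fit_linear_surrogate(xs, ys):
    """Least-squares line through the training samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum(x * y for x, y in zip(xs, ys)) - n * mx * my) / \
            (sum(x * x for x in xs) - n * mx * mx)
    return lambda x: my + slope * (x - mx)

random.seed(0)
train_x = [0.0, 0.5, 1.0]
surrogate = fit_linear_surrogate(train_x,
                                 [expensive_model(x) for x in train_x])

samples = [random.uniform(0.0, 1.0) for _ in range(10000)]
mc_mean = sum(expensive_model(x) for x in samples) / len(samples)
sur_mean = sum(surrogate(x) for x in samples) / len(samples)
assert abs(mc_mean - sur_mean) < 0.05
```

The real algorithms replace this one-dimensional line with a reduced subspace of the high dimensional parameter space, but the sampling economics are the same: the surrogate is evaluated many times, the full model only a few.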
Conjugate Heat Transfer Analyses on the Manifold for Ramjet Fuel Injectors
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.
2006-01-01
Three-dimensional conjugate heat transfer analyses of the manifold located upstream of the ramjet fuel injector are performed using CFdesign, a finite-element computational fluid dynamics (CFD) software package. The flow field of the hot fuel (JP-7) flowing through the manifold is simulated, and the wall temperature of the manifold is computed. The three-dimensional numerical results for the fuel temperature are compared with those obtained using a one-dimensional analysis based on empirical equations, and show good agreement. The numerical results reveal that it takes around 30 to 40 sec to reach equilibrium, at which point the fuel temperature has dropped about 3 °F from the inlet to the exit of the manifold.
Nishizaki, Tatsuya; Matoba, Osamu; Nitta, Kouichi
2014-09-01
The recording properties of three-dimensional speckle-shift multiplexing in reflection-type holographic memory are analyzed numerically. Three-dimensional recording can increase the number of multiplexed holograms by suppressing the cross-talk noise from adjacent holograms through depth-direction multiplexing rather than in-plane multiplexing. Numerical results indicate that the number of multiplexed holograms in three-layer recording can be increased to 1.44 times that of single-layer recording when the acceptable signal-to-noise ratio is set to 2, NA = 0.43, and the thickness of the recording medium is 0.5 mm.
NASA Technical Reports Server (NTRS)
Kennedy, Ronald; Padovan, Joe
1987-01-01
In a three-part series of papers, a generalized finite element solution strategy is developed to handle traveling load problems in rolling, moving and rotating structures. The main thrust of this section is the development of three-dimensional and shell-type moving elements. In conjunction with this work, a compatible three-dimensional contact strategy is also developed. Based on these modeling capabilities, extensive analytical and experimental benchmarking is presented. Such testing includes traveling loads in rotating structures as well as low- and high-speed rolling contact involving standing-wave-type response behavior. These results point to the excellent modeling capabilities of moving element strategies.
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin
2017-10-01
Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the drawbacks of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions for the magnetic field, eddy current, electromagnetic force and torque of such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. In addition, a prototype is manufactured and tested for the torque-speed characteristic.
NASA Astrophysics Data System (ADS)
Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei
2015-04-01
Edible blend oil is a mixture of vegetable oils. A qualified blend oil can meet the daily human need for two essential fatty acids and thus achieve balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents in an edible blend oil determine its nutritional components. A high-precision quantitative analysis method for detecting the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra improve the accuracy of the measurement. A novel quantitative analysis based on Quasi-Monte-Carlo integration is proposed to improve the measurement sensitivity and reduce random error. The partial least squares method is used to solve the nonlinear equations and avoid the effect of multicollinearity. The recovery rates of blend oil mixed from peanut oil, soybean oil, and sunflower oil are calculated to verify the accuracy of the method; they are improved compared with the linear method commonly used for component concentration measurement.
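A minimal sketch of quasi-Monte-Carlo integration of the kind alluded to above, using a Halton low-discrepancy sequence; the two-dimensional toy integrand stands in for a fluorescence spectral surface and is not from the paper:

```python
# Quasi-Monte-Carlo integration sketch: low-discrepancy Halton points
# cover the domain more evenly than random sampling, which is the route
# to reduced random error in the integral estimate.

def halton(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def qmc_integrate_2d(f, n):
    """Estimate the integral of f over the unit square with n Halton points."""
    total = 0.0
    for i in range(1, n + 1):
        total += f(halton(i, 2), halton(i, 3))
    return total / n

# Toy check: the integral of x*y over [0,1]^2 is 0.25.
estimate = qmc_integrate_2d(lambda x, y: x * y, 2000)
assert abs(estimate - 0.25) < 0.01
```

In the paper's setting the integrand would be the measured excitation-emission intensity surface rather than this closed-form toy.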
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected with the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
Dimensionality Reduction Through Classifier Ensembles
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)
1999-01-01
In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
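The idea of selecting feature subsets for their ability to discriminate among classes can be sketched as a correlation-based ranking; the toy data and top-k selection below are an illustration of the principle, not the exact input decimation algorithm:

```python
# Class-aware feature selection sketch: rank features by the absolute
# correlation between each feature column and the class label, and keep
# the top-k subset (each ensemble member would get its own subset).

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def decimate(data, labels, k):
    """Indices of the k features most correlated with the labels."""
    n_features = len(data[0])
    scores = [abs(correlation([row[j] for row in data], labels))
              for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]

data = [[1.0, 0.0, 5.2], [0.9, 1.0, 4.8], [0.1, 0.0, 5.1], [0.2, 1.0, 4.9]]
labels = [1, 1, 0, 0]
assert decimate(data, labels, 1) == [0]   # feature 0 tracks the class
```

Unlike principal component analysis, the retained features are original attributes, so they stay meaningful to human users.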
Fault Diagnosis for Rotating Machinery: A Method based on Image Processing
Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie
2016-01-01
Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of this vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical rotating machines, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
PMID:27711246
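The final classification step above, a probabilistic neural network, is essentially a Parzen-window (kernel density) classifier; a hedged sketch with toy two-dimensional feature vectors and an assumed smoothing parameter:

```python
import math

# Probabilistic neural network sketch: assign a feature vector to the
# class whose Gaussian kernel-density estimate over the training
# exemplars is largest. Labels, vectors, and sigma are illustrative.

def pnn_classify(train, x, sigma=0.5):
    """train: {class_label: [feature_vectors]}; returns the predicted label."""
    def kernel_sum(vectors):
        total = 0.0
        for v in vectors:
            dist2 = sum((a - b) ** 2 for a, b in zip(v, x))
            total += math.exp(-dist2 / (2 * sigma ** 2))
        return total / len(vectors)
    return max(train, key=lambda label: kernel_sum(train[label]))

train = {
    "normal": [[0.1, 0.2], [0.2, 0.1]],
    "fault":  [[0.9, 1.0], [1.0, 0.9]],
}
assert pnn_classify(train, [0.95, 0.95]) == "fault"
```

In the paper's pipeline the inputs to this step would be the t-SNE-reduced SURF feature vectors rather than these toy points.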
Multidimensional data analysis in immunophenotyping.
Loken, M R
2001-05-01
The complexity of cell populations requires careful selection of reagents to detect cells of interest and distinguish them from other types. Additional reagents are frequently used to provide independent criteria for cell identification. Two or three monoclonal antibodies in combination with forward and right-angle light scatter generate a data set that is difficult to visualize because the data must be represented in four- or five-dimensional space. The separation between cell populations provided by the multiple characteristics is best visualized by multidimensional analysis using all parameters simultaneously to identify populations within the resulting hyperspace. Groups of cells are distinguished based on a combination of characteristics not apparent in any usual two-dimensional representation of the data.
Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization
NASA Astrophysics Data System (ADS)
Alekseev, G. V.
2018-04-01
For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.
Two-dimensional DFA scaling analysis applied to encrypted images
NASA Astrophysics Data System (ADS)
Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.
2015-01-01
The technique of detrended fluctuation analysis (DFA) has been widely used to unveil the scaling properties of many different signals. In this paper, we determine the scaling properties of encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton, and we compare the results obtained with its unmodified version and with the AES encryption system. The numerical results show that the encrypted images present a persistent behavior close to that of 1/f noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
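The core DFA computation can be sketched in one dimension (the paper applies its two-dimensional generalization to images); the linear detrending and window layout below follow the standard DFA recipe, not the authors' code:

```python
# One-dimensional DFA sketch: integrate the mean-subtracted signal,
# split the profile into windows of length `scale`, remove a linear
# trend in each window, and report the RMS fluctuation F(scale).

def linear_detrend_rms(segment):
    """RMS residual of `segment` after removing its least-squares line."""
    n = len(segment)
    xs = range(n)
    sx, sy = sum(xs), sum(segment)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, segment))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    resid2 = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(xs, segment))
    return (resid2 / n) ** 0.5

def dfa_fluctuation(signal, scale):
    """F(scale) for a 1-D signal."""
    mean = sum(signal) / len(signal)
    profile, acc = [], 0.0
    for v in signal:
        acc += v - mean
        profile.append(acc)
    windows = [profile[i:i + scale]
               for i in range(0, len(profile) - scale + 1, scale)]
    return (sum(linear_detrend_rms(w) ** 2 for w in windows)
            / len(windows)) ** 0.5
```

The scaling exponent is then the slope of log F(s) versus log s over a range of scales; the two-dimensional variant detrends planes over square sub-matrices instead of lines over windows.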
Cao, Peng; Liu, Xiaoli; Yang, Jinzhu; Zhao, Dazhe; Huang, Min; Zhang, Jian; Zaiane, Osmar
2017-12-01
Alzheimer's disease (AD) is not only a substantial financial burden to the health care system but also an emotional burden to patients and their families. Making an accurate diagnosis of AD based on brain magnetic resonance imaging (MRI) is becoming more and more critical and is emphasized at the earliest stages. However, high dimensionality and imbalanced data are two major challenges in the study of computer-aided AD diagnosis. The greatest limitation of existing dimensionality reduction and over-sampling methods is that they assume a linear relationship between the MRI features (predictor) and the disease status (response). To better capture this complicated but more flexible relationship, we propose multi-kernel based dimensionality reduction and over-sampling approaches. We combined Marginal Fisher Analysis with ℓ2,1-norm based multi-kernel learning (MKMFA) to achieve sparsity over regions-of-interest (ROI), which leads to simultaneously selecting a subset of the relevant brain regions and learning a dimensionality transformation. Meanwhile, multi-kernel over-sampling (MKOS) was developed to generate synthetic instances in the optimal kernel space induced by MKMFA, so as to compensate for the class-imbalanced distribution. We comprehensively evaluate the proposed models for diagnostic classification (binary and multi-class) on all subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The experimental results not only demonstrate that the proposed method has superior performance over multiple comparable methods, but also identify relevant imaging biomarkers that are consistent with prior medical knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rampinelli, Vittorio; Doglietto, Francesco; Mattavelli, Davide; Qiu, Jimmy; Raffetti, Elena; Schreiber, Alberto; Villaret, Andrea Bolzoni; Kucharczyk, Walter; Donato, Francesco; Fontanella, Marco Maria; Nicolai, Piero
2017-09-01
Three-dimensional (3D) endoscopy has been recently introduced in endonasal skull base surgery. Only a relatively limited number of studies have compared it to 2-dimensional, high definition technology. The objective was to compare, in a preclinical setting for endonasal endoscopic surgery, the surgical maneuverability of 2-dimensional, high definition and 3D endoscopy. A group of 68 volunteers, novice and experienced surgeons, were asked to perform 2 tasks, namely simulating grasping and dissection surgical maneuvers, in a model of the nasal cavities. Time to complete the tasks was recorded. A questionnaire to investigate subjective feelings during tasks was filled by each participant. In 25 subjects, the surgeons' movements were continuously tracked by a magnetic-based neuronavigator coupled with dedicated software (ApproachViewer, part of GTx-UHN) and the recorded trajectories were analyzed by comparing jitter, sum of square differences, and funnel index. Total execution time was significantly lower with 3D technology (P < 0.05) in beginners and experts. Questionnaires showed that beginners preferred 3D endoscopy more frequently than experts. A minority (14%) of beginners experienced discomfort with 3D endoscopy. Analysis of jitter showed a trend toward increased effectiveness of surgical maneuvers with 3D endoscopy. Sum of square differences and funnel index analyses documented better values with 3D endoscopy in experts. In a preclinical setting for endonasal skull base surgery, 3D technology appears to confer an advantage in terms of time of execution and precision of surgical maneuvers. Copyright © 2017 Elsevier Inc. All rights reserved.
Kay, Richard; Barton, Chris; Ratcliffe, Lucy; Matharoo-Ball, Balwir; Brown, Pamela; Roberts, Jane; Teale, Phil; Creaser, Colin
2008-10-01
A rapid acetonitrile (ACN)-based extraction method has been developed that reproducibly depletes high abundance and high molecular weight proteins from serum prior to mass spectrometric analysis. A nanoflow liquid chromatography/tandem mass spectrometry (nano-LC/MS/MS) multiple reaction monitoring (MRM) method for 57 high to medium abundance serum proteins was used to characterise the ACN-depleted fraction after tryptic digestion. Of the 57 targeted proteins 29 were detected and albumin, the most abundant protein in serum and plasma, was identified as the 20th most abundant protein in the extract. The combination of ACN depletion and one-dimensional nano-LC/MS/MS enabled the detection of the low abundance serum protein, insulin-like growth factor-I (IGF-I), which has a serum concentration in the region of 100 ng/mL. One-dimensional sodium dodecyl sulfate/polyacrylamide gel electrophoresis (SDS-PAGE) analysis of the depleted serum showed no bands corresponding to proteins of molecular mass over 75 kDa after extraction, demonstrating the efficiency of the method for the depletion of high molecular weight proteins. Total protein analysis of the ACN extracts showed that approximately 99.6% of all protein is removed from the serum. The ACN-depletion strategy offers a viable alternative to the immunochemistry-based protein-depletion techniques commonly used for removing high abundance proteins from serum prior to MS-based proteomic analyses.
NASA Astrophysics Data System (ADS)
Agapov, Vladimir
2018-03-01
The necessity of new approaches to the modeling of rods in the analysis of high-rise constructions is justified. The possibility of applying three-dimensional superelements of rods with rectangular cross section to the static and dynamic analysis of bar and combined structures is considered. The results of a free-vibration analysis of an eighteen-story spatial frame using both one-dimensional and three-dimensional models of rods are presented. A comparative analysis of the obtained results is carried out, and on its basis conclusions are drawn on the possibility of applying three-dimensional superelements in the static and dynamic analysis of high-rise constructions.
Multiple brain atlas database and atlas-based neuroimaging system.
Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A
1997-01-01
For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.
Katayama, Hirohito; Higo, Takashi; Tokunaga, Yuji; Katoh, Shigeo; Hiyama, Yukio; Morikawa, Kaoru
2008-01-01
A practical, risk-based monitoring approach using the combined data collected from actual experiments and computer simulations was developed for the qualification of an EU GMP Annex 1 Grade B, ISO Class 7 area. This approach can locate and minimize the representative number of sampling points used for microbial contamination risk assessment. We conducted a case study on an aseptic clean room, newly constructed and specifically designed for the use of a restricted access barrier system (RABS). Hotspots were located using a previously published empirical measurement method, three-dimensional airflow analysis. Local mean age of air (LMAA) values were calculated based on computer simulations. Comparable results were found using actual measurements and simulations, demonstrating the potential usefulness of such tools in estimating contamination risks based on the airflow characteristics of a clean room. Intensive microbial monitoring and particle monitoring at the Grade B environmental qualification stage, as well as three-dimensional airflow analysis, were also conducted to reveal contamination hotspots. We found representative hotspots at perforated panels covering the air exhausts where the major piston airflows collect in the Grade B room, as well as at any locations within the room identified as having stagnant air. However, we also found that the floor surface air around the exit airway of the RABS EU GMP Annex 1 Grade A, ISO Class 5 area was always remarkably clean, possibly due to the immediate sweep of the piston airflow, which prevents dispersed human microbes from falling in a Stokes-type manner on settling plates placed on the floor around the Grade A exit airway. In addition, this airflow is expected to be clean, with a significantly low LMAA. Based on these observed results, we propose a simplified daily monitoring program for microbial contamination in Grade B environments.
To locate hotspots we propose using a combination of computer simulation, actual airflow measurements, and intensive environmental monitoring at the qualification stage. Thereafter, instead of particle or microbial air monitoring, we recommend the use of microbial surface monitoring at the main air exhaust. These measures would be sufficient to assure the efficiency of the monitoring program, as well as to minimize the number of surface sampling points used in environments surrounding a RABS.
Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun; Niu, Hongli
2015-10-01
Based on an epidemic dynamical system, we construct a new agent-based financial time series model. In order to verify its rationality, we compare the statistical properties of the time series model with those of real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine multi-parameter analysis with tail distribution analysis, modified rescaled range analysis, and multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist in both the real returns and the proposed model. Therefore, the new agent-based financial model can reproduce some important features of real stock markets.
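The rescaled-range statistic underlying the modified R/S analysis mentioned above can be sketched as follows; the toy return series is illustrative, not market data:

```python
# Rescaled-range (R/S) sketch: for a return series, the range of the
# cumulative mean-adjusted sum divided by the standard deviation.
# Long-range dependence shows up in how R/S grows with the series length.

def rescaled_range(series):
    n = len(series)
    mean = sum(series) / n
    acc, cum = 0.0, []
    for v in series:
        acc += v - mean
        cum.append(acc)
    r = max(cum) - min(cum)                                    # range
    s = (sum((v - mean) ** 2 for v in series) / n) ** 0.5      # std dev
    return r / s

returns = [0.01, -0.02, 0.015, -0.005, 0.02, -0.01, 0.005, -0.015]
assert rescaled_range(returns) > 0
```

The Hurst exponent is then estimated as the slope of log(R/S) versus log(n) over sub-series of increasing length n; values above 0.5 indicate the long-range dependence the abstract reports.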
Li, Chun-Hong; Zuo, Hua-Li; Zhang, Qian; Wang, Feng-Qin; Hu, Yuan-Jia; Qian, Zheng-Ming; Li, Wen-Jia; Xia, Zhi-Ning; Yang, Feng-Qing
2017-01-01
Background: As one of the bioactive components in Cordyceps sinensis (CS), proteins have rarely been used as index components to study the correlation between protein components and the producing areas of natural CS. Objective: The protein components of 26 natural CS samples produced in Qinghai, Tibet, and Sichuan provinces were analyzed and compared to investigate the relationship among the 26 different producing areas. Materials and Methods: Proteins from the 26 producing areas were extracted with Tris-HCl buffer containing Triton X-100 and separated using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and two-dimensional electrophoresis (2-DE). Results: The SDS-PAGE results indicated that the number of protein bands and the optical density curves of the proteins in the 26 CS samples differed slightly. However, the 2-DE results showed that the numbers and abundance of protein spots in the protein profiles of the 26 samples were obviously different and showed a certain association with the producing areas. Conclusions: Based on the expression values of matched protein spots, the 26 batches of CS samples can be divided into two main categories (Tibet and Qinghai) by hierarchical cluster analysis. SUMMARY: The number of protein bands and the optical density curves of proteins in the 26 Cordyceps sinensis samples differed slightly on the SDS-PAGE protein profiles. The numbers and abundance of protein spots in the protein profiles of the 26 samples were obviously different on the two-dimensional electrophoresis maps. The 26 producing areas of natural Cordyceps sinensis samples were divided into two main categories (Tibet and Qinghai) by hierarchical cluster analysis based on the values of matched protein spots. Abbreviations Used: SDS-PAGE: Sodium dodecyl sulfate polyacrylamide gel electrophoresis, 2-DE: Two-dimensional electrophoresis, CS: Cordyceps sinensis, TCMs: Traditional Chinese medicines PMID:28250651
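The hierarchical cluster analysis used to group the samples can be sketched with single-linkage agglomeration; the two-feature vectors below are toy stand-ins for matched-spot expression values, not the paper's data:

```python
# Single-linkage hierarchical clustering sketch: repeatedly merge the
# two clusters whose closest members are nearest, until the desired
# number of clusters remains.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(clusters, samples):
    """Merge the two closest clusters; clusters is a list of index lists."""
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(euclidean(samples[a], samples[b])
                    for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

samples = [[1.0, 1.1], [1.1, 0.9], [5.0, 5.2], [5.1, 4.9]]
clusters = [[i] for i in range(len(samples))]
while len(clusters) > 2:
    clusters = single_linkage(clusters, samples)
assert sorted(map(sorted, clusters)) == [[0, 1], [2, 3]]
```

Cutting the merge sequence at two clusters, as here, mirrors the paper's division of the samples into the Tibet and Qinghai categories.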
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
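The variogram-of-response idea at the heart of VARS can be sketched in a few lines: for a chosen factor, sample pairs of points separated by a lag h along that factor's axis and average the squared response differences. The model function, the uniform sampling, and the lag values below are illustrative assumptions, not the authors' STAR-VARS implementation.

```python
import numpy as np

def response_variogram(f, d, dim, h_values, n_samples=2000, seed=0):
    """Directional variogram of a model response along factor `dim`:
    gamma(h) = 0.5 * E[(f(x + h*e_dim) - f(x))^2], x uniform on [0,1]^d."""
    rng = np.random.default_rng(seed)
    gammas = []
    for h in h_values:
        x = rng.uniform(0.0, 1.0 - h, size=(n_samples, d))  # keep x + h inside [0,1]
        x_shift = x.copy()
        x_shift[:, dim] += h
        gammas.append(0.5 * np.mean((f(x_shift) - f(x)) ** 2))
    return np.array(gammas)

# Illustrative model: strongly driven by x0, only weakly by x1.
f = lambda x: np.sin(2 * np.pi * x[:, 0]) + 0.1 * x[:, 1]

h_vals = [0.05, 0.1, 0.2]
gamma_x0 = response_variogram(f, d=2, dim=0, h_values=h_vals)
gamma_x1 = response_variogram(f, d=2, dim=1, h_values=h_vals)
# gamma_x0 rises much faster with h than gamma_x1: x0 is the sensitive factor.
```

The full VARS framework goes further, integrating such variograms across scales and recovering Morris- and Sobol-type indices as limiting cases.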
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
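The grouped first-order index the abstract describes can be illustrated with a brute-force double-loop Monte Carlo estimator of S_G = Var(E[y | x_G]) / Var(y). This is a generic sketch under assumed i.i.d. uniform inputs and a toy model, not the authors' hierarchical, geostatistically accelerated Hanford implementation.

```python
import numpy as np

def grouped_first_order_index(f, d, group, n_outer=200, n_inner=200, seed=0):
    """Brute-force first-order sensitivity index for a *group* of inputs:
    S_G = Var_{x_G}( E[y | x_G] ) / Var(y), inputs i.i.d. uniform on [0,1]."""
    rng = np.random.default_rng(seed)
    group = list(group)
    cond_means = np.empty(n_outer)
    all_y = []
    for k in range(n_outer):
        xg = rng.uniform(size=len(group))      # fix the grouped inputs once...
        x = rng.uniform(size=(n_inner, d))     # ...and resample everything else
        x[:, group] = xg
        y = f(x)
        cond_means[k] = y.mean()               # inner mean approximates E[y | x_G]
        all_y.append(y)
    total_var = np.concatenate(all_y).var()
    return cond_means.var() / total_var

# Illustrative model: the group {0, 1} carries almost all of the variance.
f = lambda x: x[:, 0] + x[:, 1] + 0.1 * x[:, 2]
s_main = grouped_first_order_index(f, d=3, group=[0, 1])
s_minor = grouped_first_order_index(f, d=3, group=[2])
```

Grouping inputs this way is what keeps the dimensionality of the analysis manageable: the number of indices grows with the number of groups, not the number of raw parameters.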
Changes among Israeli Youth Movements: A Structural Analysis Based on Kahane's Code of Informality
ERIC Educational Resources Information Center
Cohen, Erik H.
2015-01-01
Multi-dimensional data analysis tools are applied to Reuven Kahane's data on the informality of youth organizations, yielding a graphic portrayal of Kahane's code of informality. This structure helps address the question of whether the eight structural components exhaustively cover the field without redundancy. Further, the structure is used to…
Waveguide-based electro-absorption modulator performance: comparative analysis
NASA Astrophysics Data System (ADS)
Amin, Rubab; Khurgin, Jacob B.; Sorger, Volker J.
2018-06-01
Electro-optic modulation is a key function for data communication. Given the vast amount of data handled, understanding the intricate physics and trade-offs of on-chip modulators reveals performance regimes not yet explored. Here we show a holistic performance analysis for waveguide-based electro-absorption modulators. Our approach centers on the material properties that determine the obtainable optical absorption, the resulting effective modal cross-section, and material broadening effects; taken together, these describe the modulator's physical behavior entirely. We consider a plurality of material modulation classes, including two-level absorbers such as quantum dots, free-carrier accumulation or depletion such as in ITO or silicon, two-dimensional electron gases in semiconductors such as quantum wells, Pauli blocking in graphene, and excitons in two-dimensional atomic layered materials such as transition metal dichalcogenides. Our results show that reducing the modal area generally improves modulator performance, defined by the amount of induced electrical charge, and hence the energy-per-bit function, required to switch the signal. We find that broadening increases the amount of switching charge needed. While some material classes, such as quantum dots and two-dimensional materials, allow for reduced broadening owing to their reduced Coulomb screening and hence increased oscillator strengths, the sharpness of broadening is overshadowed by thermal effects independent of the material class. Further, we find that plasmonics allows the switching charge and energy-per-bit function to be reduced by about one order of magnitude compared to bulk photonics. This analysis is intended as a guide for the community to predict anticipated modulator performance based on both existing and emerging materials.
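The charge and energy-per-bit figures of merit above can be illustrated with a back-of-envelope capacitor model. The CV^2/4 energy-per-bit convention (random NRZ data) and the 10 fF / 1 V numbers below are assumptions for illustration, not values from the paper.

```python
def switching_metrics(capacitance_f, voltage_v):
    """Charge moved per switching event (Q = C*V) and the common
    C*V^2/4 energy-per-bit estimate for random NRZ data."""
    charge_c = capacitance_f * voltage_v
    energy_j_per_bit = capacitance_f * voltage_v ** 2 / 4.0
    return charge_c, energy_j_per_bit

# Assumed device: 10 fF modulator capacitance driven at 1 V.
q, e_bit = switching_metrics(10e-15, 1.0)
# q = 1e-14 C (10 fC), e_bit = 2.5e-15 J (2.5 fJ/bit)
```

In this simple picture, the order-of-magnitude plasmonic advantage the abstract reports corresponds directly to an order-of-magnitude reduction in the effective capacitance (modal cross-section) being charged.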
Kuritz, K; Stöhr, D; Pollak, N; Allgöwer, F
2017-02-07
Cyclic processes, in particular the cell cycle, are of great importance in cell biology. Continued improvement in cell population analysis methods such as fluorescence microscopy, flow cytometry, CyTOF, or single-cell omics has made mathematical methods based on ergodic principles a powerful tool for studying these processes. In this paper, we establish the relationship between cell-cycle analysis with ergodic principles and age-structured population models. To this end, we describe the progression of a single cell through the cell cycle by a stochastic differential equation on a one-dimensional manifold in the high-dimensional data space of cell-cycle markers. Under the assumption that the cell population is in a steady state, we derive transformation rules that transform the number density on the manifold to the steady-state number density of age-structured population models. Our theory facilitates the study of cell-cycle-dependent processes, including local molecular events, cell death, and cell division, from high-dimensional "snapshot" data. Ergodic analysis can in general be applied to every process that exhibits a steady-state distribution. By combining ergodic analysis with age-structured population models, we furthermore provide the theoretical basis for extending ergodic principles to distributions that deviate from their steady state. Copyright © 2016 Elsevier Ltd. All rights reserved.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
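The eigenbasis construction can be sketched with the simplest discretization, a combinatorial graph Laplacian on a sensor adjacency graph; the paper itself uses discretizations of the Laplace-Beltrami operator on a triangular mesh (including the FEM approach), and the 4-sensor ring below is a toy assumption.

```python
import numpy as np

def laplacian_basis(adjacency):
    """Eigenbasis of the combinatorial graph Laplacian L = D - A.
    Eigenvectors ordered by eigenvalue act like spatial Fourier modes:
    small eigenvalues give smooth spatial patterns, large ones oscillatory."""
    a = np.asarray(adjacency, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a
    eigvals, eigvecs = np.linalg.eigh(lap)   # ascending eigenvalues
    return eigvals, eigvecs

# Toy "sensor layout": 4 sensors on a ring, each linked to its two neighbors.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
vals, vecs = laplacian_basis(ring)
# The first eigenvalue is 0 (constant mode); projecting sensor data onto the
# first few eigenvectors yields a low-dimensional, spatially smooth representation.
```

Dimensionality reduction and noise suppression then amount to keeping only the low-eigenvalue coefficients of this expansion, analogous to low-pass filtering in classical Fourier analysis.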
Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.
2011-01-01
The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108
Grassi, Mario; Nucera, Andrea
2010-01-01
The objective of this study was twofold: 1) to confirm the hypothetical eight scales and two-component summaries of the questionnaire Short Form 36 Health Survey (SF-36), and 2) to evaluate the performance of two alternative measures to the original physical component summary (PCS) and mental component summary (MCS). We performed principal component analysis (PCA) based on 35 items, after optimal scaling via multiple correspondence analysis (MCA), and subsequently on eight scales, after standard summative scoring. Item-based summary measures were planned. Data from the European Community Respiratory Health Survey II follow-up of 8854 subjects from 25 centers were analyzed to cross-validate the original and the novel PCS and MCS. Overall, the scale- and item-based comparison indicated that the SF-36 scales and summaries meet the supposed dimensionality. However, vitality, social functioning, and general health items did not fit the data optimally. The novel measures, derived a posteriori by unit-rule from an oblique (correlated) MCA/PCA solution, are simple item sums or weighted scale sums where the weights are the raw scale ranges. These item-based scores yielded consistent scale-summary results for outlier profiles, with the expected known-group differences validity. We were able to confirm the hypothesized dimensionality of eight scales and two summaries of the SF-36. The alternative scoring reaches at least the same required standards as the original scoring. In addition, it can reduce the item-scale inconsistencies without loss of predictive validity.
Wu, Dingming; Wang, Dongfang; Zhang, Michael Q; Gu, Jin
2015-12-01
One major goal of large-scale cancer omics studies is to identify molecular subtypes for more accurate cancer diagnoses and treatments. To deal with high-dimensional cancer multi-omics data, a promising strategy is to find an effective low-dimensional subspace of the original data and then cluster cancer samples in the reduced subspace. However, due to data-type diversity and big data volume, few methods can integratively and efficiently find the principal low-dimensional manifold of high-dimensional cancer multi-omics data. In this study, we proposed a novel low-rank-approximation-based integrative probabilistic model to quickly find the shared principal subspace across multiple data types: the convexity of the low-rank regularized likelihood function of the probabilistic model ensures efficient and stable model fitting. Candidate molecular subtypes can be identified by unsupervised clustering of hundreds of cancer samples in the reduced low-dimensional subspace. On testing datasets, our method LRAcluster (low-rank approximation based multi-omics data clustering) runs much faster and with better clustering performance than the existing method. We then applied LRAcluster to large-scale cancer multi-omics data from TCGA. The pan-cancer analysis results show that cancers of different tissue origins are generally grouped as independent clusters, except squamous-like carcinomas, while the single-cancer-type analysis suggests that the omics data have different subtyping abilities for different cancer types. LRAcluster is a very useful method for fast dimension reduction and unsupervised clustering of large-scale multi-omics data. LRAcluster is implemented in R and freely available via http://bioinfo.au.tsinghua.edu.cn/software/lracluster/ .
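The overall strategy (find a shared low-dimensional subspace across stacked omics blocks, then cluster samples there) can be sketched with a plain truncated SVD on synthetic data. This is not LRAcluster's probabilistic low-rank model; the two-group data and the trivial 1-D split are assumptions for illustration.

```python
import numpy as np

def shared_subspace(data_blocks, rank):
    """Project samples onto the top-`rank` left singular vectors of the
    column-stacked, feature-centered multi-omics matrix."""
    x = np.hstack(data_blocks)            # samples x (sum of features)
    x = x - x.mean(axis=0)                # center each feature
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :rank] * s[:rank]         # samples x rank coordinates

# Two synthetic "omics" blocks over 40 samples: two groups of 20.
rng = np.random.default_rng(1)
base = np.vstack([rng.normal(0, 0.1, (20, 30)), rng.normal(3, 0.1, (20, 30))])
omics_a = base
omics_b = base + rng.normal(0, 0.1, (40, 30))
z = shared_subspace([omics_a, omics_b], rank=2)
labels = (z[:, 0] > np.median(z[:, 0])).astype(int)  # trivial 2-group split in 1-D
```

In practice, any standard clustering algorithm (k-means, hierarchical clustering) is run on the reduced coordinates z; here the two groups are so well separated that a median split along the first coordinate recovers them.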
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study's imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJOA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who had experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age- and sex-matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire and blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ, with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI also remotely ran a novel statistical analysis, the Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results.
The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors
Luo, Liyan; Xu, Luping; Zhang, Hua
2015-01-01
In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233
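The rotation-invariance that motivates the one-dimensional vector pattern can be demonstrated with a simpler radial feature (an assumption for illustration; the paper's one_DVP construction differs in detail): the distances from the target star to its neighbors do not change when the stellar image rotates, so a 1-D vector built from them is a rotation-invariant signature.

```python
import numpy as np

def radial_pattern(center, neighbors, n_bins=16, max_r=1.0):
    """Rotation-invariant 1-D pattern for a star: a histogram of the
    distances from the target star to its neighbor stars."""
    d = np.linalg.norm(np.asarray(neighbors) - np.asarray(center), axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, max_r))
    return hist

def rotate(points, theta):
    """Rotate 2-D image-plane coordinates about the origin by `theta`."""
    c, s = np.cos(theta), np.sin(theta)
    return np.asarray(points) @ np.array([[c, -s], [s, c]]).T

star = np.array([0.0, 0.0])
field = np.array([[0.1, 0.2], [0.5, -0.3], [-0.4, 0.4], [0.2, 0.6]])
p0 = radial_pattern(star, field)
p1 = radial_pattern(rotate(star, 0.7), rotate(field, 0.7))
# p0 == p1: the pattern is unchanged by image rotation.
```

Identification then reduces, as in the abstract, to comparing such feature vectors against a catalog of precomputed patterns, with no need to search over rotation angles.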
Hopkins, Jesse Bennett; Gillilan, Richard E; Skou, Soren
2017-10-01
BioXTAS RAW is a graphical-user-interface-based free open-source Python program for reduction and analysis of small-angle X-ray solution scattering (SAXS) data. The software is designed for biological SAXS data and enables creation and plotting of one-dimensional scattering profiles from two-dimensional detector images, standard data operations such as averaging and subtraction and analysis of radius of gyration and molecular weight, and advanced analysis such as calculation of inverse Fourier transforms and envelopes. It also allows easy processing of inline size-exclusion chromatography coupled SAXS data and data deconvolution using the evolving factor analysis method. It provides an alternative to closed-source programs such as Primus and ScÅtter for primary data analysis. Because it can calibrate, mask and integrate images it also provides an alternative to synchrotron beamline pipelines that scientists can install on their own computers and use both at home and at the beamline.
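One of the standard primary-analysis steps mentioned above, radius-of-gyration estimation, follows the Guinier approximation ln I(q) ≈ ln I(0) − (Rg²/3) q². The sketch below is a generic least-squares version on noise-free synthetic data, not RAW's own implementation; the particle size and q-range are assumed values.

```python
import numpy as np

def guinier_fit(q, intensity):
    """Estimate I(0) and the radius of gyration Rg from the low-q region
    via the Guinier approximation: ln I(q) ~= ln I0 - (Rg^2/3) * q^2."""
    slope, intercept = np.polyfit(q ** 2, np.log(intensity), 1)
    rg = np.sqrt(-3.0 * slope)
    return np.exp(intercept), rg

# Synthetic Guinier-regime data for a particle with Rg = 30 (arbitrary units).
rg_true = 30.0
i0_true = 100.0
q = np.linspace(0.005, 1.3 / rg_true, 40)     # keep q*Rg <~ 1.3 (validity range)
i = i0_true * np.exp(-(q * rg_true) ** 2 / 3.0)
i0_est, rg_est = guinier_fit(q, i)
```

Real data require choosing the fit window so that q·Rg stays within the approximation's validity range, which is one of the interactive steps such software assists with.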
Mobile three-dimensional visualisation technologies for clinician-led fall prevention assessments.
Hamm, Julian; Money, Arthur G; Atwal, Anita; Ghinea, Gheorghita
2017-08-01
The assistive equipment provision process is routinely carried out with patients to mitigate fall risk factors via the fitment of assistive equipment within the home. However, over 50% of assistive equipment is currently abandoned by patients due to poor fit between the patient and the equipment. This paper explores clinician perceptions of an early-stage three-dimensional measurement aid prototype, which provides enhanced assistive equipment provision process guidance to clinicians. Ten occupational therapists trialled the three-dimensional measurement aid prototype application; think-aloud and semi-structured interview data were collected. Usability was measured with the System Usability Scale. Participants scored the three-dimensional measurement aid prototype as 'excellent' and agreed strongly with items relating to the usability and learnability of the application. The qualitative analysis identified opportunities for improving existing practice, including improved interpretation and recording of measurements and enhanced collaborative practice within the assistive equipment provision process. Future research is needed to determine the clinical utility of this application compared with two-dimensional paper-based guidance leaflets.
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Um, Jaegon; Yi, Su Do; Kim, Beom Jun
2011-11-01
In a number of classical statistical-physical models, there exists a characteristic dimensionality called the upper critical dimension above which one observes the mean-field critical behavior. Instead of constructing high-dimensional lattices, however, one can also consider infinite-dimensional structures, and the question is whether this mean-field character extends to quantum-mechanical cases as well. We therefore investigate the transverse-field quantum Ising model on the globally coupled network and on the Watts-Strogatz small-world network by means of quantum Monte Carlo simulations and the finite-size scaling analysis. We confirm that both of the structures exhibit critical behavior consistent with the mean-field description. In particular, we show that the existing cumulant method has difficulty in estimating the correct dynamic critical exponent and suggest that an order parameter based on the quantum-mechanical expectation value can be a practically useful numerical observable to determine critical behavior when there is no well-defined dimensionality.
NASA Astrophysics Data System (ADS)
Li, Lu-Ke; Zhang, Shen-Feng
2018-03-01
We put forward a technique for acquiring three-dimensional vibration information from a vibrating object using five He-Ne laser beams, and validate the three-dimensional laser vibrometer developed from it with a contact three-axis sensor. The technique is based on the Doppler interference principle and signal demodulation: the vibration information of the object is obtained, and the three-dimensional vibration of the object in space is extracted by an algorithm that also calibrates the angles of the five beams in space, which avoids the effects of mechanical installation error and greatly improves measurement accuracy. A B&K 4527 contact three-axis sensor is used to measure and calibrate the three-dimensional laser vibrometer, ensuring the accuracy of the measurement data. The advantages and disadvantages of contact and non-contact sensors are summarized, and future development trends of the sensor industry are analyzed.
Lenz's law and dimensional analysis
NASA Astrophysics Data System (ADS)
Pelesko, John A.; Cesky, Michael; Huertas, Sharon
2005-01-01
We show that the time it takes a magnet to fall through a nonmagnetic metallic tube may be found via dimensional analysis. The simple analysis makes this classic demonstration of Lenz's law accessible qualitatively and quantitatively to students with little knowledge of electromagnetism and only elementary knowledge of calculus. The analysis provides a new example of the power and limitations of dimensional analysis.
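The dimensional argument can be made concrete as follows (a hedged sketch of the standard result, with an undetermined dimensionless prefactor absorbed into the ∼ signs; here m is the magnet's dipole moment, σ the tube's conductivity, w its wall thickness, a its radius, L its length, and Mg the magnet's weight). Since the eddy-current drag must be linear in velocity and built from these quantities,

```latex
F_{\mathrm{drag}} \sim \frac{\mu_0^2 m^2 \sigma w}{a^4}\, v,
\qquad
v_T \sim \frac{M g\, a^4}{\mu_0^2 m^2 \sigma w},
\qquad
t = \frac{L}{v_T} \sim \frac{\mu_0^2 m^2 \sigma w\, L}{M g\, a^4}.
```

Dimensional analysis fixes this combination of parameters but not the numerical prefactor, illustrating both the power and the limitation mentioned above.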
Three-dimensional intraoperative ultrasound of vascular malformations and supratentorial tumors.
Woydt, Michael; Horowski, Anja; Krauss, Juergen; Krone, Andreas; Soerensen, Niels; Roosen, Klaus
2002-01-01
The benefits and limits of a magnetic sensor-based 3-dimensional (3D) intraoperative ultrasound technique during surgery of vascular malformations and supratentorial tumors were evaluated. Twenty patients with 11 vascular malformations and 9 supratentorial tumors undergoing microsurgical resection or clipping were investigated with an interactive magnetic sensor data acquisition system allowing freehand scanning. An ultrasound probe with a mounted sensor was used after craniotomies to localize lesions, outline tumors or malformation margins, and identify supplying vessels. A 3D data set was obtained allowing reformation of multiple slices in all 3 planes and comparison to 2-dimensional (2D) intraoperative ultrasound images. Off-line gray-scale segmentation analysis allowed differentiation between tissue with different echogenicities. Color-coded information about blood flow was extracted from the images with a reconstruction algorithm. This allowed photorealistic surface displays of perfused tissue, tumor, and surrounding vessels. Three-dimensional intraoperative ultrasound data acquisition was obtained within 5 minutes. Off-line analysis and reconstruction time depends on the type of imaging display and can take up to 30 minutes. The spatial relation between aneurysm sac and surrounding vessels or the skull base could be enhanced in 3 out of 6 aneurysms with 3D intraoperative ultrasound. Perforating arteries were visible in 3 cases only by using 3D imaging. 3D ultrasound provides a promising imaging technique, offering the neurosurgeon an intraoperative spatial orientation of the lesion and its vascular relationships. Thereby, it may improve safety of surgery and understanding of 2D ultrasound images.
NASA Astrophysics Data System (ADS)
Nazarinia, M.; Lo Jacono, D.; Thompson, M. C.; Sheridan, J.
2009-06-01
Previous two-dimensional numerical studies have shown that a circular cylinder undergoing both oscillatory rotational and translational motions can generate thrust so that it will actually self-propel through a stationary fluid. Although a cylinder undergoing a single oscillation has been thoroughly studied, the combination of the two oscillations has not received much attention until now. The current research reported here extends the numerical study of Blackburn et al. [Phys. Fluids 11, L4 (1999)] both experimentally and numerically, recording detailed vorticity fields in the wake and using these to elucidate the underlying physics, examining the three-dimensional wake development experimentally, and determining the three-dimensional stability of the wake through Floquet stability analysis. Experiments conducted in the laboratory are presented for a given parameter range, confirming the early results from Blackburn et al. [Phys. Fluids 11, L4 (1999)]. In particular, we confirm the thrust generation ability of a circular cylinder undergoing combined oscillatory motions. Importantly, we also find that the wake undergoes three-dimensional transition at low Reynolds numbers (Re≃100) to an instability mode with a wavelength of about two cylinder diameters. The stability analysis indicates that the base flow is also unstable to another mode at slightly higher Reynolds numbers, broadly analogous to the three-dimensional wake transition mode for a circular cylinder, despite the distinct differences in wake/mode topology. The stability of these flows was confirmed by experimental measurements.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces results similar to those obtainable by standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
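As a minimal illustration of the basis-function representation step described above (not the authors' Gaussian-Wishart machinery; the Fourier basis, grid size and noise level are assumptions), noisy observations on a dense grid can be reduced to a few basis coefficients by least squares:

```python
import numpy as np

def basis_smooth(t, y, n_basis=5):
    """Least-squares projection of noisy observations y(t) onto a
    small Fourier basis -- the basis-representation step that reduces
    a high-dimensional observation grid to a few coefficients."""
    # Design matrix: constant plus sine/cosine pairs on [0, 1]
    B = [np.ones_like(t)]
    for k in range(1, (n_basis - 1) // 2 + 1):
        B.append(np.sin(2 * np.pi * k * t))
        B.append(np.cos(2 * np.pi * k * t))
    B = np.column_stack(B[:n_basis])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef, coef  # smoothed curve and basis coefficients

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)                    # high-dimensional observation grid
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
y_smooth, coef = basis_smooth(t, y, n_basis=5)
```

Posterior inference in the paper's hierarchical model then operates on coefficients like `coef` rather than on the raw 200-point grid.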
Research of MPPT for photovoltaic generation based on two-dimensional cloud model
NASA Astrophysics Data System (ADS)
Liu, Shuping; Fan, Wei
2013-03-01
The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En, and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. This model is a new method for transformation between qualitative and quantitative knowledge. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed through analysis of auto-optimization MPPT control of photovoltaic power systems combined with cloud model theory. Simulation results show that the cloud controller is simple, intuitive, strongly robust, and achieves better control performance.
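The qualitative-to-quantitative transformation underlying such a controller is typically realized by the forward normal cloud generator; a minimal sketch follows (the parameter values are illustrative assumptions, and this is not the paper's controller code):

```python
import numpy as np

def normal_cloud(Ex, En, He, n=1000, seed=0):
    """Forward normal cloud generator: produces n cloud drops (x, mu)
    for a qualitative concept with expected value Ex, entropy En and
    hyper-entropy He."""
    rng = np.random.default_rng(seed)
    # Randomize the entropy itself -- this is what mixes fuzziness
    # (the membership curve) with randomness (the scattered drops).
    En_prime = rng.normal(En, He, n)
    x = rng.normal(Ex, np.abs(En_prime), n)
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
    return x, mu

x, mu = normal_cloud(Ex=0.0, En=1.0, He=0.1)
```

A two-dimensional cloud controller evaluates rules over two such concepts (e.g. the power error and its rate of change) and aggregates the resulting memberships into a control action.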
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitations of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of two-dimensional thin film thickness, prior to the analysis of spectral reflectance profiles from each pixel of the multispectral images, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring.
Signal Processing and Interpretation Using Multilevel Signal Abstractions.
1986-06-01
mappings expressed in the Fourier domain. Previously proposed causal analysis techniques for diagnosis are based on the analysis of intermediate data ...can be processed either as individual one-dimensional waveforms or as multichannel data ...for source detection and direction ...microphone data. The signal processing for both spectral analysis of microphone signals and direction determination of acoustic sources involves
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.
1997-01-01
We used surface-based representations to analyze functional specializations in the human cerebral cortex. A computerized reconstruction of the cortical surface of the Visible Man digital atlas was generated and transformed to the Talairach coordinate system. This surface was also flattened and used to establish a surface-based coordinate system that respects the topology of the cortical sheet. The linkage between two-dimensional and three-dimensional representations allows the locations of published neuroimaging activation foci to be stereotaxically projected onto the Visible Man cortical flat map. An analysis of two activation studies related to the hearing and reading of music and of words illustrates how this approach permits the systematic estimation of the degree of functional segregation and of potential functional overlap for different aspects of sensory processing.
A comparison of two- and three-dimensional stochastic models of regional solute movement
Shapiro, A.M.; Cvetkovic, V.D.
1990-01-01
Recent models of solute movement in porous media that are based on a stochastic description of the porous medium properties have been dedicated primarily to a three-dimensional interpretation of solute movement. In many practical problems, however, it is more convenient and consistent with measuring techniques to consider flow and solute transport as an areal, two-dimensional phenomenon. The physics of solute movement, however, is dependent on the three-dimensional heterogeneity in the formation. A comparison of two- and three-dimensional stochastic interpretations of solute movement in a porous medium having a statistically isotropic hydraulic conductivity field is investigated. To provide an equitable comparison between the two- and three-dimensional analyses, the stochastic properties of the transmissivity are defined in terms of the stochastic properties of the hydraulic conductivity. The variance of the transmissivity is shown to be significantly reduced in comparison to that of the hydraulic conductivity, and the transmissivity is spatially correlated over larger distances. These factors influence the two-dimensional interpretations of solute movement by underestimating the longitudinal and transverse growth of the solute plume in comparison to its description as a three-dimensional phenomenon. Although this analysis is based on small perturbation approximations and the special case of a statistically isotropic hydraulic conductivity field, it casts doubt on the use of a stochastic interpretation of the transmissivity in describing regional scale movement. However, by assuming the transmissivity to be the vertical integration of the hydraulic conductivity field at a given position, the stochastic properties of the hydraulic conductivity can be estimated from the stochastic properties of the transmissivity and applied to obtain a more accurate interpretation of solute movement. © 1990 Kluwer Academic Publishers.
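The variance reduction from vertical integration described above can be illustrated with a small Monte Carlo sketch (the exponential covariance, grid size, correlation length and lognormal statistics are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_real, nz, corr_len = 2000, 50, 5.0   # realizations, vertical points, correlation length

# Exponential covariance for ln K along the vertical
z = np.arange(nz)
C = np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(nz))

lnK = (L @ rng.standard_normal((nz, n_real))).T   # correlated ln K down each column
T = np.exp(lnK).mean(axis=1)                      # vertical integration -> transmissivity
var_lnK, var_lnT = np.var(lnK), np.var(np.log(T))
# var_lnT comes out substantially smaller than var_lnK: the vertically
# integrated quantity fluctuates much less than the point conductivity.
```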
ERIC Educational Resources Information Center
Mahmoud, Ali Bassam; Khalifa, Bayan
2015-01-01
Purpose: The purpose of this paper is to confirm the factorial structure of SERVPERF based on an exploration of its dimensionality among Syrian universities' students. It also aimed at assessing the perceived service quality offered at these universities. Design/methodology/approach: A cross-sectional survey was conducted targeting students at…
Combinatorial-topological framework for the analysis of global dynamics.
Bush, Justin; Gameiro, Marcio; Harker, Shaun; Kokubu, Hiroshi; Mischaikow, Konstantin; Obayashi, Ippei; Pilarczyk, Paweł
2012-12-01
We discuss an algorithmic framework based on efficient graph algorithms and algebraic-topological computational tools. The framework is aimed at automatic computation of a database of global dynamics of a given m-parameter semidynamical system with discrete time on a bounded subset of the n-dimensional phase space. We introduce the mathematical background, which is based upon Conley's topological approach to dynamics, describe the algorithms for the analysis of the dynamics using rectangular grids both in phase space and parameter space, and show two sample applications.
Principal component analysis on a torus: Theory and application to protein dynamics.
Sittel, Florian; Filk, Thomas; Stock, Gerhard
2017-12-28
A dimensionality reduction method for high-dimensional circular data is developed, which is based on a principal component analysis (PCA) of data points on a torus. Adopting a geometrical view of PCA, various distance measures on a torus are introduced and the associated problem of projecting data onto the principal subspaces is discussed. The main idea is that the (periodicity-induced) projection error can be minimized by transforming the data such that the maximal gap of the sampling is shifted to the periodic boundary. In a second step, the covariance matrix and its eigendecomposition can be computed in a standard manner. Adopting molecular dynamics simulations of two well-established biomolecular systems (Aib9 and villin headpiece), the potential of the method to analyze the dynamics of backbone dihedral angles is demonstrated. The new approach allows for a robust and well-defined construction of metastable states and provides low-dimensional reaction coordinates that accurately describe the free energy landscape. Moreover, it offers a direct interpretation of covariances and principal components in terms of the angular variables. Apart from its application to PCA, the method of maximal gap shifting is general and can be applied to any other dimensionality reduction method for circular data.
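The maximal-gap-shifting step can be sketched as follows (a minimal illustration on toy bimodal angular data, not the authors' implementation):

```python
import numpy as np

def shift_max_gap(theta):
    """Shift circular data (radians, in [-pi, pi)) so that the maximal
    gap of the sampling lies at the periodic boundary -- the step that
    makes the ordinary covariance (and hence PCA) meaningful."""
    s = np.sort(theta)
    gaps = np.diff(np.concatenate([s, [s[0] + 2 * np.pi]]))
    i = np.argmax(gaps)
    cut = s[i] + gaps[i] / 2.0              # middle of the largest gap
    return (theta - cut) % (2 * np.pi) - np.pi

rng = np.random.default_rng(2)
# Bimodal angular data straddling the +/- pi boundary (a toy data set)
theta = np.concatenate([rng.normal(np.pi - 0.2, 0.1, 500),
                        rng.normal(-np.pi + 0.2, 0.1, 500)])
theta = (theta + np.pi) % (2 * np.pi) - np.pi
shifted = shift_max_gap(theta)
# After shifting, the two modes are contiguous, so the naive variance
# is no longer inflated by the periodic wraparound.
```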
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
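The one-dimensional analysis that Part I performs (and Part II extends to two dimensions) can be sketched for a single scheme; the choice of second-order central differences for pure advection is illustrative:

```python
import numpy as np

# Semi-discrete Fourier (von Neumann) analysis of pure advection
# u_t + c u_x = 0 with second-order central differences on grid
# spacing h: substituting u_j = exp(i k j h) into the scheme gives
# an effective phase speed c* = c * sin(k h) / (k h).
kh = np.linspace(1e-6, np.pi, 200)          # resolvable wavenumbers
phase_speed_ratio = np.sin(kh) / kh         # c*/c; the exact value is 1

# Long waves (kh -> 0) propagate at nearly the exact speed, while the
# shortest resolvable wave (kh = pi) does not propagate at all -- the
# dispersive error the multi-methods comparison quantifies.
```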
Deep neural networks for texture classification-A theoretical analysis.
Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant
2018-01-01
We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult for neural networks to shatter. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as the dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.
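The vanishing of relative contrast with growing dimensionality can be checked empirically (uniform sampling in a hypercube is an illustrative assumption, not the paper's manifold setting):

```python
import numpy as np

def relative_contrast(dim, n_points=2000, seed=3):
    """Relative contrast (Dmax - Dmin) / Dmin of Euclidean distances
    from the centroid to points sampled uniformly in the unit cube."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_points, dim))
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return (d.max() - d.min()) / d.min()

low, high = relative_contrast(2), relative_contrast(1000)
# The contrast shrinks as the dimension grows: in high dimensions all
# points sit at nearly the same distance from the centroid, which is
# why nearest-neighbour structure degrades.
```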
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Correia, Gilson; de Brito, Gustavo P.; de Almeida, José; Herdy, Wallace
2017-03-01
Simple prescriptions for computing the D-dimensional classical potential related to electromagnetic and gravitational models, based on the functional generator, are worked out. These recipes are employed afterward to support probing the premise that renormalizable higher-order systems have a finite classical potential at the origin. It is also shown that the converse of the conjecture above is not true. In other words, if a higher-order model is renormalizable, it is necessarily endowed with a finite classical potential at the origin, but the reverse of this statement is untrue. The systems used to check the conjecture were D-dimensional fourth-order Lee-Wick electrodynamics, and the D-dimensional fourth- and sixth-order gravity models. Special attention is devoted to New Massive Gravity (NMG), since it was the analysis of this model that inspired our surmise. In particular, we made use of our premise to resolve trivially the issue of the renormalizability of NMG, which was initially considered to be renormalizable but was shown some years later to be non-renormalizable. We remark that our analysis is restricted to local models in which the propagator has simple and real poles.
Dimensional analyses of frontal posed smile attractiveness in Japanese female patients.
Hata, Kyoko; Arai, Kazuhito
2016-01-01
To identify appropriate dimensional items in objective diagnostic analysis for attractiveness of frontal posed smile in Japanese female patients by comparing with the result of human judgments. Photographs of frontal posed smiles of 100 Japanese females after orthodontic treatment were evaluated by 20 dental students (10 males and 10 females) using a visual analogue scale (VAS). The photographs were ranked based on the VAS evaluations and the 25 photographs with the highest evaluations were selected as group A, and the 25 photos with the lowest evaluations were designated group B. Then 12 dimensional items of objective analysis selected from a literature review were measured. Means and standard deviations for measurements of the dimensional items were compared between the groups using the unpaired t-test with a significance level of P < .05. Mean values were significantly smaller in group A than in group B for interlabial gap, intervermilion distance, maxillary gingival display, maximum incisor exposure, and lower lip to incisor (P < .05). Significant differences were observed only in the vertical dimension, not in the transverse dimension. Five of the 12 objective diagnostic items were correlated with human judgments of the attractiveness of frontal posed smile in Japanese females after orthodontic treatment.
Molteni, Matteo; Magatti, Davide; Cardinali, Barbara; Rocco, Mattia; Ferri, Fabio
2013-01-01
The average pore size ξ0 of filamentous networks assembled from biological macromolecules is one of the most important physical parameters affecting their biological functions. Modern optical methods, such as confocal microscopy, can noninvasively image such networks, but extracting a quantitative estimate of ξ0 is a nontrivial task. We present here a fast and simple method based on a two-dimensional bubble approach, which works by analyzing one by one the (thresholded) images of a series of three-dimensional thin data stacks. No skeletonization or reconstruction of the full geometry of the entire network is required. The method was validated by using many isotropic in silico generated networks of different structures, morphologies, and concentrations. For each type of network, the method provides accurate estimates (a few percent) of the average and the standard deviation of the three-dimensional distribution of the pore sizes, defined as the diameters of the largest spheres that can be fit into the pore zones of the entire gel volume. When applied to the analysis of real confocal microscopy images taken on fibrin gels, the method provides an estimate of ξ0 consistent with results from elastic light scattering data. PMID:23473499
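The largest-fitting-circle idea behind the bubble approach can be illustrated in two dimensions with a brute-force distance computation (a toy single-fiber image, not the authors' thresholded-stack pipeline; real implementations would use a fast distance transform):

```python
import numpy as np

def largest_pore_diameter(img):
    """Diameter of the largest circle that fits in the pore phase
    (img == 0) of a small 2D binary image; fibers are img == 1.
    Brute-force nearest-solid distances, adequate for toy images."""
    pore = np.argwhere(img == 0)
    solid = np.argwhere(img == 1)
    # Distance of every pore pixel to the nearest solid pixel; the
    # maximum is the radius of the largest inscribed circle.
    d = np.sqrt(((pore[:, None, :] - solid[None, :, :]) ** 2).sum(-1)).min(1)
    return 2 * d.max()

# Toy "network": a single straight fiber crossing a 21x21 image
img = np.zeros((21, 21), dtype=int)
img[10, :] = 1
print(largest_pore_diameter(img))   # prints 20.0
```

The paper's method applies this kind of analysis slice by slice to thresholded confocal stacks and aggregates the results into a three-dimensional pore-size distribution.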
Bravini, Elisabetta; Franchignoni, Franco; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano; Foti, Calogero
2015-01-01
To perform a comprehensive analysis of the psychometric properties and dimensionality of the Upper Limb Functional Index (ULFI) using both classical test theory and Rasch analysis (RA). Prospective, single-group observational design. Freestanding rehabilitation center. Convenience sample of Italian-speaking subjects with upper limb musculoskeletal disorders (N=174). Not applicable. The Italian version of the ULFI. Data were analyzed using parallel analysis, exploratory factor analysis, and RA for evaluating dimensionality, functioning of rating scale categories, item fit, hierarchy of item difficulties, and reliability indices. Parallel analysis revealed 2 factors explaining 32.5% and 10.7% of the response variance. RA confirmed the failure of the unidimensionality assumption, and 6 items out of the 25 misfitted the Rasch model. When the analysis was rerun excluding the misfitting items, the scale showed acceptable fit values, loading meaningfully to a single factor. Item separation reliability and person separation reliability were .98 and .89, respectively. Cronbach alpha was .92. RA revealed weakness of the scale concerning dimensionality and internal construct validity. However, a set of 19 ULFI items defined through the statistical process demonstrated a unidimensional structure, good psychometric properties, and clinical meaningfulness. These findings represent a useful starting point for further analyses of the tool (based on modern psychometric approaches and confirmatory factor analysis) in larger samples, including different patient populations and nationalities. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Zhang, Ming-cai; Lü, Si-zhe; Cheng, Ying-wu; Gu, Li-xu; Zhan, Hong-sheng; Shi, Yin-yu; Wang, Xiang; Huang, Shi-rong
2011-02-01
To study the effect of vertebra semi-dislocation on the stress distribution in the facet joints and intervertebral discs of patients with cervical syndrome using a three-dimensional finite element model. A patient with cervical spondylosis was randomly chosen: a 28-year-old male diagnosed with cervical vertebra semidislocation by dynamic and static palpation and X-ray, and scanned from C(1) to C(7) by CT at 0.75 mm slice thickness. Based on the CT data, software was used to construct a three-dimensional finite element model of the cervical vertebra semidislocation (C(4)-C(6)). Based on the model, virtual manipulation was used to correct the vertebra semidislocation, and the stress distribution was analyzed. The finite element analysis showed that the stress distribution of the C(5-6) facet joint and intervertebral disc changed after virtual manipulation. Vertebra semidislocation leads to an abnormal stress distribution in the facet joint and intervertebral disc.
NASA Technical Reports Server (NTRS)
Hooper, Steven J.
1989-01-01
Delamination is a common failure mode of laminated composite materials. This type of failure frequently occurs at the free edges of laminates, where singular interlaminar stresses are developed due to the difference in Poisson's ratios between adjacent plies. Typically the delaminations develop between 90 degree plies and adjacent angle plies. Edge delamination has been studied by several investigators using a variety of techniques. Recently, Chan and Ochoa applied the quasi-three-dimensional finite element model to the analysis of a laminate subject to bending, extension, and torsion. This problem is of particular significance relative to the structural integrity of composite helicopter rotors. The task undertaken was to incorporate Chan and Ochoa's formulation into the Raju Q3DG program. The resulting program is capable of modeling extension, bending, and torsional mechanical loadings as well as thermal and hygroscopic loadings. The addition of the torsional and bending loading capability will provide the capability to perform a delamination analysis of a general unsymmetric laminate containing four cracks, each of a different length. The solutions obtained using this program are evaluated by comparing them with solutions from a full three-dimensional finite element solution. This comparison facilitates the assessment of three-dimensional effects such as the warping constraint imposed by the load frame grips. It also facilitates the evaluation of the external load representation employed in the Q3D formulation. Finally, strain energy release rates computed from the three-dimensional results are compared with those predicted using the quasi-three-dimensional formulation.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.
1997-01-01
A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
Three-dimensional modeling of flexible pavements : research implementation plan.
DOT National Transportation Integrated Search
2006-02-14
Many of the asphalt pavement analysis programs are based on linear elastic models. A linear viscoelastic model would be superior to linear elastic models for analyzing the response of asphalt concrete pavements to loads. There is a need to devel...
A Study on Regional Frequency Analysis using Artificial Neural Network - the Sumjin River Basin
NASA Astrophysics Data System (ADS)
Jeong, C.; Ahn, J.; Ahn, H.; Heo, J. H.
2017-12-01
Regional frequency analysis compensates for the main shortcoming of at-site frequency analysis, a lack of sample size, by pooling data through the regional concept. The regional rainfall quantile depends on the identification of hydrologically homogeneous regions, hence regional classification based on the assumption of hydrological homogeneity is very important. For regional clustering of rainfall, multidimensional variables related to geographical features and meteorological characteristics are considered, such as mean annual precipitation, number of days with precipitation in a year, and average maximum daily precipitation in a month. The Self-Organizing Feature Map (SOM) method, an unsupervised-learning artificial neural network algorithm, solves N-dimensional, nonlinear problems and presents its results simply as a data visualization technique. In this study, cluster analysis was performed for the Sumjin River basin in South Korea based on the SOM method, using high-dimensional geographical features and meteorological factors as input data. To evaluate the homogeneity of the resulting regions, the L-moment based discordancy and heterogeneity measures were used. Rainfall quantiles were estimated with the index flood method, a standard approach in regional rainfall frequency analysis. The clustering analysis using the SOM method and the consequent variation in rainfall quantiles were analyzed. This research was supported by a grant(2017-MPSS31-001) from Supporting Technology Development Program for Disaster Management funded by Ministry of Public Safety and Security(MPSS) of the Korean government.
Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter
2016-12-26
A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. The method is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with a 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case, such that further simulations are not required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Lin, Guang
In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
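The two-stage idea can be sketched with a scalar toy problem (the limit-state function, surrogate bias and band width below are assumptions; the band half-width is chosen to exceed the surrogate's worst-case error, so the corrected estimate matches exhaustive evaluation of the "expensive" model):

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):            # the "expensive" model (a stand-in)
    return 4.0 - x

def surrogate(x):        # cheap, slightly biased approximation (|error| <= 0.05)
    return 4.0 - x + 0.05 * np.sin(5 * x)

x = rng.standard_normal(200_000)          # samples of the uncertain input
g = surrogate(x)                          # stage 1: surrogate everywhere

# Stage 2: re-run the expensive model only near the failure boundary,
# where the surrogate's sign could be wrong.
band = np.abs(g) < 0.2
g[band] = model(x[band])

p_fail = np.mean(g < 0.0)                 # estimate of P[model(x) < 0]
```

Because only the thin band near `g = 0` is re-evaluated, the expensive model is called on a tiny fraction of the samples while the final estimate is unbiased with respect to it.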
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
Simulation of springback and microstructural analysis of dual phase steels
NASA Astrophysics Data System (ADS)
Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard
2013-12-01
With increasing demand for weight reduction and better crashworthiness abilities in car development, advanced high strength Dual Phase (DP) steels have been progressively used when making automotive parts. The higher strength steels exhibit higher springback and lower dimensional accuracy after stamping. This has necessitated the use of simulation of each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may provide more accuracy to stamping simulations. This work can be divided basically into two parts: first modelling a standard channel forming process; second modelling the micro-structure of the process. The standard top hat channel forming process, benchmark NUMISHEET'93, is used for investigating springback effect of WISCO Dual Phase steels. The second part of this work includes the finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during manufacturing stage based on the material's microstructure.
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for Hyperspectral Image Classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
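The core trick in SRDA, replacing a dense eigen-problem with regularized least squares onto class-structured responses, can be sketched in a few lines of numpy. This is a simplified illustration (indicator responses and ridge regression), not the authors' exact formulation:

```python
import numpy as np

def srda_embedding(X, labels, alpha=0.1):
    """Simplified SRDA-style sketch: learn a discriminant embedding by
    ridge regression onto class-indicator responses, avoiding the
    eigen-decomposition of dense matrices."""
    Xc = X - X.mean(axis=0)
    classes = np.unique(labels)
    Y = np.stack([(labels == c).astype(float) for c in classes], axis=1)
    Y = Y - Y.mean(axis=0)               # drop the constant response
    A = Xc.T @ Xc + alpha * np.eye(X.shape[1])
    W = np.linalg.solve(A, Xc.T @ Y)     # one linear solve per response
    return W

# Two well-separated synthetic classes in 20 dimensions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, (50, 20)) for m in (0.0, 3.0)])
labels = np.repeat([0, 1], 50)
Z = (X - X.mean(axis=0)) @ srda_embedding(X, labels)
```

Because only linear solves against a regularized Gram matrix are needed, the cost scales much better with feature count than dense eigen-decomposition.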
Current Status of 3-Dimensional Speckle Tracking Echocardiography: A Review from Our Experiences
Ishizu, Tomoko; Aonuma, Kazutaka
2014-01-01
Cardiac function analysis is the main focus of echocardiography. Left ventricular ejection fraction (LVEF) has been the clinical standard; however, LVEF alone is not sufficient to investigate myocardial function. For the last decade, speckle tracking echocardiography (STE) has been the novel clinical tool for regional and global myocardial function analysis. However, 2-dimensional imaging methods have limitations in assessing 3-dimensional (3D) cardiac motion. In contrast, 3D echocardiography has also been widely used, in particular, to measure LV volumes and assess valvular diseases. Joining the technology bandwagon, 3D-STE was introduced in 2008. Experimental studies and clinical investigations revealed the reliability and feasibility of 3D-STE-derived data. In addition, 3D-STE provides a novel deformation parameter, area change ratio, which has the potential for more accurate assessment of overall and regional myocardial function. In this review, we introduce the features of the methodology, validation, and clinical application of 3D-STE based on our experiences over 7 years. PMID:25031794
Dynamic analysis of geometrically non-linear three-dimensional beams under moving mass
NASA Astrophysics Data System (ADS)
Zupan, E.; Zupan, D.
2018-01-01
In this paper, we present a coupled dynamic analysis of a moving particle on a deformable three-dimensional frame. The presented numerical model is capable of considering arbitrary curved and twisted initial geometry of the beam and takes into account geometric non-linearity of the structure. Coupled with dynamic equations of the structure, the equations of moving particle are solved. The moving particle represents the dynamic load and varies the mass distribution of the structure and at the same time its path is adapting due to deformability of the structure. A coupled geometrically non-linear behaviour of beam and particle is studied. The equation of motion of the particle is added to the system of the beam dynamic equations and an additional unknown representing the coordinate of the curvilinear path of the particle is introduced. The specially designed finite-element formulation of the three-dimensional beam based on the weak form of consistency conditions is employed where only the boundary conditions are affected by the contact forces.
Three-dimensional baroclinic instability of a Hadley cell for small Richardson number
NASA Technical Reports Server (NTRS)
Antar, B. N.; Fowlis, W. W.
1985-01-01
A three-dimensional, linear stability analysis of a baroclinic flow for Richardson number, Ri, of order unity is presented. The model considered is a thin, horizontal, rotating fluid layer which is subjected to horizontal and vertical temperature gradients. The basic state is a Hadley cell which is a solution of the complete set of governing, nonlinear equations and contains both Ekman and thermal boundary layers adjacent to the rigid boundaries; it is given in closed form. The stability analysis is also based on the complete set of equations, and perturbations possessing zonal, meridional, and vertical structure were considered. Numerical methods were developed for the stability problem, which results in a stiff, eighth-order, ordinary differential eigenvalue problem. The previous work on three-dimensional baroclinic instability for small Ri was extended to a more realistic model involving the Prandtl number, sigma, and the Ekman number, E, and to finite growth rates and a wider range of the zonal wavenumber.
NASA Astrophysics Data System (ADS)
Wang, Xinlong; Qin, Chao; Wang, Enbo; Hu, Changwen; Xu, Lin
2004-07-01
A novel metal-organic coordination polymer, [Zn(PDB)(H2O)2]4n (H2PDB = pyridine-2,5-dicarboxylic acid), has been hydrothermally synthesized and characterized by elemental analysis, IR, TG and single-crystal X-ray diffraction. Colorless crystals crystallized in the triclinic system, space group P-1, a = 7.0562(14) Å, b = 7.38526(15) Å, c = 18.4611(4) Å, α = 90.01(3)°, β = 96.98(3)°, γ = 115.67(3)°, V = 859.1(3) Å³, Z = 1 and R = 0.0334. The structure of the compound exhibits a novel three-dimensional supramolecular network, mainly based on multipoint hydrogen bonds originating from within and outside of a large 24-membered ring. Interestingly, the three-dimensional network contains one-dimensional parallelogrammic channels in which coordinated water molecules point into the channel wall.
NASA Astrophysics Data System (ADS)
Yongzhi, WANG; Hui, WANG; Lixia, LIAO; Dongsen, LI
2017-02-01
In order to analyse the geological characteristics of salt rock and the stability of salt caverns, rough three-dimensional (3D) models of the salt rock stratum and of the salt caverns in the study areas are built by 3D GIS spatial modeling techniques. During implementation, multi-source data, such as basic geographic data, DEM, geological plane maps, geological section maps, engineering geological data, and sonar data, are used. In this study, 3D spatial analysis and calculation methods, such as 3D GIS intersection detection in three-dimensional space, Boolean operations between three-dimensional space entities, and three-dimensional space grid discretization, are used to build 3D models of the wall rock of salt caverns. Our methods can provide effective calculation models for numerical simulation and analysis of the creep characteristics of wall rock in salt caverns.
A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.
Nagaoka, Tomoaki; Watanabe, Soichi
2010-01-01
Numerical simulations with numerical human models using the finite-difference time-domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation is time-consuming. We focus, therefore, on general-purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using the Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with a conventional CPU, even for a naive GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
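The leapfrog stencil that such a GPU port parallelizes is easiest to see in one dimension. Below is a minimal 1-D FDTD (Yee scheme) sketch in normalized units; the paper's 3-D CUDA version applies the same staggered-grid update, one grid cell per GPU thread. The grid sizes and Gaussian source are illustrative choices, not taken from the paper:

```python
import numpy as np

nx, nt = 200, 400
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell

for n in range(nt):
    hy += np.diff(ez)                        # H update from curl of E
    ez[1:-1] += np.diff(hy)                  # E update from curl of H
    ez[nx // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)  # soft Gaussian source
```

Each update touches only nearest neighbours, which is why the method maps so well onto thousands of independent GPU threads.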
Blöchliger, Nicolas; Caflisch, Amedeo; Vitalis, Andreas
2015-11-10
Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction.
Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich
2016-05-17
Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
The classification of frontal sinus pneumatization patterns by CT-based volumetry.
Yüksel Aslier, Nesibe Gül; Karabay, Nuri; Zeybek, Gülşah; Keskinoğlu, Pembe; Kiray, Amaç; Sütay, Semih; Ecevit, Mustafa Cenk
2016-10-01
We aimed to define the classification of frontal sinus pneumatization patterns according to three-dimensional volume measurements. Datasets of 148 sides of 74 dry skulls were generated by computerized tomography-based volumetry to measure frontal sinus volumes. The cutoff points for frontal sinus hypoplasia and hyperplasia were tested by ROC curve analysis and the validity of the diagnostic points was measured. The overall frequencies were 4.1, 14.2, 37.2 and 44.5 % for frontal sinus aplasia, hypoplasia, medium size and hyperplasia, respectively. The aplasia was bilateral in all three skulls. Hypoplasia was seen 76 % at the right side and hyperplasia was seen 56 % at the left side. The cutoff points for diagnosing frontal sinus hypoplasia and hyperplasia were 1131.25 mm³ (95.2 % sensitivity and 100 % specificity) and 3328.50 mm³ (88 % sensitivity and 86 % specificity), respectively. The findings provided in the present study, which define frontal sinus pneumatization patterns by CT-based volumetry, proved that the two opposite sides of the frontal sinuses are asymmetric and that three-dimensional classification should be developed by CT-based volumetry, because two-dimensional evaluations lack depth measurement.
A Dimensionally Reduced Clustering Methodology for Heterogeneous Occupational Medicine Data Mining.
Saâdaoui, Foued; Bertrand, Pierre R; Boudet, Gil; Rouffiac, Karine; Dutheil, Frédéric; Chamoux, Alain
2015-10-01
Clustering is a set of statistical learning techniques aimed at partitioning heterogeneous data into homogeneous groups called clusters. There are several fields in which clustering has been successfully applied, such as medicine, biology, finance, economics, etc. In this paper, we introduce the notion of clustering in multifactorial data analysis problems. A case study is conducted for an occupational medicine problem with the purpose of analyzing patterns in a population of 813 individuals. To reduce the data set dimensionality, we base our approach on Principal Component Analysis (PCA), which is the statistical tool most commonly used in factorial analysis. However, problems in nature, especially in medicine, are often based on heterogeneous qualitative-quantitative measurements, whereas PCA only processes quantitative ones. Besides, qualitative data are originally unobservable quantitative responses that are usually binary-coded. Hence, we propose a new set of strategies allowing quantitative and qualitative data to be handled simultaneously. The principle of this approach is to perform a projection of the qualitative variables on the subspaces spanned by quantitative ones. Subsequently, an optimal model is allocated to the resulting PCA-regressed subspaces.
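The projection step described above, regressing binary-coded qualitative variables onto the subspace spanned by the quantitative ones before running PCA, can be sketched as follows. This is an illustrative reading of the strategy in the abstract, with synthetic data; the function name and data shapes are hypothetical:

```python
import numpy as np

def pca_on_regressed_qualitative(Xq, Xb):
    """Project binary-coded qualitative variables Xb onto the subspace
    spanned by the quantitative variables Xq, then run PCA on the
    combined matrix. Simplified sketch of the PCA-regression strategy."""
    Xq = Xq - Xq.mean(axis=0)
    Xb = Xb - Xb.mean(axis=0)
    B, *_ = np.linalg.lstsq(Xq, Xb, rcond=None)
    X = np.hstack([Xq, Xq @ B])          # qualitative part now lives in span(Xq)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U * s, s**2 / (len(X) - 1)    # PC scores and component variances

rng = np.random.default_rng(3)
Xq = rng.normal(size=(100, 4))                                  # quantitative
Xb = (Xq[:, :2].sum(axis=1, keepdims=True) > 0).astype(float)   # binary-coded
scores, variances = pca_on_regressed_qualitative(Xq, Xb)
```

The resulting PC scores can then be fed to any standard clustering algorithm, which is the pipeline the case study applies to the 813-individual population.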
Situation exploration in a persistent surveillance system with multidimensional data
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.
2013-03-01
There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application is developed for rapid assessment and evaluation of different hypotheses based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and it is demonstrated how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally in this paper, a technique for rapid search of tagged information contingent on ranking and confidence is explained for analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated to each input variable and we compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
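The Monte-Carlo sensitivity experiment described above, simultaneously varying discharge, gradients and roughness and examining an output of interest, can be mimicked with a toy stand-in for the hydraulic solver. Manning's equation for a wide channel replaces HEC-RAS/LISFLOOD-FP/FLO-2d here, and the parameter ranges are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy hydraulic model: normal depth of a wide channel from Manning's
# equation (per unit width), h = (Q * n / sqrt(S)) ** (3/5).
def normal_depth(Q, S, n):
    return (Q * n / np.sqrt(S)) ** 0.6

N = 20_000
Q = rng.uniform(50.0, 150.0, N)     # discharge
S = rng.uniform(1e-4, 1e-3, N)      # longitudinal gradient
n = rng.uniform(0.02, 0.05, N)      # roughness coefficient
h = normal_depth(Q, S, n)

# Crude first-order sensitivity of depth to each input
for name, x in (("Q", Q), ("S", S), ("n", n)):
    r = np.corrcoef(x, h)[0, 1]
    print(f"{name}: r^2 = {r * r:.2f}")
```

Comparing each input's share of output variance against the overall spread is exactly the kind of uncertainty attribution the benchmark analysis performs, albeit with full hydraulic models.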
Computed tomography-based finite element analysis to assess fracture risk and osteoporosis treatment
Imai, Kazuhiro
2015-01-01
Finite element analysis (FEA) is a computational technique for structural stress analysis developed in engineering mechanics. FEA has been used to investigate the structural behavior of human bones over the past 40 years. As faster computers became available, improved FEA based on 3-dimensional computed tomography (CT) was developed. This CT-based finite element analysis (CT/FEA) has provided clinicians with useful data. In this review, the mechanism of CT/FEA, validation studies of CT/FEA to evaluate accuracy and reliability in human bones, and clinical application studies to assess fracture risk and effects of osteoporosis medication are overviewed. PMID:26309819
Application and Analysis on Graphene Materials
NASA Astrophysics Data System (ADS)
Li, Guogang; Qi, Jiaojiao
2018-01-01
Graphene is a two-dimensional honeycomb lattice built from carbon six-membered rings; it can be wrapped into zero-dimensional fullerenes, rolled into one-dimensional carbon nanotubes, or stacked into three-dimensional graphite. This structure gives it not only excellent electrical and mechanical properties, but also makes it suitable as a reinforcement in metal matrix composites, with applications in catalyst carriers, energy storage and environmental protection. It has become a hot research topic in recent years. Based on existing research at home and abroad, this paper focuses on the importance of the choice of graphene dispersion method for improving the mechanical properties of graphene materials, and summarizes the existing problems of graphene-reinforced metal matrix composites.
Unsteady three-dimensional thermal field prediction in turbine blades using nonlinear BEM
NASA Technical Reports Server (NTRS)
Martin, Thomas J.; Dulikravich, George S.
1993-01-01
A time-and-space accurate and computationally efficient fully three dimensional unsteady temperature field analysis computer code has been developed for truly arbitrary configurations. It uses boundary element method (BEM) formulation based on an unsteady Green's function approach, multi-point Gaussian quadrature spatial integration on each panel, and a highly clustered time-step integration. The code accepts either temperatures or heat fluxes as boundary conditions that can vary in time on a point-by-point basis. Comparisons of the BEM numerical results and known analytical unsteady results for simple shapes demonstrate very high accuracy and reliability of the algorithm. An example of computed three dimensional temperature and heat flux fields in a realistically shaped internally cooled turbine blade is also discussed.
Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold
NASA Astrophysics Data System (ADS)
Anosov, O. L.; Butkovskii, O. Ya.; Kadtke, J.; Kravtsov, Yu. A.; Protopopescu, V.
1997-05-01
We report preliminary results on describing the dependence of heart rhythm variability on the stress level by using qualitative, low-dimensional models. The reconstruction of macroscopic heart models yielding cardio-cycle (RR-interval) durations was based on actual clinical data. Our results show that the coefficients of the low-dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy, indirect method for determining the anaerobic threshold. This method could replace costly and invasive traditional methods such as gas analysis and blood tests.
NASA Astrophysics Data System (ADS)
Mohamed, Omar Ahmed; Hasan Masood, Syed; Lal Bhowmik, Jahar
2018-02-01
In the additive manufacturing (AM) market, the question is raised by industry and AM users on how reproducible and repeatable the fused deposition modeling (FDM) process is in providing good dimensional accuracy. This paper aims to investigate and evaluate the repeatability and reproducibility of the FDM process through a systematic approach to answer this frequently asked question. A case study based on the statistical gage repeatability and reproducibility (gage R&R) technique is proposed to investigate the dimensional variations in the printed parts of the FDM process. After running the simulation and analysis of the data, the FDM process capability is evaluated, which would help the industry for better understanding the performance of FDM technology.
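The gage R&R decomposition the abstract relies on separates measurement variation into repeatability (same part, same operator) and reproducibility (between operators). A simplified moment-based sketch on a simulated crossed design; a full study would use ANOVA mean squares, and the part/operator numbers here are invented:

```python
import numpy as np

def gage_rr(data):
    """data[part, operator, repeat] -> (repeatability, reproducibility)
    variance estimates for a crossed gage R&R study (simplified sketch)."""
    repeatability = data.var(axis=2, ddof=1).mean()  # within part/operator
    op_means = data.mean(axis=(0, 2))                # per-operator averages
    reproducibility = op_means.var(ddof=1)           # between operators
    return repeatability, reproducibility

# Simulated study: 10 parts, 3 operators with systematic offsets, 4 repeats
rng = np.random.default_rng(5)
parts = rng.normal(25.0, 0.5, size=(10, 1, 1))          # true part dimensions
op_bias = np.array([0.0, 0.3, -0.3]).reshape(1, 3, 1)   # operator offsets
data = parts + op_bias + rng.normal(0.0, 0.05, size=(10, 3, 4))
repeatability, reproducibility = gage_rr(data)
```

Comparing these two components against the part-to-part variation tells you whether observed dimensional scatter comes from the FDM process itself or from the measurement system.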
SEURAT: visual analytics for the integrated analysis of microarray data.
Gribov, Alexander; Sill, Martin; Lück, Sonja; Rücker, Frank; Döhner, Konstanze; Bullinger, Lars; Benner, Axel; Unwin, Antony
2010-06-03
In translational cancer research, gene expression data is collected together with clinical data and genomic data arising from other chip-based high-throughput technologies. Software tools for the joint analysis of such high-dimensional data sets together with clinical data are required. We have developed an open source software tool which provides interactive visualization capability for the integrated analysis of high-dimensional gene expression data together with associated clinical data, array CGH data and SNP array data. The different data types are organized by a comprehensive data manager. Interactive tools are provided for all graphics: heatmaps, dendrograms, barcharts, histograms, eventcharts and a chromosome browser, which displays genetic variations along the genome. All graphics are dynamic and fully linked so that any object selected in a graphic will be highlighted in all other graphics. For exploratory data analysis the software provides unsupervised data analytics like clustering, seriation algorithms and biclustering algorithms. The SEURAT software meets the growing needs of researchers to perform joint analysis of gene expression, genomic and clinical data.
Generic Hypersonic Inlet Module Analysis
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.
2004-01-01
A computational study associated with an internal inlet drag analysis was performed for a generic hypersonic inlet module. The purpose of this study was to determine the feasibility of computing the internal drag force for a generic scramjet engine module using computational methods. The computational study consisted of obtaining two-dimensional (2D) and three-dimensional (3D) computational fluid dynamics (CFD) solutions using the Euler and parabolized Navier-Stokes (PNS) equations. The solution accuracy was assessed by comparisons with experimental pitot pressure data. The CFD analysis indicates that the 3D PNS solutions show the best agreement with experimental pitot pressure data. The internal inlet drag analysis consisted of obtaining drag force predictions based on experimental data and 3D CFD solutions. A comparative assessment of each of the drag prediction methods is made and the sensitivity of CFD drag values to computational procedures is documented. The analysis indicates that the CFD drag predictions are highly sensitive to the computational procedure used.
Anisotropy of MHD Turbulence at Low Magnetic Reynolds Number
NASA Technical Reports Server (NTRS)
Zikanov, O.; Vorobev, A.; Thess, A.; Davidson, P. A.; Knaepen, B.
2004-01-01
Turbulent fluctuations in MHD flows are known to become dimensionally anisotropic under the action of a sufficiently strong magnetic field. We consider the technologically relevant case of low magnetic Reynolds number and apply the method of DNS of forced flow in a periodic box to generate velocity fields. The analysis based on different anisotropy characteristics shows that the dimensional anisotropy is virtually scale-independent. We also find that, except for the case of very strong magnetic field, the flow is componentally isotropic. Its kinetic energy is practically uniformly distributed among the velocity components.
A functional video-based anthropometric measuring system
NASA Technical Reports Server (NTRS)
Nixon, J. H.; Cater, J. P.
1982-01-01
A high-speed anthropometric three-dimensional measurement system using the Selcom Selspot motion tracking instrument for visual data acquisition is discussed. A three-dimensional scanning system was created which collects video, audio, and performance data on a single standard video cassette recorder. Recording rates of 1 megabit per second for periods of up to two hours are possible with the system design. A high-speed, off-the-shelf motion analysis system for collecting optical information was used. The video recording adapter (VRA) is interfaced to the Selspot data acquisition system.
Universal Profile of the Vortex Condensate in Two-Dimensional Turbulence
NASA Astrophysics Data System (ADS)
Laurie, Jason; Boffetta, Guido; Falkovich, Gregory; Kolokolov, Igor; Lebedev, Vladimir
2014-12-01
An inverse turbulent cascade in a restricted two-dimensional periodic domain creates a condensate—a pair of coherent system-size vortices. We perform extensive numerical simulations of this system and carry out theoretical analysis based on momentum and energy exchanges between the turbulence and the vortices. We show that the vortices have a universal internal structure independent of the type of small-scale dissipation, small-scale forcing, and boundary conditions. The theory predicts not only the vortex inner region profile, but also the amplitude, which both perfectly agree with the numerical data.
NASA Astrophysics Data System (ADS)
Gulanyan, É. Kh; Mikaélyan, A. L.; Molchanova, L. V.; Sidorov, Vladimir A.; Fedorov, I. V.
1989-08-01
An analysis was made of the results of multichannel recording of one-dimensional holograms in a reusable photothermoplastic carrier, particularly in the form of a disk. It was found experimentally that, in development and erasure of optical data recorded in a photothermoplastic carrier, one could use not only heating by an electric current, but also heating by YAG:Nd laser radiation (λ = 1.06 μm) or semiconductor laser radiation (λ = 0.82 μm).
Method development of damage detection in asymmetric buildings
NASA Astrophysics Data System (ADS)
Wang, Yi; Thambiratnam, David P.; Chan, Tommy H. T.; Nguyen, Andy
2018-01-01
Aesthetics and functionality requirements have caused most buildings to be asymmetric in recent times. Such buildings exhibit complex vibration characteristics under dynamic loads, as there is coupling between the lateral and torsional components of vibration, and are referred to as torsionally coupled buildings. These buildings require three-dimensional modelling and analysis. In spite of much recent research and some successful applications of vibration-based damage detection methods to civil structures, the application to asymmetric buildings has been a challenging task for structural engineers. There has been relatively little research on detecting and locating damage specific to torsionally coupled asymmetric buildings. This paper aims to compare the difference in vibration behaviour between symmetric and asymmetric buildings and then use the vibration characteristics for predicting damage in them. The need for developing a special method to detect damage in asymmetric buildings thus becomes evident. Towards this end, this paper modifies the traditional modal strain energy based damage index by decomposing the mode shapes into their lateral and vertical components to form component-specific damage indices. The improved approach is then developed by combining the modified strain energy based damage indices with the modal flexibility method, which was modified to suit three-dimensional structures, to form a new damage indicator. The procedure is illustrated through numerical studies conducted on three-dimensional five-story symmetric and asymmetric frame structures with the same layout, after validating the modelling techniques through experimental testing of a laboratory-scale asymmetric building model. Vibration parameters obtained from finite element analysis of the intact and damaged building models are then applied in the proposed algorithms for detecting and locating the single and multiple damages in these buildings.
The results obtained from a number of different damage scenarios confirm the feasibility of the proposed vibration based damage detection method for three dimensional asymmetric buildings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.
1987-04-01
The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.
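Model family (1) above, one-dimensional analytical solutions of the convection-dispersion equation, is classically represented by the Ogata-Banks solution for a constant-concentration inlet. A minimal sketch (the parameter values in the demo are arbitrary illustrative choices):

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Classical 1-D analytical solution of the deterministic
    convection-dispersion equation (Ogata-Banks) for a constant-
    concentration inlet at x = 0:
    c/c0 = 0.5 * [erfc((x-vt)/(2*sqrt(Dt))) + exp(vx/D)*erfc((x+vt)/(2*sqrt(Dt)))]"""
    denom = 2.0 * sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / denom)
                       + exp(v * x / D) * erfc((x + v * t) / denom))

# Concentration profile at t = 10 for velocity v = 1 and dispersion D = 0.5
profile = [ogata_banks(x, 10.0, 1.0, 0.5) for x in (5.0, 10.0, 15.0)]
```

Solutions like this served as the benchmark against which the numerical and stochastic approaches (2)-(5) were compared.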
Estimating the functional dimensionality of neural representations.
Ahlheim, Christiane; Love, Bradley C
2018-06-07
Recent advances in multivariate fMRI analysis stress the importance of information inherent to voxel patterns. Key to interpreting these patterns is estimating the underlying dimensionality of neural representations. Dimensions may correspond to psychological dimensions, such as length and orientation, or involve other coding schemes. Unfortunately, the noise structure of fMRI data inflates dimensionality estimates and thus makes it difficult to assess the true underlying dimensionality of a pattern. To address this challenge, we developed a novel approach to identify brain regions that carry reliable task-modulated signal and to derive an estimate of the signal's functional dimensionality. We combined singular value decomposition with cross-validation to find the best low-dimensional projection of a pattern of voxel responses at the single-subject level. Goodness of the low-dimensional reconstruction is measured as the Pearson correlation with a test set, which allows testing for significance of the low-dimensional reconstruction across participants. Using hierarchical Bayesian modeling, we derive the best estimate and associated uncertainty of the underlying dimensionality across participants. We validated our method on simulated data of varying underlying dimensionality, showing that the recovered dimensionalities closely match the true dimensionalities. We then applied our method to three published fMRI data sets, all involving processing of visual stimuli. The results highlight three possible applications of estimating the functional dimensionality of neural data. Firstly, it can aid evaluation of model-based analyses by revealing which areas express reliable, task-modulated signal that could be missed by specific models. Secondly, it can reveal functional differences across brain regions. Thirdly, knowing the functional dimensionality allows assessing task-related differences in the complexity of neural patterns. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
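The core SVD-plus-cross-validation step can be sketched on synthetic data. The sizes, noise level, and two-run split below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel pattern": 8 conditions x 50 voxels with a true rank of 3,
# measured in two noisy runs (all sizes and the noise level are assumptions).
true_dim = 3
signal = rng.standard_normal((8, true_dim)) @ rng.standard_normal((true_dim, 50))
train = signal + 0.3 * rng.standard_normal(signal.shape)
test = signal + 0.3 * rng.standard_normal(signal.shape)

def estimate_dimensionality(train, test):
    """Pick the rank whose truncated-SVD reconstruction of the training run
    best correlates (Pearson) with the held-out test run."""
    U, s, Vt = np.linalg.svd(train, full_matrices=False)
    scores = []
    for k in range(1, len(s) + 1):
        recon = U[:, :k] * s[:k] @ Vt[:k, :]   # rank-k reconstruction
        scores.append(np.corrcoef(recon.ravel(), test.ravel())[0, 1])
    return int(np.argmax(scores)) + 1, scores

k_hat, scores = estimate_dimensionality(train, test)
print("estimated functional dimensionality:", k_hat)
```

Noise-only components hurt out-of-sample correlation, so the cross-validated score peaks near the true signal rank rather than at full rank.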
A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.
1993-01-01
This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).
Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph
2016-07-01
Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, and NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.
Flyby Error Analysis Based on Contour Plots for the Cassini Tour
NASA Technical Reports Server (NTRS)
Stumpf, P. W.; Gist, E. M.; Goodson, T. D.; Hahn, Y.; Wagner, S. V.; Williams, P. N.
2008-01-01
The maneuver cancellation analysis consists of cost contour plots employed by the Cassini maneuver team. The plots are two-dimensional linear representations of a larger six-dimensional solution to a multi-maneuver, multi-encounter mission at Saturn. By using contours plotted against the B·R and B·T components, it is possible to view the delta-V effects for various encounter positions in the B-plane. The plot is used in operations to help determine whether the Approach Maneuver (ensuing encounter minus three days) and/or the Cleanup Maneuver (ensuing encounter plus three days) can be cancelled, and it also serves as a linear check of an integrated solution.
Derivative component analysis for mass spectral serum proteomic profiles.
Han, Henry
2014-01-01
As a promising way to transform medicine, mass spectrometry based proteomics technologies have made great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods that are able to capture essential data behaviors to achieve clinical level disease diagnosis. Moreover, the field faces a challenge from data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes the identified biomarker patterns to lose repeatability and prevents their real clinical use. In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, DCA examines input proteomics data in a multi-resolution manner by seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker via integrating it with support vector machines to tackle the reproducibility issue, besides comparing it with state-of-the-art peers. Our results show that high-dimensional proteomics data are actually linearly separable under the proposed DCA. As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in subtle data behavior discovery, but also offers an effective route to overcoming the reproducibility problem of proteomics data, providing new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical level diagnostic performance reproducible across different proteomic data sets, which is more robust and systematic than existing biomarker discovery based diagnosis.
Our findings demonstrate the feasibility and power of the proposed DCA-based profile biomarker diagnosis in achieving high sensitivity and overcoming the data reproducibility issue in serum proteomics. Furthermore, derivative component analysis suggests that gleaning subtle data characteristics and de-noising are essential for separating true signals from red herrings in high-dimensional proteomic profiles, and can be more important than conventional feature selection or dimension reduction. In particular, the profile biomarker diagnosis can be generalized to other omics data, owing to DCA's nature as a generic data analysis method.
Kok, Annette M; Nguyen, V Lai; Speelman, Lambert; Brands, Peter J; Schurink, Geert-Willem H; van de Vosse, Frans N; Lopata, Richard G P
2015-05-01
Abdominal aortic aneurysms (AAAs) are local dilations that can lead to a fatal hemorrhage when ruptured. Wall stress analysis of AAAs is a novel tool that has shown high potential to improve risk stratification. Currently, wall stress analysis of AAAs is based on computed tomography (CT) and magnetic resonance imaging; however, three-dimensional (3D) ultrasound (US) has great advantages over CT and magnetic resonance imaging in terms of cost, speed, and lack of radiation. In this study, the feasibility of 3D US as input for wall stress analysis is investigated. Second, 3D US-based wall stress analysis was compared with CT-based results. The 3D US and CT data were acquired in 12 patients (diameter, 35-90 mm). US data were segmented manually and compared with automatically acquired CT geometries by calculating the similarity index and Hausdorff distance. Wall stresses were simulated at P = 140 mm Hg and compared between both modalities. The similarity index of US vs CT was 0.75 to 0.91 (n = 12), with a median Hausdorff distance ranging from 4.8 to 13.9 mm, with the higher values found at the proximal and distal sides of the AAA. Wall stresses were in accordance with the literature, and a good agreement was found between US- and CT-based median and interquartile stresses, which was confirmed by Bland-Altman and regression analysis (n = 8). Wall stresses based on US were typically higher (+23%), caused by geometric irregularities due to the registration of several 3D volumes and manual segmentation. In future work, an automated US registration and segmentation approach is the essential point of improvement before pursuing large-scale patient studies. This study is a first step toward US-based wall stress analysis, which would be the modality of choice to monitor wall stress development over time because no ionizing radiation or contrast material is involved. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Jesse, Stephen; Kalinin, Sergei V
2009-02-25
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
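The variance-ranking step at the heart of this approach can be sketched on a synthetic spectroscopic-imaging cube. The grid size, band shapes, and noise level are assumed for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectroscopic-imaging cube: a 16x16 spatial grid with a 64-point spectrum
# per pixel, built from two spectral components (all values are illustrative).
nx, ny, ns = 16, 16, 64
w = np.linspace(0.0, 1.0, ns)
spec1 = np.exp(-((w - 0.3) / 0.05) ** 2)       # two distinct spectral responses
spec2 = np.exp(-((w - 0.7) / 0.05) ** 2)
map1 = rng.random((nx, ny))                    # spatial weights of each response
map2 = rng.random((nx, ny))
cube = (map1[..., None] * spec1 + map2[..., None] * spec2
        + 0.01 * rng.standard_normal((nx, ny, ns)))

# PCA: unfold to (pixels x spectral channels), center, then SVD.
X = cube.reshape(-1, ns)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
variance = s ** 2 / np.sum(s ** 2)
comp_maps = (Xc @ Vt[:2].T).reshape(nx, ny, 2)  # spatial maps of the components
print("variance captured by the first two components:", variance[:2].sum())
```

The two planted responses dominate the variance ranking, and the per-pixel scores fold back into the component maps mentioned in the abstract.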
Resolvent analysis of shear flows using One-Way Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Rigas, Georgios; Schmidt, Oliver; Towne, Aaron; Colonius, Tim
2017-11-01
For three-dimensional flows, questions of stability, receptivity, secondary flows, and coherent structures require the solution of large partial-derivative eigenvalue problems. Reduced-order approximations are thus required for engineering prediction since these problems are often computationally intractable or prohibitively expensive. For spatially slowly evolving flows, such as jets and boundary layers, the One-Way Navier-Stokes (OWNS) equations permit a fast spatial marching procedure that results in a huge reduction in computational cost. Here, an adjoint-based optimization framework is proposed and demonstrated for calculating optimal boundary conditions and optimal volumetric forcing. The corresponding optimal response modes are validated against modes obtained in terms of global resolvent analysis. For laminar base flows, the optimal modes reveal modal and non-modal transition mechanisms. For turbulent base flows, they predict the evolution of coherent structures in a statistical sense. Results from the application of the method to three-dimensional laminar wall-bounded flows and turbulent jets will be presented. This research was supported by the Office of Naval Research (N00014-16-1-2445) and Boeing Company (CT-BA-GTA-1).
Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research.
Ercius, Peter; Alaidi, Osama; Rames, Matthew J; Ren, Gang
2015-10-14
Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantitative analysis of three-dimensional biological cells using interferometric microscopy
NASA Astrophysics Data System (ADS)
Shaked, Natan T.; Wax, Adam
2011-06-01
Live biological cells are three-dimensional microscopic objects that constantly adjust their sizes, shapes and other biophysical features. Wide-field digital interferometry (WFDI) is a holographic technique that is able to record the complex wavefront of the light which has interacted with in-vitro cells in a single camera exposure, where no exogenous contrast agents are required. However, simple quasi-three-dimensional holographic visualization of the cell phase profiles need not be the end of the process. Quantitative analysis should permit extraction of numerical parameters which are useful for cytology or medical diagnosis. Using a transmission-mode setup, the phase profile represents the product of the integral refractive index and the thickness of the sample. These coupled variables may not be separable when acquiring the phase profiles of dynamic cells. Many morphological parameters which are useful for cell biologists are based on the cell thickness profile rather than on its phase profile. We first overview methods to decouple the cell thickness and its refractive index using the WFDI-based phase profile. Then, we present a whole-cell-imaging approach which is able to extract useful numerical parameters on the cells even in cases where decoupling of cell thickness and refractive index is not possible or desired.
A weighted U-statistic for genetic association analyses of sequencing data.
Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing
2014-12-01
With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. Association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption about the underlying disease model or phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained performance comparable to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low density lipoprotein cholesterol. © 2014 Wiley Periodicals, Inc.
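The weighted-U idea can be sketched generically: score pairs of subjects by a genotype-based weight times a rank-based phenotype kernel, then assess significance by permutation. This is a hedged toy construction with assumed weights, kernel, and effect sizes, not the published WU-SEQ statistic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: n subjects, p rare variants, a quantitative phenotype. The weights
# and kernel below are generic illustrative choices, not WU-SEQ's definitions.
n, p = 60, 20
G = rng.binomial(2, 0.05, size=(n, p))         # rare-variant genotype counts
beta = np.zeros(p)
beta[:3] = 1.0                                 # 3 causal variants (assumed)
y = G @ beta + rng.standard_normal(n)

w = (G @ G.T).astype(float)                    # pairwise shared-allele weights
r = y.argsort().argsort() / (n - 1)            # phenotype ranks in [0, 1]
K = 1.0 - np.abs(r[:, None] - r[None, :])      # similar phenotypes -> large K

mask = ~np.eye(n, dtype=bool)                  # drop i == j pairs
U = (w * K)[mask].mean()                       # the weighted U-statistic

# Permutation p-value: shuffling phenotypes breaks genotype-phenotype links.
null = []
for _ in range(200):
    rp = rng.permutation(r)
    null.append((w * (1.0 - np.abs(rp[:, None] - rp[None, :])))[mask].mean())
pval = (np.sum(np.array(null) >= U) + 1) / (len(null) + 1)
print("U =", round(U, 3), " permutation p =", round(pval, 3))
```

Because the kernel acts on ranks, the construction makes no distributional assumption on the phenotype, echoing the nonparametric motivation in the abstract.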
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Ur Rehman, Fiaz; Nadeem, Sohail; Ur Rehman, Hafeez; Ul Haq, Rizwan
2018-03-01
In the present paper a theoretical investigation is performed to analyze the enhancement of heat and mass transport in a water-based nanofluid for three dimensional (3D) MHD stagnation-point flow caused by an exponentially stretched surface. Water is considered as the base fluid, and three types of nanoparticles are considered: CuO (copper oxide), Fe3O4 (magnetite), and Al2O3 (alumina). Invoking the boundary layer approximation and a suitable similarity transformation, the three dimensional nonlinear equations describing the problem are transformed into nonlinear, non-homogeneous ordinary differential equations. We solved the final equations by applying the homotopy analysis technique. The influence of the governing parameters on the temperature and velocity profiles is explained in detail, and graphical results for the parameters of the considered nanofluids are presented separately. It is worth mentioning that the skin friction along the x- and y-directions is maximum for the copper oxide-water nanofluid and minimum for the alumina-water nanofluid. The local Nusselt number is maximum for the copper oxide-water nanofluid and minimum for the magnetite-water nanofluid.
Structural, energetic, and electronic trends in low-dimensional late-transition-metal systems
NASA Astrophysics Data System (ADS)
Hu, C. H.; Chizallet, C.; Toulhoat, H.; Raybaud, P.
2009-05-01
Using first-principles calculations, we present a comprehensive investigation of the structural trends of low dimensionality late 4d (from Tc to Ag) and 5d (from Re to Au) transition-metal systems including 13-atom clusters. Energetically favorable clusters not being reported previously are discovered by molecular-dynamics simulation based on the simulated annealing method. They allow a better agreement between experiments and theory for their magnetic properties. The structural periodic trend exhibits a nonmonotonic variation of the ratio of square to triangular facets for the two rows, with a maximum for Rh13 and Ir13 . By a comparative analysis of the relevant energetic and electronic properties performed on other metallic systems with reduced dimensionalities such as four-atom planar clusters, one-dimensional (1D) scales, double scales, 1D cylinders, monatomic films, two and seven layer slabs, we highlight that this periodic trend can be generalized. Hence, it appears that 1D-metallic nanocylinders or 1D-double nanoscales (with similar binding energies as TM13 ) also favor square facets for Rh and Ir. We finally propose an interpretation based on the evolution of the width of the valence band and of the Coulombic repulsions of the bonding basins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Meakin, Paul
2005-08-10
A numerical model based on smoothed particle hydrodynamics (SPH) has been developed and used to simulate the classical two-dimensional Rayleigh-Taylor instability and three-dimensional miscible flow in fracture apertures with complex geometries. To model miscible flow, fluid particles with variable, composition-dependent masses were used. By basing the SPH equations on the particle number density, artificial surface tension effects were avoided. The simulation results for the growth of a single perturbation driven by the Rayleigh-Taylor instability compare well with numerical results obtained by Fournier et al., and the growth of a perturbation with time can be represented quite well by a second-degree polynomial, in accord with the linear stability analysis of Duff et al. The dispersion coefficient found from SPH simulation of flow and diffusion in an ideal fracture was in excellent agreement with the value predicted by the theory of Taylor and Aris. The simulations of miscible flow in fracture apertures can be used to determine dispersion coefficients for transport in fractured media, a parameter used in large-scale simulations of contaminant transport.
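One basic SPH ingredient used in models like this, the kernel-smoothed field estimate, can be sketched in a few lines. The 2D cubic-spline kernel is standard, while the lattice setup is an illustrative assumption; this is not the paper's Rayleigh-Taylor simulation:

```python
import numpy as np

def w_cubic(r, h):
    """Standard 2D cubic-spline SPH kernel with smoothing length h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)      # 2D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Particles on a uniform lattice with mass dx^2, so the true density is 1.0
# (an assumed toy configuration to check the summation density).
dx = 0.1
xs, ys = np.meshgrid(np.arange(0.0, 2.0, dx), np.arange(0.0, 2.0, dx))
pos = np.column_stack([xs.ravel(), ys.ravel()])
mass = dx * dx
h = 1.3 * dx

center = np.array([1.0, 1.0])                  # probe away from the edges
r = np.linalg.norm(pos - center, axis=1)
rho = np.sum(mass * w_cubic(r, h))             # SPH summation density
print("SPH density at the centre (target 1.0):", rho)
```

The summation density recovers the true value to within a few percent on a uniform lattice, which is the sanity check usually run before adding dynamics.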
Chan, Ernest G; Landreneau, James R; Schuchert, Matthew J; Odell, David D; Gu, Suicheng; Pu, Jiantao; Luketich, James D; Landreneau, Rodney J
2015-09-01
Accurate cancer localization and negative resection margins are necessary for successful segmentectomy. In this study, we evaluate a newly developed software package that permits automated segmentation of the pulmonary parenchyma, allowing 3-dimensional assessment of tumor size, location, and estimates of surgical margins. A pilot study using a newly developed 3-dimensional computed tomography analytic software package was performed to retrospectively evaluate preoperative computed tomography images of patients who underwent segmentectomy (n = 36) or lobectomy (n = 15) for stage 1 non-small cell lung cancer. The software accomplishes an automated reconstruction of anatomic pulmonary segments of the lung based on bronchial arborization. Estimates of anticipated surgical margins and pulmonary segmental volume were made on the basis of 3-dimensional reconstruction. Autosegmentation was achieved in 72.7% (32/44) of preoperative computed tomography images with slice thicknesses of 3 mm or less. Reasons for segmentation failure included local severe emphysema or pneumonitis, and lower computed tomography resolution. Tumor segmental localization was achieved in all autosegmented studies. The 3-dimensional computed tomography analysis provided a positive predictive value of 87% in predicting a marginal clearance greater than 1 cm and a 75% positive predictive value in predicting a margin to tumor diameter ratio greater than 1 in relation to the surgical pathology assessment. This preoperative 3-dimensional computed tomography analysis of segmental anatomy can confirm the tumor location within an anatomic segment and aid in predicting surgical margins. This 3-dimensional computed tomography information may assist in the preoperative assessment regarding the suitability of segmentectomy for peripheral lung cancers. Published by Elsevier Inc.
An evaluation method for nanoscale wrinkle
NASA Astrophysics Data System (ADS)
Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.
2016-06-01
In this paper, a spectrum-based wrinkling analysis method via two-dimensional Fourier transformation is proposed to address the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, from a single wrinkling image. Based on this method, the evaluation results for nanoscale wrinkle characteristics agree with published experimental results within an error of 6%. The method is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is thus an effective method for nanoscale evaluation, which helps reveal the mechanism of nanoscale wrinkling.
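The spectrum-based idea can be sketched by generating a synthetic sinusoidal wrinkle image with a known wavelength and direction (illustrative parameters, not the paper's data) and recovering both from the peak of the 2D power spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic wrinkle image: sinusoidal ridges with known wavelength and direction
# (a toy stand-in for a wrinkling micrograph; parameters are assumptions).
n = 256
wavelength_px = 16.0
theta = np.deg2rad(30.0)                       # wrinkle direction
y, x = np.mgrid[0:n, 0:n]
phase = 2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / wavelength_px
img = np.sin(phase) + 0.1 * rng.standard_normal((n, n))

# Locate the dominant peak of the centered 2D power spectrum.
F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
power = np.abs(F) ** 2
power[n // 2, n // 2] = 0.0                    # suppress any residual DC term
iy, ix = np.unravel_index(np.argmax(power), power.shape)
fy, fx = (iy - n // 2) / n, (ix - n // 2) / n  # cycles per pixel
freq = np.hypot(fx, fy)
print("estimated wavelength (px):", 1.0 / freq)
print("estimated direction (deg):", np.degrees(np.arctan2(fy, fx)) % 180)
```

The peak frequency gives the wavelength as its reciprocal and the direction from its orientation, modulo 180 degrees because the spectrum of a real image is symmetric.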
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
Determination of melamine of milk based on two-dimensional correlation infrared spectroscopy
NASA Astrophysics Data System (ADS)
Yang, Ren-jie; Liu, Rong; Xu, Kexin
2012-03-01
The adulteration of milk with harmful substances is a threat to public health and beyond question a serious crime. In order to develop a rapid, cost-effective, high-throughput analysis method for detecting adulterants in milk, a discriminative analysis of melamine in milk based on two-dimensional (2D) correlation infrared spectroscopy is established in the present paper. Pure milk samples and adulterated milk samples with different contents of melamine were prepared, and the Fourier transform infrared spectra of all samples were measured at room temperature. The characteristics of pure milk and adulterated milk were studied by one-dimensional spectra. The 2D NIR and 2D IR correlation spectra were calculated under the perturbation of adulteration concentration. In the range from 1400 to 1800 cm-1, two strong autopeaks arising from melamine in milk appeared at 1464 cm-1 and 1560 cm-1 in the synchronous spectrum. At the same time, the 1560 cm-1 band does not share a cross peak with the 1464 cm-1 band, which further confirms that the two bands have the same origin. Also, in the range from 4200 to 4800 cm-1, an autopeak appeared at 4648 cm-1 in the synchronous spectrum of melamine in milk. 2D NIR-IR hetero-spectral correlation analysis confirmed that the bands at 1464, 1560 and 4648 cm-1 have the same origin. The results demonstrate that the adulterant can be discriminated correctly by 2D correlation infrared spectroscopy.
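Noda's generalized 2D correlation spectra behind this kind of analysis can be sketched as follows. The band positions echo the 1464 and 1560 cm-1 values above, but the spectra themselves are synthetic toy data, not the paper's measurements:

```python
import numpy as np

# Toy perturbation-dependent spectra: m concentrations x n wavenumbers, with two
# bands growing together with concentration (shared origin, illustrative values).
m, n = 9, 400
wn = np.linspace(1400.0, 1800.0, n)
conc = np.linspace(0.0, 1.0, m)[:, None]
band = lambda center: np.exp(-((wn - center) / 12.0) ** 2)
Y = conc * (band(1464.0) + band(1560.0))

Yc = Y - Y.mean(axis=0)                        # dynamic spectra
sync = Yc.T @ Yc / (m - 1)                     # synchronous 2D spectrum

# Hilbert-Noda transformation matrix for the asynchronous spectrum.
j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
with np.errstate(divide="ignore"):
    N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
asyn = Yc.T @ (N @ Yc) / (m - 1)

i1, i2 = np.abs(wn - 1464.0).argmin(), np.abs(wn - 1560.0).argmin()
print("synchronous cross peak:", sync[i1, i2])   # strong: bands vary together
print("asynchronous cross peak:", asyn[i1, i2])  # ~0: in-phase, same origin
```

Bands that change in phase with the perturbation produce a synchronous cross peak and a vanishing asynchronous one, which is the same-origin signature discussed in the abstract.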
Benazzi, Stefano; Panetta, Daniele; Fornai, Cinzia; Toussaint, Michel; Gruppioni, Giorgio; Hublin, Jean-Jacques
2014-02-01
The study of enamel thickness has received considerable attention in regard to the taxonomic, phylogenetic and dietary assessment of human and non-human primates. Recent developments based on two-dimensional (2D) and three-dimensional (3D) digital techniques have facilitated accurate analyses, preserving the original object from invasive procedures. Various digital protocols have been proposed. These include several procedures based on manual handling of the virtual models and technical shortcomings, which prevent other scholars from confidently reproducing the entire digital protocol. There is a compelling need for standard, reproducible, and well-tailored protocols for the digital analysis of 2D and 3D dental enamel thickness. In this contribution we provide essential guidelines for the digital computation of 2D and 3D enamel thickness in hominoid molars, premolars, canines and incisors. We modify previous techniques suggested for 2D analysis and we develop a new approach for 3D analysis that can also be applied to premolars and anterior teeth. For each tooth class, the cervical line should be considered as the fundamental morphological feature both to isolate the crown from the root (for 3D analysis) and to define the direction of the cross-sections (for 2D analysis). Copyright © 2013 Wiley Periodicals, Inc.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Teaching the Falling Ball Problem with Dimensional Analysis
ERIC Educational Resources Information Center
Sznitman, Josué; Stone, Howard A.; Smits, Alexander J.; Grotberg, James B.
2013-01-01
Dimensional analysis is often a subject reserved for students of fluid mechanics. However, the principles of scaling and dimensional analysis are applicable to various physical problems, many of which can be introduced early on in a university physics curriculum. Here, we revisit one of the best-known examples from a first course in classic…
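The falling-ball example the article revisits can be completed by a short dimensional argument. Assuming the fall time t depends only on the drop height h, the gravitational acceleration g, and the mass m:

```latex
t = C\,h^{a} g^{b} m^{c},\qquad
[t]=\mathrm{T},\quad [h]=\mathrm{L},\quad
[g]=\mathrm{L\,T^{-2}},\quad [m]=\mathrm{M}.
% Matching exponents of M, L and T:
%   M:\; c = 0, \qquad L:\; a + b = 0, \qquad T:\; -2b = 1,
\Longrightarrow\; a=\tfrac12,\;\; b=-\tfrac12,\;\; c=0,
\qquad t = C\sqrt{h/g}.
```

The mass drops out, and only the dimensionless constant C (equal to sqrt(2) for free fall from rest) is left undetermined by the dimensional argument.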
Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai
2017-01-01
This paper presents an analysis of 3-dimensional engineered structural panels (3DESP) made from wood-fiber-based laminated paper composites. Since the existing models for calculating the mechanical behavior of core configurations within sandwich panels are very complex, a new simplified orthogonal model (SOM) using an equivalent element has been developed. This model...
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is, under some mild conditions, a piecewise continuous (m − 1)-dimensional manifold in the decision space. How to utilize this regularity in designing multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space described by a probability distribution, the centroid of which is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper. PMID:25874246
XCOM intrinsic dimensionality for low-Z elements at diagnostic energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornefalk, Hans
2012-02-15
Purpose: To determine the intrinsic dimensionality of linear attenuation coefficients (LACs) from XCOM for elements with low atomic number (Z = 1-20) at diagnostic x-ray energies (25-120 keV). H0^q, the hypothesis that the space of LACs is spanned by q bases, is tested for various q-values. Methods: Principal component analysis is first applied and the LACs are projected onto the first q principal component bases. The residuals of the model values vs XCOM data are determined for all energies and atomic numbers. Heteroscedasticity invalidates the prerequisite of i.i.d. errors necessary for bootstrapping residuals. Instead, the wild bootstrap is applied, which, by not mixing residuals, allows the effect of the non-i.i.d. residuals to be reflected in the result. Credible regions for the eigenvalues of the correlation matrix for the bootstrapped LAC data are determined. If subsequent credible regions for the eigenvalues overlap, the corresponding principal component is not considered to represent true data structure but noise. If this happens for eigenvalues l and l + 1, for any l ≤ q, H0^q is rejected. Results: The largest value of q for which H0^q is nonrejectable at the 5%-level is q = 4. This indicates that the statistically significant intrinsic dimensionality of low-Z XCOM data at diagnostic energies is four. Conclusions: The method presented allows determination of the statistically significant dimensionality of any noisy linear subspace. Knowledge of such significant dimensionality is of interest for any method making assumptions on intrinsic dimensionality and evaluating results on noisy reference data. For LACs, knowledge of the low-Z dimensionality might be relevant when parametrization schemes are tuned to XCOM data. For x-ray imaging techniques based on the basis decomposition method (Alvarez and Macovski, Phys. Med. Biol. 21, 733-744, 1976), an underlying dimensionality of two is commonly assigned to the LAC of human tissue at diagnostic energies. The finding of a higher statistically significant dimensionality thus raises the question whether a higher assumed model dimensionality (now feasible with the advent of multibin x-ray systems) might also be practically relevant, i.e., whether better tissue characterization results can be obtained.
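The eigenvalue-overlap test can be sketched in NumPy. This is an illustrative reimplementation under simplifying assumptions (Rademacher wild-bootstrap weights, percentile intervals), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_credible_regions(X, q, n_boot=200, alpha=0.05):
    """Wild-bootstrap credible intervals for correlation-matrix eigenvalues.

    X : (n_energies, n_elements) matrix of attenuation-like data.
    A rank-q PCA model is fitted; its residuals are resampled with
    Rademacher weights (wild bootstrap: residuals stay in place, never
    mixed), the data are rebuilt, and the eigenvalues of the correlation
    matrix are recomputed for each replicate.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    model = (U[:, :q] * s[:q]) @ Vt[:q]             # rank-q reconstruction
    resid = Xc - model
    boot_eigs = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=resid.shape)   # Rademacher weights
        Xb = model + w * resid                      # residuals kept in place
        corr = np.corrcoef(Xb, rowvar=False)
        boot_eigs[b] = np.sort(np.linalg.eigvalsh(corr))[::-1]
    lo = np.percentile(boot_eigs, 100 * alpha / 2, axis=0)
    hi = np.percentile(boot_eigs, 100 * (1 - alpha / 2), axis=0)
    return lo, hi
```

With the intervals in hand, the paper's rule is applied: if the intervals for adjacent eigenvalues l and l + 1 overlap for some l ≤ q, the rank-q hypothesis is rejected.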
Sun, Hokeun; Wang, Shuang
2013-05-30
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the widely applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring the matching is known to introduce serious estimation bias. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 Beadchip. Several new CpG sites and genes known to be related to HCC were identified that were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
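For 1:1 matched pairs, the conditional likelihood has a simple closed form: matching cancels each pair's intercept, leaving a logistic model on within-pair covariate differences. A minimal sketch, in which a ridge term stands in for the paper's network-based penalty (an illustrative simplification):

```python
import numpy as np

def penalized_cond_loglik(beta, x_case, x_ctrl, lam):
    """Penalized conditional log-likelihood for 1:1 matched pairs.

    For pair i, the conditional probability that the observed case is
    the case, given the pair, is sigmoid(beta' (x_case_i - x_ctrl_i));
    the pair-specific intercept cancels out. A ridge penalty replaces
    the network-based penalty of the paper for illustration.
    """
    d = x_case - x_ctrl                  # per-pair covariate differences
    eta = d @ beta
    loglik = -np.logaddexp(0.0, -eta).sum()   # sum of log sigmoid(eta), stable
    return loglik - lam * np.dot(beta, beta)
```

Maximizing this objective (e.g. by gradient ascent) is exactly a penalized logistic regression on the differences with no intercept, which is why ignoring the matching changes the estimates.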
NASA Technical Reports Server (NTRS)
Thompson, Danniella Muheim; Griffin, O. Hayden, Jr.; Vidussoni, Marco A.
1990-01-01
A practical example of applying two- to three-dimensional (2- to 3-D) global/local finite element analysis to laminated composites is presented. Cross-ply graphite/epoxy laminates of 0.1-in. (0.254-cm) thickness with central circular holes ranging from 1 to 6 in. (2.54 to 15.2 cm) in diameter, subjected to in-plane compression were analyzed. Guidelines for full three-dimensional finite element analysis and two- to three-dimensional global/local analysis of interlaminar stresses at straight free edges of laminated composites are included. The larger holes were found to reduce substantially the interlaminar stresses at the straight free-edge in proximity to the hole. Three-dimensional stress results were obtained for thin laminates which require prohibitive computer resources for full three-dimensional analyses of comparative accuracy.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general, edges are considered to be abrupt changes or discontinuities in the intensity distribution of a two-dimensional image signal. The accuracy of front-end edge detection methods in image processing affects the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research draws on one-dimensional edge pattern analysis to improve the accuracy of edge detection and localization, and proposes an edge detection algorithm built on three basic edge patterns: ramp, impulse, and step. After mathematical analysis, general rules for edge representation based upon the classification of edge types into the three categories of ramp, impulse, and step (RIS) are developed to reduce detection and localization errors, in particular the "double edge" effect that is an important drawback of derivative methods. When applying one-dimensional edge patterns in two-dimensional image processing, however, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Accordingly, in addition to edge pattern analysis, inflection and junction characterization is utilized to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy.
The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
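The core of the approach, an eigenbasis of a discrete Laplacian used as spatial "frequencies" on the sensor layout, can be sketched with a plain graph Laplacian. The paper's Laplace-Beltrami discretizations on a triangular mesh (including the FEM variant) are more refined; the names below are illustrative:

```python
import numpy as np

def laplacian_basis(adjacency):
    """Spatial harmonic basis from a sensor-graph Laplacian.

    adjacency : (n, n) symmetric 0/1 matrix connecting neighbouring
    sensors. The eigenvectors of L = D - A, ordered by eigenvalue,
    play the role of spatial frequencies (a graph-Laplacian stand-in
    for the mesh Laplace-Beltrami operators discussed in the paper).
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)        # ascending eigenvalues
    return eigvals, eigvecs

def project(data, basis, k):
    """Dimensionality reduction: keep the k lowest-frequency coefficients."""
    coeffs = basis[:, :k].T @ data
    return basis[:, :k] @ coeffs
```

Truncating to the first k coefficients is the compression step the paper quantifies (35% or 58% of coefficients for 95% or 99% of the energy with FEM); discarding high-frequency coefficients is likewise the noise-suppression step.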
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
Quantum key distribution for composite dimensional finite systems
NASA Astrophysics Data System (ADS)
Shalaby, Mohamed; Kamal, Yasser
2017-06-01
The application of quantum mechanics contributes to the field of cryptography with a very important advantage: it offers a mechanism for detecting an eavesdropper. The pioneering work on quantum key distribution uses mutually unbiased bases (MUBs) to prepare and measure qubits (or qudits). Weak mutually unbiased bases (WMUBs) have weaker properties than MUBs; however, unlike MUBs, a complete set of WMUBs can be constructed for systems with composite dimensions. In this paper, we study the use of WMUBs in quantum key distribution for composite dimensional finite systems. We prove that the security analysis of using a complete set of WMUBs to prepare and measure the quantum states in the generalized BB84 protocol gives better results than using the maximum number of MUBs that can be constructed, when analyzed against the intercept-and-resend attack.
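The prepare-and-measure logic that the generalized protocol builds on can be simulated classically for the qubit case. A toy sketch with two bases (Z/X) and no eavesdropper; all names are illustrative and the WMUB generalization of the paper is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(7)

def bb84_sift(n):
    """Toy BB84 sifting round (qubit case, two mutually unbiased bases).

    Alice encodes random bits in randomly chosen bases; Bob measures in
    randomly chosen bases. Positions where the bases match form the
    sifted key; without an eavesdropper the sifted bits agree exactly.
    Measuring in the wrong basis yields a uniformly random bit.
    """
    a_bits = rng.integers(0, 2, n)
    a_bases = rng.integers(0, 2, n)
    b_bases = rng.integers(0, 2, n)
    b_bits = np.where(b_bases == a_bases, a_bits, rng.integers(0, 2, n))
    keep = a_bases == b_bases
    return a_bits[keep], b_bits[keep]
```

An intercept-and-resend eavesdropper would measure and re-prepare each qubit in a random basis, introducing a detectable error rate in the sifted key; the paper's contribution is the analogous error analysis when a complete set of WMUBs replaces the MUBs in composite dimensions.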
Bairy, Santhosh Kumar; Suneel Kumar, B V S; Bhalla, Joseph Uday Tej; Pramod, A B; Ravikumar, Muttineni
2009-04-01
c-Src kinase plays an important role in cell growth and differentiation, and its inhibitors can be useful for the treatment of various diseases, including cancer, osteoporosis, and metastatic bone disease. Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were carried out on quinazoline derivatives inhibiting c-Src kinase. Molecular field analysis (MFA) models with four different alignment techniques, namely GLIDE, GOLD, LIGANDFIT and least-squares based methods, were developed. The GLIDE-based MFA model showed the best results (leave-one-out cross-validation correlation coefficient r(2)(cv) = 0.923 and non-cross-validation correlation coefficient r(2) = 0.958) when compared with the other models. These results help us to understand the nature of descriptors required for the activity of these compounds and thereby provide guidelines to design novel and potent c-Src kinase inhibitors.
Edelbring, Samuel
2012-08-15
The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in other settings than in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
Three-dimensional analysis of facial shape and symmetry in twins using laser surface scanning.
Djordjevic, J; Jadallah, M; Zhurov, A I; Toma, A M; Richmond, S
2013-08-01
The aim was a three-dimensional analysis of facial shape and symmetry in twins. Faces of 37 twin pairs [19 monozygotic (MZ) and 18 dizygotic (DZ)] were laser scanned at the age of 15 during a follow-up of the Avon Longitudinal Study of Parents and Children (ALSPAC), South West of England. Facial shape was analysed using two methods: 1) Procrustes analysis of landmark configurations (63 x, y and z coordinates of 21 facial landmarks) and 2) three-dimensional comparisons of facial surfaces within each twin pair. Monozygotic and DZ twins were compared using ellipsoids representing 95% of the variation in landmark configurations and surface-based average faces. Facial symmetry was analysed by superimposing the original and mirrored facial images. Both analyses showed greater similarity of facial shape in MZ twins, with the lower third being the least similar. Procrustes analysis did not reveal any significant difference in the facial landmark configurations of MZ and DZ twins. The average faces of MZ and DZ males were coincident in the forehead, supraorbital and infraorbital ridges, the bridge of the nose and the lower lip. In MZ and DZ females, the eyes, supraorbital and infraorbital ridges, philtrum and lower part of the cheeks were coincident. Zygosity did not seem to influence the amount of facial symmetry. The lower facial third was the most asymmetrical. Three-dimensional analyses revealed differences in the facial shapes of MZ and DZ twins. The relative contribution of genetic and environmental factors differs for the upper, middle and lower facial thirds. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
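Method 1, Procrustes analysis of landmark configurations, removes translation, scale and rotation before comparing shapes. A minimal sketch (function name illustrative; a full shape-analysis implementation would also constrain det(R) = +1 to exclude reflections):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Ordinary Procrustes distance between two landmark configurations.

    X, Y : (n_landmarks, 3) arrays (e.g. 21 facial landmarks). Both
    configurations are centred, scaled to unit centroid size, and Y is
    optimally rotated onto X; the residual sum of squares is the shape
    difference. The orthogonal factor may include a reflection.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)   # optimal rotation via SVD
    R = U @ Vt
    return float(np.sum((Xc - Yc @ R) ** 2))
```

Two faces that differ only by position, size and orientation have distance near zero; within-pair distances computed this way are what the MZ/DZ comparison summarizes.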
MacKinnon, Neil; Somashekar, Bagganahalli S; Tripathi, Pratima; Ge, Wencheng; Rajendiran, Thekkelnaycke M; Chinnaiyan, Arul M; Ramamoorthy, Ayyalusamy
2013-01-01
Nuclear magnetic resonance based measurements of small-molecule mixtures continue to be confronted with the challenge of spectral assignment. While multi-dimensional experiments are capable of addressing this challenge, the imposed time constraint becomes prohibitive, particularly with the large sample sets commonly encountered in metabolomic studies. Thus, one-dimensional spectral assignment is routinely performed, guided by two-dimensional experiments on a selected sample subset; however, a publicly available graphical interface for aiding in this process is currently unavailable. We have collected spectral information for 360 unique compounds from publicly available databases, including chemical shift lists and authentic full-resolution spectra, supplemented with spectral information for 25 compounds collected in-house at a proton NMR frequency of 900 MHz. This library serves as the basis for MetaboID, a Matlab-based user interface designed to aid in the one-dimensional spectral assignment process. The tools of MetaboID were built to guide resonance assignment in order of increasing confidence, starting from cursory compound searches based on chemical shift positions up to the analysis of authentic spike experiments. Together, these tools streamline the often repetitive task of spectral assignment. The overarching goal of the integrated MetaboID toolbox is to centralize the one-dimensional spectral assignment process, from providing access to large chemical shift libraries to providing a straightforward, intuitive means of spectral comparison. Such a toolbox is expected to be attractive to experienced and new metabolomic researchers alike, as well as to general complex-mixture analysts. Copyright © 2012 Elsevier Inc. All rights reserved.
On a 3-D singularity element for computation of combined mode stress intensities
NASA Technical Reports Server (NTRS)
Atluri, S. N.; Kathiresan, K.
1976-01-01
A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The finite element method is based on a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
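The per-tracer Kalman filtering can be illustrated with a one-dimensional constant-velocity model; the actual system associates asynchronous events with 3D bubble tracks, and all parameter values below are illustrative:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, q=1e-2, r=1e-2):
    """One predict/update step of a constant-velocity Kalman filter,
    the kind of per-tracer filter used to associate events with tracks.
    State x = [position, velocity]; z is one measured event position.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the event position z
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Because the event stream is asynchronous, dt varies per event in practice; the innovation magnitude also serves as the gating criterion for deciding which tracer an event belongs to.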
Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra
2017-07-01
This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.
Deviation diagnosis and analysis of hull flat block assembly based on a state space model
NASA Astrophysics Data System (ADS)
Zhang, Zhiying; Dai, Yinfang; Li, Zhen
2012-09-01
Dimensional control is one of the most important challenges in the shipbuilding industry. In order to predict assembly dimensional variation in hull flat block construction, a variation stream model based on state space was presented in this paper which can be further applied to accuracy control in shipbuilding. Part accumulative error, locating error, and welding deformation were taken into consideration in this model, and variation propagation mechanisms and the accumulative rule in the assembly process were analyzed. Then, a model was developed to describe the variation propagation throughout the assembly process. Finally, an example of flat block construction from an actual shipyard was given. The result shows that this method is effective and useful.
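The accumulation rule such a state-space model encodes can be sketched generically as covariance propagation through assembly stations. The matrices and error sources below are illustrative, not the paper's calibrated shipyard model:

```python
import numpy as np

def propagate_variation(Sigma0, steps):
    """Propagate assembly dimensional variation through stations.

    State-space form x_k = A_k x_{k-1} + B_k u_k + w_k, where x is the
    deviation vector (part accumulative error), u the locating error and
    w the welding deformation noise. Each element of `steps` supplies
    (A, B, Sigma_u, Q); covariances add because the error sources are
    assumed mutually independent.
    """
    Sigma = Sigma0
    for A, B, Sigma_u, Q in steps:
        Sigma = A @ Sigma @ A.T + B @ Sigma_u @ B.T + Q
    return Sigma
```

Running this over the station sequence predicts the final assembly variation from the per-station error budgets, which is the prediction used for accuracy control.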
Analysis of rotary engine combustion processes based on unsteady, three-dimensional computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1990-01-01
A new computer code was developed for predicting the turbulent and chemically reacting flows with sprays occurring inside of a stratified charge rotary engine. The solution procedure is based on an Eulerian Lagrangian approach where the unsteady, three-dimensional Navier-Stokes equations for a perfect gas mixture with variable properties are solved in generalized, Eulerian coordinates on a moving grid by making use of an implicit finite volume, Steger-Warming flux vector splitting scheme, and the liquid phase equations are solved in Lagrangian coordinates. Both the details of the numerical algorithm and the finite difference predictions of the combustor flow field during the opening of exhaust and/or intake, and also during fuel vaporization and combustion, are presented.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space, in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90-dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
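The building block of LWPR, one locally weighted linear model, can be sketched in batch form. LWPR itself updates such models incrementally and fits along partial-least-squares directions; this illustrative version fits one receptive field directly:

```python
import numpy as np

def locally_weighted_predict(X, y, x_query, center, D):
    """Prediction of a single local linear model, the building block of LWPR.

    Each training point receives a Gaussian weight
    w = exp(-0.5 (x - c)' D (x - c)) around the receptive-field center c
    (D is the distance metric the full algorithm adapts online); a
    weighted least-squares affine model is then fitted and evaluated.
    """
    diff = X - center
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, D, diff))
    Xa = np.hstack([X, np.ones((len(X), 1))])   # affine design matrix
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return float(np.append(x_query, 1.0) @ beta)
```

The full algorithm blends the predictions of many such receptive fields, weighted by their activations, which is what lets it ignore irrelevant and redundant input dimensions locally.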
NASA Technical Reports Server (NTRS)
Noor, A. K.; Malik, M.
2000-01-01
A study is made of the effects of variation in the lamination and geometric parameters and boundary conditions of multi-layered composite panels on the accuracy of the detailed response characteristics obtained by five different modeling approaches. The modeling approaches considered include four two-dimensional models, each with five parameters to characterize the deformation in the thickness direction, and a predictor-corrector approach with twelve displacement parameters. The two-dimensional models are first-order shear deformation theory; a third-order theory; a theory based on trigonometric variation of the transverse shear stresses through the thickness; and a discrete layer theory. The combination of the following four key elements distinguishes the present study from previous studies reported in the literature: (1) the standard of comparison is taken to be the solutions obtained by using three-dimensional continuum models for each of the individual layers; (2) both mechanical and thermal loadings are considered; (3) boundary conditions other than simply supported edges are considered; and (4) the quantities compared include detailed through-the-thickness distributions of transverse shear and transverse normal stresses. Based on the numerical studies conducted, the predictor-corrector approach appears to be the most effective technique for obtaining accurate transverse stresses; for thermal loading, none of the two-dimensional models is adequate for calculating transverse normal stresses, even when used in conjunction with three-dimensional equilibrium equations.
A novel model of magnetorheological damper with hysteresis division
NASA Astrophysics Data System (ADS)
Yu, Jianqiang; Dong, Xiaomin; Zhang, Zonglun
2017-10-01
Due to the complex nonlinearity of magnetorheological (MR) behavior, the modeling of MR dampers is a challenge, and a simple and effective model of the MR damper remains a work in progress. A novel model of the MR damper is proposed in this study using a force-velocity hysteresis division method. With this division idea, a typical hysteresis loop of an MR damper can be simply divided into two novel curves: the backbone curve and the branch curve. Exponential-family functions that capture the characteristics of the two curves simplify the model and improve identification efficiency. To illustrate and validate the novel phenomenological model with the hysteresis division idea, a dual-end MR damper is designed and tested. Based on the experimental data, the characteristics of the novel curves are investigated. To simplify parameter identification and obtain reversibility, the maximum force part, the non-dimensional backbone part and the non-dimensional branch part are derived from the two curves. The maximum force part and the non-dimensional parts are combined in a multiplicative add-rule. The maximum force part depends on the current and the maximum velocity. The non-dominated sorting genetic algorithm II (NSGA-II) based on design of experiments (DOE) is employed to identify the parameters of the normalized shape functions. Comparative analysis is conducted based on the identification results. The analysis shows that the novel model, with few identification parameters, has higher accuracy and better predictive ability.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and on the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on analysis of the variance breakdown, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
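The variance-based (Sobol) first-order indices mentioned above can be estimated with a plain Monte Carlo scheme. The following is a minimal NumPy sketch of the Saltelli-style estimator on a toy two-variable model; the model, input ranges, and sample size are illustrative assumptions, not the camera-LiDAR calibration itself:

```python
import numpy as np

def sobol_first_order(model, n_vars, n_samples, rng):
    """Estimate first-order Sobol indices with the Saltelli scheme.

    A and B are independent sample matrices; AB_i equals A with its
    i-th column replaced by the i-th column of B.
    """
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_vars)
    for i in range(n_vars):
        AB = A.copy()
        AB[:, i] = B[:, i]
        S[i] = np.mean(fB * (model(AB) - fA)) / var
    return S

# Toy model Y = 4*X1 + X2 with X ~ U(0,1): analytically S1 = 16/17, S2 = 1/17.
rng = np.random.default_rng(0)
model = lambda X: 4.0 * X[:, 0] + X[:, 1]
S = sobol_first_order(model, 2, 100_000, rng)
```

For an additive model like this the two indices sum to one; for interacting models the gap to one measures higher-order effects.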
Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software.
Ebersberger, Ullrich; Marcus, Roy P; Schoepf, U Joseph; Lo, Gladys G; Wang, Yining; Blanke, Philipp; Geyer, Lucas L; Gray, J Cranston; McQuiston, Andrew D; Cho, Young Jun; Scheuering, Michael; Canstein, Christian; Nikolaou, Konstantin; Hoffmann, Ellen; Bamberg, Fabian
2014-01-01
To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min, respectively (P < 0.01). There was strong agreement between the two approaches for both measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01), and no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. • Myocardial perfusion CT is attractive for comprehensive coronary heart disease assessment. • Traditional image analysis methods are cumbersome and time-consuming. • Automated 3D perfusion software shortens analysis times. • Automated 3D perfusion software increases standardisation of myocardial perfusion CT. • Automated, standardised analysis fosters myocardial perfusion CT integration into clinical practice.
Vertex Space Analysis for Model-Based Target Recognition.
1996-08-01
performed in our unique invariant representation, Vertex Space, that reduces both the dimensionality and size of the required search space. Vertex Space ... mapping results in a reduced representation that serves as a characteristic target signature which is invariant to four of the six viewing geometry
NASA Technical Reports Server (NTRS)
Yang, Y. L.; Tan, C. S.; Hawthorne, W. R.
1992-01-01
A computational method, based on a theory for turbomachinery blading design in three-dimensional inviscid flow, is applied to a parametric design study of a radial inflow turbine wheel. As the method requires the specification of a swirl distribution, a technique for its smooth generation within the blade region is proposed. Excellent agreement has been obtained between the computed results from this design method and those from direct Euler computations, demonstrating the correspondence and consistency between the two. The computed results indicate the sensitivity of the pressure distribution to a lean in the stacking axis and to a minor alteration in the hub/shroud profiles. Analysis based on a Navier-Stokes solver shows no breakdown of the flow within the designed blade passage and agreement with the design calculation; thus the flow in the designed turbine rotor closely approximates an inviscid one. These calculations illustrate the use of a design method coupled to an analysis tool for establishing guidelines and criteria for designing turbomachinery blading.
Nakamura, Tatsuji; Kuromitsu, Junro; Oda, Yoshiya
2008-03-01
Two-dimensional liquid-chromatographic (LC) separation followed by mass spectrometric (MS) analysis was examined for the identification of peptides in complex mixtures as an alternative to widely used two-dimensional gel electrophoresis followed by MS analysis for use in proteomics. The present method involves the off-line coupling of a narrow-bore, polymer-based, reversed-phase column using an acetonitrile gradient in an alkaline mobile phase in the first dimension with octadecylsilanized silica (ODS)-based nano-LC/MS in the second dimension. After the first separation, successive fractions were acidified and dried off-line, then loaded on the second dimension column. Both columns separate peptides according to hydrophobicity under different pH conditions, but more peptides were identified than with the conventional technique for shotgun proteomics, that is, the combination of a strong cation exchange column with an ODS column, and the system was robust because no salts were included in the mobile phases. The suitability of the method for proteomics measurements was evaluated.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses a histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative one-dimensional subspace for clustering at each level of the hierarchy until an almost unimodal distribution is achieved in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in a lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
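As an illustrative sketch (not the authors' algorithms), the idea of clustering in a learned discriminative one-dimensional subspace can be mimicked for two clusters by alternating a Fisher projection with 1-D thresholding; the initialization, synthetic data, and function names here are hypothetical:

```python
import numpy as np

def fisher_direction(X, labels):
    """Most discriminative 1-D projection for two classes (Fisher LDA)."""
    X0, X1 = X[labels == 0], X[labels == 1]
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
    return w / np.linalg.norm(w)

def discriminative_cluster(X, n_iter=10):
    """Alternate between learning a 1-D discriminative subspace and
    re-clustering in it; initialized from the leading principal component."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    labels = (Xc @ Vt[0] > 0).astype(int)
    for _ in range(n_iter):
        z = X @ fisher_direction(X, labels)
        labels = (z > z.mean()).astype(int)
    return labels

# Two synthetic "units" separated along one axis of a 4-D feature space.
rng = np.random.default_rng(4)
unit_a = rng.normal(0.0, 1.0, (200, 4)); unit_a[:, 0] += 6.0
unit_b = rng.normal(0.0, 1.0, (200, 4))
X = np.vstack([unit_a, unit_b])
labels = discriminative_cluster(X)
```

Real spike sorting must additionally estimate the number of neurons and handle overlapping, noisy waveforms, which this two-cluster toy ignores.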
Seidel, Dominik
2018-01-01
The three-dimensional forest structure affects many ecosystem functions and services provided by forests. As forests are made of trees, it seems reasonable to approach their structure by investigating individual tree structure. Based on three-dimensional point clouds from laser scanning, a newly developed holistic approach is presented that enables calculation of the box dimension, a fractal-analysis measure of the structural complexity of individual trees. It was found that the box dimension of trees was significantly different among the tested species, among trees belonging to the same species but exposed to different growing conditions (at gap vs. forest interior), and among trees exposed to different kinds of competition (intraspecific vs. interspecific). Furthermore, it was shown that the box dimension is positively related to the trees' growth rate. The box dimension was identified as an easy-to-calculate measure that integrates the effect of several external drivers of tree structure, such as competition strength and type, while simultaneously providing information on structure-related properties, like tree growth.
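The box dimension used here is the classical box-counting estimate: count occupied grid cells at several box sizes and fit the slope on a log-log plot. A minimal NumPy sketch for a 3-D point cloud; the grid sizes and the synthetic straight-line "cloud" (expected dimension near 1) are illustrative stand-ins for a laser-scanned tree:

```python
import numpy as np

def box_dimension(points, sizes):
    """Box-counting dimension of a 3-D point cloud.

    For each box size s, count occupied grid cells N(s), then fit the
    slope of log N(s) against log(1/s).
    """
    counts = []
    mins = points.min(axis=0)
    for s in sizes:
        cells = np.floor((points - mins) / s).astype(int)
        counts.append(len(np.unique(cells, axis=0)))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Points densely sampled along a straight line: dimension should be close to 1.
t = np.linspace(0.0, 1.0, 20_000)
line = np.column_stack([t, 2.0 * t, 0.5 * t])
d = box_dimension(line, sizes=[0.2, 0.1, 0.05, 0.025])
```

For a real tree scan the smallest usable box size is bounded below by the scanner's point spacing, which is why the range of sizes matters as much as the fit itself.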
NASA Astrophysics Data System (ADS)
Ma, Junhai; Ren, Wenbo; Zhan, Xueli
2017-04-01
Based on the work of scholars at home and abroad, this paper improves the three-dimensional IS-LM model in macroeconomics, analyzes the equilibrium point of the system and its stability conditions, and focuses on the parameters and complex dynamic characteristics under which Hopf bifurcation occurs in the three-dimensional IS-LM macroeconomic system. In order to analyze the stability of the limit cycles that arise at the Hopf bifurcation, the paper further introduces the first Lyapunov coefficient to judge the limit cycles, giving a practical view of the business cycle. Numerical simulation results show that within most of the parameter range, the limit cycle of the 3D IS-LM model is stable, that is, the business cycle is stable; as the parameters increase, the limit cycles become unstable, and the parameter range in which this occurs is small. The results of this paper provide good guidance for the analysis of macroeconomic systems.
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, Manohar D.; Cockrell, C. R.; Beck, F. B.
1995-01-01
A combined finite element method/method of moments (FEM/MoM) approach is used to analyze the electromagnetic scattering properties of a three-dimensional-cavity-backed aperture in an infinite ground plane. The FEM is used to formulate the fields inside the cavity, and the MoM (with subdomain bases) in both spectral and spatial domains is used to formulate the fields above the ground plane. Fields in the aperture and the cavity are solved using a system of equations resulting from the combination of the FEM and the MoM. By virtue of the FEM, this combined approach is applicable to all arbitrarily shaped cavities with inhomogeneous material fillings, and because of the subdomain bases used in the MoM, the apertures can be of any arbitrary shape. This approach leads to a partly sparse and partly full symmetric matrix, which is efficiently solved using a biconjugate gradient algorithm. Numerical results are presented to validate the analysis.
ERIC Educational Resources Information Center
Besson, Ugo; Borghi, Lidia; De Ambrosis, Anna; Mascheretti, Paolo
2010-01-01
We have developed a teaching-learning sequence (TLS) on friction based on a preliminary study involving three dimensions: an analysis of didactic research on the topic, an overview of usual approaches, and a critical analysis of the subject, considered also in its historical development. We found that mostly the usual presentations do not take…
Study of Graphite/Epoxy Composites for Material Flaw Criticality.
1980-11-01
The criticality of disbonds with two-dimensional planforms located in laminated graphite/epoxy composites has been examined. A linear elastic fracture mechanics approach, semi-empirical growth laws, and methods of stress analysis based on a modified laminated plate theory have been studied for assessing growth rates of disbonds in a transverse shear environment. Elastic stability analysis has been utilized for laminates with disbonds subjected to in
Chang, Sung-A; Kim, Hyung-Kwan; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Kwon, Oh Min; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2013-04-01
Left ventricular (LV) mass is an important prognostic indicator in hypertrophic cardiomyopathy. Although LV mass can be easily calculated using conventional echocardiography, the calculation is based on geometric assumptions and has inherent limitations in asymmetric left ventricles. Real-time three-dimensional echocardiographic (RT3DE) imaging with single-beat capture provides an opportunity for the accurate estimation of LV mass. The aim of this study was to validate this new technique for LV mass measurement in patients with hypertrophic cardiomyopathy. Sixty-nine patients with adequate two-dimensional (2D) and three-dimensional echocardiographic image quality underwent cardiac magnetic resonance (CMR) imaging and echocardiography on the same day. Real-time three-dimensional echocardiographic images were acquired using an Acuson SC2000 system, and CMR-determined LV mass was considered the reference standard. Left ventricular mass was derived using the formula of the American Society of Echocardiography (M-mode mass), the 2D-based truncated ellipsoid method (2D mass), and the RT3DE technique (RT3DE mass). The mean time for RT3DE analysis was 5.85 ± 1.81 min. Intraclass correlation analysis showed a close relationship between RT3DE and CMR LV mass (r = 0.86, P < .0001). However, LV mass by the M-mode or 2D technique showed a smaller intraclass correlation coefficient with CMR-determined mass (r = 0.48, P = .01, and r = 0.71, P < .001, respectively). Bland-Altman analysis showed reasonable limits of agreement between LV mass by RT3DE imaging and by CMR, with a smaller positive bias (19.5 g [9.1%]) compared with that by the M-mode and 2D methods (-35.1 g [-20.2%] and 30.6 g [17.6%], respectively). RT3DE measurement of LV mass using the single-beat capture technique is practical and more accurate than 2D or M-mode measurement in patients with hypertrophic cardiomyopathy. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
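The Bland-Altman limits of agreement used in comparisons like the one above are straightforward to compute: the bias is the mean difference between methods, and the 95% limits are the bias plus or minus 1.96 standard deviations of the differences. A minimal sketch with synthetic LV-mass-like numbers (the bias, spread, and sample size are invented for illustration):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 standard deviations of the differences.
    """
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic example: method b overestimates method a by roughly 10 g.
rng = np.random.default_rng(1)
a = rng.normal(150.0, 30.0, 500)           # e.g. reference LV mass (g)
b = a + 10.0 + rng.normal(0.0, 5.0, 500)   # second method with bias and noise
bias, (lo, hi) = bland_altman(a, b)
```
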
Plane Poiseuille Flow of a Rarefied Gas in the Presence of a Strong Gravitation
NASA Astrophysics Data System (ADS)
Doi, Toshiyuki
2010-11-01
Poiseuille flow of a rarefied gas between two horizontal planes in the presence of a strong gravitation is considered, where the gravity is so strong that the path of a molecule is curved considerably as it ascends or descends over the distance between the planes. The gas behavior is studied based on the Boltzmann equation. An asymptotic analysis for a slow variation in the longitudinal direction is carried out, and the problem is reduced to a spatially one-dimensional problem, as was done for the Poiseuille flow problem in the absence of gravitation. The mass flow rate, as well as the macroscopic variables, is obtained for a wide range of the mean free path of the gas and of the gravity. A numerical analysis of a two-dimensional problem is also carried out, and the result of the asymptotic analysis is verified.
X-ray microbeam three-dimensional topography for dislocation strain-field analysis of 4H-SiC
NASA Astrophysics Data System (ADS)
Tanuma, R.; Mori, D.; Kamata, I.; Tsuchida, H.
2013-07-01
This paper describes the strain-field analysis of threading edge dislocations (TEDs) and basal-plane dislocations (BPDs) in 4H-SiC using x-ray microbeam three-dimensional (3D) topography. This 3D topography enables quantitative strain-field analysis, which measures images of effective misorientations (Δω maps) around the dislocations. A deformation-matrix-based simulation algorithm is developed to theoretically evaluate the Δω mapping. Systematic linear calculations can provide simulated Δω maps (Δωsim maps) of dislocations with different Burgers vectors, directions, and reflection vectors for the desired cross-sections. For TEDs and BPDs, Δω maps are compared with Δωsim maps, and their excellent correlation is demonstrated. Two types of asymmetric reflections, high- and low-angle incidence types, are compared. Strain analyses are also conducted to investigate BPD-TED conversion near an epilayer/substrate interface in 4H-SiC.
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique based on the static shakedown theorem. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.
Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence
NASA Technical Reports Server (NTRS)
Zhou, YE; Matthaeus, William H.
1990-01-01
A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.
Chang, Po-Hsun; Tsai, Hsieh-Chih; Chen, Yu-Ren; Chen, Jian-Yu; Hsiue, Ging-Ho
2008-10-21
In this study, two nonlinear optic (NLO) hybrid materials with alkoxysilane dyes of different dimensionality were prepared and characterized. One NLO silane (Cz2PhSO2OH-TES), a two-dimensional structure based on carbazole, had a larger rotational volume than the other (DR19-TES). Second harmonic (d33) analysis verified that there is an optimum heating process for the best poling efficiency. The maximum d33 value of the NLO hybrid film containing Cz2PhSO2OH, 10.7 pm/V, was obtained after precuring at 150 °C for 3 h and poling at 210 °C for 60 min. The solid-state 29Si NMR spectrum shows that the main factor influencing poling efficiency and thermal stability was the cross-linking degree of the NLO silane, but not that of TMOS. In particular, the two-dimensional sol-gel system has greater dynamic and temporal stability than the one-dimensional system because Cz2PhSO2OH-TES requires a larger volume to rotate in the hybrid matrix after cross-linking.
NASA Technical Reports Server (NTRS)
Santoro, Robert J.; Pal, Sibtosh
1999-01-01
Rocket thrusters for Rocket Based Combined Cycle (RBCC) engines typically operate with hydrogen/oxygen propellants in a very compact space. Packaging considerations lead to designs with either axisymmetric or two-dimensional throat sections. Nozzles tend to be either two- or three-dimensional. Heat transfer characteristics, particularly in the throat, where the peak heat flux occurs, are not well understood. Heat transfer predictions for these small thrusters have been made with one-dimensional analysis such as the Bartz equation or scaling of test data from much larger thrusters. The current work addresses this issue with an experimental program that examines the heat transfer characteristics of a gaseous oxygen (GO2)/gaseous hydrogen (GH2) two-dimensional compact rocket thruster. The experiments involved measuring the axial wall temperature profile in the nozzle region of a water-cooled gaseous oxygen/gaseous hydrogen rocket thruster at a pressure of 3.45 MPa. The wall temperature measurements in the thruster nozzle in concert with Bartz's correlation are utilized in a one-dimensional model to obtain axial profiles of nozzle wall heat flux.
NASA Astrophysics Data System (ADS)
Zhang, Siqian; Kuang, Gangyao
2014-10-01
In this paper, a novel three-dimensional imaging algorithm for downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm is used. However, since the scattering centers are always correlated in a real SAR system, the estimated covariance matrix becomes singular. To address this problem, a three-dimensional spatial smoothing method is proposed to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces, and the main idea of the method is to form the array correlation matrix as the average of the correlation matrices of all the subspaces. In addition, the spectral height of the peaks contains no information about the scattering intensity of the different scattering centers, so it is difficult to reconstruct the backscattering information; a least-squares strategy is therefore used to estimate the amplitude of each scattering center. The theoretical analysis is verified by 3-D scene simulations and experiments on real data.
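The rank-restoring effect of spatial smoothing can be seen already in one dimension. The sketch below, with invented angles and array sizes, uses a uniform linear array and two fully coherent sources: the raw covariance is rank one, and averaging over sliding subarrays restores rank two. It only illustrates the principle; the paper's method operates on three-dimensional subspaces:

```python
import numpy as np

# Two fully coherent (correlated) narrowband sources on an 8-element ULA.
M, L = 8, 5                               # array size, subarray size
angles = np.deg2rad([10.0, 40.0])
n = np.arange(M)
A = np.exp(1j * np.pi * np.outer(n, np.sin(angles)))  # steering matrix
s = np.array([1.0, 1.0])                  # identical waveforms -> coherent
x = A @ s                                 # single coherent snapshot
R = np.outer(x, x.conj())                 # rank-1 covariance

def spatial_smoothing(R, L):
    """Average the covariance over sliding subarrays to restore rank."""
    M = R.shape[0]
    K = M - L + 1
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K

Rs = spatial_smoothing(R, L)
rank_before = np.linalg.matrix_rank(R)
rank_after = np.linalg.matrix_rank(Rs)
```

With the full-rank smoothed matrix, a subspace method such as MUSIC can again separate the signal and noise subspaces.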
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Baysal, Oktay
1997-01-01
A gradient-based shape optimization based on quasi-analytical sensitivities has been extended for practical three-dimensional aerodynamic applications. The flow analysis has been rendered by a fully implicit, finite-volume formulation of the Euler and Thin-Layer Navier-Stokes (TLNS) equations. Initially, the viscous laminar flow analysis for a wing has been compared with an independent computational fluid dynamics (CFD) code which has been extensively validated. The new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4, with coarse- and fine-grid based computations performed with the Euler and TLNS equations. The influence of the initial constraints on the geometry and aerodynamics of the optimized shape has been explored. Various final shapes, generated for an identical initial problem formulation but with different optimization path options (coarse or fine grid, Euler or TLNS), have been aerodynamically evaluated via a common fine-grid TLNS-based analysis. The initial constraint conditions show significant bearing on the optimization results. The results also demonstrate that to produce an aerodynamically efficient design, it is imperative to include the viscous physics in the optimization procedure with the proper resolution. Based upon the present results, to better utilize scarce computational resources, it is recommended that a number of viscous coarse-grid cases, using either a preconditioned bi-conjugate gradient (PbCG) or an alternating-direction-implicit (ADI) method, initially be employed to improve the optimization problem definition, the design space, and the initial shape. Optimized shapes should subsequently be analyzed using a high-fidelity (viscous, fine-grid) flow analysis to evaluate their true performance potential. Finally, a viscous fine-grid-based shape optimization should be conducted, using an ADI method, to accurately obtain the final optimized shape.
Analysis of the Three-Dimensional Vector Façade Model Created from Photogrammetric Data
NASA Astrophysics Data System (ADS)
Kamnev, I. S.; Seredovich, V. A.
2017-12-01
The results of the accuracy assessment analysis for the creation of a three-dimensional vector model of a building façade are described. In the framework of the analysis, an analytical comparison of three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning data was performed. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the three-dimensional model under analysis was superimposed on the reference one, coordinates were measured, and deviations between corresponding model points were determined. The accuracy of the three-dimensional model obtained using non-metric digital camera images was estimated, and the façade surface areas with the maximum deviations were identified.
Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide
2007-01-01
Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.
An Improved Treatment of External Boundary for Three-Dimensional Flow Computations
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon V.; Vatsa, Veer N.
1997-01-01
We present an innovative numerical approach for setting highly accurate nonlocal boundary conditions at the external computational boundaries when calculating three-dimensional compressible viscous flows over finite bodies. The approach is based on application of the difference potentials method by V. S. Ryaben'kii and extends our previous technique developed for the two-dimensional case. The new boundary conditions methodology has been successfully combined with the NASA-developed code TLNS3D and used for the analysis of wing-shaped configurations in subsonic and transonic flow regimes. As demonstrated by the computational experiments, the improved external boundary conditions allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable speedup of convergence of the multigrid iterations.
Origami interleaved tube cellular materials
NASA Astrophysics Data System (ADS)
Cheung, Kenneth C.; Tachi, Tomohiro; Calisch, Sam; Miura, Koryo
2014-09-01
A novel origami cellular material based on a deployable cellular origami structure is described. The structure is bi-directionally flat-foldable in two orthogonal (x and y) directions and is relatively stiff in the third orthogonal (z) direction. While such mechanical orthotropicity is well known in cellular materials with extruded two dimensional geometry, the interleaved tube geometry presented here consists of two orthogonal axes of interleaved tubes with high interfacial surface area and relative volume that changes with fold-state. In addition, the foldability still allows for fabrication by a flat lamination process, similar to methods used for conventional expanded two dimensional cellular materials. This article presents the geometric characteristics of the structure together with corresponding kinematic and mechanical modeling, explaining the orthotropic elastic behavior of the structure with classical dimensional scaling analysis.
Thermal conductivity in one-dimensional nonlinear systems
NASA Astrophysics Data System (ADS)
Politi, Antonio; Giardinà, Cristian; Livi, Roberto; Vassalli, Massimo
2000-03-01
Thermal conductivity of one-dimensional nonlinear systems typically diverges in the thermodynamic limit whenever momentum is conserved (i.e., in the absence of interactions with an external substrate). Evidence comes from detailed studies of Fermi-Pasta-Ulam and diatomic Toda chains. Here, we discuss the first example of a one-dimensional system obeying Fourier's law: a chain of coupled rotators. Numerical estimates of the thermal conductivity obtained by simulating a chain in contact with two thermal baths at different temperatures are found to be consistent with those based on linear response theory. The dynamics of the Fourier modes provides direct evidence of energy diffusion. The finiteness of the conductivity is traced back to the occurrence of phase jumps. Our conclusions are confirmed by the analysis of two variants of the rotator model.
Low dimensional model of heart rhythm dynamics as a tool for diagnosing the anaerobic threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anosov, O.L.; Butkovskii, O.Y.; Kadtke, J.
We report preliminary results on describing the dependence of heart rhythm variability on the stress level by using qualitative, low-dimensional models. The reconstruction of macroscopic heart models yielding cardio-cycle (RR-interval) durations was based on actual clinical data. Our results show that the coefficients of the low-dimensional models are sensitive to metabolic changes. In particular, at the transition between aerobic and aerobic-anaerobic metabolism, there are pronounced extrema in the functional dependence of the coefficients on the stress level. This strong sensitivity can be used to design an easy indirect method for determining the anaerobic threshold. This method could replace costly and invasive traditional methods such as gas analysis and blood tests. © 1997 American Institute of Physics.
Three-dimensional analysis of magnetometer array data
NASA Technical Reports Server (NTRS)
Richmond, A. D.; Baumjohann, W.
1984-01-01
A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.
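Optimal linear (minimum-variance, Gauss-Markov) estimation from scattered observations can be sketched in a few lines once a spatial autocorrelation model is assumed. The Gaussian correlation function, correlation length, noise level, and synthetic "variation field" below are illustrative assumptions, not the statistics estimated for the Scandinavian array:

```python
import numpy as np

def optimal_linear_estimate(obs_xy, obs_val, grid_xy, corr_len, noise_var):
    """Minimum-variance linear estimate of a field at grid points from
    scattered observations, given an assumed Gaussian autocorrelation."""
    def corr(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * corr_len ** 2))
    C_oo = corr(obs_xy, obs_xy) + noise_var * np.eye(len(obs_xy))
    C_go = corr(grid_xy, obs_xy)
    return C_go @ np.linalg.solve(C_oo, obs_val)

# Smooth synthetic field sampled at scattered "stations" in a unit square.
rng = np.random.default_rng(5)
obs_xy = rng.random((30, 2))
obs_val = np.sin(3.0 * obs_xy[:, 0]) + np.cos(2.0 * obs_xy[:, 1])
est = optimal_linear_estimate(obs_xy, obs_val, obs_xy,
                              corr_len=0.3, noise_var=1e-8)
```

With near-zero observation noise the estimate reproduces the observations at the station locations; the same weights, applied at unobserved grid points, give the mapped field.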
Kernel PLS-SVC for Linear and Nonlinear Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan
2003-01-01
A new methodology for discrimination is proposed, based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described. This motivates the use of orthonormalized PLS in preference to principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes; a common choice is the sample correlation matrix. Dimensionality reduction is another popular data analysis task that is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are sampling variations, the presence of outlying sample units, and the fact that in most cases the number of sample units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers than the existing alternatives on two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other, less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input.
The R software is available at https://github.com/angy89/RobustSparseCorrelation. aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online.
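The thresholding idea can be illustrated with a small sketch. This is a generic hard-thresholded rank correlation, not the authors' adaptive estimator: the Spearman correlation stands in for their robust correlation measure, and the universal threshold of order sqrt(log p / n) with constant c is an illustrative assumption.

```python
import numpy as np

def spearman_corr(X):
    """Rank-based correlation matrix, robust to gross outliers.
    X: (n, p) matrix, n sample units by p genes (assumes no ties)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def threshold_corr(R, n, c=1.0):
    """Zero out off-diagonal entries below a universal threshold
    lam = c * sqrt(log(p) / n), which filters spurious correlations."""
    p = R.shape[0]
    lam = c * np.sqrt(np.log(p) / n)
    Rt = np.where(np.abs(R) >= lam, R, 0.0)
    np.fill_diagonal(Rt, 1.0)
    return Rt

rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)   # one truly correlated pair
X[0, 2] = 50.0                                  # a gross outlier
Rt = threshold_corr(spearman_corr(X), n)
```

Because the estimator operates on ranks, the single outlier in column 2 barely perturbs the matrix, while thresholding removes most of the noise-driven off-diagonal entries.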
A simple new filter for nonlinear high-dimensional data assimilation
NASA Astrophysics Data System (ADS)
Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo
2015-04-01
The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. 
The NETF is stable, behaves reasonably, and shows good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully, and as simply as the ETKF, in high-dimensional problems without further modifications of the algorithm, even though it is based only on the particle weights. This demonstrates that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
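The core transform can be sketched in a few lines of NumPy. This is a simplified rendition under stated assumptions: a linear observation operator, a single scalar observation, and no random rotation or localization; the state dimension, ensemble size, and error variance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3                       # ensemble size, state dimension
Xf = rng.normal(size=(n, m))       # forecast ensemble (columns = members)
y = np.array([0.5])                # single scalar observation
H = np.array([[1.0, 0.0, 0.0]])    # linear observation operator
R = 0.1                            # observation error variance

# Particle weights from the Gaussian likelihood (Bayes' theorem)
innov = (y[:, None] - H @ Xf)[0]
logw = -0.5 * innov**2 / R
w = np.exp(logw - logw.max())
w /= w.sum()

# Bayesian analysis mean and covariance of the weighted ensemble
xa_mean = Xf @ w
W = np.diag(w) - np.outer(w, w)    # positive semi-definite by construction
Pa = Xf @ W @ Xf.T

# Deterministic matrix square-root transform of the forecast ensemble;
# W @ 1 = 0, so the analysis perturbations sum to zero by construction
vals, vecs = np.linalg.eigh(W)
Wsqrt = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
Xa_pert = np.sqrt(m) * (Xf @ Wsqrt)
Xa = xa_mean[:, None] + Xa_pert    # analysis ensemble
```

The resulting analysis ensemble reproduces the weighted (Bayesian) mean exactly, and its second moment matches the Bayesian covariance estimate, which is the defining property of the transform.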
Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E
2015-04-07
Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
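The F-ratio at the heart of such a class comparison can be sketched as a per-feature one-way ANOVA. This is a generic illustration of the statistic, not the tile-based GC × GC-TOFMS algorithm itself; the synthetic data and variable names are ours.

```python
import numpy as np

def fisher_ratios(X, labels):
    """Per-feature ANOVA F-ratio: between-class variance over
    within-class variance. X: (samples, features); labels: class ids."""
    classes = np.unique(labels)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[labels == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    k, n = len(classes), len(X)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                 # 40 samples, 5 features
labels = np.array([0] * 20 + [1] * 20)
X[labels == 1, 0] += 3.0                     # one class-distinguishing feature
F = fisher_ratios(X, labels)
```

Features with large F-ratios are the class-distinguishing candidates; a cutoff such as the paper's null-distribution analysis is then needed to separate true positives from chance.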
ERIC Educational Resources Information Center
Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka
2015-01-01
The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…
Modal analysis on resonant excitation of two-dimensional waveguide grating filters
NASA Astrophysics Data System (ADS)
Zhou, Jianyu; Sang, Tian; Li, Junlang; Wang, Rui; Wang, La; Wang, Benxin; Wang, Yueke
2017-12-01
Modal analysis on resonant excitation of two-dimensional (2-D) waveguide grating filters (WGFs) is proposed. It is shown that the 2-D WGFs can support the excitation of a resonant pair, and the locations of the resonant pair arising from the TE and TM guided-mode resonances (GMRs) can be estimated accurately based on the modal analysis. Multichannel filtering using the resonant pair is investigated, and the antireflection (AR) design of the 2-D WGFs is also studied. It is shown that the reflection sideband can be reduced by placing an AR layer on the bottom of the homogeneous layer, and the well-shaped reflection spectrum with near-zero sideband reflection can be achieved by using the double-faced AR design. By merely increasing the thickness of the homogeneous layer with other parameters maintained, the spectrally dense comb-like filters with good unpolarized filtering features can be achieved. The proposed modal analysis can be extended to study the resonant excitation of 2-D periodic nanoarrays with diverse surface profiles.
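The location of a guided-mode resonance can be estimated with a textbook sketch rather than the paper's full modal analysis: at normal incidence, first-order resonance occurs near lambda ≈ n_eff × period, where n_eff is the effective index of the slab mode phase-matched by the grating. All material and geometry values below are illustrative assumptions, and only the fundamental TE mode of a symmetric slab is treated.

```python
import numpy as np

def slab_neff_te0(lam, n1, n2, d):
    """Effective index of the fundamental TE mode of a symmetric slab
    (core n1, cladding n2, thickness d), by bisection on the standard
    dispersion relation tan(kappa*d/2) = gamma/kappa."""
    k0 = 2 * np.pi / lam
    def f(neff):
        kappa = k0 * np.sqrt(n1**2 - neff**2)
        gamma = k0 * np.sqrt(neff**2 - n2**2)
        return kappa * d / 2 - np.arctan2(gamma, kappa)
    lo, hi = n2 + 1e-9, n1 - 1e-9
    for _ in range(80):            # f is monotone decreasing on (n2, n1)
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gmr_wavelength(period, n1, n2, d, iters=20):
    """First-order GMR estimate at normal incidence: lam ~ neff * period,
    solved by fixed-point iteration since neff itself depends on lam."""
    lam = n1 * period               # initial guess
    for _ in range(iters):
        lam = slab_neff_te0(lam, n1, n2, d) * period
    return lam

neff = slab_neff_te0(1.5e-6, 2.0, 1.45, 0.3e-6)
lam_res = gmr_wavelength(period=0.9e-6, n1=2.0, n2=1.45, d=0.3e-6)
```

Because the guided mode's effective index lies between the cladding and core indices, the resonance estimate is bracketed by n2 × period and n1 × period; the TE/TM split of the resonant pair follows from solving the corresponding TM dispersion relation.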
User-Defined Material Model for Progressive Failure Analysis
NASA Technical Reports Server (NTRS)
Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)
2006-01-01
An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis, including failure initiation and material degradation, are presented. Different failure initiation criteria and material degradation models that define progressive failure formulations are described. These progressive failure formulations are implemented in a user-defined material model (UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criterion, the maximum strain criterion, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach, in which the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
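The ply-discounting logic can be sketched in a few lines. Actual ABAQUS UMATs are Fortran subroutines with a fixed solver interface; this Python sketch only illustrates one failure check (maximum stress) and the associated stiffness knock-down, with hypothetical ply allowables that are not from the paper.

```python
# Illustrative ply allowables in material axes (hypothetical values, MPa)
XT, XC = 1500.0, 1200.0    # fiber-direction tension/compression strengths
YT, YC = 50.0, 200.0       # transverse tension/compression strengths
S12 = 70.0                 # in-plane shear strength
DEGRADE = 1e-3             # stiffness knock-down factor for failed modes

def max_stress_failure(sig):
    """Maximum stress criterion for a ply in material axes.
    sig = (s11, s22, s12); returns flags for fiber/matrix/shear failure."""
    s11, s22, s12 = sig
    fiber = s11 > XT or s11 < -XC
    matrix = s22 > YT or s22 < -YC
    shear = abs(s12) > S12
    return fiber, matrix, shear

def ply_discount(props, flags):
    """Ply-discounting degradation: knock down the modulus associated
    with each failed mode, leaving the others intact."""
    E1, E2, G12 = props
    fiber, matrix, shear = flags
    if fiber:
        E1 *= DEGRADE
    if matrix:
        E2 *= DEGRADE
    if shear:
        G12 *= DEGRADE
    return E1, E2, G12

flags = max_stress_failure((100.0, 60.0, 10.0))   # transverse overload only
E1, E2, G12 = ply_discount((130e3, 10e3, 5e3), flags)
```

In a real UMAT the flags would be stored as solution-dependent state variables so that degradation persists across load increments.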
Extending unbiased stereology of brain ultrastructure to three-dimensional volumes
NASA Technical Reports Server (NTRS)
Fiala, J. C.; Harris, K. M.; Koslow, S. H. (Principal Investigator)
2001-01-01
OBJECTIVE: Analysis of brain ultrastructure is needed to reveal how neurons communicate with one another via synapses and how disease processes alter this communication. In the past, such analyses have usually been based on single or paired sections obtained by electron microscopy. Reconstruction from multiple serial sections provides a much needed, richer representation of the three-dimensional organization of the brain. This paper introduces a new reconstruction system and new methods for analyzing in three dimensions the location and ultrastructure of neuronal components, such as synapses, which are distributed non-randomly throughout the brain. DESIGN AND MEASUREMENTS: Volumes are reconstructed by defining transformations that align the entire area of adjacent sections. Whole-field alignment requires rotation, translation, skew, scaling, and second-order nonlinear deformations. Such transformations are implemented by a linear combination of bivariate polynomials. Computer software for generating transformations based on user input is described. Stereological techniques for assessing structural distributions in reconstructed volumes are the unbiased bricking, disector, unbiased ratio, and per-length counting techniques. A new general method, the fractional counter, is also described. This unbiased technique relies on the counting of fractions of objects contained in a test volume. A volume of brain tissue from stratum radiatum of hippocampal area CA1 is reconstructed and analyzed for synaptic density to demonstrate and compare the techniques. RESULTS AND CONCLUSIONS: Reconstruction makes practicable volume-oriented analysis of ultrastructure using such techniques as the unbiased bricking and fractional counter methods. These analysis methods are less sensitive to the section-to-section variations in counts and section thickness, factors that contribute to the inaccuracy of other stereological methods. 
In addition, volume reconstruction facilitates visualization and modeling of structures and analysis of three-dimensional relationships such as synaptic connectivity.
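The unbiased bricking rule can be illustrated schematically, assuming each object is reduced to a single marker point: a marker is counted if it lies inside the brick, with the low faces acting as inclusion planes and the high faces as exclusion planes, so that bricks tiling a volume count each object exactly once.

```python
import numpy as np

def brick_count(points, lo, hi):
    """Unbiased brick counting rule: include a marker point if it lies
    inside [lo, hi), i.e. low faces inclusive, high faces exclusive,
    so adjacent bricks never double-count a point on a shared face."""
    points = np.asarray(points)
    inside = np.all((points >= lo) & (points < hi), axis=1)
    return int(inside.sum())

rng = np.random.default_rng(2)
pts = rng.uniform(0, 2, size=(1000, 3))     # synthetic object markers
# Tile the 2 x 2 x 2 volume into eight unit bricks: counts must sum to 1000
total = sum(
    brick_count(pts, np.array([i, j, k]), np.array([i + 1, j + 1, k + 1]))
    for i in (0, 1) for j in (0, 1) for k in (0, 1)
)
density = total / 8.0     # numerical density per unit volume
```

The real counting rules operate on extended objects in aligned serial-section volumes, but the inclusion/exclusion-face convention shown here is the essential device that removes edge bias.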