Sample records for normalized scale transform

  1. Size effects on the martensitic phase transformation of NiTi nanograins

    NASA Astrophysics Data System (ADS)

    Waitz, T.; Antretter, T.; Fischer, F. D.; Simha, N. K.; Karnthaler, H. P.

    2007-02-01

The analysis of nanocrystalline NiTi by transmission electron microscopy (TEM) shows that the martensitic transformation proceeds by the formation of atomic-scale twins. Grains of a size less than about 50 nm do not transform to martensite even upon large undercooling. A systematic investigation of these phenomena was carried out elucidating the influence of the grain size on the energy barrier of the transformation. Based on the experiment, nanograins were modeled as spherical inclusions containing (0 0 1) compound twinned martensite. Decomposition of the transformation strains of the inclusions into a shear eigenstrain and a normal eigenstrain facilitates the analytical calculation of shear and normal strain energies as a function of grain size, twin layer width and elastic properties. Stresses were computed analytically for special cases, otherwise numerically. The shear stresses that alternate from twin layer to twin layer are concentrated at the grain boundaries, causing a contribution to the strain energy that scales with the surface area of the inclusion, whereas the strain energy induced by the normal components of the transformation strain and the temperature-dependent chemical free energy scale with the volume of the inclusion. These different energy contributions were calculated for the nanograins, allowing the prediction of a critical grain size below which the martensitic transformation becomes unlikely. Finally, the experimental result of the atomic-scale twinning can be explained by analytical calculations that account for the transformation-opposing contributions of the shear strain and the twin boundary energy of the twin-banded morphology of martensitic nanograins.
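
    What drives the prediction of a critical grain size is the competition between an energy barrier that scales with the grain's surface area (the alternating shear stresses concentrated at the grain boundaries, together with the twin boundary energy) and a chemical driving force that scales with its volume. A minimal sketch of that d²-versus-d³ argument, with purely illustrative coefficients rather than the values computed in the paper, follows:

      import numpy as np

      # Illustrative (not fitted) energy terms for a spherical grain of diameter d:
      #   barrier  ~ a * d**2   (shear strain energy + twin boundary energy, scales with surface)
      #   driving  ~ b * d**3   (chemical free energy gain minus normal-strain energy, scales with volume)
      a = 0.8    # barrier coefficient, arbitrary units per nm^2 (assumed)
      b = 0.016  # driving-force coefficient, arbitrary units per nm^3 (assumed)

      d = np.linspace(1.0, 120.0, 500)      # grain diameter in nm
      net_gain = b * d**3 - a * d**2        # transformation favourable where net_gain > 0

      d_crit = a / b                        # analytic crossover: b*d^3 = a*d^2  =>  d = a/b
      print(f"critical diameter ~ {d_crit:.0f} nm; below this the surface-scaling barrier wins")

    With these toy coefficients the crossover falls near 50 nm; in the paper the two coefficients follow from the elastic analysis of the twinned spherical inclusion rather than being chosen by hand.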

  2. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
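
    A hedged illustration of the simplest of the three transformations, the normal score transform (values replaced by the standard normal quantiles of their ranks), is given below; it is not the authors' implementation, and in practice kriging is carried out on the scores and predictions are back-transformed through the same quantile mapping.

      import numpy as np
      from scipy import stats

      def normal_score_transform(x):
          """Rank-based mapping of a sample to standard normal scores."""
          x = np.asarray(x, dtype=float)
          ranks = stats.rankdata(x)              # average ranks for ties
          p = (ranks - 0.5) / len(x)             # plotting positions in (0, 1)
          return stats.norm.ppf(p)               # standard normal quantiles

      # Example: heavily skewed concentrations become ~N(0, 1) scores
      conc = np.random.default_rng(0).lognormal(mean=1.0, sigma=1.5, size=200)
      z = normal_score_transform(conc)
      print(stats.skew(conc), stats.skew(z))     # skewness drops to ~0 after the transform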

  4. An Alternate Definition of the ETS Delta Scale of Item Difficulty. Program Statistics Research.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

An alternative definition of the delta scale of item difficulty used at Educational Testing Service has been developed. The traditional delta scale uses an inverse normal transformation based on normal ogive models developed years ago. However, no use is made of this fact in typical uses of item deltas. It is simply one way to make the probability…
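
    For context, the conventional delta scale maps an item's proportion correct through an inverse normal transformation onto a scale with mean 13 and standard deviation 4, so that harder items receive larger deltas. A sketch of that commonly cited conventional form (not the alternative definition proposed in the report) is:

      from scipy.stats import norm

      def item_delta(p_correct):
          """Conventional ETS delta: inverse-normal transform of the proportion correct,
          rescaled to mean 13, SD 4; larger delta = harder item."""
          return 13.0 - 4.0 * norm.ppf(p_correct)

      for p in (0.9, 0.5, 0.2):
          print(p, round(item_delta(p), 2))   # easy items ~ 8, mid-difficulty ~ 13, hard ~ 16+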

  5. At what scale should microarray data be analyzed?

    PubMed

    Huang, Shuguang; Yeo, Adeline A; Gelbert, Lawrence; Lin, Xi; Nisenbaum, Laura; Bemis, Kerry G

    2004-01-01

    The hybridization intensities derived from microarray experiments, for example Affymetrix's MAS5 signals, are very often transformed in one way or another before statistical models are fitted. The motivation for performing transformation is usually to satisfy the model assumptions such as normality and homogeneity in variance. Generally speaking, two types of strategies are often applied to microarray data depending on the analysis need: correlation analysis where all the gene intensities on the array are considered simultaneously, and gene-by-gene ANOVA where each gene is analyzed individually. We investigate the distributional properties of the Affymetrix GeneChip signal data under the two scenarios, focusing on the impact of analyzing the data at an inappropriate scale. The Box-Cox type of transformation is first investigated for the strategy of pooling genes. The commonly used log-transformation is particularly applied for comparison purposes. For the scenario where analysis is on a gene-by-gene basis, the model assumptions such as normality are explored. The impact of using a wrong scale is illustrated by log-transformation and quartic-root transformation. When all the genes on the array are considered together, the dependent relationship between the expression and its variation level can be satisfactorily removed by Box-Cox transformation. When genes are analyzed individually, the distributional properties of the intensities are shown to be gene dependent. Derivation and simulation show that some loss of power is incurred when a wrong scale is used, but due to the robustness of the t-test, the loss is acceptable when the fold-change is not very large.

  6. Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.

    PubMed

    Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent

    2018-02-01

The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, was applied to find the transform that (a) maximized a non-disputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
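
    A minimal sketch of the recommended analysis choice, log-transforming alpha power at the single-epoch level before averaging and testing the Berger effect, might look like the following; the synthetic epochs, sampling rate and Welch band-power estimate are stand-ins for a real EEG pipeline:

      import numpy as np
      from scipy import signal, stats

      rng = np.random.default_rng(1)
      fs = 250                                          # sampling rate in Hz (assumed)

      def alpha_power(epoch, lo=8.0, hi=12.0):
          """Mean power in the alpha band for one epoch via Welch's method."""
          f, pxx = signal.welch(epoch, fs=fs, nperseg=len(epoch))
          return pxx[(f >= lo) & (f <= hi)].mean()

      # Synthetic stand-ins: eyes-closed epochs carry more 10 Hz power than eyes-open ones
      t = np.arange(2 * fs) / fs
      closed = [np.sin(2*np.pi*10*t) * rng.lognormal(0, .5) + rng.normal(0, 1, t.size) for _ in range(60)]
      open_  = [np.sin(2*np.pi*10*t) * rng.lognormal(-1, .5) + rng.normal(0, 1, t.size) for _ in range(60)]

      log_closed = np.log([alpha_power(e) for e in closed])   # log on single epochs,
      log_open   = np.log([alpha_power(e) for e in open_])    # then average / test
      print(stats.ttest_ind(log_closed, log_open))            # Berger effect contrast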

  7. A New Distribution Family for Microarray Data †

    PubMed Central

    Kelmansky, Diana Mabel; Ricci, Lila

    2017-01-01

    The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal indexed by p∈R is introduced and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles, in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R codes are available from the authors upon request. PMID:28208652

  8. A New Distribution Family for Microarray Data.

    PubMed

    Kelmansky, Diana Mabel; Ricci, Lila

    2017-02-10

The traditional approach with microarray data has been to apply transformations that approximately normalize them, with the drawback of losing the original scale. The alternative standpoint taken here is to search for models that fit the data, characterized by the presence of negative values, preserving their scale; one advantage of this strategy is that it facilitates a direct interpretation of the results. A new family of distributions named gpower-normal indexed by p∈R is introduced and it is proven that these variables become normal or truncated normal when a suitable gpower transformation is applied. Expressions are given for moments and quantiles, in terms of the truncated normal density. This new family can be used to model asymmetric data that include non-positive values, as required for microarray analysis. Moreover, it has been proven that the gpower-normal family is a special case of pseudo-dispersion models, inheriting all the good properties of these models, such as asymptotic normality for small variances. A combined maximum likelihood method is proposed to estimate the model parameters, and it is applied to microarray and contamination data. R codes are available from the authors upon request.

  9. Fractal Dimension Analysis of Transient Visual Evoked Potentials: Optimisation and Applications.

    PubMed

    Boon, Mei Ying; Henry, Bruce Ian; Chu, Byoung Sun; Basahi, Nour; Suttle, Catherine May; Luu, Chi; Leung, Harry; Hing, Stephen

    2016-01-01

    The visual evoked potential (VEP) provides a time series signal response to an external visual stimulus at the location of the visual cortex. The major VEP signal components, peak latency and amplitude, may be affected by disease processes. Additionally, the VEP contains fine detailed and non-periodic structure, of presently unclear relevance to normal function, which may be quantified using the fractal dimension. The purpose of this study is to provide a systematic investigation of the key parameters in the measurement of the fractal dimension of VEPs, to develop an optimal analysis protocol for application. VEP time series were mathematically transformed using delay time, τ, and embedding dimension, m, parameters. The fractal dimension of the transformed data was obtained from a scaling analysis based on straight line fits to the numbers of pairs of points with separation less than r versus log(r) in the transformed space. Optimal τ, m, and scaling analysis were obtained by comparing the consistency of results using different sampling frequencies. The optimised method was then piloted on samples of normal and abnormal VEPs. Consistent fractal dimension estimates were obtained using τ = 4 ms, designating the fractal dimension = D2 of the time series based on embedding dimension m = 7 (for 3606 Hz and 5000 Hz), m = 6 (for 1803 Hz) and m = 5 (for 1000Hz), and estimating D2 for each embedding dimension as the steepest slope of the linear scaling region in the plot of log(C(r)) vs log(r) provided the scaling region occurred within the middle third of the plot. Piloting revealed that fractal dimensions were higher from the sampled abnormal than normal achromatic VEPs in adults (p = 0.02). Variances of fractal dimension were higher from the abnormal than normal chromatic VEPs in children (p = 0.01). A useful analysis protocol to assess the fractal dimension of transformed VEPs has been developed.
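
    The scaling analysis described is essentially the Grassberger-Procaccia correlation dimension: delay-embed the series, count pairs of embedded points closer than r, and take the slope of log(C(r)) versus log(r) over the linear region. A compact sketch under those standard definitions (with tau given in samples rather than the 4 ms quoted in the abstract, and without the authors' rule for selecting the scaling region) is:

      import numpy as np

      def correlation_dimension(x, m=7, tau=4, r_values=None):
          """Estimate D2 of a 1-D series via delay embedding and the correlation sum.
          tau is the delay in samples (assumed), m the embedding dimension."""
          x = np.asarray(x, dtype=float)
          n = len(x) - (m - 1) * tau
          emb = np.column_stack([x[i*tau : i*tau + n] for i in range(m)])   # delay vectors
          d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))   # pairwise distances
          d = d[np.triu_indices(n, k=1)]
          if r_values is None:
              r_values = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
          C = np.array([(d < r).mean() for r in r_values])                  # correlation sum C(r)
          keep = C > 0
          # slope of log C(r) vs log r over the (assumed) linear scaling region
          slope, _ = np.polyfit(np.log(r_values[keep]), np.log(C[keep]), 1)
          return slope

      rng = np.random.default_rng(0)
      print(correlation_dimension(rng.normal(size=400), m=5, tau=2))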

  10. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
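
    The practical recommendation, meta-analysing the C-statistic on the logit scale (and E/O on the log scale), can be sketched as follows; the delta-method standard error and DerSimonian-Laird pooling used here are common choices rather than necessarily the exact ones in the paper, and the numbers are illustrative:

      import numpy as np

      def logit(p):
          return np.log(p / (1 - p))

      # Per-study C-statistics and their standard errors (illustrative numbers)
      c  = np.array([0.72, 0.78, 0.69, 0.81, 0.75])
      se = np.array([0.02, 0.03, 0.04, 0.02, 0.03])

      theta    = logit(c)
      se_theta = se / (c * (1 - c))          # delta-method SE on the logit scale

      # DerSimonian-Laird random-effects pooling on the transformed scale
      w  = 1 / se_theta**2
      mu_fixed = np.sum(w * theta) / np.sum(w)
      Q  = np.sum(w * (theta - mu_fixed)**2)
      tau2 = max(0.0, (Q - (len(c) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1 / (se_theta**2 + tau2)
      mu_re = np.sum(w_re * theta) / np.sum(w_re)

      pooled_c = 1 / (1 + np.exp(-mu_re))    # back-transform to the C-statistic scale
      print(round(pooled_c, 3), round(tau2, 4))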

  11. Shift-, rotation-, and scale-invariant shape recognition system using an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Bader, Gerhard; Lueder, Ernst H.

    1998-02-01

    We present a hybrid shape recognition system with an optical Hough transform processor. The features of the Hough space offer a separate cancellation of distortions caused by translations and rotations. Scale invariance is also provided by suitable normalization. The proposed system extends the capabilities of Hough transform based detection from only straight lines to areas bounded by edges. A very compact optical design is achieved by a microlens array processor accepting incoherent light as direct optical input and realizing the computationally expensive connections massively parallel. Our newly developed algorithm extracts rotation and translation invariant normalized patterns of bright spots on a 2D grid. A neural network classifier maps the 2D features via a nonlinear hidden layer onto the classification output vector. We propose initialization of the connection weights according to regions of activity specifically assigned to each neuron in the hidden layer using a competitive network. The presented system is designed for industry inspection applications. Presently we have demonstrated detection of six different machined parts in real-time. Our method yields very promising detection results of more than 96% correctly classified parts.

  12. Fechner's law: where does the log transform come from?

    PubMed

    Laming, Donald

    2010-01-01

This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ2 model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ2 model, the logarithm of a χ2 variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
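
    The key technical claim, that the logarithm of a χ2 variable with moderately many degrees of freedom is close to normal, is easy to check numerically; a small simulation sketch:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      for df in (2, 10, 50):
          x = rng.chisquare(df, size=5000)
          # compare the asymmetry of the raw and log-transformed samples
          print(df, round(stats.skew(x), 2), round(stats.skew(np.log(x)), 2))
      # The skewness of log(chi2) shrinks towards 0 as df grows, i.e. log(chi2) is approximately normal.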

  13. Directional Multi-scale Modeling of High-Resolution Computed Tomography (HRCT) Lung Images for Diffuse Lung Disease Classification

    NASA Astrophysics Data System (ADS)

    Vo, Kiet T.; Sowmya, Arcot

A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO) and honeycombing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by wavelet and contourlet transforms. In addition, support vector machines (SVMs), with excellent performance in a variety of pattern classification problems, are used as the classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512 × 512, 16 bits/pixel in DICOM format. The dataset contains 70,000 ROIs of those slices marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here achieves a best overall sensitivity of 93.40% and specificity of 98.40%.

  14. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.) (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality: Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.
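
    The package itself is FORTRAN-77, but the flavour of its output, classical and robust location/scale estimates plus moment-based normality checks, can be sketched in a few lines of Python (a rough analogue with SciPy stand-ins such as the Shapiro-Wilk test, not a port of ROBUST):

      import numpy as np
      from scipy import stats

      x = np.random.default_rng(0).lognormal(0.0, 0.8, size=60)   # deliberately skewed sample

      location = {
          "arithmetic mean": x.mean(),
          "geometric mean":  stats.gmean(x),
          "harmonic mean":   stats.hmean(x),
          "median":          np.median(x),
          "10% trimmed mean": stats.trim_mean(x, 0.10),
      }
      scale = {
          "standard deviation": x.std(ddof=1),
          "semi-interquartile range": (np.percentile(x, 75) - np.percentile(x, 25)) / 2,
          "MAD about the median": stats.median_abs_deviation(x),
      }
      normality = {
          "skewness (sqrt(b1))": stats.skew(x),
          "kurtosis (b2)":       stats.kurtosis(x, fisher=False),
          "Shapiro-Wilk p":      stats.shapiro(x).pvalue,
      }
      for table in (location, scale, normality):
          for name, value in table.items():
              print(f"{name:28s} {value: .3f}")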

  15. Normalizing and scaling of data to derive human response corridors from impact tests.

    PubMed

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A

    2014-06-03

It is well known that variability is inherent in any biological experiment. Human cadavers (Post-Mortem Human Subjects, PMHS) are routinely used to determine responses to impact loading for crashworthiness applications including civilian (motor vehicle) and military environments. It is important to transform measured variables from PMHS tests (accelerations, forces and deflections) to a standard or reference population, a process termed normalization. The transformation process should account for inter-specimen variations, with some underlying assumptions used during normalization. Scaling is a process by which normalized responses are converted from one standard to another (for example, from the mid-size adult male to large-male and small-size female adults, and to pediatric populations). These responses are used to derive corridors to assess the biofidelity of anthropomorphic test devices (crash dummies) used to predict injury in impact environments and design injury mitigating devices. This survey examines the pros and cons of different approaches for obtaining normalized and scaled responses and corridors used in biomechanical studies for over four decades. Specifically, the equal-stress equal-velocity and impulse-momentum methods along with their variations are discussed in this review. Methods ranging from subjective to quasi-static loading to different approaches are discussed for deriving temporal mean and ±1 standard deviation human corridors of time-varying fundamental responses and cross variables (e.g., force-deflection). The survey offers some insights into the potential efficacy of these approaches with examples from recent impact tests and concludes with recommendations for future studies. The importance of considering various parameters during the experimental design of human impact tests is stressed. Published by Elsevier Ltd.

  16. Vortex locking in direct numerical simulations of quantum turbulence.

    PubMed

    Morris, Karla; Koplik, Joel; Rouson, Damian W I

    2008-07-04

    Direct numerical simulations are used to examine the locking of quantized superfluid vortices and normal fluid vorticity in evolving turbulent flows. The superfluid is driven by the normal fluid, which undergoes either a decaying Taylor-Green flow or a linearly forced homogeneous isotropic turbulent flow, although the back reaction of the superfluid on the normal fluid flow is omitted. Using correlation functions and wavelet transforms, we present numerical and visual evidence for vortex locking on length scales above the intervortex spacing.

  17. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    PubMed

    Sethi, Gaurav; Saini, B S

    2015-12-01

This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring, which leads to the least-fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the optimal solution search. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Thereby, all of the least-fit parent chromosomes are combined with high-fitness parents to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images; 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
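
    The pairing rule described, every least-fit chromosome is mated with a high-fitness one so that weak genetic material is never paired with itself, can be sketched as a small modification of a standard generational GA; this is an illustrative reading of the abstract, not the authors' implementation, and fitness, crossover and mutate are assumed user-supplied callables:

      import random

      def next_generation(population, fitness, crossover, mutate):
          """Pair each low-fitness parent with a high-fitness parent (assumed pairing rule)."""
          ranked = sorted(population, key=fitness, reverse=True)   # best first
          half = len(ranked) // 2
          strong, weak = ranked[:half], ranked[half:]
          offspring = []
          for w in weak:
              s = random.choice(strong)          # a weak parent is always crossed with a strong one
              offspring.extend(crossover(s, w))  # crossover returns an iterable of children
          # modified elitism: carry the strong half forward unchanged, fill the rest with offspring
          new_pop = strong + [mutate(child) for child in offspring]
          return new_pop[:len(population)]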

  18. Rapid estimation of frequency response functions by close-range photogrammetry

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1985-01-01

The accuracy of a rapid method which estimates the frequency response function from stereoscopic dynamic data is computed. It is shown that reversal of the order of the operations of coordinate transformation and Fourier transformation, which provides a significant increase in computational speed, introduces error. A portion of the error, proportional to the perturbation components normal to the camera focal planes, cannot be eliminated. The remaining error may be eliminated by proper scaling of frequency data prior to coordinate transformation. Methods are developed for least squares estimation of the full 3 × 3 frequency response matrix for a three-dimensional structure.

  19. Automatic Brain Portion Segmentation From Magnetic Resonance Images of Head Scans Using Gray Scale Transformation and Morphological Operations.

    PubMed

    Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan

    2015-01-01

To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray scale transformation and morphological operations. The proposed method has been tested with 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. This is a low computational complexity method but gives competitive or better results than those of the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
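
    A generic sketch of the two ingredients named in the abstract, a gray-scale (intensity) transformation followed by binary morphological clean-up and largest-component selection, is given below using scipy.ndimage; the thresholds and iteration counts are placeholders, not the published parameters:

      import numpy as np
      from scipy import ndimage

      def rough_brain_mask(slice2d, low=0.25, high=0.95):
          """Toy skull-stripping sketch: contrast stretch, threshold, morphology, largest component."""
          img = slice2d.astype(float)
          img = (img - img.min()) / (np.ptp(img) + 1e-9)        # gray-scale normalization to [0, 1]
          mask = (img > low) & (img < high)                     # crude intensity gate (assumed values)
          mask = ndimage.binary_erosion(mask, iterations=2)     # break thin skull/scalp connections
          labels, n = ndimage.label(mask)
          if n == 0:
              return mask
          sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
          mask = labels == (np.argmax(sizes) + 1)               # keep the largest connected component
          mask = ndimage.binary_dilation(mask, iterations=2)    # restore the eroded brain boundary
          return ndimage.binary_fill_holes(mask)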

  20. On physical changes on surface of human cervical epithelial cells during cancer transformations

    NASA Astrophysics Data System (ADS)

    Sokolov, Igor; Dokukin, Maxim; Guz, Nataliia; Woodworth, Craig

    2013-03-01

Physical changes of the cell surface during the transformation of cells from the normal to the cancerous state are rather poorly studied. Here we describe our recent studies of such changes done on human cervical epithelial cells during their transformation from normal through infected with human papillomavirus type-16 (HPV-16), immortalized (precancerous), to cancerous cells. The changes were studied with the help of atomic force microscopy (AFM) and through the measurement of physical adhesion of fluorescent silica beads to the cell surface. Based on the adhesion experiments, we clearly see the difference in nonspecific adhesion which occurs at the stage of immortalization of cells (precancerous cells). The analysis done with the help of AFM shows that the difference observed comes presumably from the alteration of the cellular "brush," a layer that surrounds cells and which consists of mostly microvilli, microridges, and glycocalyx. Further AFM analysis reveals the emergence of fractal scaling behavior on the surface of cells when normal cells turn cancerous. The possible causes and potential significance of these observations will be discussed.

  1. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  2. Effects of extensional rates on characteristic scales of two-dimensional turbulence in polymer solutions

    NASA Astrophysics Data System (ADS)

    Hidema, R.

    2014-08-01

In order to study the effects of extensional viscosities on turbulent drag reduction, experimental studies using two-dimensional turbulence have been carried out. Anisotropic structures and variations of energy transfer induced by polymers are considered. Polyethylene oxide and hydroxypropyl cellulose, which have different flexibility due to different characteristics of extensional viscosity, are added to the 2D turbulence. Variations of the turbulence were visualized by interference patterns of the 2D flow and were analysed by image processing. The effects of polymers on turbulence in the streamwise and normal directions were also analysed by 2D Fourier transform. In addition, characteristic scales in 2D turbulence were analysed by wavelet transform.

  3. An analytical approach to reduce between-plate variation in multiplex assays that measure antibodies to Plasmodium falciparum antigens.

    PubMed

    Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J

    2017-07-17

Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as the overall breadth of the response, are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 malaria-negative individuals), together with five pooled positive controls and two pooled negative controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and reduced the mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measures antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis, and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
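
    One plausible minimal form of such a normalization, log-transform the readings and shift each plate so that its pooled controls line up with the across-plate control mean, is sketched below; the column names and data layout are assumptions, and the exact procedure in the paper may differ:

      import numpy as np
      import pandas as pd

      # df columns (assumed layout): plate, sample_type ("test" or "pooled_control"), antigen, mfi
      def normalize_by_pooled_controls(df):
          df = df.copy()
          df["log_mfi"] = np.log(df["mfi"])
          # per-plate, per-antigen mean of the pooled positive controls
          ctrl = (df[df["sample_type"] == "pooled_control"]
                  .groupby(["plate", "antigen"])["log_mfi"].mean()
                  .rename("plate_ctrl_mean"))
          # grand mean of the controls per antigen across all plates
          grand = ctrl.groupby("antigen").mean().rename("grand_ctrl_mean")
          df = df.join(ctrl, on=["plate", "antigen"]).join(grand, on="antigen")
          # shift every well so its plate's controls line up with the across-plate control mean
          df["log_mfi_norm"] = df["log_mfi"] - df["plate_ctrl_mean"] + df["grand_ctrl_mean"]
          return df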

  4. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    PubMed

    Dekkers, A L M; Slob, W

    2012-10-01

In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications, on folate intake and fruit consumption, confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
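
    The back-transformation the authors accelerate is, in essence, an integral of the inverse Box-Cox transform over the normal distribution of between-person effects. A sketch using standard Gauss-Hermite identities follows; the parameter values are placeholders, and the within-person variance is assumed to have been eliminated already:

      import numpy as np

      def inverse_boxcox(z, lam):
          """Inverse of the Box-Cox transform g(y) = (y**lam - 1)/lam (lam != 0), log for lam = 0."""
          return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

      def usual_intake_percentiles(mu, sd_between, lam, probs=(0.05, 0.5, 0.95)):
          """Percentiles of usual intake: quantiles of the between-person normal, back-transformed."""
          from scipy.stats import norm
          z = norm.ppf(probs, loc=mu, scale=sd_between)
          return inverse_boxcox(z, lam)

      def mean_usual_intake(mu, sd_between, lam, n_nodes=9):
          """E[g^{-1}(Z)], Z ~ N(mu, sd^2), via Gauss-Hermite quadrature:
             E[h(Z)] ~= (1/sqrt(pi)) * sum_i w_i * h(mu + sqrt(2)*sd*x_i)."""
          x, w = np.polynomial.hermite.hermgauss(n_nodes)
          return np.sum(w * inverse_boxcox(mu + np.sqrt(2) * sd_between * x, lam)) / np.sqrt(np.pi)

      print(mean_usual_intake(mu=2.0, sd_between=0.4, lam=0.3))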

  5. Earthquake scaling laws for rupture geometry and slip heterogeneity

    NASA Astrophysics Data System (ADS)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy relatively larger rupture area, compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying Box-Cox transformation to slip distributions (to create quasi-normal distributed data) supports cube-root transformation, which also implies distinctive non-Gaussian slip distributions. To further characterize the spatial correlations of slip heterogeneity, we analyze the power spectral decay of slip applying the 2-D von Karman auto-correlation function (parameterized by the Hurst exponent, H, and correlation lengths along strike and down-slip). The Hurst exponent is scale invariant, H = 0.83 (± 0.12), while the correlation lengths scale with source dimensions (seismic moment), thus implying characteristic physical scales of earthquake ruptures. Our self-consistent scaling relationships allow constraining the generation of slip-heterogeneity scenarios for physics-based ground-motion and tsunami simulations.

  6. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

A feature matching method is proposed based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) to solve the problem of matching the same target across multiple cameras. Target foreground is extracted using frame differencing applied twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained by MSER. After being fitted with ellipses, those regions are normalized into unit circles and represented with SIFT descriptors. Initial matches are obtained where the ratio of the closest distance to the second-closest distance is less than a threshold, and outlier points are eliminated using RANSAC. Experimental results indicate the method can reduce computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination changes.
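
    A rough OpenCV sketch of the matching chain described, MSER regions, SIFT descriptors, Lowe's ratio test and RANSAC outlier rejection, is given below; it skips the frame-difference foreground step and the elliptical normalization details, and simply seeds SIFT with keypoints sampled from the MSER regions:

      import cv2
      import numpy as np

      def match_targets(img1, img2, ratio=0.7):
          """MSER keypoints + SIFT descriptors, ratio test, RANSAC homography filtering (sketch)."""
          mser = cv2.MSER_create()
          sift = cv2.SIFT_create()

          def describe(img):
              regions, _ = mser.detectRegions(img)
              kps = [cv2.KeyPoint(float(x), float(y), 6.0) for r in regions for x, y in r[::10]]
              return sift.compute(img, kps)

          kp1, des1 = describe(img1)
          kp2, des2 = describe(img2)

          matcher = cv2.BFMatcher(cv2.NORM_L2)
          good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]

          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # drop outliers with RANSAC
          return [g for g, keep in zip(good, inliers.ravel()) if keep]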

  7. On the distribution of scaling hydraulic parameters in a spatially anisotropic banana field

    NASA Astrophysics Data System (ADS)

    Regalado, Carlos M.

    2005-06-01

When modeling soil hydraulic properties at field scale it is desirable to approximate the variability in a given area by means of some scaling transformations which relate spatially variable local hydraulic properties to global reference characteristics. Seventy soil cores were sampled within a drip irrigated banana plantation greenhouse on a 14×5 array of 2.5 m×5 m rectangles at 15 cm depth, to represent the field scale variability of flow related properties. Saturated hydraulic conductivity and water retention characteristics were measured in these 70 soil cores. van Genuchten water retention curves (WRC) with optimized m (m ≠ 1-1/n) were fitted to the WR data and a general Mualem-van Genuchten model was used to predict hydraulic conductivity functions for each soil core. A scaling law of the form νi = αi ν* was fitted to the soil hydraulic data, such that the original hydraulic parameters νi were scaled down to a reference curve with parameters ν*. An analytical expression, in terms of Beta functions, for the average suction value, hc, necessary to apply the above scaling method, was obtained. A robust optimization procedure with fast convergence to the global minimum is used to find the optimum hc, such that dispersion is minimized in the scaled data set. Via the Box-Cox transformation P(τ) = (αi^τ - 1)/τ, Box-Cox normality plots showed that scaling factors for the suction (αh) and hydraulic conductivity (αk) were approximately log-normally distributed (i.e. τ = 0), as would be expected for such dynamic properties involving flow. By contrast, static soil related properties such as αθ were found to be closely Gaussian, although a power τ = 3/4 was best for approaching normality. Application of four different normality tests (Anderson-Darling, Shapiro-Wilk, Kolmogorov-Smirnov and χ2 goodness-of-fit tests) rendered some contradictory results among them, thus suggesting that this widely extended practice is not recommended for providing a suitable probability density function for the scaling parameters, αi. Some indications for the origin of these disagreements, in terms of population size and test constraints, are pointed out. Visual inspection of normal probability plots can also lead to erroneous results. The scaling parameters αθ and αK show a sinusoidal spatial variation coincident with the underlying alignment of banana plants on the field. Such anisotropic distribution is explained in terms of porosity variations due to processes promoting soil degradation such as surface desiccation and soil compaction, induced by tillage and localized irrigation of banana plants, and it is quantified by means of cross-correlograms.

  8. A new method to real-normalize measured complex modes

    NASA Technical Reports Server (NTRS)

    Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.

    1987-01-01

A time domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through the eigenvalue solution of the (M_N)^-1 (K_N) matrix and transformed back to the physical coordinates to get a set of normal modes. A numerical example is presented to demonstrate the outlined theory.

  9. Time-frequency analysis of electric motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bentley, C.L.; Dunn, M.E.; Mattingly, J.K.

    1995-12-31

Physical signals such as the current of an electric motor become nonstationary as a consequence of degraded operation and broken parts. In this instance, their power spectral densities become time dependent, and time-frequency analysis techniques become the appropriate tools for signal analysis. The first among these techniques, generally called the short-time Fourier transform (STFT) method, is the Gabor transform (GT) of a signal S(t), which decomposes the signal into time-local frequency modes, where the window function, Φ(t-τ), is a normalized Gaussian. Alternatively, one can decompose the signal into its multi-resolution representation at different levels of magnification. This representation is achieved by the continuous wavelet transform (CWT), where the function g(t) is a kernel of zero average belonging to a family of scaled and shifted wavelet kernels. The CWT can be interpreted as the action of a microscope that locates the signal by the shift parameter b and adjusts its magnification by changing the scale parameter a. The Fourier-transformed CWT, W_g(a, ω), acts as a filter that places the high-frequency content of a signal into the lower end of the scale spectrum and vice versa for the low frequencies. Signals from a motor in three different states were analyzed.
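
    A minimal numerical version of the Gabor transform as described, a Fourier transform taken through a sliding normalized Gaussian window, can be written directly; this is a generic discrete sketch, not the analysis code used in the report, and the window width and hop size are arbitrary:

      import numpy as np

      def gabor_transform(s, fs, window_sigma=0.05, hop=0.01):
          """Discrete Gabor transform: G(tau, f) = sum_t s(t) * Phi(t - tau) * exp(-2j*pi*f*t) * dt,
          with Phi a unit-area Gaussian of width window_sigma (seconds)."""
          t = np.arange(len(s)) / fs
          taus = np.arange(0, t[-1], hop)
          freqs = np.fft.rfftfreq(len(s), d=1/fs)
          G = np.empty((len(taus), len(freqs)), dtype=complex)
          for i, tau in enumerate(taus):
              phi = np.exp(-0.5 * ((t - tau) / window_sigma) ** 2)
              phi /= phi.sum() / fs                      # normalize the Gaussian window to unit area
              G[i] = np.fft.rfft(s * phi) / fs
          return taus, freqs, np.abs(G)

      # Example: a motor-current-like signal whose dominant frequency drifts over time
      fs = 2000
      t = np.arange(0, 2.0, 1/fs)
      s = np.sin(2 * np.pi * (60 + 10 * t) * t)
      taus, freqs, mag = gabor_transform(s, fs)
      print(mag.shape)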

  10. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
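
    The central modelling ingredient, a normal distribution left-censored at zero so that dry days contribute the probability mass at or below the censoring point, can be sketched as a plain maximum-likelihood fit (a Tobit-style fit without the spatio-temporal regression structure of the paper):

      import numpy as np
      from scipy import optimize, stats

      def fit_left_censored_normal(y, censor=0.0):
          """MLE of (mu, sigma) for a normal left-censored at `censor` (zeros recorded at the bound)."""
          y = np.asarray(y, dtype=float)
          obs, cens = y[y > censor], y[y <= censor]

          def neg_loglik(params):
              mu, log_sigma = params
              sigma = np.exp(log_sigma)
              ll = stats.norm.logpdf(obs, mu, sigma).sum()           # density for wet observations
              ll += len(cens) * stats.norm.logcdf(censor, mu, sigma)  # mass at/below the censor point
              return -ll

          res = optimize.minimize(neg_loglik, x0=[y.mean(), np.log(y.std() + 1e-6)], method="Nelder-Mead")
          return res.x[0], np.exp(res.x[1])

      # Synthetic daily "precipitation": latent normal, clipped at zero
      rng = np.random.default_rng(0)
      latent = rng.normal(-0.5, 2.0, size=2000)
      print(fit_left_censored_normal(np.clip(latent, 0.0, None)))    # recovers approx (-0.5, 2.0)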

  11. Watermarking scheme based on singular value decomposition and homomorphic transform

    NASA Astrophysics Data System (ADS)

    Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu

    2017-10-01

A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray-scale image by inserting an invisible eight-bit gray-scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak-signal-to-noise-ratio (PSNR), normalized-correlation-coefficient (NCC) and mean-structural-similarity-index-measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is ensured by visual inspection and high values of NCC and MSSIM for the extracted watermarks. Robustness of the scheme is verified by high values of NCC and MSSIM for attacked watermarked images.
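
    The embedding side of such a scheme can be sketched as: take logs (the homomorphic step), low-pass filter to split illumination from reflectance, perturb the singular values of the reflectance with the watermark's singular values, and recombine. The strength alpha, the Gaussian filter width and this particular embedding rule are illustrative choices, not the paper's settings, and the extraction side is omitted:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def embed_watermark(host, watermark, alpha=0.05, sigma=8.0):
          """Toy homomorphic + SVD embedding for equally sized 8-bit gray-scale images."""
          log_img = np.log1p(host.astype(float))                  # homomorphic transform
          illumination = gaussian_filter(log_img, sigma)           # slowly varying illumination
          reflectance = log_img - illumination                     # reflectance component to mark

          U, S, Vt = np.linalg.svd(reflectance, full_matrices=False)
          _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
          marked_reflectance = U @ np.diag(S + alpha * Sw[: len(S)]) @ Vt

          marked = np.expm1(illumination + marked_reflectance)     # recombine and undo the log
          return np.clip(marked, 0, 255).astype(np.uint8)

      def psnr(original, marked):
          """Peak signal-to-noise ratio between host and watermarked images."""
          mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
          return 10 * np.log10(255.0 ** 2 / mse)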

  12. Methods for Scaling to Doubly Stochastic Form,

    DTIC Science & Technology

    1981-06-26

Frobenius-König Theorem (MARCUS and MINC [1964], p. 97): A nonnegative n × n matrix without support contains an s × t zero submatrix where s + t = n + 1. … that YA(k) has row sums 1. Then normalize the columns by a diagonal similarity transform defined as follows: Let x = (x1, …, xn) be a left Perron vector
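
    The fragments above describe alternately rescaling rows and columns with diagonal transforms; the best-known realization of that idea is Sinkhorn-Knopp iteration, sketched here for a matrix with total support (a standard method, not necessarily the exact algorithm of the memorandum):

      import numpy as np

      def sinkhorn_knopp(A, tol=1e-10, max_iter=10_000):
          """Scale a nonnegative matrix with total support to doubly stochastic form:
          find diagonal scalings r, c such that diag(r) @ A @ diag(c) has unit row and column sums."""
          A = np.asarray(A, dtype=float)
          r = np.ones(A.shape[0])
          c = np.ones(A.shape[1])
          S = A
          for _ in range(max_iter):
              r = 1.0 / (A @ c)                 # fix the row sums to 1
              c = 1.0 / (A.T @ r)               # then fix the column sums to 1
              S = (A * r[:, None]) * c[None, :]
              if np.abs(S.sum(axis=0) - 1).max() < tol and np.abs(S.sum(axis=1) - 1).max() < tol:
                  break
          return S, r, c

      A = np.array([[2.0, 1.0, 0.5],
                    [1.0, 3.0, 1.0],
                    [0.5, 1.0, 2.0]])
      S, r, c = sinkhorn_knopp(A)
      print(S.sum(axis=0), S.sum(axis=1))       # both approach [1, 1, 1]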

  13. Percentile-Based Journal Impact Factors: A Neglected Collection Development Metric

    ERIC Educational Resources Information Center

    Wagner, A. Ben

    2009-01-01

    Various normalization techniques to transform journal impact factors (JIFs) into a standard scale or range of values have been reported a number of times in the literature, but have seldom been part of collection development librarians' tool kits. In this paper, JIFs as reported in the Journal Citation Reports (JCR) database are converted to…

  14. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability.

    PubMed

    van Albada, S J; Robinson, P A

    2007-04-15

    Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
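
    For continuous data the transformation discussed reduces to passing each observation through its cumulative probability and then through the standard normal quantile function, y = Φ⁻¹(F(x)); with the true CDF the result is exactly standard normal, and with an empirical CDF it is approximately so. A minimal sketch (the empirical-CDF variant stands in for whatever estimator the paper actually uses):

      import numpy as np
      from scipy import stats

      def to_normal(x, cdf):
          """Map x with continuous CDF F onto N(0,1) via y = Phi^{-1}(F(x))."""
          return stats.norm.ppf(cdf(x))

      # Theoretical example: exponential data become exactly standard normal
      rng = np.random.default_rng(0)
      x = rng.exponential(scale=2.0, size=1000)
      y = to_normal(x, stats.expon(scale=2.0).cdf)
      print(stats.kstest(y, "norm"))            # high p-value: indistinguishable from N(0,1)

      # When the distribution is unknown (e.g. qEEG measures), an empirical CDF can stand in
      ecdf = lambda v: (stats.rankdata(v) - 0.5) / len(v)
      y_emp = stats.norm.ppf(ecdf(x))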

  15. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
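
    The optimization described, sweep the Box-Cox parameter λ and keep the value that maximizes the Shapiro-Wilk P-value, takes only a few lines; the sketch below uses a simple grid search and synthetic SUVmax-like values rather than the study data:

      import numpy as np
      from scipy import stats

      def optimal_boxcox_lambda(suv, lambdas=np.linspace(-2, 2, 81)):
          """Pick the Box-Cox lambda that maximizes the Shapiro-Wilk P-value of the transformed SUVs."""
          best_lam, best_p = None, -1.0
          for lam in lambdas:
              transformed = stats.boxcox(suv, lmbda=lam)      # fixed-lambda Box-Cox transform
              p = stats.shapiro(transformed).pvalue
              if p > best_p:
                  best_lam, best_p = lam, p
          return best_lam, best_p

      suv_max = np.random.default_rng(0).lognormal(1.2, 0.6, size=57)   # skewed stand-in for SUVmax
      lam, p = optimal_boxcox_lambda(suv_max)
      print(f"optimal lambda = {lam:.2f}, Shapiro-Wilk p = {p:.3f}")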

  16. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations for when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  17. Oil Spill Detection and Tracking Using Lipschitz Regularity and Multiscale Techniques in Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2014-12-01

Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and the highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using the two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.

  18. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. Both deviate considerably from a normal distribution. To improve the normality of these data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best match the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log transformation. This suggests that the firm size data we used are approximately log-normally distributed.
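
    A hedged sketch of the parameter-selection rule described above (choose the Box-Cox λ whose transformed data have excess kurtosis closest to zero), using synthetic lognormal "firm sizes" rather than the paper's data.

```python
# Hedged sketch: pick the Box-Cox parameter whose transformed data are
# closest to normal kurtosis (zero excess kurtosis). Data are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
employees = rng.lognormal(mean=3.0, sigma=1.2, size=5000)   # heavy-tailed firm sizes

lambdas = np.linspace(-1, 1, 201)
excess_kurtosis = [abs(stats.kurtosis(stats.boxcox(employees, lmbda=lam)))
                   for lam in lambdas]
best_lambda = lambdas[int(np.argmin(excess_kurtosis))]
print(f"lambda closest to normal kurtosis: {best_lambda:.2f}")  # near 0 => ~log transform
```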

  19. Molecular control of normal and acrocona mutant seed cone development in Norway spruce (Picea abies) and the evolution of conifer ovule-bearing organs.

    PubMed

    Carlsbecker, Annelie; Sundström, Jens F; Englund, Marie; Uddenberg, Daniel; Izquierdo, Liz; Kvarnheden, Anders; Vergara-Silva, Francisco; Engström, Peter

    2013-10-01

    Reproductive organs in seed plants are morphologically divergent and their evolutionary history is often unclear. The mechanisms controlling their development have been extensively studied in angiosperms but are poorly understood in conifers and other gymnosperms. Here, we address the molecular control of seed cone development in Norway spruce, Picea abies. We present expression analyses of five novel MADS-box genes in comparison with previously identified MADS and LEAFY genes at distinct developmental stages. In addition, we have characterized the homeotic transformation from vegetative shoot to female cone and the associated changes in regulatory gene expression patterns occurring in the acrocona mutant. The analyses identified genes active at the onset of ovuliferous scale and ovule development and revealed expression patterns marking distinct domains of the ovuliferous scale. The reproductive transformation in acrocona involves the activation of all tested genes normally active in early cone development, except for an AGAMOUS-LIKE6/SEPALLATA (AGL6/SEP) homologue. This absence may be functionally associated with the indeterminate development of the acrocona ovule-bearing scales. Our morphological and gene expression analyses support the hypothesis that the modern cone is a complex structure, and the ovuliferous scale the result of reductions and compactions of an ovule-bearing axillary short shoot in the cones of Paleozoic conifers. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  20. Identification of Proteins Related to Epigenetic Regulation in the Malignant Transformation of Aberrant Karyotypic Human Embryonic Stem Cells by Quantitative Proteomics

    PubMed Central

    Sun, Yi; Yang, Yixuan; Zeng, Sicong; Tan, Yueqiu; Lu, Guangxiu; Lin, Ge

    2014-01-01

    Previous reports have demonstrated that human embryonic stem cells (hESCs) tend to develop genomic alterations and progress to a malignant state during long-term in vitro culture. This raises concerns about the clinical safety of using cultured hESCs. However, transformed hESCs might serve as an excellent model to determine the process of embryonic stem cell transition. In this study, iTRAQ-based tandem mass spectrometry was used to quantify proteins from normal and aberrant karyotypic hESCs, ranging from simple to more complex karyotypic abnormalities. We identified and quantified 2583 proteins, and found that the expression levels of 316 proteins, representing at least 23 functional molecular groups, differed significantly between normal and abnormal hESCs. Dysregulated protein expression in epigenetic regulation was further verified in six pairs of hESC lines at early and late passage. In summary, this study is the first large-scale quantitative proteomic analysis of the malignant transformation of aberrant karyotypic hESCs. The data generated should serve as a useful reference for stem cell-derived tumor progression. Increased expression of both HDAC2 and CTNNB1 is detected as early as the pre-neoplastic stage, and these proteins might serve as prognostic markers in the malignant transformation of hESCs. PMID:24465727

  1. The application of the piecewise linear approximation to the spectral neighborhood of soil line for the analysis of the quality of normalization of remote sensing materials

    NASA Astrophysics Data System (ADS)

    Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.

    2017-04-01

    The concept of the soil line can be used to describe the temporal distribution of the spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points over the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014 after a suitable transformation. As Landsat images are matrices of spectral brightness values, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of normalization mathematically. This approach allowed us to rank the normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).

  2. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
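
    A small illustration of the paper's point, under assumed simulated data: in a one-way ANOVA setting the raw response can look strongly non-normal simply because group means differ, while the residuals are well behaved.

```python
# Minimal sketch: test normality on model residuals, not on the raw response.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group = rng.integers(0, 3, size=300)                 # three-level factor (ANOVA setting)
means = np.array([2.0, 10.0, 25.0])[group]           # very different group means
y = means + rng.normal(scale=1.5, size=300)          # normal errors around each mean

print("raw response P:", stats.shapiro(y).pvalue)    # typically looks 'non-normal'
residuals = y - np.array([y[group == g].mean() for g in range(3)])[group]
print("residuals P:", stats.shapiro(residuals).pvalue)   # well behaved
```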

  3. Vibrational Spectroscopic Studies of Reduced-Sensitivity RDX under Static Compression

    NASA Astrophysics Data System (ADS)

    Wong, Chak

    2005-07-01

    Explosive formulations with Reduced-Sensitivity RDX showed reduced shock sensitivity in the NOL Large Scale Gap Test compared with similar formulations using normal RDX. The molecular processes responsible for the reduction in sensitivity are unknown and are crucial for formulation development. Vibrational spectroscopy at static high pressure may shed light on the mechanisms responsible for the reduced shock sensitivity shown by the NOL Large Scale Gap Test. SIRDX, a form of Reduced-Sensitivity RDX, was subjected to static compression at ambient temperature in a Merrill-Bassett sapphire cell from ambient pressure to about 6 GPa. The spectroscopic techniques used were Raman and Fourier-transform IR (FTIR). The pressure dependence of the Raman mode frequencies of SIRDX was determined and compared with that of normal RDX. The behavior of SIRDX near the pressure at which normal RDX, at ambient temperature, undergoes a phase transition from the α to the γ polymorph will be presented. Implications for the reduction in sensitivity will be discussed.

  4. 3D Facial Pattern Analysis for Autism

    DTIC Science & Technology

    2010-07-01

    each individual’s data were scaled by the geometric mean of all possible linear distances between landmarks, following. The first two principal...over traditional template matching in that it can represent geometrical and non- geometrical changes of an object in the parametric template space...set of vertex templates can be generated from the root template by geometric or non- geometric transformation. Let Mtt ,...1 be M normalized vertex

  5. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, a transformation with parameter lambda = -1 leads to the best results.
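
    A brief sketch of the reported best-performing procedure, interpreting "parameter lambda = -1" as a Box-Cox-type power transformation with λ = -1 (essentially a reciprocal transform); the Ex-Gaussian parameters below are arbitrary.

```python
# Hedged sketch: simulate Ex-Gaussian RTs (normal + exponential) and apply a
# Box-Cox-type transformation with lambda = -1 to reduce positive skew.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rt = rng.normal(loc=400, scale=40, size=2000) + rng.exponential(scale=150, size=2000)

transformed = stats.boxcox(rt, lmbda=-1.0)   # equals 1 - 1/x, a reciprocal-type transform

print("skewness before:", round(stats.skew(rt), 2),
      "after:", round(stats.skew(transformed), 2))
```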

  6. Discrimination and prediction of the origin of Chinese and Korean soybeans using Fourier transform infrared spectrometry (FT-IR) with multivariate statistical analysis

    PubMed Central

    Lee, Byeong-Ju; Zhou, Yaoyao; Lee, Jae Soung; Shin, Byeung Kon; Seo, Jeong-Ah; Lee, Doyup; Kim, Young-Suk

    2018-01-01

    The ability to determine the origin of soybeans became an important issue after labeling of this information on agricultural food products became mandatory in South Korea in 2017. This study was carried out to construct a prediction model for discriminating Chinese and Korean soybeans using Fourier-transform infrared (FT-IR) spectroscopy and multivariate statistical analysis. The optimal prediction models for discriminating soybean samples were obtained by selecting appropriate scaling methods, normalization methods, variable influence on projection (VIP) cutoff values, and wave-number regions. The factors for constructing the optimal partial-least-squares regression (PLSR) prediction model were use of second derivatives, vector normalization, unit variance scaling, and the 4000–400 cm–1 region (excluding water vapor and carbon dioxide). The PLSR model for discriminating Chinese and Korean soybean samples had the best predictability when no VIP cutoff value was applied. When Chinese soybean samples were identified, the PLSR model with the lowest root-mean-square error of prediction was obtained using a VIP cutoff value of 1.5. The optimal PLSR prediction model for discriminating Korean soybean samples was also obtained using a VIP cutoff value of 1.5. This is the first study to combine FT-IR spectroscopy with normalization methods, VIP cutoff values, and selected wave-number regions for discriminating Chinese and Korean soybeans. PMID:29689113
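
    A hedged sketch of the preprocessing and modelling chain named above (second derivative, vector normalization, unit-variance scaling, PLS regression), using scikit-learn and SciPy with random placeholder spectra and labels; the VIP filtering step is omitted.

```python
# Hedged sketch of a PLS-DA-style pipeline; spectra and labels are placeholders.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 1800))                  # 60 spectra on a wave-number grid
y = rng.integers(0, 2, size=60).astype(float)    # 0 = Chinese, 1 = Korean (dummy labels)

X = savgol_filter(X, window_length=15, polyorder=2, deriv=2, axis=1)  # 2nd derivative
X = X / np.linalg.norm(X, axis=1, keepdims=True)                      # vector normalization
X = StandardScaler().fit_transform(X)                                 # unit-variance scaling

pls = PLSRegression(n_components=5).fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)   # discriminant-style thresholding
```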

  7. Method for distinguishing normal and transformed cells using G1 kinase inhibitors

    DOEpatents

    Crissman, Harry A.; Gadbois, Donna M.; Tobey, Robert A.; Bradbury, E. Morton

    1993-01-01

    A G1 phase kinase inhibitor is applied in a low concentration to a population of normal and transformed mammalian cells. The concentration of G1 phase kinase inhibitor is selected to reversibly arrest normal mammalian cells in the G1 cell cycle without arresting growth of transformed cells. The transformed cells may then be selectively identified and/or cloned for research or diagnostic purposes. The transformed cells may also be selectively killed by therapeutic agents that do not affect normal cells in the G1 phase, suggesting that such G1 phase kinase inhibitors may form an effective adjuvant for use with chemotherapeutic agents in cancer therapy for optimizing the killing dose of chemotherapeutic agents while minimizing undesirable side effects on normal cells.

  8. Method for distinguishing normal and transformed cells using G1 kinase inhibitors

    DOEpatents

    Crissman, H.A.; Gadbois, D.M.; Tobey, R.A.; Bradbury, E.M.

    1993-02-09

    A G1 phase kinase inhibitor is applied in a low concentration to a population of normal and transformed mammalian cells. The concentration of G1 phase kinase inhibitor is selected to reversibly arrest normal mammalian cells in the G1 cell cycle without arresting growth of transformed cells. The transformed cells may then be selectively identified and/or cloned for research or diagnostic purposes. The transformed cells may also be selectively killed by therapeutic agents that do not affect normal cells in the G1 phase, suggesting that such G1 phase kinase inhibitors may form an effective adjuvant for use with chemotherapeutic agents in cancer therapy for optimizing the killing dose of chemotherapeutic agents while minimizing undesirable side effects on normal cells.

  9. Molecular conformational analysis, vibrational spectra and normal coordinate analysis of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene based on density functional theory calculations.

    PubMed

    Joseph, Lynnette; Sajan, D; Chaitanya, K; Isac, Jayakumary

    2014-03-25

    The conformational behavior and structural stability of trans-1,2-bis(3,5-dimethoxy phenyl)-ethene (TDBE) were investigated by using density functional theory (DFT) method with the B3LYP/6-311++G(d,p) basis set combination. The vibrational wavenumbers of TDBE were computed at DFT level and complete vibrational assignments were made on the basis of normal coordinate analysis calculations (NCA). The DFT force field transformed to natural internal coordinates was corrected by a well-established set of scale factors that were found to be transferable to the title compound. The infrared and Raman spectra were also predicted from the calculated intensities. The observed Fourier transform infrared (FTIR) and Fourier transform (FT) Raman vibrational wavenumbers were analyzed and compared with the theoretically predicted vibrational spectra. Comparison of the simulated spectra with the experimental spectra provides important information about the ability of the computational method to describe the vibrational modes. Information about the size, shape, charge density distribution and site of chemical reactivity of the molecules has been obtained by mapping electron density isosurface with electrostatic potential surfaces (ESP). Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Diagnostic features of Alzheimer's disease extracted from PET sinograms

    NASA Astrophysics Data System (ADS)

    Sayeed, A.; Petrou, M.; Spyrou, N.; Kadyrov, A.; Spinks, T.

    2002-01-01

    Texture analysis of positron emission tomography (PET) images of the brain is a very difficult task, due to the poor signal to noise ratio. As a consequence, very few techniques can be implemented successfully. We use a new global analysis technique known as the Trace transform triple features. This technique can be applied directly to the raw sinograms to distinguish patients with Alzheimer's disease (AD) from normal volunteers. FDG-PET images of 18 AD and 10 normal controls obtained from the same CTI ECAT-953 scanner were used in this study. The Trace transform triple feature technique was used to extract features that were invariant to scaling, translation and rotation, referred to as invariant features, as well as features that were sensitive to rotation but invariant to scaling and translation, referred to as sensitive features in this study. The features were used to classify the groups using discriminant function analysis. Cross-validation tests using stepwise discriminant function analysis showed that combining both sensitive and invariant features produced the best results, when compared with the clinical diagnosis. Selecting the five best features produces an overall accuracy of 93% with sensitivity of 94% and specificity of 90%. This is comparable with the classification accuracy achieved by Kippenhan et al (1992), using regional metabolic activity.

  11. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, a transformation with parameter lambda = -1 leads to the best results. PMID:25709588

  12. Multifractal Downscaling of Rainfall Using Normalized Difference Vegetation Index (NDVI) in the Andes Plateau.

    PubMed

    Duffaut Espinosa, L A; Posadas, A N; Carbajal, M; Quiroz, R

    2017-01-01

    In this paper, a multifractal downscaling technique is applied to adequately transformed and lag corrected normalized difference vegetation index (NDVI) in order to obtain daily estimates of rainfall in an area of the Peruvian Andean high plateau. This downscaling procedure is temporal in nature since the original NDVI information is provided at an irregular temporal sampling period between 8 and 11 days, and the desired final scale is 1 day. The spatial resolution of approximately 1 km remains the same throughout the downscaling process. The results were validated against on-site measurements of meteorological stations distributed in the area under study.

  13. Multifractal Downscaling of Rainfall Using Normalized Difference Vegetation Index (NDVI) in the Andes Plateau

    PubMed Central

    Posadas, A. N.; Carbajal, M.; Quiroz, R.

    2017-01-01

    In this paper, a multifractal downscaling technique is applied to adequately transformed and lag corrected normalized difference vegetation index (NDVI) in order to obtain daily estimates of rainfall in an area of the Peruvian Andean high plateau. This downscaling procedure is temporal in nature since the original NDVI information is provided at an irregular temporal sampling period between 8 and 11 days, and the desired final scale is 1 day. The spatial resolution of approximately 1 km remains the same throughout the downscaling process. The results were validated against on-site measurements of meteorological stations distributed in the area under study. PMID:28125607

  14. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to a log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and the underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
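
    A minimal sketch of the analysis strategy described above: log-transform the SUV summary and fit a linear mixed-effects model with a random intercept per patient to handle repeated scans. Column names, effect sizes, and data are illustrative assumptions, not the study's.

```python
# Hedged sketch: log SUV summary regressed on a biomarker with a per-patient
# random intercept (statsmodels MixedLM). All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_pat, n_scan = 40, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_scan),
    "vegf": rng.normal(100, 20, n_pat * n_scan),
})
patient_effect = rng.normal(0, 0.2, n_pat)[df["patient"]]
df["log_gsuv"] = 0.5 + 0.004 * df["vegf"] + patient_effect + rng.normal(0, 0.1, len(df))

model = smf.mixedlm("log_gsuv ~ vegf", df, groups=df["patient"]).fit()
print(model.summary())
```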

  15. Computing Instantaneous Frequency by normalizing Hilbert Transform

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake made up to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.

  16. Computing Instantaneous Frequency by normalizing Hilbert Transform

    DOEpatents

    Huang, Norden E.

    2005-05-31

    This invention presents the Normalized Amplitude Hilbert Transform (NAHT) and the Normalized Hilbert Transform (NHT), both of which are new methods for computing Instantaneous Frequency. The method is designed specifically to circumvent the limitations set by the Bedrosian and Nuttall theorems, and to provide a sharp local measure of error when the quadrature and the Hilbert Transform do not agree. The motivation for this method is that straightforward application of the Hilbert Transform, followed by taking the derivative of the phase angle as the Instantaneous Frequency (IF), leads to a common mistake made up to this date. In order to make the Hilbert Transform method work, the data have to obey certain restrictions.
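
    The sketch below is not the patented NAHT/NHT algorithm (which normalizes the amplitude empirically, e.g. via spline envelopes); it only illustrates the underlying idea of normalizing an AM-FM signal's amplitude before taking the Hilbert transform and differentiating the phase to obtain instantaneous frequency.

```python
# Hedged sketch: amplitude-normalize an AM-FM signal, then compute the
# instantaneous frequency from the Hilbert-transform phase.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)  # AM carrier

envelope = np.abs(hilbert(signal))          # crude envelope stand-in for a spline fit
normalized = signal / envelope              # near-unit-amplitude carrier

phase = np.unwrap(np.angle(hilbert(normalized)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # ~50 Hz away from the edges
print(inst_freq[100:110].round(2))
```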

  17. Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar

    NASA Technical Reports Server (NTRS)

    Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow a log-Poisson statistics, the passive scalar increments follow a statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.

  18. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
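
    A short sketch of a rank-based (inverse normal) transformation of a skewed trait, the family of transformations the simulations above favour; the Blom offset used here is one common convention and not necessarily the paper's exact choice.

```python
# Hedged sketch: rank-based inverse normal transformation of a skewed trait.
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=3.0 / 8.0):
    ranks = stats.rankdata(x)                       # average ranks for ties
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(7)
trait = rng.chisquare(df=2, size=1000)              # skewed quantitative trait
print(stats.shapiro(rank_inverse_normal(trait)).pvalue)  # close to normal
```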

  19. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate use of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlations calculated by the Spearman correlation test when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be discerned from the charts, as was also the case when all data were used as one dataset for normalization. PMID:27428217
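
    A hedged sketch of the four normalizations listed above, applied per assay (per column of a time-by-assay matrix); the exact definitions, especially of the proportion transformation, are assumptions.

```python
# Hedged sketch: per-assay normalizations of a time x assay matrix.
import numpy as np

rng = np.random.default_rng(8)
series = rng.normal(loc=[50, 200, 5], scale=[10, 40, 1], size=(30, 3))  # 30 time points, 3 assays

def z_transform(x):        return (x - x.mean(0)) / x.std(0)
def range_transform(x):    return (x - x.min(0)) / (x.max(0) - x.min(0))
def proportion(x):         return x / x.sum(0)
def iqr_transform(x):
    q1, q2, q3 = np.percentile(x, [25, 50, 75], axis=0)
    return (x - q2) / (q3 - q1)

normalized = {f.__name__: f(series) for f in
              (z_transform, range_transform, proportion, iqr_transform)}
```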

  20. On Similarity Transformation and Geodetic Network Distortions Based on Doppler Satellite Observations

    NASA Technical Reports Server (NTRS)

    Leick, Alfred; Vangelder, Boudewijn H. W.

    1975-01-01

    Models used in geodesy to transform two sets of coordinates are studied and distortions in geodetic networks are investigated. Commonly used transformation models are first reviewed and most of them are interpreted. Differences between various models are discussed. Pitfalls in partial solutions are then considered. It is shown that only as many chords and/or directional elements can be used in the computation as are needed to completely determine the size or shape of the polyhedron implied in the set of Cartesian coordinates. Each additional element causes the normal matrix to be singular provided that all correlations between the chords are used. A number of tables and maps indicating distortions in the NAD 27, Precise Traverse M-R '72, AUS, and SAD 69 geodetic datums are also included. The residuals of the coordinates are scanned for systematic patterns after transforming each geodetic system to the NWL9D Doppler system. Also, an attempt is made to show scale distortions in the NAD 27.

  1. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.

  2. Color image encryption based on gyrator transform and Arnold transform

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A color image encryption scheme using the gyrator transform and the Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an intermediate result using the gyrator transform. The first random phase mask is generated from the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated from the sum of the phase of the intermediate result and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again, combined with the third random phase mask generated from the sum of the previous chaotic maps, and then encrypted into a gray-scale ciphertext. The encryption result has a stationary white-noise distribution and, to some extent, a camouflage property. In the process of encryption and decryption, the rotation angle of the gyrator transform, the iteration numbers of the Arnold transform, the parameters of the chaotic maps and the generated accompanying phase function serve as encryption keys and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
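
    A minimal sketch of the Arnold-transform scrambling used in the first security level, applied to one normalized colour component; the gyrator transform, chaotic maps, and phase masks of the full scheme are omitted.

```python
# Hedged sketch: Arnold (cat-map) scrambling of a square, normalized channel.
import numpy as np

def arnold_scramble(channel, iterations):
    n = channel.shape[0]                      # assumes a square channel
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = channel
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]   # bijective cat-map pixel permutation
    return out

rng = np.random.default_rng(9)
red = rng.random((128, 128))                  # normalized red component
scrambled = arnold_scramble(red, iterations=10)
```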

  3. Characteristics of the uridine uptake system in normal and polyoma transformed hamster embryo cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemkin, J.A.

    1973-01-01

    The lability of the uridine uptake system in normal and polyoma-transformed hamster embryo fibroblasts was studied. The major areas investigated were: the kinetic parameters of uridine transport, a comparison of changes in cellular ATP content caused by factors which modulate uridine uptake, and a comparison of the qualitative and quantitative effects of the same modulating agent on uridine transport, cell growth, and cellular ATP content. Uridine uptake into cells in vitro was examined using tritiated uridine as a tracer to measure the amount of uridine incorporated into the acid-soluble and acid-insoluble fractions of the cells studied. The ATP content of the cells was determined by the firefly bioluminescence method. It was found that the Kt for uridine uptake into the normal hamster embryo cell and two polyoma-transformed hamster embryo cell lines was identical. However, the Vmax for uridine transport was higher in both polyoma-transformed cell lines. Furthermore, the Kt in both the normal and transformed cell cultured in serum-less or serum-containing media was identical, although the Vmax was higher in the serum-stimulated cell in both the normal and transformed cell. Stimulation of the normal cell with adenosine produced a different Kt for uridine transport. Preliminary investigations have demonstrated that treatment of the polyoma-transformed cell with adenosine also induces a different Kt (not shown). The Ki for phloretin inhibition in serum-less and serum-stimulated normal and polyoma-transformed cells was found to be identical in each case.

  4. Evaluation and correction of uncertainty due to Gaussian approximation in radar - rain gauge merging using kriging with external drift

    NASA Astrophysics Data System (ADS)

    Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.

    2016-12-01

    It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity for the variable of interest. In particular, this work looks at kriging with external drift (KED), because it is an efficient, widely used, and well performing merging method. Rainfall, especially at finer temporal scales, does not have a normal distribution but presents a bi-modal, skewed distribution. In some applications a Gaussianity assumption is made without any correction. In other cases, variables are transformed in order to obtain a distribution closer to Gaussian. This work has two objectives: 1) compare different transformation methods in merging applications; 2) evaluate the uncertainty arising when untransformed rainfall data are used in KED. The comparison of transformation methods is addressed from two points of view. On the one hand, the ability to reproduce the original probability distribution after back-transformation of the merged products is evaluated with Q-Q plots; on the other hand, the rainfall estimates are compared with an independent set of rain gauge measurements. The tested methods are 1) no transformation, 2) Box-Cox transformation with parameter λ=0.5 (square root), 3) λ=0.25 (fourth root), 4) λ=0.1 (almost logarithmic), 5) normal quantile transformation, and 6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated by comparison with the best performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping bucket rain gauges from the Environment Agency and radar rainfall data at 1 km / 5 min resolution from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
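
    A brief sketch of the candidate transformations compared above: Box-Cox with the fixed λ values listed and an empirical normal quantile transformation, applied to synthetic rainfall; the small positive offset for near-zero values is an assumed convenience, not the paper's treatment of zeros.

```python
# Hedged sketch: fixed-lambda Box-Cox variants and an empirical NQT on
# synthetic, positively skewed rainfall values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
rain = rng.gamma(shape=0.5, scale=2.0, size=5000)       # skewed hourly rainfall stand-in
rain = np.maximum(rain, 1e-3)                           # keep values strictly positive

def box_cox(x, lam):
    return (x ** lam - 1.0) / lam

def normal_quantile_transform(x):
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

candidates = {f"box-cox {lam}": box_cox(rain, lam) for lam in (0.5, 0.25, 0.1)}
candidates["nqt"] = normal_quantile_transform(rain)
for name, z in candidates.items():
    print(name, "skewness:", round(stats.skew(z), 2))
```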

  5. Using a small hybrid pulse power transformer unit as component of a high-current opening switch for a railgun

    NASA Astrophysics Data System (ADS)

    Leung, E. M. W.; Bailey, R. E.; Michels, P. H.

    1989-03-01

    The hybrid pulse power transformer (HPPT) is a unique concept utilizing the ultrafast superconducting-to-normal transition of a superconductor. When used in the form of a hybrid transformer current-zero switch (HTCS), it creates an approach in which the large, high-power, high-current opening switch in a conventional railgun system can be eliminated. This represents an innovative application of superconductivity to the pulsed power conditioning required for the Strategic Defense Initiative (SDI). The authors explain the working principles of a 100-kJ unit capable of switching up to 500 kA at a frequency of 0.5 Hz and with a system efficiency greater than 90 percent. Circuit analysis using a computer code called SPICE PLUS was used to verify the HTCS concept. The concept can be scaled up to applications at the several-megaampere level.

  6. Morphological filtering and multiresolution fusion for mammographic microcalcification detection

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-04-01

    Mammographic images are often of relatively low contrast and poor sharpness, with non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related to multiresolution analysis through the wavelet transform, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region growing techniques are used in each resolution level. The parent-child relations between resolution levels are adopted to make the final detection decision. FROC performance is computed from tests on the Nijmegen database.
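
    The sketch below is not the paper's local operating transformation; it only illustrates the prewhitening idea with a grey-scale white top-hat (image minus its morphological opening), which removes slowly varying background while retaining small bright spots.

```python
# Hedged sketch: grey-scale morphological background suppression on a
# synthetic mammogram-like image with two tiny bright "microcalcifications".
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)
background = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=30) * 50
spots = np.zeros((256, 256)); spots[64, 64] = spots[200, 150] = 20.0
mammogram = background + ndimage.gaussian_filter(spots, sigma=1) + rng.normal(0, 1, (256, 256))

prewhitened = ndimage.white_tophat(mammogram, size=15)   # structuring element larger than the spots
```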

  7. Derivation of a generalized Schrödinger equation from the theory of scale relativity

    NASA Astrophysics Data System (ADS)

    Chavanis, Pierre-Henri

    2017-06-01

    Using Nottale's theory of scale relativity relying on a fractal space-time, we derive a generalized Schrödinger equation taking into account the interaction of the system with the external environment. This equation describes the irreversible evolution of the system towards a static quantum state. We first interpret the scale-covariant equation of dynamics stemming from Nottale's theory as a hydrodynamic viscous Burgers equation for a potential flow involving a complex velocity field and an imaginary viscosity. We show that the Schrödinger equation can be directly obtained from this equation by performing a Cole-Hopf transformation equivalent to the WKB transformation. We then introduce a friction force proportional and opposite to the complex velocity in the scale-covariant equation of dynamics in a way that preserves the local conservation of the normalization condition. We find that the resulting generalized Schrödinger equation, or the corresponding fluid equations obtained from the Madelung transformation, involve not only a damping term but also an effective thermal term. The friction coefficient and the temperature are related to the real and imaginary parts of the complex friction coefficient in the scale-covariant equation of dynamics. This may be viewed as a form of fluctuation-dissipation theorem. We show that our generalized Schrödinger equation satisfies an H-theorem for the quantum Boltzmann free energy. As a result, the probability distribution relaxes towards an equilibrium state which can be viewed as a Boltzmann distribution including a quantum potential. We propose to apply this generalized Schrödinger equation to dark matter halos in the Universe, possibly made of self-gravitating Bose-Einstein condensates.

  8. Growth models and the expected distribution of fluctuating asymmetry

    USGS Publications Warehouse

    Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John

    2003-01-01

    Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.

  9. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  10. Flame speed and self-similar propagation of expanding turbulent premixed flames.

    PubMed

    Chaudhuri, Swetaprovo; Wu, Fujia; Zhu, Delin; Law, Chung K

    2012-01-27

    In this Letter we present turbulent flame speeds and their scaling from experimental measurements on constant-pressure, unity Lewis number expanding turbulent flames, propagating in nearly homogeneous isotropic turbulence in a dual-chamber, fan-stirred vessel. It is found that the normalized turbulent flame speed as a function of the average radius scales as a turbulent Reynolds number to the one-half power, where the average radius is the length scale and the thermal diffusivity is the transport property, thus showing self-similar propagation. Utilizing this dependence it is found that the turbulent flame speeds from the present expanding flames and those from the Bunsen geometry in the literature can be unified by a turbulent Reynolds number based on flame length scales using recent theoretical results obtained by spectral closure of the transformed G equation.

  11. Flame Speed and Self-Similar Propagation of Expanding Turbulent Premixed Flames

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Swetaprovo; Wu, Fujia; Zhu, Delin; Law, Chung K.

    2012-01-01

    In this Letter we present turbulent flame speeds and their scaling from experimental measurements on constant-pressure, unity Lewis number expanding turbulent flames, propagating in nearly homogeneous isotropic turbulence in a dual-chamber, fan-stirred vessel. It is found that the normalized turbulent flame speed as a function of the average radius scales as a turbulent Reynolds number to the one-half power, where the average radius is the length scale and the thermal diffusivity is the transport property, thus showing self-similar propagation. Utilizing this dependence it is found that the turbulent flame speeds from the present expanding flames and those from the Bunsen geometry in the literature can be unified by a turbulent Reynolds number based on flame length scales using recent theoretical results obtained by spectral closure of the transformed G equation.

  12. Development of the method and U.S. normalization database for Life Cycle Impact Assessment and sustainability metrics.

    PubMed

    Bare, Jane; Gloria, Thomas; Norris, Gregory

    2006-08-15

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contributions by substance and life cycle impact category. Normalization can thus significantly influence LCA-based decisions when tradeoffs exist. The U.S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico, with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI, the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well.

  13. Cell competition with normal epithelial cells promotes apical extrusion of transformed cells through metabolic changes.

    PubMed

    Kon, Shunsuke; Ishibashi, Kojiro; Katoh, Hiroto; Kitamoto, Sho; Shirai, Takanobu; Tanaka, Shinya; Kajita, Mihoko; Ishikawa, Susumu; Yamauchi, Hajime; Yako, Yuta; Kamasaki, Tomoko; Matsumoto, Tomohiro; Watanabe, Hirotaka; Egami, Riku; Sasaki, Ayana; Nishikawa, Atsuko; Kameda, Ikumi; Maruyama, Takeshi; Narumi, Rika; Morita, Tomoko; Sasaki, Yoshiteru; Enoki, Ryosuke; Honma, Sato; Imamura, Hiromi; Oshima, Masanobu; Soga, Tomoyoshi; Miyazaki, Jun-Ichi; Duchen, Michael R; Nam, Jin-Min; Onodera, Yasuhito; Yoshioka, Shingo; Kikuta, Junichi; Ishii, Masaru; Imajo, Masamichi; Nishida, Eisuke; Fujioka, Yoichiro; Ohba, Yusuke; Sato, Toshiro; Fujita, Yasuyuki

    2017-05-01

    Recent studies have revealed that newly emerging transformed cells are often apically extruded from epithelial tissues. During this process, normal epithelial cells can recognize and actively eliminate transformed cells, a process called epithelial defence against cancer (EDAC). Here, we show that mitochondrial membrane potential is diminished in RasV12-transformed cells when they are surrounded by normal cells. In addition, glucose uptake is elevated, leading to higher lactate production. The mitochondrial dysfunction is driven by upregulation of pyruvate dehydrogenase kinase 4 (PDK4), which positively regulates elimination of RasV12-transformed cells. Furthermore, EDAC from the surrounding normal cells, involving filamin, drives the Warburg-effect-like metabolic alteration. Moreover, using a cell-competition mouse model, we demonstrate that PDK-mediated metabolic changes promote the elimination of RasV12-transformed cells from intestinal epithelia. These data indicate that non-cell-autonomous metabolic modulation is a crucial regulator for cell competition, shedding light on the unexplored events at the initial stage of carcinogenesis.

  14. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651

  15. Stratigraphy, Structure and Tectonics of the Eyjafjarðaráll Rift, Abandoned Southern Segment of the Kolbeinsey Ridge, North Iceland

    NASA Astrophysics Data System (ADS)

    Brandsdottir, B.; Karson, J. A.; Magnúsdóttir, S.; Detrick, B.; Driscoll, N. W.

    2017-12-01

    The multi-branched plate boundary across Iceland is made up of divergent and oblique rifts, and transform zones, characterized by entwined extensional and transform tectonics. The Tjörnes Fracture Zone (TFZ) is a complex transform linking the northern rift zone (NVZ) on land with the offshore Kolbeinsey Ridge. The TFZ lacks the clear topographic expression typical of oceanic fracture zones. The transform zone is roughly 150 km long (E-W) by 50-75 km wide (N-S), with three N-S trending pull-apart basins bounded by a complex array of normal and oblique-slip faults. The offshore extension of the NVZ, the Grímsey Oblique Rift, is composed of several active volcanic systems with N-S trending fissure swarms, including the Skjálfandadjúp Basin (SB). The magma-starved southern extension of the KR, the 80 km (N-S) by 15-20 km (E-W) Eyjafjarðaráll Rift (ER), is made up of dominantly normal faults merging southwards with a system of right-lateral strike-slip faults with vertical displacements of up to 15 m in the Húsavík Flatey Fault Zone (HFFZ). The northern ER is a 500-700 m deep asymmetric rift, framed by normal faults with 20-25 m vertical displacement. To the south, transform movement associated with the HFFZ has created a NW-striking pull-apart basin with frequent earthquake swarms. Details of the tectonic framework of the ER are documented in a compilation of data from aerial photos, satellite images, field mapping, multibeam bathymetry, high-resolution seismic reflection surveys (Chirp) and seismicity. The TFZ rift basins contain post-glacial sediments of variable thickness. Strata in the western ER and SB basins dip steeply east along the normal faults, towards the deepest part of the rift. The eastern sides of the ER and SB basins differ considerably from the western sides, with near-vertical faults. Correlation of Chirp reflection data and tephrochronology from a sediment core reveals major rifting episodes between 10 and 12.1 kyr BP activating both the Eyjafjarðaráll and Skjálfandadjúp rift basins, followed by smaller-scale fault movements throughout the Holocene. These vertical fault movements reflect elevated tectonic activity during early postglacial time, coinciding with isostatic rebound and enhanced volcanism within Iceland.

  16. Decomposition of PCBs in transformer oil using an electron beam accelerator

    NASA Astrophysics Data System (ADS)

    Jung, In-Ha; Lee, Myun-Joo; Mah, Yoon-Jung

    2012-07-01

    Decomposition of PCBs in commercially used transformer oil, in service for more than 30 years, has been carried out at normal temperature and pressure, without any additives, using an electron beam accelerator. The experiments were carried out in two ways, in batch mode and in a continuous pilot plant, using a commercial-scale accelerator with 1.5 MeV energy, a 50 mA current, and 75 kW of power. The electron beam irradiation appeared to transform large-molecular-weight compounds into lower ones, but the impact on the physical properties of the oil was considered small. Residual concentrations of PCBs after irradiation depend on the absorbed dose of electron beam energy, but aliphatic chloride compounds were produced at higher irradiation doses. According to the FT-NMR results, chloride ions released from the PCBs are likely to react with aliphatic hydrocarbon compounds rather than remaining as free radical ions in the transformer oil. Since this is a dry process, the treated oil can be used as cutting oil or machine oil for heavy equipment without any additional treatment.

  17. Changes in cell surface structure by viral transformation studied by binding of lectins differing in sugar specificity.

    PubMed

    Tsuda, M; Kurokawa, T; Takeuchi, M; Sugino, Y

    1975-10-01

    Changes in cell surface structure caused by viral transformation were studied by examining changes in the binding of various lectins differing in carbohydrate specificity. Binding of lectins was assayed directly using cells grown on coverslips. The following 125I-lectins were used: concanavalin A (specific for glucose and mannose), wheat germ agglutinin (specific for N-acetylglucosamine), castor bean agglutinin (specific for galactose), Wistaria floribunda agglutinin (specific for N-acetylgalactosamine), and soybean agglutinin (specific for N-acetylgalactosamine). Cells of a clone, SS7, transformed by bovine adenovirus type-3, were found to bind 5 to 6 times more Wistaria floribunda agglutinin than the normal counterpart cells (clone C31, from C3H mouse kidney). In contrast, the binding of soybean agglutinin, which has a sugar specificity similar to that of Wistaria floribunda agglutinin, to normal and transformed cells was similar. The binding of wheat germ agglutinin and castor bean agglutinin, respectively, to normal and transformed cells was also similar. However, normal cells bound twice as much concanavalin A as transformed cells. Only half as much Wistaria floribunda agglutinin was bound to transformed cells when they had been dispersed with EDTA. These changes in the number of lectin binding sites on transformation are thought to reflect alteration of the cell surface structure. The amount of lectin bound per cell decreased with increasing cell density, especially in the case of binding of Wistaria floribunda agglutinin to normal cells.

  18. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to work with model errors that are normally distributed with constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors first increases rapidly, then slowly, and eventually approaches a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
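
    A minimal Python sketch of the transformation pair may help make the idea concrete. It assumes the commonly cited log-sinh form z = (1/b)·ln(sinh(a + b·y)); the parameter values and the synthetic streamflow-like data below are illustrative only, not taken from the paper's case studies.

      import numpy as np

      def log_sinh(y, a, b):
          # Log-sinh transform: z = (1/b) * ln(sinh(a + b*y))
          return np.log(np.sinh(a + b * y)) / b

      def inv_log_sinh(z, a, b):
          # Inverse: y = (arcsinh(exp(b*z)) - a) / b
          return (np.arcsinh(np.exp(b * z)) - a) / b

      rng = np.random.default_rng(0)
      flow = rng.lognormal(mean=1.0, sigma=0.8, size=1000)  # positively skewed variable
      a, b = 0.1, 0.05                                      # hypothetical parameters
      z = log_sinh(flow, a, b)
      assert np.allclose(inv_log_sinh(z, a, b), flow)

    For small a + b·y the transform behaves roughly like a logarithm, while for large values it becomes nearly linear, which matches the error pattern described above.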

  19. Mechanism for generating the anomalous uplift of oceanic core complexes: Atlantis Bank, southwest Indian Ridge

    NASA Astrophysics Data System (ADS)

    Baines, A. Graham; Cheadle, Michael J.; Dick, Henry J. B.; Hosford Scheirer, Allegra; John, Barbara E.; Kusznir, Nick J.; Matsumoto, Takeshi

    2003-12-01

    Atlantis Bank is an anomalously uplifted oceanic core complex adjacent to the Atlantis II transform, on the southwest Indian Ridge, that rises >3 km above normal seafloor of the same age. Models of flexural uplift due to detachment faulting can account for ˜1 km of this uplift. Postdetachment normal faults have been observed during submersible dives and on swath bathymetry. Two transform-parallel, large-offset (hundreds of meters) normal faults are identified on the eastern flank of Atlantis Bank, with numerous smaller faults (tens of meters) on the western flank. Flexural uplift associated with this transform-parallel normal faulting is consistent with gravity data and can account for the remaining anomalous uplift of Atlantis Bank. Extension normal to the Atlantis II transform may have occurred during a 12 m.y. period of transtension initiated by a 10° change in spreading direction ca. 19.5 Ma. This extension may have produced the 120-km-long transverse ridge of which Atlantis Bank is a part, and is consistent with stress reorientation about a weak transform fault.

  20. Mechanism for generating the anomalous uplift of oceanic core complexes: Atlantis Bank, southwest Indian Ridge

    USGS Publications Warehouse

    Baines, A.G.; Cheadle, Michael J.; Dick, H.J.B.; Scheirer, A.H.; John, Barbara E.; Kusznir, N.J.; Matsumoto, T.

    2003-01-01

    Atlantis Bank is an anomalously uplifted oceanic core complex adjacent to the Atlantis II transform, on the southwest Indian Ridge, that rises >3 km above normal seafloor of the same age. Models of flexural uplift due to detachment faulting can account for ~1 km of this uplift. Postdetachment normal faults have been observed during submersible dives and on swath bathymetry. Two transform-parallel, large-offset (hundreds of meters) normal faults are identified on the eastern flank of Atlantis Bank, with numerous smaller faults (tens of meters) on the western flank. Flexural uplift associated with this transform-parallel normal faulting is consistent with gravity data and can account for the remaining anomalous uplift of Atlantis Bank. Extension normal to the Atlantis II transform may have occurred during a 12 m.y. period of transtension initiated by a 10° change in spreading direction ca. 19.5 Ma. This extension may have produced the 120-km-long transverse ridge of which Atlantis Bank is a part, and is consistent with stress reorientation about a weak transform fault.

  1. Substrate flexibility regulates growth and apoptosis of normal but not transformed cells

    NASA Technical Reports Server (NTRS)

    Wang, H. B.; Dembo, M.; Wang, Y. L.

    2000-01-01

    One of the hallmarks of oncogenic transformation is anchorage-independent growth (27). Here we demonstrate that responses to substrate rigidity play a major role in distinguishing the growth behavior of normal cells from that of transformed cells. We cultured normal or H-ras-transformed NIH 3T3 cells on flexible collagen-coated polyacrylamide substrates with similar chemical properties but different rigidity. Compared with cells cultured on stiff substrates, nontransformed cells on flexible substrates showed a decrease in the rate of DNA synthesis and an increase in the rate of apoptosis. These responses on flexible substrates are coupled to decreases in cell spreading area and traction forces. In contrast, transformed cells maintained their growth and apoptotic characteristics regardless of substrate flexibility. The responses in cell spreading area and traction forces to substrate flexibility were similarly diminished. Our results suggest that normal cells are capable of probing substrate rigidity and that proper mechanical feedback is required for regulating cell shape, cell growth, and survival. The loss of this response can explain the unregulated growth of transformed cells.

  2. PKA-regulated VASP phosphorylation promotes extrusion of transformed cells from the epithelium

    PubMed Central

    Anton, Katarzyna A.; Sinclair, John; Ohoka, Atsuko; Kajita, Mihoko; Ishikawa, Susumu; Benz, Peter M.; Renne, Thomas; Balda, Maria; Matter, Karl; Fujita, Yasuyuki

    2014-01-01

    At the early stages of carcinogenesis, transformation occurs in single cells within tissues. In an epithelial monolayer, such mutated cells are recognized by their normal neighbors and are often apically extruded. The apical extrusion requires cytoskeletal reorganization and changes in cell shape, but the molecular switches involved in the regulation of these processes are poorly understood. Here, using stable isotope labeling by amino acids in cell culture (SILAC)-based quantitative mass spectrometry, we have identified proteins that are modulated in transformed cells upon their interaction with normal cells. Phosphorylation of VASP at serine 239 is specifically upregulated in RasV12-transformed cells when they are surrounded by normal cells. VASP phosphorylation is required for the cell shape changes and apical extrusion of Ras-transformed cells. Furthermore, PKA is activated in Ras-transformed cells that are surrounded by normal cells, leading to VASP phosphorylation. These results indicate that the PKA–VASP pathway is a crucial regulator of tumor cell extrusion from the epithelium, and they shed light on the events occurring at the early stage of carcinogenesis. PMID:24963131

  3. On the relationship between thermal emissivity and the Normalized Difference Vegetation Index for natural surfaces

    NASA Technical Reports Server (NTRS)

    Van De Griend, A. A.; Owe, M.

    1993-01-01

    The spatial variation of both the thermal emissivity (8-14 microns) and the Normalized Difference Vegetation Index (NDVI) was measured for a series of natural surfaces within a savanna environment in Botswana. The measurements were performed with an emissivity box and with a combined red and near-IR radiometer with spectral bands corresponding to NOAA/AVHRR. It was found that thermal emissivity was highly correlated with NDVI after logarithmic transformation, with a correlation coefficient of R = 0.94. This empirical relationship is of potential use for energy balance studies using thermal IR remote sensing. The relationship was used in combination with AVHRR (GAC), AVHRR (LAC), and Landsat (TM) data to demonstrate and compare the spatial variability at various spatial scales.
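
    As a rough illustration of how such a log-linear relation is used, the sketch below computes NDVI from red and near-IR reflectances and maps it to emissivity with ε = a + b·ln(NDVI). The coefficients a and b here are placeholders, not the values fitted in this study.

      import numpy as np

      def ndvi(nir, red):
          # Normalized Difference Vegetation Index
          return (nir - red) / (nir + red)

      def emissivity_from_ndvi(v, a=0.99, b=0.05):
          # Empirical log-linear relation eps = a + b*ln(NDVI); a and b are placeholders
          return a + b * np.log(v)

      red = np.array([0.08, 0.10, 0.12])   # illustrative red-band reflectances
      nir = np.array([0.35, 0.30, 0.20])   # illustrative near-IR reflectances
      v = ndvi(nir, red)
      print(np.round(emissivity_from_ndvi(v), 3))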

  4. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, anisotropy of compressive and tensile strength regions of femur trabecular bone are analysed using quaternion wavelet transforms. The normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Further, anisotropy is also calculated from the transformed images and is analyzed. Results show that the anisotropy values derived from second and third phase components of quaternion wavelet transform are found to be distinct for normal and abnormal samples with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples.

  5. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

    The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations also is more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.

  6. Episodic Rifting Events Within the Tjörnes Fracture Zone, an Onshore-Offshore Ridge-Transform in N-Iceland

    NASA Astrophysics Data System (ADS)

    Brandsdottir, B.; Magnusdottir, S.; Karson, J. A.; Detrick, R. S.; Driscoll, N. W.

    2015-12-01

    The multi-branched plate boundary across Iceland is made up of divergent and oblique rifts and transform zones, characterized by entwined extensional and transform tectonics. The Tjörnes Fracture Zone (TFZ), located on the coast and offshore of Northern Iceland, is a complex transform linking the northern rift zone (NVZ) on land with the Kolbeinsey Ridge offshore. Extension across the TFZ is partitioned across three N-S trending rift basins, Eyjafjarðaráll, Skjálfandadjúp (SB) and Öxarfjörður, and three WNW-NW oriented seismic lineaments: the Grímsey Oblique Rift, the Húsavík-Flatey Faults (HFFs) and the Dalvík Lineament. We compile the tectonic framework of the TFZ ridge-transform from aerial photos, satellite images, multibeam bathymetry and high-resolution seismic reflection data (Chirp). The rift basins are made up of normal faults with vertical displacements of up to 50-60 m, and post-glacial sediments of variable thickness. The SB comprises N5°W obliquely trending, eastward dipping normal faults as well as N10°E striking, westward dipping faults oriented roughly perpendicular to the N104°E spreading direction, indicative of early stages of rifting. Correlation of Chirp reflection data and tephrochronology from a sediment core within the SB reveals major rifting episodes between 10 and 12.1 kyr BP activating the whole basin, followed by smaller-scale fault movements throughout the Holocene. Onshore faults have the same orientations as those mapped offshore and provide a basis for the interpretation of the kinematics of the faults throughout the region. These include transform-parallel right-lateral strike-slip faults separating domains dominated by spreading-parallel left-lateral bookshelf faults. Shearing is most prominent along the HFFs, a system of right-lateral strike-slip faults with vertical displacement of up to 15 m. Vertical fault movements reflect increased tectonic activity during early postglacial time, coinciding with isostatic rebound and enhanced volcanism within Iceland.

  7. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    PubMed

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
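
    For readers unfamiliar with the mechanics, the snippet below shows a simple maximum-likelihood estimate of the Box-Cox parameter on skewed, strictly positive data using scipy; the study itself works with the mode of a marginal posterior inside a Bayesian hierarchical model, which this sketch does not attempt to reproduce.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      y = rng.gamma(shape=2.0, scale=3.0, size=500) + 1.0  # synthetic positive, skewed trait

      y_bc, lam = stats.boxcox(y)                          # ML estimate of the Box-Cox parameter
      print(f"estimated lambda: {lam:.3f}")
      print(f"skewness before: {stats.skew(y):.2f}, after: {stats.skew(y_bc):.2f}")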

  8. Rab5-regulated endocytosis plays a crucial role in apical extrusion of transformed cells.

    PubMed

    Saitoh, Sayaka; Maruyama, Takeshi; Yako, Yuta; Kajita, Mihoko; Fujioka, Yoichiro; Ohba, Yusuke; Kasai, Nobuhiro; Sugama, Natsu; Kon, Shunsuke; Ishikawa, Susumu; Hayashi, Takashi; Yamazaki, Tomohiro; Tada, Masazumi; Fujita, Yasuyuki

    2017-03-21

    Newly emerging transformed cells are often eliminated from epithelial tissues. Recent studies have revealed that this cancer-preventive process involves the interaction with the surrounding normal epithelial cells; however, the molecular mechanisms underlying this phenomenon remain largely unknown. In this study, using mammalian cell culture and zebrafish embryo systems, we have elucidated the functional involvement of endocytosis in the elimination of RasV12-transformed cells. First, we show that Rab5, a crucial regulator of endocytosis, is accumulated in RasV12-transformed cells that are surrounded by normal epithelial cells, which is accompanied by up-regulation of clathrin-dependent endocytosis. Addition of chlorpromazine or coexpression of a dominant-negative mutant of Rab5 suppresses apical extrusion of RasV12 cells from the epithelium. We also show in zebrafish embryos that Rab5 plays an important role in the elimination of transformed cells from the enveloping layer epithelium. In addition, Rab5-mediated endocytosis of E-cadherin is enhanced at the boundary between normal and RasV12 cells. Rab5 functions upstream of epithelial protein lost in neoplasm (EPLIN), which plays a positive role in apical extrusion of RasV12 cells by regulating protein kinase A. Furthermore, we have revealed that epithelial defense against cancer (EDAC) from normal epithelial cells substantially impacts on Rab5 accumulation in the neighboring transformed cells. This report demonstrates that Rab5-mediated endocytosis is a crucial regulator for the competitive interaction between normal and transformed epithelial cells in mammals.

  9. A discrete polar Stockwell transform for enhanced characterization of tissue structure using MRI.

    PubMed

    Pridham, Glen; Steenwijk, Martijn D; Geurts, Jeroen J G; Zhang, Yunyan

    2018-05-02

    The purpose of this study was to present an effective algorithm for computing the discrete polar Stockwell transform (PST), investigate its unique multiscale and multi-orientation features, and explore potentially new applications including denoising and tissue segmentation. We investigated PST responses using both synthetic and MR images. Moreover, we compared the features of PST with both Gabor and Morlet wavelet transforms, and compared the PST with two wavelet approaches for denoising using MRI. Using a synthetic image, we also tested the edge effect of PST through signal-padding. Then, we constructed a partially supervised classifier using radial, marginal PST spectra of T2-weighted MRI, acquired from postmortem brains with multiple sclerosis. The classification involved three histology-verified tissue types: normal appearing white matter (NAWM), lesion, or other, along with 5-fold cross-validation. The PST generated a series of images with varying orientations or rotation-invariant scales. Radial frequencies highlighted image structures of different size, and angular frequencies enhanced structures by orientation. Signal-padding helped suppress boundary artifacts but required attention to incidental artifacts. In comparison, the Gabor transform produced more redundant images and the wavelet spectra appeared less spatially smooth than the PST. In addition, the PST demonstrated lower root-mean-square errors than other transforms in denoising and achieved a 93% accuracy for NAWM pixels (296/317), and 88% accuracy for lesion pixels (165/188) in MRI segmentation. The PST is a unique local spectral density-assessing tool which is sensitive to both structure orientations and scales. This may facilitate multiple new applications including advanced characterization of tissue structure in standard MRI. © 2018 International Society for Magnetic Resonance in Medicine.

  10. Nonlinear Schroedinger Approximations for Partial Differential Equations with Quadratic and Quasilinear Terms

    NASA Astrophysics Data System (ADS)

    Cummings, Patrick

    We consider the approximation of solutions of two complicated physical systems via the nonlinear Schrödinger equation (NLS). In particular, we discuss the evolution of wave packets and long waves in two physical models. Due to the complicated nature of the equations governing many physical systems and the in-depth knowledge we have of solutions of the nonlinear Schrödinger equation, it is advantageous to use approximation results of this kind to model these physical systems. The approximations are simple enough that we can use them to understand the qualitative and quantitative behavior of the solutions, and by justifying them we can show that the behavior of the approximation captures the behavior of solutions to the original equation, at least for long, but finite, time. We first consider a model of the water wave equations which can be approximated by wave packets using the NLS equation. We discuss a new proof that both simplifies and strengthens previous justification results of Schneider and Wayne. Rather than using analytic norms, as was done by Schneider and Wayne, we construct a modified energy functional so that the approximation holds for the full interval of existence of the approximate NLS solution as opposed to a subinterval (as is seen in the analytic case). Furthermore, the proof avoids problems associated with inverting the normal form transform by working with a modified energy functional motivated by Craig and Hunter et al. We then consider the Klein-Gordon-Zakharov system and prove a long wave approximation result. In this case there is a non-trivial resonance that cannot be eliminated via a normal form transform. By combining the normal form transform for small Fourier modes and using analytic norms elsewhere, we can get a justification result on the O(1/ε²) time scale.

  11. Multi-Scale Scattering Transform in Music Similarity Measuring

    NASA Astrophysics Data System (ADS)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  12. Mechanical phenotype of cancer cells: cell softening and loss of stiffness sensing.

    PubMed

    Lin, Hsi-Hui; Lin, Hsiu-Kuan; Lin, I-Hsuan; Chiou, Yu-Wei; Chen, Horn-Wei; Liu, Ching-Yi; Harn, Hans I-Chen; Chiu, Wen-Tai; Wang, Yang-Kao; Shen, Meng-Ru; Tang, Ming-Jer

    2015-08-28

    The ability to sense stiffness is required for cells to respond to the stiffness of the matrix. Here we determined whether normal cells and cancer cells display distinct mechanical phenotypes. Cancer cells were softer than their normal counterparts, regardless of the type of cancer (breast, bladder, cervix, pancreas, or Ha-RasV12-transformed cells). When cells were cultured on matrices of varying stiffness, low stiffness decreased proliferation in normal cells, while cancer cells and transformed cells lost this response. Thus, cancer cells undergo a change in their mechanical phenotype that includes cell softening and loss of stiffness sensing. Caveolin-1, which is suppressed in many tumor cells and in oncogene-transformed cells, regulates the mechanical phenotype. Caveolin-1-upregulated RhoA activity and Y397FAK phosphorylation directed actin cap formation, which was positively correlated with cell elasticity and stiffness sensing in fibroblasts. Ha-RasV12-induced transformation and changes in the mechanical phenotypes were reversed by re-expression of caveolin-1 and mimicked by the suppression of caveolin-1 in normal fibroblasts. This is the first study to describe this novel role for caveolin-1, linking mechanical phenotype to cell transformation. Furthermore, mechanical characteristics may serve as biomarkers for cell transformation.

  13. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  14. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  15. A result about scale transformation families in approximation

    NASA Astrophysics Data System (ADS)

    Apprato, Dominique; Gout, Christian

    2000-06-01

    Scale transformations are common in approximation. In surface approximation from rapidly varying data, one wants to suppress, or at least dampen, the oscillations of the approximation near steep gradients implied by the data. In that case, scale transformations can be used to give some control over overshoot when the surface has large variations of its gradient. Conversely, in image analysis, scale transformations are used in preprocessing to enhance some features present in the image or to increase jumps of grey levels before segmentation of the image. In this paper, we establish the convergence of an approximation method which allows some control over the behavior of the approximation. More precisely, we study the convergence of an approximation constructed from a given data set, while using scale transformations on the values before and after classical approximation. The construction of the scale transformations is also given. The algorithm is presented with some numerical examples.

  16. Nonlinear and non-Gaussian Bayesian based handwriting beautification

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2013-03-01

    A framework is proposed in this paper to effectively and efficiently beautify handwriting by means of a novel nonlinear and non-Gaussian Bayesian algorithm. In the proposed framework, the format and size of the handwriting image are first normalized, and a computer typeface is then applied to optimize the visual effect of the handwriting. Bayesian statistics is exploited to characterize the handwriting beautification process as a Bayesian dynamic model. The model parameters that translate, rotate and scale the computer typeface are controlled by the state equation, and the matching between the handwriting and the transformed typeface is optimized through the measurement equation. Finally, the new typeface, which is transformed from the original one and achieves the best fit under this nonlinear and non-Gaussian optimization, is the beautified handwriting. Experimental results demonstrate that the proposed framework provides a creative handwriting beautification methodology that improves visual acceptance.

  17. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and the Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies included simulation results from both the process-based Soil and Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and the Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
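
    The sketch below illustrates the general idea under simplifying assumptions: observed and simulated series are split into detail components with a basic à trous-style (undecimated) moving-average decomposition, and a Nash-Sutcliffe efficiency is computed per component. The decomposition, wavelet and any weighting used by the authors are not reproduced here.

      import numpy as np

      def atrous_decompose(x, levels=3):
          # Repeatedly smooth with a dilated averaging kernel; the detail at each
          # level is the difference between successive smooths (a trous scheme).
          details, smooth = [], np.asarray(x, dtype=float)
          for j in range(levels):
              kernel = np.zeros(2 * 2 ** j + 1)
              kernel[:: 2 ** j] = np.array([1.0, 2.0, 1.0]) / 4.0   # dilated smoothing taps
              smoother = np.convolve(smooth, kernel, mode="same")
              details.append(smooth - smoother)
              smooth = smoother
          return details, smooth                                    # details + residual trend

      def nse(obs, sim):
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def multiscale_nse(obs, sim, levels=3):
          d_obs, a_obs = atrous_decompose(obs, levels)
          d_sim, a_sim = atrous_decompose(sim, levels)
          return [nse(o, s) for o, s in zip(d_obs, d_sim)] + [nse(a_obs, a_sim)]

      rng = np.random.default_rng(2)
      t = np.arange(512)
      obs = np.sin(t / 20.0) + 0.3 * rng.standard_normal(t.size)
      sim = np.sin((t - 3) / 20.0) + 0.3 * rng.standard_normal(t.size)  # small timing error
      print(np.round(multiscale_nse(obs, sim), 2))

    In this toy example the fine-scale components are dominated by noise and score poorly, while the coarse trend component scores well despite the small timing error; separating the scales in this way is the kind of diagnosis a single global NSE cannot provide.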

  18. Face photo-sketch synthesis and recognition.

    PubMed

    Wang, Xiaogang; Tang, Xiaoou

    2009-11-01

    In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. To synthesize sketch/photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http://mmlab.ie.cuhk.edu.hk/facesketch.html).

  19. A restricted signature normal form for Hermitian matrices, quasi-spectral decompositions, and applications

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Huckle, Thomas

    1989-01-01

    In recent years, a number of results on the relationships between the inertias of Hermitian matrices and the inertias of their principal submatrices have appeared in the literature. We study restricted congruence transformations of Hermitian matrices M which, at the same time, induce a congruence transformation of a given principal submatrix A of M. Such transformations lead to the concept of the restricted signature normal form of M. In particular, by means of this normal form, we obtain short proofs of most of the known inertia theorems and also derive some new results of this type. For some applications, a special class of almost unitary restricted congruence transformations turns out to be useful. We show that, with such transformations, M can be reduced to a quasi-diagonal form which, in particular, displays the eigenvalues of A. Finally, applications of this quasi-spectral decomposition to generalized inverses and Hermitian matrix pencils are discussed.

  20. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when the data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted the FEC data best. Results indicated that transformation of the FEC data using the Box–Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated. PMID:22303406

  1. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    PubMed

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when the data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted the FEC data best. Results indicated that transformation of the FEC data using the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.

  2. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    PubMed

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three of the performance criteria were plots, and all plots were evaluated by 15 independent, blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedures for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox transformation followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization, and Box-Cox followed by rsn.
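
    To illustrate the general transform-then-normalize pattern evaluated in this study, the sketch below applies an asinh transformation to a synthetic analyte-by-sample intensity matrix and then quantile-normalizes the samples. Note that the combinations the authors actually recommend pair asinh with loess normalization and Box-Cox with quantile or robust spline normalization; this particular pairing and the synthetic data are for illustration only.

      import numpy as np

      def quantile_normalize(mat):
          # Force every column (sample) to share the same empirical distribution:
          # replace each value by the mean of the values having the same rank.
          ranks = np.argsort(np.argsort(mat, axis=0), axis=0)
          reference = np.sort(mat, axis=0).mean(axis=1)
          return reference[ranks]

      rng = np.random.default_rng(3)
      mfi = rng.lognormal(mean=6.0, sigma=1.0, size=(384, 42))  # analytes x samples (synthetic)

      transformed = np.arcsinh(mfi)               # variance-stabilizing asinh transformation
      normalized = quantile_normalize(transformed)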

  3. Stabilizing Conditional Standard Errors of Measurement in Scale Score Transformations

    ERIC Educational Resources Information Center

    Moses, Tim; Kim, YoungKoung

    2017-01-01

    The focus of this article is on scale score transformations that can be used to stabilize conditional standard errors of measurement (CSEMs). Three transformations for stabilizing the estimated CSEMs are reviewed, including the traditional arcsine transformation, a recently developed general variance stabilization transformation, and a new method…
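
    The traditional arcsine transformation reviewed here is, in its classical form for a proportion-correct score p based on n items, T(p) = 2·arcsin(√p), which makes the binomial sampling variance approximately constant at 1/n. The sketch below only demonstrates that variance-stabilizing property on simulated scores; it does not reproduce the CSEM machinery or the newer transformations discussed in the article.

      import numpy as np

      rng = np.random.default_rng(4)
      n_items = 40                                    # hypothetical test length

      for p in (0.2, 0.5, 0.8):                       # examinees at different true proficiencies
          scores = rng.binomial(n_items, p, size=20000) / n_items
          arcsine = 2.0 * np.arcsin(np.sqrt(scores))  # classical arcsine transformation
          print(f"p={p:.1f}  raw var={scores.var():.4f}  "
                f"arcsine var={arcsine.var():.4f}  (1/n = {1 / n_items:.4f})")

    The raw score variance changes with p, whereas the transformed variance stays close to 1/n across the proficiency range, which is what makes the conditional standard errors roughly constant on the transformed scale.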

  4. Broadband Structural Dynamics: Understanding the Impulse-Response of Structures Across Multiple Length and Time Scales

    DTIC Science & Technology

    2010-08-18

    [Briefing slide excerpts; only fragments are recoverable: the spectral-domain response is calculated and the time-domain response is obtained through an inverse transform; Approach 4, WASABI (Wavelet Analysis of Structural Anomalies...), applies a spectral-domain transfer function between the forward and inverse transforms of the time function.]

  5. Multi-scale Imaging of Cellular and Sub-cellular Structures using Scanning Probe Recognition Microscopy.

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Rice, A. F.

    2005-03-01

    Scanning Probe Recognition Microscopy is a new scanning probe capability under development within our group to reliably return to and directly interact with a specific nanobiological feature of interest. In previous work, we successfully recognized and classified tubular versus globular biological objects from experimental atomic force microscope images using a method based on normalized central moments [ref. 1]. In this paper we extend this work to include recognition schemes appropriate for cellular and sub-cellular structures. Globular cells containing tubular actin filaments are under investigation; thus there are differences in external/internal shapes and scales. A Continuous Wavelet Transform with a differential Gaussian mother wavelet is employed for multi-scale analysis. [ref. 1] Q. Chen, V. Ayres and L. Udpa, ``Biological Investigation Using Scanning Probe Recognition Microscopy,'' Proceedings 3rd IEEE Conference on Nanotechnology, vol. 2, p. 863-865 (2003).
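
    As a generic illustration of the normalized-central-moment idea used in the earlier classification work (not the group's actual classifier), the sketch below computes scale-normalized second-order moments for a toy elongated bar and a toy disc; the strong asymmetry between η20 and η02 for the bar versus their near-equality for the disc is what separates tubular from globular shapes.

      import numpy as np

      def normalized_central_moment(img, p, q):
          # Scale-normalized central moment eta_pq of a 2-D (binary or intensity) image.
          y, x = np.mgrid[: img.shape[0], : img.shape[1]]
          m00 = img.sum()
          xc, yc = (x * img).sum() / m00, (y * img).sum() / m00   # centroid
          mu = ((x - xc) ** p * (y - yc) ** q * img).sum()        # central moment
          return mu / m00 ** (1 + (p + q) / 2.0)                  # scale normalization

      bar = np.zeros((64, 64))
      bar[30:34, 8:56] = 1.0                                      # elongated ("tubular") object
      yy, xx = np.mgrid[:64, :64]
      disc = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)  # round ("globular") object

      for name, shape in (("bar", bar), ("disc", disc)):
          print(name,
                round(normalized_central_moment(shape, 2, 0), 4),
                round(normalized_central_moment(shape, 0, 2), 4))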

  6. Mild clinical involvement in two males with a large FMR1 premutation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagerman, R.; O'Connor, R.; Staley, L.

    1994-09-01

    Both male and female individuals who carry the FMR1 premutation are considered to be clinically unaffected and have been reported to have normal transcription of their FMR1 gene and normal FMR1 protein (FMRP) production. We have evaluated two males who are mildly affected clinically with features of fragile X syndrome and demonstrate a large premutation on DNA studies. The first patient is a 2 year 8 month old boy who demonstrated the fragile X chromosome in 3% of his lymphocytes on cytogenetic testing. His physical features include mildly prominent ears and hyperextensible finger joints. He has language delays along with behavioral problems including tantrums and attention deficit. Developmental testing revealed a mental scale of 116 on the Bayley Scales of Infant Development, which is in the normal range. DNA testing demonstrated a premutation with 161 CGG repeats. This premutation was methylated in a small percent of his cells (<2%). These findings were observed in both blood leukocytes and buccal cells. Protein studies of transformed lymphocytes from this boy showed approximately 50 to 70% of the normal level of FMRP. The second patient is a 14 year old male who was cytogenetically negative for fragile X expression. His physical exam demonstrates a long face, a high palate and macroorchidism (testicular volume of approximately 35 ml). His overall full scale IQ on the WISC-III is 73. He has language deficits and visual spatial perceptual deficits which have caused significant learning problems in school. Behaviorally he has problems with shyness and social anxiety, although he does not have attention deficit hyperactivity disorder. DNA testing revealed an FMR1 mutation of approximately 210 CGG repeats that is methylated in 4.7% of his cells.

  7. An investigation of ride quality rating scales

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Leatherwood, J. D.

    1977-01-01

    An experimental investigation was conducted for the combined purposes of determining the relative merits of various category scales for the prediction of human discomfort response to vibration and of determining the mathematical relationships whereby subjective data are transformed from one scale to other scales. Sixteen category scales were analyzed, representing various parametric combinations of polarity (unipolar or bipolar), scale type, and number of scalar points. Results indicated that unipolar continuous-type scales containing either seven or nine scalar points provide the greatest reliability and discriminability. Transformations of subjective data between category scales were found to be feasible, with unipolar scales having a larger number of scalar points providing the greatest accuracy of transformation. The results contain coefficients for transformation of subjective data between the category scales investigated. A result of particular interest was that the comfort half of a bipolar scale was seldom used by subjects to describe their subjective reaction to vibration.

  8. Flow-covariate prediction of stream pesticide concentrations.

    PubMed

    Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin

    2018-01-01

    Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.

  9. Transformation techniques for cross-sectional and longitudinal endocrine data: application to salivary cortisol concentrations.

    PubMed

    Miller, Robert; Plessow, Franziska

    2013-06-01

    Endocrine time series often lack normality and homoscedasticity, most likely due to the non-linear dynamics of their natural determinants and the immanent characteristics of the biochemical analysis tools, respectively. As a consequence, data transformation (e.g., log-transformation) is frequently applied to enable general linear model-based analyses. However, to date, data transformation techniques vary substantially across studies, and the question of which power transformation is optimal remains to be addressed. The present report aims to provide a common solution for the analysis of endocrine time series by systematically comparing different power transformations with regard to their impact on data normality and homoscedasticity. For this, a variety of power transformations of the Box-Cox family were applied to salivary cortisol data of 309 healthy participants sampled in temporal proximity to a psychosocial stressor (the Trier Social Stress Test). Our analyses show that untransformed as well as log-transformed data are inferior in terms of meeting normality and homoscedasticity; they also identify optimal transformations both for cross-sectional cortisol samples reflecting the distributional concentration equilibrium and for longitudinal cortisol time series comprising systematically altered hormone distributions that result from simultaneously elicited pulsatile change and continuous elimination processes. Considering these dynamics of endocrine oscillations, data transformation prior to testing GLMs seems mandatory to minimize biased results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Neutron monitor generated data distributions in quantum variational Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential application of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as the source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
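
    The pipeline described above can be sketched as follows, using a synthetic stand-in for a one-minute count-rate channel: fit a smoothing spline, subtract it, standardize the residuals to zero mean and unit variance, and then map the approximately normal variates to approximately uniform ones through the normal CDF (one way of reading the inverse-transform step; the paper's actual implementation details are not reproduced here).

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      t = np.arange(1440.0)                                       # one day at 1-minute resolution
      counts = 6000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 25, t.size)

      trend = UnivariateSpline(t, counts, s=t.size * 25 ** 2)(t)  # smooth fit to the raw data
      resid = counts - trend                                      # extracted stochastic component

      z = (resid - resid.mean()) / resid.std()                    # ~ standard normal variates
      u = norm.cdf(z)                                             # ~ uniform variates on (0, 1)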

  11. Using reactive transport codes to provide mechanistic biogeochemistry representations in global land surface models: CLM-PFLOTRAN 1.0

    DOE PAGES

    Tang, G.; Yuan, F.; Bisht, G.; ...

    2015-12-17

    We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50 % more computing time than CLM in point mode site simulations using either the scaling or clipping methods. The log transformation method takes 60–100 % more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation for a half saturation decrease from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20 %; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with very tight relative update tolerance. As some biogeochemical processes (e.g., methane and nitrous oxide production and consumption) involve very low half saturation and threshold concentrations, this work provides insights for addressing nonphysical negativity issues and facilitates the representation of a mechanistic biogeochemical description in earth system models to reduce climate prediction uncertainty.
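
    A toy sketch of the three strategies mentioned above, applied to a single Newton update of a concentration vector, is given below. The species labels, floor value and damping factor are illustrative; the actual CLM-PFLOTRAN coupling operates on the full residual/Jacobian system and is far more involved.

      import numpy as np

      def apply_update(c, dc, method, floor=1.0e-20):
          # Apply a Newton update dc to concentrations c while avoiding negative values.
          if method == "scaling":
              # Damp the whole step so the most-limiting entry stays positive.
              bad = dc < -c
              alpha = 1.0 if not bad.any() else 0.9 * np.min(-c[bad] / dc[bad])
              return c + alpha * dc
          if method == "clipping":
              # Take the step, then clip negative entries to a small floor value.
              return np.maximum(c + dc, floor)
          if method == "log":
              # Update ln(c) instead of c (dc/c is the first-order log-space step),
              # which keeps concentrations positive by construction.
              return c * np.exp(dc / c)
          raise ValueError(method)

      c = np.array([1.0e-3, 5.0e-9, 2.0e-6])     # e.g. three nitrogen species (illustrative)
      dc = np.array([-2.0e-3, -4.0e-9, 1.0e-6])  # a raw step that would drive one entry negative
      for method in ("scaling", "clipping", "log"):
          print(method, apply_update(c, dc, method))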

  12. Is this the right normalization? A diagnostic tool for ChIP-seq normalization.

    PubMed

    Angelini, Claudia; Heller, Ruth; Volkinshtein, Rita; Yekutieli, Daniel

    2015-05-09

    ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks and RNA Polymerase II occupancy. However, when comparing a ChIP sample versus a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of normalization method can have on the results of a ChIP-seq data analysis, the assessment of normalization methods is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard tools for peak calling such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated into many peak caller algorithms to improve the accuracy of peak identification. The diagnostic plot proposed in this paper can be used to assess how adequate ChIP/Input normalization constants are, and thus it allows the user to choose the most adequate estimate for the analysis.
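
    A minimal sketch of this kind of diagnostic plot is given below: window-level counts are binned by total read count, the empirical density of the log ChIP/Input ratio is drawn per bin, and the log of the candidate normalization constant is overlaid. It is only meant to convey the idea; the estimators evaluated in the paper (CisGenome, NCIS, CCAT) and its exact plot construction are not reproduced.

      import numpy as np
      import matplotlib.pyplot as plt

      def normalization_diagnostic(chip, inp, r_hat, n_bins=4, pseudo=0.5):
          # Densities of log(ChIP/Input) within bins of similar total read count,
          # with the log of the candidate scaling constant overlaid.
          log_ratio = np.log((chip + pseudo) / (inp + pseudo))
          total = chip + inp
          edges = np.quantile(total, np.linspace(0.0, 1.0, n_bins + 1))
          for i in range(n_bins):
              sel = (total >= edges[i]) & (total <= edges[i + 1])
              plt.hist(log_ratio[sel], bins=60, density=True, histtype="step",
                       label=f"read-count bin {i + 1}")
          plt.axvline(np.log(r_hat), color="k", linestyle="--", label="log of constant")
          plt.xlabel("log relative enrichment (ChIP vs Input)")
          plt.ylabel("density")
          plt.legend()
          plt.show()

      # Synthetic background-only windows where the true ChIP/Input depth ratio is 1.5.
      rng = np.random.default_rng(6)
      lam = rng.gamma(2.0, 5.0, size=20000)
      inp = rng.poisson(lam)
      chip = rng.poisson(1.5 * lam)
      normalization_diagnostic(chip, inp, r_hat=1.5)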

  13. Big biomedical data and cardiovascular disease research: opportunities and challenges.

    PubMed

    Denaxas, Spiros C; Morley, Katherine I

    2015-07-01

    Electronic health records (EHRs), data generated and collected during normal clinical care, are increasingly being linked and used for translational cardiovascular disease research. Electronic health record data can be structured (e.g. coded diagnoses) or unstructured (e.g. clinical notes) and increasingly encapsulate medical imaging, genomic and patient-generated information. Large-scale EHR linkages enable researchers to conduct high-resolution observational and interventional clinical research at an unprecedented scale. A significant amount of preparatory work and research, however, is required to identify, obtain, and transform raw EHR data into research-ready variables that can be statistically analysed. This study critically reviews the opportunities and challenges that EHR data present in the field of cardiovascular disease clinical research and provides a series of recommendations for advancing and facilitating EHR research.

  14. Tumor suppressors Sav/Scrib and oncogene Ras regulate stem cell transformation in adult Drosophila Malpighian Tubules

    PubMed Central

    Zeng, Xiankun; Singh, Shree Ram; Hou, David; Hou, Steven X.

    2012-01-01

    An increasing body of evidence suggests that tumors might originate from a few transformed cells that share many properties with normal stem cells. However, it remains unclear how normal stem cells are transformed into cancer stem cells. Here, we demonstrated that mutations causing the loss of tumor suppressor Sav or Scrib or activation of the oncogene Ras transform normal stem cells into cancer stem cells through a multistep process in the adult Drosophila Malpighian Tubules (MTs). In wild-type MTs, each stem cell generates one self-renewing and one differentiating daughter cell. However, in flies with loss-of-function sav or scrib or gain-of-function Ras mutations, both daughter cells grew and behaved like stem cells, leading to the formation of tumors in MTs. Ras functioned downstream of Sav and Scrib in regulating the stem cell transformation. The Ras-transformed stem cells exhibited many of the hallmarks of cancer, such as increased proliferation, reduced cell death, and failure to differentiate. We further demonstrated that several signal transduction pathways (including MEK/MAPK, RhoA, PKA, and TOR) mediate Ras' function in the stem cell transformation. Therefore, we have identified a molecular mechanism that regulates stem cell transformation, and this finding may lead to strategies for preventing tumor formation in certain organs. PMID:20432470

  15. Box-Cox transformation for QTL mapping.

    PubMed

    Yang, Runqing; Yi, Nengjun; Xu, Shizhong

    2006-01-01

    The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called the transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
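
    For reference, the Box-Cox family referred to above, written with transformation factor λ in its standard textbook form (not quoted from the paper):

    ```latex
    y^{(\lambda)} =
    \begin{cases}
      \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[1ex]
      \ln y, & \lambda = 0,
    \end{cases}
    \qquad y > 0 .
    ```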

  16. The stabilized, wavelet-Mellin transform for analyzing the size and shape information of vocalized sounds

    NASA Astrophysics Data System (ADS)

    Irino, Toshio; Patterson, Roy

    2005-04-01

    We hear vowels produced by men, women, and children as approximately the same although there is considerable variability in glottal pulse rate and vocal tract length. At the same time, we can identify the speaker group. Recent experiments show that it is possible to identify vowels even when the glottal pulse rate and vocal tract length are condensed or expanded beyond the range of natural vocalization. This suggests that the auditory system has an automatic process to segregate information about shape and size of the vocal tract. Recently we proposed that the auditory system uses some form of Stabilized, Wavelet-Mellin Transform (SWMT) to analyze scale information in bio-acoustic sounds as a general framework for auditory processing from cochlea to cortex. This talk explains the theoretical background of the model and how the vocal information is normalized in the representation. [Work supported by GASR(B)(2) No. 15300061, JSPS.]

  17. Theory of strong turbulence by renormalization

    NASA Technical Reports Server (NTRS)

    Tchen, C. M.

    1981-01-01

    The hydrodynamical equations of turbulent motions are inhomogeneous and nonlinear in their inertia and force terms and will generate a hierarchy. A kinetic method was developed to transform the hydrodynamic equations into a master equation governing the velocity distribution, as a function of the time, the position and the velocity as an independent variable. The master equation presents the advantage of being homogeneous and having fewer nonlinear terms and is therefore simpler for the investigation of closure. After the closure by means of a cascade scaling procedure, the kinetic equation is derived and possesses a memory which represents the non-Markovian character of turbulence. The kinetic equation is transformed back to the hydrodynamical form to yield an energy balance in the cascade form. Normal and anomalous transports are analyzed. The theory is described for incompressible, compressible and plasma turbulence. Applications of the method to problems relating to sound generation and the propagation of light in a nonfrozen turbulence are considered.

  18. Robust image watermarking using DWT and SVD for copyright protection

    NASA Astrophysics Data System (ADS)

    Harjito, Bambang; Suryani, Esti

    2017-02-01

    The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is called the cover medium, and the watermark image is converted into gray scale. Then, they are transformed using the DWT so that they can be split into several sub-bands, namely sub-bands LL2, LH2 and HL2. The watermark image is embedded into the cover medium on sub-band LL2. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson and salt-and-pepper noise. Under these attacks, the average Normalized Correlation (NC) values are 0.574863, 0.889784 and 0.889782, respectively. The watermark image can be detected and extracted.
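
    One common definition of the Normalized Correlation (NC) used to compare the original and extracted watermarks is sketched below; the paper does not spell out its exact formula, so this form is an assumption.

    ```python
    import numpy as np

    def normalized_correlation(w_orig, w_extracted):
        """Normalized Correlation between original and extracted watermark
        arrays of the same shape (values near 1 indicate good extraction)."""
        w1 = w_orig.astype(float).ravel()
        w2 = w_extracted.astype(float).ravel()
        return float(np.dot(w1, w2) /
                     (np.linalg.norm(w1) * np.linalg.norm(w2) + 1e-12))
    ```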

  19. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p) provide a good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  20. A Box-Cox normal model for response times.

    PubMed

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.

  1. An efficient and numerically stable procedure for generating sextic force fields in normal mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz

    In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3 mode representation quartic force fields in curvilinear normal mode coordinates to 4 mode representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm⁻¹ in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.

  2. Length-Scale-Dependent Phase Transformation of LiFePO4 : An In situ and Operando Study Using Micro-Raman Spectroscopy and XRD.

    PubMed

    Siddique, N A; Salehi, Amir; Wei, Zi; Liu, Dong; Sajjad, Syed D; Liu, Fuqiang

    2015-08-03

    The charge and discharge of lithium ion batteries are often accompanied by electrochemically driven phase-transformation processes. In this work, two in situ and operando methods, that is, micro-Raman spectroscopy and X-ray diffraction (XRD), have been combined to study the phase-transformation process in LiFePO4 at two distinct length scales, namely, particle-level scale (∼1 μm) and macroscopic scale (∼several cm). In situ Raman studies revealed a discrete mode of phase transformation at the particle level. Besides, the preferred electrochemical transport network, particularly the carbon content, was found to govern the sequence of phase transformation among particles. In contrast, at the macroscopic level, studies conducted at four different discharge rates showed a continuous but delayed phase transformation. These findings uncovered the intricate phase transformation in LiFePO4 and potentially offer valuable insights into optimizing the length-scale-dependent properties of battery materials. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Martensitelike spontaneous relaxor-normal ferroelectric transformation in Pb(Zn1/3Nb2/3)O3-PbLa(ZrTi)O3 system

    NASA Astrophysics Data System (ADS)

    Deng, Guochu; Ding, Aili; Li, Guorong; Zheng, Xinsen; Cheng, Wenxiu; Qiu, Pingsun; Yin, Qingrui

    2005-11-01

    The spontaneous relaxor-normal ferroelectric transformation was found in the tetragonal composition of the Pb(Zn1/3Nb2/3)O3-PbLa(ZrTi)O3 (0.3PZN-0.7PLZT) complex ABO3 system. The corresponding dielectric permittivities and losses of different compositions located near the morphotropic phase boundary were analyzed. By reviewing all of the results about this type of transformation in previous references, the electric, compositional, structural, and thermodynamic characteristics of the spontaneous relaxor-normal transformation were proposed. Additionally, the adaptive phase model for martensite transformation proposed by Khachaturyan et al. [Phys. Rev. B 43, 10832 (1991)] was introduced into this ferroelectric transformation to explain the unique transformation pathway and associated features such as the tweedlike domain patterns and the dielectric dispersion below the critical transition temperature. Because the critical compositions lie near the MPB, these ferroelectric materials fulfill the condition under which adaptive phases can form during the transformation. The formation of the adaptive phases, which are composed of stress-accommodating twinned domains, makes the system bypass the energy barrier encountered in conventional martensite transformations. The twinned adaptive phase corresponds to the tweedlike domain pattern under a transmission electron microscope. At lower temperature, these precursor phases transform into the conventional ferroelectric state with macrodomains by the movement of domain walls, which causes a weak dispersion in dielectric permittivity.

  4. On the wall-normal velocity of the compressible boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1991-01-01

    Numerical methods for the compressible boundary-layer equations are facilitated by transformation from the physical (x,y) plane to a computational (xi,eta) plane in which the evolution of the flow is 'slow' in the time-like xi direction. The commonly used Levy-Lees transformation results in a computationally well-behaved problem for a wide class of non-similar boundary-layer flows, but it complicates interpretation of the solution in physical space. Specifically, the transformation is inherently nonlinear, and the physical wall-normal velocity is transformed out of the problem and is not readily recovered. In light of recent research which shows mean-flow non-parallelism to significantly influence the stability of high-speed compressible flows, the contribution of the wall-normal velocity in the analysis of stability should not be routinely neglected. Conventional methods extract the wall-normal velocity in physical space from the continuity equation, using finite-difference techniques and interpolation procedures. The present spectrally-accurate method extracts the wall-normal velocity directly from the transformation itself, without interpolation, leaving the continuity equation free as a check on the quality of the solution. The present method for recovering wall-normal velocity, when used in conjunction with a highly-accurate spectral collocation method for solving the compressible boundary-layer equations, results in a discrete solution which is extraordinarily smooth and accurate, and which satisfies the continuity equation nearly to machine precision. These qualities make the method well suited to the computation of the non-parallel mean flows needed by spatial direct numerical simulations (DNS) and parabolized stability equation (PSE) approaches to the analysis of stability.

  5. Measuring Resistance to Change at the Within-Session Level

    ERIC Educational Resources Information Center

    Tonneau, Francois; Rios, Americo; Cabrera, Felipe

    2006-01-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases…
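
    As a concrete illustration of the measure described above (standard in behavioral-momentum research; the notation is ours, not the abstract's): if B_x is the response rate in a component during disruption and B_0 its baseline rate, the normalized, log-transformed quantity plotted is

    ```latex
    \log_{10}\!\left(\frac{B_x}{B_0}\right),
    \qquad \text{so that } 0 \text{ corresponds to no change from baseline.}
    ```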

  6. Graph Databases for Large-Scale Healthcare Systems: A Framework for Efficient Data Management and Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Yubin; Shankar, Mallikarjun; Park, Byung H.

    Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
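
    A toy sketch of one of the graph queries mentioned above (shared providers), using a hypothetical patient-provider edge schema rather than the paper's 3EG output; it only illustrates why such queries become simple neighborhood operations on a graph.

    ```python
    import networkx as nx

    # Hypothetical bipartite graph: an edge (patient, provider) records that
    # the patient had at least one encounter with that provider.
    G = nx.Graph()
    G.add_edges_from([
        ("patient:1", "provider:A"), ("patient:1", "provider:B"),
        ("patient:2", "provider:A"), ("patient:2", "provider:B"),
        ("patient:3", "provider:C"),
    ])

    def shared_providers(graph, patient_u, patient_v):
        """Providers seen by both patients: a 'shared provider' style query."""
        return set(graph.neighbors(patient_u)) & set(graph.neighbors(patient_v))

    print(shared_providers(G, "patient:1", "patient:2"))  # {'provider:A', 'provider:B'}
    ```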

  7. Statistical transformation and the interpretation of inpatient glucose control data.

    PubMed

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B

    2014-03-01

    To introduce a statistical method of assessing hospital-based non-intensive care unit (non-ICU) inpatient glucose control. Point-of-care blood glucose (POC-BG) data from hospital non-ICUs were extracted for January 1 through December 31, 2011. Glucose data distribution was examined before and after Box-Cox transformations and compared to normality. Different subsets of data were used to establish upper and lower control limits, and exponentially weighted moving average (EWMA) control charts were constructed from June, July, and October data as examples to determine if out-of-control events were identified differently in nontransformed versus transformed data. A total of 36,381 POC-BG values were analyzed. In all 3 monthly test samples, glucose distributions in nontransformed data were skewed but approached a normal distribution once transformed. Interpretation of out-of-control events from EWMA control chart analyses also revealed differences. In the June test data, an out-of-control process was identified at sample 53 with nontransformed data, whereas the transformed data remained in control for the duration of the observed period. Analysis of July data demonstrated an out-of-control process sooner in the transformed (sample 55) than nontransformed (sample 111) data, whereas for October, transformed data remained in control longer than nontransformed data. Statistical transformations increase the normal behavior of inpatient non-ICU glycemic data sets. The decision to transform glucose data could influence the interpretation and conclusions about the status of inpatient glycemic control. Further study is required to determine whether transformed versus nontransformed data influence clinical decisions or evaluation of interventions.
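
    A minimal sketch of the EWMA control chart referred to above, in its textbook form; the smoothing constant, control-limit width, and baseline estimates are placeholders, not the paper's settings. The same chart can be run on Box-Cox-transformed glucose values.

    ```python
    import numpy as np

    def ewma_chart(x, lam=0.2, L=3.0):
        """Exponentially weighted moving average control chart.
        Returns the EWMA statistic and time-varying upper/lower control limits."""
        x = np.asarray(x, dtype=float)
        mu, sigma = x.mean(), x.std(ddof=1)   # in practice taken from baseline data
        z = np.empty_like(x)
        prev = mu
        for k, xk in enumerate(x):
            prev = lam * xk + (1.0 - lam) * prev
            z[k] = prev
        i = np.arange(1, len(x) + 1)
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        return z, mu + half_width, mu - half_width

    # Out-of-control samples are those where z falls outside the limits.
    ```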

  8. [The scale and application of the norm of occupational stress on the professionals in Chengdu and Chongqing area].

    PubMed

    Zeng, Fan-Hua; Wang, Zhi-Ming; Wang, Mian-Zhen; Lan, Ya-Jia

    2004-12-01

    To establish the scale of the norm of occupational stress on professionals and put it into practice. T scores were linear transformations of raw scores, derived to have a mean of 50 and a standard deviation of 10. The scale standard of the norm was formulated in line with the principle of the normal distribution. (1) For the occupational role questionnaire (ORQ) and personal strain questionnaire (PSQ) scales, high scores suggested significant levels of occupational stress and psychological strain, respectively. T scores ≥ 70 indicated a strong probability of maladaptive stress, debilitating strain, or both. T scores of 60-69 suggested mild levels of maladaptive stress and strain, and scores of 40-59 were within one standard deviation of the mean and should be interpreted as being within the normal range. T scores < 40 indicated a relative absence of occupational stress or psychological strain. For the personal resources questionnaire (PRQ) scales, high scores indicated highly developed coping resources. T scores < 30 indicated a significant lack of coping resources. T scores of 30-39 suggested mild deficits in coping skills, and scores of 40-59 indicated average coping resources, whereas higher scores (i.e., ≥ 60) indicated increasingly strong coping resources. (2) This study provided raw score to T-score conversion tables for each OSI-R scale for the total normative sample as well as for gender and several occupational groups, including professional engineering, health care, economic business, financial business, law, education and news. OSI-R profile forms for the total normative sample, gender and occupation were also offered according to the conversion tables. The norm of occupational stress can be used as a screening tool, for organizational/occupational assessment, as a guide to occupational choice and for intervention measures.
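
    The linear T-score transformation described above has the standard form (x is the raw score, x̄ and s the normative-sample mean and standard deviation):

    ```latex
    T = 50 + 10\,\frac{x - \bar{x}}{s}
    ```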

  9. Robust decentralised stabilisation of uncertain large-scale interconnected nonlinear descriptor systems via proportional plus derivative feedback

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zhang, Qingling; Ren, Junchao; Zhang, Yanhao

    2017-10-01

    This paper studies the problem of robust stability and stabilisation for uncertain large-scale interconnected nonlinear descriptor systems via proportional plus derivative state feedback or proportional plus derivative output feedback. The basic idea of this work is to use the well-known differential mean value theorem to deal with the nonlinear model such that the considered nonlinear descriptor systems can be transformed into linear parameter varying systems. By using a parameter-dependent Lyapunov function, a decentralised proportional plus derivative state feedback controller and decentralised proportional plus derivative output feedback controller are designed, respectively such that the closed-loop system is quadratically normal and quadratically stable. Finally, a hypersonic vehicle practical simulation example and numerical example are given to illustrate the effectiveness of the results obtained in this paper.

  10. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type 1 error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type 1 error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type 1 error with respect to set alpha level=0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level=0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type 1 error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type 1 error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type 1 error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2) while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel

  11. Optimizing BAO measurements with non-linear transformations of the Lyman-α forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu

    2015-04-01

    We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^(−τ) and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using a Taylor expansion up to the third order. We do, however, achieve a 33% improvement in signal to noise for the Gaussianized field in the transverse direction. On the other hand, we explore an analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.

  12. Length scale effects and multiscale modeling of thermally induced phase transformation kinetics in NiTi SMA

    NASA Astrophysics Data System (ADS)

    Frantziskonis, George N.; Gur, Sourav

    2017-06-01

    Thermally induced phase transformation in NiTi shape memory alloys (SMAs) shows strong size and shape dependence, collectively termed length scale effects, at the nanometer to micrometer scales, which has important implications for the design and use of devices and structures at such scales. This paper, based on a recently developed multiscale model that utilizes molecular dynamics (MD) simulations at small scales and MD-verified phase field (PhF) simulations at larger scales, reports results on specific length scale effects, i.e. length scale effects in martensite phase fraction (MPF) evolution, in transformation temperatures (martensite and austenite start and finish) and in the thermally cyclic transformation between the austenitic and martensitic phases. The multiscale study identifies saturation points for length scale effects and studies, for the first time, the length scale effect on the kinetics (i.e. developed internal strains) in the B19′ phase during phase transformation. The major part of the work addresses small scale single crystals in specific orientations. However, the multiscale method is used in a unique and novel way to indirectly study length scale and grain size effects on evolution kinetics in polycrystalline NiTi, and to compare the simulation results to experiments. The interplay of the grain size and the length scale effect on the thermally induced MPF evolution is also shown in the present study. Finally, the multiscale coupling results are employed to improve phenomenological material models for NiTi SMA.

  13. Microstructure of warm rolling and pearlitic transformation of ultrafine-grained GCr15 steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jun-Jie; Lian, Fu-Liang; Liu, Hong-Ji

    2014-09-15

    Pearlitic transformation mechanisms have been investigated in ultra-fine grained GCr15 steel. The ultrafine-grained steel, whose grain size was less than 1 μm, was prepared by thermo-mechanical treatment at 873 K and then annealing at 923 K for 2 h. Pearlitic transformation was conducted by reheating the ultra-fine grained samples at 1073 K and 1123 K for different periods of time and then cooling in air. Scanning electron microscope observation shows that normal lamellar pearlite cannot be formed (granular cementite and ferrite form instead) when the grain size is approximately less than 4(± 0.6) μm, which yields a critical grain size for normal lamellar pearlitic transformation in this chromium-alloyed steel. The result confirms that grain size has a great influence on pearlitic transformation by increasing the diffusion rate of carbon atoms in the ultra-fine grained steel, and the addition of chromium does not change this pearlitic phase transformation rule. Meanwhile, the grain growth rate is reduced by chromium alloying, which is beneficial for forming fine grains during austenitizing, thus facilitating pearlitic transformation by divorced eutectoid transformation. Moreover, chromium can form a relatively high gradient in front of the undissolved carbide, which promotes carbide formation there, i.e., chromium promotes divorced eutectoid transformation. - Highlights: • Ultrafine-grained GCr15 steel was obtained by warm rolling and annealing technology. • Reduction of grain size changes the pearlite morphology from lamellar to granular. • Adding Cr does not change the normal pearlitic phase transformation rule in UFG steel. • Cr carbide resists grain growth and facilitates pearlitic transformation by DET.

  14. Speaker normalization and adaptation using second-order connectionist networks.

    PubMed

    Watrous, R L

    1993-01-01

    A method for speaker normalization and adaptation using connectionist networks is developed. A speaker-specific linear transformation of observations of the speech signal is computed using second-order network units. Classification is accomplished by a multilayer feedforward network that operates on the normalized speech data. The network is adapted for a new talker by modifying the transformation parameters while leaving the classifier fixed. This is accomplished by backpropagating classification error through the classifier to the second-order transformation units. This method was evaluated for the classification of ten vowels for 76 speakers using the first two formant values of the Peterson-Barney data. The results suggest that rapid speaker adaptation resulting in high classification accuracy can be accomplished by this method.

  15. Analysis of breast thermograms using Gabor wavelet anisotropy index.

    PubMed

    Suganthi, S S; Ramakrishnan, S

    2014-09-01

    In this study, an attempt is made to distinguish the normal and abnormal tissues in breast thermal images using the Gabor wavelet transform. Thermograms having normal, benign and malignant tissues are considered in this study and are obtained from a public online database. Segmentation of breast tissues is performed by multiplying the raw image and the ground truth mask. Left and right breast regions are separated after removing the non-breast regions from the segmented image. Based on the pathological conditions, the separated breast regions are grouped as normal and abnormal tissues. Gabor features such as energy and amplitude in different scales and orientations are extracted. Anisotropy and orientation measures are calculated from the extracted features and analyzed. A distinctive variation is observed among different orientations of the extracted features. It is found that the anisotropy measure is capable of differentiating the structural changes due to varied metabolic conditions. Further, the Gabor features also showed relative variations among different pathological conditions. It appears that these features can be used efficiently to identify normal and abnormal tissues and hence, improve the relevance of breast thermography in early detection of breast cancer and content based image retrieval.

  16. Computing Normal Shock-Isotropic Turbulence Interaction With Tetrahedral Meshes and the Space-Time CESE Method

    NASA Astrophysics Data System (ADS)

    Venkatachari, Balaji Shankar; Chang, Chau-Lyan

    2016-11-01

    The focus of this study is scale-resolving simulations of the canonical normal shock-isotropic turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and their potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).

  17. Indonesian Sign Language Number Recognition using SIFT Algorithm

    NASA Astrophysics Data System (ADS)

    Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.

    2018-04-01

    Indonesian sign language (ISL) is generally used by deaf individuals for communication. They use sign language as their primary language, which consists of 2 types of action: sign and finger spelling. However, not all people understand their sign language, so this becomes a problem for them when communicating with normal people; it is also a factor that makes them feel isolated from social life. A solution is needed that can help them interact with normal people. Much research offers a variety of methods for solving the problem of sign language recognition based on image processing. The SIFT (Scale Invariant Feature Transform) algorithm is one of the methods that can be used to identify an object. SIFT is claimed to be very resistant to scaling, rotation, illumination and noise. Using the SIFT algorithm, Indonesian sign language number recognition reaches a recognition rate of 82% with a dataset of 100 images in total, consisting of 50 samples for training data and 50 samples for testing data. Changing the threshold value affects the recognition result. The best threshold value is 0.45, with a recognition rate of 94%.
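
    The abstract gives no implementation details; below is a minimal sketch of SIFT matching with a ratio-test threshold using OpenCV, where the 0.45 threshold is interpreted as Lowe's ratio test (an assumption) and the image paths are placeholders.

    ```python
    import cv2

    def count_good_matches(train_path, test_path, ratio_threshold=0.45):
        """Match SIFT descriptors between a training sign image and a test
        image, keeping matches that pass the ratio test."""
        img1 = cv2.imread(train_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        _, des1 = sift.detectAndCompute(img1, None)
        _, des2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < ratio_threshold * pair[1].distance:
                good.append(pair[0])
        return len(good)

    # Classification could then pick the training sign with the most good matches.
    ```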

  18. Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2012-09-01

    It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
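
    A small sketch of the rank-based inverse normal (rankit-style) transformation highlighted above; the offset parameter and the tie handling are standard choices, not details taken from the paper.

    ```python
    import numpy as np
    from scipy import stats

    def rank_inverse_normal(x, c=0.5):
        """Replace each value by the normal quantile of its offset rank.
        c=0.5 gives classical rankits; Blom's c=3/8 is a common variant."""
        x = np.asarray(x, dtype=float)
        ranks = stats.rankdata(x)                      # average ranks for ties
        return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

    # Example: Pearson correlation on transformed variables
    # r, p = stats.pearsonr(rank_inverse_normal(a), rank_inverse_normal(b))
    ```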

  19. Synthesis, maturation and extracellular release of procathepsin D as influenced by cell proliferation or transformation.

    PubMed

    Isidoro, C; Demoz, M; De Stefanis, D; Baccino, F M; Bonelli, G

    1995-12-11

    The relationship between cell growth and intra- and extracellular accumulation of cathepsin D (CD), a lysosomal endopeptidase involved in cell protein breakdown, was examined in cultures of normal and transformed BALB/c mouse 3T3 fibroblasts grown at various cell densities. In crowded cultures of normal 3T3 cells (doubling time, Td, 53 hr) intracellular CD activity was 2-fold higher than in sparse, rapidly-growing (Td, 27 hr) cultures. In uncrowded (Td, 18 hr) and crowded (Td, 32 hr) cultures of benzo[a]pyrene-transformed cells intracellular CD levels were one third and two thirds, respectively, of those measured in hyperconfluent 3T3 cultures. Regardless of cell density, SV-40-virus-transformed cells (Td, 12 hr) contained one third of CD levels found in hyperconfluent 3T3 cells. Both transformed cell lines released into the medium a higher proportion of CD, compared with their untransformed counterpart, yet the amount secreted was not sufficient to account for the reduced intracellular level of the enzyme. Serum withdrawal induced a marked increase of both intra- and extracellular levels of CD activity. In both normal and virally or chemically transformed 3T3 cells CD comprised a precursor (52 kDa) and processed mature polypeptides; the latter were mostly represented by a 48-kDa peptide, but a minor part was in a double-chain form (31 and 16 kDa respectively). The proportion of mature enzyme vs. precursor was much higher in confluent, slowly-growing cells than in fast-growing cells, whether normal or transformed. In the latter, conversion of mature 48-kDa peptide into the double-chain form occurred more efficiently.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of Lyman-α forest, in which Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict the velocity bias of -0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for the highly biased galaxies, but this depends on the model parameters and can also vanish or change the sign.

  1. A novel image enhancement algorithm based on stationary wavelet transform for infrared thermography to the de-bonding defect in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhang, Wei; Yan, Shaoze

    2015-10-01

    In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images with high-level noise and low contrast are the foundation for identifying defects and calculating defect size. In order to improve the quality of the infrared image, according to the distribution properties of the detection image and within the framework of the stationary wavelet transform, the approximation coefficients at a suitable decomposition level are processed by index low-pass filtering using the Fourier transform; after that, a nonlinear transformation is applied to further improve the image contrast. To verify the validity of the algorithm, the image enhancement algorithm is applied to infrared testing pictures of two specimens with de-bonding defects. One specimen is made of a type of high-strength steel, and the other of a carbon fiber composite. As the results show, in the images processed by the proposed algorithm most of the noise is eliminated and the contrast between defect areas and normal areas is greatly improved; in addition, from the binary image of the processed figure, continuous defect edges can be extracted. All of this shows the validity of the algorithm. The paper thus provides a well-performing image enhancement algorithm for infrared thermography.

  2. Dynamical complexity detection in geomagnetic activity indices using wavelet transforms and Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Balasis, G.; Daglis, I. A.; Papadimitriou, C.; Kalimeri, M.; Anastasiadis, A.; Eftaxias, K.

    2008-12-01

    Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and economic sciences. Especially in magnetospheric physics, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. Herein, we examine the fractal spectral properties of the Dst data using a wavelet analysis technique. We show that distinct changes in associated scaling parameters occur (i.e., transition from anti-persistent to persistent behavior) as an intense magnetic storm approaches. We then analyze Dst time series by introducing the non-extensive Tsallis entropy, Sq, as an appropriate complexity measure. The Tsallis entropy sensitively shows the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). The Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, which is characterized by a lower degree of organization.
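
    For reference, the non-extensive Tsallis entropy S_q used above has the standard textbook form (k is a constant and p_i the probabilities of the system's microstates; this definition is not quoted from the paper):

    ```latex
    S_q = k\,\frac{1 - \sum_{i} p_i^{\,q}}{q - 1},
    \qquad
    \lim_{q \to 1} S_q = -k \sum_{i} p_i \ln p_i .
    ```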

  3. Early embryonic demise: no evidence of abnormal spiral artery transformation or trophoblast invasion.

    PubMed

    Ball, E; Robson, S C; Ayis, S; Lyall, F; Bulmer, J N

    2006-03-01

    Invasion by extravillous trophoblast of uterine decidua and myometrium and the associated spiral artery 'transformation' are essential for the development of normal pregnancy. Small pilot studies of placental bed and basal plate tissues from miscarriages have suggested that impaired interstitial and endovascular trophoblast invasion may play a role in the pathogenesis of miscarriage. The hypothesis that early miscarriage is associated with reduced extravillous trophoblast invasion and spiral artery transformation was tested in a large series of placental bed biopsies containing decidua and myometrium and at least one spiral artery from early, karyotyped embryonic miscarriages (

  4. [Inheritable phenotypic normalization of rodent cells transformed by simian adenovirus SA7 E1 oncogenes by single-stranded oligonucleotides complementary to a long region of integrated oncogenes].

    PubMed

    Grineva, N I; Borovkova, T V; Sats, N V; Kurabekova, R M; Rozhitskaia, O S; Solov'ev, G Ia; Pantin, V I

    1995-08-01

    G11 mouse cells and SH2 rat cells transformed with simian adenovirus SA7 DNA showed inheritable oncogene-specific phenotypic normalization when treated with sense and antisense oligonucleotides complementary to long RNA sequences, plus or minus strands of the integrated adenovirus oncogenes E1A and E1B. Transitory treatment of the cells with the oligonucleotides in the absence of serum was shown to cause the appearance of normalized cell lines with fibroblastlike morphology, slower cell proliferation, and lack of ability to form colonies in soft agar. Proliferative activity and adhesion of the normalized cells that established cell lines were found to depend on the concentration of growth factors in the culture medium. In some of the cell lines, an inhibition of transcription of the E1 oncogenes was observed. The normalization also produced cells that divided 2-5 times and died and cells that reverted to a transformed phenotype in 2-10 days. The latter appeared predominantly upon the action of the antisense oligonucleotides.

  5. Studies of genetic transformation of higher plants using irradiated pollen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chyi, Y.S.

    Pandey has reported extensively on an unusual genetic phenomenon he called egg transformation. When compatible pollen was treated with a genetically lethal dosage of γ-radiation (100,000 rad) and used as mentor pollen to overcome self-incompatibility of several Nicotiana species, some genetic characters were found to be transferred from the radiation-killed pollen to nonhybrid progeny. Observed transformants were fertile, cytogenetically normal, and had maternal phenotypes except for those specific traits transferred from the donors. Heavily irradiated pollen was believed to discharge its radiation-fragmented DNA (chromatin) into the embryo sac and bring about the transformation of the egg. The frequency of gene transfer was reported to be over 50%, and happened for all three characters Pandey studied - self-incompatibility specificities, flower color, and pollen color. Plant species studied were tomato, pea, apple, rapeseed, and Nicotiana species, including various stocks from Dr. Pandey. Treatments included pollinations with solely irradiated donor pollen, with a mixture of irradiated donor and normal self pollen, with a mixture of normal donor and self pollen, and double pollinations with irradiated donor pollen and normal self pollen, using different time intervals to separate the two pollinations. A total of 6210 pollinations were made, and 17,522 seedlings representing 87,750 potential transformational events were screened. In no case was an unambiguous transformant recovered. This research was unable to confirm or expand upon the findings of Dr. Pandey, or elucidate the mechanisms underlying such phenomena. Alternative explanations for Pandey's data were postulated. This approach to gene transfer by using irradiated pollen appears to be of little practical use to plant breeders.

  6. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function, and select those patches resembling planar surfaces. Afterwards, for matching the corresponding patches, a RANSAC-based strategy is applied. Among all the planar patches of a scan, we randomly select a set of three planar patches in order to build a coordinate frame via their normal vectors and their intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The set of planar patches whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate set for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method can be both effective and efficient for the coarse registration task. In particular, for the fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets, and is much more efficient than the classical baseline methods.
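
    A simplified sketch of the core geometric step described above: building a coordinate frame from three planes (n_i · x = d_i) and deriving the rigid transformation that maps one frame onto the other. The RANSAC loop over candidate plane triplets, degenerate-configuration checks, and the coplanarity scoring are omitted; all function and variable names are ours.

    ```python
    import numpy as np

    def frame_from_planes(normals, ds):
        """Frame from three (roughly non-parallel) planes n_i . x = d_i:
        origin = their common intersection point, axes = orthonormalized normals."""
        N = np.asarray(normals, dtype=float)        # 3x3, one normal per row
        origin = np.linalg.solve(N, np.asarray(ds, dtype=float))
        Q, _ = np.linalg.qr(N.T)                    # orthonormalize the normals
        return origin, Q                            # Q columns are the frame axes

    def transform_between(frame_src, frame_dst):
        """Rigid transformation mapping the source frame onto the target frame."""
        o_s, R_s = frame_src
        o_d, R_d = frame_dst
        R = R_d @ R_s.T
        t = o_d - R @ o_s
        return R, t                                 # x_dst ≈ R @ x_src + t
    ```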

  7. [Transformer winding temperature rise and an analysis of its uncertainty].

    PubMed

    Wang, Pei-Lian; Chen, Yu-En; Zhong, Sheng-Kui

    2007-09-01

    This paper introduces the temperature-rise experimental procedure and some matters needing attention when the transformer operates under normal load. An analysis of the uncertainty of the transformer's temperature rise is also presented, based on data from practical examples.

  8. Deformation and Failure Mechanisms of Shape Memory Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Samantha Hayes

    2015-04-15

    The goal of this research was to understand the fundamental mechanics that drive the deformation and failure of shape memory alloys (SMAs). SMAs are difficult materials to characterize because of the complex phase transformations that give rise to their unique properties, including shape memory and superelasticity. These phase transformations occur across multiple length scales (one example being the martensite-austenite twinning that underlies macroscopic strain localization) and result in a large hysteresis. In order to optimize the use of this hysteretic behavior in energy storage and damping applications, we must first have a quantitative understanding of this transformation behavior. Prior results on shape memory alloys have been largely qualitative (i.e., mapping phase transformations through cracked oxide coatings or surface morphology). The PI developed and utilized new approaches to provide a quantitative, full-field characterization of phase transformation, conducting a comprehensive suite of experiments across multiple length scales and tying these results to theoretical and computational analysis. The research funded by this award utilized new combinations of scanning electron microscopy, diffraction, digital image correlation, and custom testing equipment and procedures to study phase transformation processes at a wide range of length scales, with a focus at small length scales with spatial resolution on the order of 1 nanometer. These experiments probe the basic connections between length scales during phase transformation. In addition to the insights gained on the fundamental mechanisms driving transformations in shape memory alloys, the unique experimental methodologies developed under this award are applicable to a wide range of solid-to-solid phase transformations and other strain localization mechanisms.

  9. The Wavelet ToolKat: A set of tools for the analysis of series through wavelet transforms. Application to the channel curvature and the slope control of three free meandering rivers in the Amazon basin.

    NASA Astrophysics Data System (ADS)

    Vaudor, Lise; Piegay, Herve; Wawrzyniak, Vincent; Spitoni, Marie

    2016-04-01

    The form and functioning of a geomorphic system result from processes operating at various spatial and temporal scales. Longitudinal channel characteristics thus exhibit complex patterns which vary according to the scale of study, might be periodic or segmented, and are generally blurred by noise. Describing the intricate, multiscale structure of such signals, and identifying at which scales the patterns are dominant and over which sub-reach, could help determine at which scales they should be investigated, and provide insights into the main controlling factors. Wavelet transforms aim at describing data at multiple scales (either in time or space), and are now exploited in geophysics for the analysis of nonstationary series of data. They provide a consistent, non-arbitrary, and multiscale description of a signal's variations and help explore potential causalities. Nevertheless, their use in fluvial geomorphology, notably to study longitudinal patterns, is hindered by a lack of user-friendly tools to help understand, implement, and interpret them. We have developed a free application, The Wavelet ToolKat, designed to facilitate the use of wavelet transforms on temporal or spatial series. We illustrate its usefulness describing longitudinal channel curvature and slope of three freely meandering rivers in the Amazon basin (the Purus, Juruá and Madre de Dios rivers), using topographic data generated from NASA's Shuttle Radar Topography Mission (SRTM) in 2000. Three types of wavelet transforms are used, with different purposes. Continuous Wavelet Transforms are used to identify in a non-arbitrary way the dominant scales and locations at which channel curvature and slope vary. Cross-wavelet transforms, and wavelet coherence and phase are used to identify scales and locations exhibiting significant channel curvature and slope co-variations. Maximal Overlap Discrete Wavelet Transforms decompose data into their variations at a series of scales and are used to provide smoothed descriptions of the series at the scales deemed relevant.

  10. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    PubMed

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work involves retinal image classification and a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be accounted for by the highly robust feature set, which included translation, scale and semi-rotational features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
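
    A minimal sketch of shift-invariant (stationary) wavelet features of the kind described above, using sub-band energies as a simple stand-in for the paper's homogeneity-style texture features (the exact features, wavelet and level are assumptions).

    ```python
    import numpy as np
    import pywt

    def swt_texture_features(image, wavelet="haar", level=2):
        """Energy of the detail sub-bands of a 2D stationary wavelet transform.
        Image sides must be divisible by 2**level for pywt.swt2."""
        coeffs = pywt.swt2(np.asarray(image, dtype=float), wavelet, level=level)
        feats = []
        for cA, (cH, cV, cD) in coeffs:
            for band in (cH, cV, cD):
                feats.append(float(np.mean(band ** 2)))   # sub-band energy
        return np.array(feats)
    ```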

  11. Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica

    USGS Publications Warehouse

    Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.

    1994-01-01

    Fifteen 1:250,000- and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5°S and 120°-160°W. Landsat TM bands 2, 3 and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and on a CD-ROM.

  12. MetaPathways v2.5: quantitative functional, taxonomic and usability improvements.

    PubMed

    Konwar, Kishori M; Hanson, Niels W; Bhatia, Maya P; Kim, Dongjae; Wu, Shang-Ju; Hahn, Aria S; Morgan-Lang, Connor; Cheung, Hiu Kan; Hallam, Steven J

    2015-10-15

    Next-generation sequencing is producing vast amounts of sequence information from natural and engineered ecosystems. Although this data deluge has an enormous potential to transform our lives, knowledge creation and translation need software applications that scale with increasing data processing and analysis requirements. Here, we present improvements to MetaPathways, an annotation and analysis pipeline for environmental sequence information that expedites this transformation. We specifically address pathway prediction hazards through integration of a weighted taxonomic distance and enable quantitative comparison of assembled annotations through a normalized read-mapping measure. Additionally, we improve LAST homology searches through BLAST-equivalent E-values and output formats that are natively compatible with prevailing software applications. Finally, an updated graphical user interface allows for keyword annotation query and projection onto user-defined functional gene hierarchies, including the Carbohydrate-Active Enzyme database. MetaPathways v2.5 is available on GitHub: http://github.com/hallamlab/metapathways2. Contact: shallam@mail.ubc.ca. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
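
    The precise normalized read-mapping measure introduced in MetaPathways v2.5 is defined in the paper and code linked above; as a generic illustration of the idea of depth- and length-normalized read counts, an RPKM-style calculation looks like this (the function name and the example numbers are only illustrative):

      # Generic sketch of a normalized read-mapping measure (RPKM-style): read counts are
      # scaled by ORF length and by total mapped reads so that annotations from samples of
      # different sequencing depth can be compared. The exact measure used by MetaPathways
      # may be defined differently; the numbers here are illustrative only.
      def rpkm(read_count, orf_length_bp, total_mapped_reads):
          return read_count * 1e9 / (orf_length_bp * total_mapped_reads)

      # Example: the same ORF observed in two samples of different sequencing depth.
      print(rpkm(read_count=150, orf_length_bp=900, total_mapped_reads=2_000_000))
      print(rpkm(read_count=150, orf_length_bp=900, total_mapped_reads=8_000_000))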

  13. Normal Fibroblasts Induce E-Cadherin Loss and Increase Lymph Node Metastasis in Gastric Cancer

    PubMed Central

    Xu, Wen; Hu, Xinlei; Chen, Zhongting; Zheng, Xiaoping; Zhang, Chenjing; Wang, Gang; Chen, Yu; Zhou, Xinglu; Tang, Xiaoxiao; Luo, Laisheng; Xu, Xiang; Pan, Wensheng

    2014-01-01

    Background: A tumor is considered a heterogeneous complex in a three-dimensional environment that is flush with pathophysiological and biomechanical signals. Cell-stroma interactions guide the development and generation of tumors. Here, we evaluate the contributions of normal fibroblasts to gastric cancer. Methodology/Principal Findings: By coculturing normal fibroblasts in monolayers of BGC-823 gastric cancer cells, tumor cells sporadically developed short, spindle-like morphological characteristics and demonstrated enhanced proliferation and invasive potential. Furthermore, the transformed tumor cells demonstrated decreased tumor formation and increased lymphatic and intestinal metastatic potential. Non-transformed BGC-823 cells, in contrast, demonstrated primary tumor formation and delayed intestinal and lymph node invasion. We also observed E-cadherin loss and the upregulation of vimentin expression in the transformed tumor cells, which suggested that the increase in metastasis was induced by epithelial-to-mesenchymal transition. Conclusion: Collectively, our data indicated that normal fibroblasts sufficiently induce epithelial-to-mesenchymal transition in cancer cells, thereby leading to metastasis. PMID:24845259

  14. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one transformation frequently used. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
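
    A minimal numerical sketch of the core idea, centering a ratio spectrum with a geometric rather than an arithmetic mean after a log transformation, is given below (the spectra are synthetic vectors, not the ASP/ATOR/CLOP data, and the full published procedure involves additional steps):

      # Minimal sketch of mean centering of ratio spectra with an arithmetic vs. a geometric
      # mean. The "spectra" are synthetic vectors; in the method above the ratio spectrum is
      # obtained by dividing the mixture spectrum by the spectrum of one component.
      import numpy as np

      mixture = np.array([0.52, 0.81, 1.30, 2.10, 1.05, 0.40])
      divisor = np.array([0.50, 0.75, 1.20, 2.00, 1.00, 0.38])   # standard spectrum of one component

      ratio = mixture / divisor

      # Classical MCR: subtract the arithmetic mean of the ratio spectrum.
      mcr_arithmetic = ratio - ratio.mean()

      # Log-transformed variant: centre with the geometric mean, which is less sensitive
      # to skewed (log-normal-like) errors in the ratio values.
      geometric_mean = np.exp(np.log(ratio).mean())
      mcr_geometric = ratio - geometric_mean

      print(mcr_arithmetic.round(4))
      print(mcr_geometric.round(4))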

  15. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
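
    The enhanced octree search, early-warning and escape schemes of this paper are not reproduced here; the sketch below only shows the baseline point-to-point ICP iteration (KD-tree correspondences plus an SVD rigid fit) that such methods build on, applied to a synthetic point cloud:

      # Baseline point-to-point ICP iteration (KD-tree correspondences + SVD rigid fit).
      # This is the plain algorithm that enhanced variants such as the one above improve;
      # the octree search and local-minimum escape schemes are not reproduced here.
      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, iterations=30):
          """Returns R, t such that R @ source_i + t approximately matches target."""
          R_total, t_total = np.eye(3), np.zeros(3)
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(iterations):
              _, idx = tree.query(src)                       # 1. closest-point correspondences
              matched = target[idx]
              mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
              H = (src - mu_s).T @ (matched - mu_m)          # 2. SVD (Kabsch) rigid fit
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
              R_step = Vt.T @ D @ U.T
              t_step = mu_m - R_step @ mu_s
              src = src @ R_step.T + t_step                  # 3. apply and accumulate
              R_total, t_total = R_step @ R_total, R_step @ t_total + t_step
          return R_total, t_total

      # Toy check: a small known rotation/translation, so plain ICP converges without
      # a good initial guess.
      rng = np.random.default_rng(1)
      target = rng.random((500, 3))
      angle = np.deg2rad(5.0)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                         [np.sin(angle),  np.cos(angle), 0.0],
                         [0.0, 0.0, 1.0]])
      t_true = np.array([0.05, -0.03, 0.02])
      source = (target - t_true) @ R_true        # so that R_true @ source + t_true = target
      R_est, t_est = icp(source, target)
      print(np.round(R_est, 3))                  # should approximate R_true
      print(np.round(t_est, 3))                  # should approximate t_true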

  16. Real-time mapping of the corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy

    NASA Astrophysics Data System (ADS)

    Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver

    2010-02-01

    The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384x384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm x 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically sound conclusions about morphometric plexus alterations.

  17. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal and abnormal brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.

  18. Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images

    NASA Astrophysics Data System (ADS)

    Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow

    2018-04-01

    Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade the image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
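
    The logarithmic profile mapping and the entropy-based weighting are specific to this paper and are not reproduced here; the sketch below only illustrates the underlying classical multi-scale Retinex step (log-ratio of the image to Gaussian-smoothed illumination estimates), with hypothetical scale values:

      # Minimal multi-scale Retinex (MSR) sketch: the classical log-ratio form, without the
      # logarithmic profile mapping and entropy-based weighting refinements described above.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multi_scale_retinex(image, sigmas=(15, 80, 250), eps=1e-6):
          """image: 2-D float array (single channel), values >= 0; sigmas are hypothetical."""
          img = image.astype(float) + eps
          msr = np.zeros_like(img)
          for sigma in sigmas:
              surround = gaussian_filter(img, sigma)          # local illumination estimate
              msr += np.log(img) - np.log(surround + eps)     # single-scale Retinex
          msr /= len(sigmas)                                  # equal weights across scales
          # Stretch back to a displayable [0, 1] range.
          return (msr - msr.min()) / (msr.max() - msr.min() + eps)

      # Example on a synthetic dark image with one bright patch.
      img = np.full((64, 64), 10.0)
      img[20:30, 20:30] = 200.0
      enhanced = multi_scale_retinex(img)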

  19. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems that involve latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from the scale data obtained by the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better: path coefficients with smaller variances are said to be more efficient, so the transformation method whose scaled data yield path coefficients (parameter estimators) with smaller variances is considered better. The analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS data transformations are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.

  20. The scanning electron microscope as a tool in space biology

    NASA Technical Reports Server (NTRS)

    Barrett, R. A.

    1983-01-01

    Normal erythrocytes are disc-shaped and are referred to here descriptively as discocytes. Several morphologically variant forms occur normally, but in rather small amounts, usually less than one percent of the total. It has been shown, though, that spiculed variant forms referred to as echinocytes are generated in significant amounts at zero g. Normal red cells have been stressed in vitro in an effort to duplicate the observed discocyte-echinocyte transformation at zero g. The significance of this transformation for extended stays in space and some of the plausible reasons for this transformation are discussed.

  1. Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.

    PubMed

    Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming

    2018-01-01

    Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 with a mean of 18.10 mg kg⁻¹, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu and the associated residuals, and also produced more robust variograms. OK_PD could generate better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect derived from the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
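
    As an illustration of the normal score transformation recommended here, the following sketch maps a small set of values to standard normal quantiles via their ranks (the A-Cu values are illustrative; in practice the transform is applied to all samples and back-transformed after kriging):

      # Sketch of a normal score transformation: replace each observation by the standard
      # normal quantile of its (plotting-position) rank.
      import numpy as np
      from scipy.stats import norm, rankdata

      def normal_score(values):
          n = len(values)
          ranks = rankdata(values)                 # 1 ... n, ties get average ranks
          p = (ranks - 0.5) / n                    # plotting positions in (0, 1)
          return norm.ppf(p)

      a_cu = np.array([0.09, 1.2, 5.4, 9.8, 18.1, 25.0, 40.2, 106.05])  # illustrative values (mg/kg)
      print(normal_score(a_cu).round(3))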

  2. Some simple guides to finding useful information in exploration geochemical data

    USGS Publications Warehouse

    Singer, D.A.; Kouda, R.

    2001-01-01

    Most regional geochemistry data reflect processes that can produce superfluous bits of noise and, perhaps, information about the mineralization process of interest. There are two end-member approaches to finding patterns in geochemical data: unsupervised learning and supervised learning. In unsupervised learning, data are processed and the geochemist is given the task of interpreting and identifying possible sources of any patterns. In supervised learning, data from known subgroups such as rock type, mineralized and nonmineralized, and types of mineralization are used to train the system which then is given unknown samples to classify into these subgroups. To locate patterns of interest, it is helpful to transform the data and to remove unwanted masking patterns. With trace elements use of a logarithmic transformation is recommended. In many situations, missing censored data can be estimated using multiple regression of other uncensored variables on the variable with censored values. In unsupervised learning, transformed values can be standardized, or normalized, to a Z-score by subtracting the subset's mean and dividing by its standard deviation. Subsets include any source of differences that might be related to processes unrelated to the target sought such as different laboratories, regional alteration, analytical procedures, or rock types. Normalization removes effects of different means and measurement scales as well as facilitates comparison of spatial patterns of elements. These adjustments remove effects of different subgroups and hopefully leave on the map the simple and uncluttered pattern(s) related to the mineralization only. Supervised learning methods, such as discriminant analysis and neural networks, offer the promise of consistent and, in certain situations, unbiased estimates of where mineralization might exist. These methods critically rely on being trained with data that encompasses all populations fairly and that can possibly fall into only the identified populations. © 2001 International Association for Mathematical Geology.
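
    A minimal sketch of the standardization step described above, a logarithmic transformation followed by Z-scores computed within each subset, might look like this in Python with pandas (the element, subsets and values are made up for illustration):

      # Sketch of the standardization described above: log-transform trace-element values,
      # then compute Z-scores within each subset (e.g. rock type or laboratory) so that
      # subset-specific means and scales do not mask mineralization-related patterns.
      import numpy as np
      import pandas as pd

      df = pd.DataFrame({
          "rock_type": ["basalt", "basalt", "granite", "granite", "granite"],
          "cu_ppm":    [35.0,     60.0,     12.0,      8.0,       150.0],
      })

      df["log_cu"] = np.log10(df["cu_ppm"])
      df["z_cu"] = df.groupby("rock_type")["log_cu"].transform(
          lambda s: (s - s.mean()) / s.std(ddof=1)
      )
      print(df)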

  3. The morphing of geographical features by Fourier transformation.

    PubMed

    Li, Jingzhong; Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features' continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable.
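
    Independently of the published model, the core idea of blending two boundary representations in the Fourier domain can be sketched as follows (the two closed shapes are synthetic stand-ins for a large-scale and a small-scale representation of the same feature, already resampled to the same number of vertices; the mirror processing for open linear features is omitted):

      # Sketch of morphing a closed boundary between two representations via Fourier
      # coefficients: the boundary is treated as a complex signal, both representations are
      # transformed, the coefficients are blended, and the inverse transform gives the
      # intermediate shape. The truncation length plays the role of the interceptive value n.
      import numpy as np

      def fourier_morph(boundary_a, boundary_b, alpha, n_terms=20):
          """boundary_*: (N, 2) arrays of x, y vertices; alpha in [0, 1] blends A -> B."""
          za = boundary_a[:, 0] + 1j * boundary_a[:, 1]
          zb = boundary_b[:, 0] + 1j * boundary_b[:, 1]
          Fa, Fb = np.fft.fft(za), np.fft.fft(zb)
          F = (1 - alpha) * Fa + alpha * Fb            # blend Fourier coefficients
          keep = np.zeros_like(F)                      # truncate high-order terms
          keep[:n_terms] = F[:n_terms]
          keep[-n_terms:] = F[-n_terms:]
          z = np.fft.ifft(keep)
          return np.column_stack([z.real, z.imag])

      t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      circle = np.column_stack([np.cos(t), np.sin(t)])                   # simpler (small-scale) shape
      wiggly = np.column_stack([(1 + 0.2 * np.cos(8 * t)) * np.cos(t),
                                (1 + 0.2 * np.cos(8 * t)) * np.sin(t)])  # detailed (large-scale) shape
      halfway = fourier_morph(circle, wiggly, alpha=0.5)                 # intermediate boundary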

  4. [Study of the phase transformation of TiO2 with in-situ XRD in different gas].

    PubMed

    Ma, Li-Jing; Guo, Lie-Jin

    2011-04-01

    A TiO2 sample was prepared by the sol-gel method from titanium chloride. The phase transformation of the prepared TiO2 sample was studied by in-situ XRD and normal XRD in different gases. The experimental results showed that the phase transformation temperatures of TiO2 differed between in-situ and normal XRD and among the different kinds of gas. The transformation of amorphous TiO2 to anatase was controlled by kinetics below 500 degrees C. In-situ XRD showed that the growth of anatase was inhibited, but the transformation of anatase to rutile was accelerated, under inert nitrogen in contrast to air. Better crystallinity was also obtained under hydrogen than under argon. These results showed that external oxygen might accelerate the growth of TiO2, but reducing gas might partly counteract the negative influence of a lack of external oxygen. The mechanism of the phase transformation of TiO2 was studied by in-situ XRD in order to control the structure in situ.

  5. v-myb transformation of Xeroderma pigmentosum human fibroblasts: Overexpression of the c-Ha-ras oncogene in the transformed cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michelin, S.; Varlet, I.; Sarasin, A.

    1991-10-01

    Human Xeroderma pigmentosum 'normal' fibroblasts AS16 (XP4 VI) were transformed after transfection with a recombinant v-myb clone. In this clone (pKXA 3457), derived from avian myeloblastosis virus (AMV), the expression of the oncogene sequences is driven by the AMV U-5 LTR promoter. The transformed cells (ASKXA), which have integrated a rearranged v-myb oncogene, grow in agar, are not tumorigenic in nude mice, and express a 45-kDa v-myb protein. The HMW DNA of these cells transforms chicken embryo fibroblasts. The c-Ha-ras oncogene is overexpressed in the ASKXA cells but not in the parental 'normal' AS16 cells and a revertant clone (ASKXA Cl 1.1 G). The results lead to the conclusion that the XP fibroblasts are phenotypically transformed by the presence of the transfected v-myb oncogene, which is able to induce an overexpression of the c-Ha-ras gene.

  6. Low light and low ammonium are key factors for guayule leaf tissue shoot organogenesis and transformation.

    PubMed

    Dong, Niu; Montanez, Belen; Creelman, Robert A; Cornish, Katrina

    2006-02-01

    A new method has been developed for guayule tissue culture and transformation. Guayule leaf explants have a poor survival rate when placed on normal MS medium and under normal culture room light conditions. Low light and low ammonium treatment greatly improved shoot organogenesis and transformation from leaf tissues. Using this method, a 35S promoter driven BAR gene and an ubiquitin-3 promoter driven GUS gene (with intron) have been successfully introduced into guayule. These transgenic guayule plants were resistant to the herbicide ammonium-glufosinate and were positive to GUS staining. Molecular analysis showed the expected band and signal in all GUS positive transformants. The transformation efficiency with glufosinate selection ranged from 3 to 6%. Transformation with a pBIN19-based plasmid containing a NPTII gene and then selection with kanamycin also works well using this method. The ratio of kanamycin-resistant calli to total starting explants reached 50% in some experiments.

  7. Properties of the Magnitude Terms of Orthogonal Scaling Functions.

    PubMed

    Tay, Peter C; Havlicek, Joseph P; Acton, Scott T; Hossack, John A

    2010-09-01

    The spectrum of the convolution of two continuous functions can be determined as the continuous Fourier transform of the cross-correlation function. The same can be said about the spectrum of the convolution of two infinite discrete sequences, which can be determined as the discrete time Fourier transform of the cross-correlation function of the two sequences. In current digital signal processing, the spectra of the continuous Fourier transform and the discrete time Fourier transform are approximately determined by numerical integration or by densely taking the discrete Fourier transform. It has been shown that all three transforms share many analogous properties. In this paper we show another useful property: the spectrum terms of the convolution of two finite length sequences can be determined from the discrete Fourier transform of the modified cross-correlation function. In addition, two properties of the magnitude terms of orthogonal wavelet scaling functions are developed. These properties are used as constraints for an exhaustive search to determine a robust lower bound on conjoint localization of orthogonal scaling functions.

  8. Multistep carcinogenesis of normal human fibroblasts. Human fibroblasts immortalized by repeated treatment with Co-60 gamma rays were transformed into tumorigenic cells with Ha-ras oncogenes.

    PubMed

    Namba, M; Nishitani, K; Fukushima, F; Kimoto, T

    1988-01-01

    Two normal mortal human fibroblast cell strains were transformed into immortal cell lines, SUSM-1 and KMST-6, by treatment with 4-nitroquinoline 1-oxide (4NQO) and Co-60 gamma rays, respectively. These immortalized cell lines showed morphological changes of cells and remarkable chromosome aberrations, but neither of them grew in soft agar or formed tumors in nude mice. The immortal cell line, KMST-6, was then converted into neoplastic cells by treatment with Harvey murine sarcoma virus (Ha-MSV) or the c-Ha-ras oncogene derived from a human lung carcinoma. These neoplastically transformed cells acquired anchorage-independent growth potential and developed tumors when transplanted into nude mice. All the tumors grew progressively without regression until the animals died of tumors. In addition, the tumors were transplantable into other nude mice. Normal human fibroblasts, on the other hand, were not transformed into either immortal or tumorigenic cells by treatment with Ha-MSV or c-Ha-ras alone. Our present data indicate that (1) the chemical carcinogen, 4NQO, or gamma rays worked as an initiator of carcinogenesis in normal human cells, giving rise to immortality, and (2) the ras gene played a role in the progression of the immortally transformed cells to more malignant cells showing anchorage-independent growth and tumorigenicity. In other words, the immortalization process of human cells seems to be a pivotal or rate-limiting step in the carcinogenesis of human cells.

  9. Smooth quantile normalization.

    PubMed

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
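
    qsmooth itself is implemented in the repository linked above; for orientation, the classical global quantile normalization that it generalizes can be sketched in a few lines (ties are handled only crudely here):

      # Sketch of classical (global) quantile normalization, the method qsmooth generalizes:
      # every sample is forced to share the same reference distribution, namely the mean of
      # the sorted columns. qsmooth relaxes this within biological groups; see the linked
      # repository for the actual implementation.
      import numpy as np

      def quantile_normalize(X):
          """X: features x samples matrix of expression values."""
          order = np.argsort(X, axis=0)                     # per-sample sort order
          ranks = np.argsort(order, axis=0)                 # per-sample ranks (0-based)
          reference = np.sort(X, axis=0).mean(axis=1)       # mean of each quantile across samples
          return reference[ranks]

      X = np.array([[5.0, 4.0, 3.0],
                    [2.0, 1.0, 4.0],
                    [3.0, 4.0, 6.0],
                    [4.0, 2.0, 8.0]])
      print(quantile_normalize(X))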

  10. Obesity Suppresses Cell-Competition-Mediated Apical Elimination of RasV12-Transformed Cells from Epithelial Tissues.

    PubMed

    Sasaki, Ayana; Nagatake, Takahiro; Egami, Riku; Gu, Guoqiang; Takigawa, Ichigaku; Ikeda, Wataru; Nakatani, Tomoya; Kunisawa, Jun; Fujita, Yasuyuki

    2018-04-24

    Recent studies have revealed that newly emerging transformed cells are often eliminated from epithelial tissues via cell competition with the surrounding normal epithelial cells. This cancer preventive phenomenon is termed epithelial defense against cancer (EDAC). However, it remains largely unknown whether and how EDAC is diminished during carcinogenesis. In this study, using a cell competition mouse model, we show that high-fat diet (HFD) feeding substantially attenuates the frequency of apical elimination of RasV12-transformed cells from intestinal and pancreatic epithelia. This process involves both lipid metabolism and chronic inflammation. Furthermore, aspirin treatment significantly facilitates eradication of transformed cells from the epithelial tissues in HFD-fed mice. Thus, our work demonstrates that obesity can profoundly influence competitive interaction between normal and transformed cells, providing insights into cell competition and cancer preventive medicine. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  11. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.

  12. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    PubMed

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
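
    The detector model in the paper accounts for gain, offset, anisotropic noise and the change in detective quantum efficiency; the sketch below only shows the plain Anscombe transformation itself, which approximately stabilizes the variance of Poisson-distributed counts to one:

      # The Anscombe transformation makes signal-dependent (Poisson-like) quantum noise
      # approximately Gaussian with unit variance, which is what allows a pre-computed
      # noise mask to be injected at the right local strength in the method above.
      import numpy as np

      def anscombe(x):
          return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0   # simple algebraic inverse

      # Poisson counts of very different means end up with roughly unit standard deviation.
      rng = np.random.default_rng(0)
      for lam in (10, 100, 1000):
          std = anscombe(rng.poisson(lam, 100_000)).std()
          print(f"lambda={lam}: std after transform ~ {std:.3f}")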

  13. Gray-scale transform and evaluation for digital x-ray chest images on CRT monitor

    NASA Astrophysics Data System (ADS)

    Furukawa, Isao; Suzuki, Junji; Ono, Sadayasu; Kitamura, Masayuki; Ando, Yutaka

    1997-04-01

    In this paper, an experimental evaluation of a super high definition (SHD) imaging system for digital x-ray chest images is presented. The SHD imaging system is proposed as a platform for integrating conventional image media. We are involved in the use of SHD images in the total digitizing of medical records that include chest x-rays and pathological microscopic images, both of which demand the highest level of quality among the various types of medical images. SHD images use progressive scanning and have a spatial resolution of 2000 by 2000 pixels or more and a temporal resolution (frame rate) of 60 frames/sec or more. For displaying medical x-ray images on a CRT, we derived gray scale transform characteristics based on radiologists' comments during the experiment, and elucidated the relationship between that gray scale transform and the linearization transform for maintaining the linear relationship with the luminance of film on a light box (luminance linear transform). We then carried out viewing experiments based on a five-stage evaluation. Nine radiologists participated in our experiment, and the ten cases evaluated included pulmonary fibrosis, lung cancer, and pneumonia. The experimental results indicated that conventional film images and those on super high definition CRT monitors have nearly the same quality. They also show that the gray scale transform for CRT images decided according to radiologists' comments agrees with the luminance linear transform in the high luminance region. In the low luminance region, it was found that the gray scale transform had the characteristics of level expansion, increasing the number of levels that can be expressed.

  14. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
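
    The paper estimates the transformation parameter jointly within a Bayesian model; purely to illustrate the transformation step itself, the following sketch applies scipy's maximum-likelihood Box-Cox fit to a skewed set of hypothetical effect estimates:

      # Sketch of the Box-Cox step applied to a skewed set of study-level effect estimates
      # before fitting a random-effects model (values must be positive). The effect values
      # are invented for illustration and are not from the paper's examples.
      import numpy as np
      from scipy.stats import boxcox, skew

      effects = np.array([0.12, 0.18, 0.22, 0.35, 0.40, 0.55, 0.90, 1.60, 2.70])
      transformed, lam = boxcox(effects)        # maximum-likelihood lambda

      print(f"estimated lambda: {lam:.2f}")
      print(f"skewness before: {skew(effects):.2f}, after: {skew(transformed):.2f}")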

  15. A Unified Approach to IRT Scale Linking and Scale Transformations. Research Report. RR-04-09

    ERIC Educational Resources Information Center

    von Davier, Matthias; von Davier, Alina A.

    2004-01-01

    This paper examines item response theory (IRT) scale transformations and IRT scale linking methods used in the Non-Equivalent Groups with Anchor Test (NEAT) design to equate two tests, X and Y. It proposes a unifying approach to the commonly used IRT linking methods: mean-mean, mean-var linking, concurrent calibration, Stocking and Lord and…

  16. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach.

    PubMed

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the linguistic features of these language users' writing is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.

  17. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the colinearity detection problem to a problem of finding the global maximum of a two dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed scale M-estimation. Unlike standard M-estimation procedures the Hough Transform does not rely on a good initial estimate of the line parameters: The global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a-priori knowledge about the smoothness of the objective function is available, convergence to the global maximum can be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine should the parameter space quantization be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
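
    A minimal straight-line Hough transform with the normal (rho, theta) parameterization is sketched below; it uses a simple binned vote rather than the continuous voting kernel analysed in the paper, and the grid resolutions n_rho and n_theta play the role of the quantization intervals discussed above:

      # Straight-line Hough transform with the normal parameterization
      # rho = x*cos(theta) + y*sin(theta), using a plain binned accumulator.
      import numpy as np

      def hough_lines(points, n_theta=180, n_rho=200, rho_max=None):
          points = np.asarray(points, dtype=float)
          if rho_max is None:
              rho_max = np.hypot(points[:, 0], points[:, 1]).max()
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          accumulator = np.zeros((n_rho, n_theta), dtype=int)
          for x, y in points:
              rho = x * np.cos(thetas) + y * np.sin(thetas)          # normal parameterization
              rho_idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              valid = (rho_idx >= 0) & (rho_idx < n_rho)
              accumulator[rho_idx[valid], np.arange(n_theta)[valid]] += 1
          i, j = np.unravel_index(accumulator.argmax(), accumulator.shape)
          rho_best = i / (n_rho - 1) * 2 * rho_max - rho_max
          return rho_best, thetas[j]

      # Noisy points near the line x*cos(30 deg) + y*sin(30 deg) = 5.
      rng = np.random.default_rng(2)
      t = rng.uniform(-10, 10, 200)
      theta0 = np.deg2rad(30)
      pts = np.column_stack([5 * np.cos(theta0) - t * np.sin(theta0) + 0.05 * rng.standard_normal(200),
                             5 * np.sin(theta0) + t * np.cos(theta0) + 0.05 * rng.standard_normal(200)])
      print(hough_lines(pts))   # should recover rho ~ 5, theta ~ 0.52 rad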

  18. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo random binary sequence (PRBS) were produced in sequential order by an ion gate controller and utilized to control the ion gate of the IMS, and then the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were obtained. A NIBOHT-IMS mobility spectrum was gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results on the reactant ions demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of larger ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. SV40-transformed human fibroblasts: evidence for cellular aging in pre-crisis cells.

    PubMed

    Stein, G H

    1985-10-01

    Pre-crisis SV40-transformed human diploid fibroblast (HDF) cultures have a finite proliferative lifespan, but they do not enter a viable senescent state at end of lifespan. Little is known about either the mechanism for this finite lifespan in SV40-transformed HDF or its relationship to finite lifespan in normal HDF. Recently we proposed that in normal HDF the phenomena of finite lifespan and arrest in a viable senescent state depend on two separate processes: 1) an age-related decrease in the ability of the cells to recognize or respond to serum and/or other mitogens such that the cells become functionally mitogen-deprived at the end of lifespan; and 2) the ability of the cells to enter a viable, G1-arrested state whenever they experience mitogen deprivation. In this paper, data are presented that suggest that pre-crisis SV40-transformed HDF retain the first process described above, but lack the second process. It is shown that SV40-transformed HDF have a progressively decreasing ability to respond to serum as they age, but they continue to traverse the cell cycle at the end of lifespan. Concomitantly, the rate of cell death increases steadily toward the end of lifespan, thereby causing the total population to cease growing and ultimately to decline. Previous studies have shown that when SV40-transformed HDF are environmentally serum deprived, they likewise exhibit continued cell cycle traverse coupled with increased cell death. Thus, these results support the hypothesis that pre-crisis SV40-transformed HDF still undergo the same aging process as do normal HDF, but they end their lifespan in crisis rather than in the normal G1-arrested senescent state because they have lost their ability to enter a viable, G1-arrested state in response to mitogen deprivation.

  20. Vibrational spectroscopy and DFT calculations of flavonoid derriobtusone A

    NASA Astrophysics Data System (ADS)

    Marques, A. N. L.; Mendes Filho, J.; Freire, P. T. C.; Santos, H. S.; Albuquerque, M. R. J. R.; Bandeira, P. N.; Leite, R. V.; Braz-Filho, R.; Gusmão, G. O. M.; Nogueira, C. E. S.; Teixeira, A. M. R.

    2017-02-01

    Flavonoids are secondary metabolites of plants which perform various functions. One subclass of flavonoids is the auronols, which can present immunostimulating activity. In this work, Fourier-Transform Infrared with Attenuated Total Reflectance (FTIR-ATR) and Fourier-Transform Raman (FT-Raman) spectra of an auronol, derriobtusone A (C18H12O4), were obtained at room temperature. Theoretical calculations using Density Functional Theory (DFT) were performed in order to assign the normal modes and to interpret the spectra of the derriobtusone A molecule. The FTIR-ATR and FT-Raman spectra of the crystal were recorded at room temperature in the regions 600 cm⁻¹ to 4000 cm⁻¹ and 40 cm⁻¹ to 4000 cm⁻¹, respectively. The normal modes of vibration were obtained using Density Functional Theory with the B3LYP functional and the 6-31G+(d,p) basis set. The calculated frequencies are in good agreement with those obtained experimentally. Detailed assignments of the normal modes present in both the Fourier-Transform infrared and the Fourier-Transform Raman spectra of the crystal are given.

  1. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…

  2. The morphing of geographical features by Fourier transformation

    PubMed Central

    Liu, Pengcheng; Yu, Wenhao; Cheng, Xiaoqiang

    2018-01-01

    This paper presents a morphing model of vector geographical data based on Fourier transformation. This model involves three main steps. They are conversion from vector data to Fourier series, generation of intermediate function by combination of the two Fourier series concerning a large scale and a small scale, and reverse conversion from combination function to vector data. By mirror processing, the model can also be used for morphing of linear features. Experimental results show that this method is sensitive to scale variations and it can be used for vector map features’ continuous scale transformation. The efficiency of this model is linearly related to the point number of shape boundary and the interceptive value n of Fourier expansion. The effect of morphing by Fourier transformation is plausible and the efficiency of the algorithm is acceptable. PMID:29351344

  3. Data Transformations for Inference with Linear Regression: Clarifications and Recommendations

    ERIC Educational Resources Information Center

    Pek, Jolynn; Wong, Octavia; Wong, C. M.

    2017-01-01

    Data transformations have been promoted as a popular and easy-to-implement remedy to address the assumption of normally distributed errors (in the population) in linear regression. However, the application of data transformations introduces non-ignorable complexities which should be fully appreciated before their implementation. This paper adds to…

  4. Towards quantitative assessment of calciphylaxis

    NASA Astrophysics Data System (ADS)

    Deserno, Thomas M.; Sárándi, István.; Jose, Abin; Haak, Daniel; Jonas, Stephan; Specht, Paula; Brandenburg, Vincent

    2014-03-01

    Calciphylaxis is a rare disease with devastating complications and high morbidity and mortality. Calciphylaxis is characterized by systemic medial calcification of the arteries yielding necrotic skin ulcerations. In this paper, we aim at supporting the installation of multi-center registries for calciphylaxis, which include a photographic documentation of skin necrosis. However, photographs acquired in different centers under different conditions using different equipment and photographers cannot be compared quantitatively. For normalization, we use a simple color pad that is placed into the field of view, segmented from the image, and its color fields are analyzed. In total, 24 colors are printed on the pad. A least-squares approach is used to determine the affine color transform. Furthermore, the pad allows scale normalization. We provide a case study for qualitative assessment. In addition, the method is evaluated quantitatively using 10 images of two sets of different captures of the same necrosis. The variability of quantitative measurements based on free hand photography is assessed regarding geometric and color distortions before and after our simple calibration procedure. Using automated image processing, the standard deviation of measurements is significantly reduced. The coefficients of variation yield 5-20% and 2-10% for geometry and color, respectively. Hence, quantitative assessment of calciphylaxis becomes practicable and will impact a better understanding of this rare but fatal disease.
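
    The least-squares affine colour correction can be sketched as follows (the patch values are invented; the real pad has 24 colour fields, and the published pipeline also performs segmentation of the pad and geometric scale normalization):

      # Sketch of a least-squares affine colour correction: RGB values measured on the
      # reference pad in the photograph are mapped onto the pad's known reference colours
      # with a 3x4 affine transform, which can then be applied to the whole image.
      import numpy as np

      measured = np.array([[200, 180, 170],     # patches as observed in the photograph (made up)
                           [ 60,  70,  90],
                           [120, 200, 140],
                           [ 30,  40, 200],
                           [250, 250, 240],
                           [ 20,  25,  30]], dtype=float)
      reference = np.array([[255, 220, 210],    # the pad's true (printed) colours (made up)
                            [ 70,  80, 110],
                            [140, 230, 160],
                            [ 40,  50, 230],
                            [255, 255, 255],
                            [  0,   0,   0]], dtype=float)

      # Solve reference ~= [measured, 1] @ A for the 4x3 affine matrix A (least squares).
      X = np.hstack([measured, np.ones((len(measured), 1))])
      A, *_ = np.linalg.lstsq(X, reference, rcond=None)

      def correct(rgb_pixels):
          """Apply the affine colour transform to an (N, 3) array of pixels."""
          return np.hstack([rgb_pixels, np.ones((len(rgb_pixels), 1))]) @ A

      print(correct(measured).round(1))   # should be close to the reference colours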

  5. On new scaling group of transformation for Prandtl-Eyring fluid model with both heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Rehman, Khalil Ur; Malik, Aneeqa Ashfaq; Malik, M. Y.; Tahir, M.; Zehra, Iffat

    2018-03-01

    This short communication offers a set of scaling group transformations for the Prandtl-Eyring fluid flow produced by a stretching flat porous surface. The flow regime is considered with both heat and mass transfer characteristics. To seek a solution of the flow problem, a set of scaling group transformations is proposed by adopting the Lie approach. These transformations are used to reduce the partial differential equations to ordinary differential equations. The reduced system is solved by a numerical method termed the shooting method. A self-coded algorithm is executed in this regard. The obtained results are elaborated by means of figures and tables.

  6. The scale of the Fourier transform: a point of view of the fractional Fourier transform

    NASA Astrophysics Data System (ADS)

    Jimenez, C. J.; Vilardy, J. M.; Salinas, S.; Mattos, L.; Torres, C. O.

    2017-01-01

    In this paper, using the fractional-order Fourier transform, the ray transfer matrix for symmetrical ABCD-type optical systems, and Collins' diffraction formula, we explicitly obtain the expression for the conventional scaled Fourier transform. This result is of great importance in optical signal processing because it offers the possibility of scaling the size of the output Fourier distribution of the system simply by manipulating the distance from the diffracting object to the thin lens. This research also emphasizes practical limits when a finite spherical converging lens aperture is used. Digital simulation was carried out using the numerical platform of Matlab 7.1.

  7. GROWTH REGULATION IN RSV INFECTED CHICKEN EMBRYO FIBROBLASTS: THE ROLE OF THE src GENE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parry, G.; Bartholomew, J.C.; Bissell, M.J.

    1980-03-01

    The relationship between growth regulation and cell transformation has been studied in many cultured cell lines transformed by a range of oncogenic agents. The main conclusion derived from these investigations is that the nature of the growth regulatory lesion in transformed cells is a function of the agent used to induce transformation. For example, when 3T3 fibroblasts are rendered stationary by serum deprivation, normal cells accumulate in G1 but SV40 transformed cells are arrested at all stages of the cell cycle. In contrast, 3T3 cells transformed with Rous sarcoma virus B77 accumulate in G1 upon serum deprivation. This is also true when mouse sarcoma virus (MSV) is used as the transforming agent. MSV-transformed cells accumulate in G1, just as do normal cells. In this letter we report a detailed study of the mechanisms leading to loss of growth control in chicken embryo fibroblasts transformed by Rous sarcoma virus (RSV). We have been particularly concerned with the role of the src gene in the process, and have used RSV mutants temperature sensitive (ts) for transformation to investigate the nature of the growth regulatory lesion. Two principal findings have emerged: (a) the stationary phase of the cell cycle (G1) in chick embryo fibroblasts has two distinct compartments (for simplicity referred to as the G1 and G0 states); (b) when rendered stationary at 41.5° by serum deprivation, normal cells enter a G0-like state, but cells infected with the ts-mutant occupy a G1 state, even though a known src gene product, a kinase, should be inactive at this temperature. The possibility is discussed that viral factors other than the active src protein kinase influence growth control.

  8. Identifying large scale structures at 1 AU using fluctuations and wavelets

    NASA Astrophysics Data System (ADS)

    Niembro, T.; Lara, A.

    2016-12-01

    The solar wind (SW) is inhomogeneous and is dominated by two types of flows: one quasi-stationary and one related to large-scale transients (such as coronal mass ejections and co-rotating interaction regions). The SW inhomogeneities can be studied as fluctuations characterized by a wide range of length and time scales. We are interested in the study of the characteristic fluctuations caused by large-scale transient events. To do so, we define the vector space F with the normalized moving monthly/annual deviations as the orthogonal basis. Then, we compute the norm in this space of the fluctuations of the solar wind parameters (velocity, magnetic field, density and temperature) using WIND data from August 1992 to August 2015. This norm gives important information about the presence of a large-scale structure disturbance in the solar wind, and by applying a wavelet transform to this norm, we are able to determine, without subjectivity, the duration of the compression regions of these large transient structures and, even more, to identify whether the structure corresponds to a single or a complex (or merged) event. With this method we have automatically detected most of the events identified and published by other authors.
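
    A rough sketch of this kind of calculation is given below: normalized moving deviations of several solar wind parameters are combined into a single norm, and a wavelet transform of that norm highlights an injected large-scale disturbance. The series are synthetic stand-ins for the WIND data, and the simple rolling window used here is only an assumption about how the monthly/annual deviations are formed:

      # Sketch: normalized moving deviations of SW parameters, their combined norm, and a
      # continuous wavelet transform of the norm. Synthetic hourly data; the authors' exact
      # normalization and vector-space definition may differ.
      import numpy as np
      import pandas as pd
      import pywt

      rng = np.random.default_rng(3)
      n = 24 * 365                                      # one year of hourly samples
      t = np.arange(n)
      data = pd.DataFrame({
          "speed":   400 + 50 * np.sin(2 * np.pi * t / (27 * 24)) + 20 * rng.standard_normal(n),
          "density": 5 + rng.standard_normal(n),
          "bfield":  6 + rng.standard_normal(n),
          "temp":    1e5 * (1 + 0.1 * rng.standard_normal(n)),
      })
      data.loc[5000:5040, ["speed", "density", "bfield"]] *= 1.6   # an ICME-like disturbance

      window = 30 * 24                                  # "monthly" rolling window (assumption)
      roll = data.rolling(window, min_periods=1, center=True)
      deviations = (data - roll.mean()) / roll.std()    # normalized moving deviations
      norm = np.sqrt((deviations ** 2).sum(axis=1))     # norm in the space of fluctuations

      scales = np.arange(1, 129)
      coeffs, _ = pywt.cwt(norm.to_numpy(), scales, "morl")
      power = np.abs(coeffs) ** 2                       # large power flags compression regions
      print(int(power.sum(axis=0).argmax()))            # expected near the injected disturbance (~5000)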

  9. Land Cover and Topography Affect the Land Transformation Caused by Wind Facilities

    PubMed Central

    Diffendorfer, Jay E.; Compton, Roger W.

    2014-01-01

    Land transformation (ha of surface disturbance/MW) associated with wind facilities shows wide variation in its reported values. In addition, no studies have attempted to explain the variation across facilities. We digitized land transformation at 39 wind facilities using high resolution aerial imagery. We then modeled the effects of turbine size, configuration, land cover, and topography on the levels of land transformation at three spatial scales. The scales included strings (turbines with intervening roads only), sites (strings with roads connecting them, buried cables and other infrastructure), and entire facilities (sites and the roads or transmission lines connecting them to existing infrastructure). An information theoretic modeling approach indicated land cover and topography were well-supported variables affecting land transformation, but not turbine size or configuration. Tilled landscapes, despite larger distances between turbines, had lower average land transformation, while facilities in forested landscapes generally had the highest land transformation. At site and string scales, flat topographies had the lowest land transformation, while facilities on mesas had the largest. The results indicate the landscape in which the facilities are placed affects the levels of land transformation associated with wind energy. This creates opportunities for optimizing wind energy production while minimizing land cover change. In addition, the results indicate forecasting the impacts of wind energy on land transformation should include the geographic variables affecting land transformation reported here. PMID:24558449

  10. Land cover and topography affect the land transformation caused by wind facilities

    USGS Publications Warehouse

    Diffendorfer, Jay E.; Compton, Roger W.

    2014-01-01

    Land transformation (ha of surface disturbance/MW) associated with wind facilities shows wide variation in its reported values. In addition, no studies have attempted to explain the variation across facilities. We digitized land transformation at 39 wind facilities using high resolution aerial imagery. We then modeled the effects of turbine size, configuration, land cover, and topography on the levels of land transformation at three spatial scales. The scales included strings (turbines with intervening roads only), sites (strings with roads connecting them, buried cables and other infrastructure), and entire facilities (sites and the roads or transmission lines connecting them to existing infrastructure). An information theoretic modeling approach indicated land cover and topography were well-supported variables affecting land transformation, but not turbine size or configuration. Tilled landscapes, despite larger distances between turbines, had lower average land transformation, while facilities in forested landscapes generally had the highest land transformation. At site and string scales, flat topographies had the lowest land transformation, while facilities on mesas had the largest. The results indicate the landscape in which the facilities are placed affects the levels of land transformation associated with wind energy. This creates opportunities for optimizing wind energy production while minimizing land cover change. In addition, the results indicate forecasting the impacts of wind energy on land transformation should include the geographic variables affecting land transformation reported here.

  11. Using Mental Transformation Strategies for Spatial Scaling: Evidence from a Discrimination Task

    ERIC Educational Resources Information Center

    Möhring, Wenke; Newcombe, Nora S.; Frick, Andrea

    2016-01-01

    Spatial scaling, or an understanding of how distances in different-sized spaces relate to each other, is fundamental for many spatial tasks and relevant for success in numerous professions. Previous research has suggested that adults use mental transformation strategies to mentally scale spatial input, as indicated by linear increases in response…

  12. An improved KCF tracking algorithm based on multi-feature and multi-scale

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye

    2018-02-01

    The purpose of visual tracking is to locate the target object across consecutive video frames. In recent years, methods based on the kernelized correlation filter (KCF) have become a research hotspot. However, the basic algorithm still struggles with problems such as fast camera jitter and changes in target scale. To improve scale handling and feature description, this paper presents an algorithm based on multi-feature fusion and a multi-scale transform. The experimental results show that the method keeps the target model correctly updated when the target is occluded or changes scale. In one-pass evaluation (OPE), the accuracy is 77.0% and 75.4% and the success rate is 69.7% and 66.4% on the VOT and OTB datasets, respectively. Compared with the best-performing of the existing tracking algorithms, the accuracy is improved by 6.7% and 6.3%, and the success rates are improved by 13.7% and 14.2%, respectively.

  13. Thin-plate spline analysis of the cranial base in subjects with Class III malocclusion.

    PubMed

    Singh, G D; McNamara, J A; Lozanoff, S

    1997-08-01

    The role of the cranial base in the emergence of Class III malocclusion is not fully understood. This study determines deformations that contribute to a Class III cranial base morphology, employing thin-plate spline analysis on lateral cephalographs. A total of 73 children of European-American descent aged between 5 and 11 years with Class III malocclusion were compared with an equivalent group of subjects with a normal, untreated, Class I molar occlusion. The cephalographs were traced, checked and subdivided into seven age- and sex-matched groups. Thirteen points on the cranial base were identified and digitized. The datasets were scaled to an equivalent size, and statistical analysis indicated significant differences between average Class I and Class III cranial base morphologies for each group. Thin-plate spline analysis indicated that both affine (uniform) and non-affine transformations contribute toward the total spline for each average cranial base morphology in each age group analysed. For non-affine transformations, Partial warps 10, 8 and 7 had high magnitudes, indicating large-scale deformations affecting Bolton point, basion, pterygo-maxillare, Ricketts' point and articulare. In contrast, high eigenvalues associated with Partial warps 1-3, indicating localized shape changes, were found at tuberculum sellae, sella, and the frontonasomaxillary suture. It is concluded that large spatial-scale deformations affect the occipital complex of the cranial base and sphenoidal region, in combination with localized distortions at the frontonasal suture. These deformations may contribute to reduced orthocephalization or deficient flattening of the cranial base antero-posteriorly that, in turn, leads to the formation of a Class III malocclusion.
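
    A minimal sketch of the underlying idea, fitting a 2-D thin-plate spline warp between two landmark configurations with SciPy's RBFInterpolator (kernel="thin_plate_spline"). The landmark coordinates are illustrative placeholders, not the study's cephalometric data, and the decomposition into affine and partial warps is not reproduced here.

```python
# Sketch: thin-plate spline warp between two 2-D landmark configurations.
# Landmark coordinates are illustrative, not the study's cephalometric data.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Reference (e.g., average Class I) and target (e.g., average Class III) landmarks
ref = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [1.0, 1.0], [0.2, 1.1]])
tgt = np.array([[0.0, 0.0], [1.1, 0.0], [2.2, 0.3], [1.0, 1.2], [0.1, 1.0]])

# One thin-plate spline per output coordinate maps reference -> target.
tps_x = RBFInterpolator(ref, tgt[:, 0], kernel="thin_plate_spline")
tps_y = RBFInterpolator(ref, tgt[:, 1], kernel="thin_plate_spline")

# Warp a regular grid to visualize the deformation (cf. the total spline).
grid = np.stack(np.meshgrid(np.linspace(0, 2, 5), np.linspace(0, 1.2, 4)), -1).reshape(-1, 2)
warped = np.column_stack([tps_x(grid), tps_y(grid)])
print(warped[:3])
```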

  14. A DNA methylation map of human cancer at single base-pair resolution.

    PubMed

    Vidal, E; Sayols, S; Moran, S; Guillaumet-Adkins, A; Schroeder, M P; Royo, R; Orozco, M; Gut, M; Gut, I; Lopez-Bigas, N; Heyn, H; Esteller, M

    2017-10-05

    Although single base-pair resolution DNA methylation landscapes for embryonic and different somatic cell types provided important insights into epigenetic dynamics and cell-type specificity, such comprehensive profiling is incomplete across human cancer types. This prompted us to perform genome-wide DNA methylation profiling of 22 samples derived from normal tissues and associated neoplasms, including primary tumors and cancer cell lines. Unlike their invariant normal counterparts, cancer samples exhibited highly variable CpG methylation levels in a large proportion of the genome, involving progressive changes during tumor evolution. The whole-genome sequencing results from selected samples were replicated in a large cohort of 1112 primary tumors of various cancer types using genome-scale DNA methylation analysis. Specifically, we determined DNA hypermethylation of promoters and enhancers regulating tumor-suppressor genes, with potential cancer-driving effects. DNA hypermethylation events showed evidence of positive selection, mutual exclusivity and tissue specificity, suggesting their active participation in neoplastic transformation. Our data highlight the extensive changes in DNA methylation that occur in cancer onset, progression and dissemination.

  15. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). The method is based on a Variance Stabilizing Transform (VST), a transform that stabilizes a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS therefore consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, etc.) and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
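
    The core VST idea can be illustrated on flat Poisson data with the classical Anscombe transform; this is only a sketch of the stabilization step, not the full MS-VSTS machinery on the sphere.

```python
# Minimal sketch of variance stabilization for Poisson data using the
# classical Anscombe transform (not the full MS-VSTS on the sphere).
import numpy as np

rng = np.random.default_rng(0)
lam = np.full((64, 64), 5.0)            # true intensity
counts = rng.poisson(lam)               # Poisson observations

def anscombe(x):
    """Approximately Gaussianize Poisson data with unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

stab = anscombe(counts)
print("stabilized variance ~ 1:", stab.var())
# Denoising (e.g., wavelet thresholding) would be applied to `stab`,
# followed by an (unbiased) inverse transform.
```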

  16. a Weighted Closed-Form Solution for Rgb-D Data Registration

    NASA Astrophysics Data System (ADS)

    Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.

    2016-06-01

    Existing 3D indoor mapping approaches for RGB-D data are predominantly point-based and feature-based methods. In most cases, the iterative closest point (ICP) algorithm and its variants are used for the pairwise registration process. Because ICP requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, the 3D points are weighted and normalized based on their theoretical random errors, and dual-number quaternions are used to represent the 3D rigid-body motion. Dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step: it does not need good initial estimates and greatly decreases the demand for computing resources in contrast to iterative methods. First, the method exploits the RGB information. A scale-invariant feature transform (SIFT) is employed for detecting, extracting, and matching features; it detects and describes local features that are invariant to scaling and rotation. To detect and filter outliers, the random sample consensus (RANSAC) algorithm is used jointly with a measure of statistical dispersion, the interquartile range (IQR). Next, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors; the loop closure recognizes when the sensor revisits a region. Finally, a globally consistent map is created, minimizing the registration errors via a graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment with an absolute accuracy of around 1.5% of the trajectory length.
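
    A hedged sketch of the feature-matching and outlier-rejection stage only, using OpenCV (SIFT, ratio test, RANSAC). The weighted dual-quaternion closed-form solver, the IQR filtering and the graph optimization from the paper are not reproduced, and the image file names are placeholders.

```python
# Sketch of the feature-matching / outlier-rejection step with OpenCV
# (SIFT + ratio test + RANSAC). File names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC flags geometric outliers; inlier 2-D matches would then be lifted
# to 3-D with the depth image before solving for the rigid-body motion.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inliers:", int(mask.sum()), "of", len(matches))
```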

  17. Multiscale characterization and prediction of monsoon rainfall in India using Hilbert-Huang transform and time-dependent intrinsic correlation analysis

    NASA Astrophysics Data System (ADS)

    Adarsh, S.; Reddy, M. Janga

    2017-07-01

    In this paper, the Hilbert-Huang transform (HHT) approach is used for the multiscale characterization of the All India Summer Monsoon Rainfall (AISMR) time series and of monsoon rainfall time series from five homogeneous regions in India. The study employs Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) for the multiscale decomposition of monsoon rainfall in India and uses the Normalized Hilbert Transform and Direct Quadrature (NHT-DQ) scheme for the time-frequency characterization. Cross-correlation analysis in the time domain between the orthogonal modes of the All India monthly monsoon rainfall time series and those of five climate indices, namely the Quasi-Biennial Oscillation (QBO), El Niño Southern Oscillation (ENSO), Sunspot Number (SN), Atlantic Multidecadal Oscillation (AMO) and Equatorial Indian Ocean Oscillation (EQUINOO), showed that the links of the different climate indices with monsoon rainfall are expressed well only for a few low-frequency modes and for the trend component. Furthermore, this paper investigated the hydro-climatic teleconnections of ISMR at multiple time scales using the HHT-based running correlation analysis technique called time-dependent intrinsic correlation (TDIC). The results showed that both the strength and the nature of the association between the different climate indices and ISMR vary with time scale. Stemming from this finding, a methodology employing a Multivariate extension of EMD and Stepwise Linear Regression (MEMD-SLR) is proposed for the prediction of monsoon rainfall in India. The proposed MEMD-SLR method clearly exhibited superior performance over the IMD operational forecast, the M5 Model Tree (MT) and multiple linear regression methods in ISMR prediction, and displayed excellent predictive skill during 1989-2012, including for the four extreme events that occurred during this period.
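
    As a small illustration of the Hilbert step of the HHT, the sketch below computes the instantaneous amplitude and frequency of a synthetic annual mode with scipy.signal.hilbert; the CEEMDAN decomposition and the NHT-DQ scheme used in the paper are not reproduced.

```python
# Sketch of the Hilbert step of HHT on a synthetic mode: instantaneous
# amplitude and frequency from the analytic signal.
import numpy as np
from scipy.signal import hilbert

fs = 12.0                                   # samples per year (monthly data)
t = np.arange(0, 60, 1.0 / fs)              # 60 "years"
imf = (1.0 + 0.3 * np.sin(2 * np.pi * t / 11)) * np.sin(2 * np.pi * 1.0 * t)  # annual mode

analytic = hilbert(imf)
amplitude = np.abs(analytic)                               # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs              # cycles per year

print("mean instantaneous frequency ~ 1 cycle/yr:", inst_freq.mean())
```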

  18. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
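
    A minimal sketch of the postulated quadratic mean-variance structure, estimated from synthetic replicated intensities; the extended quasi-likelihood estimation itself is not reproduced and all parameter values are made up.

```python
# Sketch: estimate a quadratic variance structure Var(y) ~ a + b*mu^2
# from replicated intensities (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
mu = rng.uniform(50, 5000, size=500)                 # true gene intensities
a_true, b_true = 200.0, 0.01
y = rng.normal(mu[:, None], np.sqrt(a_true + b_true * mu[:, None] ** 2), size=(500, 4))

m = y.mean(axis=1)                                   # per-gene mean
v = y.var(axis=1, ddof=1)                            # per-gene variance
b_hat, a_hat = np.polyfit(m ** 2, v, 1)              # fit v = a + b*m^2
print(f"a ~ {a_hat:.0f}, b ~ {b_hat:.4f}")
```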

  19. Understanding a Normal Distribution of Data (Part 2).

    PubMed

    Maltenfort, Mitchell

    2016-02-01

    Completing the discussion of data normality, advanced techniques for analysis of non-normal data are discussed including data transformation, Generalized Linear Modeling, and bootstrapping. Relative strengths and weaknesses of each technique are helpful in choosing a strategy, but help from a statistician is usually necessary to analyze non-normal data using these methods.
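
    A minimal sketch of one of the options mentioned above, a percentile bootstrap confidence interval for the mean of a skewed sample; the data are synthetic.

```python
# Percentile bootstrap confidence interval for the mean of a skewed
# (non-normal) sample.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # skewed data

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```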

  20. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
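
    A toy sketch of log-transformation prior to z-scoring; the result values, assigned value and target standard deviation below are assumptions for illustration, not scheme data.

```python
# Sketch: z-scores computed after log-transformation of positively skewed
# proficiency-test results (all values assumed for illustration).
import numpy as np

results = np.array([0.8, 1.1, 0.9, 1.4, 2.9, 1.0, 1.2, 0.7, 1.6, 1.1])  # e.g. % GM content
assigned_value = 1.0                  # assigned (true) value for the round
sigma_p = 0.25                        # target standard deviation on the log10 scale (assumed)

logs = np.log10(results)
z = (logs - np.log10(assigned_value)) / sigma_p
print(np.round(z, 2))
```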

  1. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  2. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.
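
    As a univariate stand-in for the multivariate model described above, the sketch below applies a maximum-likelihood Box-Cox transformation to a synthetic, skewed, triglyceride-like response using scipy.stats.boxcox.

```python
# Sketch: Box-Cox transformation toward normality for one skewed response
# (synthetic data, not the trial data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tg = rng.lognormal(mean=4.9, sigma=0.5, size=1000)      # skewed positive response

tg_bc, lam = stats.boxcox(tg)                           # ML estimate of lambda
print(f"lambda ~ {lam:.2f}, skewness before = {stats.skew(tg):.2f}, "
      f"after = {stats.skew(tg_bc):.2f}")
```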

  3. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
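
    A sketch of the rank-based inverse normal transformation (Blom offset) compared in the study, applied to a synthetic gamma-distributed trait.

```python
# Rank-based inverse normal transformation (Blom offset) of a skewed trait.
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Map values to normal quantiles of their (offset) ranks."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)                     # average ranks for ties
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(3)
gamma_trait = rng.gamma(shape=1.0, scale=2.0, size=1000)
transformed = rank_inverse_normal(gamma_trait)
print(stats.skew(gamma_trait), stats.skew(transformed))   # skew ~ 0 after INT
```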

  4. Epithelial self-defense against cancer.

    PubMed

    Yamauchi, Hajime; Fujita, Yasuyuki

    2012-11-01

    It is not clearly understood what happens at the interface between normal and transformed epithelial cells at the first step of carcinogenesis. A recent study reveals that the organized epithelial structure suppresses clonal expansion of transformed cells. Translocation from the epithelium or perturbation of intercellular adhesions may be required for transformed cells to evade the suppressive environments.

  5. Beclin-1 Expression in Normal Bladder and in Cd+2 and As+3 Exposed and Transformed Human Urothelial Cells (UROtsa)

    PubMed Central

    Larson, Jennifer L.; Somji, Seema; Zhou, Xu Dong; Sens, Mary Ann; Garrett, Scott H.; Sens, Donald A.; Dunlevy, Jane R.

    2010-01-01

    The expression of beclin-1 in normal human bladder and in Cd+2 and As+3 exposed and transformed urothelial cells (UROtsa) was examined in this study. It was shown using a combination of real time PCR, western analysis and immunohistochemistry that beclin-1 was expressed in the urothelial cells of the normal bladder. It was also demonstrated that the parental UROtsa cell line expressed beclin-1 mRNA and protein at levels similar to that of the in situ urothelium. The level of beclin-1 expression underwent only modest alterations when the UROtsa cells were malignantly transformed by Cd+2 or As+3 or when the parental cells were exposed acutely to Cd+2 or As+3. While there were instances of significant alterations at individual time points and within cell line-to-cell line comparisons there was no evidence of a dose response relationship or correlations to the phenotypic properties of the cell lines. Similar results were obtained for the expression of the Atg-5, Atg-7, Atg-12 and LC3B autophagy-related proteins. The findings provide initial evidence for beclin-1 expression in normal bladder and that large alterations in the expression of beclin-1 and associated proteins do not occur when human urothelial cells are malignantly transformed with, or exposed to, either Cd+2 or As+3. PMID:20206246

  6. Playing the Scales: Regional Transformations and the Differentiation of Rural Space in the Chilean Wine Industry

    ERIC Educational Resources Information Center

    Overton, John; Murray, Warwick E.

    2011-01-01

    Globalization and industrial restructuring transform rural places in complex and often contradictory ways. These involve both quantitative changes, increasing the size and scope of operation to achieve economies of scale, and qualitative shifts, sometimes leading to a shift up the quality/price scale, towards finer spatial resolution and…

  7. Detecting mammographically occult cancer in women with dense breasts using Radon Cumulative Distribution Transform: a preliminary analysis

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Nishikawa, Robert M.; Rohde, Gustavo K.

    2018-02-01

    We propose using novel imaging biomarkers for detecting mammographically occult (MO) cancer in women with dense breast tissue. MO cancer indicates visually occluded, or very subtle, cancer that radiologists fail to recognize as a sign of cancer. We used the Radon Cumulative Distribution Transform (RCDT) as a novel image transformation to project the difference between left and right mammograms into a space that increases the detectability of occult cancer. We used a dataset of 617 screening full-field digital mammograms (FFDMs) of 238 women with dense breast tissue. Among the 238 women, 173 were normal, with 2-4 consecutive screening mammograms (552 normal mammograms in total), and the remaining 65 women had an MO cancer with a negative screening mammogram. We used principal component analysis (PCA) to find representative patterns of normal mammograms in the RCDT space, and projected all mammograms onto the space constructed by the first 30 eigenvectors of the RCDT of normal cases. Under 10-fold cross-validation, we conducted quantitative feature analysis to classify normal mammograms and mammograms with MO cancer, using receiver operating characteristic (ROC) analysis to evaluate the classifier's output with the area under the ROC curve (AUC) as the figure of merit. Four eigenvectors were selected via a feature selection method. The mean and standard deviation of the AUC of the trained classifier on the test set were 0.74 and 0.08, respectively. In conclusion, we utilized imaging biomarkers to highlight differences between left and right mammograms to detect MO cancer using a novel image transformation.
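
    A hedged sketch of the template-plus-classifier idea: PCA fitted on "normal" feature vectors, projection of all cases, and a cross-validated ROC AUC with scikit-learn. Random features stand in for the RCDT-transformed left-right differences, so the numbers are not comparable to the paper's results.

```python
# Sketch: PCA template from normal cases, projection of all cases, and a
# cross-validated ROC AUC (random stand-in features).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(552, 200))                  # normal exams (features)
X_cancer = rng.normal(loc=0.2, size=(65, 200))          # MO-cancer exams (features)

pca = PCA(n_components=30).fit(X_normal)                # template from normals only
X = pca.transform(np.vstack([X_normal, X_cancer]))
y = np.r_[np.zeros(len(X_normal)), np.ones(len(X_cancer))]

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=10, scoring="roc_auc")
print(f"AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```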

  8. Algorithm for the classification of multi-modulating signals on the electrocardiogram.

    PubMed

    Mita, Mitsuo

    2007-03-01

    This article discusses an algorithm to measure the electrocardiogram (ECG) and respiration simultaneously and to provide diagnostic potential for sleep apnoea from ECG recordings. The algorithm combines three particular scale transforms, a(j)(t), u(j)(t) and o(j)(a(j)), with the statistical Fourier transform (SFT). The time and magnitude scale transforms a(j)(t) and u(j)(t) change the source into a periodic signal, and tau(j) = o(j)(a(j)) confines its harmonics to a few instantaneous components at tau(j), a common instant on the two scales t and tau(j). As a result, the multi-modulating source is decomposed by the SFT and is reconstructed into ECG, respiration and other signals by the inverse transform. The algorithm is expected to recover partial ventilation and heart rate variability from the scale transforms among a(j)(t), a(j+1)(t) and u(j+1)(t) associated with each modulation. The algorithm therefore has high potential in clinical checkups for the diagnosis of sleep apnoea from ECG recordings.

  9. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    [OCR-garbled excerpt from the report appendix on distributions supported by the model. Recoverable content: random variables for the conditional exponential and conditional Weibull distributions are generated with the inverse transform method (generate U ~ U(0,1), then apply the inverse cumulative distribution function), and normal variates are generated using a standard normal transformation combined with the inverse transform method.]
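
    A short sketch of the inverse transform method referenced above, generating exponential and Weibull variates from uniform random numbers; the parameter values are assumptions.

```python
# Inverse transform sampling: exponential and Weibull variates from uniforms.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)

lam = 2.0                                   # exponential rate (assumed)
exp_samples = -np.log(1.0 - u) / lam        # F^-1(u) for Exponential(lam)

eta, beta = 1.5, 0.8                        # Weibull scale and shape (assumed)
weib_samples = eta * (-np.log(1.0 - u)) ** (1.0 / beta)

print(exp_samples.mean(), 1 / lam)          # sample mean ~ 1/lam
print(weib_samples.mean())
```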

  10. An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Brown, Clifford; Bridges, James

    2003-01-01

    Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

  11. Epigenetic variability in cells of normal cytology is associated with the risk of future morphological transformation.

    PubMed

    Teschendorff, Andrew E; Jones, Allison; Fiegl, Heidi; Sargent, Alexandra; Zhuang, Joanna J; Kitchener, Henry C; Widschwendter, Martin

    2012-03-27

    Recently, it has been proposed that epigenetic variation may contribute to the risk of complex genetic diseases like cancer. We aimed to demonstrate that epigenetic changes in normal cells, collected years in advance of the first signs of morphological transformation, can predict the risk of such transformation. We analyzed DNA methylation (DNAm) profiles of over 27,000 CpGs in cytologically normal cells of the uterine cervix from 152 women in a prospective nested case-control study. We used statistics based on differential variability to identify CpGs associated with the risk of transformation and a novel statistical algorithm called EVORA (Epigenetic Variable Outliers for Risk prediction Analysis) to make predictions. We observed many CpGs that were differentially variable between women who developed a non-invasive cervical neoplasia within 3 years of sample collection and those that remained disease-free. These CpGs exhibited heterogeneous outlier methylation profiles and overlapped strongly with CpGs undergoing age-associated DNA methylation changes in normal tissue. Using EVORA, we demonstrate that the risk of cervical neoplasia can be predicted in blind test sets (AUC = 0.66 (0.58 to 0.75)), and that assessment of DNAm variability allows more reliable identification of risk-associated CpGs than statistics based on differences in mean methylation levels. In independent data, EVORA showed high sensitivity and specificity to detect pre-invasive neoplasia and cervical cancer (AUC = 0.93 (0.86 to 1) and AUC = 1, respectively). We demonstrate that the risk of neoplastic transformation can be predicted from DNA methylation profiles in the morphologically normal cell of origin of an epithelial cancer. Having profiled only 0.1% of CpGs in the human genome, studies of wider coverage are likely to yield improved predictive and diagnostic models with the accuracy needed for clinical application. The ARTISTIC trial is registered with the International Standard Randomised Controlled Trial Number ISRCTN25417821.

  12. Epigenetic variability in cells of normal cytology is associated with the risk of future morphological transformation

    PubMed Central

    2012-01-01

    Background Recently, it has been proposed that epigenetic variation may contribute to the risk of complex genetic diseases like cancer. We aimed to demonstrate that epigenetic changes in normal cells, collected years in advance of the first signs of morphological transformation, can predict the risk of such transformation. Methods We analyzed DNA methylation (DNAm) profiles of over 27,000 CpGs in cytologically normal cells of the uterine cervix from 152 women in a prospective nested case-control study. We used statistics based on differential variability to identify CpGs associated with the risk of transformation and a novel statistical algorithm called EVORA (Epigenetic Variable Outliers for Risk prediction Analysis) to make predictions. Results We observed many CpGs that were differentially variable between women who developed a non-invasive cervical neoplasia within 3 years of sample collection and those that remained disease-free. These CpGs exhibited heterogeneous outlier methylation profiles and overlapped strongly with CpGs undergoing age-associated DNA methylation changes in normal tissue. Using EVORA, we demonstrate that the risk of cervical neoplasia can be predicted in blind test sets (AUC = 0.66 (0.58 to 0.75)), and that assessment of DNAm variability allows more reliable identification of risk-associated CpGs than statistics based on differences in mean methylation levels. In independent data, EVORA showed high sensitivity and specificity to detect pre-invasive neoplasia and cervical cancer (AUC = 0.93 (0.86 to 1) and AUC = 1, respectively). Conclusions We demonstrate that the risk of neoplastic transformation can be predicted from DNA methylation profiles in the morphologically normal cell of origin of an epithelial cancer. Having profiled only 0.1% of CpGs in the human genome, studies of wider coverage are likely to yield improved predictive and diagnostic models with the accuracy needed for clinical application. Trial registration The ARTISTIC trial is registered with the International Standard Randomised Controlled Trial Number ISRCTN25417821. PMID:22453031
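
    A minimal sketch of the differential-variability idea on synthetic methylation values: a per-CpG Bartlett test of unequal variance between cases and controls. EVORA's outlier-based statistic is not reproduced here.

```python
# Sketch: per-feature test of unequal variance (Bartlett) between cases and
# controls on synthetic methylation beta-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_cpg, n_ctrl, n_case = 2000, 100, 52
controls = rng.normal(0.1, 0.02, size=(n_cpg, n_ctrl))
cases = rng.normal(0.1, 0.02, size=(n_cpg, n_case))
cases[:100] += rng.normal(0, 0.08, size=(100, n_case))   # 100 differentially variable CpGs

pvals = np.array([stats.bartlett(controls[i], cases[i]).pvalue for i in range(n_cpg)])
print("true DV CpGs among top 100 by p-value:", np.sum(np.argsort(pvals)[:100] < 100))
```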

  13. Atomistic to Continuum Multiscale and Multiphysics Simulation of NiTi Shape Memory Alloy

    NASA Astrophysics Data System (ADS)

    Gur, Sourav

    Shape memory alloys (SMAs) are materials that show a reversible, thermo-elastic, diffusionless, displacive (solid-to-solid) phase transformation due to the application of temperature and/or stress (or strain). Among different SMAs, NiTi is a popular one. NiTi shows reversible phase transformation, the shape memory effect (SME), where irreversible deformations are recovered upon heating, and superelasticity (SE), where large strains imposed at high enough temperatures are fully recovered. The phase transformation process in NiTi SMA is very complex and involves competition between the developed internal strain and phonon dispersion instability. In NiTi SMA, phase transformation occurs over a wide range of temperature and/or stress (strain) and involves the evolution of different crystalline phases (cubic austenite, i.e. B2; different monoclinic variants of martensite, i.e. B19'; and orthorhombic B19 or BCO structures). Further, experimental and computational studies show that the evolution kinetics and growth rate of the different phases in NiTi SMA vary significantly over a wide spectrum of spatio-temporal scales, especially with length scale. At the nanometer length scale, the phase transformation temperatures, critical transformation stress (or strain) and phase fraction evolution change significantly with sample or simulation cell size and grain size; below a critical length scale, the phase transformation process even stops. All these aspects make NiTi SMA very interesting to the science and engineering research community, and in this context the present study focuses on the following aspects. First, this study addresses the stability, evolution and growth kinetics of the different phases (B2 and variants of B19') at different length scales, starting from the atomic level and ending at the continuum macroscopic level. The effects of simulation cell size, grain size, and the presence of free surfaces and grain boundaries on the phase transformation process (transformation temperature, phase fraction evolution kinetics due to temperature) are also demonstrated herein. Next, to couple and transfer the statistical information of the length-scale-dependent phase transformation process, multiscale/multiphysics methods are used. Here, the computational difficulty arises from the fact that the representative governing equations (i.e. the different sub-methods, such as molecular dynamics simulations, phase field simulations and continuum-level constitutive/material models) are only valid, or can only be implemented, over a limited range of spatio-temporal scales. Therefore, in the present study a wavelet-based multiscale coupling method is used, in which simulation results (phase fraction evolution kinetics) from the different sub-methods are linked in a concurrent multiscale coupling fashion. These multiscale/multiphysics simulation results are then used to develop and modify the macro/continuum-scale thermo-mechanical constitutive relations for NiTi SMA. Finally, the improved material model is used to model new devices, such as thermal diodes and smart dampers.

  14. Enhancing fire safety at Hydro plants with dry transformers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemen, D.M.

    Hydroelectric plant owners and engineers can use dry-type transformers to reduce fire hazards in auxiliary power systems. The decision to replace a liquid-immersed transformer with a dry-type product has a price: higher unit cost and a need to be more vigilant in detailing transformer specifications. But, whether the change affects only one failed transformer or is part of a plant rehabilitation project, the benefits in safety can be worth it. Voltages on hydroelectric plant auxiliary power systems can range from a 20 kV medium-voltage system to the normal 480-208/120 V low-voltage system. Dry transformers typically are used in such systems to reduce the fire hazard present with liquid-filled transformers. For a hydro plant owner or engineer seeking alternatives to liquid-filled transformers, there are two main kinds of dry-type transformers to consider: vacuum pressure impregnated (VPI) and cast coil epoxy resin. VPI transformers normally are manufactured in sizes up to 6,000 kVA with primary voltage ratings up to 20 kV. Cast coil transformers can be made in sizes from 75 to 10,000 kVA, with primary voltage ratings up to 34,500 V. Although the same transformer theory applies to dry transformers as to liquid-filled units, the cooling medium, air, requires different temperature rise ratings, dielectric tests, and construction techniques to ensure reliability. Consequently, the factory and field tests for dry units are established by a separate set of American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE) standards. Cast coil transformers have several important advantages over VPI units.

  15. Galaxy formation and physical bias

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1992-01-01

    We have supplemented our code, which computes the evolution of the physical state of a representative piece of the universe, to include not only the dynamics of dark matter (with a standard PM code) and the hydrodynamics of the gaseous component (including detailed collisional and radiative processes), but also galaxy formation on a heuristic but plausible basis. If, within a cell, the gas is Jeans unstable, collapsing, and cooling rapidly, it is transformed to galaxy subunits, which are then followed with a collisionless code. After grouping them into galaxies, we estimate the relative distributions and the relative velocities of galaxies and dark matter. In a large-scale CDM run of 80/h Mpc size with 8 x 10^6 cells and dark matter particles, we find that the physical bias b on the 8/h Mpc scale is about 1.6 and increases towards smaller scales, and that the velocity bias is about 0.8 on the same scale. The comparable HDM simulation is highly biased, with b = 2.7 on the 8/h Mpc scale. Implications of these results are discussed in the light of the COBE observations, which provide an accurate normalization for the initial power spectrum. CDM can be ruled out on the basis of too large a predicted small-scale velocity dispersion at greater than the 95 percent confidence level.

  16. Wavelet analysis applied to the IRAS cirrus

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1994-01-01

    The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.
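
    A sketch of a Laplacian pyramid decomposition, the kind of non-orthogonal multiscale transform described above, built with OpenCV's pyrDown/pyrUp on a synthetic image standing in for the 100 micrometer map.

```python
# Laplacian pyramid decomposition with OpenCV on a synthetic image.
import cv2
import numpy as np

img = np.random.rand(256, 256).astype(np.float32)   # stand-in for a 100 um map

levels, gauss = [], img
for _ in range(4):
    down = cv2.pyrDown(gauss)                        # low-pass + subsample
    up = cv2.pyrUp(down, dstsize=gauss.shape[::-1])  # upsample back to current size
    levels.append(gauss - up)                        # band-pass (Laplacian) level
    gauss = down
levels.append(gauss)                                 # coarsest residual

for i, band in enumerate(levels):
    print(f"level {i}: shape {band.shape}, rms {band.std():.3f}")
```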

  17. The dynamic system corresponding to LOD and AAM.

    NASA Astrophysics Data System (ADS)

    Liu, Shida; Liu, Shikuo; Chen, Jiong

    2000-02-01

    Using the wavelet transform, the authors can reconstruct the 1-D map of a multifractal object. The wavelet transform of LOD and AAM shows that at the 20-year, annual and 2-3 year scales, the jump points of LOD and AAM accord with each other very well, and their reconstructed 1-D mapping dynamical systems are also very similar.

  18. Prevention of Breast Cell Transformation by Blockade of the AP-1 Transcription Factor.

    DTIC Science & Technology

    1997-09-01

    [OCR-garbled excerpt from the report. Recoverable content: CAT (from Dr. M. Karin) and Luciferase reporter plasmids, including -73/+63 ColCAT, were used; Table 1 lists the HMECs used in the study, comprising normal HMECs (NHMEC from Clonetics, senescent and anchorage-dependent; 184 from M. Stampfer), immortal HMECs (184A1N5 and 184B5 from M. Stampfer; MCF10A from A. Russo), and transformed HMECs (MCF10AneoT (ras) from A. Russo).]

  19. Feature combinations and the divergence criterion

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.; Mayekar, S. M.

    1976-01-01

    Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension reducing, information preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.
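
    A sketch of the divergence criterion for two multivariate normal populations: the symmetric (Kullback) divergence evaluated before and after a simple dimension-reducing linear transformation. The class means, covariances and the matrix B are illustrative assumptions.

```python
# Symmetric divergence between two multivariate normal classes, before and
# after a dimension-reducing linear feature combination.
import numpy as np

def divergence(mu1, cov1, mu2, cov2):
    """Symmetric divergence between N(mu1, cov1) and N(mu2, cov2)."""
    i1, i2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    dm = mu1 - mu2
    term_cov = 0.5 * np.trace((cov1 - cov2) @ (i2 - i1))
    term_mean = 0.5 * dm @ (i1 + i2) @ dm
    return term_cov + term_mean

mu1, mu2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, 0.0])
cov1, cov2 = np.eye(3), np.diag([1.0, 2.0, 1.0])

B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])            # 3 -> 2 linear feature combination
print(divergence(mu1, cov1, mu2, cov2))                               # full space
print(divergence(B @ mu1, B @ cov1 @ B.T, B @ mu2, B @ cov2 @ B.T))   # reduced space
```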

  20. Mechanical Strength and Failure Characterization of Sn-Ag-Cu Intermetallic Compound Joints at the Microscale

    NASA Astrophysics Data System (ADS)

    Ladani, Leila; Razmi, Jafar

    2012-03-01

    Continuous miniaturization of microelectronic devices has led the industry to develop interconnects on the order of a few microns for advanced superhigh-density and three-dimensional integrated circuits (3D ICs). At this scale, interconnects that conventionally consist of solder material will completely transform to intermetallic compounds (IMCs) such as Cu6Sn5. IMCs are brittle, unlike conventional solder materials that are ductile in nature; therefore, IMCs do not experience large amounts of plasticity or creep before failure. IMCs have not been fully characterized, and their mechanical and thermomechanical reliability is questioned. This study presents experimental efforts to characterize such material. Sn-based microbonds are fabricated in a controlled environment to assure complete transformation of the bonds to Cu6Sn5 IMC. Microstructural analysis including scanning electron microscopy (SEM), energy-dispersive x-ray spectroscopy (EDS), and x-ray diffraction (XRD) is utilized to determine the IMC material composition and degree of copper diffusion into the bond area. Specimens are fabricated with different bond thicknesses and in different configurations for various tests. Normal strength of the bonds is measured utilizing double cantilever beam and peeling tests. Shear tests are conducted to quantify the shear strength of the material. Four-point bending tests are conducted to measure the fracture toughness and critical energy release rate. Bonds are fabricated in different sizes, and the size effect is investigated. The shear strength, normal strength, critical energy release rate, and effect of bond size on bond strength are reported.

  1. Time-space and cognition-space transformations for transportation network analysis based on multidimensional scaling and self-organizing map

    NASA Astrophysics Data System (ADS)

    Hong, Zixuan; Bian, Fuling

    2008-10-01

    Geographic space, time space and cognition space are three fundamental and interrelated spaces in geographic information systems for transportation. However, cognition space and its relationships to time space and geographic space are often neglected. This paper studies the relationships of these three spaces in urban transportation systems from a new perspective and proposes a novel MDS-SOM transformation method that takes advantage of the techniques of multidimensional scaling (MDS) and the self-organizing map (SOM). The MDS-SOM transformation framework includes three kinds of mapping: the geographic-time transformation, the cognition-time transformation and the time-cognition transformation. The transformations in our research provide a better understanding of the interactions of these three spaces, and useful knowledge is discovered to support transportation analysis and decision making.

  2. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
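
    A simplified sketch of the two transformations described above: a Box-Cox transformation of slope gradient and a scaled arctangent transformation of curvature, on synthetic values. The paper's automated kurtosis tuning of the arctangent scale is replaced here by an assumed scale choice.

```python
# Box-Cox transform of slope gradient and arctangent transform of curvature
# (synthetic terrain attributes; scale choice for arctan is an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
slope = rng.gamma(shape=1.5, scale=6.0, size=10_000)        # degrees, long-tailed
curvature = rng.standard_t(df=3, size=10_000) * 0.05        # heavy-tailed, signed

slope_t, lam = stats.boxcox(slope + 0.001)                  # small offset guards against zeros
c = np.std(curvature)                                       # arctangent scale (assumed choice)
curv_t = np.arctan(curvature / c)

print(f"slope: lambda={lam:.2f}, skew {stats.skew(slope):.2f} -> {stats.skew(slope_t):.2f}")
print(f"curvature: kurtosis {stats.kurtosis(curvature):.1f} -> {stats.kurtosis(curv_t):.1f}")
```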

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper, Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero-memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian (normal) time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto-spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.
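
    A sketch of a ZMNL-style marginal transformation: a correlated Gaussian time history is mapped through the normal CDF and then through the inverse CDF of a target (here gamma) marginal. The AR(1) series stands in for a realization generated from a specified cross-spectral density matrix, which is not reproduced here.

```python
# Zero-memory nonlinear (ZMNL) marginal transformation of a Gaussian series
# into a series with a specified (gamma) marginal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, rho = 20_000, 0.9
g = np.empty(n)                                 # AR(1) stand-in for a correlated
g[0] = rng.normal()                             # Gaussian time history
for k in range(1, n):
    g[k] = rho * g[k - 1] + np.sqrt(1 - rho**2) * rng.normal()

u = stats.norm.cdf(g)                           # uniform marginals, memory preserved
x = stats.gamma.ppf(u, a=2.0, scale=1.5)        # target non-Gaussian marginal

print("target mean/var:", x.mean(), x.var())    # ~ a*scale, a*scale^2
```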

  4. Plasminogen activator: analysis of enzyme induction by ultraviolet irradiation mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miskin, R.; Reich, E.; Dixon, K.

    1981-10-01

    Ultraviolet irradiation mapping techniques have previously been used to study the organization of eucaryotic gene classes and transcription units. We used the same method to probe some regulatory phenomena observed in the induction of plasminogen activator (PA) biosynthesis: PA synthesis in chicken embryo fibroblasts is induced by tumor-promoting phorbol esters and by retinoic acid; furthermore, PA induction by phorbol esters is synergistic with transformation, being 10- to 20-fold greater in virus-transformed cells than in normal cells. We found that the ultraviolet irradiation inactivation cross sections for PA induction by phorbol esters and by retinoate differed significantly, suggesting that these agents induce PA biosynthesis by different mechanisms. On the other hand, the ultraviolet irradiation sensitivity of phorbol ester induction in normal chicken embryo fibroblasts was the same as in transformed cells, indicating that the synergism of transformation and phorbol esters is probably not due to different pathways of PA induction.

  5. Mechanisms of Radiation Toxicity in Transformed and Non-Transformed Cells

    PubMed Central

    Panganiban, Ronald-Allan M.; Snow, Andrew L.; Day, Regina M.

    2013-01-01

    Radiation damage to biological systems is determined by the type of radiation, the total dosage of exposure, the dose rate, and the region of the body exposed. Three modes of cell death—necrosis, apoptosis, and autophagy—as well as accelerated senescence have been demonstrated to occur in vitro and in vivo in response to radiation in cancer cells as well as in normal cells. The basis for cellular selection for each mode depends on various factors including the specific cell type involved, the dose of radiation absorbed by the cell, and whether it is proliferating and/or transformed. Here we review the signaling mechanisms activated by radiation for the induction of toxicity in transformed and normal cells. Understanding the molecular mechanisms of radiation toxicity is critical for the development of radiation countermeasures as well as for the improvement of clinical radiation in cancer treatment. PMID:23912235

  6. Limits on rock strength under high confinement

    NASA Astrophysics Data System (ADS)

    Renshaw, Carl E.; Schulson, Erland M.

    2007-06-01

    Understanding of deep earthquake source mechanisms requires knowledge of failure processes active under high confinement. Under low confinement the compressive strength of rock is well known to be limited by frictional sliding along stress-concentrating flaws. Under higher confinement strength is usually assumed limited by power-law creep associated with the movement of dislocations. In a review of existing experimental data, we find that when the confinement is high enough to suppress frictional sliding, rock strength increases as a power-law function only up to a critical normalized strain rate. Within the regime where frictional sliding is suppressed and the normalized strain rate is below the critical rate, both globally distributed ductile flow and localized brittle-like failure are observed. When frictional sliding is suppressed and the normalized strain rate is above the critical rate, failure is always localized in a brittle-like manner at a stress that is independent of the degree of confinement. Within the high-confinement, high-strain rate regime, the similarity in normalized failure strengths across a variety of rock types and minerals precludes both transformational faulting and dehydration embrittlement as strength-limiting mechanisms. The magnitude of the normalized failure strength corresponding to the transition to the high-confinement, high-strain rate regime and the observed weak dependence of failure strength on strain rate within this regime are consistent with a localized Peierls-type strength-limiting mechanism. At the highest strain rates the normalized strengths approach the theoretical limit for crystalline materials. Near-theoretical strengths have previously been observed only in nano- and micro-scale regions of materials that are effectively defect-free. Results are summarized in a new deformation mechanism map revealing that when confinement and strain rate are sufficient, strengths approaching the theoretical limit can be achieved in cm-scale sized samples of rocks rich in defects. Thus, non-frictional failure processes must be considered when interpreting rock deformation data collected under high confinement and low temperature. Further, even at higher temperatures the load-bearing ability of crustal rocks under high confinement may not be limited by a frictional process under typical geologic strain rates.

  7. Intergranular fracture in UO{sub 2}: derivation of traction-separation law from atomistic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yongfeng; Millett, P.C.; Tonks, M.R.

    2013-07-01

    In this study, the intergranular fracture behavior of UO{sub 2} was studied by molecular dynamics simulations using the Basak potential. In addition, the constitutive traction-separation law was derived from atomistic data using the cohesive-zone model. In the simulations a bicrystal model with the (100) symmetric tilt Σ5 grain boundaries was utilized. Uniaxial tension along the grain boundary normal was applied to simulate Mode-I fracture. The fracture was observed to propagate along the grain boundary by micro-pore nucleation and coalescence, giving an overall intergranular fracture behavior. Phase transformations from the Fluorite to the Rutile and Scrutinyite phases were identified at the propagating crack tips. These new phases are metastable and they transformed back to the Fluorite phase at the wake of crack tips as the local stress concentration was relieved by complete cracking. Such transient behavior observed at atomistic scale was found to substantially increase the energy release rate for fracture. Insertion of Xe gas into the initial notch showed minor effect on the overall fracture behavior. (authors)

  8. Intergranular fracture in UO2: derivation of traction-separation law from atomistic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yongfeng Zhang; Paul C Millett; Michael R Tonks

    2013-10-01

    In this study, the intergranular fracture behavior of UO2 was studied by molecular dynamics simulations using the Basak potential. In addition, the constitutive traction-separation law was derived from atomistic data using the cohesive-zone model. In the simulations a bicrystal model with the (100) symmetric tilt Σ5 grain boundaries was utilized. Uniaxial tension along the grain boundary normal was applied to simulate Mode-I fracture. The fracture was observed to propagate along the grain boundary by micro-pore nucleation and coalescence, giving an overall intergranular fracture behavior. Phase transformations from the Fluorite to the Rutile and Scrutinyite phases were identified at the propagating crack tips. These new phases are metastable and they transformed back to the Fluorite phase at the wake of crack tips as the local stress concentration was relieved by complete cracking. Such transient behavior observed at atomistic scale was found to substantially increase the energy release rate for fracture. Insertion of Xe gas into the initial notch showed minor effect on the overall fracture behavior.

  9. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both result in false events caused by this noise. The wavelet transform and higher-order statistics are very useful methods in modern signal processing: the multiresolution analysis of wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise for which the conventional correlation function is of no use. Based on the theory of the wavelet transform and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction using weights calculated through higher-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and compressing correlated random noise.

  10. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach

    PubMed Central

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge in learning to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the linguistic features of the writing of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the normal-hearing students' network exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences. PMID:27920733

  11. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    PubMed

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
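
    As a small illustration of the kind of normality testing and Box-Cox normalization described, the sketch below applies the Shapiro-Wilk test and SciPy's Box-Cox transformation to synthetic, right-skewed MUAC-like data; the distribution and the 0.05 threshold are assumptions for illustration, not survey data or study settings.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # synthetic, right-skewed "MUAC-like" measurements in mm (illustrative only)
      muac = rng.gamma(shape=8.0, scale=18.0, size=600)

      w, p = stats.shapiro(muac)
      print(f"Shapiro-Wilk on raw data: W={w:.3f}, p={p:.3g}")

      if p < 0.05:                                  # departure from normality
          transformed, lam = stats.boxcox(muac)     # requires strictly positive data
          w2, p2 = stats.shapiro(transformed)
          print(f"Box-Cox lambda={lam:.2f}; after transform: W={w2:.3f}, p={p2:.3g}")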

  12. Adaptive scaling model of the main pycnocline and the associated overturning circulation

    NASA Astrophysics Data System (ADS)

    Fuckar, Neven-Stjepan

    This thesis examines a number of crucial factors and processes that control the structure of the main pycnocline and the associated overturning circulation that maintains the ocean stratification. We construct an adaptive scaling model: a semi-empirical low-order theory based on the total transformation balance that linearly superimposes parameterized transformation rate terms of various mechanisms that participate in the water-mass conversion between the warm water sphere and the cold water sphere. The depth of the main pycnocline separates the light-water domain from the dense-water domain beneath the surface, hence we introduce a new definition in an integral form that is dynamically based on the large-scale potential vorticity (i.e., vertical density gradient is selected for the kernel function of the normalized vertical integral). We exclude the abyssal pycnocline from our consideration and limit our domain of interest to the top 2 km of water column. The goal is to understand the controlling mechanisms, and analytically predict and describe a wide spectrum of ocean steady states in terms of key large-scale indices relevant for understanding the ocean's role in climate. A devised polynomial equation uses the average depth of the main pycnocline as a single unknown (the key vertical scale of the upper ocean stratification) and gives us an estimate for the northern hemisphere deep water production and export across the equator from the parts of this equation. The adaptive scaling model aims to elucidate the roles of a limited number of dominant processes that determine some key upper ocean circulation and stratification properties. Additionally, we use a general circulation model in a series of simplified single-basin ocean configurations and surface forcing fields to confirm the usefulness of our analytical model and further clarify several aspects of the upper ocean structure. An idealized numerical setup, containing all the relevant physical and dynamical properties, is key to obtaining a clear understanding, uncomplicated by the effect of the real world geometry or intricacy of realistic surface radiative and turbulent fluxes. We show that wind-driven transformation processes can be decomposed into two terms separately driven by the mid-latitude westerlies and the low-latitude easterlies. Our analytical model smoothly connects all the classical limits describing different ocean regimes in a single-basin single-hemisphere geometry. The adjective "adaptive" refers to a simple and quantitatively successful adjustment to the description of a single-basin two-hemisphere ocean, with and without a circumpolar channel under the hemispherically symmetric surface buoyancy. For example, our water-mass conversion framework, unifying wind-driven and thermohaline processes, provides us with further insight into the "Drake Passage effect without Drake Passage". The modification of different transformation pathways in the Southern Hemisphere results in the equivalent net conversion changes. The introduction of hemispheric asymmetry in the surface density can lead to significant hemispheric differences in the main pycnocline structure. This demonstrates the limitations of our analytical model based on only one key vertical scale. Also, we show a strong influence of the northern hemisphere surface density change in high latitudes on the southern hemisphere stratification and circumpolar transport.

  13. Determination of statistics for any rotation of axes of a bivariate normal elliptical distribution. [of wind vector components

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Crutcher, H. L.

    1976-01-01

    Transformation of statistics from a dimensional set to another dimensional set involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
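
    A minimal sketch of the linear transformation described, assuming the wind statistics are given as means and covariances of the east and north components; the numbers are illustrative, not Cape Kennedy values.

      import numpy as np

      def rotate_wind_stats(mean_uv, cov_uv, azimuth_deg):
          """Transform mean and covariance of (east, north) wind components to axes
          parallel and normal to a flight path whose azimuth is measured in degrees
          clockwise from north: mu' = R mu, Sigma' = R Sigma R^T."""
          a = np.deg2rad(azimuth_deg)
          # rows: along-track (parallel) and cross-track (normal) unit vectors in (east, north)
          rot = np.array([[np.sin(a),  np.cos(a)],
                          [np.cos(a), -np.sin(a)]])
          return rot @ mean_uv, rot @ cov_uv @ rot.T

      # illustrative wind statistics in m/s (not Cape Kennedy values)
      mean_uv = np.array([3.0, -1.5])                 # mean east and north components
      cov_uv = np.array([[9.0, 2.0], [2.0, 4.0]])     # covariance of (east, north)
      mu_rot, cov_rot = rotate_wind_stats(mean_uv, cov_uv, azimuth_deg=30.0)
      print(mu_rot)    # mean wind parallel / normal to the flight path
      print(cov_rot)   # variances and covariance in the rotated frame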

  14. Cross-platform normalization of microarray and RNA-seq data for machine learning applications

    PubMed Central

    Thompson, Jeffrey A.; Tan, Jie

    2016-01-01

    Large, publicly available gene expression datasets are often analyzed with the aid of machine learning algorithms. Although RNA-seq is increasingly the technology of choice, a wealth of expression data already exist in the form of microarray data. If machine learning models built from legacy data can be applied to RNA-seq data, larger, more diverse training datasets can be created and validation can be performed on newly generated data. We developed Training Distribution Matching (TDM), which transforms RNA-seq data for use with models constructed from legacy platforms. We evaluated TDM, as well as quantile normalization, nonparanormal transformation, and a simple log2 transformation, on both simulated and biological datasets of gene expression. Our evaluation included both supervised and unsupervised machine learning approaches. We found that TDM exhibited consistently strong performance across settings and that quantile normalization also performed well in many circumstances. We also provide a TDM package for the R programming language. PMID:26844019
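
    TDM itself is distributed by the authors as an R package. As a Python illustration of two of the baseline strategies evaluated (a simple log2 transformation and quantile normalization onto a reference distribution), the sketch below maps simulated RNA-seq values onto a simulated microarray reference by rank; it is not the TDM algorithm.

      import numpy as np

      def quantile_normalize_to_reference(target, reference):
          """Map each column of `target` (genes x samples) onto the empirical
          distribution of the pooled `reference` values by rank -- a plain
          quantile normalization, not the TDM algorithm."""
          ref_sorted = np.sort(reference.ravel())
          idx = np.linspace(0, ref_sorted.size - 1, target.shape[0]).round().astype(int)
          ref_quantiles = ref_sorted[idx]
          out = np.empty_like(target, dtype=float)
          for j in range(target.shape[1]):
              ranks = np.argsort(np.argsort(target[:, j]))
              out[:, j] = ref_quantiles[ranks]
          return out

      rng = np.random.default_rng(1)
      microarray_ref = rng.normal(8.0, 2.0, size=(500, 20))         # log-scale reference data
      rnaseq_counts = rng.negative_binomial(5, 0.1, size=(500, 10))
      rnaseq_log2 = np.log2(rnaseq_counts + 1.0)                    # simple log2 baseline
      rnaseq_qn = quantile_normalize_to_reference(rnaseq_log2, microarray_ref)
      print(rnaseq_log2.mean(), rnaseq_qn.mean(), microarray_ref.mean())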

  15. Ultra-Wideband Radar Transient Detection using Time-Frequency and Wavelet Transforms.

    DTIC Science & Technology

    1992-12-01

    This report investigates the detection of ultra-wideband radar transients using time-frequency methods, such as the short time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution, and time-scale methods, such as the a trous wavelet transform. The record also contains fragments of the accompanying MATLAB routines (e.g., wvd.m, a function for computing the Wigner-Ville distribution).
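
    The report's own routines are MATLAB (e.g., wvd.m). As a stand-in illustration of one of the time-frequency methods it compares, the sketch below computes a short-time Fourier transform of a synthetic noisy transient with SciPy; the signal and parameters are assumptions.

      import numpy as np
      from scipy import signal

      fs = 1000.0                                  # sampling rate in Hz (illustrative)
      t = np.arange(0, 1.0, 1.0 / fs)
      # synthetic transient: a brief damped oscillation buried in noise
      transient = np.exp(-((t - 0.4) / 0.02) ** 2) * np.sin(2 * np.pi * 150 * t)
      x = transient + 0.2 * np.random.randn(t.size)

      f, tau, Zxx = signal.stft(x, fs=fs, nperseg=64)   # short-time Fourier transform
      peak = np.unravel_index(np.abs(Zxx).argmax(), Zxx.shape)
      print(f"peak near {f[peak[0]]:.0f} Hz at t = {tau[peak[1]]:.2f} s")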

  16. Atmospheric constituent density profiles from full disk solar occultation experiments

    NASA Technical Reports Server (NTRS)

    Lumpe, J. D.; Chang, C. S.; Strickland, D. J.

    1991-01-01

    Mathematical methods are described which permit the derivation of number density profiles of atmospheric constituents from solar occultation measurements. The algorithm is first applied to measurements corresponding to an arbitrary solar-intensity distribution to calculate the normalized absorption profile. The application of the Fourier transform to the integral equation yields a precise expression for the corresponding number density, and the solution is employed with the data given in the form of Laguerre polynomials. The algorithm is employed to calculate the results for the case of a uniform distribution of solar intensity, and the results demonstrate the convergence properties of the method. The algorithm can be used to effectively model representative density profiles with constant and altitude-dependent scale heights.

  17. Atomic scale study of nanocontacts

    NASA Astrophysics Data System (ADS)

    Buldum, A.; Ciraci, S.; Batra, Inder P.; Fong, C. Y.

    1998-03-01

    Nanocontact formation and the subsequent pulling off of a sharp Ni(111) tip on a Cu(110) surface are investigated using the molecular dynamics method with an embedded atom model. As the contact is formed, the sharp tip experiences multiple jumps to contact in the attractive force range. The contact interface develops discontinuously, mainly due to disorder-order transformations which lead to the disappearance of a layer and hence abrupt changes in the normal force variation. Atom exchange occurs in the repulsive range. The connective neck is also reduced discontinuously by pulling off the tip. The novel atomic structure of the neck under tensile force is analyzed. We also present a comparative study for the contact of a Si(111) tip on a Si(111)-(2x1) surface.

  18. Rational functional representation of flap noise spectra including correction for reflection effects

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1974-01-01

    A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on Thomas' (1969) N-independent-source model extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown-flap data taken from turbofan engine tests and from large-scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.

  19. Finite Element Simulation and Experimental Verification of Internal Stress of Quenched AISI 4140 Cylinders

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua

    2017-03-01

    The study of internal stress in quenched AISI 4140 medium-carbon steel is of importance in engineering. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters based on an exponent-modified (Ex-Modified) normalized function. The results indicate that the FES based on the proposed Ex-Modified normalized function is more consistent with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos and Leblond, respectively, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution and the effect of the transformation plasticity function on the residual stress distribution are further discussed.

  20. Novel algorithm to identify and differentiate specific digital signature of breath sound in patients with diffuse parenchymal lung disease.

    PubMed

    Bhattacharyya, Parthasarathi; Mondal, Ashok; Dey, Rana; Saha, Dipanjan; Saha, Goutam

    2015-05-01

    Auscultation is an important part of the clinical examination of different lung diseases. Objective analysis of lung sounds based on underlying characteristics, and its subsequent automatic interpretation, may help clinical practice. We collected breath sounds from 8 normal subjects and 20 diffuse parenchymal lung disease (DPLD) patients using a newly developed instrument and then filtered off the heart sounds using a novel technology. The collected sounds were thereafter analysed digitally for several characteristics, such as dynamical complexity, texture information and regularity index, to find and define their unique digital signatures for differentiating normality and abnormality. For convenience of testing, these characteristic signatures of normal and DPLD lung sounds were transformed into coloured visual representations. The predictive power of these images has been validated by six independent observers, including three physicians. The proposed method gives a classification accuracy of 100% for composite features for both the normal lung sound signals and those from DPLD patients. When tested by independent observers on the visually transformed images, the positive predictive value for diagnosing normality and DPLD remained 100%. The lung sounds from the normal and DPLD subjects could be differentiated and expressed according to their digital signatures. On visual transformation to coloured images, they retain 100% predictive power. This technique may assist physicians in diagnosing DPLD from visual images bearing the digital signature of the condition. © 2015 Asian Pacific Society of Respirology.

  1. Intracellular interaction of EBV/C3d receptor (CR2) with p68, a calcium-binding protein present in normal but not in transformed B lymphocytes.

    PubMed

    Barel, M; Gauffre, A; Lyamani, F; Fiandino, A; Hermann, J; Frade, R

    1991-08-15

    To analyze direct intracellular interactions of CR2 in normal human B lymphocytes, we used polyclonal anti-Id anti-CR2 antibodies (Ab2) prepared against the highly purified CR2 molecule (gp140) as original immunogen. We previously demonstrated that this Ab2 contained specificities that mimicked extracellular and intracellular domains of CR2 and was helpful for identifying CR2-specific ligands. Indeed, some Ab2 specificities recognized human C3d and EBV, two extracellular CR2 ligands. In addition, other Ab2 specificities interacted directly, as CR2, with the intracellular p53 antioncoprotein that is expressed in transformed cells and not in normal cells. We demonstrate herein that Ab2 detected in normal B lymphocytes a 68-kDa protein, p68, that was not expressed in transformed B cells. p68 was localized in purified plasma membranes and cytosol fractions. Direct interaction of purified CR2 with purified p68 was demonstrated. Competitive studies supported that CR2 and Ab2 interacted with identical sites on p68. These interactions were calcium dependent. p68 was identified as a calcium-binding protein by its ability to be solubilized from B lymphocyte membranes by EGTA, a calcium-chelating agent, to bind specifically on phenothiazine-Sepharose in a calcium-dependent interaction, and to be recognized by specific antibodies directed against human p68, a calcium-binding protein of the annexin VI family. Thus, demonstration of different intracellular interactions of CR2 with distinct regulatory proteins, such as p53, the antioncoprotein, and p68, a calcium-binding protein, supports involvement of two regulatory pathways of signal transduction through CR2, depending on the normal or transformed state of human B lymphocytes.

  2. Regulation of plant immunity through modulation of phytoalexin synthesis

    USDA-ARS?s Scientific Manuscript database

    Soybean hairy roots transformed with the resveratrol synthase and resveratrol oxymethyl transferase genes driven by constitutive Arabidopsis actin and CsVMV promoters were characterized. Transformed hairy roots accumulated the stilbenic compounds resveratrol and pterostilbene, which are normally not...

  3. Impact of compost process conditions on organic micro pollutant degradation during full scale composting.

    PubMed

    Sadef, Yumna; Poulsen, Tjalfe Gorm; Bester, Kai

    2015-06-01

    Knowledge about the effects of oxygen concentration, nutrient availability and moisture content on removal of organic micro-pollutants during aerobic composting is at present very limited. Impact of oxygen concentration, readily available nitrogen content (NH4(+), NO3(-)), and moisture content on biological transformation of 15 key organic micro-pollutants during composting, was therefore investigated using bench-scale degradation experiments based on non-sterile compost samples, collected at full-scale composting facilities. In addition, the adequacy of bench-scale composting experiments for representing full-scale composting conditions, was investigated using micro-pollutant concentration measurements from both bench- and full-scale composting experiments. Results showed that lack of oxygen generally prevented transformation of organic micro-pollutants. Increasing readily available nitrogen content from about 50 mg N per 100 g compost to about 140 mg N per 100 g compost actually reduced micro-pollutant transformation, while changes in compost moisture content from 50% to 20% by weight, only had minor influence on micro-pollutant transformation. First-order micro-pollutant degradation rates for 13 organic micro-pollutants were calculated using data from both full- and bench-scale experiments. First-order degradation coefficients for both types of experiments were similar and ranged from 0.02 to 0.03 d(-1) on average, indicating that if a proper sampling strategy is employed, bench-scale experiments can be used to represent full-scale composting conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Rational suicide: an impoverished self-transformation.

    PubMed

    Maris, R

    1982-01-01

    The normal human condition is such that even with the best that life can offer suicide is understandable. Life is short, often painful, unpredictable, and lonely. In addition, the lives of some individuals are in effect "suicidal careers" in that the harshness of normal life is combined for them with extra suicidal catalysts. Suicide makes sense. Minimally, suicide resolves the life problem for the suicide. At the same time suicide is an impoverished self-transformation. Life, as trying and despairing as it can be, is still all we have. The suicide resolves the life problem by obliterating life itself, rather than by transforming self, history, and society. The suicide gives his or her life back inappropriately. In this sense no suicide is ever rational.

  5. Slant rectification in Russian passport OCR system using fast Hough transform

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Bezmaternykh, Pavel; Nikolaev, Dmitry; Arlazarov, Vladimir

    2017-03-01

    In this paper, we introduce a slant detection method based on the Fast Hough Transform and demonstrate its application in an industrial system for Russian passport recognition. About 1.5% of such documents appear to be slanted or italic. This reduces the recognition rate, because optical character recognition systems are normally designed to process upright fonts. Our method uses the Fast Hough Transform to analyse the vertical strokes of characters, extracted with the help of the x-derivative of a text line image. To improve the quality of the detector we also introduce field grouping rules. The resulting algorithm reaches high detection quality. Almost all errors of the considered approach occur on passports with nonstandard fonts; otherwise the slant detector works appropriately.
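
    A rough sketch of the underlying idea -- estimate the dominant angle of near-vertical strokes from the x-derivative of a text-line image -- using the standard Hough transform from scikit-image rather than the Fast Hough Transform of the paper; the binarization threshold and the toy image are illustrative assumptions.

      import numpy as np
      from skimage.transform import hough_line, hough_line_peaks

      def estimate_slant(text_line_img, max_slant_deg=30):
          """Estimate the dominant slant of near-vertical strokes (in degrees from
          vertical) in a grayscale text-line image, using the x-derivative to
          emphasise vertical strokes and a (standard) Hough transform."""
          dx = np.abs(np.diff(text_line_img.astype(float), axis=1))   # x-derivative
          strokes = dx > dx.mean() + 2 * dx.std()                     # crude binarisation
          # theta = 0 corresponds to vertical lines; search angles around it
          thetas = np.deg2rad(np.linspace(-max_slant_deg, max_slant_deg, 121))
          hspace, angles, dists = hough_line(strokes, theta=thetas)
          _, peak_angles, _ = hough_line_peaks(hspace, angles, dists, num_peaks=20)
          return np.rad2deg(np.median(peak_angles))

      # toy check: a single stroke slanted about 10 degrees from vertical
      img = np.zeros((60, 60))
      for y in range(60):
          img[y, int(20 + 0.18 * y)] = 1.0
      print(estimate_slant(img))   # magnitude near 10 (sign depends on the axis convention)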

  6. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
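
    A short sketch of the two points made above, on synthetic data: a log transformation can equalize group standard deviations when the standard deviation grows with the mean, and it brings the sampling distribution of the mean of skewed data closer to normal. The distributions and sample sizes are illustrative assumptions.

      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(42)
      # two lognormal groups whose standard deviation grows with the mean
      low = rng.lognormal(mean=1.0, sigma=1.0, size=200)
      high = rng.lognormal(mean=2.5, sigma=1.0, size=200)
      print("raw SDs:      ", low.std(ddof=1), high.std(ddof=1))                   # very different
      print("log-scale SDs:", np.log(low).std(ddof=1), np.log(high).std(ddof=1))   # comparable

      # bootstrap the sample mean on the raw and log scales
      boot_raw = np.array([rng.choice(high, high.size).mean() for _ in range(2000)])
      boot_log = np.array([np.log(rng.choice(high, high.size)).mean() for _ in range(2000)])
      print("skewness of bootstrapped means (raw, log):", skew(boot_raw), skew(boot_log))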

  7. Normal block faulting in the Airport Graben, Managua pull-apart rift, Nicaragua: gravity and magnetic constraints

    NASA Astrophysics Data System (ADS)

    Campos-Enriquez, J. O.; Zambrana Arias, X.; Keppie, D.; Ramón Márquez, V.

    2012-12-01

    Regional scale models have been proposed for the Nicaraguan depression: 1) parallel rifting of the depression (and volcanic front) due to roll back of the underlying subducted Cocos plate; 2) right-lateral strike-slip faulting parallel to the depression and locally offset by pull-apart basins; 3) right-lateral strike-slip faulting parallel to the depression and offset by left-lateral transverse or bookshelf faults. At an intermediate scale, Funk et al. (2011) interpret the depression as half-graben-type structures. The E-W Airport graben lies in the southeastern part of the Managua graben (Nicaragua), across which the active Central American volcanic arc is dextrally offset, possibly the result of a subducted transform fault where the subduction angle changes. The Managua graben lies within the late Quaternary Nicaragua depression produced by backarc rifting during roll back of the Middle American Trench. The Managua graben formed as a pull-apart rift associated with dextral bookshelf faulting during dextral shear between the forearc and arc and is the locus of two historical, large earthquakes that destroyed the city of Managua. In order to assess future earthquake risk, four E-W gravity and magnetic profiles were undertaken to determine its structure across the Airport graben, which is bounded by the Cofradia and Airport fault zones, to the east and west, respectively. These data indicated the presence of a series of normal faults bounding down-thrown and up-thrown fault blocks and a listric normal fault, the Sabana Grande Fault. The models imply that this area has been subjected to tectonic extension. These faults appear to be part of the bookshelf suite and will probably be the locus of future earthquakes, which could destroy the airport and surrounding part of Managua. Three regional SW-NE gravity profiles running from the Pacific Ocean up to the Caribbean Sea indicate a change in crustal structure: from north to south the crust thins. According to these regional crustal models the offset observed in the Volcanic Front around the Nicaragua Lake is associated with a weakness zone related to: 1) this N-S change in crustal structure, 2) the subduction angle of the Cocos plate, and 3) the distance to the Middle America Trench (i.e. the location of the mantle wedge). As mentioned above, a subducted transform fault might have given rise to this crustal discontinuity.

  8. Spanning the scales of mechanical metamaterials using time domain simulations in transformed crystals, graphene flakes and structured soils

    NASA Astrophysics Data System (ADS)

    Aznavourian, Ronald; Puvirajesinghe, Tania M.; Brûlé, Stéphane; Enoch, Stefan; Guenneau, Sébastien

    2017-11-01

    We begin with a brief historical survey of discoveries of quasi-crystals and graphene, and then introduce the concept of transformation crystallography, which consists of the application of geometric transforms to periodic structures. We consider motifs with three-fold, four-fold and six-fold symmetries according to the crystallographic restriction theorem. Furthermore, we define motifs with five-fold symmetry such as quasi-crystals generated by a cut-and-projection method from periodic structures in higher-dimensional space. We analyze elastic wave propagation in the transformed crystals and (Penrose-type) quasi-crystals with the finite difference time domain freeware SimSonic. We consider geometric transforms underpinning the design of seismic cloaks with square, circular, elliptical and peanut shapes in the context of honeycomb crystals that can be viewed as scaled-up versions of graphene. Interestingly, the use of morphing techniques leads to the design of cloaks with interpolated geometries reminiscent of Victor Vasarely’s artwork. Employing the case of transformed graphene-like (honeycomb) structures allows one to draw useful analogies between large-scale seismic metamaterials such as soils structured with columns of concrete or grout with soil and nanoscale biochemical metamaterials. We further identify similarities in designs of cloaks for elastodynamic and hydrodynamic waves and cloaks for diffusion (heat or mass) processes, as these are underpinned by geometric transforms. Experimental data extracted from field test analysis of soil structured with boreholes demonstrates the application of crystallography to large scale phononic crystals, coined as seismic metamaterials, as they might exhibit low frequency stop bands. This brings us to the outlook of mechanical metamaterials, with control of phonon emission in graphene through extreme anisotropy, attenuation of vibrations of suspension bridges via low frequency stop bands and the concept of transformed meta-cities. We conclude that these novel materials hold strong applications spanning different disciplines or across different scales from biophysics to geophysics.

  9. Highly Efficient Agrobacterium-Mediated Transformation of Wheat Via In Planta Inoculation

    NASA Astrophysics Data System (ADS)

    Risacher, Thierry; Craze, Melanie; Bowden, Sarah; Paul, Wyatt; Barsby, Tina

    This chapter details a reproducible method for the transformation of spring wheat using Agrobacterium tumefaciens via the direct inoculation of bacteria into immature seeds in planta, as described in patent WO 00/63398 (1). Transformation efficiencies from 1 to 30% have been obtained and average efficiencies of at least 5% are routinely achieved. Regenerated plants are phenotypically normal, with 30-50% of transformation events carrying introduced genes at single insertion sites, a higher rate than is typically reported for transgenic plants produced using biolistic transformation methods.

  10. Molecular cloning and nucleotide sequence of a transforming gene detected by transfection of chicken B-cell lymphoma DNA

    NASA Astrophysics Data System (ADS)

    Goubin, Gerard; Goldman, Debra S.; Luce, Judith; Neiman, Paul E.; Cooper, Geoffrey M.

    1983-03-01

    A transforming gene detected by transfection of chicken B-cell lymphoma DNA has been isolated by molecular cloning. It is homologous to a conserved family of sequences present in normal chicken and human DNAs but is not related to transforming genes of acutely transforming retroviruses. The nucleotide sequence of the cloned transforming gene suggests that it encodes a protein that is partially homologous to the amino terminus of transferrin and related proteins although only about one tenth the size of transferrin.

  11. Retrieving pace in vegetation growth using precipitation and soil moisture

    NASA Astrophysics Data System (ADS)

    Sohoulande Djebou, D. C.; Singh, V. P.

    2013-12-01

    The complexity of interactions between the biophysical components of a watershed increases the challenge of understanding the water budget. Hence, insight into the functioning of the soil-vegetation-atmosphere continuum remains crucial for science. This study targeted the Texas Gulf watershed and evaluated the behavior of vegetation covers by coupling precipitation and soil moisture patterns. Growing-season Normalized Difference Vegetation Index (NDVI) data for deciduous forest and grassland were used over a 23-year period, together with precipitation and soil moisture data. The role of time scales in vegetation dynamics analysis was appraised using both entropy rescaling and correlation analysis. The results indicate that soil moisture at 5 cm and 25 cm is potentially more efficient than precipitation for monitoring vegetation dynamics at finer time scales. Although the 5 cm and 25 cm soil moisture series are highly correlated (R2>0.64), the 5 cm series better explains the variability of vegetation growth. A logarithmic transformation of the soil moisture and precipitation data increased their correlation with NDVI for the different time scales considered. On a monthly time scale we obtained a relationship between the vegetation index and the soil moisture/precipitation pair [NDVI=a*Log(% soil moisture)+b*Log(Precipitation)+c] with R2>0.25 for each vegetation type. Further, we propose to assess vegetation green-up using a logistic regression model and transinformation entropy, with the soil moisture/precipitation pair as independent variables and vegetation growth metrics (NDVI, NDVI ratio, NDVI slope) as the dependent variable. The study is still ongoing and the results will contribute to knowledge in large-scale vegetation monitoring. Keywords: precipitation, soil moisture, vegetation growth, entropy, time scale, logarithmic transformation and correlation between soil moisture and NDVI, precipitation and NDVI. [Figure captions: the analysis combines data from scenes 7 and 8; schematic illustration of the two-dimensional transinformation entropy approach, where T(P,SM;VI) stands for the transinformation contained in the soil moisture (SM)/precipitation (P) pair that explains vegetation growth (VI).]
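
    A minimal sketch of fitting the stated monthly relationship NDVI = a*Log(% soil moisture) + b*Log(Precipitation) + c by ordinary least squares; the monthly series below are synthetic placeholders, not the Texas Gulf data.

      import numpy as np

      rng = np.random.default_rng(7)
      # synthetic monthly series standing in for the observed data
      soil_moisture = rng.uniform(5.0, 40.0, size=120)      # % volumetric
      precip = rng.uniform(10.0, 200.0, size=120)           # mm per month
      ndvi = (0.15 * np.log(soil_moisture) + 0.08 * np.log(precip) + 0.10
              + rng.normal(0.0, 0.05, size=120))

      # design matrix for NDVI = a*log(SM) + b*log(P) + c
      X = np.column_stack([np.log(soil_moisture), np.log(precip), np.ones_like(ndvi)])
      (a, b, c), *_ = np.linalg.lstsq(X, ndvi, rcond=None)
      pred = X @ np.array([a, b, c])
      r2 = 1.0 - np.sum((ndvi - pred) ** 2) / np.sum((ndvi - ndvi.mean()) ** 2)
      print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}, R^2={r2:.2f}")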

  12. On the potential of models for location and scale for genome-wide DNA methylation data

    PubMed Central

    2014-01-01

    Background With the help of epigenome-wide association studies (EWAS), increasing knowledge on the role of epigenetic mechanisms such as DNA methylation in disease processes is obtained. In addition, EWAS aid the understanding of behavioral and environmental effects on DNA methylation. In terms of statistical analysis, specific challenges arise from the characteristics of methylation data. First, methylation β-values represent proportions with skewed and heteroscedastic distributions. Thus, traditional modeling strategies assuming a normally distributed response might not be appropriate. Second, recent evidence suggests that not only mean differences but also variability in site-specific DNA methylation associates with diseases, including cancer. The purpose of this study was to compare different modeling strategies for methylation data in terms of model performance and performance of downstream hypothesis tests. Specifically, we used the generalized additive models for location, scale and shape (GAMLSS) framework to compare beta regression with Gaussian regression on raw, binary logit and arcsine square root transformed methylation data, with and without modeling a covariate effect on the scale parameter. Results Using simulated and real data from a large population-based study and an independent sample of cancer patients and healthy controls, we show that beta regression does not outperform competing strategies in terms of model performance. In addition, Gaussian models for location and scale showed an improved performance as compared to models for location only. The best performance was observed for the Gaussian model on binary logit transformed β-values, referred to as M-values. Our results further suggest that models for location and scale are specifically sensitive towards violations of the distribution assumption and towards outliers in the methylation data. Therefore, a resampling procedure is proposed as a mode of inference and shown to diminish type I error rate in practically relevant settings. We apply the proposed method in an EWAS of BMI and age and reveal strong associations of age with methylation variability that are validated in an independent sample. Conclusions Models for location and scale are promising tools for EWAS that may help to understand the influence of environmental factors and disease-related phenotypes on methylation variability and its role during disease development. PMID:24994026
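
    The binary logit (M-value) transform favored in the comparison above is straightforward to compute; a short sketch follows, with a small clipping offset to keep beta-values of exactly 0 or 1 finite (the offset is a common practical choice, not something prescribed by the paper).

      import numpy as np

      def beta_to_m(beta, eps=1e-6):
          """Binary logit (M-value) transform of methylation beta-values:
          M = log2(beta / (1 - beta)). A small eps keeps values of 0 or 1 finite."""
          beta = np.clip(np.asarray(beta, dtype=float), eps, 1 - eps)
          return np.log2(beta / (1 - beta))

      def m_to_beta(m):
          """Inverse transform back to the beta scale."""
          return 2.0 ** m / (1.0 + 2.0 ** m)

      betas = np.array([0.02, 0.25, 0.5, 0.85, 0.999])
      print(beta_to_m(betas))               # symmetric, unbounded scale
      print(m_to_beta(beta_to_m(betas)))    # recovers the (clipped) beta-values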

  13. Transformative environmental governance

    USGS Publications Warehouse

    Chaffin, Brian C.; Garmestani, Ahjond S.; Gunderson, Lance H.; Harm Benson, Melinda; Angeler, David G.; Arnold, Craig Anthony (Tony); Cosens, Barbara; Kundis Craig, Robin; Ruhl, J.B.; Allen, Craig R.

    2016-01-01

    Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to alternative, more desirable, or more functional regimes by altering the structures and processes that define the system. Transformative governance is rooted in ecological theories to explain cross-scale dynamics in complex systems, as well as social theories of change, innovation, and technological transformation. Similar to adaptive governance, transformative governance involves a broad set of governance components, but requires additional capacity to foster new social-ecological regimes including increased risk tolerance, significant systemic investment, and restructured economies and power relations. Transformative governance has the potential to actively respond to regime shifts triggered by climate change, and thus future research should focus on identifying system drivers and leading indicators associated with social-ecological thresholds.

  14. Stability of Retained Austenite in High-Al, Low-Si TRIP-Assisted Steels Processed via Continuous Galvanizing Heat Treatments

    NASA Astrophysics Data System (ADS)

    McDermid, J. R.; Zurob, H. S.; Bian, Y.

    2011-12-01

    Two galvanizable high-Al, low-Si transformation-induced plasticity (TRIP)-assisted steels were subjected to isothermal bainitic transformation (IBT) temperatures compatible with the continuous galvanizing (CGL) process, and the kinetics of the retained austenite (RA) to martensite transformation during room temperature deformation were studied as a function of the heat treatment parameters. It was determined that there was a direct relationship between the rate of strain-induced transformation and optimal mechanical properties, with more gradual transformation rates being favored. The RA to martensite transformation kinetics were successfully modeled using two methodologies: (1) the strain-based model of Olson and Cohen and (2) a simple relationship with the normalized flow stress, (σ_flow − σ_YS)/σ_YS. For the strain-based model, it was determined that the model parameters were a strong function of strain and alloy thermal processing history and a weak function of alloy chemistry. It was verified that the strain-based model in the present work agrees well with those derived by previous workers using TRIP-assisted steels of similar composition. It was further determined that the RA to martensite transformation kinetics for all alloys and heat treatments could be described using a simple model vs the normalized flow stress, indicating that the RA to martensite transformation is stress-induced rather than strain-induced for temperatures above M_s^σ.
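
    The strain-based kinetics referred to are commonly written in the Olson-Cohen form f = 1 - exp{-beta*[1 - exp(-alpha*strain)]^n}. The sketch below evaluates and fits that form to illustrative data; the parameter values are assumptions, not those of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def olson_cohen(strain, alpha, beta, n=2.0):
          """Olson-Cohen strain-induced martensite fraction:
          f = 1 - exp(-beta * (1 - exp(-alpha * strain))**n)."""
          return 1.0 - np.exp(-beta * (1.0 - np.exp(-alpha * strain)) ** n)

      # illustrative "measured" transformed-fraction data (not from the paper)
      strain = np.linspace(0.0, 0.25, 12)
      f_meas = olson_cohen(strain, alpha=8.0, beta=3.0) + np.random.normal(0.0, 0.01, strain.size)

      popt, _ = curve_fit(olson_cohen, strain, f_meas, p0=[5.0, 2.0], bounds=(0.0, 50.0))
      print("fitted alpha, beta:", popt)   # the exponent n is held at its default here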

  15. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
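
    A minimal sketch of the simple quantile-to-quantile (quantile mapping) correction used for the unconditional bias, assuming historical forecast and observation samples are available; the gamma-distributed data are illustrative, and the regression-based post-processing steps are not shown.

      import numpy as np

      def quantile_map(forecast, hist_forecasts, hist_observations):
          """Map forecast values onto the observed climatology by matching the
          empirical quantiles of historical forecasts and observations."""
          fc_sorted = np.sort(hist_forecasts)
          ob_sorted = np.sort(hist_observations)
          # empirical non-exceedance probability of each new forecast value
          p = np.searchsorted(fc_sorted, forecast, side="right") / fc_sorted.size
          return np.quantile(ob_sorted, np.clip(p, 0.0, 1.0))

      rng = np.random.default_rng(3)
      hist_fc = rng.gamma(2.0, 3.0, size=5000)     # biased precipitation forecasts (mm)
      hist_obs = rng.gamma(2.0, 2.0, size=5000)    # observations (or simulated "truth")
      new_fc = np.array([1.0, 5.0, 15.0, 30.0])
      print(quantile_map(new_fc, hist_fc, hist_obs))   # bias-corrected forecast values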

  16. Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis

    PubMed Central

    Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.

    2015-01-01

    Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are in a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of diffusing biomolecules of interest or photochemical properties of the fluorescent tag such as mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to our best knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
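
    One common way to implement the exponential photofading correction and normalization discussed above is to fit an exponential decay to an unbleached reference region, divide it out, and then rescale to the pre-bleach intensity. The sketch below does this on synthetic curves; it is one plausible scheme under stated assumptions, not the specific workflow validated in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def correct_photofading(t, frap, reference):
          """Divide a FRAP recovery curve by an exponential fade fitted to an
          unbleached reference region, then rescale so the pre-bleach value is 1.
          (One of several possible normalization schemes; order of operations matters.)"""
          fade = lambda t, f0, k: f0 * np.exp(-k * t)
          (f0, k), _ = curve_fit(fade, t, reference, p0=[reference[0], 0.01])
          corrected = frap / fade(t, f0, k)
          return corrected / corrected[0]        # assumes index 0 is the pre-bleach frame

      t = np.linspace(0.0, 100.0, 101)
      reference = 100.0 * np.exp(-0.005 * t) + np.random.normal(0.0, 0.5, t.size)
      frap = (40.0 + 50.0 * (1.0 - np.exp(-0.08 * t))) * np.exp(-0.005 * t)
      frap[0] = 100.0                            # pre-bleach intensity
      print(correct_photofading(t, frap, reference)[:5])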

  17. Edge-SIFT: discriminative binary descriptor for scalable partial-duplicate mobile search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Lu, Ke; Huang, Qingming; Gao, Wen

    2013-07-01

    As the basis of large-scale partial duplicate visual search on mobile devices, image local descriptor is expected to be discriminative, efficient, and compact. Our study shows that the popularly used histogram-based descriptors, such as scale invariant feature transform (SIFT) are not optimal for this task. This is mainly because histogram representation is relatively expensive to compute on mobile platforms and loses significant spatial clues, which are important for improving discriminative power and matching near-duplicate image patches. To address these issues, we propose to extract a novel binary local descriptor named Edge-SIFT from the binary edge maps of scale- and orientation-normalized image patches. By preserving both locations and orientations of edges and compressing the sparse binary edge maps with a boosting strategy, the final Edge-SIFT shows strong discriminative power with compact representation. Furthermore, we propose a fast similarity measurement and an indexing framework with flexible online verification. Hence, the Edge-SIFT allows an accurate and efficient image search and is ideal for computation sensitive scenarios such as a mobile image search. Experiments on a large-scale dataset manifest that the Edge-SIFT shows superior retrieval accuracy to Oriented BRIEF (ORB) and is superior to SIFT in the aspects of retrieval precision, efficiency, compactness, and transmission cost.

  18. Re-design of a physically-based catchment scale agrochemical model for the simulation of parameter spaces and flexible transformation schemes

    NASA Astrophysics Data System (ADS)

    Stegen, Ronald; Gassmann, Matthias

    2017-04-01

    The use of a broad variation of agrochemicals is essential for the modern industrialized agriculture. During the last decades, the awareness of the side effects of their use has grown and with it the requirement to reproduce, understand and predict the behaviour of these agrochemicals in the environment, in order to optimize their use and minimize the side effects. The modern modelling has made great progress in understanding and predicting these chemicals with digital methods. While the behaviour of the applied chemicals is often investigated and modelled, most studies only simulate parent chemicals, considering total annihilation of the substance. However, due to a diversity of chemical, physical and biological processes, the substances are rather transformed into new chemicals, which themselves are transformed until, at the end of the chain, the substance is completely mineralized. During this process, the fate of each transformation product is determined by its own environmental characteristics and the pathway and results of transformation can differ largely by substance and environmental influences, that can occur in different compartments of the same site. Simulating transformation products introduces additional model uncertainties. Thus, the calibration effort increases compared to simulations of the transport and degradation of the primary substance alone. The simulation of the necessary physical processes needs a lot of calculation time. Due to that, few physically-based models offer the possibility to simulate transformation products at all, mostly at the field scale. The few models available for the catchment scale are not optimized for this duty, i.e. they are only able to simulate a single parent compound and up to two transformation products. Thus, for simulations of large physico-chemical parameter spaces, the enormous calculation time of the underlying hydrological model diminishes the overall performance. In this study, the structure of the model ZIN-AGRITRA is re-designed for the transport and transformation of an unlimited amount of agrochemicals in the soil-water-plant system at catchment scale. The focus is, besides a good hydrological standard, on a flexible variation of transformation processes and the optimization for the use of large numbers of different substances. Due to the new design, a reduction of the calculation time per tested substance is acquired, allowing faster testing of parameter spaces. Additionally, the new concept allows for the consideration of different transformation processes and products in different environmental compartments. A first test of calculation time improvements and flexible transformation pathways was performed in a Mediterranean meso-scale catchment, using the insecticide Chlorpyrifos and two of its transformation products, which emerge from different transformation processes, as test substances.

  19. Placeless Organizations: Collaborating for Transformation

    ERIC Educational Resources Information Center

    Nardi, Bonnie A.

    2007-01-01

    This article defines and discusses placeless organizations as sites and generators of learning on a large scale. The emphasis is on how placeless organizations structure themselves to carry out social transformation--necessarily involving intensive learning--on a national or global scale. The argument is made that place is not a necessary…

  20. DEVELOPMENT OF THE METHOD AND U.S. NORMALIZATION DATABASE FOR LIFE CYCLE IMPACT ASSESSMENT AND SUSTAINABILITY METRICS

    EPA Science Inventory

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relati...
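
    At its core, LCIA normalization divides each category indicator result by a reference value; the toy sketch below uses made-up numbers, not values from the U.S. normalization database.

      # Toy LCIA normalization: divide each category indicator result by a
      # reference value. The numbers are illustrative, not database values.
      impact_results = {"global warming (kg CO2 eq)": 1200.0,
                        "acidification (kg SO2 eq)": 4.5,
                        "eutrophication (kg N eq)": 0.9}
      reference_values = {"global warming (kg CO2 eq)": 24000.0,
                          "acidification (kg SO2 eq)": 91.0,
                          "eutrophication (kg N eq)": 22.0}
      normalized = {k: impact_results[k] / reference_values[k] for k in impact_results}
      for category, value in normalized.items():
          print(f"{category}: {value:.4f}")   # dimensionless, comparable across categories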

  1. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
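
    The construction alluded to above -- writing a multivariate normal vector as a linear transformation of independent standard normal variables, so that the mean and covariance follow directly -- can be checked numerically; a short sketch (the Wishart part of the article is not shown):

      import numpy as np

      rng = np.random.default_rng(0)
      mu = np.array([1.0, -2.0, 0.5])
      cov = np.array([[4.0, 1.0, 0.5],
                      [1.0, 3.0, 0.2],
                      [0.5, 0.2, 2.0]])

      # X = mu + A Z with Z ~ N(0, I) and A A^T = cov (Cholesky factor)
      A = np.linalg.cholesky(cov)
      Z = rng.standard_normal((3, 200000))
      X = mu[:, None] + A @ Z

      print(np.allclose(X.mean(axis=1), mu, atol=0.02))   # sample mean close to mu
      print(np.allclose(np.cov(X), cov, atol=0.05))       # sample covariance close to A A^T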

  2. The Effect of the Multivariate Box-Cox Transformation on the Power of MANOVA.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    Most of the multivariate statistical techniques rely on the assumption of multivariate normality. The effects of non-normality on multivariate tests are assumed to be negligible when variance-covariance matrices and sample sizes are equal. Therefore, in practice, investigators do not usually attempt to remove non-normality. In this simulation…

  4. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    PubMed Central

    Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions. PMID:27277017
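
    The Anscombe transformation used above maps approximately Poisson-distributed pixel values to roughly unit variance, which is what allows a signal-dependent noise mask to be handled with Gaussian tools. The sketch below shows the forward and inverse transforms, the variance-stabilizing property, and a toy signal-dependent noise injection for a dose-reduction simulation; it is a simplified illustration, not the authors' calibrated pipeline with measured flat-field noise, detector offsets and DQE effects.

      import numpy as np

      def anscombe(x):
          """Variance-stabilizing Anscombe transform for Poisson-like data."""
          return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

      def inverse_anscombe(y):
          """Simple algebraic inverse (exactly unbiased variants also exist)."""
          return (y / 2.0) ** 2 - 3.0 / 8.0

      rng = np.random.default_rng(5)
      # variance stabilization: Poisson data with very different means all end up
      # with variance close to 1 after the transform
      for lam in (25.0, 400.0, 2500.0):
          x = rng.poisson(lam, size=100000)
          print(lam, x.var(), anscombe(x).var())

      # toy dose reduction: scale a "full dose" image and add the extra quantum
      # noise a half-dose acquisition would carry (signal-dependent, Poisson-like)
      full_dose = rng.poisson(400.0, size=(256, 256)).astype(float)
      d = 0.5
      simulated = d * full_dose + rng.normal(0.0, np.sqrt((d - d * d) * full_dose))
      # both ratios are ~1: the simulated image keeps Poisson-like statistics at the lower dose
      print(full_dose.var() / full_dose.mean(), simulated.var() / simulated.mean())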

  5. A general approach to double-moment normalization of drop size distributions

    NASA Astrophysics Data System (ADS)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least squares regression is slightly more effective than moment estimation from the normalized average DSD.
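
    A sketch of the two-moment normalization in the Testud et al. flavor, taking the third and fourth moments as references: D_m = M4/M3, N_0* is commonly written as (4^4/6)*M3^5/M4^4, and the normalized spectra N(D)/N_0* are compared as functions of D/D_m. The exponential DSDs below are synthetic.

      import numpy as np

      def moments(D, N, orders=(3, 4)):
          """Numerical moments M_i = integral of D**i * N(D) dD over the bins."""
          dD = np.diff(D)
          return [np.sum(0.5 * ((D[1:] ** i) * N[1:] + (D[:-1] ** i) * N[:-1]) * dD)
                  for i in orders]

      def double_moment_normalize(D, N):
          """Two-moment (M3, M4) normalization: returns x = D/Dm and h(x) = N/N0*."""
          m3, m4 = moments(D, N)
          dm = m4 / m3                                      # mass-weighted mean diameter
          n0_star = (4.0 ** 4 / 6.0) * m3 ** 5 / m4 ** 4    # commonly used N0* definition
          return D / dm, N / n0_star

      # synthetic exponential DSDs, N(D) = N0 * exp(-Lambda * D), D in mm
      D = np.linspace(0.05, 10.0, 400)
      for n0, lam in [(8000.0, 2.0), (3000.0, 1.2)]:
          x, h = double_moment_normalize(D, n0 * np.exp(-lam * D))
          print(np.interp(1.0, x, h))   # ~exp(-4): the different DSDs collapse onto one h(x)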

  6. ZIP8 expression in human proximal tubule cells, human urothelial cells transformed by Cd+2 and As+3 and in specimens of normal human urothelium and urothelial cancer

    PubMed Central

    2012-01-01

    Background ZIP8 functions endogenously as a Zn+2/HCO3- symporter that can also bring cadmium (Cd+2) into the cell. It has also been proposed that ZIP8 participates in Cd-induced testicular necrosis and renal disease. In this study real-time PCR, western analysis, immunostaining and fluorescent localization were used to define the expression of ZIP8 in human kidney, cultured human proximal tubule (HPT) cells, normal and malignant human urothelium and Cd+2 and arsenite (As+3) transformed urothelial cells. Results It was shown that in the renal system both the non-glycosylated and glycosylated form of ZIP8 was expressed in the proximal tubule cells with localization of ZIP8 to the cytoplasm and cell membrane; findings in line with previous studies on ZIP8. The studies in the bladder were the first to show that ZIP8 was expressed in normal urothelium and that ZIP8 could be localized to the paranuclear region. Studies in the UROtsa cell line confirmed a paranuclear localization of ZIP8, however addition of growth medium to the cells increased the expression of the protein in the UROtsa cells. In archival human samples of the normal urothelium, the expression of ZIP8 was variable in intensity whereas in urothelial cancers ZIP8 was expressed in 13 of 14 samples, with one high grade invasive urothelial cancer showing no expression. The expression of ZIP8 was similar in the Cd+2 and As+3 transformed UROtsa cell lines and their tumor transplants. Conclusion This is the first study which shows that ZIP8 is expressed in the normal urothelium and in bladder cancer. In addition the normal UROtsa cell line and its transformed counterparts show similar expression of ZIP8 compared to the normal urothelium and the urothelial cancers suggesting that the UROtsa cell line could serve as a model system to study the expression of ZIP8 in bladder disease. PMID:22550998

  7. Crystallographic texture and microstructural changes in fusion welds of recrystallized Zry-4 rolled plates

    NASA Astrophysics Data System (ADS)

    Moya Riffo, A.; Vicente Alvarez, M. A.; Santisteban, J. R.; Vizcaino, P.; Limandri, S.; Daymond, M. R.; Kerr, D.; Okasinski, J.; Almer, J.; Vogel, S. C.

    2017-05-01

    This work presents a detailed characterization of the microstructural and crystallographic texture changes observed in the transition region in a weld between two Zircaloy-4 cold rolled and recrystallized plates. The microstructural study was performed by optical microscopy under polarized light and scanning electron microscopy (SEM). Texture changes were characterized at different lengthscales: in the micrometric size, orientation imaging maps (OIM) were constructed by electron backscatter diffraction (EBSD), in the millimetre scale, high energy XRD experiments were done at the Advanced Photon Source (USA) and compared to neutron diffraction texture determinations performed in the HIPPO instrument at Los Alamos National Laboratory. In the heat affected zone (HAZ) we observed the development of Widmanstätten microstructures, typical of the α(hcp) to β(bcc) phase transformation. Associated with these changes a rotation of the c-poles is found in the HAZ and fusion zone. While the base material shows the typical texture of a cold rolled plate, with their c-poles pointing 35° apart from the normal direction of the plate in the normal-transversal line, in the HAZ, c-poles align along the transversal direction of the plate and then re-orient along different directions, all of these changes occurring within a lengthscale in the order of mm. The evolution of texture in this narrow region was captured by both OIM and XRD, and is consistent with previous measurements done by Neutron Diffraction in the HIPPO diffractometer at Los Alamos National Laboratory, USA. The microstructural and texture changes along the HAZ were interpreted as arising due to the effect of differences in the cooling rate and β grain size on the progress of the different α variants during transformation. Fast cooling rates and large β grains are associated to weak variant selection during the β->α transformation, while slow cooling rates and fine β grains result in strong variant selection.

  8. Crystallographic texture and microstructural changes in fusion welds of recrystallized Zry-4 rolled plates

    DOE PAGES

    Riffo, A. Moya; Vicente Alvarez, M. A.; Santisteban, J. R.; ...

    2017-02-08

    This study presents a detailed characterization of the microstructural and crystallographic texture changes observed in the transition region of a weld between two cold-rolled and recrystallized Zircaloy-4 plates. The microstructural study was performed by optical microscopy under polarized light and scanning electron microscopy (SEM). Texture changes were characterized at different length scales: at the micrometre scale, orientation imaging maps (OIM) were constructed by electron backscatter diffraction (EBSD); at the millimetre scale, high-energy XRD experiments were performed at the Advanced Photon Source (USA) and compared with neutron diffraction texture determinations performed on the HIPPO instrument at Los Alamos National Laboratory. In the heat affected zone (HAZ) we observed the development of Widmanstätten microstructures, typical of the α (hcp) to β (bcc) phase transformation. Associated with these changes, a rotation of the c-poles is found in the HAZ and fusion zone. While the base material shows the typical texture of a cold-rolled plate, with its c-poles pointing 35° away from the plate normal direction in the normal-transverse plane, in the HAZ the c-poles align along the transverse direction of the plate and then re-orient along different directions, all of these changes occurring within a length scale on the order of mm. The evolution of texture in this narrow region was captured by both OIM and XRD, and is consistent with previous measurements made by neutron diffraction on the HIPPO diffractometer at Los Alamos National Laboratory, USA. The microstructural and texture changes along the HAZ were interpreted as arising from the effect of differences in the cooling rate and β grain size on the progress of the different α variants during transformation. Fast cooling rates and large β grains are associated with weak variant selection during the β→α transformation, while slow cooling rates and fine β grains result in strong variant selection.

  9. Crystallographic texture and microstructural changes in fusion welds of recrystallized Zry-4 rolled plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riffo, A. Moya; Vicente Alvarez, M. A.; Santisteban, J. R.

    This study presents a detailed characterization of the microstructural and crystallographic texture changes observed in the transition region of a weld between two cold-rolled and recrystallized Zircaloy-4 plates. The microstructural study was performed by optical microscopy under polarized light and scanning electron microscopy (SEM). Texture changes were characterized at different length scales: at the micrometre scale, orientation imaging maps (OIM) were constructed by electron backscatter diffraction (EBSD); at the millimetre scale, high-energy XRD experiments were performed at the Advanced Photon Source (USA) and compared with neutron diffraction texture determinations performed on the HIPPO instrument at Los Alamos National Laboratory. In the heat affected zone (HAZ) we observed the development of Widmanstätten microstructures, typical of the α (hcp) to β (bcc) phase transformation. Associated with these changes, a rotation of the c-poles is found in the HAZ and fusion zone. While the base material shows the typical texture of a cold-rolled plate, with its c-poles pointing 35° away from the plate normal direction in the normal-transverse plane, in the HAZ the c-poles align along the transverse direction of the plate and then re-orient along different directions, all of these changes occurring within a length scale on the order of mm. The evolution of texture in this narrow region was captured by both OIM and XRD, and is consistent with previous measurements made by neutron diffraction on the HIPPO diffractometer at Los Alamos National Laboratory, USA. The microstructural and texture changes along the HAZ were interpreted as arising from the effect of differences in the cooling rate and β grain size on the progress of the different α variants during transformation. Fast cooling rates and large β grains are associated with weak variant selection during the β→α transformation, while slow cooling rates and fine β grains result in strong variant selection.

  10. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    PubMed

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during treadmill walking was analyzed for 30 healthy young subjects, 27 healthy elderly subjects and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. These results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.
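
    A minimal sketch of this kind of wavelet-based multiscale variance analysis is given below (Python, using pywt and numpy). The wavelet family ('db4'), the number of decomposition levels and the synthetic MTC series are illustrative assumptions, not the authors' settings.

      import numpy as np
      import pywt

      def multiscale_variance_exponent(mtc, wavelet="db4", levels=8):
          """Decompose an MTC series with the DWT and estimate the multiscale exponent.

          Returns the per-scale variances of the detail coefficients and the slope of
          log2(variance) versus scale number (a stand-in for the paper's beta)."""
          coeffs = pywt.wavedec(mtc, wavelet, level=levels)    # [cA_L, cD_L, ..., cD_1]
          details = coeffs[1:][::-1]                           # reorder as scales 1..L
          variances = np.array([np.var(d) for d in details])
          scales = np.arange(1, levels + 1)
          beta, _ = np.polyfit(scales, np.log2(variances), 1)  # slope of the variance progression
          return variances, beta

      # Usage with a synthetic MTC series (mean ~15 mm with cycle-to-cycle variability).
      rng = np.random.default_rng(0)
      mtc = 15.0 + 2.0 * rng.standard_normal(2048)
      variances, beta = multiscale_variance_exponent(mtc)
      print("variance at scale 5:", variances[4], "beta:", beta)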

  11. Flux transformers made of commercial high critical temperature superconducting wires.

    PubMed

    Dyvorne, H; Scola, J; Fermon, C; Jacquinot, J F; Pannetier-Lecoeur, M

    2008-02-01

    We have designed flux transformers made of commercial BiSCCO tapes closed by soldering with normal metal. The magnetic field transfer function of the flux transformer was calculated as a function of the resistance of the soldered contacts. The performance of different kinds of wires was investigated for signal delocalization and gradiometry. We also estimated the noise introduced by the resistance and showed that the flux transformer can be used efficiently for weak magnetic field detection down to 1 Hz.

  12. Numerical inverse Laplace transformation for determining the system response of linear systems in the time domain

    NASA Technical Reports Server (NTRS)

    Friedrich, R.; Drewelow, W.

    1978-01-01

    An algorithm is described that is based on the method of breaking the Laplace transform down into partial fractions, which are then inverse-transformed separately. The sum of the resulting partial time functions is the desired time function. Any problems caused by the form of the equation system are largely limited by appropriate normalization using an auxiliary parameter. The practical limits of program application are reached when the degree of the denominator of the Laplace transform is seven to eight.
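
    As a hedged illustration of the partial-fraction route described above (not the report's normalization scheme), the sketch below inverts a rational Laplace transform by computing its residues and poles with scipy.signal.residue and summing the resulting exponential terms.

      import numpy as np
      from scipy.signal import residue

      # X(s) = 1 / (s^2 + 3s + 2) = 1/(s+1) - 1/(s+2)  ->  x(t) = exp(-t) - exp(-2t)
      num = [1.0]
      den = [1.0, 3.0, 2.0]

      r, p, k = residue(num, den)   # residues, poles and direct polynomial terms

      def x_of_t(t):
          """Sum of partial-fraction terms, x(t) = sum_i r_i * exp(p_i * t), for strictly proper X(s)."""
          t = np.asarray(t, dtype=float)
          return np.real(sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)))

      t = np.linspace(0.0, 5.0, 6)
      print(np.allclose(x_of_t(t), np.exp(-t) - np.exp(-2.0 * t)))   # True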

  13. The Role of Coseismic Coulomb Stress Changes in Shaping the Hard Link Between Normal Fault Segments

    NASA Astrophysics Data System (ADS)

    Hodge, M.; Fagereng, Å.; Biggs, J.

    2018-01-01

    The mechanism and evolution of fault linkage is important in the growth and development of large faults. Here we investigate the role of coseismic stress changes in shaping the hard links between parallel normal fault segments (or faults), by comparing numerical models of the Coulomb stress change from simulated earthquakes on two en echelon fault segments to natural observations of hard-linked fault geometry. We consider three simplified linking fault geometries: (1) fault bend, (2) breached relay ramp, and (3) strike-slip transform fault. We consider scenarios where either one or both segments rupture and vary the distance between segment tips. Fault bends and breached relay ramps are favored where segments underlap or when the strike-perpendicular distance between overlapping segments is less than 20% of their total length, matching all 14 documented examples. Transform fault linkage geometries are preferred when overlapping segments are laterally offset at larger distances. Few transform faults exist in continental extensional settings, and our model suggests that propagating faults or fault segments may first link through fault bends or breached ramps before reaching sufficient overlap for a transform fault to develop. Our results suggest that Coulomb stresses arising from multisegment ruptures or repeated earthquakes are consistent with natural observations of the geometry of hard links between parallel normal fault segments.

  14. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature transform (DSIFT) descriptors are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
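
    The preprocessing pipeline described above (CLAHE enhancement followed by a one-level 2D-DWT into four subbands) can be sketched as follows; the wavelet name, CLAHE parameters and patch size are illustrative assumptions rather than the authors' settings, and the curvelet branch and CNN training are omitted.

      import cv2
      import numpy as np
      import pywt

      def preprocess_patch(patch_gray):
          """Contrast-enhance a mammogram patch with CLAHE, then split it into DWT subbands."""
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          enhanced = clahe.apply(patch_gray)                     # expects an 8-bit grayscale patch
          cA, (cH, cV, cD) = pywt.dwt2(enhanced.astype(float), "haar")
          return cA, cH, cV, cD                                  # approximation + three detail subbands

      # Usage with a synthetic 128x128 patch standing in for a mammogram crop.
      patch = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
      subbands = preprocess_patch(patch)
      print([b.shape for b in subbands])                         # four 64x64 subbands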

  15. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature transform (DSIFT) descriptors are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  16. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First of all, iris images are processed using Canny edge detection to detect the pupil edge; the center and the radius of the pupil are then found with the Hough circle transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain a fixed-dimension (normalized) iris by transforming the Cartesian format into the polar format, and a thresholding technique is used to remove the eyelid and eyelashes. The experiment is conducted with grayscale eye image data taken from the iris database of the Chinese Academy of Sciences Institute of Automation (CASIA). The CASIA iris data are reliable and widely used to study iris biometrics. The results show that a threshold level of 0.3 gives better accuracy than the others, so the present algorithm can be used for zigzag collarette segmentation and normalization with an accuracy of 98.88%.
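
    A minimal sketch of the pupil-detection and rubber-sheet steps is given below; the Hough parameters, the fixed iris-band width standing in for the collarette region and the output dimensions are assumptions for illustration, not the values used in the paper.

      import cv2
      import numpy as np

      def detect_pupil(gray):
          """Locate the pupil as a circle; HoughCircles runs a Canny edge detector internally
          (param1 is the upper Canny threshold)."""
          circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                     param1=150, param2=30, minRadius=20, maxRadius=80)
          if circles is None:
              return None
          cx, cy, r = np.round(circles[0, 0]).astype(int)
          return cx, cy, r

      def rubber_sheet(gray, cx, cy, r_pupil, band=40, out_shape=(64, 360)):
          """Daugman-style normalization: sample a fixed-width annulus outside the pupil
          onto a fixed-size polar (radius x angle) grid."""
          rows, cols = out_shape
          normalized = np.zeros(out_shape, dtype=gray.dtype)
          for j in range(cols):
              theta = 2.0 * np.pi * j / cols
              for i in range(rows):
                  radius = r_pupil + band * i / (rows - 1)
                  x = int(round(cx + radius * np.cos(theta)))
                  y = int(round(cy + radius * np.sin(theta)))
                  if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]:
                      normalized[i, j] = gray[y, x]
          return normalized

      # Usage sketch (expects a grayscale eye image, e.g. from the CASIA database):
      # gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
      # pupil = detect_pupil(gray)
      # if pupil is not None:
      #     normalized_iris = rubber_sheet(gray, *pupil)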

  17. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  18. Topographic relationships for design rainfalls over Australia

    NASA Astrophysics Data System (ADS)

    Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.

    2016-02-01

    Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.

  19. Approximation of the ruin probability using the scaled Laplace transform inversion

    PubMed Central

    Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak

    2015-01-01

    The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
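
    For a concrete, much simpler illustration of recovering a ruin probability by Laplace-transform inversion, the sketch below numerically inverts the known transform for the classical risk model with exponential claims using mpmath's fixed-Talbot routine and checks it against the closed form. It does not implement the paper's scaled (moment-type) inversion or the Trefethen-Weideman-Schmelzer and maximum entropy comparisons.

      import mpmath as mp

      lam, mu, c = 1.0, 1.0, 1.5      # claim intensity, mean claim size, premium rate (c > lam*mu)
      R = 1.0 / mu - lam / c          # adjustment coefficient for exponential claims

      def psi_exact(u):
          """Closed-form ruin probability for exponential claims: (lam*mu/c) * exp(-R*u)."""
          return (lam * mu / c) * mp.exp(-R * u)

      def psi_laplace(s):
          """Laplace transform of psi(u): (lam*mu/c) / (s + R)."""
          return (lam * mu / c) / (s + R)

      for u in (1.0, 5.0, 10.0):
          approx = mp.invertlaplace(psi_laplace, u, method="talbot")   # fixed Talbot inversion
          print(u, float(approx), float(psi_exact(u)))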

  20. Small-Scale Dayside Magnetic Reconnection Analysis via MMS

    NASA Astrophysics Data System (ADS)

    Pritchard, K. R.; Burch, J. L.; Fuselier, S. A.; Webster, J.; Genestreti, K.; Torbert, R. B.; Rager, A. C.; Phan, T.; Argall, M. R.; Le Contel, O.; Russell, C. T.; Strangeway, R. J.; Giles, B. L.

    2017-12-01

    The Magnetospheric Multiscale (MMS) mission has the primary objective of understanding the physics of the reconnection electron diffusion region (EDR), where magnetic energy is transformed into particle energy. In this poster, we present data from an EDR encounter that occurred in late December 2016 at approximately 11:00 MLT with a moderate guide field. The spacecraft were in a tetrahedral formation with an average inter-spacecraft distance of approximately 7 kilometers. During this event electron crescent-shaped distributions were observed in the electron stagnation region as is typical for asymmetric reconnection. Based on the observed ion velocity jets, the spacecraft traveled just south of the EDR. Because of the close spacecraft separation, fairly accurate computation of the Hall, electron pressure divergence, and electron inertia components of the reconnection electric field could be made. In the region of the crescent distributions good agreement was observed, with the strongest component being the normal electric field and the most significant sources being electron pressure divergence and the Hall electric field. While the strongest currents were in the out-of-plane direction, the dissipation was strongest in the normal direction because of the larger magnitude of the normal electric field component. These results are discussed in light of recent 3D PIC simulations performed by other groups.

  1. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and then each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated by plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets have shown that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time elapses.

  2. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is a new concept, and there has been no research work in related fields either domestically or abroad. The mode of production must be transformed from the existing Industry 2.0, or partial Industry 3.0, level: from "human analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great many tasks need to be determined in terms of management and technology, such as workshop structure evolution, development of intelligent equipment, and changes in the business model; together these amount to a reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes its working efficiency, which is significant for the next step of the workshop transformation.

  3. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between the deaf and hard-of-hearing community and the hearing population. Sign languages are developed so that deaf and hard-of-hearing people can convey their messages by generating distinct sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time taken by each phase and the number of features extracted for 26 ISL gestures.
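
    A hedged sketch of the SIFT feature-extraction phases (keypoint detection plus descriptor computation) is shown below using OpenCV; loading of real gesture images and the subsequent matching/recognition stages are omitted.

      import cv2
      import numpy as np

      def extract_sift_features(gray):
          """Run the SIFT phases (scale-space extrema detection, keypoint refinement,
          orientation assignment, descriptor computation); returns keypoints and 128-D descriptors."""
          sift = cv2.SIFT_create()
          keypoints, descriptors = sift.detectAndCompute(gray, None)
          return keypoints, descriptors

      # Usage with a synthetic frame standing in for a hand-gesture image.
      gesture = np.random.default_rng(1).integers(0, 256, (240, 320)).astype(np.uint8)
      kps, desc = extract_sift_features(gesture)
      print(len(kps), None if desc is None else desc.shape)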

  4. IAU resolutions on reference systems and time scales in practice

    NASA Astrophysics Data System (ADS)

    Brumberg, V. A.; Groten, E.

    2001-03-01

    To be consistent with the IAU/IUGG (1991) resolutions, ICRS and ITRS should be treated as four-dimensional reference systems with TCB and TCG time scales, respectively, interrelated by a four-dimensional general relativistic transformation. This two-way transformation is given in a form adapted for practical application. The use of TB and TT instead of TCB and TCG, respectively, involves scaling factors that complicate the use of this transformation in practice. The new IAU B1 (2000) resolution is commented on, bearing in mind some points of possible confusion in its practical application. The problem of the relationship of the theory of reference systems with the parameters of common relevance to astronomy, geodesy and geodynamics is briefly outlined.

  5. Digital signal processing techniques for pitch shifting and time scaling of audio signals

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2016-09-01

    In this paper, we present the techniques used for modifying the spectral content (pitch shifting) and for changing the time duration (time scaling) of an audio signal. A short introduction gives a necessary background for understanding the discussed issues and contains explanations of the terms used in the paper. In subsequent sections we present three different techniques appropriate both for pitch shifting and for time scaling. These techniques use three different time-frequency representations of a signal, namely short-time Fourier transform (STFT), continuous wavelet transform (CWT) and constant-Q transform (CQT). The results of simulation studies devoted to comparison of the properties of these methods are presented and discussed in the paper.
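
    As a brief, hedged illustration of the two operations compared in the paper, the sketch below uses librosa's STFT-based (phase-vocoder) implementations of time scaling and pitch shifting; the test tone, stretch rate and shift amount are arbitrary, and the CWT- and CQT-based variants discussed in the paper are not shown.

      import numpy as np
      import librosa

      sr = 22050
      t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
      tone = 0.5 * np.sin(2.0 * np.pi * 440.0 * t).astype(np.float32)    # 2 s, 440 Hz test tone

      # Time scaling: change duration without changing pitch (phase vocoder on the STFT).
      stretched = librosa.effects.time_stretch(tone, rate=0.8)           # ~2.5 s, still 440 Hz

      # Pitch shifting: change pitch without changing duration.
      shifted = librosa.effects.pitch_shift(tone, sr=sr, n_steps=4)      # up 4 semitones, still 2 s

      print(len(tone) / sr, len(stretched) / sr, len(shifted) / sr)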

  6. Reaffirming normal: the high risk of pathologizing healthy adults when interpreting the MMPI-2-RF.

    PubMed

    Odland, Anthony P; Lammy, Andrew B; Perle, Jonathan G; Martin, Phillip K; Grote, Christopher L

    2015-01-01

    Monte Carlo simulations were utilized to determine the proportion of the normal population expected to have scale elevations on the MMPI-2-RF when multiple scores are interpreted. Results showed that when all 40 MMPI-2-RF scales are simultaneously considered, approximately 70% of normal adults are likely to have at least one scale elevation at or above 65 T, and as many as 20% will have five or more elevated scales. When the Restructured Clinical (RC) Scales are under consideration, 34% of normal adults have at least one elevated score. Interpretation of the Specific Problem Scales and Personality Psychopathology Five Scales--Revised also yielded higher than expected rates of significant scores, with as many as one in four normal adults possibly being miscategorized as having features of a personality disorder by the latter scales. These findings are consistent with the growing literature on rates of apparently abnormal scores in the normal population due to multiple score interpretation. Findings are discussed in relation to clinical assessment, as well as in response to recent work suggesting that the MMPI-2-RF's multiscale composition does not contribute to high rates of elevated scores.
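
    The flavor of the Monte Carlo argument can be reproduced with a short simulation: draw correlated T scores (mean 50, SD 10) for 40 scales and count how many simulated normal adults have at least one score at or above 65 T. The common inter-scale correlation used below is an arbitrary assumption, not the empirical MMPI-2-RF scale correlation matrix the authors worked from.

      import numpy as np

      def elevation_rates(n_subjects=100_000, n_scales=40, rho=0.3, cutoff=65.0, seed=0):
          """Simulate T scores for a normal population and tabulate multi-scale elevation rates."""
          rng = np.random.default_rng(seed)
          # Compound-symmetry covariance: variance 100 (SD 10), common correlation rho.
          cov = 100.0 * (rho * np.ones((n_scales, n_scales)) + (1.0 - rho) * np.eye(n_scales))
          scores = rng.multivariate_normal(np.full(n_scales, 50.0), cov, size=n_subjects)
          n_elevated = (scores >= cutoff).sum(axis=1)
          return {"at_least_one": float(np.mean(n_elevated >= 1)),
                  "five_or_more": float(np.mean(n_elevated >= 5))}

      print(elevation_rates())   # proportions of simulated normal adults with elevated scales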

  7. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
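
    Two of the candidate models can be sketched with statsmodels as below: OLS on square-root-transformed cost (retransformed with the normal-theory identity E[Y] = m^2 + sigma^2) and a Gamma GLM with log link on untransformed cost, compared by RMSE and mean absolute prediction error on simulated data. The simulated covariates and cost-generating process are assumptions for illustration; the square-root-link Gamma model and the VA data are not reproduced here.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.binomial(1, 0.3, size=(n, 3)).astype(float)           # stand-ins for MH/SA category flags
      X = sm.add_constant(x)
      mean_cost = np.exp(7.0 + X[:, 1] + 0.5 * X[:, 2] + 0.25 * X[:, 3])
      cost = rng.gamma(shape=2.0, scale=mean_cost / 2.0)            # skewed, strictly positive cost

      # Square-root Normal model: OLS on sqrt(cost), retransformed as E[Y] = m^2 + sigma^2.
      sqrt_fit = sm.OLS(np.sqrt(cost), X).fit()
      pred_sqrt = sqrt_fit.fittedvalues ** 2 + sqrt_fit.scale       # .scale is the residual variance

      # Gamma GLM with log link on untransformed cost.
      gamma_fit = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
      pred_gamma = gamma_fit.fittedvalues

      for name, pred in [("sqrt-normal", pred_sqrt), ("gamma-log", pred_gamma)]:
          rmse = np.sqrt(np.mean((cost - pred) ** 2))
          mape = np.mean(np.abs(cost - pred))                       # mean absolute prediction error
          print(name, round(rmse, 1), round(mape, 1))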

  8. Defining surfaces for skewed, highly variable data

    USGS Publications Warehouse

    Helsel, D.R.; Ryker, S.J.

    2002-01-01

    Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.

  9. A DNA methylation map of human cancer at single base-pair resolution

    PubMed Central

    Vidal, E; Sayols, S; Moran, S; Guillaumet-Adkins, A; Schroeder, M P; Royo, R; Orozco, M; Gut, M; Gut, I; Lopez-Bigas, N; Heyn, H; Esteller, M

    2017-01-01

    Although single base-pair resolution DNA methylation landscapes for embryonic and different somatic cell types provided important insights into epigenetic dynamics and cell-type specificity, such comprehensive profiling is incomplete across human cancer types. This prompted us to perform genome-wide DNA methylation profiling of 22 samples derived from normal tissues and associated neoplasms, including primary tumors and cancer cell lines. Unlike their invariant normal counterparts, cancer samples exhibited highly variable CpG methylation levels in a large proportion of the genome, involving progressive changes during tumor evolution. The whole-genome sequencing results from selected samples were replicated in a large cohort of 1112 primary tumors of various cancer types using genome-scale DNA methylation analysis. Specifically, we determined DNA hypermethylation of promoters and enhancers regulating tumor-suppressor genes, with potential cancer-driving effects. DNA hypermethylation events showed evidence of positive selection, mutual exclusivity and tissue specificity, suggesting their active participation in neoplastic transformation. Our data highlight the extensive changes in DNA methylation that occur in cancer onset, progression and dissemination. PMID:28581523

  10. Mellin Transform-Based Correction Method for Linear Scale Inconsistency of Intrusion Events Identification in OFPS

    NASA Astrophysics Data System (ADS)

    Wang, Baocheng; Qu, Dandan; Tian, Qing; Pang, Liping

    2018-05-01

    To address the inconsistent linear scale of intrusion signals in the optical fiber pre-warning system (OFPS), this paper presents a scale-correction method. Firstly, the intrusion signals are intercepted, and an aggregate of segments of equal length is obtained. Then, the Mellin transform (MT) is applied to convert them to the same scale. The spectral characteristics are obtained by the Fourier transform. Finally, we adopt a back-propagation (BP) neural network, which takes the spectral characteristics as input, to identify intrusion types. We carried out field experiments and collected optical fiber intrusion signals containing picking, shoveling, and running signals. The experimental results show that the proposed algorithm can effectively improve the recognition accuracy of the intrusion signals.
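
    One standard way to realize a Mellin-type scale correction numerically is to resample each intercepted segment onto a logarithmic time axis and take the Fourier magnitude, which is largely insensitive to a linear time scaling of the input; the sketch below demonstrates that property on a synthetic burst. It is a generic Fourier-Mellin sketch under these assumptions, not the authors' OFPS processing chain or their BP-network classifier.

      import numpy as np

      def log_axis_magnitude(x, t, n_log=512):
          """Resample x(t) onto an exponentially spaced time grid (uniform in log t)
          and return the FFT magnitude, an (approximately) scale-invariant feature."""
          u = np.linspace(np.log(t[1]), np.log(t[-1]), n_log)
          x_log = np.interp(np.exp(u), t, x)
          return np.abs(np.fft.rfft(x_log))

      t = np.linspace(1e-3, 1.0, 4000)
      burst = np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 40.0 * t)                    # reference event
      burst_fast = np.exp(-5.0 * 1.7 * t) * np.sin(2.0 * np.pi * 40.0 * 1.7 * t)   # same event, 1.7x faster

      m1 = log_axis_magnitude(burst, t)
      m2 = log_axis_magnitude(burst_fast, t)
      print(round(float(np.corrcoef(m1, m2)[0, 1]), 3))   # high correlation despite the time-scale change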

  11. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, the rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in the supplementary materials.
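
    A compact sketch of one of the robust alternatives discussed (rank-based inverse normal transformation of both variables followed by the usual Fisher z' interval) is given below; the rank offset of 0.5 is one common convention and an assumption here, and the paper's supplementary R code is not reproduced.

      import numpy as np
      from scipy import stats

      def rin(x):
          """Rank-based inverse normal transformation: map ranks to standard normal quantiles."""
          ranks = stats.rankdata(x)
          return stats.norm.ppf((ranks - 0.5) / len(x))

      def rin_fisher_ci(x, y, alpha=0.05):
          """Pearson correlation of RIN-transformed data with a Fisher z' confidence interval."""
          xr, yr = rin(x), rin(y)
          r = np.corrcoef(xr, yr)[0, 1]
          z = np.arctanh(r)
          half = stats.norm.ppf(1.0 - alpha / 2.0) / np.sqrt(len(x) - 3)
          return r, np.tanh(z - half), np.tanh(z + half)

      # Usage on skewed, correlated data (lognormal marginals).
      rng = np.random.default_rng(0)
      latent = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=200)
      x, y = np.exp(latent[:, 0]), np.exp(latent[:, 1])
      print(rin_fisher_ci(x, y))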

  12. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  13. Transformative environmental governance

    EPA Science Inventory

    Transformative governance is an approach to environmental governance that has the capacity to respond to, manage, and trigger regime shifts in coupled social-ecological systems (SESs) at multiple scales. The goal of transformative governance is to actively shift degraded SESs to ...

  14. Structural transformation in monolayer materials: a 2D to 1D transformation.

    PubMed

    Momeni, Kasra; Attariani, Hamed; LeSar, Richard A

    2016-07-20

    Reducing the dimensions of materials to atomic scales results in a large portion of atoms being at or near the surface, with lower bond order and thus higher energy. At such scales, reduction of the surface energy and surface stresses can be the driving force for the formation of new low-dimensional nanostructures, and may be exhibited through surface relaxation and/or surface reconstruction, which can be utilized for tailoring the properties and phase transformation of nanomaterials without applying any external load. Here we used atomistic simulations and revealed an intrinsic structural transformation in monolayer materials that lowers their dimension from 2D nanosheets to 1D nanostructures to reduce their surface and elastic energies. Experimental evidence of such transformation has also been revealed for one of the predicted nanostructures. Such transformation plays an important role in bi-/multi-layer 2D materials.

  15. Loss of Canonical Smad4 Signaling Promotes KRAS Driven Malignant Transformation of Human Pancreatic Duct Epithelial Cells and Metastasis

    PubMed Central

    Leung, Lisa; Radulovich, Nikolina; Zhu, Chang-Qi; Wang, Dennis; To, Christine; Ibrahimov, Emin; Tsao, Ming-Sound

    2013-01-01

    Pancreatic ductal adenocarcinoma (PDAC) is the fourth most common cause of cancer death in North America. Activating KRAS mutations and Smad4 loss occur in approximately 90% and 55% of PDAC, respectively. While their roles in the early stages of PDAC development have been confirmed in genetically modified mouse models, their roles in the multistep malignant transformation of human pancreatic duct cells have not been directly demonstrated. Here, we report that Smad4 represents a barrier in KRAS-mediated malignant transformation of the near normal immortalized human pancreatic duct epithelial (HPDE) cell line model. Marked Smad4 downregulation by shRNA in KRAS G12V expressing HPDE cells failed to cause tumorigenic transformation. However, KRAS-mediated malignant transformation occurred in a new HPDE-TGF-β resistant (TβR) cell line that completely lacks Smad4 protein expression and is resistant to the mito-inhibitory activity of TGF-β. This transformation resulted in tumor formation and development of metastatic phenotype when the cells were implanted orthotopically into the mouse pancreas. Smad4 restoration re-established TGF-β sensitivity, markedly increased tumor latency by promoting apoptosis, and decreased metastatic potential. These results directly establish the critical combination of the KRAS oncogene and complete Smad4 inactivation in the multi-stage malignant transformation and metastatic progression of normal human HPDE cells. PMID:24386371

  16. In vitro analysis of multistage carcinogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nettesheim, P.; Fitzgerald, D.J.; Kitamura, H.

    1987-11-01

    Several key events in the multistep process of neoplastic transformation of rat tracheal epithelium (RTE) are described. Whether tracheal epithelium is exposed in vivo to carcinogenic agents or whether primary tracheal epithelial cells are exposed in vitro to carcinogens, initiated stem cells can be detected soon after the exposure by their ability to grow under selective conditions in culture. These initiated stem cells differ fundamentally from normal stem cells in their response to factors normally constraining proliferation and self-renewal. Thus, disruption of inhibitory control mechanisms of stem cell replication appears to be the first event in RTE cell transformation. While the probability of self-renewal (PSR) is clearly increased in initiated stem cells, most of the descendants derived from such stem cells differentiate and become terminal and do not express transformed characteristics. Progression from the first to the second stage of RTE cell transformation, the stage of the immortal growth variant (IGV), is characterized by loss of responsiveness to the growth-restraining effects of retinoic acid. In the third stage of neoplastic transformation, the stage during which neoplastic growth variants (NGV) appear, a growth factor receptor gene is inappropriately expressed in some of the transformants. Thus, it appears that loss of growth-restraining mechanisms may be an early event, and activation of a growth stimulatory mechanism a late event, in neoplastic transformation of RTE cells.

  17. STAT3-regulated exosomal miR-21 promotes angiogenesis and is involved in neoplastic processes of transformed human bronchial epithelial cells.

    PubMed

    Liu, Yi; Luo, Fei; Wang, Bairu; Li, Huiqiao; Xu, Yuan; Liu, Xinlu; Shi, Le; Lu, Xiaolin; Xu, Wenchao; Lu, Lu; Qin, Yu; Xiang, Quanyong; Liu, Qizhan

    2016-01-01

    Although microRNA (miRNA) enclosed in exosomes can mediate intercellular communication, the roles of exosomal miRNA and angiogenesis in lung cancer remain unclear. We investigated functions of STAT3-regulated exosomal miR-21 derived from cigarette smoke extract (CSE)-transformed human bronchial epithelial (HBE) cells in the angiogenesis of CSE-induced carcinogenesis. miR-21 levels in serum were higher in smokers than those in non-smokers. The medium from transformed HBE cells promoted miR-21 levels in normal HBE cells and angiogenesis of human umbilical vein endothelial cells (HUVEC). Transformed cells transferred miR-21 into normal HBE cells via exosomes. Knockdown of STAT3 reduced miR-21 levels in exosomes derived from transformed HBE cells, which blocked the angiogenesis. Exosomes derived from transformed HBE cells elevated levels of vascular endothelial growth factor (VEGF) in HBE cells and thereby promoted angiogenesis in HUVEC cells. Inhibition of exosomal miR-21, however, decreased VEGF levels in recipient cells, which blocked exosome-induced angiogenesis. Thus, miR-21 in exosomes leads to STAT3 activation, which increases VEGF levels in recipient cells, a process involved in angiogenesis and malignant transformation of HBE cells. These results, demonstrating the function of exosomal miR-21 from transformed HBE cells, provide a new perspective for intervention strategies to prevent carcinogenesis of lung cancer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Enhanced malignant transformation is accompanied by increased survival recovery after ionizing radiation in Chinese hamster embryo fibroblasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boothman, D.A.

    Transformed Chinese hamster embryo fibroblasts (CHEF), which gradually increase in tumor-forming ability in nude mice, were isolated from normal diploid CHEF/18 cells. Transformed CHEF cells (i.e., T30-4 > 21-2M3 > 21-2 > normal CHEF/18) showed gradual increases in potentially lethal damage (PLD) survival recovery. β-Lapachone and camptothecin, modulators of topoisomerase I (Topo I) activity, not only prevented survival recovery in normal as well as in tumor cells, but enhanced unscheduled DNA synthesis. These seemingly conflicting results are due to the fact that Topo I activity can be modulated by inhibitors to convert single-stranded DNA lesions into double-stranded breaks. Increases in unscheduled DNA synthesis may result from a continual supply of free ends, on which DNA repair processes may act. Altering Topo I activity with modulators appears to increase X-ray lethality via a DNA lesion modification suicide pathway. Cells down-regulate Topo I immediately after ionizing radiation to prevent Topo I-mediated lesion modification and to enhance survival recovery. 16 refs., 3 figs., 1 tab.

  19. A gene involved in control of human cellular senescence on human chromosome 1q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensler, P.J.; Pereira-Smith, O.M.; Annab, L.A.

    1994-04-01

    Normal cells in culture exhibit limited division potential and have been used as a model for cellular senescence. In contrast, tumor-derived or carcinogen- or virus-transformed cells are capable of indefinite division. Fusion of normal human diploid fibroblasts with immortal human cells yielded hybrids having limited life spans, indicating that cellular senescence was dominant. Fusions of various immortal human cell lines with each other led to the identification of four complementation groups for indefinite division. The purpose of this study was to determine whether human chromosome 1 could complement the recessive immortal defect of human cell lines assigned to one of the four complementation groups. Using microcell fusion, the authors introduced a single normal human chromosome 1 into immortal human cell lines representing the complementation groups and determined that it caused loss of proliferative potential of an osteosarcoma-derived cell line (TE85), a cytomegalovirus-transformed lung fibroblast cell line (CMV-Mj-HEL-1), and a Ki-ras⁺-transformed derivative of TE85 (143B TK⁻).

  20. Identification of Novel Prognostic Genetic Markers in Prostate Cancer

    DTIC Science & Technology

    2000-02-01

    ...alterations in two normal- and three malignant-derived prostate epithelial cell lines immortalized with the E6 and E7 transforming genes of human papilloma virus (HPV) 16. These studies... The cell lines demonstrated several numerical and structural chromosomal alterations...

  1. Model-Based Clustering and Data Transformations for Gene Expression Data

    DTIC Science & Technology

    2001-04-30

    ...transformation parameters, e.g. Andrews, Gnanadesikan, and Warner (1973). Aitchison tests: Aitchison (1986) tested three aspects of the data for... The parameter of the Box-Cox transformation in Equation (5) is estimated by maximum likelihood using the observations (Andrews, Gnanadesikan, and Warner 1973)... Cited works include Aitchison (1986), Compositional Data, Chapman and Hall; and Andrews, D. F., R. Gnanadesikan, and J. L. Warner (1973), Methods for assessing multivariate normality, in P. R...

  2. The Impact of Ethnicity-Dependent Differences in Breast Epithelial Hierarchy on Tumor Incidence and Characteristics

    DTIC Science & Technology

    2016-10-01

    ...TNBC) is significantly higher in African American than in Caucasian women, suggesting that the biology of normal breast epithelial cells between these two... We have generated immortalized cell lines from healthy breast tissues of African American and Caucasian women and transformed these cells with... progenitor phenotype. Transformed cells are being characterized for signal transduction pathway activation. Transformed cells from African American women...

  3. Normal reference values for bladder wall thickness on CT in a healthy population.

    PubMed

    Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T

    2018-02-01

    To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, a parsimonious model was created, and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviations above the expected value provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R² of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R² = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.
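
    The reporting of an upper limit of normal on the log scale can be sketched as a simple log-log regression of wall thickness on bladder volume, with the limit taken 1.96 residual standard deviations above the fitted line; the simulated volumes and thicknesses below are placeholders, not the study's data.

      import numpy as np

      rng = np.random.default_rng(0)
      volume_ml = rng.uniform(50.0, 500.0, size=400)                         # simulated bladder volumes
      thickness_mm = np.exp(1.8 - 0.35 * np.log(volume_ml)
                            + rng.normal(0.0, 0.15, size=400))               # simulated wall thickness

      # Parsimonious model: log(thickness) regressed on log(volume).
      X = np.column_stack([np.ones_like(volume_ml), np.log(volume_ml)])
      coef, *_ = np.linalg.lstsq(X, np.log(thickness_mm), rcond=None)
      resid_sd = (np.log(thickness_mm) - X @ coef).std(ddof=2)

      def upper_limit_of_normal(vol_ml):
          """Expected thickness at a given volume and the +1.96 SD upper limit, back on the mm scale."""
          log_fit = coef[0] + coef[1] * np.log(vol_ml)
          return np.exp(log_fit), np.exp(log_fit + 1.96 * resid_sd)

      print(upper_limit_of_normal(200.0))   # (expected thickness, upper limit of normal) at 200 mL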

  4. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis

    NASA Astrophysics Data System (ADS)

    Evans, Alan C.; Dai, Weiqian; Collins, D. Louis; Neelin, Peter; Marrett, Sean

    1991-06-01

    We describe the implementation, experience and preliminary results obtained with a 3-D computerized brain atlas for topographical and functional analysis of brain sub-regions. A volume-of-interest (VOI) atlas was produced by manual contouring on 64 adjacent 2 mm-thick MRI slices to yield 60 brain structures in each hemisphere which could be adjusted, originally by global affine transformation or local interactive adjustments, to match individual MRI datasets. We have now added a non-linear deformation (warp) capability (Bookstein, 1989) into the procedure for fitting the atlas to the brain data. Specific target points are identified in both atlas and MRI spaces which define a continuous 3-D warp transformation that maps the atlas on to the individual brain image. The procedure was used to fit MRI brain image volumes from 16 young normal volunteers. Regional volume and positional variability were determined, the latter in such a way as to assess the extent to which previous linear models of brain anatomical variability fail to account for the true variation among normal individuals. Using a linear model for atlas deformation yielded 3-D fits of the MRI data which, when pooled across subjects and brain regions, left a residual mis-match of 6 - 7 mm as compared to the non-linear model. The results indicate a substantial component of morphometric variability is not accounted for by linear scaling. This has profound implications for applications which employ stereotactic coordinate systems which map individual brains into a common reference frame: quantitative neuroradiology, stereotactic neurosurgery and cognitive mapping of normal brain function with PET. In the latter case, the combination of a non-linear deformation algorithm would allow for accurate measurement of individual anatomic variations and the inclusion of such variations in inter-subject averaging methodologies used for cognitive mapping with PET.
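
    A minimal sketch of a landmark-driven thin-plate-spline warp in the spirit of Bookstein (1989) is shown below with scipy; the handful of 3-D point pairs and the affine "individual anatomy" are made up for illustration, and the full VOI-atlas fitting procedure is considerably more involved.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Corresponding target points identified in atlas space and in an individual's MRI space.
      atlas_pts = np.array([[0, 0, 0], [80, 0, 0], [0, 100, 0], [0, 0, 60],
                            [80, 100, 0], [80, 0, 60], [0, 100, 60], [40, 50, 30]], dtype=float)
      mri_pts = atlas_pts * 1.05 + np.array([2.0, -3.0, 1.5])        # assumed individual anatomy

      # Thin-plate-spline mapping from atlas coordinates to MRI coordinates
      # (one interpolator drives all three output coordinates at once).
      warp = RBFInterpolator(atlas_pts, mri_pts, kernel="thin_plate_spline")

      # Warp an arbitrary set of atlas VOI boundary points into the individual's space.
      voi_boundary = np.array([[10.0, 20.0, 30.0], [60.0, 80.0, 10.0]])
      print(warp(voi_boundary))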

  5. A pulsed electron gun for the Plane Wave Transformer Linac

    NASA Astrophysics Data System (ADS)

    Mahadevan, S.; Gandhi, M. L.; Nandedkar, R. V.

    2003-01-01

    A pulsed diode electron gun delivering 500 mA of current at 40 kV is described. The gun geometry is optimized using the Electron Trajectory Program EGUN at higher scaling factors by choosing the closest converging starting surface. The effect of an annular gap between the cathode and focusing electrode on beam behaviour is compensated by using a suitable focusing electrode. The estimated perveance is 0.065 μperv and the normalized emittance is within 5 π mm mrad. The variation in current density at the cathode has been limited to within 10% across the face of the cathode. Salient features of the pulsed power supply and an insight into its interconnection with the gun are presented. The current measured at the Faraday cup is in agreement with the designed perveance.

  6. Current Saturation Avoidance with Real-Time Control using DPCS

    NASA Astrophysics Data System (ADS)

    Ferrara, M.; Hutchinson, I.; Wolfe, S.; Stillerman, J.; Fredian, T.

    2008-11-01

    Tokamak ohmic-transformer and equilibrium-field coils need to be able to operate near their maximum current capabilities. However, if they reach their upper limit during high-performance discharges or in the presence of a strong off-normal event, shape control is compromised, and instability, or even plasma disruption, can result. On Alcator C-Mod we designed and tested an anti-saturation routine which detects the impending saturation of OH and EF currents and interpolates to a neighboring safe equilibrium in real time. The routine was implemented with a multi-processor, multi-time-scale control scheme, which is based on a master process and multiple asynchronous slave processes. The scheme is general and can be used for any computationally intensive algorithm. USDoE award DE-FC02-99ER545512.

  7. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    PubMed

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(-θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances. © 2011 American Institute of Physics
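
    For orientation, the squeezing analogy invoked in this abstract can be summarized by two standard identities for the dilation/complex-scaling operator, written here in our own notation as a hedged sketch rather than the paper's derivation:

      % Action of the operator on a wavefunction (using \hat{x}\hat{p} = -i\hbar\, x\,\partial_x):
      e^{-\theta\,\hat{x}\hat{p}/\hbar}\,\psi(x) \;=\; \psi\!\left(e^{i\theta}x\right).
      % For a real dilation \psi_\lambda(x) = \sqrt{\lambda}\,\psi(\lambda x), the Wigner function
      % is rescaled area-preservingly, like a squeeze of phase space:
      W_{\psi_\lambda}(x,p) \;=\; W_{\psi}\!\left(\lambda x,\; p/\lambda\right).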

  8. Retinal identification based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform.

    PubMed

    Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming

    2013-07-18

    Retinal identification based on retinal vasculatures in the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance for scale and rotation, has been introduced to retinal based identification. However, some shortcomings like the difficulty of feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smooth method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  9. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    PubMed Central

    Meng, Xianjing; Yin, Yilong; Yang, Gongping; Xi, Xiaoming

    2013-01-01

    Retinal identification based on retinal vasculatures in the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high security facilities. Recently, there has been much interest in retina identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance for scale and rotation, has been introduced to retinal based identification. However, some shortcomings like the difficulty of feature extraction and mismatching exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smooth method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes. PMID:23873409

  10. Ecohydrological consequences of vegetation interactions within the critical zone in the tropical Andes: multi-scale assessment of vegetation change consequences

    NASA Astrophysics Data System (ADS)

    Villegas, J. C.; Salazar, J. F.; Arias, P. A.; León, J. D.

    2017-12-01

    Land cover transformation is currently one of the most important challenges in tropical South America. These transformations occur both because of climate-related ecological perturbations and in response to ongoing socio-economic processes. A fundamental difference between these two drivers is the spatial and temporal scale at which they operate. However, when considered in a larger context, both drivers affect the ability of ecosystems to provide fundamental services to society. In this work, we use a multi-scale approach to identify key mechanisms through which land cover transformation significantly affects ecological, hydrological and ecoclimatological dynamics, potentially leading to loss of societally critical regulation services. We propose a suite of examples spanning multiple spatial and temporal scales that illustrate the effects of land cover transformations on ecological, hydrological, biogeochemical and climatic functions in tropical South America. These examples highlight important management challenges posed by global change, as well as the need to consider the feedbacks and interactions between multi-scale processes.

  11. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

    wavelet transforms, whereas the term "evolved" pertains only to the altered wavelet coefficients used during the inverse transform process. ... In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, $x(t) = \sum_{k}\sum_{n} d_{k,n}\,\psi_{k,n}(t)$ ... reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus
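
    The excerpt above describes the usual decompose / modify-coefficients / reconstruct cycle. The sketch below (assuming PyWavelets; the genetic-algorithm evolution of the coefficients is replaced by a simple per-subband gain) illustrates that cycle; unit gains give perfect reconstruction.

```python
# Sketch of the decompose / modify-coefficients / reconstruct cycle, assuming
# PyWavelets (pywt). The GA step that "evolves" the coefficients is replaced
# here by a placeholder gain per subband.
import numpy as np
import pywt

def filter_in_wavelet_domain(x, wavelet="db4", level=4, gains=None):
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    if gains is None:
        gains = [1.0] * len(coeffs)                  # identity: perfect reconstruction
    modified = [g * c for g, c in zip(gains, coeffs)]
    # x(t) rebuilt from the scaling and wavelet coefficients
    return pywt.waverec(modified, wavelet)[:len(x)]

t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(np.allclose(filter_in_wavelet_domain(x), x))   # True: unit gains reproduce the input
```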

  12. How do people transform landscapes? A sociological perspective on suburban sprawl and tropical deforestation.

    PubMed

    Rudel, Thomas K

    2009-07-01

    Humans transformed landscapes at an unprecedented scale and pace during the 20th century, creating sprawling urban areas in affluent countries and large-scale agricultural expanses in the tropics. To date, attempts to explain these processes in other disciplines have had a disembodied, ahistorical quality to them. A sociological account of these changes emphasizes the role of strategic actions by states and coalitions of interested parties in transforming landscapes. It identifies the agents of change and the timing of transformative events. Case studies of suburban sprawl and tropical deforestation illustrate the value of the sociological approach and the wide range of situations to which it applies.

  13. An evaluation of procedures to estimate monthly precipitation probabilities

    NASA Astrophysics Data System (ADS)

    Legates, David R.

    1991-01-01

    Many frequency distributions have been used to evaluate monthly precipitation probabilities. Eight of these distributions (including Pearson type III, extreme value, and transform-normal probability density functions) are comparatively examined to determine their ability to accurately represent variations in monthly precipitation totals for global hydroclimatological analyses. Results indicate that a modified version of the Box-Cox transform-normal distribution describes the 'true' precipitation distribution more adequately than any of the other methods. This assessment was made using a cross-validation procedure for a global network of 253 stations for which at least 100 years of monthly precipitation totals were available.
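
    A minimal sketch of the transform-normal idea, assuming SciPy and synthetic gamma-distributed monthly totals: precipitation is Box-Cox transformed toward normality, a normal distribution is fitted on the transformed scale, and non-exceedance probabilities are read from its CDF. The modified Box-Cox variant used in the study is not specified here, so the standard SciPy estimator stands in.

```python
# Transform-normal fit for monthly precipitation probabilities (hedged sketch).
import numpy as np
from scipy import stats
from scipy.special import boxcox as boxcox_scalar

rng = np.random.default_rng(0)
totals_mm = rng.gamma(shape=2.0, scale=40.0, size=100)   # stand-in for 100 years of monthly totals (mm)

z, lam = stats.boxcox(totals_mm)          # transform toward normality, lambda by maximum likelihood
mu, sigma = z.mean(), z.std(ddof=1)

def prob_not_exceeding(x_mm):
    """P(monthly total <= x_mm) under the fitted transform-normal model."""
    return stats.norm.cdf(boxcox_scalar(x_mm, lam), loc=mu, scale=sigma)

print(prob_not_exceeding(50.0), prob_not_exceeding(150.0))
```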

  14. Bradykinin-induced growth inhibition of normal rat kidney (NRK) cells is paralleled by a decrease in epidermal-growth-factor receptor expression.

    PubMed Central

    Van Zoelen, E J; Peters, P H; Afink, G B; Van Genesen, S; De Roos, D G; Van Rotterdam, W; Theuvenet, A P

    1994-01-01

    Normal rat kidney fibroblasts, grown to density arrest in the presence of epidermal growth factor (EGF), can be induced to undergo phenotypic transformation by treatment with transforming growth factor beta or retinoic acid. Here we show that bradykinin blocks this growth-stimulus-induced loss of density-dependent growth arrest by a specific receptor-mediated mechanism. The effects of bradykinin are specific, and are not mimicked by other phosphoinositide-mobilizing agents such as prostaglandin F2 alpha. Northern-blot analysis and receptor-binding studies demonstrate that bradykinin also inhibits the retinoic acid-induced increase in EGF receptor levels in these cells. These studies provide additional evidence that EGF receptor levels modulate EGF-induced expression of the transformed phenotype in these cells. PMID:8135739

  15. Multiresolution forecasting for futures trading using wavelet decompositions.

    PubMed

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
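
    A hedged sketch of the multiresolution forecasting idea, not the paper's exact pipeline: an additive à-trous-style decomposition stands in for the autocorrelation shell representation, and ordinary least-squares autoregressions with scale-dependent history windows stand in for the MLPs with automatic relevance determination.

```python
# Multiresolution forecasting sketch: additive decomposition, one simple AR
# model per scale with a scale-dependent window, forecasts summed to recombine.
import numpy as np

def atrous_decompose(x, levels=3):
    """Additive decomposition: x == smooth + sum(details)."""
    smooth, details = x.astype(float), []
    for j in range(levels):
        width = 2 ** (j + 1) + 1
        kernel = np.ones(width) / width
        smoother = np.convolve(smooth, kernel, mode="same")
        details.append(smooth - smoother)
        smooth = smoother
    return smooth, details

def ar_forecast(band, window):
    """One-step-ahead forecast from an ordinary least-squares AR(window) fit."""
    X = np.array([band[t - window:t] for t in range(window, len(band))])
    y = band[window:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return band[-window:] @ w

def multiresolution_forecast(x, levels=3):
    smooth, details = atrous_decompose(x, levels)
    # short history at fine scales, long history at coarse scales
    pred = sum(ar_forecast(d, window=2 ** (j + 2)) for j, d in enumerate(details))
    return pred + ar_forecast(smooth, window=2 ** (levels + 2))

series = np.cumsum(np.random.default_rng(0).normal(size=512))   # toy price-like series
print(multiresolution_forecast(series))
```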

  16. Conormal distributions in the Shubin calculus of pseudodifferential operators

    NASA Astrophysics Data System (ADS)

    Cappiello, Marco; Schulz, René; Wahlberg, Patrik

    2018-02-01

    We characterize the Schwartz kernels of pseudodifferential operators of Shubin type by means of a Fourier-Bros-Iagolnitzer transform. Based on this, we introduce as a generalization a new class of tempered distributions called Shubin conormal distributions. We study their transformation behavior, normal forms, and microlocal properties.

  17. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- or line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results also showed a high correlation between the differences in ground elevation and the RMS errors.

  18. Normal modes and mode transformation of pure electron vortex beams

    PubMed Central

    Thirunavukkarasu, G.; Mousley, M.; Babiker, M.

    2017-01-01

    Electron vortex beams constitute the first class of matter vortex beams which are currently routinely produced in the laboratory. Here, we briefly review the progress of this nascent field and put forward a natural quantum basis set which we show is suitable for the description of electron vortex beams. The normal modes are truncated Bessel beams (TBBs) defined in the aperture plane or the Fourier transform of the transverse structure of the TBBs (FT-TBBs) in the focal plane of a lens with the said aperture. As these modes are eigenfunctions of the axial orbital angular momentum operator, they can provide a complete description of the two-dimensional transverse distribution of the wave function of any electron vortex beam in such a system, in analogy with the prominent role Laguerre–Gaussian (LG) beams played in the description of optical vortex beams. The characteristics of the normal modes of TBBs and FT-TBBs are described, including the quantized orbital angular momentum (in terms of the winding number l) and the radial index p>0. We present the experimental realization of such beams using computer-generated holograms. The mode analysis can be carried out using astigmatic transformation optics, demonstrating close analogy with the astigmatic mode transformation between LG and Hermite–Gaussian beams. This article is part of the themed issue ‘Optical orbital angular momentum’. PMID:28069769

  19. Normal modes and mode transformation of pure electron vortex beams.

    PubMed

    Thirunavukkarasu, G; Mousley, M; Babiker, M; Yuan, J

    2017-02-28

    Electron vortex beams constitute the first class of matter vortex beams which are currently routinely produced in the laboratory. Here, we briefly review the progress of this nascent field and put forward a natural quantum basis set which we show is suitable for the description of electron vortex beams. The normal modes are truncated Bessel beams (TBBs) defined in the aperture plane or the Fourier transform of the transverse structure of the TBBs (FT-TBBs) in the focal plane of a lens with the said aperture. As these modes are eigenfunctions of the axial orbital angular momentum operator, they can provide a complete description of the two-dimensional transverse distribution of the wave function of any electron vortex beam in such a system, in analogy with the prominent role Laguerre-Gaussian (LG) beams played in the description of optical vortex beams. The characteristics of the normal modes of TBBs and FT-TBBs are described, including the quantized orbital angular momentum (in terms of the winding number l) and the radial index p>0. We present the experimental realization of such beams using computer-generated holograms. The mode analysis can be carried out using astigmatic transformation optics, demonstrating close analogy with the astigmatic mode transformation between LG and Hermite-Gaussian beams. This article is part of the themed issue 'Optical orbital angular momentum'. © 2017 The Author(s).

  20. Stabilization of growth of a pearlite colony because of interaction between carbon and lattice dilatations

    NASA Astrophysics Data System (ADS)

    Razumov, I. K.

    2017-10-01

    The previously proposed model of pearlite transformation is developed further, taking into account the possible interaction between carbon and lattice dilatations arising in austenite near the pearlite colony. The normal stresses caused by the colony stimulate autocatalysis of plates, and tangential stresses promote stabilization of the transformation front. The mechanism of ferrite branching, which can play an important role in the kinetics of pearlite and bainite transformations, is discussed.

  1. An algorithm to compute the sequency ordered Walsh transform

    NASA Technical Reports Server (NTRS)

    Larsen, H.

    1976-01-01

    A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminates Gray code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in-place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering and computing logical autocorrelations, and selective bit reversing.
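
    The sketch below separates the two ingredients that Larsen's algorithm fuses: a standard unnormalized fast Walsh-Hadamard transform in natural (Hadamard) order, followed by an explicit permutation into sequency order via a Gray-code conversion and bit reversal. It illustrates the ordering relationship, not the paper's decimation-in-time algorithm itself.

```python
# Fast Walsh-Hadamard transform plus reordering into sequency order (sketch).
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform, natural (Hadamard) order."""
    a = np.array(x, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            u, v = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = u + v, u - v
        h *= 2
    return a

def hadamard_to_sequency(coeffs):
    """Reorder Hadamard-ordered coefficients into sequency order."""
    n = len(coeffs)
    bits = n.bit_length() - 1
    out = np.empty_like(coeffs)
    for s in range(n):
        g = s ^ (s >> 1)                            # binary -> Gray code
        h = int(format(g, f"0{bits}b")[::-1], 2)    # bit reversal
        out[s] = coeffs[h]
    return out

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(hadamard_to_sequency(fwht(x)))
# The unnormalized transform is its own inverse up to a factor of n:
# fwht(fwht(x)) == len(x) * x.
```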

  2. 630 kVA high temperature superconducting transformer

    NASA Astrophysics Data System (ADS)

    Zueger, H.

    This document describes the 630 kVA HTS transformer project carried out by ABB jointly with EDF and ASC. The project started in April 1994, and its goal was to manufacture a full-scale superconducting distribution transformer and to operate it for one year in the grid of Geneva's utility (SIG). The conclusion highlights the future perspectives of HTS transformers.

  3. Cultural Adaptation of Headmasters' Transformational Leadership Scale and a Study on Teachers' Perceptions

    ERIC Educational Resources Information Center

    Balyer, Aydin; Özcan, Kenan

    2012-01-01

    Problem Statement: Transformational leadership increases organization members' commitment and engagement in meeting organizational goals and it enhances skills and capacities. Many studies reveal that transformational leadership behaviors, such as idealized influence, inspirational motivation, individualized consideration, innovative climate, and…

  4. Fundamentals of Research Data and Variables: The Devil Is in the Details.

    PubMed

    Vetter, Thomas R

    2017-10-01

    Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is comprised of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess if a set of data have a normal distribution (display "normality"). The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
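
    A short sketch of the workflow described above, assuming SciPy and matplotlib: a skewed sample is checked graphically (histogram, Q-Q plot) and quantitatively (Shapiro-Wilk, and Kolmogorov-Smirnov with parameters estimated from the sample, so only approximate), and a Box-Cox transformation is applied if normality is rejected.

```python
# Normality assessment and remediation by transformation (hedged sketch).
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.6, size=200)   # clearly skewed sample

# Graphical checks: histogram and Q-Q plot against the normal distribution.
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(data, bins=20)
stats.probplot(data, dist="norm", plot=ax2)

# Quantitative checks.
w_stat, p_shapiro = stats.shapiro(data)
ks_stat, p_ks = stats.kstest(data, "norm", args=(data.mean(), data.std(ddof=1)))

if p_shapiro < 0.05:
    transformed, lam = stats.boxcox(data)             # data must be strictly positive
    print(f"Box-Cox lambda = {lam:.2f}, Shapiro p after = {stats.shapiro(transformed)[1]:.3f}")
```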

  5. Accurate lumen diameter measurement in curved vessels in carotid ultrasound: an iterative scale-space and spatial transformation approach.

    PubMed

    Krishna Kumar, P; Araki, Tadashi; Rajan, Jeny; Saba, Luca; Lavra, Francesco; Ikeda, Nobutaka; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Gupta, Ajay; Suri, Jasjit S

    2017-08-01

    Monitoring of cerebrovascular diseases via carotid ultrasound has started to become routine. The measurement of image-based lumen diameter (LD) or inter-adventitial diameter (IAD) is a promising approach for quantification of the degree of stenosis. Manual measurements of LD/IAD are unreliable, subjective, and slow. The curvature associated with the vessels, along with non-uniformity in the plaque growth, poses further challenges. This study uses a novel and generalized approach for automated LD and IAD measurement based on a combination of spatial transformation and scale-space. In this iterative procedure, the scale-space is first used to get the lumen axis, which is then used with a spatial image transformation paradigm to get a transformed image. The scale-space is then reapplied to retrieve the lumen region and boundary in the transformed framework. Then, inverse transformation is applied to display the results in the original image framework. Two hundred and two patients' left and right common carotid artery (404 carotid images) B-mode ultrasound images were retrospectively analyzed. Our algorithm was validated against two manual expert tracings. The coefficient of correlation with the two manual tracings for LD was 0.98 (p < 0.0001) and 0.99 (p < 0.0001), respectively. The precision of merit between the manual expert tracings and the automated system was 97.7 and 98.7%, respectively. The experimental analysis demonstrated superior performance of the proposed method over conventional approaches. Several statistical tests demonstrated the stability and reliability of the automated system.

  6. Expression patterns of ERVWE1/Syncytin-1 and other placentally expressed human endogenous retroviruses along the malignant transformation process of hydatidiform moles.

    PubMed

    Bolze, Pierre-Adrien; Patrier, Sophie; Cheynet, Valérie; Oriol, Guy; Massardier, Jérôme; Hajri, Touria; Guillotte, Michèle; Bossus, Marc; Sanlaville, Damien; Golfier, François; Mallet, François

    2016-03-01

    Up to 20% of hydatidiform moles are followed by malignant transformation in gestational trophoblastic neoplasia and require chemotherapy. Syncytin-1 is involved in human placental morphogenesis and is also expressed in various cancers. We assessed the predictive value of the expression of Syncytin-1 and its interactants in the malignant transformation process of hydatidiform moles. Syncytin-1 glycoprotein was localized by immunohistochemistry in hydatidiform moles, gestational trophoblastic neoplasia and control placentas. The transcription levels of its locus ERVWE1, its interaction partners (hASCT1, hASCT2, TLR4 and DC-SIGN) and two loci (ERVFRDE1 and ERV3) involved in the expression of other placental envelopes were assessed by real-time PCR. Syncytin-1 glycoprotein was expressed in syncytiotrophoblast of hydatidiform moles with an apical enhancement when compared with normal placentas. Moles with further malignant transformation had a higher staining intensity of the Syncytin-1 surface unit C-terminus, but the transcription level of its locus ERVWE1 was not different from that of moles with further remission and normal placentas. hASCT1 and TLR4 showed lower transcription levels in complete moles when compared to normal placentas. ERVWE1, ERVFRDE1 and ERV3 transcription was down-regulated in hydatidiform moles and gestational trophoblastic neoplasia. Variations of Syncytin-1 protein localization and down-regulation of hASCT1 and TLR4 transcription are likely to reflect altered functions of Syncytin-1 in the premalignant context of complete moles. The reduced transcription in gestational trophoblastic diseases of ERVWE1, ERVFRDE1 and ERV3, whose expression during normal pregnancy is differentially regulated by promoter region methylation, suggests a joint dysregulation mechanism in the malignant context. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Influence of Transformation Plasticity on the Distribution of Internal Stress in Three Water-Quenched Cylinders

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Qin, Shengwei; Zhang, Jiazhi; Wang, Ying; Rong, Yonghua; Zuo, Xunwei; Chen, Nailu

    2017-10-01

    Based on the hardenability of three medium carbon steels, cylinders with the same 60-mm diameter and 240-mm length were designed for quenching in water to obtain microstructures, including a pearlite matrix (Chinese steel mark: 45), a bainite matrix (42CrMo), and a martensite matrix (40CrNiMo). Through the combination of normalized functions describing transformation plasticity (TP), the thermo-elasto-plastic constitutive equation was deduced. The results indicate that the finite element simulation (FES) of the internal stress distribution in the three kinds of hardenable steel cylinders based on the proposed exponent-modified (Ex-Modified) normalized function is more consistent with the X-ray diffraction (XRD) measurements than those based on the normalized functions proposed by Abrassart, Desalos, and Leblond, which is attributed to the fact that the Ex-Modified normalized function better describes the TP kinetics. In addition, there was no significant difference between the calculated and measured stress distributions, even though TP was taken into account for the 45 carbon steel; that is, TP can be ignored in FES. In contrast, in the 42CrMo and 40CrNiMo alloyed steels, the significant effect of TP on the residual stress distributions was demonstrated, meaning that TP must be included in the FES. The rationality of the preceding conclusions was analyzed. The complex quenching stress is a consequence of interactions between the thermal and phase transformation stresses. The separated calculations indicate that the three steels exhibit similar thermal stress distributions for the same water-quenching condition, but different phase transformation stresses between 45 carbon steel and alloyed steels, leading to different distributions of their axial and tangential stresses.

  8. Biological responses of progestogen metabolites in normal and cancerous human breast.

    PubMed

    Pasqualini, Jorge R; Chetrite, Gérard S

    2010-12-01

    At present, more than 200 progestogen molecules are available, but their biological response is a function of various factors: affinity to progesterone or other receptors, their structure, the target tissues considered, biological response, experimental conditions, dose, method of administration and metabolic transformations. Metabolic transformation is of huge importance because in various biological processes the metabolic product(s) not only control the activity of the parent hormone but also have important activities of their own. In this regard, it was observed that the 20-dihydro derivative of the progestogen dydrogesterone (Duphaston®) is significantly more active than the parent compound in inhibiting sulfatase and 17β-hydroxysteroid dehydrogenase in human breast cancer cells. Estrone sulfatase activity is also inhibited by norelgestromin, a norgestimate metabolite. Interesting information was obtained with a similar progestogen, tibolone, which is rapidly metabolized into the active 3α/3β-hydroxy and 4-ene metabolites. All these metabolites can inhibit sulfatase and 17β-hydroxysteroid dehydrogenase and stimulate sulfotransferase in human breast cancer cells. Another attractive aspect is the metabolic transformation of progesterone itself in human breast tissues. In the normal breast progesterone is mainly converted to 4-ene derivatives, whereas in the tumor tissue it is converted mostly to 5α-pregnane derivatives. 20α-Dihydroprogesterone is found mainly in normal breast tissue and possesses antiproliferative properties as well as the ability to act as an anti-aromatase agent. Consequently, this progesterone metabolite could be involved in the control of estradiol production in the normal breast and therefore implicated in one of the multifactorial mechanisms of the breast carcinogenesis process. In conclusion, a better understanding of both natural and synthetic hormone metabolic transformations and their control could potentially provide attractive new therapies for the treatment of hormone-dependent pathologies.

  9. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed Central

    2011-01-01

    Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatments design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614

  10. The Box-Cox power transformation on nursing sensitive indicators: does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed

    Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy

    2011-08-19

    Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatments design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation and mixed models with correlated error terms with NDNQI examples showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
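
    A hedged sketch of the comparison described above, using simulated data: the Box-Cox parameter is estimated once on the pooled outcome (ignoring the structural effect) and once by profiling the log-likelihood with the group factor included in the design matrix; the two estimates can then be compared before the subsequent analysis of variance.

```python
# Box-Cox parameter estimation with vs without a structural (group) effect.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
group = np.repeat([0, 1, 2], 100)
y = np.exp(1.0 + 0.8 * group + rng.normal(scale=0.4, size=group.size))  # skewed outcome

# (a) Ignore the structural effect: pooled maximum-likelihood estimate.
_, lam_pooled = stats.boxcox(y)

# (b) Include the structural effect: profile log-likelihood with group dummies.
X = np.column_stack([np.ones_like(y), group == 1, group == 2]).astype(float)

def neg_profile_loglik(lam):
    z = stats.boxcox(y, lmbda=lam)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    n = y.size
    return 0.5 * n * np.log(rss / n) - (lam - 1.0) * np.sum(np.log(y))

lam_adjusted = optimize.minimize_scalar(neg_profile_loglik, bounds=(-2, 2), method="bounded").x
print(lam_pooled, lam_adjusted)
```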

  11. Regeneration of multiple shoots from transgenic potato events facilitates the recovery of phenotypically normal lines: assessing a cry9Aa2 gene conferring insect resistance

    PubMed Central

    2011-01-01

    Background The recovery of high performing transgenic lines in clonal crops is limited by the occurrence of somaclonal variation during the tissue culture phase of transformation. This is usually circumvented by developing large populations of transgenic lines, each derived from the first shoot to regenerate from each transformation event. This study investigates a new strategy of assessing multiple shoots independently regenerated from different transformed cell colonies of potato (Solanum tuberosum L.). Results A modified cry9Aa2 gene, under the transcriptional control of the CaMV 35S promoter, was transformed into four potato cultivars using Agrobacterium-mediated gene transfer using a nptII gene conferring kanamycin resistance as a selectable marker gene. Following gene transfer, 291 transgenic lines were grown in greenhouse experiments to assess somaclonal variation and resistance to potato tuber moth (PTM), Phthorimaea operculella (Zeller). Independently regenerated lines were recovered from many transformed cell colonies and Southern analysis confirmed whether they were derived from the same transformed cell. Multiple lines regenerated from the same transformed cell exhibited a similar response to PTM, but frequently exhibited a markedly different spectrum of somaclonal variation. Conclusions A new strategy for the genetic improvement of clonal crops involves the regeneration and evaluation of multiple shoots from each transformation event to facilitate the recovery of phenotypically normal transgenic lines. Most importantly, regenerated lines exhibiting the phenotypic appearance most similar to the parental cultivar are not necessarily derived from the first shoot regenerated from a transformed cell colony, but can frequently be a later regeneration event. PMID:21995716

  12. Precipitation and Phase Transformations in 2101 Lean Duplex Stainless Steel During Isothermal Aging

    NASA Astrophysics Data System (ADS)

    Maetz, Jean-Yves; Cazottes, Sophie; Verdu, Catherine; Kleber, Xavier

    2016-01-01

    The effect of isothermal aging at 963 K (690 °C) on the microstructure of a 2101 lean duplex stainless steel, with the composition Fe-21.5Cr-5Mn-1.6Ni-0.22N-0.3Mo, was investigated using a multi-technique and multi-scale approach. The kinetics of phase transformation and precipitation was followed from a few minutes to thousands of hours using thermoelectric power measurements; based on these results, certain aging states were selected for electron microscopy characterization. Scanning electron microscopy, electron back-scattered diffraction, and transmission electron microscopy were used to quantitatively describe the microstructural evolution through crystallographic analysis, chemical analysis, and volume fraction measurements from the macroscopic scale down to the nanometric scale. During aging, the precipitation of M23C6 carbides, Cr2N nitrides, and σ phase as well as the transformation of ferrite into austenite and austenite into martensite was observed. These complex microstructural changes are controlled by Cr volume diffusion. The precipitation and phase transformation mechanisms are described.

  13. A program for handling map projections of small-scale geospatial raster data

    USGS Publications Warehouse

    Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.

    2012-01-01

    Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
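
    As a rough illustration of direct point-to-point transformation of a global raster grid (the general operation, not mapIMG's own API), the following sketch assumes pyproj and projects the cell-centre coordinates of a 1-degree grid into a Mollweide projection, with the inverse transform shown as the usual basis for resampling.

```python
# Point-to-point reprojection of global raster cell centres (hedged sketch).
import numpy as np
from pyproj import Transformer

# Cell-centre longitudes/latitudes of a coarse global raster.
lon = np.arange(-179.5, 180, 1.0)
lat = np.arange(89.5, -90, -1.0)
lon2d, lat2d = np.meshgrid(lon, lat)

# Forward transform each cell centre into a Mollweide (equal-area) projection.
fwd = Transformer.from_crs("EPSG:4326", "+proj=moll +lon_0=0", always_xy=True)
x, y = fwd.transform(lon2d, lat2d)

# Inverse mapping (projected grid -> geographic) is the usual way to resample:
# for each output cell, find the source lon/lat and pick or interpolate a value.
inv = Transformer.from_crs("+proj=moll +lon_0=0", "EPSG:4326", always_xy=True)
lon_back, lat_back = inv.transform(x, y)
```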

  14. TGF-Alpha Expression During Breast Tumorigenesis

    DTIC Science & Technology

    1999-07-01

    contributors to breast tumorigenesis. Therefore, we examined TGFa expression in normal and transformed epithelium. We also examined TGFa and EGFR protein in...patient samples it is difficult to determine what effect tissue culture has on normal TGFa protein synthesis and processing. TGFa protein expression

  15. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2015-10-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box-Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples ( n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated.
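
    A minimal sketch of the rankit (rank-based inverse normal) transformation and the correlation comparison discussed above, assuming SciPy; the (rank - 0.5)/n convention is used and heavy-tailed data are simulated from a t distribution.

```python
# Pearson vs Spearman vs rankit-based correlation on heavy-tailed data.
import numpy as np
from scipy import stats

def rankit(x):
    """Rank-based inverse normal transform using the (rank - 0.5)/n convention."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

rng = np.random.default_rng(3)
x = rng.standard_t(df=2, size=200)                # heavy-tailed predictor
y = 0.5 * x + rng.standard_t(df=2, size=200)      # heavy-tailed outcome

r_pearson = stats.pearsonr(x, y)[0]
r_spearman = stats.spearmanr(x, y)[0]
r_rankit = stats.pearsonr(rankit(x), rankit(y))[0]
print(r_pearson, r_spearman, r_rankit)
```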

  16. Nucleoprotein Changes in Plant Tumor Growth

    PubMed Central

    Rasch, Ellen; Swift, Hewson; Klein, Richard M.

    1959-01-01

    Tumor cell transformation and growth were studied in a plant neoplasm, crown gall of bean, induced by Agrobacterium rubi. Ribose nucleic acid (RNA), deoxyribose nucleic acid (DNA), histone, and total protein were estimated by microphotometry of nuclei, nucleoli, and cytoplasm in stained tissue sections. Transformation of normal cells to tumor cells was accompanied by marked increases in ribonucleoprotein content of affected tissues, reaching a maximum 2 to 3 days after inoculation with virulent bacteria. Increased DNA levels were in part associated with increased mitotic frequency, but also with progressive accumulation of nuclei in the higher DNA classes, formed by repeated DNA doubling without intervening reduction by mitosis. Some normal nuclei of the higher DNA classes (with 2, 4, or 8 times the DNA content of diploid nuclei) were reduced to diploid levels by successive cell divisions without intervening DNA synthesis. The normal relation between DNA synthesis and mitosis was thus disrupted in tumor tissue. Nevertheless, clearly defined DNA classes, as found in homologous normal tissues, were maintained in the tumor at all times. PMID:13673042

  17. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    PubMed Central

    Hittner, James B.

    2014-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared with its major alternatives, including the Spearman rank-order correlation, the bootstrap estimate, the Box–Cox transformation family, and a general normalizing transformation (i.e., rankit), as well as to various bias adjustments. Nonnormality caused the correlation coefficient to be inflated by up to +.14, particularly when the nonnormality involved heavy-tailed distributions. Traditional bias adjustments worsened this problem, further inflating the estimate. The Spearman and rankit correlations eliminated this inflation and provided conservative estimates. Rankit also minimized random error for most sample sizes, except for the smallest samples (n = 10), where bootstrapping was more effective. Overall, results justify the use of carefully chosen alternatives to the Pearson correlation when normality is violated. PMID:29795841

  18. Measuring Resistance to Change at the Within-Session Level

    PubMed Central

    Tonneau, François; Ríos, Américo; Cabrera, Felipe

    2006-01-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a data base for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results. PMID:16903495

  19. Measuring resistance to change at the within-session level.

    PubMed

    Tonneau, François; Ríos, Américo; Cabrera, Felipe

    2006-07-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a data base for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results.
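
    A small worked example of the standard measure referred to above, using made-up response rates: each component's disrupted rates are divided by their baseline level and log-transformed, and resistance is judged by how slowly that quantity declines.

```python
# Normalized, log-transformed response rates (proportion of baseline), as used
# in resistance-to-change analyses. The numbers below are illustrative only.
import numpy as np

baseline = {"rich": 80.0, "lean": 40.0}                     # responses per minute
disruption = {"rich": [72.0, 60.0, 45.0], "lean": [30.0, 18.0, 9.0]}

for comp, rates in disruption.items():
    log_prop = np.log10(np.array(rates) / baseline[comp])   # log proportion of baseline
    print(comp, np.round(log_prop, 3))
# The component whose log proportion of baseline declines more slowly is the more
# resistant one; whether the underlying disruption model is multiplicative is
# exactly what the paper's within-session analysis tests.
```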

  20. Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations

    DOE PAGES

    Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F.; ...

    2016-03-04

    Chemical imaging at the atomic scale provides a useful real-space approach to chemically investigate solid crystal structures, and has recently been demonstrated in aberration-corrected scanning transmission electron microscopy (STEM). Atomic-scale chemical imaging by STEM using energy-dispersive X-ray spectroscopy (EDS) offers easy data interpretation with a one-to-one correspondence between image and structure, but has a severe shortcoming due to the poor efficiency of X-ray generation and collection. As a result, it requires a long acquisition time, typically more than a few hundred seconds, limiting its potential applications. Here we describe the development of an atomic-scale STEM EDS chemical imaging technique that cuts the acquisition time to one or a few seconds, reducing the acquisition time by more than 100 times. This method was demonstrated using LaAlO3 (LAO) as a model crystal. Applying this method to the study of the phase transformation induced by electron-beam radiation in a layered lithium transition-metal (TM) oxide, i.e., Li[Li0.2Ni0.2Mn0.6]O2 (LNMO), a cathode material for lithium-ion batteries, we obtained a time series of atomic-scale chemical images, showing the transformation progressing preferentially by jumping of Ni atoms from the TM layers into the Li layers. The new capability offers an opportunity for temporal, atomic-scale chemical mapping of crystal structures for the investigation of materials susceptible to electron irradiation as well as of phase transformation and dynamics at the atomic scale.

  1. Feedback linearization of singularly perturbed systems based on canonical similarity transformations

    NASA Astrophysics Data System (ADS)

    Kabanov, A. A.

    2018-05-01

    This paper discusses the problem of feedback linearization of a singularly perturbed system in a state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form that significantly simplifies the control design problem. The proposed similarity transformation allows linearization of the system without considering a virtual output (as is needed for the normal-form method), and the transition from the phase coordinates of the transformed system to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.

  2. Genetic changes in mammalian cells transformed by helium ions

    NASA Astrophysics Data System (ADS)

    Durante, M.; Grossi, G.; Yang, T. C.; Roots, R.

    Midterm Syrian Hamster embryo (SHE) cells were employed to study high LET-radiation induced tumorigenesis. Normal SHE cells (secondary passage) were irradiated with accelerated helium ions at an incident energy of 22 MeV/u (9-10 keV/μm). Transformed clones were isolated after growth in soft agar of cells obtained from the foci of the initial monolayer plated postirradiation. To study the progression process of malignant transformation, the transformed clones were followed by monolayer subculturing for prolonged periods of time. Subsequently, neoplasia tests in nude mice were done. In this work, however, we have focused on karyotypic changes in the banding patterns of the chromosomes during the early part of the progressive process of cell transformation for helium ion-induced transformed cells.

  3. Two Point Space-Time Correlation of Density Fluctuations Measured in High Velocity Free Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2006-01-01

    Two-point space-time correlations of air density fluctuations in unheated, fully-expanded free jets at Mach numbers M(sub j) = 0.95, 1.4, and 1.8 were measured using a Rayleigh scattering based diagnostic technique. The molecular scattered light from two small probe volumes of 1.03 mm length was measured for a completely non-intrusive means of determining the turbulent density fluctuations. The time series of density fluctuations were analyzed to estimate the integral length scale L in a moving frame of reference and the convective Mach number M(sub c) at different narrow Strouhal frequency (St) bands. It was observed that M(sub c) and the normalized moving frame length scale L*St/D, where D is the jet diameter, increased with Strouhal frequency before leveling off at the highest resolved frequency. Significant differences were observed between data obtained from the lip shear layer and the centerline of the jet. The wave number frequency transform of the correlation data demonstrated progressive increase in the radiative part of turbulence fluctuations with increasing jet Mach number.

  4. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    PubMed

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
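
    A hedged sketch of the general back-transformation idea, not the paper's estimator or its covariance adjustment: because the Box-Cox transform is monotone, each group's median on the original scale can be approximated by inverse-transforming the fitted mean on the transformed (approximately normal) scale, and the difference reported in original units (SciPy assumed, simulated data).

```python
# Median difference on the original scale via Box-Cox back-transformation (sketch).
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(4)
control = rng.lognormal(mean=5.0, sigma=0.8, size=60)
treated = rng.lognormal(mean=5.4, sigma=0.8, size=60)

pooled = np.concatenate([control, treated])
_, lam = stats.boxcox(pooled)                       # common transformation parameter

z_c = stats.boxcox(control, lmbda=lam)
z_t = stats.boxcox(treated, lmbda=lam)

# Under approximate normality after transformation, the transformed mean is also
# the transformed median; monotonicity lets us map it back to the original scale.
median_diff = inv_boxcox(z_t.mean(), lam) - inv_boxcox(z_c.mean(), lam)
print(f"estimated difference in medians (original scale): {median_diff:.1f}")
```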

  5. The anharmonic quartic force field infrared spectra of three polycyclic aromatic hydrocarbons: Naphthalene, anthracene, and tetracene.

    PubMed

    Mackie, Cameron J; Candian, Alessandra; Huang, Xinchuan; Maltseva, Elena; Petrignani, Annemieke; Oomens, Jos; Buma, Wybren Jan; Lee, Timothy J; Tielens, Alexander G G M

    2015-12-14

    Current efforts to characterize and study interstellar polycyclic aromatic hydrocarbons (PAHs) rely heavily on theoretically predicted infrared (IR) spectra. Generally, such studies use the scaled harmonic frequencies for band positions and double harmonic approximation for intensities of species, and then compare these calculated spectra with experimental spectra obtained under matrix isolation conditions. High-resolution gas-phase experimental spectroscopic studies have recently revealed that the double harmonic approximation is not sufficient for reliable spectra prediction. In this paper, we present the anharmonic theoretical spectra of three PAHs: naphthalene, anthracene, and tetracene, computed with a locally modified version of the SPECTRO program using Cartesian derivatives transformed from Gaussian 09 normal coordinate force constants. Proper treatments of Fermi resonances lead to an impressive improvement on the agreement between the observed and theoretical spectra, especially in the C-H stretching region. All major IR absorption features in the full-scale matrix-isolated spectra, the high-temperature gas-phase spectra, and the most recent high-resolution gas-phase spectra obtained under supersonically cooled molecular beam conditions in the CH-stretching region are assigned.

  6. The anharmonic quartic force field infrared spectra of three polycyclic aromatic hydrocarbons: Naphthalene, anthracene, and tetracene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackie, Cameron J., E-mail: mackie@strw.leidenuniv.nl; Candian, Alessandra; Tielens, Alexander G. G. M.

    2015-12-14

    Current efforts to characterize and study interstellar polycyclic aromatic hydrocarbons (PAHs) rely heavily on theoretically predicted infrared (IR) spectra. Generally, such studies use the scaled harmonic frequencies for band positions and double harmonic approximation for intensities of species, and then compare these calculated spectra with experimental spectra obtained under matrix isolation conditions. High-resolution gas-phase experimental spectroscopic studies have recently revealed that the double harmonic approximation is not sufficient for reliable spectra prediction. In this paper, we present the anharmonic theoretical spectra of three PAHs: naphthalene, anthracene, and tetracene, computed with a locally modified version of the SPECTRO program using Cartesian derivatives transformed from Gaussian 09 normal coordinate force constants. Proper treatments of Fermi resonances lead to an impressive improvement on the agreement between the observed and theoretical spectra, especially in the C–H stretching region. All major IR absorption features in the full-scale matrix-isolated spectra, the high-temperature gas-phase spectra, and the most recent high-resolution gas-phase spectra obtained under supersonically cooled molecular beam conditions in the CH-stretching region are assigned.

  7. Mass flow and energy balance plus economic analysis of a full-scale biogas plant in the rice-wine-pig system.

    PubMed

    Li, Jiang; Kong, Chuixue; Duan, Qiwu; Luo, Tao; Mei, Zili; Lei, Yunhui

    2015-10-01

    This paper presents mass flow and energy balance as well as an economic analysis for a biogas plant in a rice-wine-pig system at a practical rather than laboratory scale. Results showed that the feeding amount was 65.30 t d(-1) (total solid matter (TSM) 1.3%) for the normal-temperature continuous stirred tank reactor (CSTR), and 16.20 t d(-1) (TSM 8.4%) for the mesophilic CSTR. The digestion produced 80.50 t d(-1) of mass, with 76.41 t d(-1) flowing into rice fields and 4.49 t d(-1) into composting. Energy consumption of this plant fluctuated with the seasons, and the surplus energy was 823,221 kWh/year. Thus, the biogas plant was critical for material recycling and energy transformation in this agro-ecosystem. The economic analysis showed that the payback time of the plant was 10.9 years. It also revealed that the application of biogas as a conventional energy replacement would be attractive for a crop-wine-livestock ecosystem with anaerobic digestion of manure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order has various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.

  9. Real-time atomistic observation of structural phase transformations in individual hafnia nanorods

    DOE PAGES

    Hudak, Bethany M.; Depner, Sean W.; Waetzig, Gregory R.; ...

    2017-05-12

    High-temperature phases of hafnium dioxide have exceptionally high dielectric constants and large bandgaps, but quenching them to room temperature remains a challenge. Scaling the bulk form to nanocrystals, while successful in stabilizing the tetragonal phase of isomorphous ZrO2, has produced nanorods with a twinned version of the room temperature monoclinic phase in HfO2. Here we use in situ heating in a scanning transmission electron microscope to observe the transformation of an HfO2 nanorod from monoclinic to tetragonal, with a transformation temperature suppressed by over 1000°C from bulk. When the nanorod is annealed, we observe with atomic-scale resolution the transformation from twinned-monoclinic to tetragonal, starting at a twin boundary and propagating via coherent transformation dislocation; the nanorod is reduced to hafnium on cooling. Unlike the bulk displacive transition, nanoscale size-confinement enables us to manipulate the transformation mechanism, and we observe discrete nucleation events and sigmoidal nucleation and growth kinetics.

  10. Study of low insertion loss and miniaturization wavelet transform and inverse transform processor using SAW devices.

    PubMed

    Jiang, Hua; Lu, Wenke; Zhang, Guoan

    2013-07-01

    In this paper, we propose a low-insertion-loss, miniaturized wavelet transform and inverse transform processor using surface acoustic wave (SAW) devices. The new SAW wavelet transform devices (WTDs) use a structure with two electrode-width-controlled (EWC) single-phase unidirectional transducers (SPUDT-SPUDT). This structure consists of an input withdrawal-weighted interdigital transducer (IDT) and an output overlap-weighted IDT. Three experimental devices for the scales 2(-1), 2(-2), and 2(-3) are designed and measured. The minimum insertion loss of the three devices reaches 5.49 dB, 4.81 dB, and 5.38 dB, respectively, which is lower than earlier results. Both the electrode width and the number of electrode pairs are reduced, making the three devices much smaller than earlier devices. Therefore, the method described in this paper is suitable for implementing an arbitrary multi-scale, low-insertion-loss, miniaturized wavelet transform and inverse transform processor using SAW devices. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. MRS3D: 3D Spherical Wavelet Transform on the Sphere

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2011-12-01

    Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. We present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPIX angular pixelisation scheme. We tested the 3D wavelet transform and as a toy-application, applied a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and found we can successfully remove noise without much loss to the large scale structure. The new spherical 3D isotropic wavelet transform, called MRS3D, is ideally suited to analysing and denoising future 3D spherical cosmological surveys; it uses a novel discrete spherical Fourier-Bessel Transform. MRS3D is based on two packages, IDL and Healpix and can be used only if these two packages have been installed.

  12. Birkhoff Normal Form for Some Nonlinear PDEs

    NASA Astrophysics Data System (ADS)

    Bambusi, Dario

    We consider the problem of extending to PDEs the Birkhoff normal form theorem for Hamiltonian systems close to nonresonant elliptic equilibria. As a model problem we take the nonlinear wave equation with Dirichlet boundary conditions on [0,π], where the nonlinearity g is an analytic skew-symmetric function which vanishes for u=0 and is periodic with period 2π in the x variable. We prove, under a nonresonance condition which is fulfilled for most g's, that for any integer M there exists a canonical transformation that puts the Hamiltonian in Birkhoff normal form up to a remainder of order M. The canonical transformation is well defined in a neighbourhood of the origin of a Sobolev type phase space of sufficiently high order. Some dynamical consequences are obtained. The technique of proof is applicable to quite general semilinear equations in one space dimension.
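
    Schematically (a hedged sketch of the statement only, not the paper's precise estimates), the result provides a canonical transformation T_M such that

        \[
          H \circ \mathcal{T}_M \;=\; H_0 + Z_M + \mathcal{R}_M ,
        \]

    where H_0 is the quadratic part describing the linear dynamics, Z_M is a polynomial in Birkhoff normal form (it Poisson-commutes with H_0), and the remainder R_M is small of order M, roughly $|\mathcal{R}_M(u)| \lesssim \|u\|_s^{M}$ in a neighbourhood of the origin of a Sobolev-type phase space of sufficiently high order s.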

  13. Nonstationary Deformation of an Elastic Layer with Mixed Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Kubenko, V. D.

    2016-11-01

    The analytic solution to the plane problem for an elastic layer under a nonstationary surface load is found for mixed boundary conditions: normal stress and tangential displacement are specified on one side of the layer (fourth boundary-value problem of elasticity) and tangential stress and normal displacement are specified on the other side of the layer (second boundary-value problem of elasticity). The Laplace and Fourier integral transforms are applied. The inverse Laplace and Fourier transforms are found exactly using tabulated formulas and convolution theorems for various nonstationary loads. Explicit analytical expressions for stresses and displacements are derived. Loads applied to a constant surface area and to a surface area varying in a prescribed manner are considered. Computations demonstrate the dependence of the normal stress on time and spatial coordinates. Features of wave processes are analyzed.

  14. The relationship between oceanic transform fault segmentation, seismicity, and thermal structure

    NASA Astrophysics Data System (ADS)

    Wolfson-Schwehr, Monica

    Mid-ocean ridge transform faults (RTFs) are typically viewed as geometrically simple, with fault lengths readily constrained by the ridge-transform intersections. This relative simplicity, combined with well-constrained slip rates, make them an ideal environment for studying strike-slip earthquake behavior. As the resolution of available bathymetric data over oceanic transform faults continues to improve, however, it is being revealed that the geometry and structure of these faults can be complex, including such features as intra-transform pull-apart basins, intra-transform spreading centers, and cross-transform ridges. To better determine the resolution of structural complexity on RTFs, as well as the prevalence of RTF segmentation, fault structure is delineated on a global scale. Segmentation breaks the fault system up into a series of subparallel fault strands separated by an extensional basin, intra-transform spreading center, or fault step. RTF segmentation occurs across the full range of spreading rates, from faults on the ultraslow portion of the Southwest Indian Ridge to faults on the ultrafast portion of the East Pacific Rise (EPR). It is most prevalent along the EPR, which hosts the fastest spreading rates in the world and has undergone multiple changes in relative plate motion over the last couple of million years. Earthquakes on RTFs are known to be small, to scale with the area above the 600°C isotherm, and to exhibit some of the most predictable behaviors in seismology. In order to determine whether segmentation affects the global RTF scaling relations, the scalings are recomputed using an updated seismic catalog and fault database in which RTF systems are broken up according to their degree of segmentation (as delineated from available bathymetric datasets). No statistically significant differences between the new computed scaling relations and the current scaling relations were found, though a few faults were identified as outliers. Finite element analysis is used to model 3-D RTF fault geometry assuming a viscoplastic rheology in order to determine how segmentation affects the underlying thermal structure of the fault. In the models, fault segment length, length and location along fault of the intra-transform spreading center, and slip rate are varied. A new scaling relation is developed for the critical fault offset length (OC) that significantly reduces the thermal area of adjacent fault segments, such that adjacent segments are fully decoupled at ~4 OC . On moderate to fast slipping RTFs, offsets ≥ 5 km are sufficient to significantly reduce the thermal influence between two adjacent transform fault segments. The relationship between fault structure and seismic behavior was directly addressed on the Discovery transform fault, located at 4°S on the East Pacific Rise. One year of microseismicity recorded on an OBS array, and 24 years of Mw ≥ 5.4 earthquakes obtained from the Global Centroid Moment Tensor catalog, were correlated with surface fault structure delineated from high-resolution multibeam bathymetry. Each of the 15 Mw ≥ 5.4 earthquakes was relocated into one of five distinct repeating rupture patches, while microseismicity was found to be reduced within these patches. While the endpoints of these patches appeared to correlate with structural features on the western segment of Discovery, small step-overs in the primary fault trace were not observed at patch boundaries. 
This indicates that physical segmentation of the fault is not the primary control on the size and location of large earthquakes on Discovery, and that along-strike heterogeneity in fault zone properties must play an important role.

  15. Spectrum transformation for divergent iterations

    NASA Technical Reports Server (NTRS)

    Gupta, Murli M.

    1991-01-01

    Certain spectrum transformation techniques are described that can be used to transform a diverging iteration into a converging one. Two techniques, called spectrum scaling and spectrum enveloping, are considered, and the determination of optimum values of the transformation parameters is discussed. Numerical examples are given to show how these techniques can be used to transform diverging iterations into converging ones; they can also be used to accelerate the convergence of otherwise convergent iterations.
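
    A hedged illustration of the spectrum-scaling idea as I read it (not necessarily the paper's exact formulation): a relaxation parameter omega rescales the spectrum of a linear fixed-point iteration so that the scaled iteration converges. The matrix and parameter values below are assumptions chosen for the example.

        import numpy as np

        # Fixed-point iteration x_{k+1} = G x_k + c for solving (I - G) x = c.
        # Hypothetical matrix; its eigenvalues (2.0 and 1.4) lie outside the
        # unit disk, so the unscaled iteration diverges.
        G = np.array([[1.7, 0.3],
                      [0.3, 1.7]])
        c = np.array([1.0, -0.5])
        x_true = np.linalg.solve(np.eye(2) - G, c)

        def iterate(omega, n=100):
            # Spectrum-scaled iteration x <- (1 - omega) x + omega (G x + c).
            # Its iteration matrix (1 - omega) I + omega G has eigenvalues
            # 1 + omega (lambda_i - 1), so omega can pull them inside the unit
            # disk (possible here because both lambda_i lie on the same side of 1).
            x = np.zeros(2)
            for _ in range(n):
                x = (1 - omega) * x + omega * (G @ x + c)
            return x

        print(np.linalg.norm(iterate(1.0) - x_true))   # omega = 1: original iteration diverges
        print(np.linalg.norm(iterate(-1.0) - x_true))  # omega = -1: scaled spectrum converges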

  16. 78 FR 76789 - Additional Connect America Fund Phase II Issues

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-19

    ... inspection and copying during normal business hours in the FCC Reference Information Center, Portals II, 445... Phase I to Phase II. 2. Timing of Phase II Support Disbursements. In the USF/ICC Transformation Order... language in paragraph 180 of the USF/ICC Transformation Order. We now seek to more fully develop the record...

  17. ALK and TGF-Beta Resistance in Breast Cancer

    DTIC Science & Technology

    2017-10-01

    and H.F. Lodish, Role of transforming growth factor beta in human disease. N Engl J Med, 2000. 342(18): p. 1350-8. 3. Massague, J., S.W. Blain, and... Transforming growth factor-beta signaling in normal and malignant hematopoiesis. Leukemia, 2003. 17(9): p. 1731-7. 5. Lehman, H.L., et al., Modeling and

  18. Short-Term Plasticity of the Visuomotor Map during Grasping Movements in Humans

    ERIC Educational Resources Information Center

    Safstrom, Daniel; Edin, Benoni B.

    2005-01-01

    During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects…

  19. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to identify peaks, which limits automation and makes it prone to distortion. To address these shortcomings, the first-order derivative method was improved by applying a moving-average iterative method and normalization techniques to identify the peaks. Accurate identification of the chromatographic peaks was achieved by using multiple iterations of the moving average of the signal and square-wave curves to determine the optimal values of the normalized peak-identification parameters, combined with absolute peak retention times and a peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width, or peak shape changes. It is adaptable enough to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
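
    Not the authors' algorithm: a generic sketch of the ingredients named above (iterated moving average, normalized peak heights, a peak window), with all thresholds, widths, and the synthetic test signal assumed.

        import numpy as np

        def moving_average(y, width=5, iterations=3):
            # Iterated moving-average smoothing of the signal curve.
            kernel = np.ones(width) / width
            for _ in range(iterations):
                y = np.convolve(y, kernel, mode="same")
            return y

        def detect_peaks(y, rel_threshold=0.05, window=5):
            # Candidate peaks are sign changes (+ to -) of the first derivative of
            # the smoothed curve; peaks are kept if their height, normalized to the
            # largest peak, exceeds rel_threshold, then refined to the raw-signal
            # maximum within a small window.
            smooth = moving_average(np.asarray(y, dtype=float))
            dy = np.gradient(smooth)
            candidates = np.where((dy[:-1] > 0) & (dy[1:] <= 0))[0]
            if candidates.size == 0:
                return []
            heights = smooth[candidates]
            kept = candidates[heights >= rel_threshold * heights.max()]
            peaks = []
            for p in kept:
                lo = max(p - window, 0)
                peaks.append(lo + int(np.argmax(y[lo:p + window])))
            return peaks

        # Synthetic chromatogram: two Gaussian peaks plus noise.
        t = np.linspace(0, 10, 2000)
        signal = np.exp(-(t - 3) ** 2 / 0.01) + 0.4 * np.exp(-(t - 7) ** 2 / 0.02)
        signal += 0.01 * np.random.default_rng(0).standard_normal(t.size)
        print([round(t[i], 2) for i in detect_peaks(signal)])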

  20. Automated PET-only quantification of amyloid deposition with adaptive template and empirically pre-defined ROI

    NASA Astrophysics Data System (ADS)

    Akamatsu, G.; Ikari, Y.; Ohnishi, A.; Nishida, H.; Aita, K.; Sasaki, M.; Yamamoto, Y.; Sasaki, M.; Senda, M.

    2016-08-01

    Amyloid PET is useful for early and/or differential diagnosis of Alzheimer’s disease (AD). Quantification of amyloid deposition using PET has been employed to improve diagnosis and to monitor AD therapy, particularly in research. Although MRI is often used for segmentation of gray matter and for spatial normalization into standard Montreal Neurological Institute (MNI) space where region-of-interest (ROI) template is defined, 3D MRI is not always available in clinical practice. The purpose of this study was to examine the feasibility of PET-only amyloid quantification with an adaptive template and a pre-defined standard ROI template that has been empirically generated from typical cases. A total of 68 subjects who underwent brain 11C-PiB PET were examined. The 11C-PiB images were non-linearly spatially normalized to the standard MNI T1 atlas using the same transformation parameters of MRI-based normalization. The automatic-anatomical-labeling-ROI (AAL-ROI) template was applied to the PET images. All voxel values were normalized by the mean value of cerebellar cortex to generate the SUVR-scaled images. Eleven typical positive images and eight typical negative images were normalized and averaged, respectively, and were used as the positive and negative template. Positive and negative masks which consist of voxels with SUVR  ⩾1.7 were extracted from both templates. Empirical PiB-prone ROI (EPP-ROI) was generated by subtracting the negative mask from the positive mask. The 11C-PiB image of each subject was non-rigidly normalized to the positive and negative template, respectively, and the one with higher cross-correlation was adopted. The EPP-ROI was then inversely transformed to individual PET images. We evaluated differences of SUVR between standard MRI-based method and PET-only method. We additionally evaluated whether the PET-only method would correctly categorize 11C-PiB scans as positive or negative. Significant correlation was observed between the SUVRs obtained with AAL-ROI and those with EPP-ROI when MRI-based normalization was used, the latter providing higher SUVR. When EPP-ROI was used, MRI-based method and PET-only method provided almost identical SUVR. All 11C-PiB scans were correctly categorized into positive and negative using a cutoff value of 1.7 as compared to visual interpretation. The 11C-PiB SUVR were 2.30  ±  0.24 and 1.25  ±  0.11 for the positive and negative images. PET-only amyloid quantification method with adaptive templates and EPP-ROI can provide accurate, robust and simple amyloid quantification without MRI.
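
    A rough sketch of the SUVR scaling and EPP-ROI construction described above, assuming the images are already spatially normalized to a common space. All arrays below are synthetic stand-ins with hypothetical shapes; only the 1.7 cutoff is taken from the abstract.

        import numpy as np

        def suvr_image(pet, cerebellum_mask):
            # Scale a spatially normalized PET image by the mean of the
            # cerebellar cortex to obtain an SUVR image.
            return pet / pet[cerebellum_mask].mean()

        def empirical_pib_prone_roi(positive_template, negative_template, cutoff=1.7):
            # Positive/negative masks are voxels with SUVR >= cutoff in the averaged
            # typical-positive and typical-negative templates; the EPP-ROI is the
            # positive mask minus the negative mask.
            pos_mask = positive_template >= cutoff
            neg_mask = negative_template >= cutoff
            return pos_mask & ~neg_mask

        # Synthetic arrays standing in for normalized images (hypothetical MNI-like grid).
        rng = np.random.default_rng(1)
        shape = (91, 109, 91)
        pet = rng.gamma(2.0, 0.8, size=shape)
        cerebellum = np.zeros(shape, dtype=bool)
        cerebellum[40:60, 20:40, 10:25] = True
        pos_template = rng.gamma(3.0, 0.7, size=shape)   # placeholder "typical positive" template
        neg_template = rng.gamma(2.0, 0.6, size=shape)   # placeholder "typical negative" template

        suvr = suvr_image(pet, cerebellum)
        epp_roi = empirical_pib_prone_roi(pos_template, neg_template)
        print("voxels in EPP-ROI:", int(epp_roi.sum()))
        print("mean SUVR in EPP-ROI:", float(suvr[epp_roi].mean()))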

  1. Phase transformations in steels: Processing, microstructure, and performance

    DOE PAGES

    Gibbs, Paul J.

    2014-04-03

    In this study, contemporary steel research is revealing new processing avenues to tailor microstructure and properties that, until recently, were only imaginable. Much of the technological versatility facilitating this development is provided by the understanding and utilization of the complex phase transformation sequences available in ferrous alloys. Today we have the opportunity to explore the diverse phenomena displayed by steels with specialized analytical and experimental tools. Advances in multi-scale characterization techniques provide a fresh perspective into microstructural relationships at the macro- and micro-scale, enabling a fundamental understanding of the role of phase transformations during processing and subsequent deformation.

  2. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    PubMed

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.

  3. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation

    PubMed Central

    Sountsov, Pavel; Santucci, David M.; Lisman, John E.

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522
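
    The construction described in these two records is closely related to the classical Fourier-Mellin idea of combining a Fourier magnitude with a log-polar mapping. The sketch below is my illustration of that related construction, not the authors' biologically plausible implementation; the function name, output shape, and test images are assumptions.

        import numpy as np
        from skimage.data import camera
        from skimage.transform import warp_polar, rotate

        def invariant_signature(image, output_shape=(128, 128)):
            # Step 1: the magnitude of the 2D Fourier transform removes dependence
            # on translation; log1p only compresses the dynamic range.
            spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
            # Step 2: log-polar resampling turns rotation and scaling of the input
            # into (approximately) circular shifts of the resampled spectrum.
            logpolar = warp_polar(spectrum, scaling="log", output_shape=output_shape)
            # Step 3: a second magnitude spectrum removes dependence on those shifts,
            # giving a signature approximately invariant to translation, rotation, and scale.
            return np.abs(np.fft.fft2(logpolar))

        img = camera() / 255.0
        sig1 = invariant_signature(img)
        sig2 = invariant_signature(rotate(img, 30))
        # Correlation between the two signatures (intended to be high when the object matches).
        print(np.corrcoef(sig1.ravel(), sig2.ravel())[0, 1])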

  4. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^nu having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
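
    For orientation, a hedged sketch of the Box-Cox (LMS-type) transformation on which the BCPE model builds; the exact truncation and parameterization should be taken from the paper itself:

        \[
          z \;=\;
          \begin{cases}
            \dfrac{1}{\sigma\nu}\left[\left(\dfrac{y}{\mu}\right)^{\nu} - 1\right], & \nu \neq 0,\\[1.0em]
            \dfrac{1}{\sigma}\,\log\!\left(\dfrac{y}{\mu}\right), & \nu = 0,
          \end{cases}
        \]

    with z then modelled by a (truncated) standard power exponential distribution whose kurtosis is governed by tau.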

  5. An Automated Method of Scanning Probe Microscopy (SPM) Data Analysis and Reactive Site Tracking for Mineral-Water Interface Reactions Observed at the Nanometer Scale

    NASA Astrophysics Data System (ADS)

    Campbell, B. D.; Higgins, S. R.

    2008-12-01

    Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than performing this analysis by hand, with results being statistically similar for both analysis methods. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.
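
    Not the authors' implementation: a minimal sketch of the edge-detection-plus-Hough-transform ingredients using scikit-image, with all parameter values assumed.

        import numpy as np
        from skimage.feature import canny
        from skimage.transform import hough_line, hough_line_peaks

        def step_edges_and_lines(height_image, sigma=2.0, num_lines=10):
            # Detect step edges in an SPM height image, estimate the total edge
            # length (in pixels), and characterize the dominant edges as straight
            # lines (angle, offset) via the Hough transform.
            edges = canny(height_image, sigma=sigma)
            edge_length = int(edges.sum())
            tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
            hspace, angles, dists = hough_line(edges, theta=tested_angles)
            _, line_angles, line_dists = hough_line_peaks(hspace, angles, dists,
                                                          num_peaks=num_lines)
            return edge_length, list(zip(line_angles, line_dists))

        # Comparing the fitted line offsets (dists) at matching angles across
        # sequential frames gives the step displacement per frame, and hence the
        # step speed used for surface-area-normalized rates.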

  6. Tangent function transformation of the Abbreviated Injury Scale improves accuracy and simplifies scoring.

    PubMed

    Wang, Muding; Qiu, Wusi; Qiu, Fang; Mo, Yinan; Fan, Wenhui

    2015-03-16

    The Injury Severity Score (ISS) and the New Injury Severity Score (NISS) are widely used for anatomic severity assessments after trauma. We present here the Tangent Injury Severity Score (TISS), which transforms the Abbreviated Injury Scale (AIS) as a predictor of mortality. The TISS is defined as the sum of the tangent function of AIS/6 to the power 3.04 multiplied by 18.67 of a patient's three most severe AIS injuries regardless of body regions. TISS values were calculated for every patient in two large independent data sets: 3,908 and 4,171 patients treated during a 6-year period at level-3 first-class comprehensive hospitals: the Affiliated Hospital of Hangzhou Normal University and Fengtian Hospital Affiliated to Shenyang Medical College, China. The power of TISS to predict mortality was compared with previously calculated NISS values for the same patients in each data set. The TISS is more predictive of survival than NISS (Hangzhou: receiver operating characteristic (ROC): NISS = 0.929, TISS = 0.949; p = 0.002; Shenyang: ROC: NISS = 0.924, TISS = 0.942; p = 0.008). Moreover, TISS provides a better fit throughout its entire range of prediction (Hosmer Lemeshow statistic for Hangzhou NISS = 29.71; p < 0.001, TISS = 19.59; p = 0.003; Hosmer Lemeshow statistic for Shenyang NISS = 33.49; p < 0.001, TISS = 21.19; p = 0.002). The TISS shows more accurate prediction of prognosis and a linear relation to mortality. The TISS might be a better injury scoring tool with simple computation.

  7. A lung sound classification system based on the rational dilation wavelet transform.

    PubMed

    Ulukaya, Sezer; Serbes, Gorkem; Sen, Ipek; Kahya, Yasemin P

    2016-08-01

    In this work, a wavelet-based classification system that aims to discriminate crackle, normal, and wheeze lung sounds is presented. While previous works on this problem use constant low-Q-factor wavelets, which have limited frequency resolution and cannot cope with oscillatory signals, the proposed system employs the Rational Dilation Wavelet Transform, whose Q-factor can be tuned. The proposed system yields an accuracy of 95% for crackle, 97% for wheeze, 93.50% for normal, and 95.17% for all sound types combined using an energy feature subset, and the proposed approach is superior to conventional low-Q-factor wavelet analysis.

  8. Large-Scale Deformation and Uplift Associated with Serpentinization

    NASA Astrophysics Data System (ADS)

    Germanovich, L. N.; Lowell, R. P.; Smith, J. E.

    2014-12-01

    Geologic and geophysical data suggest that partially serpentinized peridotites and serpentinites are a significant part of the oceanic lithosphere. All serpentinization reactions are exothermic and result in volume expansion as high as 40%. Volume expansion beneath the seafloor will lead to surface uplift and elevated stresses in the neighborhood of the region undergoing serpentinization. The serpentinization-induced stresses are likely to result in faulting or tensile fracturing that promote the serpentinization process by creating new permeability and allowing fluid access to fresh peridotite. To explore these issues, we developed a first-order model of crustal deformation by considering an inclusion undergoing transformation strain in an elastic half-space. Using solutions for inclusions of different shapes, orientations, and depths, we calculate the surface uplift and mechanical stresses generated by the serpentinization processes. We discuss the topographic features at the TAG hydrothermal field (Mid-Atlantic Ridge, 26°N), uplift of the Miyazaki Plain (Southwestern Japan), and tectonic history of the Atlantic Massif (inside corner high of the Mid-Atlantic Ridge, 30°N, and the Atlantis Transform Fault). Our analysis suggests that an anomalous salient of 3 km in diameter and 100 m high at TAG may have resulted from approximately 20% transformational strain in a region beneath the footwall of the TAG detachment fault. This serpentinization process tends to promote slip along some overlying normal faults, which may then enhance fluid pathways to the deeper crust to continue the serpentinization process. The serpentinization also favors slip and seismicity along the antithetic faults identified below the TAG detachment fault. Our solution for the Miyazaki Plain above the Kyushu-Palau subduction zone explains the observed uplift of 120 m, but the transformational strain needs only be 3%. Transformational strains associated with serpentinization in this region may promote thrust-type events in the aseismic slip zone near the upper boundary of the subducting Philippine Sea Plate. Thermal effects of serpentinization in both regions are small.

  9. GBM secretome induces transient transformation of human neural precursor cells.

    PubMed

    Venugopal, Chitra; Wang, X Simon; Manoranjan, Branavan; McFarlane, Nicole; Nolte, Sara; Li, Meredith; Murty, Naresh; Siu, K W Michael; Singh, Sheila K

    2012-09-01

    Glioblastoma (GBM) is the most aggressive primary brain tumor in humans, with a uniformly poor prognosis. The tumor microenvironment is composed of both supportive cellular substrates and exogenous factors. We hypothesize that exogenous factors secreted by brain tumor initiating cells (BTICs) could predispose normal neural precursor cells (NPCs) to transformation. When NPCs are grown in GBM-conditioned media, and designated as "tumor-conditioned NPCs" (tcNPCs), they become highly proliferative and exhibit increased stem cell self-renewal, or the unique ability of stem cells to asymmetrically generate another stem cell and a daughter cell. tcNPCs also show an increased transcript level of stem cell markers such as CD133 and ALDH and growth factor receptors such as VEGFR1, VEGFR2, EGFR and PDGFRα. Media analysis by ELISA of GBM-conditioned media reveals an elevated secretion of growth factors such as EGF, VEGF and PDGF-AA when compared to normal neural stem cell-conditioned media. We also demonstrate that tcNPCs require prolonged or continuous exposure to the GBM secretome in vitro to retain GBM BTIC characteristics. Our in vivo studies reveal that tcNPCs are unable to form tumors, confirming that irreversible transformation events may require sustained or prolonged presence of the GBM secretome. Analysis of GBM-conditioned media by mass spectrometry reveals the presence of secreted proteins Chitinase-3-like 1 (CHI3L1) and H2A histone family member H2AX. Collectively, our data suggest that GBM-secreted factors are capable of transiently altering normal NPCs, although for retention of the transformed phenotype, sustained or prolonged secretome exposure or additional transformation events are likely necessary.

  10. Qualitative and semiquantitative Fourier transformation using a noncoherent system.

    PubMed

    Rogers, G L

    1979-09-15

    A number of authors have pointed out that a system of zone plates combined with a diffuse source, transparent input, lens, and focusing screen will display on the output screen the Fourier transform of the input. Strictly speaking, the transform normally displayed is the cosine transform, and the bipolar output is superimposed on a dc gray level to give a positive-only intensity variation. By phase-shifting one zone plate the sine transform is obtained. Temporal modulation is possible. It is also possible to redesign the system to accept a diffusely reflecting input at the cost of introducing a phase gradient in the output. Results are given of the sine and cosine transforms of a small circular aperture. As expected, the sine transform is a uniform gray. Both transforms show unwanted artifacts beyond 0.1 rad off-axis. An analysis shows this is due to unwanted circularly symmetrical moire patterns between the zone plates.

  11. Spatial and Temporal Scales of Surface Water-Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Boano, F.

    2016-12-01

    The interfaces between surface water and groundwater (i.e., river and lake sediments) represent hotspots for nutrient transformation in watersheds. This intense biochemical activity stems from the peculiar physicochemical properties of these interface areas. Here, the exchange of water and nutrients between surface and subsurface environments creates an ecotone region that can support the presence of different microbial species responsible for nutrient transformation. Previous studies have elucidated that water exchange between rivers and aquifers is organized in a complex system of nested flow cells. Each cell entails a range of residence timescales spanning multiple orders of magnitude, providing opportunities for different biochemical reactions to occur. Physically-based models represent useful tools to deal with the wide range of spatial and temporal scales that characterize surface-subsurface water exchange. This contribution will present insights into how hydrodynamic processes control scale organization for surface water - groundwater interactions. The specific focus will be the influence of exchange processes on microbial activity and nutrient transformation, discussing how groundwater flow at the watershed scale controls flow conditions and hence constrains microbial reactions at much smaller scales.

  12. Platelet-mediated transformation of mtDNA-less human cells: Analysis of phenotypic variability among clones from normal individuals and complementation behavior of the tRNA(Lys) mutation causing myoclonic epilepsy and ragged red fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chomyn, A.; Lai, S.T.; Shakeley, R.

    1994-06-01

    In the present work, the authors demonstrate the possibility of using human blood platelets as mitochondrial donors for the repopulation of mtDNA-less (rho0) cells. The noninvasive nature of platelet isolation, combined with the prolonged viability of platelet mitochondria and the simplicity and efficiency of the mitochondria-transfer procedure, has substantially increased the applicability of the rho0 cell transformation approach for mitochondrial genetic analysis and for the study of mtDNA-linked diseases. This approach has been applied to platelets from several normal human individuals and one individual affected by the myoclonic-epilepsy-and-ragged-red-fibers (MERRF) encephalomyopathy. A certain variability in respiratory capacity was observed among the platelet-derived rho0 cell transformants from a given normal subject, and it was shown to be unrelated to their mtDNA content. The results of sequential transfer of mitochondria from selected transformants into a rho0 cell line different from the first rho0 acceptor strongly suggest that this variability reflected, at least in part, differences in nuclear gene content and/or activity among the original recipient cells. A much greater variability in respiratory capacity was observed among the transformants derived from the MERRF patient and was found to be related to the presence and amount of the mitochondrial tRNA(Lys) mutation associated with the MERRF syndrome. An analysis of the relationship between the proportion of mtDNA carrying the MERRF mutation and the degree of respiratory activity in various transformants derived from the MERRF patient revealed an unusual complementation behavior of the tRNA(Lys) mutation, possibly reflecting the distribution of mutant mtDNA among the platelet mitochondria. 29 refs., 4 figs., 1 tab.

  13. Brushite coatings on titanium for orthopedic implants: Studies on deposition and transformation

    NASA Astrophysics Data System (ADS)

    Kumar, Mukesh

    Hydroxyapatite (HA, Ca5(PO4)3OH) coating on the metallic substrate is expected to assist bone growth and implant integration. However, HA is quite stable in physiological solution, and the use of other, more reactive calcium phosphate ceramics (CPC) could induce faster bone growth by providing calcium and phosphate ions to the interacting physiological solution. This study utilized a non-line-of-sight electrodeposition process to achieve brushite (CaHPO4·2H2O) coatings. The use of potassium or sodium chloride as a conducting electrolyte in the depositing bath enhanced deposition rates and altered the morphology of the coatings. Analysis suggested a strained deposit with site-specific substitution of cations from the conducting electrolyte. Such a deposit (modified brushite) was determined to contain CaHPO4·2H2O and CaY2(1-x)HPO4·2H2O (x ≈ 0.95), with Y as Na or K, whereas normal brushite was obtained from unsupported baths. The deposited mass of brushite increased with charge consumed, and bonding to the substrate decreased with increasing deposition time. Though inconclusive, in situ studies on electrodeposition did not rule out the possibility of ionic species being responsible for the deposit. Transformations of both forms of brushite were investigated in calcium-free Hank's-type simulated body fluid. Modified brushite showed periodic appearance of freshly precipitated, but poorly crystalline, HA without monetite (CaHPO4) as an intermediate. However, normal brushite transformation showed nonstoichiometric HA with monetite as an intermediate. Normal brushite demonstrated a slower transformation to HA when compared to the transformation kinetics of modified brushite. It is shown that lattice strain due to localized ion incorporation could be used to alter the properties of brushite coatings to adjust the kinetics of transformation and, indirectly, the amount of calcium and phosphate ions released into the surroundings.

  14. The modified polyconic projection for the IMW.

    USGS Publications Warehouse

    Snyder, J.P.

    1982-01-01

    The modified polyconic map projection designed by Lallemand and adopted for the International Map of the World between 1909 and 1962 has two meridians and two parallels which are true to scale. Although the projection was constructed geometrically in the past, forward and inverse coordinate transformations may be calculated analytically in order to transfer data from existing quadrangles to other maps. The equations for these transformations are derived and used to calculate representative tables of coordinates and scale factors. Although the projection is neither equal-area nor conformal, scale does not vary by more than 0.06% throughout the quadrangle.-Author

  15. Brain abnormality segmentation based on l1-norm minimization

    NASA Astrophysics Data System (ADS)

    Zeng, Ke; Erus, Guray; Tanwar, Manoj; Davatzikos, Christos

    2014-03-01

    We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ~y that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1- norm minimization to separate ~y and r. We extend the existing framework, which was mainly used on robust face recognition in a discriminative setting, to address challenges of brain image analysis, particularly the high dimensionality and low sample size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments the method is applied for segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
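
    A generic sketch of the l1-based separation idea (ISTA applied to the stacked dictionary [D, I]); the regularization weight, iteration count, and test data are assumptions, and the paper's actual dictionary is built from registered image patches rather than random columns.

        import numpy as np

        def soft(v, t):
            # Soft-thresholding operator (proximal map of the l1 norm).
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def separate_normal_abnormal(y, D, lam=0.1, n_iter=500):
            # Split y into a "normal" part D @ x (sparse code over an example-based
            # dictionary D) and a sparse "abnormal" residual r by solving
            #   min_{x,r} 0.5*||y - D x - r||^2 + lam*(||x||_1 + ||r||_1)
            # with ISTA. A generic l1 sketch, not the paper's exact formulation.
            A = np.hstack([D, np.eye(len(y))])
            step = 1.0 / np.linalg.norm(A, 2) ** 2
            z = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = soft(z - step * A.T @ (A @ z - y), step * lam)
            x, r = z[:D.shape[1]], z[D.shape[1]:]
            return D @ x, r

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)
        y = D @ np.where(rng.random(256) < 0.02, 1.0, 0.0) + np.eye(64)[7] * 2.0
        normal_part, abnormal_part = separate_normal_abnormal(y, D)
        # Index of the largest entry of the abnormal part; the injected spike is at index 7.
        print(np.argmax(np.abs(abnormal_part)))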

  16. Scaling and scale invariance of conservation laws in Reynolds transport theorem framework

    NASA Astrophysics Data System (ADS)

    Haltas, Ismail; Ulusoy, Suleyman

    2015-07-01

    Scale invariance is the case where the solution of a physical process at a specified time-space scale can be linearly related to the solution of the process at another time-space scale. Recent studies investigated the scale invariance conditions of hydrodynamic processes by applying the one-parameter Lie scaling transformations to the governing equations of the processes. Scale invariance of a physical process is usually achieved under certain conditions on the scaling ratios of the variables and parameters involved in the process. The foundational axioms of hydrodynamics are the conservation laws, namely, conservation of mass, conservation of linear momentum, and conservation of energy from continuum mechanics. They are formulated using the Reynolds transport theorem. Conventionally, the Reynolds transport theorem formulates the conservation equations in integral form. Yet, a differential form of the conservation equations can also be derived for an infinitesimal control volume. In the formulation of the governing equation of a process, one or more of the conservation laws and, sometimes, a constitutive relation are combined. Differential forms of the conservation equations are used in the governing partial differential equations of the processes. Therefore, differential conservation equations constitute the fundamentals of the governing equations of the hydrodynamic processes. Applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework, instead of to the governing partial differential equations, may lead to more fundamental conclusions on the scaling and scale invariance of the hydrodynamic processes. This study will investigate the scaling behavior and scale invariance conditions of the hydrodynamic processes by applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework.
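
    As a generic illustration of the one-parameter Lie scaling transformation mentioned above (the exponents below are free parameters of the illustration, not values derived in the study), every variable is rescaled by a power of a single parameter lambda:

        \[
          \tilde{t} = \lambda^{\alpha_t}\, t, \qquad
          \tilde{x}_i = \lambda^{\alpha_x}\, x_i, \qquad
          \tilde{u}_i = \lambda^{\alpha_u}\, u_i, \qquad
          \tilde{\rho} = \lambda^{\alpha_\rho}\, \rho, \quad \ldots
        \]

    A conservation law is scale invariant if substituting the scaled variables reproduces the original equation up to a common multiplicative factor, which imposes algebraic conditions on the scaling exponents alpha.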

  17. Rational function representation of flap noise spectra including correction for reflection effects. [acoustic properties of engine exhaust jets deflected for externally blown flaps

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1974-01-01

    A rational function is presented for the acoustic spectra generated by deflection of engine exhaust jets for under-the-wing and over-the-wing versions of externally blown flaps. The functional representation is intended to provide a means for compact storage of data and for data analysis. The expressions are based on Fourier transform functions for the Strouhal normalized pressure spectral density, and on a correction for reflection effects based on the N-independent-source model of P. Thomas extended by use of a reflected ray transfer function. Curve fit comparisons are presented for blown flap data taken from turbofan engine tests and from large scale cold-flow model tests. Application of the rational function to scrubbing noise theory is also indicated.

  18. DFT simulation, quantum chemical electronic structure, spectroscopic and structure-activity investigations of 4-acetylpyridine

    NASA Astrophysics Data System (ADS)

    Atilgan, A.; Yurdakul, Ş.; Erdogdu, Y.; Güllüoğlu, M. T.

    2018-06-01

    The spectroscopic (UV-Vis and infrared), structural, and selected electronic properties of 4-acetylpyridine (4-AP) are reported, as investigated by spectral methods and DFT calculations. FT-IR spectra of 4-AP were obtained at room temperature in the region 4000-400 cm-1. In the DFT calculations, the B3LYP functional with the 6-311++G(d,p) basis set was applied to carry out the quantum mechanical calculations. The Fourier Transform Infrared (FT-IR) and FT-Raman spectra were interpreted by means of a normal coordinate analysis based on a scaled quantum mechanical force field. The present work expands our understanding of both the vibrational and structural properties as well as some electronic properties of 4-AP by means of theoretical and experimental methods.

  19. HoDOr: histogram of differential orientations for rigid landmark tracking in medical images

    NASA Astrophysics Data System (ADS)

    Tiwari, Abhishek; Patwardhan, Kedar Anil

    2018-03-01

    Feature extraction plays a pivotal role in pattern recognition and matching. An ideal feature should be invariant to image transformations such as translation, rotation, scaling, etc. In this work, we present a novel rotation-invariant feature, which is based on Histogram of Oriented Gradients (HOG). We compare performance of the proposed approach with the HOG feature on 2D phantom data, as well as 3D medical imaging data. We have used traditional histogram comparison measures such as Bhattacharyya distance and Normalized Correlation Coefficient (NCC) to assess efficacy of the proposed approach under effects of image rotation. In our experiments, the proposed feature performs 40%, 20%, and 28% better than the HOG feature on phantom (2D), Computed Tomography (CT-3D), and Ultrasound (US-3D) data for image matching, and landmark tracking tasks respectively.

  20. Between soap bubbles and vesicles: The dynamics of freely floating smectic bubbles

    NASA Astrophysics Data System (ADS)

    Stannarius, Ralf; May, Kathrin; Harth, Kirsten; Trittel, Torsten

    2013-03-01

    The dynamics of droplets and bubbles, particularly on microscopic scales, are of considerable importance in biological, environmental, and technical contexts. We introduce freely floating bubbles of smectic liquid crystals and report their unique dynamic properties. Smectic bubbles can be used as simple models for dynamic studies of fluid membranes. In equilibrium, they form minimal surfaces like soap films. However, shape transformations of closed smectic membranes that change the surface area involve the formation and motion of molecular layer dislocations. These processes are slow compared to the capillary wave dynamics, therefore the effective surface tension is zero like in vesicles. Freely floating smectic bubbles are prepared from collapsing catenoid films and their dynamics is studied with optical high-speed imaging. Experiments are performed under normal gravity and in microgravity during parabolic flights. Supported by DLR within grant OASIS-Co.

  1. Equilibrium dynamical correlations in the Toda chain and other integrable models

    NASA Astrophysics Data System (ADS)

    Kundu, Aritra; Dhar, Abhishek

    2016-12-01

    We investigate the form of equilibrium spatiotemporal correlation functions of conserved quantities in the Toda lattice and in other integrable models. From numerical simulations we find that the correlations satisfy ballistic scaling with a remarkable collapse of data from different times. We examine special limiting choices of parameter values, for which the Toda lattice tends to either the harmonic chain or the equal mass hard-particle gas. In both these limiting cases, one can obtain the correlations exactly and we find excellent agreement with the direct Toda simulation results. We also discuss a transformation to "normal mode" variables, as commonly done in hydrodynamic theory of nonintegrable systems, and find that this is useful, to some extent, even for the integrable system. The striking differences between the Toda chain and a truncated version, expected to be nonintegrable, are pointed out.

  2. Equilibrium dynamical correlations in the Toda chain and other integrable models.

    PubMed

    Kundu, Aritra; Dhar, Abhishek

    2016-12-01

    We investigate the form of equilibrium spatiotemporal correlation functions of conserved quantities in the Toda lattice and in other integrable models. From numerical simulations we find that the correlations satisfy ballistic scaling with a remarkable collapse of data from different times. We examine special limiting choices of parameter values, for which the Toda lattice tends to either the harmonic chain or the equal mass hard-particle gas. In both these limiting cases, one can obtain the correlations exactly and we find excellent agreement with the direct Toda simulation results. We also discuss a transformation to "normal mode" variables, as commonly done in hydrodynamic theory of nonintegrable systems, and find that this is useful, to some extent, even for the integrable system. The striking differences between the Toda chain and a truncated version, expected to be nonintegrable, are pointed out.

  3. Analysis of current density and specific absorption rate in biological tissue surrounding an air-core type of transcutaneous transformer for an artificial heart.

    PubMed

    Shiba, Kenji; Nukaya, Masayuki; Tsuji, Toshio; Koshiji, Kohji

    2006-01-01

    This paper reports on the specific absorption rate (SAR) and the current density analysis of biological tissue surrounding an air-core type of transcutaneous transformer for an artificial heart. The electromagnetic field in the biological tissue surrounding the transformer was analyzed by the transmission-line modeling method, and the SAR and current density as a function of frequency (200 kHz-1 MHz) for a transcutaneous transmission of 20 W were calculated. The model's biological tissue has three layers including the skin, fat and muscle. As a result, the SAR in the vicinity of the transformer is sufficiently small and the normalized SAR value, which is divided by the ICNIRP's basic restriction, is 7 x 10^-3 or less. In contrast, the current density is slightly in excess of the ICNIRP's basic restrictions as the frequency falls and the output voltage rises. Normalized current density is from 0.2 to 1.2. In addition, the layer in which the current density is maximized depends on the frequency, the muscle at low frequencies (<700 kHz) and the skin at high frequencies (>700 kHz). The result shows that precision analysis taking into account the biological properties is very important for developing the transcutaneous transformer for TAH.

  4. Loss of Robustness and Addiction to IGF1 during Early Keratinocyte Transformation by Human Papilloma Virus 16

    PubMed Central

    Geiger, Tamar; Levitzki, Alexander

    2007-01-01

    Infection of keratinocytes with high risk human Papilloma virus causes immortalization, and when followed by further mutations, leads to cervical cancer and other anogenital tumors. Here we monitor the progressive loss of robustness in an in vitro model of the early stages of transformation that comprises normal keratinocytes and progressive passages of HPV16 immortalized cells. As transformation progresses, the cells acquire higher proliferation rates and gain the ability to grow in soft agar. Concurrently, the cells lose robustness, becoming more sensitive to serum starvation and DNA damage by Cisplatin. Loss of robustness in the course of transformation correlates with significant reductions in the activities of the anti-apoptotic proteins PKB/Akt, Erk, Jnk and p38 both under normal growth conditions and upon stress. In parallel, loss of robustness is manifested by the shrinkage of the number of growth factors that can rescue starving cells from apoptosis, with the emergence of dependence solely on IGF1. Treatment with IGF1 activates PKB/Akt and Jnk and through them inhibits p53, rescuing the cells from starvation. We conclude that transformation in this model induces higher susceptibility of cells to stress due to reduced anti-apoptotic signaling and hyper-activation of p53 upon stress. PMID:17622350

  5. The Challenge of Post-Normality to Drama Education and Applied Theatre

    ERIC Educational Resources Information Center

    Andersona, Michael

    2014-01-01

    This article examines current discourses surrounding the future of education and society more generally. It focuses on Sardar's discussion of "post-normality" to frame discussions around the transformations in society and speculates on how the qualities inherent in drama education and applied theatre might form responses to…

  6. Stability of strongly nonlinear normal modes

    NASA Astrophysics Data System (ADS)

    Recktenwald, Geoffrey; Rand, Richard

    2007-10-01

    It is shown that a transformation of time can allow the periodic solution of a strongly nonlinear oscillator to be written as a simple cosine function. This enables the stability of strongly nonlinear normal modes in multidegree of freedom systems to be investigated by standard procedures such as harmonic balance.

  7. Discrimination of a chestnut-oak forest unit for geologic mapping by means of a principal component enhancement of Landsat multispectral scanner data.

    USGS Publications Warehouse

    Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.

    1981-01-01

    A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of E Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors
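
    A sketch of the enhancement sequence described above (forward principal-component transform, per-component normalization, inverse transform). The simple standardization used here stands in for the study's histogram normalization, whose exact form the abstract does not give; the synthetic data and shapes are assumptions.

        import numpy as np

        def pc_enhancement(bands):
            # bands: array of shape (n_bands, n_pixels).
            mean = bands.mean(axis=1, keepdims=True)
            centered = bands - mean
            cov = np.cov(centered)
            _, eigvecs = np.linalg.eigh(cov)
            pcs = eigvecs.T @ centered                              # forward PC transform
            # Per-component normalization (standardization as a stand-in for
            # histogram normalization); preserves component independence.
            pcs = (pcs - pcs.mean(axis=1, keepdims=True)) / (pcs.std(axis=1, keepdims=True) + 1e-12)
            return eigvecs @ pcs + mean                             # inverse PC transform

        # Synthetic 4-band data standing in for multispectral scanner bands.
        rng = np.random.default_rng(0)
        base = rng.normal(size=(1, 10000))
        bands = np.vstack([base * w for w in (1.0, 0.9, 0.8, 0.7)]) + 0.05 * rng.normal(size=(4, 10000))
        print(pc_enhancement(bands).shape)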

  8. Magnetic bead detection using nano-transformers.

    PubMed

    Kim, Hyung Kwon; Hwang, Jong Seung; Hwang, Sung Woo; Ahn, Doyeol

    2010-11-19

    A novel scheme to detect magnetic beads using a nano-scale transformer with a femtoweber resolution is reported. We have performed a Faraday's induction experiment with the nano-transformer at room temperature. The transformer shows the linear output voltage responses to the sinusoidal input current. When magnetic beads are placed on the transformer, the output responses are increased by an amount corresponding to the added magnetic flux from the beads when compared with the case of no beads on the transformer. In this way, we could determine whether magnetic beads are on top of the transformer in a single particle level.

  9. Body Mass Normalization for Ultrasound Measurements of Adolescent Lateral Abdominal Muscle Thickness.

    PubMed

    Linek, Pawel; Saulicz, Edward; Wolny, Tomasz; Myśliwiec, Andrzej

    2017-04-01

    The purpose of this study was to determine the value of the allometric parameter for ultrasound measurements of the thickness of the oblique external (OE), internal (OI), and transversus abdominis (TrA) muscles in the adolescent population. The allometric parameter is the slope of the linear regression line between the log transformed body mass and log transformed muscle size measurement. The study included 321 adolescents between the ages of 10 and 17, consisting of 160 boys and 161 girls. The participants were recruited from local schools and attended regular school classes at normal grade levels. All individuals with no signs of scoliosis (screening with use of a scoliometer), and no surgical procedures performed on the trunk area were included. A real-time ultrasound B-scanner with a linear array transducer was used to obtain images of the lateral abdominal muscles from both sides of the body. The correlation between body mass and the OE muscle was r = 0.69; the OI muscle r = 0.68; and the TrA muscle r = 0.53 (in all cases, P < .0001). The allometric parameter for the OE was 0.88296; the OI 0.718756; and the TrA 0.60986. Using these parameters, no significant correlations were found between body mass and the allometric-scaled thickness of the lateral abdominal muscles. Significant positive correlations exist between body mass and lateral abdominal muscle thickness assessed by ultrasound imaging. Therefore, it is reasonable to advise that the values of the allometric parameters for OE, OI, and TrA obtained in this study should be used in other studies performed on adolescents. © 2016 by the American Institute of Ultrasound in Medicine.
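
    A minimal sketch of the allometric normalization described above; the variable names and synthetic data are hypothetical, and only the reported slope values are taken from the abstract.

        import numpy as np

        def allometric_parameter(body_mass, muscle_thickness):
            # Slope of the regression of log(thickness) on log(body mass).
            slope, intercept = np.polyfit(np.log(body_mass), np.log(muscle_thickness), 1)
            return slope

        def allometric_scaled(thickness, body_mass, b):
            # Body-mass-normalized thickness: thickness / mass^b, intended to be
            # approximately uncorrelated with body mass when b is chosen as above.
            return thickness / body_mass ** b

        # Synthetic example (hypothetical masses in kg, thicknesses in mm).
        rng = np.random.default_rng(0)
        mass = rng.uniform(30, 80, size=200)
        thickness = 0.5 * mass ** 0.7 * np.exp(0.1 * rng.standard_normal(200))
        b = allometric_parameter(mass, thickness)
        print(round(b, 3), round(np.corrcoef(mass, allometric_scaled(thickness, mass, b))[0, 1], 3))

        # Values of b reported in the abstract: OE 0.88296, OI 0.718756, TrA 0.60986.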

  10. Tnt1 Retrotransposon Mutagenesis: A Tool for Soybean Functional Genomics1[W][OA

    PubMed Central

    Cui, Yaya; Barampuram, Shyam; Stacey, Minviluz G.; Hancock, C. Nathan; Findley, Seth; Mathieu, Melanie; Zhang, Zhanyuan; Parrott, Wayne A.; Stacey, Gary

    2013-01-01

    Insertional mutagenesis is a powerful tool for determining gene function in both model and crop plant species. Tnt1, the transposable element of tobacco (Nicotiana tabacum) cell type 1, is a retrotransposon that replicates via an RNA copy that is reverse transcribed and integrated elsewhere in the plant genome. Based on studies in a variety of plants, Tnt1 appears to be inactive in normal plant tissue but can be reactivated by tissue culture. Our goal was to evaluate the utility of the Tnt1 retrotransposon as a mutagenesis strategy in soybean (Glycine max). Experiments showed that the Tnt1 element was stably transformed into soybean plants by Agrobacterium tumefaciens-mediated transformation. Twenty-seven independent transgenic lines carrying Tnt1 insertions were generated. Southern-blot analysis revealed that the copy number of transposed Tnt1 elements ranged from four to 19 insertions, with an average of approximately eight copies per line. These insertions showed Mendelian segregation and did not transpose under normal growth conditions. Analysis of 99 Tnt1 flanking sequences revealed insertions into 62 (62%) annotated genes, indicating that the element preferentially inserts into protein-coding regions. Tnt1 insertions were found in all 20 soybean chromosomes, indicating that Tnt1 transposed throughout the soybean genome. Furthermore, fluorescence in situ hybridization experiments validated that Tnt1 inserted into multiple chromosomes. Passage of transgenic lines through two different tissue culture treatments resulted in Tnt1 transposition, significantly increasing the number of insertions per line. Thus, our data demonstrate the Tnt1 retrotransposon to be a powerful system that can be used for effective large-scale insertional mutagenesis in soybean. PMID:23124322

  11. Coordinate transformation by minimizing correlations between parameters

    NASA Technical Reports Server (NTRS)

    Kumar, M.

    1972-01-01

    The purpose of this investigation was to determine the transformation parameters (three rotations, three translations, and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. The above objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters in a solution in which the rotations and the scale factor are entered as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
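
    For comparison only, a standard closed-form (SVD-based) estimate of a seven-parameter similarity transformation from matched points; this is not the correlation-minimizing two-step adjustment described in the abstract, and the test values below are assumptions.

        import numpy as np

        def similarity_transform(src, dst):
            # Estimate scale s, rotation R, and translation t with dst ~ s * R @ src + t,
            # given matched point sets src, dst of shape (n_points, 3).
            src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
            src_c, dst_c = src - src_mean, dst - dst_mean
            U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
            d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
            R = U @ np.diag([1.0, 1.0, d]) @ Vt
            s = (S * [1.0, 1.0, d]).sum() / (src_c ** 2).sum()
            t = dst_mean - s * R @ src_mean
            return s, R, t

        rng = np.random.default_rng(0)
        src = rng.standard_normal((10, 3))
        angle = 0.3
        Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
        dst = 1.000005 * src @ Rz.T + np.array([100.0, -50.0, 20.0])
        print(similarity_transform(src, dst))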

  12. Oceanographer transform fault structure compared to that of surrounding oceanic crust: Results from seismic refraction data analysis

    NASA Astrophysics Data System (ADS)

    Ambos, E. L.; Hussong, D. M.

    1986-02-01

    A high quality seismic refraction data set was collected near the intersection of the transform portion of the Oceanographer Fracture Zone (OFZ) with the adjacent northern limb of the Mid-Atlantic Ridge spreading center (MAR). One seismic line was shot down the axis of the transform valley. Another was shot parallel to the spreading center, crossing from normal oceanic crust into the transform valley, and out again. This latter line was recorded by four Ocean Bottom Seismometers (OBSs) spaced along its length, providing complete reversed coverage over the crucial transform valley zone. Findings indicate that whereas the crust of the transform valley is only slightly thinner (4.5 km) compared to normal oceanic crust (5-8 km), the structure is different. Velocities in the range of 6.9 to 7.7 km/sec, which are characteristic of seismic layer 3B, are absent, although a substantial thickness (approximately 3 km) of 6.1-6.8 km/sec material does appear to be present. The upper crust, some 2 km in thickness, is characterized by a high velocity gradient (1.5 sec^-1) in which velocity increases from 2.7 km/sec at the seafloor to 5.8 km/sec at the base of the section. A centrally-located deep of the transform valley has thinner crust (1-2 km), whereas the crust gradually thickens past the transform valley-spreading center intersection. Analysis of the seismic line crossing sub-perpendicular to the transform valley demonstrates abrupt thinning of the upper crustal section, and thickening of the lower crust outside of the transform valley. In addition, high-velocity material seems to occur under the valley flanks, particularly the southern flanking ridge. This ridge, which is on the side of the transform opposite to the intersection of spreading ridge and transform, may be an expression of uplifted, partially serpentinized upper mantle rocks.

  13. 78 FR 34016 - Wireline Competition Bureau Seeks Comment on Options To Promote Rural Broadband in Rate-Of-Return...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... document is available for inspection and copying during normal business hours in the FCC Reference... carriers supports such networks, and indeed, under the USF/ICC Transformation Order, 76 FR 73830, November... originally sought comment on this proposal in the USF/ICC Transformation Order FNPRM, 76 FR 73830, November...

  14. Improving Your Data Transformations: Applying the Box-Cox Transformation

    ERIC Educational Resources Information Center

    Osborne, Jason W.

    2010-01-01

    Many of us in the social sciences deal with data that do not conform to assumptions of normality and/or homoscedasticity/homogeneity of variance. Some research has shown that parametric tests (e.g., multiple regression, ANOVA) can be robust to modest violations of these assumptions. Yet the reality is that almost all analyses (even nonparametric…
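
    Since the abstract is truncated, here is a brief illustration (mine, not the article's) of applying the Box-Cox transformation with SciPy; the data are synthetic and lambda is estimated by maximum likelihood.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        skewed = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # positive, right-skewed data

        # scipy.stats.boxcox estimates lambda by maximum likelihood and returns the
        # transformed values; the transform requires strictly positive data.
        transformed, lam = stats.boxcox(skewed)
        print(f"lambda = {lam:.3f}, skewness before = {stats.skew(skewed):.2f}, "
              f"after = {stats.skew(transformed):.2f}")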

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, William; Freidel, Laurent

    We consider the problem of defining localized subsystems in gauge theory and gravity. Such systems are associated to spacelike hypersurfaces with boundaries and provide the natural setting for studying entanglement entropy of regions of space. We present a general formalism to associate a gauge-invariant classical phase space to a spatial slice with boundary by introducing new degrees of freedom on the boundary. In Yang-Mills theory the new degrees of freedom are a choice of gauge on the boundary, transformations of which are generated by the normal component of the nonabelian electric field. In general relativity the new degrees of freedom are the location of a codimension-2 surface and a choice of conformal normal frame. These degrees of freedom transform under a group of surface symmetries, consisting of diffeomorphisms of the codimension-2 boundary, and position-dependent linear deformations of its normal plane. We find the observables which generate these symmetries, consisting of the conformal normal metric and curvature of the normal connection. We discuss the implications for the problem of defining entanglement entropy in quantum gravity. Finally, our work suggests that the Bekenstein-Hawking entropy may arise from the different ways of gluing together two partial Cauchy surfaces at a cross-section of the horizon.

  16. Assessment of Microcirculatory Hemoglobin Levels in Normal and Diabetic Subjects using Diffuse Reflectance Spectroscopy in the Visible Region — a Pilot Study

    NASA Astrophysics Data System (ADS)

    Sujatha, N.; Anand, B. S. Suresh; Nivetha, K. Bala; Narayanamurthy, V. B.; Seshadri, V.; Poddar, R.

    2015-07-01

    Light-based diagnostic techniques provide a minimally invasive way for selective biomarker estimation when tissues transform from a normal to a malignant state. Spectroscopic techniques based on diffuse reflectance characterize the changes in tissue hemoglobin/oxygenation levels during the tissue transformation process. Recent clinical investigations have shown that changes in tissue oxygenation and microcirculation are observed in diabetic subjects in the initial and progressive stages. In this pilot study, we discuss the potential of diffuse reflectance spectroscopy (DRS) in the visible (Vis) range to differentiate the skin microcirculatory hemoglobin levels between normal and advanced diabetic subjects with and without neuropathy. Average concentration of hemoglobin as well as hemoglobin oxygen saturation within the probed tissue volume is estimated for a total of four different sites in the foot sole. The results indicate a statistically significant decrease in average total hemoglobin and increase in hemoglobin oxygen saturation levels for diabetic foot compared with a normal foot. The present study demonstrates the ability of reflectance spectroscopy in the Vis range to determine and differentiate the changes in tissue hemoglobin and hemoglobin oxygen saturation levels in normal and diabetic subjects.

  17. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of applications such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to cases in which the transformation parameters are large, and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems on the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using the least squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.

  18. Maximum spectral demands in the near-fault region

    USGS Publications Warehouse

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  20. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so that the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, handled with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
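
    As a rough sketch of the distributional idea (not the authors' published formulas; the cutpoint and summary statistics below are invented for illustration), a proportion beyond a clinical cutpoint can be read off the normal distribution fitted from each group's mean and SD, so the comparison of proportions inherits the information in the comparison of means. The precision measure derived in the paper is omitted here.

```python
import numpy as np
from scipy.stats import norm

def distributional_proportion(mean, sd, cutpoint, tail="below"):
    """Proportion of a normally distributed outcome beyond a cutpoint,
    estimated from the sample mean and SD rather than by dichotomising
    individual observations."""
    z = (cutpoint - mean) / sd
    return norm.cdf(z) if tail == "below" else norm.sf(z)

# Illustrative numbers (not from the paper): birthweight-like outcome, 2500 g cutpoint.
p1 = distributional_proportion(mean=3200.0, sd=500.0, cutpoint=2500.0)
p2 = distributional_proportion(mean=2950.0, sd=520.0, cutpoint=2500.0)
print(f"estimated proportions below cutpoint: {p1:.3f} vs {p2:.3f}")
print(f"difference in proportions: {p2 - p1:.3f}")
```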

  1. Effects of water vapor on protectiveness of Cr2O3 scale at 1073 K

    NASA Astrophysics Data System (ADS)

    Arifin, S. K.; Hamid, M.; Berahim, A. N.; Ani, M. H.

    2018-01-01

    Fe-Cr alloys are commonly used as boiler tube material and are subjected to prolonged water vapor oxidation. Their ability to withstand high temperature corrosion can normally be attributed to the formation of a dense and slow growing Cr-rich oxide scale known as chromia (Cr2O3). However, oxidation may limit the alloy's service lifetime as the protectiveness of this scale degrades. This paper presents an experimental thermogravimetric and Fourier transform infrared study of Cr2O3 at 1073 K in dry and humid environments. Samples were prepared from commercially available Cr2O3 powder, cold-pressed into pellets of 12 mm diameter and 3 mm thickness with a hydraulic press for 40 min at 48 MPa, and then sintered at 1173 K in an inert gas environment for 8 h. The samples were cooled, placed in a 5 mm diameter platinum pan, and reacted in dry and wet environments at 1073 K under 100%-Ar and Ar-5%H2 gas. Each reaction period was 48 h, with a Thermo Gravimetric Analyzer (TGA) used to quantify the mass changes. After the reaction, the samples were characterized with Fourier Transform Infrared Spectroscopy (FT-IR) and Field Emission Scanning Electron Microscopy (FE-SEM). The TGA results show that the ratio of the mass loss of Cr2O3 in the wet environment (PH2O = 9.5 × 10^5 Pa) to that in the dry environment is a factor of 1.2, while the ratio of parabolic rates is 1.4. FT-IR results confirmed that water vapor significantly broadens the peaks, thus promoting the volatilization of Cr2O3 in the wet sample. FE-SEM shows a mostly packed and intact particle arrangement in the dry sample, whereas the wet sample shows a slightly more porous arrangement. It is concluded that water vapor decreases the protective capability of the Cr2O3 scale.
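
    For readers unfamiliar with how a parabolic rate is extracted from TGA records, the sketch below shows the usual fit of the squared specific mass change against time; the data and rate constant are synthetic and are not taken from this study.

```python
import numpy as np

# Hypothetical TGA record: time (s) and specific mass change (mg/cm^2).
t = np.linspace(0.0, 48 * 3600, 50)                # 48 h reaction period
k_true = 2.0e-6                                    # mg^2 cm^-4 s^-1 (made up)
dm = np.sqrt(k_true * t) + np.random.default_rng(2).normal(0.0, 2e-3, t.size)

# Parabolic rate law: (dm)^2 = k_p * t, so k_p is the slope of (dm)^2 vs t.
k_p = np.polyfit(t, dm**2, 1)[0]
print(f"fitted parabolic rate constant k_p = {k_p:.2e} mg^2 cm^-4 s^-1")
```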

  2. Reduced growth factor requirement of keloid-derived fibroblasts may account for tumor growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, S.B.; Trupin, K.M.; Rodriguez-Eaton, S.

    Keloids are benign dermal tumors that form during an abnormal wound-healing process in genetically susceptible individuals. Although growth of normal and keloid cells did not differ in medium containing 10% (vol/vol) fetal bovine serum, keloid cultures grew to significantly higher densities than normal cells in medium containing 5% (vol/vol) plasma or 1% fetal bovine serum. Conditioned medium from keloid cultures did not stimulate growth of normal cells in plasma, nor did it contain detectable platelet-derived growth factor or epidermal growth factor. Keloid fibroblasts responded differently than normal adult fibroblasts to transforming growth factor β: whereas transforming growth factor β reduced growth stimulation by epidermal growth factor in cells from normal adult skin or scars, it enhanced the activity of epidermal growth factor in cells from keloids. Normal and keloid fibroblasts also responded differently to hydrocortisone: growth was stimulated in normal adult cells and unaffected or inhibited in keloid cells. Fetal fibroblasts resembled keloid cells in their ability to grow in plasma and in their response to hydrocortisone. The ability of keloid fibroblasts to grow to higher cell densities in low-serum medium than cells from normal adult skin or from normal early or mature scars suggests that a reduced dependence on serum growth factors may account for their prolonged growth in vivo. Similarities between keloid and fetal cells suggest that keloids may result from the untimely expression of a growth-control mechanism that is developmentally regulated.

  3. Genome-wide analysis of alternative splicing in medulloblastoma identifies splicing patterns characteristic of normal cerebellar development

    PubMed Central

    Menghi, Francesca; Jacques, Thomas S.; Barenco, Martino; Schwalbe, Ed C.; Clifford, Steven C.; Hubank, Mike; Ham, Jonathan

    2011-01-01

    Alternative splicing is an important mechanism for the generation of protein diversity at a post-transcriptional level. Modifications in the splicing patterns of several genes have been shown to contribute to the malignant transformation of different tissue types. In this study, we used the Affymetrix Exon arrays to investigate patterns of differential splicing between paediatric medulloblastomas and normal cerebellum on a genome-wide scale. Of the 1262 genes identified as potentially generating tumour-associated splice forms, we selected 14 examples of differential splicing of known cassette exons and successfully validated 11 of them by RT-PCR. The pattern of differential splicing of three validated events was characteristic for the molecular subset of Sonic Hedgehog (Shh)-driven medulloblastomas, suggesting that their unique gene signature includes the expression of distinctive transcript variants. Generally, we observed that tumour and normal fetal cerebellar samples shared significantly lower exon inclusion rates compared to normal adult cerebellum. We investigated whether tumour-associated splice forms were expressed in primary cultures of Shh-dependent mouse cerebellar granule cell precursors (GCPs) and found that Shh caused a decrease in the cassette exon inclusion rate of five out of the seven tested genes. Furthermore, we observed a significant increase in exon inclusion between post-natal days 7 and 14 of mouse cerebellar development, at the time when GCPs mature into post-mitotic neurons. We conclude that inappropriate splicing frequently occurs in human medulloblastomas and may be linked to the activation of developmental signalling pathways and a failure of cerebellar precursor cells to differentiate. PMID:21248070

  4. A model for size- and rotation-invariant pattern processing in the visual system.

    PubMed

    Reitboeck, H J; Altmann, J

    1984-01-01

    The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform, used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via "Mexican hat" filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models-that also apply to our model-are discussed.
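
    A minimal numerical sketch of the log-polar idea (not the neural model itself; grid sizes and the test image are arbitrary): after resampling onto a log-polar grid centred on the image, a centred scaling becomes a shift along the log-radius axis and a rotation a shift along the angle axis, so any translation-invariant readout of the resampled image is size- and rotation-invariant.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(img, n_rho=64, n_theta=64):
    """Resample an image onto a log-polar grid centred on the image centre.
    Centred scaling of the input becomes a shift along the rho axis,
    rotation a shift along the theta axis."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.linspace(0.0, np.log(r_max), n_rho)           # log-radius samples
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]                                # (n_rho, 1)
    y = cy + r * np.sin(theta)[None, :]
    x = cx + r * np.cos(theta)[None, :]
    return map_coordinates(img, [y, x], order=1, mode="nearest")

# A translation-invariant readout (here the magnitude of the 2-D FFT, standing
# in for the R-transform) applied to the log-polar image is then insensitive
# to centred scaling and rotation of the original pattern.
img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
invariant = np.abs(np.fft.fft2(log_polar(img)))
```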

  5. Poisson noise removal with pyramidal multi-scale transforms

    NASA Astrophysics Data System (ADS)

    Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M.

    2013-09-01

    In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VST). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform which is the core of the first generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, that we apply to the denoising of a simulated cosmological volume.
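
    The VSTs in the paper are built into the Meyer wavelet pyramid; as a simpler stand-in, the classical Anscombe transform below illustrates what variance stabilisation of Poisson data means (a sketch, not the authors' construction).

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe variance-stabilising transform for Poisson counts:
    the transformed data have variance close to 1 for moderate intensities."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts; unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(3)
counts = rng.poisson(lam=20.0, size=100_000)
print("variance before:", counts.var(), "after:", anscombe(counts).var())
```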

  6. 3D modeling to characterize lamina cribrosa surface and pore geometries using in vivo images from normal and glaucomatous eyes

    PubMed Central

    Sredar, Nripun; Ivers, Kevin M.; Queener, Hope M.; Zouridakis, George; Porter, Jason

    2013-01-01

    En face adaptive optics scanning laser ophthalmoscope (AOSLO) images of the anterior lamina cribrosa surface (ALCS) represent a 2D projected view of a 3D laminar surface. Using spectral domain optical coherence tomography images acquired in living monkey eyes, a thin plate spline was used to model the ALCS in 3D. The 2D AOSLO images were registered and projected onto the 3D surface that was then tessellated into a triangular mesh to characterize differences in pore geometry between 2D and 3D images. Following 3D transformation of the anterior laminar surface in 11 normal eyes, mean pore area increased by 5.1 ± 2.0% with a minimal change in pore elongation (mean change = 0.0 ± 0.2%). These small changes were due to the relatively flat laminar surfaces inherent in normal eyes (mean radius of curvature = 3.0 ± 0.5 mm). The mean increase in pore area was larger following 3D transformation in 4 glaucomatous eyes (16.2 ± 6.0%) due to their more steeply curved laminar surfaces (mean radius of curvature = 1.3 ± 0.1 mm), while the change in pore elongation was comparable to that in normal eyes (−0.2 ± 2.0%). This 3D transformation and tessellation method can be used to better characterize and track 3D changes in laminar pore and surface geometries in glaucoma. PMID:23847739

  7. sEMG analysis of astronaut upper arm during isotonic muscle actions with normal standing posture

    NASA Astrophysics Data System (ADS)

    Qianxiang, Zhou; Chao, Ma; Xiaohui, Zheng

    Introduction: Research on isotonic muscle actions using surface electromyography (sEMG) is becoming a popular topic in astronaut life-support training and rehabilitation. Researchers have paid particular attention to sEMG signal processing, to reduce the influence of noise introduced during monitoring, and to the estimation of fatigue during isotonic muscle actions at different force levels, using parameters obtained from sEMG signals such as conduction velocity (CV), median frequency (MDF) and mean frequency (MNF). Increasingly, studies of muscle fatigue during isotonic actions combine sEMG analysis with subjective ratings on the Borg scale. In this paper, the relationship between a fatigue variable based on sEMG and the Borg scale during isotonic muscle actions of the upper arm at different contraction levels is investigated. Methods: 13 young male subjects (23.4±2.45 years, 64.7±5.43 kg, 171.7±5.41 cm) with normal standing posture performed isotonic actions of the upper arm at different force levels (10% MVC, 30% MVC and 50% MVC), where MVC denotes the maximal voluntary contraction measured at the start of the experiment. The sEMG was recorded during the experiments, and the Borg scale was recorded for each contraction level. Using the one-third octave band method, a fatigue variable p based on sEMG was defined as p = Σi g(fi)·F(fi), where the frequency factor g(fi) = 0.42 + 0.5 cos(π fi/f0) + 0.08 cos(2π fi/f0) for 0 < fi ≤ f0, and g(fi) = 0 for fi > f0. From these equations p can be computed and its relationship to the Borg scale investigated. Results: Three kinds of fitted curves between the variable p and the Borg scale were examined: quadratic, quintic and exponential. The results showed that the relationship can be expressed by quadratic curves within certain ranges. It can be concluded that the sEMG-based variable obtained with the one-third octave band method reflects the changes in fatigue caused by different isotonic contraction force levels, and that the variable and the Borg scale can be fitted with conic curves. Further study could examine the numerical relationship between fatigue and sEMG during isometric actions at different force levels, which would benefit support training, rehabilitation training and related applications. (Foundation item: supported by the National Nature Science Foundation, 60673013.)
    References: 1. Coorevits P, Danneels L, Cambier D, et al. Correlations between short-time Fourier and continuous wavelet transforms in the analysis of localized back and hip muscle fatigue during isometric contractions. Journal of Electromyography and Kinesiology, 2008, 18: 637-644. 2. Ryan E D, Cramer J T, Egan A D, et al. Time and frequency domain responses of the mechanomyogram and electromyogram during isometric ramp contractions: A comparison of the short-time Fourier and continuous wavelet transforms. Journal of Electromyography and Kinesiology, 2008, 18: 54-67. 3. Dimitrova N A, Arabadzhiev T I, Hogrel J Y, et al. Fatigue analysis of interference EMG signals obtained from biceps brachii during isometric voluntary contraction at various force levels. Journal of Electromyography and Kinesiology, 2009, 19: 252-258. 4. Troiano A, Mesin L, Naddeo F, et al. Assessment of force and fatigue in isometric contractions of upper trapezius muscle by perceived exertion scale and EMG signal. Gait & Posture (Eighth Congress of the Italian Society for Clinical Movement Analysis, SIAMOC - Società Italiana di Movimento in Clinica), 2008, 28(Supplement 1): 37-38. 5. Strimpakos N, Georgios G, Eleni K, et al. Issues in relation to the repeatability of and correlation between EMG and Borg scale assessments of neck muscle fatigue. Journal of Electromyography and Kinesiology, 2005, 15: 452-465. 6. Zhan Benqing, Zhou Qianxiang. Influence of multi-factors on fatigue evaluation of typical upper extremity operation. Space Medicine & Medical Engineering, 2009, 22: 313-316.
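
    A minimal sketch of the fatigue variable defined above, under two stated assumptions: the spectrum F(f) is taken from a plain periodogram rather than one-third octave band levels, and the cutoff f0 is chosen by the analyst; the synthetic signal is for illustration only.

```python
import numpy as np
from scipy.signal import periodogram

def fatigue_variable(emg, fs, f0):
    """Fatigue variable p = sum_i g(f_i) * F(f_i), with the frequency factor
    g(f) = 0.42 + 0.5*cos(pi*f/f0) + 0.08*cos(2*pi*f/f0) for 0 < f <= f0
    and g(f) = 0 above f0, as in the abstract's weighting scheme."""
    f, F = periodogram(emg, fs=fs)
    g = np.where((f > 0) & (f <= f0),
                 0.42 + 0.5 * np.cos(np.pi * f / f0) + 0.08 * np.cos(2 * np.pi * f / f0),
                 0.0)
    return np.sum(g * F)

# Illustrative use on synthetic data (not real sEMG).
fs = 1000.0
t = np.arange(0.0, 5.0, 1.0 / fs)
emg = np.random.default_rng(4).normal(size=t.size)
p = fatigue_variable(emg, fs, f0=250.0)
```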

  8. Thermal Aging Characteristics of Insulation Paper in Mineral Oil under Overloaded Operating Transformers

    NASA Astrophysics Data System (ADS)

    Miyagi, Katsunori; Oe, Etsuo; Yamagata, Naoki; Miyahara, Hideyuki

    A sudden increase in demand during the summer peak, or contingencies such as malfunctioning transformers, may cause normally loaded transformers to be overloaded. In this paper, on the basis of examples of overloaded transformer operation in distribution substations, thermal aging tests in oil were carried out under various overload patterns, such as short-time and long-time overload, with the life loss of the winding insulation paper kept constant. From the results, characteristics such as the mean degree of polymerization and the production of furfural and (CO2+CO) were obtained, together with their effects on the life loss of the insulation paper.

  9. Transformation of localized necking of strain space into stress space for advanced high strength steel sheet

    NASA Astrophysics Data System (ADS)

    Nakwattanaset, Aeksuwat; Suranuntchai, Surasak

    2018-03-01

    Forming Limit Curves (FLCs) normally cannot explain shear fracture as well as a damage curve can. This article presents the experimental determination of the Forming Limit Curve (FLC) for Advanced High Strength Steel (AHSS) sheets of grade JAC780Y using the Nakazima forming test and tensile tests on different sample geometries. From these results, the Forming Limit Curve (strain space) was transformed into a damage curve (stress space) relating plastic strain and stress triaxiality. The stress-space transformation was carried out using the Hill-48 and von Mises yield functions, and the article shows that both yield criteria can be used in the transformation.

  10. Transformer induced instability of the series resonant converter

    NASA Technical Reports Server (NTRS)

    King, R. J.; Stuart, T. A.

    1983-01-01

    It is shown that the common series resonant power converter is subject to a low frequency oscillation that can lead to the loss of cyclic stability. This oscillation is caused by a low frequency resonant circuit formed by the normal L and C components in series with the magnetizing inductance of the output transformer. Three methods for eliminating this oscillation are presented and analyzed. One of these methods requires a change in the circuit topology during the resonance cycle. This requires a new set of steady state equations which are derived and presented in a normalized form. Experimental results are included which demonstrate the nature of the low frequency oscillation before cyclic stability is lost.

  11. Recombinant glucose uptake system

    DOEpatents

    Ingrahm, Lonnie O.; Snoep, Jacob L.; Arfman, Nico

    1997-01-01

    Recombinant organisms are disclosed that contain a pathway for glucose uptake other than the pathway normally utilized by the host cell. In particular, the host cell is one in which glucose transport into the cell normally is coupled to PEP production. This host cell is transformed so that it uses an alternative pathway for glucose transport that is not coupled to PEP production. In a preferred embodiment, the host cell is a bacterium other than Z. mobilis that has been transformed to contain the glf and glk genes of Z. mobilis. By uncoupling glucose transport into the cell from PEP utilization, more PEP is produced for synthesis of products of commercial importance from a given quantity of biomass supplied to the host cells.

  12. A Conservative Inverse Normal Test Procedure for Combining P-Values in Integrative Research.

    ERIC Educational Resources Information Center

    Saner, Hilary

    1994-01-01

    The use of p-values in combining results of studies often involves studies that are potentially aberrant. This paper proposes a combined test that permits trimming some of the extreme p-values. The trimmed statistic is based on an inverse cumulative normal transformation of the ordered p-values. (SLD)

  13. Vestibulo-Ocular Reflex Responses to a Multichannel Vestibular Prosthesis Incorporating a 3D Coordinate Transformation for Correction of Misalignment

    PubMed Central

    Fridman, Gene Y.; Davidovics, Natan S.; Dai, Chenkai; Migliaccio, Americo A.

    2010-01-01

    There is no effective treatment available for individuals unable to compensate for bilateral profound loss of vestibular sensation, which causes chronic disequilibrium and blurs vision by disrupting vestibulo-ocular reflexes that normally stabilize the eyes during head movement. Previous work suggests that a multichannel vestibular prosthesis can emulate normal semicircular canals by electrically stimulating vestibular nerve branches to encode head movements detected by mutually orthogonal gyroscopes affixed to the skull. Until now, that approach has been limited by current spread resulting in distortion of the vestibular nerve activation pattern and consequent inability to accurately encode head movements throughout the full 3-dimensional (3D) range normally transduced by the labyrinths. We report that the electrically evoked 3D angular vestibulo-ocular reflex exhibits vector superposition and linearity to a sufficient degree that a multichannel vestibular prosthesis incorporating a precompensatory 3D coordinate transformation to correct misalignment can accurately emulate semicircular canals for head rotations throughout the range of 3D axes normally transduced by a healthy labyrinth. PMID:20177732

  14. Shape Memory Micro- and Nanowire Libraries for the High-Throughput Investigation of Scaling Effects.

    PubMed

    Oellers, Tobias; König, Dennis; Kostka, Aleksander; Xie, Shenqie; Brugger, Jürgen; Ludwig, Alfred

    2017-09-11

    The scaling behavior of Ti-Ni-Cu shape memory thin-film micro- and nanowires of different geometry is investigated with respect to its influence on the martensitic transformation properties. Two processes for the high-throughput fabrication of Ti-Ni-Cu micro- to nanoscale thin film wire libraries and the subsequent investigation of the transformation properties are reported. The libraries are fabricated with compositional and geometrical (wire width) variations to investigate the influence of these parameters on the transformation properties. Interesting behaviors were observed: phase transformation temperatures change in the range from 1 to 72 °C (austenite finish, Af) and from 13 to 66 °C (martensite start, Ms), and the thermal hysteresis from -3.5 to 20 K. It is shown that a vanishing hysteresis can be achieved for special combinations of sample geometry and composition.

  15. Experimental evidence of stress-field-induced selection of variants in Ni-Mn-Ga ferromagnetic shape-memory alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y. D.; Key Laboratory for Anisotropy and Texture of Materials; Brown, D. W.

    2007-05-01

    The in situ time-of-flight neutron-diffraction measurements captured well the martensitic transformation behavior of the Ni-Mn-Ga ferromagnetic shape-memory alloys under uniaxial stress fields. We found that a small uniaxial stress applied during phase transformation dramatically disturbed the distribution of variants in the product phase. The observed changes in the distributions of variants may be explained by considering the role of the minimum distortion energy of the Bain transformation in the effective partition among the variants belonging to the same orientation of parent phase. It was also found that transformation kinetics under various stress fields follows the scale law. The present investigations provide the fundamental approach for scaling the evolution of microstructures in martensitic transitions, which is of general interest to the condensed matter community.

  16. Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations.

    PubMed

    Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F; Spoerke, Erik David; Pan, Wei; Zuo, Jian Min

    2016-04-13

    Atomic-scale phenomena fundamentally influence materials form and function that makes the ability to locally probe and study these processes critical to advancing our understanding and development of materials. Atomic-scale chemical imaging by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) is a powerful approach to investigate solid crystal structures. Inefficient X-ray emission and collection, however, require long acquisition times (typically hundreds of seconds), making the technique incompatible with electron-beam sensitive materials and study of dynamic material phenomena. Here we describe an atomic-scale STEM-EDS chemical imaging technique that decreases the acquisition time to as little as one second, a reduction of more than 100 times. We demonstrate this new approach using LaAlO3 single crystal and study dynamic phase transformation in beam-sensitive Li[Li0.2Ni0.2Mn0.6]O2 (LNMO) lithium ion battery cathode material. By capturing a series of time-lapsed chemical maps, we show for the first time clear atomic-scale evidence of preferred Ni-mobility in LNMO transformation, revealing new kinetic mechanisms. These examples highlight the potential of this approach toward temporal, atomic-scale mapping of crystal structure and chemistry for investigating dynamic material phenomena.

  17. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

    Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
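
    One closed form that covers the common lognormal case (a sketch, not necessarily the estimator proposed in the paper): if the untransformed variable is assumed lognormal with arithmetic mean m and standard deviation s, the standard deviation on the log scale is sqrt(ln(1 + (s/m)^2)).

```python
import numpy as np

def log_scale_sd(mean, sd):
    """SD of log(X) when X is assumed lognormal, from the arithmetic
    mean and SD of X: sigma_log = sqrt(ln(1 + (sd/mean)^2))."""
    return np.sqrt(np.log1p((sd / mean) ** 2))

# Check against simulated data: the true log-scale SD is 0.6.
rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.6, size=200_000)
print(log_scale_sd(x.mean(), x.std()))   # ~0.6
```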

  18. On the effect of response transformations in sequential parameter optimization.

    PubMed

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
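
    A minimal sketch of the rank-transformation step (illustrative values; not the SPO implementation): the aggregated responses are replaced by their ranks before the surrogate model is fitted, which removes the influence of heavy tails on the residuals.

```python
import numpy as np
from scipy.stats import rankdata

# Aggregated (e.g. median) responses of an EA at several parameter settings;
# values spanning many orders of magnitude are common for stochastic optimisers.
responses = np.array([1.2e-6, 3.4e-5, 2.1e-3, 9.8e-1, 4.7e+2])

# Rank transformation: the surrogate model is then fitted to the ranks.
ranked = rankdata(responses)           # -> [1. 2. 3. 4. 5.]
```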

  19. Mobile robot motion estimation using Hough transform

    NASA Astrophysics Data System (ADS)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimation of mobile robot motion. The geometry of surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the algorithm presented is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
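
    A minimal sketch of the underlying mapping (not the authors' full motion-estimation pipeline; array sizes and resolutions are arbitrary): every scan point votes for the straight-line parameters (theta, rho) passing through it, and a rotation of the scan shifts the accumulator along the theta axis, which is what makes the rotation estimate separable from translation.

```python
import numpy as np

def hough_accumulator(points, n_theta=180, rho_res=0.05):
    """Accumulate range-scan points (x, y) into a (theta, rho) Hough space,
    using the line parameterisation rho = x*cos(theta) + y*sin(theta)."""
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho = points[:, 0:1] * np.cos(theta) + points[:, 1:2] * np.sin(theta)
    rho_max = np.abs(rho).max()
    bins = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((n_theta, bins.size - 1), dtype=int)
    for k in range(n_theta):
        acc[k], _ = np.histogram(rho[:, k], bins=bins)
    return theta, acc

# A rotation of the scan shifts the peaks of `acc` along the theta axis, so
# comparing the theta-marginals of two accumulators gives the robot's rotation
# independently of its translation.
scan = np.random.default_rng(6).uniform(-1.0, 1.0, size=(200, 2))
theta, acc = hough_accumulator(scan)
```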

  20. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation is satisfactorily completed, the design is synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices. The design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.

  1. Derivation of improved load transformation matrices for launchers-spacecraft coupled analysis, and direct computation of margins of safety

    NASA Technical Reports Server (NTRS)

    Klein, M.; Reynolds, J.; Ricks, E.

    1989-01-01

    Load and stress recovery from transient dynamic studies are improved upon using an extended acceleration vector in the modal acceleration technique applied to structural analysis. Extension of the normal LTM (load transformation matrices) stress recovery to automatically compute margins of safety is presented with an application to the Hubble space telescope.

  2. Deregulation of a distinct set of microRNAs is associated with transformation of gastritis into MALT lymphoma.

    PubMed

    Thorns, Christoph; Kuba, Johannes; Bernard, Veronica; Senft, Andrea; Szymczak, Silke; Feller, Alfred C; Bernd, Heinz-Wolfram

    2012-04-01

    The mechanisms underlying the transformation from chronic Helicobacter pylori gastritis to gastric extranodal marginal zone lymphoma (MALT lymphoma) are poorly understood. This study aims to identify microRNAs that might be involved in the process of neoplastic transformation. We generated microRNA signatures by RT-PCR in 68 gastric biopsy samples representing normal mucosa, gastritis, suspicious lymphoid infiltrates, and overt MALT lymphoma according to Wotherspoon criteria. Analyses revealed a total of 41 microRNAs that were significantly upregulated (n = 33) or downregulated (n = 8) in succession from normal mucosa to gastritis and to MALT lymphoma. While some of these merely reflect the presence of lymphocytes (e.g. miR-566 and miR-212) or H. pylori infection (e.g. miR-155 and let7f), a distinct set of five microRNAs (miR-150, miR-550, miR-124a, miR-518b and miR-539) was shown to be differentially expressed in gastritis as opposed to MALT lymphoma. This differential expression might therefore indicate a central role of these microRNAs in the process of malignant transformation.

  3. Nearly frictionless faulting by unclamping in long-term interaction models

    USGS Publications Warehouse

    Parsons, T.

    2002-01-01

    In defiance of direct rock-friction observations, some transform faults appear to slide with little resistance. In this paper finite element models are used to show how strain energy is minimized by interacting faults that can cause long-term reduction in fault-normal stresses (unclamping). A model fault contained within a sheared elastic medium concentrates stress at its end points with increasing slip. If accommodating structures free up the ends, then the fault responds by rotating, lengthening, and unclamping. This concept is illustrated by a comparison between simple strike-slip faulting and a mid-ocean-ridge model with the same total transform length; calculations show that the more complex system unclamps the transforms and operates at lower energy. In another example, the overlapping San Andreas fault system in the San Francisco Bay region is modeled; this system is complicated by junctions and stepovers. A finite element model indicates that the normal stress along parts of the faults could be reduced to hydrostatic levels after ~60-100 k.y. of system-wide slip. If this process occurs in the earth, then parts of major transform fault zones could appear nearly frictionless.

  4. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    NASA Astrophysics Data System (ADS)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is important groundwork for logging interpretation. Inevitably, noise is mixed in during data collection, due to the equipment and other external factors, and this affects subsequent lithological identification and other logging interpretation. Therefore, to obtain more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which combines the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) at N different scales and one residual. A self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing problem between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to the analysis of simulated and actual data. Results show diverse performance of these de-noising methods with regard to accuracy for lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
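
    A rough sketch of this kind of pipeline, under explicit assumptions: it relies on the third-party packages PyEMD (for CEEMDAN) and PyWavelets, uses a fixed reconstruction scale k in place of the paper's self-adaptive selection rule, applies a universal soft threshold to the (k-1)th IMF, and simplifies the handling of the CEEMDAN residue.

```python
import numpy as np
import pywt                      # PyWavelets (assumed available)
from PyEMD import CEEMDAN        # PyEMD package (assumed available)

def ceemdan_wavelet_denoise(signal, k, wavelet="db4"):
    """Decompose with CEEMDAN, wavelet-threshold the (k-1)th IMF, and
    reconstruct from it plus the remaining lower-frequency IMFs; the first
    k-2 IMFs are treated as noise-dominated and discarded."""
    imfs = CEEMDAN()(signal)                 # IMFs ordered from high to low frequency
    target = imfs[k - 2]                     # (k-1)th IMF, 0-based indexing
    coeffs = pywt.wavedec(target, wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(target.size))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: target.size]
    return denoised + imfs[k - 1:].sum(axis=0)

# Illustrative use on a synthetic noisy curve (not a real well log).
x = np.linspace(0.0, 10.0, 2048)
noisy = np.sin(0.8 * x) + 0.3 * np.random.default_rng(7).normal(size=x.size)
clean = ceemdan_wavelet_denoise(noisy, k=4)
```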

  5. Phenotypically heterogeneous deletion of the ABH antigen from the transformed bladder urothelium. A scanning electron microscope study.

    PubMed

    De Harven, E; He, S; Hanna, W; Bootsma, G; Connolly, J G

    1987-10-01

    The deletion of ABH blood group antigens from the luminal surface of the bladder mucosa in cases of well differentiated transitional cell carcinomata, and the formation of pleomorphic microvilli, have both been associated with aggressive biological behaviour and invasiveness of the tumors. We have studied cold cup biopsies from 8 normal mucosae and 17 papillary transitional cell carcinomata of the urinary bladder. The aim of our study was to correlate the formation of uniform or pleomorphic microvilli with the extent of deletion of the ABH blood group antigens on the surface of normal and transformed bladder urothelium. Immunogold scanning electron microscopy (SEM) in the backscattered electron (BE) imaging mode was used for this purpose. In the normal urothelium, uniform labeling of the luminal cells was demonstrated. In well differentiated tumors, the superficial cells exhibited uniform microvilli and a heterogeneous expression of the ABH antigens, giving characteristic 'mosaic' patterns of the antigenic labeling across the mucosal surface. These patterns were sharply delimited at cell junctions when viewed by SEM; these observations were confirmed by transmission electron microscopy. In higher grade tumors, decreased ABH antigen expression, pleomorphic microvilli and/or featureless luminal cells were observed. In the transformed urothelium, the formation of uniform microvilli appeared to precede the loss of ABH antigen in most cases.

  6. Comparative Study of the Detection of Chromium Content in Rice Leaves by 532 nm and 1064 nm Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Shen, Tingting; Ye, Lanhan; Kong, Wenwen; Wang, Wei; Liu, Xiaodan

    2018-01-01

    Fast detection of toxic metals in crops is important for monitoring pollution and ensuring food safety. In this study, laser-induced breakdown spectroscopy (LIBS) was used to detect the chromium content in rice leaves. We investigated the influence of laser wavelength (532 nm and 1064 nm excitation), along with the variations of delay time, pulse energy, and lens-to-sample distance (LTSD), on the signal (sensitivity and stability) and plasma features (temperature and electron density). With the optimized experimental parameters, univariate analysis was used for quantifying the chromium content, and several preprocessing methods (including background normalization, area normalization, multiplicative scatter correction (MSC) transformation and standardized normal variate (SNV) transformation) were used to further improve the analytical performance. The results indicated that 532 nm excitation showed better sensitivity than 1064 nm excitation, with a detection limit around two times lower. However, the prediction accuracy for both excitation wavelengths was similar. The best result, with a correlation coefficient of 0.9849, root-mean-square error of 3.89 mg/kg and detection limit of 2.72 mg/kg, was obtained using the SNV transformed signal (Cr I 425.43 nm) induced by 532 nm excitation. The results indicate the promising capability of LIBS for toxic metal detection in plant materials. PMID:29463032
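
    A minimal sketch of the SNV preprocessing step mentioned above (synthetic data; not the study's spectra): each spectrum is centred and scaled by its own mean and standard deviation, which suppresses multiplicative intensity differences between shots.

```python
import numpy as np

def snv(spectra):
    """Standard(ized) normal variate transform: centre and scale each spectrum
    (row) by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# shots x wavelengths matrix of LIBS-like intensities (synthetic here)
raw = np.random.default_rng(8).gamma(2.0, size=(10, 4096))
preprocessed = snv(raw)
```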

  7. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
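
    DDHFm estimates the mean-variance relationship from the data itself; the sketch below shows only the classical fixed (square-root) Haar-Fisz transform on which it builds, for a non-negative vector whose length is a power of two.

```python
import numpy as np

def haar_fisz(x):
    """Classical (non data-driven) Haar-Fisz transform: Haar detail
    coefficients are divided by the square root of the corresponding smooth
    coefficients, which approximately stabilises Poisson-like variance."""
    x = np.asarray(x, dtype=float)
    J = int(np.log2(x.size))
    s, details = x.copy(), []
    for _ in range(J):                       # forward Haar with Fisz division
        sm = (s[0::2] + s[1::2]) / 2.0
        d = (s[0::2] - s[1::2]) / 2.0
        f = np.divide(d, np.sqrt(sm), out=np.zeros_like(d), where=sm > 0)
        details.append(f)
        s = sm
    for f in reversed(details):              # inverse Haar using the f's
        new = np.empty(2 * s.size)
        new[0::2] = s + f
        new[1::2] = s - f
        s = new
    return s

y = haar_fisz(np.random.default_rng(9).poisson(5.0, size=1024))
```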

  8. Comparative human cellular radiosensitivity: I. The effect of SV40 transformation and immortalisation on the gamma-irradiation survival of skin derived fibroblasts from normal individuals and from ataxia-telangiectasia patients and heterozygotes.

    PubMed

    Arlett, C F; Green, M H; Priestley, A; Harcourt, S A; Mayne, L V

    1988-12-01

    We have compared cell killing following 60Co gamma irradiation in 22 primary human fibroblast strains, nine SV40-immortalized human fibroblast lines and seven SV40-transformed pre-crisis human fibroblast cultures. We have examined material from normal individuals, from ataxia-telangiectasia (A-T) patients and from A-T heterozygotes. We have confirmed the greater sensitivity of A-T derived cells to gamma radiation. The distinction between A-T and normal cells is maintained in cells immortalized by SV40 virus but the immortal cells are more gamma radiation resistant than the corresponding primary fibroblasts. Cells transformed by plasmids (pSV3gpt and pSV3neo) expressing SV40 T-antigen, both pre- and post-crisis, show this increased resistance, indicating that it is expression of SV40 T-antigen, rather than immortalization per se which is responsible for the change. We use D0, obtained from a straight line fit, and D, estimated from a multitarget curve, as parameters to compare radiosensitivity. We suggest that both have their advantages; D0 is perhaps more reproducible, but D is more realistic when comparing shouldered and non-shouldered data.

  9. Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review.

    PubMed

    Sudarshan, Vidya K; Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chandran, Vinod; Molinari, Filippo; Fujita, Hamido; Ng, Kwan Hoong

    2016-02-01

    Ultrasound is an important and low cost imaging modality used to study the internal organs of human body and blood flow through blood vessels. It uses high frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes for ultrasound. Hence, it provides useful information about the potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected with speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from isolated tumor region for diagnosis. This information can be revealed using a scale space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low pass and high pass filters. These filters help to identify the detail or sudden changes in intensity in the image. These changes are reflected in the wavelet coefficients. Various texture, statistical and image based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features to discriminate normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer using ultrasound images. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A novel X-linked disorder with developmental delay and autistic features.

    PubMed

    Kaya, Namik; Colak, Dilek; Albakheet, Albandary; Al-Owain, Mohammad; Abu-Dheim, Nada; Al-Younes, Banan; Al-Zahrani, Jawaher; Mukaddes, Nahit M; Dervent, Aysin; Al-Dosari, Naji; Al-Odaib, Ali; Kayaalp, Inci V; Al-Sayed, Moeenaladin; Al-Hassnan, Zuhair; Nester, Michael J; Al-Dosari, Mohammad; Al-Dhalaan, Hesham; Chedrawi, Aziza; Gunoz, Hulya; Karakas, Bedri; Sakati, Nadia; Alkuraya, Fowzan S; Gascon, Generaso G; Ozand, Pinar T

    2012-04-01

    Genomic duplications that lead to autism and other human diseases are interesting pathological lesions since the underlying mechanism almost certainly involves dosage sensitive genes. We aim to understand a novel genomic disorder with profound phenotypic consequences, most notably global developmental delay, autism, psychosis, and anorexia nervosa. We evaluated the affected individuals, all maternally related, using childhood autism rating scale (CARS) and Vineland Adaptive scales, magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) brain, electroencephalography (EEG), electromyography (EMG), muscle biopsy, high-resolution molecular karyotype arrays, Giemsa banding (G-banding) and fluorescent in situ hybridization (FISH) experiments, mitochondrial DNA (mtDNA) sequencing, X-chromosome inactivation study, global gene expression analysis on Epstein-Barr virus (EBV)-transformed lymphoblasts, and quantitative reverse-transcription polymerase chain reaction (qRT-PCR). We have identified a novel Xq12-q13.3 duplication in an extended family. Clinically normal mothers were completely skewed in favor of the normal chromosome X. Global transcriptional profiling of affected individuals and controls revealed significant alterations of genes and pathways in a pattern consistent with previous microarray studies of autism spectrum disorder patients. Moreover, expression analysis revealed copy number-dependent increased messenger RNA (mRNA) levels in affected patients compared to control individuals. A subset of differentially expressed genes was validated using qRT-PCR. Xq12-q13.3 duplication is a novel global developmental delay and autism-predisposing chromosomal aberration; pathogenesis of which may be mediated by increased dosage of genes contained in the duplication, including NLGN3, OPHN1, AR, EFNB1, TAF1, GJB1, and MED12. Copyright © 2011 American Neurological Association.

  11. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    PubMed

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
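
    A minimal sketch of the copula construction (not the nonparametric Bayesian model of the paper; the series and the AR(1) fit are illustrative): each observation is pushed through the empirical cdf and the inverse normal cdf, the internal dynamics are fitted on the Gaussian scale, and simulated values are mapped back through the empirical quantiles.

```python
import numpy as np
from scipy.stats import norm

def to_gaussian(x):
    """Empirical cdf followed by inverse normal cdf (rank-based Gaussianisation)."""
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (x.size + 1.0)               # keep u strictly inside (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(10)
x = rng.gamma(2.0, size=2000)                 # a skewed series (illustrative)
x = np.convolve(x, np.ones(3) / 3.0, mode="same")   # add some serial dependence

z = to_gaussian(x)
phi = np.sum(z[1:] * z[:-1]) / np.sum(z[:-1] ** 2)   # AR(1) fit on the Gaussian scale
print(f"fitted copula AR(1) coefficient: {phi:.3f}")

# One-step-ahead simulation mapped back to the data scale via empirical quantiles.
z_next = phi * z[-1] + rng.normal(scale=np.sqrt(1.0 - phi**2))
x_next = np.quantile(x, norm.cdf(z_next))
```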

  12. Strain Rate Effect on Tensile Flow Behavior and Anisotropy of a Medium-Manganese TRIP Steel

    NASA Astrophysics Data System (ADS)

    Alturk, Rakan; Hector, Louis G.; Matthew Enloe, C.; Abu-Farha, Fadi; Brown, Tyson W.

    2018-06-01

    The dependence of the plastic anisotropy on the nominal strain rate for a medium-manganese (10 wt.% Mn) transformation-induced plasticity (TRIP) steel with initial austenite volume fraction of 66% (balance ferrite) has been investigated. The material exhibited yield point elongation, propagative instabilities during hardening, and austenite transformation to α'-martensite either directly or through ɛ-martensite. Uniaxial strain rates within the range of 0.005-500 s-1 along the 0°, 45°, and 90° orientations were selected based upon their relevance to automotive applications. The plastic anisotropy ( r) and normal anisotropy ( r n) indices corresponding to each direction and strain rate were determined using strain fields obtained from stereo digital image correlation systems that enabled both quasistatic and dynamic measurements. The results provide evidence of significant, orientation-dependent strain rate effects on both the flow stress and the evolution of r and r n with strain. This has implications not only for material performance during forming but also for the development of future strain-rate-dependent anisotropic yield criteria. Since tensile data alone for the subject medium-manganese TRIP steel do not satisfactorily determine the microstructural mechanisms responsible for the macroscopic-scale behavior observed on tensile testing, additional tests that must supplement the mechanical test results presented herein are discussed.

  13. Stimulated neutrino transformation through turbulence on a changing density profile and application to supernovae

    DOE PAGES

    Patton, Kelly M.; Kneller, James P.; McLaughlin, Gail C.

    2015-01-06

    We apply the model of stimulated neutrino transitions to neutrinos travelling through turbulence on a non-constant density profile. We describe a method to predict the location of large amplitude transitions and demonstrate the effectiveness of this method by comparing to numerical calculations using a model supernova (SN) profile. The important wavelength scales of turbulence, both those that stimulate neutrino transformations and those that suppress them, are presented and discussed. We then examine the effects of changing the parameters of the turbulent spectrum, specifically the root-mean-square amplitude and cutoff wavelength, and show how the stimulated transitions model offers an explanation for the increase in both the amplitude and number of transitions with large amplitude turbulence, as well as a suppression or absence of transitions for long cutoff wavelengths. The method can also be used to predict the location of transitions between anti-neutrino states which, in the normal hierarchy we are using, will not undergo Mikheev-Smirnov-Wolfenstein transitions. Lastly, the stimulated neutrino transitions method is applied to a turbulent 2D supernova simulation and explains the minimal observed effect on neutrino oscillations in the simulation as being due to excessive long wavelength modes suppressing transitions and the absence of modes that fulfill the parametric resonance condition.

  14. Strain Rate Effect on Tensile Flow Behavior and Anisotropy of a Medium-Manganese TRIP Steel

    NASA Astrophysics Data System (ADS)

    Alturk, Rakan; Hector, Louis G.; Matthew Enloe, C.; Abu-Farha, Fadi; Brown, Tyson W.

    2018-04-01

    The dependence of the plastic anisotropy on the nominal strain rate for a medium-manganese (10 wt.% Mn) transformation-induced plasticity (TRIP) steel with initial austenite volume fraction of 66% (balance ferrite) has been investigated. The material exhibited yield point elongation, propagative instabilities during hardening, and austenite transformation to α'-martensite either directly or through ɛ-martensite. Uniaxial strain rates within the range of 0.005-500 s-1 along the 0°, 45°, and 90° orientations were selected based upon their relevance to automotive applications. The plastic anisotropy (r) and normal anisotropy (r n) indices corresponding to each direction and strain rate were determined using strain fields obtained from stereo digital image correlation systems that enabled both quasistatic and dynamic measurements. The results provide evidence of significant, orientation-dependent strain rate effects on both the flow stress and the evolution of r and r n with strain. This has implications not only for material performance during forming but also for the development of future strain-rate-dependent anisotropic yield criteria. Since tensile data alone for the subject medium-manganese TRIP steel do not satisfactorily determine the microstructural mechanisms responsible for the macroscopic-scale behavior observed on tensile testing, additional tests that must supplement the mechanical test results presented herein are discussed.

  15. Lower crustal flow and the role of shear in basin subsidence: An example from the Dead Sea basin

    USGS Publications Warehouse

    Al-Zoubi, A.; ten Brink, Uri S.

    2002-01-01

    We interpret large-scale subsidence (5–6 km depth) with little attendant brittle deformation in the southern Dead Sea basin, a large pull-apart basin along the Dead Sea transform plate boundary, to indicate lower crustal thinning due to lower crustal flow. Along-axis flow within the lower crust could be induced by the reduction of overburden pressure in the central Dead Sea basin, where brittle extensional deformation is observed. Using a channel flow approximation, we estimate that lower crustal flow would occur within the time frame of basin subsidence if the viscosity is ≤7×10^19–1×10^21 Pa s, a value compatible with the normal heat flow in the region. Lower crustal viscosity due to the strain rate associated with basin extension is estimated to be similar to or smaller than the viscosity required for a channel flow. However, the viscosity under the basin may be reduced to 5×10^17–5×10^19 Pa s by the enhanced strain rate due to lateral shear along the transform plate boundary. Thus, lower crustal flow facilitated by shear may be a viable mechanism to enlarge basins and modify other topographic features even in the absence of underlying thermal anomalies.

  16. Taking the Reins: Institutional Transformation in Higher Education. The ACE Series on Higher Education

    ERIC Educational Resources Information Center

    Eckel, Peter D.; Kezar, Adrianna

    2011-01-01

    Peter Eckel and Adrianna Kezar have written this book to offer insight to campus leaders who face transformational change--to help them mount a proactive, rather than a reactive, process to effect transformation. They believe that most institutional leaders have little to no experience with implementing large-scale change and lack a solid…

  17. Statistical Assessment of Estimated Transformations in Observed-Score Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; González, Jorge

    2016-01-01

    Equating methods make use of an appropriate transformation function to map the scores of one test form into the scale of another so that scores are comparable and can be used interchangeably. The equating literature shows that the ways of judging the success of an equating (i.e., the score transformation) might differ depending on the adopted…

  18. NREL and Hawaiian Electric Navigate Uncharted Waters of Energy

    Science.gov Websites

    Transformation (Part 1) | News | NREL. April 23, 2018. News story on NREL and Hawaiian Electric navigating an energy transformation; it features the 27.6-MW Eurus solar array on the island of Oahu and notes that there has not been a renewable energy transformation at this scale before.

  19. Wheat (Triticum aestivum L.) transformation using immature embryos.

    PubMed

    Ishida, Yuji; Tsunashima, Masako; Hiei, Yukoh; Komari, Toshihiko

    2015-01-01

    Wheat may now be transformed very efficiently by Agrobacterium tumefaciens. Under the protocol described here, immature embryos of healthy plants of wheat cultivar Fielder grown in a well-conditioned greenhouse were pretreated by centrifugation and cocultivated with A. tumefaciens. Transgenic wheat plants were routinely obtained from 40-90% of the immature embryos infected in our tests. All regenerants were normal in morphology and fully fertile. About half of the transformed plants carried a single copy of the transgene, which was inherited by the progeny in a Mendelian fashion.

  20. The ratio and allometric scaling of speed, power, and strength in elite male rugby union players.

    PubMed

    Crewther, Blair T; McGuigan, Mike R; Gill, Nicholas D

    2011-07-01

    This study compared the effectiveness of ratio and allometric scaling for normalizing speed, power, and strength in elite male rugby union players. Thirty rugby players (body mass [BM] 107.1 ± 10.1 kg, body height [BH] 187.8 ± 7.1 cm) were assessed for sprinting speed, peak power during countermovement jumps and squat jumps, and horizontal jumping distance. One-repetition maximum strength was assessed during a bench press, chin-up, and back squat. Performance was normalized using ratio and allometric scaling (Y/X^b), where Y is the performance, X is the body size variable (i.e., BM or BH), and b is the power exponent. An exponent of 1.0 was used during ratio scaling. Allometric scaling was applied using proposed exponents and derived exponents for each data set. The BM and BH variables were significantly related, or close to significantly related, to performance during the speed, power, and/or strength tests (p < 0.001-0.066). Ratio scaling and allometric scaling using proposed exponents were effective in normalizing performance (i.e., no significant correlations) for some of these tests. Allometric scaling with derived exponents normalized performance across all the tests undertaken, thereby removing the confounding effects of BM and BH. In terms of practical applications, allometric scaling with derived exponents may be used to normalize performance between larger rugby forwards and smaller rugby backs, and could provide additional information on rugby players of similar body size. Ratio scaling may provide the best predictive measure of performance (i.e., strongest correlations).
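
    A minimal sketch of the normalization procedure described above, under illustrative assumptions: the body-size exponent b is derived as the slope of a log-log regression of performance on body mass, performance is then divided by BM^b, and effective normalization is checked by the absence of a residual correlation with body mass. The data, variable names, and exponent here are synthetic placeholders, not values from the study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        body_mass = rng.normal(107.1, 10.1, size=30)                    # kg, illustrative
        strength = 2.1 * body_mass**0.67 * rng.lognormal(0, 0.08, 30)   # e.g., 1RM squat, arbitrary units

        # Derived exponent b: slope of log(performance) on log(body mass)
        b, log_a, r, p, se = stats.linregress(np.log(body_mass), np.log(strength))

        ratio_scaled = strength / body_mass           # ratio scaling (exponent fixed at 1.0)
        allometric_scaled = strength / body_mass**b   # allometric scaling with the derived exponent

        # Effective normalization: the scaled score should no longer correlate with body mass
        print(stats.pearsonr(body_mass, ratio_scaled))
        print(stats.pearsonr(body_mass, allometric_scaled))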

  1. Back to Normal! Gaussianizing posterior distributions for cosmological probes

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2014-05-01

    We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
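
    The one-dimensional Box-Cox step can be sketched as below, with a maximum-likelihood choice of the transformation parameter; the paper treats multivariate posteriors and generalizations of the Box-Cox family, so this only illustrates the basic Gaussianization idea. The gamma-distributed stand-in for an MCMC marginal and the positivity shift are assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        sample = rng.gamma(shape=2.0, scale=1.5, size=20000)   # skewed stand-in for an MCMC marginal

        shift = 1e-9 - min(0.0, sample.min())                  # ensure strict positivity if needed
        transformed, lam = stats.boxcox(sample + shift)        # lambda chosen by maximum likelihood

        # A posteriori quality checks: compare skewness before/after and run a normality test
        print("lambda:", lam)
        print("skewness before/after:", stats.skew(sample), stats.skew(transformed))
        print("normality test on a subsample:", stats.normaltest(transformed[:5000]))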

  2. Colour blindness does not preclude fame as an artist: celebrated Australian artist Clifton Pugh was a protanope.

    PubMed

    Cole, Barry L; Harris, Ross W

    2009-09-01

    The aim was to make a posthumous diagnosis of the abnormal colour vision of the acclaimed artist Clifton Pugh and to analyse his use of colours to discern the strategies he used to overcome his limited colour perception. A pedigree of Pugh's family was constructed by searching public records. Pugh had no daughters but he had two older brothers, one of whom was still living. We tested the colour vision of this brother and one of his daughters and one of his grandsons. Three children of the other brother were questioned about the colour vision of their father and one daughter was tested for heterozygosity with the Medmont C100. Four observers with normal colour vision categorised the colours used by Pugh in a sample of 59 of his paintings. Protanopic transformations of some of these paintings were made using the Vischeck algorithms to gain an appreciation of how Pugh saw his own paintings. The validity of the transformations was tested by asking a protanope to report if the transformations looked the same as the normal colour images of 10 of Pugh's paintings. Pugh's brother was a severe protan. His daughter showed Schmidt's sign and was a carrier of the protan gene and her son was a protanope. The oldest brother was reported as having normal colour vision. Therefore, it is almost certain that Clifton Pugh was a protanope. Pugh used all colours in his paintings but preferred to structure them on brown, black and blue or, for high key paintings, on cream or flesh colours. He used greens and purples sparingly. The protanopic Vischeck transformations did not always look the same as the normal colour image for the protanope observer. A severe colour vision deficiency does not preclude success as a painter. It is a handicap but there are strategies artists can use to overcome it.

  3. Universal binding energy relations in metallic adhesion

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Smith, J. R.; Rose, J. H.

    1981-01-01

    Scaling relations which map metallic adhesive binding energy onto a single universal binding energy curve are discussed in relation to adhesion, friction, and wear in metals. The scaling involved normalizing the energy to the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. The universal curve was found to be accurately represented by E*(A*) = -(1 + βA*)exp(-βA*), where E* is the normalized binding energy, A* is the normalized separation, and β is the normalized decay constant. The calculated cohesive energies of potassium, barium, copper, molybdenum, and samarium were also found to scale by similar relations, suggesting that the universal relation may be more general than for the simple free electron metals.
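
    A short numerical sketch of the universal form quoted above; the value of the normalized decay constant β and the grid of normalized separations are placeholders, since the record does not give them.

        import numpy as np

        def universal_binding_energy(a_star, beta=1.0):
            # Normalized universal form E*(A*) = -(1 + beta*A*) * exp(-beta*A*);
            # beta (the normalized decay constant) is a placeholder value here.
            return -(1.0 + beta * a_star) * np.exp(-beta * a_star)

        a_star = np.linspace(-0.5, 6.0, 14)
        print(np.round(universal_binding_energy(a_star), 4))   # minimum of -1 at A* = 0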

  4. Coexistence of a well-determined kinetic law and a scale-invariant power law during the same physical process

    NASA Astrophysics Data System (ADS)

    Zreihan, Noam; Faran, Eilon; Vives, Eduard; Planes, Antoni; Shilo, Doron

    2018-01-01

    It is generally claimed that physical processes which display scale-invariant power-law distributions are subjected to a dynamic criticality that prohibits a well-defined kinetic law. In this paper, we demonstrate the coexistence of these two apparently contradicting behaviors during the same physical process—the motion of type-II twin boundaries in martensite Ni-Mn-Ga. The process is investigated by combined measurements of the temporal twin-boundary velocity and the acoustic emitted energy. Velocity values are extracted from high-resolution force measurements taken during displacement-driven mechanical tests, as well as from force-driven magnetic tests, and cover an overall range of six orders of magnitude. Acoustic emission (AE) is measured during mechanical tests. Velocity values follow a normal distribution whose characteristic value is determined by the material's kinetic relation, and its width scales with the average velocity. In addition, it is observed that velocity distributions are characterized by a heavy tail at the right (i.e., faster) end that exhibits a power law over more than one and a half orders of magnitude. At the same time, the AE signals follow a scale-invariant power-law distribution, which is not sensitive to the average twin-boundary velocity. The coexistence of these two different statistical behaviors reflects the complex nature of twin-boundary motion and suggests the possibility that the transformation proceeds through physical subprocesses that are close to criticality alongside other processes that are not.

  5. Transformation of Summary Statistics from Linear Mixed Model Association on All-or-None Traits to Odds Ratio.

    PubMed

    Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M

    2018-04-01

    Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits it provides estimates of genetic effects on the observed 0-1 scale, a different scale to that in logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that only rely on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.
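
    The exact transformations in the study rely on its derived summary-statistic formulae, which are not reproduced in the record. As a generic illustration of why a scale conversion is needed at all, the sketch below uses a first-order (delta-method) approximation relating a small effect b on the observed 0-1 scale to an odds ratio at disease prevalence K; this is a standard rough approximation, not the authors' transformation.

        import math

        def approx_or_from_observed_scale(b, K):
            # First-order approximation: with case probability p = K + b for a small
            # per-allele effect b on the 0-1 scale, logit(p) - logit(K) ~= b / (K*(1-K)),
            # so OR ~= exp(b / (K*(1-K))). Generic delta-method sketch only.
            return math.exp(b / (K * (1.0 - K)))

        # Example: an effect of 0.005 on the observed scale at 10% prevalence
        print(approx_or_from_observed_scale(0.005, 0.10))   # ~1.06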

  6. Data-Intensive Discovery Methods for Seismic Monitoring

    NASA Astrophysics Data System (ADS)

    Richards, P. G.; Schaff, D. P.; Ammon, C. J.; Cleveland, M.; Young, C. J.; Slinkard, M.; Heck, S.

    2012-12-01

    Seismic events are still mostly located one-at-a-time by Geiger's method of 1909, which uses phase picks and minimizes differences between observed and modeled travel times. But methods that recognize and use seismogram archives as a major resource have been successfully demonstrated---especially for California, China, and for the mid-ocean ridge-transform system---where they enable new insights into earthquake physics and Earth structure, and have raised seismic monitoring to new levels. We report progress on a series of collaborative projects to evaluate such data-intensive methods on ever-larger scales. We use cross correlation (CC): (1) to improve estimates of the relative size of neighboring seismic events in regions of high seismicity; and (2) as a detector, to find new events in current data streams that are similar to events already in the archive, to add to the number of detections of an already known event, or to place a threshold on the size of undetected events occurring near a template event. Elsewhere at this meeting Schaff and Richards report on uses of non-normalized CC measurements to estimate relative event size---a procedure that may be as important as widely-used CC methods to improve the precision of relative location estimates. They have successfully modeled the degradation in CC value that is due to the spatial separation of similar events and can prevent this bias from seriously influencing estimates of relative event size for non-collocated events. Cleveland and Ammon report in more detail on cross-correlation used to measure Rayleigh-wave time shifts, and on improved epicentroid locations and relative origin-time shifts in remote oceanic transform regions. They seek to extend the correlation of R1 waveforms from vertical strike-slip transform-fault earthquakes with waveforms from normal faulting events at nearby ridges, to improve the locations of events offshore from the Pacific northwest and southwestern China. Finally, our collaborating Sandia group has reported preliminary results using a 360-core distributed network that took about two hours to search a month-long continuous single channel (sampled at 40 sps) for the occurrence of one or more of 920 waveforms each lasting 40 s and previously recorded by the station. Speed scales with the number of cores, and inversely with the number of channels, sample rate, and window length. Based on these early results, orders-of-magnitude improvements in speed, and application to numerous channels, are anticipated. From diverse results such as these, it seems appropriate to consider the future possibility of radical improvement in monitoring virtually all seismically active areas, using archives of prior events as the major resource---though we recognize that such an approach does not directly help to characterize seismic events in inactive regions, or events in active regions which are dissimilar to previously recorded events.
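
    A minimal sketch of the correlation-detector idea referred to above: slide a template waveform over a continuous trace and flag windows whose normalized cross-correlation exceeds a threshold. The sampling rate, template length, threshold, and synthetic data below are illustrative assumptions, not details of the projects described.

        import numpy as np

        def ncc_detector(trace, template, threshold=0.8):
            # Normalized cross-correlation of a template against every window of a trace.
            # Returns the NCC series and the indices where it exceeds the threshold.
            n = len(template)
            t = template - template.mean()
            t = t / (np.linalg.norm(t) + 1e-12)
            ncc = np.empty(len(trace) - n + 1)
            for i in range(len(ncc)):
                w = trace[i:i + n]
                w = w - w.mean()
                ncc[i] = np.dot(w, t) / (np.linalg.norm(w) + 1e-12)
            return ncc, np.flatnonzero(ncc > threshold)

        # Toy usage: embed a scaled copy of a 40 s template (40 sps) in a noisy trace
        rng = np.random.default_rng(3)
        template = rng.standard_normal(40 * 40)
        trace = 0.5 * rng.standard_normal(40 * 600)
        trace[4000:4000 + len(template)] += 0.7 * template
        ncc, hits = ncc_detector(trace, template)
        print(hits[:5], ncc.max())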

  7. Unique Normal Form and the Associated Coefficients for a Class of Three-Dimensional Nilpotent Vector Fields

    NASA Astrophysics Data System (ADS)

    Li, Jing; Kou, Liying; Wang, Duo; Zhang, Wei

    2017-12-01

    In this paper, we mainly focus on the unique normal form for a class of three-dimensional vector fields via the method of transformation with parameters. A general explicit recursive formula is derived to compute the higher order normal form and the associated coefficients, which can be achieved easily by symbolic calculations. To illustrate the efficiency of the approach, a comparison of our result with others is also presented.

  8. A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases

    NASA Astrophysics Data System (ADS)

    Shi, Hongli; Luo, Shuqian

    2010-12-01

    In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.

  9. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient, using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data is saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image and image registration correlation provides for alignment and scaling at any rotation angle with less than one-second runtime, and at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
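
    Since the record mentions edge preprocessing followed by OpenCV template matching, a hedged sketch of that pattern is given below. The Canny thresholds, the TM_CCOEFF_NORMED matching method, and the file names are placeholders; the actual DTS pipeline also performs the Matlab-based registration, scaling, and rotation steps described above.

        import cv2

        def locate_template(frame_gray, template_gray):
            # Edge-enhance both images and find the template location by
            # normalized correlation-coefficient template matching.
            frame_edges = cv2.Canny(frame_gray, 50, 150)
            templ_edges = cv2.Canny(template_gray, 50, 150)
            result = cv2.matchTemplate(frame_edges, templ_edges, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            return max_loc, max_val    # top-left corner of the best match and its confidence

        # Placeholder file names; in practice these would be a procedure image and a reference projection
        frame = cv2.imread("fluoro_frame.png", cv2.IMREAD_GRAYSCALE)
        template = cv2.imread("reference_projection.png", cv2.IMREAD_GRAYSCALE)
        if frame is not None and template is not None:
            loc, confidence = locate_template(frame, template)
            print("best match at", loc, "confidence", confidence)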

  10. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool in the investigation of turbulent flow featuring a wide range of time and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aiming at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise directions, enforcing the periodic boundary condition in both directions. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming no slip on the top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms, dealiased with the 2/3 rule, were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in the physical domain by solving the resulting linear equation directly. However, the extra terms introduced by the coordinate transformation impose a strict limitation on the time step, and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in the object-oriented C++ programming language, utilizing the Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify/validate the numerical model, and very good agreement is achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
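
    Two of the discretization ingredients mentioned above lend themselves to a short sketch: Chebyshev-Gauss-Lobatto collocation points for the wall-normal direction and a 2/3-rule mask for dealiasing a nonlinear product in a periodic direction. The sketch is in Python purely for illustration (the solver described is in C++), and the resolution and test signal are arbitrary.

        import numpy as np

        def chebyshev_gauss_lobatto(n):
            # CGL points x_j = cos(pi * j / n), j = 0..n, on [-1, 1].
            j = np.arange(n + 1)
            return np.cos(np.pi * j / n)

        def two_thirds_mask(n):
            # Boolean mask keeping only Fourier modes with |k| <= n/3 (the 2/3 rule).
            k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
            return np.abs(k) <= n // 3

        # Example: dealias a nonlinear product u*u in a periodic direction
        n = 64
        x = 2 * np.pi * np.arange(n) / n
        u = np.sin(x) + 0.3 * np.sin(10 * x)
        uu_hat = np.fft.fft(u * u)
        uu_hat[~two_thirds_mask(n)] = 0.0         # zero the aliased high wavenumbers
        uu_dealiased = np.real(np.fft.ifft(uu_hat))

        print(chebyshev_gauss_lobatto(8))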

  11. FT-IR, FT-Raman, NMR studies and ab initio-HF, DFT-B3LYP vibrational analysis of 4-chloro-2-fluoroaniline

    NASA Astrophysics Data System (ADS)

    Arivazhagan, M.; Anitha Rexalin, D.

    2012-10-01

    The Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectra of 4-chloro-2-fluoroaniline (CFA) have been recorded and analyzed. The equilibrium geometry, bonding features and harmonic vibrational frequencies have been investigated with the help of ab initio and density functional theory (DFT) methods. The assignments of the vibrational spectra have been carried out with the help of normal coordinate analysis (NCA) following the scaled quantum mechanical force field methodology. The 1H and 13C nuclear magnetic resonance (NMR) chemical shifts of the molecule are calculated by the Gauge including atomic orbital (GIAO) method. The first order hyperpolarizability (β0) of this novel molecular system and related properties (β, α0 and Δα) of CFA are calculated using B3LYP/6-311++G(d,p) and HF/6-311++G(d,p) methods on the finite-field approach. The calculated results also show that the CFA molecule might have microscopic nonlinear optical (NLO) behavior with non-zero values. Stability of the molecule arising from hyper conjugative interactions, charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The result confirms the occurrence of intramolecular charge transfer (ICT) within the molecule. The HOMO-LUMO energies UV-vis spectral analysis and MEP are performed by B3LYP/6-311++G(d,p) approach. A detailed interpretation of the infrared and Raman spectra of CFA is also reported based on total energy distribution (TED). The difference between the observed and scaled wave number values of the most of the fundamentals is very small.

  12. Body Mass Normalization for Lateral Abdominal Muscle Thickness Measurements in Adolescent Athletes.

    PubMed

    Linek, Pawel

    2017-09-01

    To determine the value of allometric parameters for ultrasound measurements of the oblique external (OE), oblique internal (OI), and transversus abdominis (TrA) muscles in adolescent athletes. The allometric parameter is the slope of the linear regression line between the log-transformed body mass and log-transformed muscle size measurement. The study included 114 male adolescent football players between the ages of 10 and 19 years. All individuals with no surgical procedures performed on the trunk area and who had played a sport for at least 2 years were included. A real-time B-mode ultrasound scanner with a linear array transducer was used to obtain images of the lateral abdominal muscles from both sides of the body. A stabilometric platform was used to assess the body mass value. The correlations between body mass and the OE, OI, and TrA muscle thicknesses were r = 0.73, r = 0.79, and r = 0.64, respectively (in all cases, P < .0001). The allometric parameters were 0.77 for the OE, 0.67 for the OI, and 0.61 for the TrA. Using these parameters, no significant correlations were found between body mass and the allometric-scaled thickness of the lateral abdominal muscles. Significant positive correlations exist between body mass and lateral abdominal muscle thickness in adolescent athletes. Therefore, it is reasonable to advise that the values of the allometric parameters for the OE, OI, and TrA muscles obtained in this study should be used, and the allometric-scaled thicknesses of those muscles should be analyzed in future research on adolescent athletes. © 2017 by the American Institute of Ultrasound in Medicine.

  13. Potential of dynamically harmonized Fourier transform ion cyclotron resonance cell for high-throughput metabolomics fingerprinting: control of data quality.

    PubMed

    Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle

    2018-01-01

    Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from the repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of possible technical drifts. RP values greater than 10^6 and mass measurement accuracy lower than 1 ppm were obtained using broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative-ion-mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell to perform large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.

  14. Fault severity assessment for rolling element bearings using the Lempel-Ziv complexity and continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Hong, Hoonbin; Liang, Ming

    2009-02-01

    This paper proposes a new version of the Lempel-Ziv complexity as a bearing fault (single point) severity measure based on the continuous wavelet transform (CWT) results, and attempts to address the issues present in the current version of the Lempel-Ziv complexity measure. To establish the relationship between the Lempel-Ziv complexity and bearing fault severity, an analytical model for a single-point defective bearing is adopted and the factors contributing to the complexity value are explained. To avoid the ambiguity between fault and noise, the Lempel-Ziv complexity is jointly applied with the CWT. The CWT is used to identify the best scale where the fault resides and eliminate the interferences of noise and irrelevant signal components as much as possible. Then, the Lempel-Ziv complexity values are calculated for both the envelope and high-frequency carrier signal obtained from wavelet coefficients at the best scale level. As the noise and other un-related signal components have been largely removed, the Lempel-Ziv complexity value will be mostly contributed by the bearing system and hence can be reliably used as a bearing fault measure. The applications to the bearing inner- and outer-race fault signals have demonstrated that the revised Lempel-Ziv complexity can effectively measure the severity of both inner- and outer-race faults. Since the complexity values are not dependent on the magnitude of the measured signal, the proposed method is less sensitive to the data sets measured under different data acquisition conditions. In addition, as the normalized complexity values are bounded between zero and one, it is convenient to observe the fault growing trend by examining the Lempel-Ziv complexity.
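
    A hedged sketch of a basic Lempel-Ziv complexity measure applied to a binarized envelope, to make the idea above concrete; the median binarization, the phrase-counting variant used, and the synthetic envelope are assumptions, and the paper's revised version of the measure and its CWT-based scale selection are not reproduced.

        import numpy as np

        def lempel_ziv_complexity(binary_sequence):
            # Count the phrases in a simple Lempel-Ziv parsing of a 0/1 sequence:
            # each phrase is the shortest substring starting at the current position
            # that has not appeared in the prefix before it. Normalize by n / log2(n).
            s = "".join("1" if b else "0" for b in binary_sequence)
            n, i, c = len(s), 0, 0
            while i < n:
                l = 1
                while i + l <= n and s[i:i + l] in s[:i]:
                    l += 1
                c += 1
                i += l
            return c / (n / np.log2(n))

        # Toy usage: binarize an envelope around its median before computing complexity
        rng = np.random.default_rng(4)
        envelope = np.abs(np.sin(np.linspace(0, 60, 4096))) + 0.2 * rng.standard_normal(4096)
        binary = envelope > np.median(envelope)
        print(lempel_ziv_complexity(binary))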

  15. FT-IR, FT-Raman, NMR studies and ab initio-HF, DFT-B3LYP vibrational analysis of 4-chloro-2-fluoroaniline.

    PubMed

    Arivazhagan, M; Anitha Rexalin, D

    2012-10-01

    The Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectra of 4-chloro-2-fluoroaniline (CFA) have been recorded and analyzed. The equilibrium geometry, bonding features and harmonic vibrational frequencies have been investigated with the help of ab initio and density functional theory (DFT) methods. The assignments of the vibrational spectra have been carried out with the help of normal coordinate analysis (NCA) following the scaled quantum mechanical force field methodology. The (1)H and (13)C nuclear magnetic resonance (NMR) chemical shifts of the molecule are calculated by the Gauge including atomic orbital (GIAO) method. The first order hyperpolarizability (β(0)) of this novel molecular system and related properties (β, α(0) and Δα) of CFA are calculated using B3LYP/6-311++G(d,p) and HF/6-311++G(d,p) methods on the finite-field approach. The calculated results also show that the CFA molecule might have microscopic nonlinear optical (NLO) behavior with non-zero values. Stability of the molecule arising from hyper conjugative interactions, charge delocalization has been analyzed using natural bond orbital (NBO) analysis. The result confirms the occurrence of intramolecular charge transfer (ICT) within the molecule. The HOMO-LUMO energies UV-vis spectral analysis and MEP are performed by B3LYP/6-311++G(d,p) approach. A detailed interpretation of the infrared and Raman spectra of CFA is also reported based on total energy distribution (TED). The difference between the observed and scaled wave number values of the most of the fundamentals is very small. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Quantum phase transition and quench dynamics in the anisotropic Rabi model

    NASA Astrophysics Data System (ADS)

    Shen, Li-Tuo; Yang, Zhen-Biao; Wu, Huai-Zhi; Zheng, Shi-Biao

    2017-01-01

    We investigate the quantum phase transition (QPT) and quench dynamics in the anisotropic Rabi model when the ratio of the qubit transition frequency to the oscillator frequency approaches infinity. Based on the Schrieffer-Wolff transformation, we find an anti-Hermitian operator that maps the original Hamiltonian into a one-dimensional oscillator Hamiltonian within the spin-down subspace. We analytically derive the eigenenergy and eigenstate of the normal and superradiant phases and demonstrate that the system undergoes a second-order quantum phase transition at a critical border. The critical border is a straight line in a two-dimensional parameter space which essentially extends the dimensionality of QPT in the Rabi model. By combining the Kibble-Zurek mechanism and the adiabatic dynamics method, we find that the residual energy vanishes as the quench time tends to zero, which is a sharp contrast to the universal scaling where the residual energy diverges in the same limit.

  17. Invisibility Cloak Printed on a Photonic Chip

    PubMed Central

    Feng, Zhen; Wu, Bing-Hong; Zhao, Yu-Xi; Gao, Jun; Qiao, Lu-Feng; Yang, Ai-Lin; Lin, Xiao-Feng; Jin, Xian-Min

    2016-01-01

    Invisibility cloak capable of hiding an object can be achieved by properly manipulating electromagnetic field. Such a remarkable ability has been shown in transformation and ray optics. Alternatively, it may be realistic to create a spatial cloak by means of confining electromagnetic field in three-dimensional arrayed waveguides and introducing appropriate collective curvature surrounding an object. We realize the artificial structure in borosilicate by femtosecond laser direct writing, where we prototype up to 5,000 waveguides to conceal millimeter-scale volume. We characterize the performance of the cloak by normalized cross correlation, tomography analysis and continuous three-dimensional viewing angle scan. Our results show invisibility cloak can be achieved in waveguide optics. Furthermore, directly printed invisibility cloak on a photonic chip may enable controllable study and novel applications in classical and quantum integrated photonics, such as invisualising a coupling or swapping operation with on-chip circuits of their own. PMID:27329510

  18. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    PubMed

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses ToF-SIMS principle and instrumentation including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues and extracellular matrix for the ToF-SIMS are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
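
    A minimal sketch of the generic preprocessing steps listed above (after peak selection): normalization of each spectrum to its total selected-ion count, an optional variance-stabilizing transformation, mean-centering, and unit-variance scaling prior to multivariate analysis. The matrix dimensions and the square-root transformation are illustrative assumptions rather than a prescribed ToF-SIMS protocol.

        import numpy as np

        rng = np.random.default_rng(5)
        peak_intensities = rng.gamma(2.0, 50.0, size=(24, 300))   # 24 spectra x 300 selected peaks

        # 1) Normalize each spectrum to its total selected-ion count
        normalized = peak_intensities / peak_intensities.sum(axis=1, keepdims=True)

        # 2) Optional variance-stabilizing transformation (square root is a common choice)
        transformed = np.sqrt(normalized)

        # 3) Mean-center and scale each peak to unit variance (autoscaling) before PCA/PLS
        centered = transformed - transformed.mean(axis=0)
        autoscaled = centered / (transformed.std(axis=0, ddof=1) + 1e-12)

        print(autoscaled.shape, autoscaled.mean(axis=0)[:3], autoscaled.std(axis=0, ddof=1)[:3])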

  19. Operational calibration of Geostationary Operational Environmental Satellite-8 and -9 imagers and sounders.

    PubMed

    Weinreb, M; Jamieson, M; Fulton, N; Chen, Y; Johnson, J X; Bremer, J; Smith, C; Baucom, J

    1997-09-20

    We describe the operational in-orbit calibration of the Geostationary Operational Environmental Satellite (GOES)-8 and -9 imagers and sounders. In the infrared channels the calibration is based on observations of space and an onboard blackbody. The calibration equation expresses radiance as a quadratic in instrument output. To suppress noise in the blackbody sequences, we filter the calibration slopes. The calibration equation also accounts for an unwanted variation of the reflectances of the instruments' scan mirrors with east-west scan position, which was not discovered until the instruments were in orbit. The visible channels are not calibrated, but the observations are provided relative to the level of space and are normalized to minimize east-west striping in the images. Users receive scaled radiances in a GOES variable format (GVAR) data stream. We describe the procedure users can apply to transform GVAR counts into radiances, temperatures, and mode-A counts.
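
    A hedged sketch of the stated form of the infrared calibration, radiance as a quadratic in instrument output, followed by an inverse Planck conversion from radiance to brightness temperature. The calibration coefficients and channel wavenumber below are placeholder values, not actual GOES-8/9 or GVAR constants, and the documented user procedure referred to above is not reproduced here.

        import numpy as np

        # Placeholder calibration coefficients (not real GVAR/GOES constants)
        B0, B1, B2 = -0.5, 0.01, 1.0e-7      # radiance = B0 + B1*count + B2*count**2
        NU = 930.0                           # channel central wavenumber (cm^-1), illustrative
        C1, C2 = 1.191066e-5, 1.438833       # Planck constants, mW/(m^2 sr cm^-4) and K cm

        def counts_to_radiance(counts):
            counts = np.asarray(counts, dtype=float)
            return B0 + B1 * counts + B2 * counts**2

        def radiance_to_brightness_temperature(radiance):
            # Invert the Planck function at wavenumber NU.
            return C2 * NU / np.log(1.0 + C1 * NU**3 / radiance)

        rad = counts_to_radiance([12000, 16000, 20000])
        print(rad, radiance_to_brightness_temperature(rad))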

  20. Invisibility Cloak Printed on a Photonic Chip

    NASA Astrophysics Data System (ADS)

    Feng, Zhen; Wu, Bing-Hong; Zhao, Yu-Xi; Gao, Jun; Qiao, Lu-Feng; Yang, Ai-Lin; Lin, Xiao-Feng; Jin, Xian-Min

    2016-06-01

    Invisibility cloak capable of hiding an object can be achieved by properly manipulating electromagnetic field. Such a remarkable ability has been shown in transformation and ray optics. Alternatively, it may be realistic to create a spatial cloak by means of confining electromagnetic field in three-dimensional arrayed waveguides and introducing appropriate collective curvature surrounding an object. We realize the artificial structure in borosilicate by femtosecond laser direct writing, where we prototype up to 5,000 waveguides to conceal millimeter-scale volume. We characterize the performance of the cloak by normalized cross correlation, tomography analysis and continuous three-dimensional viewing angle scan. Our results show invisibility cloak can be achieved in waveguide optics. Furthermore, directly printed invisibility cloak on a photonic chip may enable controllable study and novel applications in classical and quantum integrated photonics, such as invisualising a coupling or swapping operation with on-chip circuits of their own.

  1. Measuring Joule heating and strain induced by electrical current with Moire interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Bicheng; Basaran, Cemal

    2011-04-01

    This study proposes a new method to locate and measure the temperature of the hot spots caused by Joule heating by measuring the free thermal expansion in-plane strain. It is demonstrated that the hotspot caused by the Joule heating in a thin metal film/plate structure can be measured by phase-shifting Moire interferometry with continuous wavelet transform (PSMI/CWT) at the microscopic scale. A demonstration on a copper film is conducted to verify the theory under different current densities. A correlation between the current density and strain in two orthogonal directions (one in the direction of the current flow) is proposed. The method can also be used for the measurement of the Joule heating in the microscopic solid structures in the electronic packaging devices. It is shown that a linear relationship exists between current density squared and normal strains.

  2. Method for production of petroselinic acid and OMEGA12 hexadecanoic acid in transgenic plants

    DOEpatents

    Ohlrogge, John B.; Cahoon, Edgar B.; Shanklin, John; Somerville, Christopher R.

    1995-01-01

    The present invention relates to a process for producing lipids containing the fatty acid petroselinic acid in plants. The production of petroselinic acid is accomplished by genetically transforming plants which do not normally accumulate petroselinic acid with a gene for a ω12 desaturase from another species which does normally accumulate petroselinic acid.

  3. Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.

    2009-04-01

    This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased power of detection in comparison with the corresponding original (untransformed) methods, and the transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.

  4. Malignant transformation of solitary spinal osteochondroma in two mature dogs.

    PubMed

    Green, E M; Adams, W M; Steinberg, H

    1999-01-01

    Canine osteochondroma is an uncommon bony tumor that arises in skeletally immature animals. Consequently, clinical signs typically occur in young dogs as a result of impingement of normal structures by the tumor. Radiographically, osteochondromas are benign in appearance. They are well circumscribed and cause neither bony lysis nor periosteal proliferation. Osteochondromas may occur in two forms: solitary or multiple. Although the histology and biologic behavior of the two forms are identical, the multiple form has been termed multiple cartilaginous exostoses. Malignant transformation of multiple cartilaginous exostoses has been reported in three mature dogs. We report two dogs with malignant transformation of solitary spinal osteochondromas. Both underwent transformation to osteosarcoma. Despite the benign radiographic appearance of osteochondromas and multiple cartilaginous exostoses, clinical signs should alert the clinician to the possibility of malignant transformation.

  5. CELL SHAPE AND HEXOSE TRANSPORT IN NORMAL AND VIRUS-TRANSFORMED CELLS IN CULTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bissell, M.J.; Farson, D.; Tung, A.S.C.

    1976-07-01

    The rate of hexose transport was compared in normal and virus-transformed cells on a monolayer and in suspension. It was shown that: (1) Both trypsin-removed cells and those suspended for an additional day in methyl cellulose had decreased rates of transport and lower available water space when compared with cells on a monolayer. Thus, cell shape affects the overall rate of hexose transport, especially at higher sugar concentrations. (2) Even in suspension, the initial transport rates remained higher in transformed cells with reference to normal cells. Scanning electron micrographs of normal and transformed chick cells revealed morphological differences only in the flat state. This indicates that the increased rate of hexose transport after transformation is not due to a difference in the shape of these cells on a monolayer. The relation between the geometry of cells, transport rates, and growth regulation is undoubtedly very complex, and our knowledge of these relationships is still very elementary. In a recent review on the influence of geometry on control of cell growth, Folkman and Greenspan (1) pointed out that the permeability of cells in a flat versus a spherical state may indeed be very different. The growth properties of cells on a surface and in suspension have been compared often (1-5). However, with one exception, little is known about the changes in transport properties when cell shape is changed. Foster and Pardee (6) demonstrated that the active transport of α-aminoisobutyric acid was reduced 2.5 times in suspension cultures of Chinese hamster cells with respect to the cells grown on a coverslip. They attributed this to the smaller surface area of suspended cells. While it is not clear why active transport should be dependent on the surface area available, it is possible that once the cells assume a spherical configuration, the carrier proteins are redistributed in such a way as to make them less accessible to the substrate. What happens to facilitated and nonmediated diffusion when cells are placed in suspension has not been determined. The transport of hexoses into animal cells in culture has been shown to occur by facilitated diffusion (7). In chick embryo fibroblasts in culture, there is more than one component to this transport system: a saturable carrier-mediated transport with a Km for 2-deoxy-D-glucose (2DG) of about 1-5 mM (8-11) and a nonsaturable component which may include a low affinity transport site and/or nonmediated diffusion (8, 10, 11). It was shown previously (11) that growth rate, cell density, glucose deprivation, and virus transformation may alter not only the overall transport rate but also the rate of transport of one mode relative to the other. The rate of the nonmediated uptake, at least, is dependent upon the surface area available, and in addition the total level of transport is dependent on the available water space (which in turn is roughly related to cell volume). Since it is estimated that the surface area may change as much as tenfold when cells go from a flat configuration to a spherical one (1), it is apparent that the cell shape could play a role in overall transport characteristics of cultured cells. Using glucose analogues, 2DG and 3-O-methylglucose (3MG), we compared the transport properties of normal and virus-transformed cells in suspension and on monolayers. It was found that both the rates as well as the total levels of transport were decreased after the cells were placed in suspension, with the nonsaturable component being most affected. Nevertheless, a difference in the initial rates of transport between normal and transformed cells remained, indicating that the difference is independent of cell shape.

  6. Energy transformations associated with the synoptic and planetary scales during the evolution of a blocking anticyclone and an upstream explosively-developing cyclone

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.; Tsou, Chih-Hua

    1992-01-01

    The eddy kinetic energy (KE), release of eddy potential energy, generation of eddy kinetic energy, and exchange between eddy and zonal kinetic energy are investigated for a blocking anticyclone over the North Atlantic Ocean and an extratropical cyclone that developed during January 17-21, 1979. The results indicate that KE was maintained by baroclinic conversion of potential to kinetic. As released potential energy was being used to generate KE, a portion of the KE was barotropically converted to zonal KE. These transformations were dominated by the synoptic-scale component. While changes in the mass field depended not only on the synoptic scale but also on the interactions between the synoptic and planetary scales, the corresponding changes in the eddy motion fields responded largely to synoptic-scale processes.

  7. Invariant object recognition based on the generalized discrete radon transform

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2004-04-01

    We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.

  8. Mapping soil total nitrogen of cultivated land at county scale by using hyperspectral image

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Zhang, Li Yan; Shu, Meiyan; Yang, Guijun

    2018-02-01

    Monitoring the total nitrogen content (TNC) in the soil of cultivated land quantitatively and characterizing its spatial distribution are helpful for crop growth, soil fertility adjustment, and the sustainable development of agriculture. The study aimed to develop a general method to map the total nitrogen content in the soil of cultivated land from HSI imagery at the county scale. Several mathematical transformations were used to improve the expressive power of the HSI image. The correlations between soil TNC and the reflectivity and its mathematical transformations were analyzed. The sensitive bands and their transformations were then screened to develop an optimized model for mapping soil TNC in Anping County based on multiple linear regression. Results showed that the 14th, 16th, 19th, 37th, and 60th bands, with different mathematical transformations, were screened as sensitive bands. Differential transformation was helpful for reducing the interference of noise with the diagnostic ability of the target spectrum. The coefficient of determination of the first-order differential of the logarithmic transformation was the largest (0.505), while its RMSE was the lowest. The study confirmed the first-order differential of the logarithmic transformation as the optimal inversion model for soil TNC, which was used to map the soil TNC of cultivated land in the study area.
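
    A minimal sketch of the winning transformation described above: take the first-order differential of the log-transformed reflectance along the band axis and fit a multiple linear regression on a few pre-selected sensitive bands. The spectra, TNC values, and band indices below are synthetic placeholders, so the fitted statistics are meaningless; only the processing pattern is illustrated.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        n_samples, n_bands = 120, 115
        reflectance = 0.2 + 0.6 * rng.random((n_samples, n_bands))   # synthetic HSI spectra
        soil_tnc = rng.normal(1.2, 0.3, n_samples)                   # synthetic total N (g/kg)

        # First-order differential of the log-transformed reflectance along the band axis
        log_refl = np.log(reflectance)
        first_diff = np.diff(log_refl, axis=1)                       # shape (n_samples, n_bands - 1)

        # Multiple linear regression on a few pre-selected sensitive bands (indices are illustrative)
        sensitive_bands = [13, 15, 18, 36, 59]                       # 0-based stand-ins for bands 14/16/19/37/60
        X = first_diff[:, sensitive_bands]
        model = LinearRegression().fit(X, soil_tnc)
        print("R^2 on the calibration set:", model.score(X, soil_tnc))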

  9. Local subsystems in gauge theory and gravity

    DOE PAGES

    Donnelly, William; Freidel, Laurent

    2016-09-16

    We consider the problem of defining localized subsystems in gauge theory and gravity. Such systems are associated to spacelike hypersurfaces with boundaries and provide the natural setting for studying entanglement entropy of regions of space. We present a general formalism to associate a gauge-invariant classical phase space to a spatial slice with boundary by introducing new degrees of freedom on the boundary. In Yang-Mills theory the new degrees of freedom are a choice of gauge on the boundary, transformations of which are generated by the normal component of the nonabelian electric field. In general relativity the new degrees of freedom are the location of a codimension-2 surface and a choice of conformal normal frame. These degrees of freedom transform under a group of surface symmetries, consisting of diffeomorphisms of the codimension-2 boundary, and position-dependent linear deformations of its normal plane. We find the observables which generate these symmetries, consisting of the conformal normal metric and curvature of the normal connection. We discuss the implications for the problem of defining entanglement entropy in quantum gravity. Finally, our work suggests that the Bekenstein-Hawking entropy may arise from the different ways of gluing together two partial Cauchy surfaces at a cross-section of the horizon.

  10. Cluster State Quantum Computation

    DTIC Science & Technology

    2014-02-01

    information of relevance to the transformation. We define the fidelity as the probability that the desired target gate ATar has been faithfully ... implemented on the computational modes given a successful measurement of the ancilla modes ... since Tr(ATar† ATar) = 2Mc for a properly normalized...
    photonic gates ... The optimization method we have developed maximizes the success probability S for a given target transformation ATar, for given
  11. Chemical Carcinogen-Induced Changes in tRNA Metabolism in Human Cells.

    DTIC Science & Technology

    1981-11-01

    the resolution and quantitation of modified nucleosides in the urine of cancer patients would not be particularly useful for the cell culture studies ... Comparison of nucleic acid catabolism by normal human fibroblasts and fibroblasts transformed with methylazoxymethyl alcohol (MAMA), an activated ... catabolite in long-term, pulse-chase experiments. However, the kinetics of catabolism differed, in that only the MAMA-transformed cells had generated

  12. A platform for exploration into chaining of web services for clinical data transformation and reasoning.

    PubMed

    Maldonado, José Alberto; Marcos, Mar; Fernández-Breis, Jesualdo Tomás; Parcero, Estíbaliz; Boscá, Diego; Legaz-García, María Del Carmen; Martínez-Salvador, Begoña; Robles, Montserrat

    2016-01-01

    The heterogeneity of clinical data is a key problem in the sharing and reuse of Electronic Health Record (EHR) data. We approach this problem through the combined use of EHR standards and semantic web technologies, concretely by means of clinical data transformation applications that convert EHR data in proprietary format, first into clinical information models based on archetypes, and then into RDF/OWL extracts which can be used for automated reasoning. In this paper we describe a proof-of-concept platform to facilitate the (re)configuration of such clinical data transformation applications. The platform is built upon a number of web services dealing with transformations at different levels (such as normalization or abstraction), and relies on a collection of reusable mappings designed to solve specific transformation steps in a particular clinical domain. The platform has been used in the development of two different data transformation applications in the area of colorectal cancer.

  13. Similarity transformation for equilibrium boundary layers, including effects of blowing and suction

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Hussain, Fazle

    2017-03-01

    We present a similarity transformation for the mean velocity profiles in sink flow turbulent boundary layers, including effects of blowing and suction. It is based on symmetry analysis which transforms the governing partial differential equations (for mean mass and momentum) into an ordinary differential equation and yields a new result including an exact, linear relation between the mean normal (V) and streamwise (U) velocities. A characteristic length function is further introduced which, under a first-order expansion (whose coefficient is η) in the wall blowing and suction velocity, leads to the similarity transformation for U with η ≈ -1/9. This transformation is shown to be a group invariant and maps different U profiles under different blowing and suction conditions into a (universal) profile for no blowing or suction. Its inverse transformation enables predictions of all mean quantities in the mean mass and momentum equations, in good agreement with DNS data.

  14. Cell transformation mediated by chromosomal deoxyribonucleic acid of polyoma virus-transformed cells.

    PubMed Central

    Della Valle, G; Fenton, R G; Basilico, C

    1981-01-01

    To study the mechanism of deoxyribonucleic acid (DNA)-mediated gene transfer, normal rat cells were transfected with total cellular DNA extracted from polyoma virus-transformed cells. This resulted in the appearance of the transformed phenotype in 1 × 10^-6 to 3 × 10^-6 of the transfected cells. Transformation was invariably associated with the acquisition of integrated viral DNA sequences characteristic of the donor DNA. This was caused not by the integration of free DNA molecules, but by the transfer of large DNA fragments (10 to 20 kilobases) containing linked cellular and viral sequences. Although Southern blot analysis showed that integration did not appear to occur in a homologous region of the recipient chromosome, the frequency of transformation was rather high when compared with that of purified polyoma DNA, perhaps due to "position" effects or to the high efficiency of recombination of large DNA fragments. PMID:6100965

  15. Good Practices for Learning to Recognize Actions Using FV and VLAD.

    PubMed

    Wu, Jianxin; Zhang, Yu; Lin, Weiyao

    2016-12-01

    High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
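
    The paper's specific pooling strategy and its three FV/VLAD transformations are not spelled out in this abstract. As a hedged illustration of the kind of post-processing involved, the sketch below (Python with NumPy assumed) applies the generic signed power normalization followed by L2 normalization commonly used to make FV/VLAD values closer to Gaussian; the dimensionality and alpha value are placeholders.

        import numpy as np

        def power_l2_normalize(x, alpha=0.5):
            # Signed power normalization ("square-rooting" when alpha = 0.5)
            # reduces burstiness and makes the value distribution more Gaussian,
            # followed by L2 normalization of the whole vector.
            x = np.sign(x) * np.abs(x) ** alpha
            norm = np.linalg.norm(x)
            return x / norm if norm > 0 else x

        # usage on a hypothetical 8192-dimensional Fisher vector
        fv = np.random.randn(8192)
        fv_hat = power_l2_normalize(fv)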

  16. Extracting surface waves, hum and normal modes: time-scale phase-weighted stack and beyond

    NASA Astrophysics Data System (ADS)

    Ventosa, Sergi; Schimmel, Martin; Stutzmann, Eleonore

    2017-10-01

    Stacks of ambient noise correlations are routinely used to extract empirical Green's functions (EGFs) between station pairs. The time-frequency phase-weighted stack (tf-PWS) is a physically intuitive nonlinear denoising method that uses phase coherence to improve EGF convergence when the performance of conventional linear averaging methods is not sufficient. The high computational cost of a continuous approach to the time-frequency transformation is currently a main limitation in ambient noise studies. We introduce the time-scale phase-weighted stack (ts-PWS) as an alternative extension of the phase-weighted stack that uses complex frames of wavelets to build a time-frequency representation that is much more efficient and faster to compute, and that preserves the performance and flexibility of the tf-PWS. In addition, we propose two strategies, the unbiased phase coherence and the two-stage ts-PWS methods, to further improve noise attenuation, the quality of the extracted signals, and convergence speed. We demonstrate that these approaches enable the extraction of minor- and major-arc Rayleigh waves (up to the sixth Rayleigh wave train) from many years of data from the GEOSCOPE global network. Finally, we also show that fundamental spheroidal modes can be extracted from these EGFs.
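
    The ts-PWS itself works on a wavelet-frame (time-scale) representation, which is not reproduced here. As a minimal sketch of the underlying phase-coherence weighting only, assuming NumPy/SciPy and illustrative array shapes and sharpness power nu, a time-domain phase-weighted stack can be written as:

        import numpy as np
        from scipy.signal import hilbert

        def phase_weighted_stack(traces, nu=2.0):
            # traces: (N, M) array of N noise correlations with M samples each.
            analytic = hilbert(traces, axis=1)                  # analytic signal per trace
            phasors = analytic / (np.abs(analytic) + 1e-12)     # unit phasors exp(i*phi_k(t))
            coherence = np.abs(phasors.mean(axis=0))            # 0 (incoherent) .. 1 (coherent)
            return traces.mean(axis=0) * coherence ** nu        # down-weight incoherent samples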

  17. An evaluation of the ecological and environmental security on China's terrestrial ecosystems.

    PubMed

    Zhang, Hongqi; Xu, Erqi

    2017-04-11

    With rapid economic growth, industrialization, and urbanization, various ecological and environmental problems have arisen that threaten and undermine China's sustainable development and domestic survival. At the national scale, assessment has remained qualitative or semi-quantitative, lacking a quantitative evaluation and a spatial visualization of ecological and environmental security. This study collected 14 indicators of water, land, air, and biodiversity security to compile a spatial evaluation of ecological and environmental security in the terrestrial ecosystems of China. After area-weighted normalization and scaling transformations, the veto aggregation method (focusing on the limiting indicator) and the balanced aggregation method (measuring balanced performance among different indicators) were used to aggregate the security evaluation indicators, as sketched below. Results showed that water, land, air, and biodiversity security presented different spatial distributions. A relatively serious ecological and environmental security crisis was found in China, with obvious spatial variation in the security evaluation scores. Hotspot areas at the danger level, scattered throughout the country, were identified. The spatial diversity and causes of ecological and environmental problems in different regions were analyzed, and proposals for the spatial integration of regional development and for improving ecological and environmental security were put forward.
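
    The abstract does not give the exact aggregation formulas. A minimal sketch, assuming min-max scaling of each indicator to [0, 1], a minimum operator for the veto ("limiting indicator") aggregation and a weighted mean for the balanced aggregation, could look like this in Python/NumPy:

        import numpy as np

        def min_max_normalize(x, lo, hi):
            # Scale an indicator onto [0, 1]; handling of "lower is better"
            # indicators is omitted for brevity.
            return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

        def veto_aggregate(scores):
            # "One-out, all-out": the worst (limiting) indicator sets the result.
            return np.min(scores, axis=-1)

        def balanced_aggregate(scores, weights=None):
            # Weighted mean rewarding balanced performance across indicators.
            return np.average(scores, axis=-1, weights=weights)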

  18. Remote sensing of soil organic matter of farmland with hyperspectral image

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Wang, Lei; Yang, Guijun; Zhang, Liyan

    2017-10-01

    Monitoring the soil organic matter (SOM) of cultivated land quantitatively and understanding its spatial variation are helpful for fertility adjustment and the sustainable development of agriculture. The study aimed to analyze the relationship between SOM and the reflectivity of hyperspectral imagery at different pixel sizes, and to develop an optimal model for estimating SOM with imaging spectroscopy. The wavelet transform method was used to analyze the correlation between hyperspectral reflectivity and SOM. The optimal pixel size and the sensitive wavelet feature scales were then screened to develop the inversion model of SOM, as sketched below. Results showed that the wavelet transform of the soil hyperspectra helped to improve the correlation between the wavelet features and SOM. In the visible wavelength range, the sensitive wavelet features of SOM were mainly concentrated at 460-603 nm; as the wavelength increased, the correlation coefficient at the corresponding wavelet scale rose to a maximum and then gradually decreased. In the near-infrared wavelength range, the sensitive wavelet features of SOM were mainly concentrated at 762-882 nm; as the wavelength increased, the wavelet scale gradually decreased. The study developed a multivariate model of continuous wavelet transform (CWT) features by stepwise linear regression (SLR). The CWT-SLR models reached higher accuracies than the univariate models. As the resampling scale increased, the accuracies of the CWT-SLR models gradually increased, while the determination coefficients (R2) fluctuated from 0.52 to 0.59. The R2 at the 5×5 pixel scale was highest (0.5954) and the RMSE lowest (2.41 g/kg). This indicates that a multivariate model based on the continuous wavelet transform has a better ability to estimate SOM than a univariate model.
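
    The wavelet family, scales and selected features used in the paper are not given in this abstract. A hedged Python sketch, assuming PyWavelets and scikit-learn, with a placeholder Mexican-hat wavelet, placeholder scales and a pre-selected feature index standing in for the full stepwise selection, illustrates the CWT-plus-regression idea:

        import numpy as np
        import pywt
        from sklearn.linear_model import LinearRegression

        def cwt_features(spectra, scales=(2, 4, 8, 16, 32), wavelet="mexh"):
            # spectra: (n_samples, n_bands) soil reflectance curves.
            feats = []
            for s in spectra:
                coeffs, _ = pywt.cwt(s, scales, wavelet)   # (n_scales, n_bands)
                feats.append(coeffs.ravel())
            return np.asarray(feats)

        def fit_som_model(spectra, som, feature_idx):
            # Multiple linear regression on a few pre-selected wavelet features
            # (a stand-in for the stepwise selection described in the paper).
            X = cwt_features(spectra)[:, feature_idx]
            return LinearRegression().fit(X, som)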

  19. Exact solution of two collinear cracks normal to the boundaries of a 1D layered hexagonal piezoelectric quasicrystal

    NASA Astrophysics Data System (ADS)

    Zhou, Y.-B.; Li, X.-F.

    2018-07-01

    The electroelastic problem related to two collinear cracks of equal length, normal to the boundaries of a one-dimensional hexagonal piezoelectric quasicrystal layer, is analysed. By using the finite Fourier transform, a mixed boundary value problem is solved when antiplane mechanical loading and inplane electric loading are applied. The problem is reduced to triple series equations, which are then transformed into a singular integral equation. For uniform remote loading, an exact solution is obtained in closed form, and explicit expressions for the electroelastic field are determined. The intensity factors of the electroelastic field and the energy release rate at the inner and outer crack tips are given and presented graphically.

  20. Diagnostics of normal and cancer tissues by fiberoptic evanescent wave Fourier transform IR (FEW-FTIR) spectroscopy

    NASA Astrophysics Data System (ADS)

    Afanasyeva, Natalia I.

    1998-06-01

    Fourier Transform Infrared (FTIR) Spectroscopy using optical fibers operated in the attenuated total reflection (ATR) regime in the mid-IR region in the range 850 to 4000 cm-1 has recently found an application in the noninvasive diagnostics of tissues in vivo. The method is suitable for nondestructive, nontoxic, fast (seconds), direct measurements of the spectra of normal and pathological tissues in vitro, ex vivo, and in vivo in real time. The aim of our studies is the express testing of various tumor tissues at the early stages of their development. The method is expected to be further developed for endoscopic and biopsy applications as well as for the research of different materials.

  1. Histomorphometric analysis of nuclear and cellular volumetric alterations in oral lichen planus, lichenoid lesions and normal oral mucosa using image analysis software.

    PubMed

    Venkatesiah, Sowmya S; Kale, Alka D; Hallikeremath, Seema R; Kotrashetti, Vijayalakshmi S

    2013-01-01

    Lichen planus is a chronic inflammatory mucocutaneous disease that clinically and histologically resembles lichenoid lesions, although the latter have a different etiology. Though criteria have been suggested for differentiating oral lichen planus from lichenoid lesions, confusion still prevails. The aim was to study the cellular and nuclear volumetric features in the epithelium of normal mucosa, lichen planus, and lichenoid lesions to determine variations, if any. A retrospective study was done on 25 histologically diagnosed cases each of oral lichen planus, oral lichenoid lesions, and normal oral mucosa. Cellular and nuclear morphometric measurements were assessed on hematoxylin and eosin sections using image analysis software, and compared using analysis of variance (ANOVA) and Tukey's post-hoc test. The basal cells of oral lichen planus showed a significant increase in the mean nuclear and cellular areas, and in nuclear volume; there was a significant decrease in the nuclear-cytoplasmic ratio as compared to normal mucosa. The suprabasal cells showed a significant increase in nuclear and cellular areas, nuclear diameter, and nuclear and cellular volumes as compared to normal mucosa. The basal cells of oral lichenoid lesions showed a significant difference in the mean cellular area and the mean nuclear-cytoplasmic ratio as compared to normal mucosa, whereas the suprabasal cells differed significantly from normal mucosa in the mean nuclear area and the nuclear and cellular volumes. Morphometry can differentiate lesions of oral lichen planus and oral lichenoid lesions from normal oral mucosa. Thus, morphometry may serve to discriminate between normal and premalignant lichen planus and lichenoid lesions. These lesions might carry a high risk of malignant transformation and may behave similarly in that respect.

  2. Invariant 2D object recognition using the wavelet transform and structured neural networks

    NASA Astrophysics Data System (ADS)

    Khalil, Mahmoud I.; Bayoumi, Mohamed M.

    1999-03-01

    This paper applies the dyadic wavelet transform and the structured neural networks approach to recognize 2D objects under translation, rotation, and scale transformation. Experimental results are presented and compared with traditional methods. The experimental results showed that this refined technique successfully classified the objects and outperformed some traditional methods especially in the presence of noise.

  3. High-efficiency Agrobacterium-mediated transformation of Norway spruce (Picea abies) and loblolly pine (Pinus taeda)

    NASA Technical Reports Server (NTRS)

    Wenck, A. R.; Quinn, M.; Whetten, R. W.; Pullman, G.; Sederoff, R.; Brown, C. S. (Principal Investigator)

    1999-01-01

    Agrobacterium-mediated gene transfer is the method of choice for many plant biotechnology laboratories; however, large-scale use of this organism in conifer transformation has been limited by difficult propagation of explant material, selection efficiencies and low transformation frequency. We have analyzed co-cultivation conditions and different disarmed strains of Agrobacterium to improve transformation. Additional copies of virulence genes were added to three common disarmed strains. These extra virulence genes included either a constitutively active virG or extra copies of virG and virB, both from pTiBo542. In experiments with Norway spruce, we increased transformation efficiencies 1000-fold from initial experiments where little or no transient expression was detected. Over 100 transformed lines expressing the marker gene beta-glucuronidase (GUS) were generated from rapidly dividing embryogenic suspension-cultured cells co-cultivated with Agrobacterium. GUS activity was used to monitor transient expression and to further test lines selected on kanamycin-containing medium. In loblolly pine, transient expression increased 10-fold utilizing modified Agrobacterium strains. Agrobacterium-mediated gene transfer is a useful technique for large-scale generation of transgenic Norway spruce and may prove useful for other conifer species.

  4. Understanding Antipsychotic Drug Treatment Effects: A Novel Method to Reduce Pseudospecificity of the Positive and Negative Syndrome Scale (PANSS) Factors.

    PubMed

    Hopkins, Seth C; Ogirala, Ajay; Loebel, Antony; Koblan, Kenneth S

    2017-12-01

    The Positive and Negative Syndrome Scale (PANSS) is the most widely used efficacy measure in acute treatment studies of schizophrenia. However, interpretation of the efficacy of antipsychotics in improving specific symptom domains is confounded by moderate-to-high correlations among standard (Marder) PANSS factors. The authors review the results of an uncorrelated PANSS score matrix (UPSM) transform designed to reduce pseudospecificity in assessment of symptom change in patients with schizophrenia. Based on a factor analysis of five pooled, placebo-controlled lurasidone clinical trials (N=1,710 patients), a UPSM transform was identified that generated PANSS factors with high face validity (good correlation with standard Marder PANSS factors), and high specificity/orthogonality (low levels of between-factor correlation measuring change during treatment). Between-factor correlations were low at baseline for both standard (Marder) PANSS factors and transformed PANSS factors. However, when change in symptom severity was measured during treatment (in a pooled 5-study analysis), there was a notable difference: changes across the standard PANSS factors were highly correlated (the factors exhibited pseudospecificity), whereas the transformed PANSS factor change scores exhibited the same low levels of between-factor correlation observed at baseline. At Week 6-endpoint, correlations among PANSS factor severity scores were moderate-to-high for the standard factors (0.34 to 0.68), but remained low for the transformed factors (-0.22 to 0.20). As an additional validity check, we analyzed data from one of the original five pooled clinical trials that included other well-validated assessment scales (MADRS, Negative Symptom Assessment scale [NSA]). In this baseline analysis, UPSM-transformed PANSS factor severity scores (negative and depression factors) were found to correlate well with the MADRS and NSA. The availability of transformed PANSS factors with a high degree of orthogonality/specificity, but which retain a high degree of concurrent and face validity, can reduce pseudospecificity as a measurement confound, and should facilitate the drug development process, permitting a more accurate characterization of the efficacy of putative new agents in targeting specific symptom domains in patients with psychotic illness.
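
    The UPSM weights themselves are not given in this abstract. As a hedged sketch of the specificity check described above, between-factor correlations of change scores can be computed as follows (Python/NumPy; array shapes are illustrative):

        import numpy as np

        def between_factor_correlations(baseline, endpoint):
            # baseline, endpoint: (n_patients, n_factors) arrays of factor scores.
            # Low off-diagonal values indicate that change is measured specifically
            # by each factor, the property targeted by the UPSM transform.
            change = endpoint - baseline
            return np.corrcoef(change, rowvar=False)   # (n_factors, n_factors)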

  5. Expression of the leukemia-associated CBFβ/SMMHC chimeric gene causes transformation of 3T3 cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hajra, A.; Liu, P.; Collins, E.S.

    1994-09-01

    A pericentric inversion of chromosome 16 (inv(16)(p13;q22)) is consistently seen in acute myeloid leukemia of the M4Eo subtype. This inversion fuses almost the entire coding region of the gene encoding the β subunit of the heterodimeric transcription factor CBF/PEBP2 to the region of the MYH11 gene encoding the rod domain of the smooth muscle myosin heavy chain (SMMHC). To investigate the biological properties of the CBFβ/SMMHC fusion protein, we have generated 3T3 cell lines that stably express the CBFβ/SMMHC chimeric cDNA or the normal, nonchimeric CBFβ and SMMHC cDNAs. 3T3 cells expressing CBFβ/SMMHC acquire a transformed phenotype, as indicated by altered cell morphology, formation of foci, and growth in soft agar. Cells constitutively overexpressing the normal CBFβ cDNA or the rod region of SMMHC remain nontransformed. Western blot analysis using antibodies to CBFβ and the SMMHC rod demonstrates that stably transfected cells express the appropriate chimeric or normal protein. Electrophoretic mobility shift assays reveal that cells transformed by the chimeric cDNA do not have a CBF-DNA complex of the expected mobility, but instead contain a large complex with CBF DNA-binding activity that fails to migrate out of the gel wells. In order to define the regions of CBFβ/SMMHC necessary for 3T3 transformation, we have stably transfected cells with mutant CBFβ/SMMHC cDNAs containing various deletions of the coding region. Analysis of these cell lines indicates that the transformation property of CBFβ/SMMHC requires regions of CBFβ known to be necessary for association with the DNA-binding CBFα subunit, and also requires an intact SMMHC carboxyl terminus, which is necessary for formation of the coiled-coil domain of the myosin rod.

  6. Planet-B: Technical notes and drawings

    NASA Technical Reports Server (NTRS)

    Carignan, George R.

    1996-01-01

    The design of the transformer designated T101 (061-0351) in the Filament/Bias module (061-0119) of the Planet-B NMS instrument was verified because of its differences from the GCMS and INMS instrument designs. A breadboard representation of the Hybrid 2301065, Bias Drive A, driving a 2N3700 NPN transistor with dual 75 V secondaries and loads, was used to test the circuit. The initial transformer design, wound with bifilar secondaries, was too unstable to test. The second 1408 transformer, with a split bobbin and the feedback winding below the primary, was also found to be unstable (it was nearly impossible to keep the circuit from squegging). The third transformer tested has the feedback winding on the outside of the resonant winding. The primary goal of the design was to have as tight a magnetic coupling as possible to the resonant winding, and as loose a coupling as possible to the primary. Further, the circuit AC ground is connected to the winding at the feedback end of the secondary winding. This transformer proved to be very stable; it is virtually impossible to make this design squeg. An emitter resistor (R129A) was added to this circuit, as in the GCMS design, to protect Q102 from thermal runaway in the event of a turn-on with a non-resonant circuit or a load short. This was verified to protect Q102 for at least 30 seconds in the event of a short. Approximately 1% of the 41 mW input power is lost in this protection resistor under normal operation. The circuit was verified to operate normally when an irradiated, low-beta Q102 (2N3700) transistor was substituted for the normal 2N3700. It should be noted that the monitored drive voltage went to approximately 2.7 V with this low-gain transistor.

  7. String Scale Gauge Coupling Unification with Vector-Like Exotics and Noncanonical U(1)Y Normalization

    NASA Astrophysics Data System (ADS)

    Barger, V.; Jiang, Jing; Langacker, Paul; Li, Tianjun

    We use a new approach to study string scale gauge coupling unification systematically, allowing both the possibility of noncanonical U(1)Y normalization and the existence of vector-like particles whose quantum numbers are the same as those of the Standard Model (SM) fermions and their Hermitian conjugates and the SM adjoint particles. We first give all the independent sets (Yi) of particles that can be employed to achieve SU(3)C and SU(2)L string scale gauge coupling unification and calculate their masses. Second, for a noncanonical U(1)Y normalization, we obtain string scale SU(3)C ×SU(2)L ×U(1)Y gauge coupling unification by choosing suitable U(1)Y normalizations for each of the Yi sets. Alternatively, for the canonical U(1)Y normalization, we achieve string scale gauge coupling unification by considering suitable combinations of the Yi sets or by introducing additional independent sets (Zi), that do not affect the SU(3)C ×SU(2)L unification at tree level, and then choosing suitable combinations, one from the Yi sets and one from the Zi sets. We also briefly discuss string scale gauge coupling unification in models with higher Kac-Moody levels for SU(2)L or SU(3)C.
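
    As background for the unification condition discussed here, the sketch below (Python/NumPy) runs the three inverse couplings at one loop with Standard Model field content only and approximate MZ inputs; the paper's analysis additionally includes vector-like exotics, which shift the beta coefficients, and a variable U(1)Y normalization kY (alpha_1 = kY * alpha_Y, with kY = 5/3 the canonical choice).

        import numpy as np

        # One-loop running: alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2*pi) * ln(mu/MZ).
        MZ = 91.19                                        # GeV
        b = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])    # U(1)_1 (canonical), SU(2)_L, SU(3)_C
        alpha_inv_mz = np.array([59.0, 29.6, 8.5])        # approximate values at MZ

        def alpha_inv(mu_gev):
            return alpha_inv_mz - b / (2.0 * np.pi) * np.log(mu_gev / MZ)

        print(alpha_inv(2.0e16))   # inverse couplings near the conventional GUT scale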

  8. It's All Relative: A Validation of Radiation Quality Comparison Metrics

    NASA Technical Reports Server (NTRS)

    Chappell, L. J.; Milder, C. M.; Elgart, S. R.; Semones, E. J.

    2017-01-01

    Historically, the relative biological effectiveness (RBE) has been calculated to quantify the difference between heavy ion and gamma ray radiation. The RBE is then applied to gamma ray data to predict the effects of heavy ions in humans. The RBE is an iso-effect dose-to-dose ratio which, due to its counterintuitive nature, has been commonly miscalculated as an iso-dose effect-to-effect ratio. A paper published by Shuryak et al. in 2017 was the first to describe this second measure intentionally, referring to it as the radiation effects ratio (RER). In this study, we utilized simulations to test the ability of both the RBE and the RER to predict known heavy ion effects. RBEs and RERs were calculated using mouse data from Chang et al., and the ability of the RBE and RER to predict the heavy ion data from which they were calculated was verified. Statistical transformations often utilized during data analysis were applied to the gamma and heavy ion data to determine whether RBE and RER are each uniquely defined measures. Scale changes are expected when translating effects from mice to humans and between human populations; gamma and heavy ion data were transformed to represent potential scale changes. The ability of the RBE and RER to predict the transformed heavy ion data from the transformed gamma data was then tested. The RBE but not the RER was uniquely defined after all statistical transformations. The RBE correctly predicted the scale-transformed heavy ion data, while the RER did not. This presentation describes potential implications for both metrics in light of these findings.
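
    The distinction between the two metrics is simple to state: the RBE compares doses at a common effect level, whereas the RER compares effects at a common dose. A minimal sketch, assuming monotonically increasing dose-effect data and linear interpolation (the study's actual models and mouse data are not reproduced here), is:

        import numpy as np

        def rbe(dose_gamma, effect_gamma, dose_ion, effect_ion, effect_level):
            # Iso-effect dose ratio: gamma dose divided by ion dose at a common effect.
            d_gamma = np.interp(effect_level, effect_gamma, dose_gamma)
            d_ion = np.interp(effect_level, effect_ion, dose_ion)
            return d_gamma / d_ion

        def rer(dose_gamma, effect_gamma, dose_ion, effect_ion, dose_level):
            # Iso-dose effect ratio: ion effect divided by gamma effect at a common dose.
            e_ion = np.interp(dose_level, dose_ion, effect_ion)
            e_gamma = np.interp(dose_level, dose_gamma, effect_gamma)
            return e_ion / e_gamma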

  9. Investigation of scale effects in the TRF determined by VLBI

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for the Earth sciences and one of the major tasks in geodesy. The translation, rotation, and scale factor, together with their linear rates, are solved for in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are converted to baseline lengths using the speed of light, a natural constant, VLBI is the most suitable method for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin; they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. In time series of baseline lengths between different stations, a non-linear periodic signal caused by seasonal effects at the observation sites can clearly be recognized. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which likewise have a non-linear character, can be identified in the data. Modeling this non-linear motion, or completely excluding affected stations, is another important step toward an improved scale determination. In addition to investigating single-baseline repeatabilities, the spatial transformation performed to determine the parameters of ITRF2008 is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments examining the scale factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI alone showed that the network geometry also has a large influence on the outcome. If an unevenly distributed network is introduced for the datum configuration, the correlations between the translation parameters and the scale factor can become remarkably high. Only a homogeneous spatial distribution of participating stations yields a maximally uncorrelated scale factor that can be interpreted independently of other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the station coordinate time series are taken into account for the first time. The present work confirms the importance of this modification to the ITRF calculation and identifies further improvements that lead to an enhanced scale determination.
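
    The 14-parameter transformation mentioned above consists of a 7-parameter (Helmert) similarity transformation plus a linear rate for each parameter. A hedged sketch of applying the static 7-parameter part, assuming small rotation angles in radians, translations in metres and a dimensionless scale factor, is:

        import numpy as np

        def helmert7(xyz, tx, ty, tz, s, rx, ry, rz):
            # xyz: (n, 3) geocentric station coordinates in metres.
            # Small-angle similarity transformation X' = T + (1 + s) * (I + R) * X,
            # with R the skew-symmetric matrix of the rotation angles.
            R = np.array([[0.0, -rz,  ry],
                          [ rz, 0.0, -rx],
                          [-ry,  rx, 0.0]])
            return np.array([tx, ty, tz]) + (1.0 + s) * xyz @ (np.eye(3) + R).T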

  10. Instrument Line Shape Modeling and Correction for Off-Axis Detectors in Fourier Transform Spectrometry

    NASA Technical Reports Server (NTRS)

    Bowman, K.; Worden, H.; Beer, R.

    1999-01-01

    Spectra measured by off-axis detectors in a high-resolution Fourier transform spectrometer (FTS) are characterized by frequency scaling, asymmetry and broadening of their line shape, and self-apodization in the corresponding interferogram.

  11. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics.
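
    As a hedged illustration of the model form described above (a first-order bivariate autoregressive process for the log-transformed width and depth), the following Python/NumPy sketch simulates the two variables along a channel link; the mean levels, the 2x2 coefficient matrix phi and the innovation covariance are placeholders, not fitted values from the paper:

        import numpy as np

        def simulate_width_depth(n, mean_logw, mean_logd, phi, cov_innov, seed=None):
            # VAR(1) on the log scale: x[i] = mu + phi @ (x[i-1] - mu) + noise.
            rng = np.random.default_rng(seed)
            mu = np.array([mean_logw, mean_logd])
            x = np.empty((n, 2))
            x[0] = mu
            for i in range(1, n):
                innov = rng.multivariate_normal([0.0, 0.0], cov_innov)
                x[i] = mu + phi @ (x[i - 1] - mu) + innov
            return np.exp(x)   # back-transform to width (col 0) and depth (col 1)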

  12. Report on the CCT Supplementary Comparison S1 of Infrared Spectral Normal Emittance/Emissivity

    PubMed Central

    Hanssen, Leonard; Wilthan, B.; Monte, Christian; Hollandt, Jörg; Hameury, Jacques; Filtz, Jean-Remy; Girard, Ferruccio; Battuello, Mauro; Ishii, Juntaro

    2016-01-01

    The National Measurement Institutes (NMIs) of the United States, Germany, France, Italy and Japan, have joined in an inter-laboratory comparison of their infrared spectral emittance scales. This action is part of a series of supplementary inter-laboratory comparisons (including thermal conductivity and thermal diffusivity) sponsored by the Consultative Committee on Thermometry (CCT) Task Group on Thermophysical Quantities (TG-ThQ). The objective of this collaborative work is to strengthen the major operative National Measurement Institutes’ infrared spectral emittance scales and consequently the consistency of radiative properties measurements carried out worldwide. The comparison has been performed over a spectral range of 2 μm to 14 μm, and a temperature range from 23 °C to 800 °C. Artefacts included in the comparison are potential standards: oxidized inconel, boron nitride, and silicon carbide. The measurement instrumentation and techniques used for emittance scales are unique for each NMI, including the temperature ranges covered as well as the artefact sizes required. For example, all three common types of spectral instruments are represented: dispersive grating monochromator, Fourier transform and filter-based spectrometers. More than 2000 data points (combinations of material, wavelength and temperature) were compared. Ninety-eight percent (98%) of the data points were in agreement, with differences to weighted mean values less than the expanded uncertainties calculated from the individual NMI uncertainties and uncertainties related to the comparison process. PMID:28239193

  13. Dimensional scaling for impact cratering and perforation

    NASA Technical Reports Server (NTRS)

    Watts, Alan; Atkinson, Dale; Rieco, Steve

    1993-01-01

    This report summarizes the development of two physics-based scaling laws for describing crater depths and diameters caused by normal incidence impacts into aluminum and TFE Teflon. The report then describes equations for perforations in aluminum and TFE Teflon for normal impacts. Lastly, this report also studies the effects of non-normal incidence on cratering and perforation.

  14. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
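
    RANVAR itself is a short BASIC program; as a rough modern equivalent, and purely for illustration, the same seven variates can be drawn with NumPy (parameter values below are arbitrary placeholders, and the Pascal distribution is generated as NumPy's negative binomial):

        import numpy as np

        rng = np.random.default_rng()

        uniform     = rng.uniform(0.0, 1.0)
        exponential = rng.exponential(scale=2.0)
        normal      = rng.normal(loc=0.0, scale=1.0)
        binomial    = rng.binomial(n=10, p=0.3)
        poisson     = rng.poisson(lam=4.0)
        pascal      = rng.negative_binomial(n=5, p=0.4)   # Pascal distribution
        triangular  = rng.triangular(left=0.0, mode=0.5, right=1.0)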

  15. Rectangular rotation of spherical harmonic expansion of arbitrary high degree and order

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2017-08-01

    In order to move the polar singularity of an arbitrary spherical harmonic expansion to a point on the equator, we rotate the expansion around the y-axis by 90° such that the x-axis becomes a new pole. The expansion coefficients are transformed by multiplying by a special value of the Wigner D-matrix and a normalization factor. The transformation matrix is unchanged whether the coefficients are 4π fully normalized or Schmidt quasi-normalized. The matrix is recursively computed by the so-called X-number formulation (Fukushima in J Geodesy 86: 271-285, 2012a). As an example, we obtained 2190 × 2190 coefficients of the rectangular rotated spherical harmonic expansion of EGM2008. A proper combination of the original and the rotated expansions will be useful in (i) integrating the polar orbits of artificial satellites precisely and (ii) synthesizing/analyzing the gravitational/geomagnetic potentials and their derivatives accurately in the high-latitude regions including the arctic and antarctic areas.

  16. In-vivo characterization of endogenous porphyrin fluorescence from DMBA-treated Swiss Albino mice skin carcinogenesis for measuring tissue transformation

    NASA Astrophysics Data System (ADS)

    Ganesan, Singaravelu; Ebenezar, Jeyasingh; Hemamalini, Srinivasan; Aruna, Prakasa R.

    2002-05-01

    Steady-state fluorescence spectroscopic characterization of endogenous porphyrin emission from DMBA-treated skin carcinogenesis in Swiss albino mice was carried out. The emission of endogenous porphyrin from normal and abnormal skin tissues was studied both in the presence and absence of exogenous ALA to compare the resultant porphyrin emission characteristics. The mouse skin was excited at 405 nm and the emission spectra were scanned from 430 to 700 nm. The average fluorescence emission spectra of mouse skin under normal and various tissue transformation conditions were found to be different. Two peaks, around 460 nm and 636 nm, were observed; they may be attributed to a combination of NADH, elastin, and collagen, and to endogenous porphyrin emission, respectively. The intensity at 636 nm increases as the stage of the cancer advances. Although exogenous ALA enhances the PPIX level in tumors, the synthesis of PPIX was also found in the normal surrounding skin, in fact at a higher concentration than in the tumor tissue.

  17. A Whole Word and Number Reading Machine Based on Two Dimensional Low Frequency Fourier Transforms

    DTIC Science & Technology

    1990-12-01

    they are energy normalized. The normalization process accounts for brightness variations and is equivalent to graphing each 2DFT onto the surface of an n ... determined empirically (trial and error). Each set is energy normalized based on the number of coefficients within the set. Therefore, the actual ... using the 6-font group case with the top 1000 words, where the energy has been renormalized based on the particular number of coefficients being used

  18. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    NASA Astrophysics Data System (ADS)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

    The protection system for transformers plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One methodology widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore studies and analyzes the behaviour of the wavelet coefficients when a fault occurs in a transformer, in both the high- and low-frequency components obtained from the discrete wavelet transform. The effect of internal and external faults on the wavelet coefficients of both the faulted and normal phases has been taken into consideration. The fault signals have been simulated using a laboratory-level experimental setup of a transmission line connected to a transformer, modelled after an actual system. The results, in terms of wavelet coefficients, show a clear differentiation between the wavelet characteristics in the high- and low-frequency components, which can be used in the future to design and improve detection and classification algorithms based on the discrete wavelet transform methodology.
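
    The paper's detection and classification logic is not detailed in this abstract. A minimal hedged sketch, assuming PyWavelets, an illustrative Daubechies wavelet, a single decomposition level and an arbitrary threshold, shows how the high-frequency detail coefficients of a phase current could be screened for a disturbance:

        import numpy as np
        import pywt

        def high_freq_detail(signal, wavelet="db4", level=1):
            # Discrete wavelet decomposition; the last element is the finest-scale
            # (highest-frequency) detail coefficients cD1.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return coeffs[-1]

        def looks_disturbed(signal, threshold):
            # Flag a phase when the detail-coefficient magnitude exceeds a threshold.
            return np.max(np.abs(high_freq_detail(signal))) > threshold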

  19. Leadership: validation of a self-report scale: comment on Dussault, Frenette, and Fernet (2013).

    PubMed

    Chakrabarty, Subhra

    2014-10-01

    In a recent study, Dussault, Frenette, and Fernet (2013) developed a 21-item self-report instrument to measure leadership based on Bass's (1985) transformational/transactional leadership paradigm. The final specification included a third-order dimension (leadership), two second-order dimensions (transactional leadership and transformational leadership), and a first-order dimension (laissez-faire leadership). This note focuses on the need for assessing convergent and discriminant validity of the scale, and on ruling out the potential for common method bias.

  20. Nonlinear dynamic range transformation in visual communication channels.

    PubMed

    Alter-Gartenberg, R

    1996-01-01

    The article evaluates nonlinear dynamic range transformation in the context of the end-to-end continuous-input/discrete-processing/continuous-display imaging process. Dynamic range transformation is required when (i) the wide dynamic range encountered in nature must be compressed into the relatively narrow dynamic range of the display, particularly for spatially varying irradiance (e.g., shadow); (ii) coarse quantization must be expanded to the wider dynamic range of the display; and (iii) nonlinear tone-scale transformation must compensate for the correction in the camera amplifier.
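
    The article does not prescribe a particular transformation curve. As a hedged illustration of the kind of nonlinear tone-scale mapping involved, two common choices, logarithmic compression and power-law (gamma) mapping, can be written as follows (Python/NumPy; the display maximum and gamma value are placeholders):

        import numpy as np

        def log_tone_scale(irradiance, display_max=255.0):
            # Logarithmic compression of a wide-dynamic-range image into display range.
            x = np.asarray(irradiance, dtype=float)
            y = np.log1p(x) / np.log1p(x.max())        # 0..1, compresses highlights
            return np.clip(display_max * y, 0.0, display_max)

        def gamma_tone_scale(irradiance, gamma=0.45, display_max=255.0):
            # Power-law (gamma) mapping of normalized irradiance.
            x = np.asarray(irradiance, dtype=float)
            y = (x / x.max()) ** gamma
            return np.clip(display_max * y, 0.0, display_max)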
