The use of normal forms for analysing nonlinear mechanical vibrations
Neild, Simon A.; Champneys, Alan R.; Wagg, David J.; Hill, Thomas L.; Cammarano, Andrea
2015-01-01
A historical introduction is given of the theory of normal forms for simplifying nonlinear dynamical systems close to resonances or bifurcation points. The specific focus is on mechanical vibration problems, described by finite degree-of-freedom second-order-in-time differential equations. A recent variant of the normal form method that respects the specific structure of such models is recalled. It is shown how this method can be placed within the context of the general theory of normal forms provided the damping and forcing terms are treated as unfolding parameters. The approach is contrasted with the alternative theory of nonlinear normal modes (NNMs), which is argued to be problematic in the presence of damping. The efficacy of the normal form method is illustrated on a model of the vibration of a taut cable, which is geometrically nonlinear. It is shown how the method is able to accurately predict NNM shapes and their bifurcations. PMID:26303917
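As a generic illustration of the amplitude-frequency ("backbone curve") predictions that normal form methods supply for second-order vibration problems like those above, the sketch below checks the first-order normal form frequency of the undamped Duffing oscillator against direct numerical integration. The Duffing example and its 3/8 coefficient are textbook material, not taken from this paper.

```python
# Compare the first-order normal form (Poincare-Lindstedt) frequency of the
# undamped Duffing oscillator  x'' + x + eps*x^3 = 0  with a direct RK4
# integration. Illustrative sketch only; the paper's second-order normal form
# method for multi-degree-of-freedom systems is considerably richer.

import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
    k3 = f(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h*k3[i] for i in range(2)])
    return [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def duffing_period(eps, a, h=1e-3):
    """Measure the period from x(0)=a, x'(0)=0 via two successive
    downward zero crossings of x(t), with linear interpolation in time."""
    f = lambda t, y: [y[1], -y[0] - eps*y[0]**3]
    t, y = 0.0, [a, 0.0]
    crossings = []
    while len(crossings) < 2:
        t2, y2 = t + h, rk4_step(f, t, y, h)
        if y[0] > 0 >= y2[0]:  # downward crossing of x = 0
            crossings.append(t + h*y[0]/(y[0] - y2[0]))
        t, y = t2, y2
    return crossings[1] - crossings[0]

eps, a = 0.01, 1.0
T_num = duffing_period(eps, a)
T_nf = 2*math.pi/(1 + 3*eps*a**2/8)   # first-order backbone-curve prediction
```

For this weakly nonlinear case the first-order prediction agrees with the measured period to a few parts in a million, well inside the O(eps^2) error of the truncated normal form.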
Rapid Screening for Deleted Form of β-thalassemia by Real-Time Quantitative PCR.
Ke, Liang-Yin; Chang, Jan-Gowth; Chang, Chao-Sung; Hsieh, Li-Ling; Liu, Ta-Chih
2017-01-01
Thalassemia is the most common single-gene disease in human beings. The prevalence rate of β-thalassemia in Taiwan is approximately 1-3%. Previously, methods to reveal and diagnose the severe deleted form of α- or β-thalassemia were insufficient and inappropriate for prenatal diagnosis. A real-time quantitative PCR (RQ-PCR) method was set up for rapid screening of the deleted form of β-thalassemia. Our results show that the ΔΔCt between the deleted form of β-thalassemia and normal individuals was 1.0674 ± 0.0713. In contrast, the mutation form of β-thalassemia showed no difference from normal healthy controls. The HBB/CCR5 ratio for deleted-form β-thalassemia patients was 0.48, whereas for normal individuals and the mutation form of β-thalassemia it was 1.0. This RQ-PCR technique is an alternative rapid screening assay for the deleted form of β-thalassemia. In addition, it could also identify undefined types. Our technique, which uses RQ-PCR to quantify gene copies, is a reliable and time-saving method for screening the deleted form of β-thalassemia. © 2016 Wiley Periodicals, Inc.
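The copy-number arithmetic behind this kind of screen can be sketched with the standard 2^(-ΔΔCt) relative-quantification formula. The Ct values below are made-up illustrative numbers, not data from the study; only the reported ΔΔCt of about 1.07 and the resulting ratio of about 0.48 come from the abstract.

```python
# Relative HBB copy number from real-time qPCR via the 2^(-ddCt) method:
# dCt = Ct(HBB) - Ct(CCR5) per sample, ddCt = dCt(patient) - dCt(control).
# A heterozygous deletion (one HBB copy lost) delays HBB amplification by
# about one cycle, giving a ratio near 0.5; the abstract reports 0.48.

def hbb_ccr5_ratio(ct_hbb_patient, ct_ccr5_patient,
                   ct_hbb_control, ct_ccr5_control):
    d_ct_patient = ct_hbb_patient - ct_ccr5_patient
    d_ct_control = ct_hbb_control - ct_ccr5_control
    dd_ct = d_ct_patient - d_ct_control
    return 2.0 ** (-dd_ct)

# Deleted form: HBB amplifies ~1.07 cycles later relative to the CCR5 reference.
ratio_deleted = hbb_ccr5_ratio(24.07, 23.00, 23.00, 23.00)   # ddCt = 1.07
ratio_normal  = hbb_ccr5_ratio(23.00, 23.00, 23.00, 23.00)   # ddCt = 0
```

With ΔΔCt = 1.07 the ratio comes out near 0.48, matching the reported value for the deleted form, while a ΔΔCt of zero gives the normal ratio of 1.0.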
On bifurcation in dynamics of hemispherical resonator gyroscope
NASA Astrophysics Data System (ADS)
Volkov, D. Yu.; Galunova, K. V.
2018-05-01
A mathematical model of a wave solid-state gyro (hemispherical resonator gyroscope, HRG) is constructed. The wave pattern of the resonant oscillations is studied by applying the normal form method. We calculate the Birkhoff-Gustavson normal form of the unperturbed system.
NASA Astrophysics Data System (ADS)
DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.
2008-06-01
For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ²), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
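A quick numerical sanity check of the kind of amplitude equation these methods produce: for the van der Pol oscillator, RG and averaging both yield R' = (ɛ/2) R (1 - R²/4), whose stable fixed point R = 2 predicts the limit-cycle amplitude to leading order. The van der Pol example is a standard textbook case, not one analyzed in this paper.

```python
# Check the averaged/RG amplitude prediction for the van der Pol oscillator
#   x'' - eps*(1 - x^2)*x' + x = 0
# The amplitude equation R' = (eps/2)*R*(1 - R^2/4) has a stable fixed point
# at R = 2, so the limit-cycle amplitude should approach 2 for small eps.

def vdp_amplitude(eps=0.05, h=0.01, t_end=400.0):
    x, v, t = 0.5, 0.0, 0.0          # start well inside the limit cycle
    f = lambda x, v: (v, eps*(1 - x*x)*v - x)
    peak = 0.0
    while t < t_end:
        # one classical RK4 step
        k1 = f(x, v)
        k2 = f(x + h/2*k1[0], v + h/2*k1[1])
        k3 = f(x + h/2*k2[0], v + h/2*k2[1])
        k4 = f(x + h*k3[0], v + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
        if t > t_end/2:              # skip the transient; record late-time peaks
            peak = max(peak, abs(x))
    return peak

amp = vdp_amplitude()
```

The measured late-time amplitude sits within a few percent of the predicted value 2, consistent with the O(ɛ²) error of the leading-order amplitude equation.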
The morphological classification of normal and abnormal red blood cell using Self Organizing Map
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Wulandari, F. S.; Faza, S.; Muchtar, M. A.; Siregar, I.
2018-02-01
Blood is an essential component of living creatures in the vascular space. Possible diseases can be identified through a blood test, for instance from the form of the red blood cells. The normal or abnormal morphology of a patient's red blood cells is very helpful to doctors in detecting disease. Advances in digital image processing technology can be used to identify normal and abnormal blood cells of a patient. This research used the self-organizing map method to classify the normal and abnormal forms of red blood cells in digital images. The self-organizing map neural network method classified the normal and abnormal forms of red blood cells in the input images with 93.78% testing accuracy.
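The classification scheme can be sketched in miniature: train a small self-organizing map on feature vectors, label each map node by majority vote of the training points it wins, then classify new points by their best matching unit's label. The two synthetic 2-D "feature" blobs below are stand-ins for real cell features, not the paper's data or network size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated synthetic blobs standing in for normal / abnormal cells.
normal   = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
abnormal = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2))
X = np.vstack([normal, abnormal])
y = np.array([0]*100 + [1]*100)

# A tiny 1-D self-organizing map with 4 nodes.
nodes = rng.normal(size=(4, 2))
for epoch in range(30):
    lr = 0.5 * (1 - epoch/30)                 # decaying learning rate
    sigma = max(1.0 * (1 - epoch/30), 0.1)    # decaying neighbourhood width
    for i in rng.permutation(len(X)):
        bmu = np.argmin(((nodes - X[i])**2).sum(axis=1))  # best matching unit
        dist = np.abs(np.arange(4) - bmu)                 # grid distance to BMU
        h = np.exp(-(dist**2) / (2*sigma**2))[:, None]    # neighbourhood kernel
        nodes += lr * h * (X[i] - nodes)

# Label each node by majority vote of the training points it wins.
bmus = np.argmin(((X[:, None, :] - nodes[None, :, :])**2).sum(axis=2), axis=1)
node_label = np.array([np.bincount(y[bmus == k], minlength=2).argmax()
                       if (bmus == k).any() else 0 for k in range(4)])

pred = node_label[bmus]          # classify every point via its BMU's label
accuracy = (pred == y).mean()
```

On cleanly separated blobs this toy map classifies essentially perfectly; the 93.78% figure in the abstract reflects the much harder real imaging task.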
Local bifurcations in differential equations with state-dependent delay.
Sieber, Jan
2017-11-01
A common task when analysing dynamical systems is the determination of normal forms near local bifurcations of equilibria. As most of these normal forms have been classified and analysed, finding which particular class of normal form one encounters in a numerical bifurcation study guides follow-up computations. This paper builds on normal form algorithms for equilibria of delay differential equations with constant delay that were developed and implemented in DDE-Biftool recently. We show how one can extend these methods to delay-differential equations with state-dependent delay (sd-DDEs). Since higher degrees of regularity of local center manifolds are still open for sd-DDEs, we give an independent (still only partial) argument for which phenomena from the truncated normal form must persist in the full sd-DDE. In particular, we show that all invariant manifolds with a sufficient degree of normal hyperbolicity predicted by the normal form exist also in the full sd-DDE.
NASA Astrophysics Data System (ADS)
Li, Jing; Kou, Liying; Wang, Duo; Zhang, Wei
2017-12-01
In this paper, we mainly focus on the unique normal form for a class of three-dimensional vector fields via the method of transformation with parameters. A general explicit recursive formula is derived to compute the higher order normal form and the associated coefficients, which can be achieved easily by symbolic calculations. To illustrate the efficiency of the approach, a comparison of our result with others is also presented.
Method of fabricating composite superconducting wire
Strauss, Bruce P.; Reardon, Paul J.; Remsbottom, Robert H.
1977-01-01
An improvement in the method for preparing composite rods of superconducting alloy and normal metal from which multifilament composite superconducting wire is fabricated by bending longitudinally a strip of normal metal around a rod of superconductor alloy and welding the edges to form the composite rod. After the rods have preferably been provided with a hexagonal cross-sectional shape, a plurality of the rods are stacked into a normal metal extrusion can, sealed and worked to reduce the cross-sectional size and form multifilament wire. Diffusion barriers and high-electrical resistance barriers can easily be introduced into the wire by plating or otherwise coating the faces of the normal metal strip with appropriate materials.
Automatic identification and normalization of dosage forms in drug monographs
2012-01-01
Background: Each day, millions of health consumers seek drug-related information on the Web. Despite some efforts in linking related resources, drug information is largely scattered across a wide variety of websites of different quality and credibility. Methods: As a step toward providing users with integrated access to multiple trustworthy drug resources, we aim to develop a method capable of identifying a drug's dosage form information in addition to drug name recognition. We developed rules and patterns for identifying dosage forms from different sections of full-text drug monographs, and subsequently normalized them to standardized RxNorm dosage forms. Results: Our method represents a significant improvement compared with a baseline lookup approach, achieving overall macro-averaged Precision of 80%, Recall of 98%, and F-Measure of 85%. Conclusions: We successfully developed an automatic approach for drug dosage form identification, which is critical for building links between different drug-related resources. PMID:22336431
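The identify-then-normalize pipeline can be sketched as a surface-pattern tagger followed by a lookup table. The tiny mapping below is a made-up miniature stand-in for the RxNorm dosage-form vocabulary and is not the authors' actual rule set.

```python
import re

# Toy rule-based dosage-form tagger in the spirit of the paper: regex
# patterns pick out dosage-form mentions in monograph text, and a lookup
# table maps each surface form to a normalized (RxNorm-style) name.

NORMALIZE = {
    "tablet": "Oral Tablet",
    "tablets": "Oral Tablet",
    "capsule": "Oral Capsule",
    "capsules": "Oral Capsule",
    "oral suspension": "Oral Suspension",
    "cream": "Topical Cream",
}

# Longer alternatives first so "oral suspension" wins over bare word matches.
PATTERN = re.compile(r"\b(oral suspension|tablets?|capsules?|cream)\b",
                     re.IGNORECASE)

def dosage_forms(text):
    """Return the sorted set of normalized dosage forms mentioned in text."""
    return sorted({NORMALIZE[m.group(1).lower()] for m in PATTERN.finditer(text)})

forms = dosage_forms("Supplied as 250 mg capsules and as an oral suspension.")
```

A real system would add section-aware rules and a full vocabulary, but the identify/normalize split is the same.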
Normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2015-01-01
The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems, while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This leads to the conclusion that the local dynamics of formal Hopf-zero singularities is well understood through the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a nonzero quadratic condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied to the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.
Volume-preserving normal forms of Hopf-zero singularity
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh
2013-10-01
A practical method is described for computing the unique generator of the algebra of first integrals associated with a large class of Hopf-zero singularities. The set of all volume-preserving classical normal forms of this singularity is introduced via a Lie algebra description. This is a maximal vector space of classical normal forms with a first integral, which is why our approach works. Systems with a nonzero condition on their quadratic parts are considered. The algebra of all first integrals for any such system has a unique (modulo scalar multiplication) generator. The infinite-level volume-preserving parametric normal form of any nondegenerate perturbation within the Lie algebra of any such system is computed, where it can have rich dynamics. The associated unique generator of the algebra of first integrals is derived. The symmetry group of the infinite-level normal forms is also discussed. Some necessary formulas are derived and applied to appropriately modified Rössler and generalized Kuramoto-Sivashinsky equations to demonstrate the applicability of our theoretical results. An approach (introduced by Iooss and Lombardi) is applied to find an optimal truncation for the first-level normal forms of these examples with exponentially small remainders. The numerically suggested radius of convergence (for the first integral) associated with a hypernormalization step is discussed for the truncated first-level normal forms of the examples. This is achieved by an efficient implementation of the results using Maple.
Cho, Min-Jeong; Hallac, Rami R; Ramesh, Jananie; Seaward, James R; Hermann, Nuno V; Darvann, Tron A; Lipira, Angelo; Kane, Alex A
2018-03-01
Restoring craniofacial symmetry is an important objective in the treatment of many craniofacial conditions. Normal form has been measured using anthropometry, cephalometry, and photography, yet all of these modalities have drawbacks. In this study, the authors define normal pediatric craniofacial form and craniofacial asymmetry using stereophotogrammetric images, which capture a densely sampled set of points on the form. After institutional review board approval, normal, healthy children (n = 533) with no known craniofacial abnormalities were recruited at well-child visits to undergo full head stereophotogrammetric imaging. The children's ages ranged from 0 to 18 years. A symmetric three-dimensional template was registered and scaled to each individual scan using 25 manually placed landmarks. The template was deformed to each subject's three-dimensional scan using a thin-plate spline algorithm and closest point matching. Age-based normal facial models were derived. Mean facial asymmetry and statistical characteristics of the population were calculated. The mean head asymmetry across all pediatric subjects was 1.5 ± 0.5 mm (range, 0.46 to 4.78 mm), and the mean facial asymmetry was 1.2 ± 0.6 mm (range, 0.4 to 5.4 mm). There were no significant differences in the mean head or facial asymmetry with age, sex, or race. Understanding the "normal" form and baseline distribution of asymmetry is an important anthropometric foundation. The authors present a method to quantify normal craniofacial form and baseline asymmetry in a large pediatric sample. The authors found that the normal pediatric craniofacial form is asymmetric, and does not change in magnitude with age, sex, or race.
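A millimetre-scale asymmetry score of the kind reported above can be sketched with a simple mirror-and-match computation on landmarks: reflect the point cloud across the midsagittal plane and average the closest-point distances. The landmark coordinates below are invented for illustration; the paper's dense template-based measure is far richer than this sketch.

```python
import numpy as np

# Landmark-based asymmetry score: reflect the 3-D landmark cloud across the
# midsagittal plane (taken here as x = 0) and compute the mean distance from
# each point to its nearest neighbour in the reflected cloud. A perfectly
# symmetric configuration scores exactly zero.

def asymmetry_mm(points):
    pts = np.asarray(points, dtype=float)
    mirrored = pts * np.array([-1.0, 1.0, 1.0])    # reflect x -> -x
    d = np.linalg.norm(pts[:, None, :] - mirrored[None, :, :], axis=2)
    return d.min(axis=1).mean()                    # mean closest-point distance

symmetric = [(-30, 0, 0), (30, 0, 0), (0, 10, 50)]   # e.g. two eyes + nose tip
skewed    = [(-30, 0, 0), (33, 1, 0), (0, 10, 50)]   # right "eye" displaced
```

The symmetric configuration scores zero while the skewed one scores a couple of millimetres, the same scale as the population means reported in the abstract.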
Method for construction of normalized cDNA libraries
Soares, Marcelo B.; Efstratiadis, Argiris
1998-01-01
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.
Method for construction of normalized cDNA libraries
Soares, M.B.; Efstratiadis, A.
1998-11-03
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries. 19 figs.
Method for construction of normalized cDNA libraries
Soares, M.B.; Efstratiadis, A.
1996-01-09
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form. The method comprises: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. 4 figs.
Existence and stability of periodic solutions of a quasi-linear Korteweg-de Vries equation
NASA Astrophysics Data System (ADS)
Glyzin, S. D.; Kolesov, A. Yu; Preobrazhenskaia, M. M.
2017-01-01
We consider a scalar nonlinear differential-difference equation with two delays, which models the electrical activity of a neuron. Under some additional assumptions, the well-known method of quasi-normal forms can be applied to this equation. Its essence lies in a formal Poincaré-Dulac normalization yielding the quasi-normal form, followed by the application of correspondence theorems. In this case, applying quasi-normal forms produces a countable system of differential-difference equations, which can be turned into a boundary value problem for the Korteweg-de Vries equation. The investigation of this boundary value problem allows us to draw conclusions about the behaviour of the original equation. Namely, for a suitable choice of parameters this equation exhibits the buffer phenomenon: a bifurcation mechanism gives birth to an arbitrarily large number of stable cycles.
Application of Power Geometry and Normal Form Methods to the Study of Nonlinear ODEs
NASA Astrophysics Data System (ADS)
Edneral, Victor
2018-02-01
This paper describes power transformations of degenerate autonomous polynomial systems of ordinary differential equations, which reduce such systems to a non-degenerate form. An example of constructing exact first integrals of motion of a planar degenerate system in closed form is given.
Method for construction of normalized cDNA libraries
Soares, Marcelo B.; Efstratiadis, Argiris
1996-01-01
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library.
Ushakou, Dzmitryi V; Tomin, Vladimir I
2018-06-07
We report spectroscopic properties of 3-hydroxyflavone (3-HF) and 4'-N,N-dimethylamino-3-hydroxyflavone (DMA3HF) in acetonitrile and ethyl acetate at different temperatures in the range from 10 °C to about 67 °C. These compounds are characterized by excited-state intramolecular proton transfer (ESIPT), which leads to the occurrence of two forms of these molecules. For this reason their fluorescence spectra have two bands, which correspond to emission of the normal and photoproduct (tautomer) forms. The correlation between the ratio of the integrated intensities of these two bands and inverse absolute temperature (the Arrhenius plot) has been applied to estimate energetic properties, such as the difference between the energy levels of the excited states, as well as the ground states, of the normal and tautomer forms for each molecule.
Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.
Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe
2018-02-19
Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider the normalized form of the Min kernel. We then combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
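Two of the building blocks named above are easy to write down: the Min kernel on feature vectors, and the Tensor Product Pairwise Kernel, which lifts any base kernel to unordered protein pairs via K_TPPK((a,b),(c,d)) = K(a,c)K(b,d) + K(a,d)K(b,c). The feature vectors below are made-up stand-ins for domain or phylogenetic-profile features, and MLPK (which requires learning a metric) is not sketched.

```python
# Min kernel and Tensor Product Pairwise Kernel (TPPK) sketches. The TPPK
# symmetrizes over the two orderings of each pair, so swapping the proteins
# inside a pair leaves the kernel value unchanged.

def k_min(x, y):
    """Min (histogram intersection) kernel on nonnegative feature vectors."""
    return sum(min(a, b) for a, b in zip(x, y))

def k_min_normalized(x, y):
    """Cosine-style normalization of the Min kernel."""
    return k_min(x, y) / (k_min(x, x) * k_min(y, y)) ** 0.5

def k_tppk(pair1, pair2, base=k_min):
    (a, b), (c, d) = pair1, pair2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

p = ([1, 0, 2], [0, 1, 1])   # candidate heterodimer 1 (two feature vectors)
q = ([1, 1, 0], [2, 0, 1])   # candidate heterodimer 2
```

In a full pipeline these pair-kernel values would populate the Gram matrix handed to C-SVC.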
A Comparison of Two Methods for Boolean Query Relevancy Feedback.
ERIC Educational Resources Information Center
Salton, G.; And Others
1984-01-01
Evaluates and compares two recently proposed automatic methods for relevance feedback of Boolean queries (Dillon method, which uses probabilistic approach as basis, and disjunctive normal form method). Conclusions are drawn concerning the use of effective feedback methods in a Boolean query environment. Nineteen references are included. (EJS)
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
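The recovered-variance construction can be sketched for one such function, θ = μ + zσ (a normal percentile, or one Bland-Altman limit of agreement): build separate CIs for μ and for zσ, then combine their distances from the point estimates. This is my reconstruction of the general MOVER-style recipe, not the paper's exact formulas; the t and chi-square quantiles are hardcoded table values for n = 30, two-sided 95%.

```python
import math

# Closed-form CI for theta = mu + z*sigma by recovering variance estimates
# from separate confidence limits for the mean and the standard deviation.
# Quantiles hardcoded for n = 30: t_{29,0.975} = 2.0452,
# chi2_{29,0.025} = 16.047, chi2_{29,0.975} = 45.722.

def mover_ci(mean, sd, n=30, z=1.959964,
             t=2.0452, chi2_lo=16.047, chi2_hi=45.722):
    # CI for the mean
    l1 = mean - t * sd / math.sqrt(n)
    u1 = mean + t * sd / math.sqrt(n)
    # CI for z*sigma, from the chi-square interval for sigma
    l2 = z * sd * math.sqrt((n - 1) / chi2_hi)
    u2 = z * sd * math.sqrt((n - 1) / chi2_lo)
    theta = mean + z * sd
    # combine the recovered distances-from-estimate in quadrature
    lower = theta - math.sqrt((mean - l1)**2 + (z*sd - l2)**2)
    upper = theta + math.sqrt((u1 - mean)**2 + (u2 - z*sd)**2)
    return lower, upper

lo, hi = mover_ci(mean=10.0, sd=2.0)
```

Note the interval is asymmetric about the point estimate, reflecting the skewed sampling distribution of the standard deviation.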
METHOD AND APPARATUS FOR REACTOR SAFETY CONTROL
Huston, N.E.
1961-06-01
A self-contained nuclear reactor fuse-controlled device containing neutron-absorbing material, normally in a compact form but which can be expanded into an extended form presenting a large surface for neutron absorption when triggered by an increase in neutron flux, is described.
Transformer induced instability of the series resonant converter
NASA Technical Reports Server (NTRS)
King, R. J.; Stuart, T. A.
1983-01-01
It is shown that the common series resonant power converter is subject to a low frequency oscillation that can lead to the loss of cyclic stability. This oscillation is caused by a low frequency resonant circuit formed by the normal L and C components in series with the magnetizing inductance of the output transformer. Three methods for eliminating this oscillation are presented and analyzed. One of these methods requires a change in the circuit topology during the resonance cycle. This requires a new set of steady state equations which are derived and presented in a normalized form. Experimental results are included which demonstrate the nature of the low frequency oscillation before cyclic stability is lost.
Nonlinear Waves, Dynamical Systems and Other Applied Mathematics Programs
1991-10-04
present a general scheme of perturbation method for perturbed soliton systems, based on the normal form theory and the method of multiple scales. By this...dimension, and discuss possible consequences of the interplay between wavefront interactions and curvature in two dimensions. Thursday, October 19 All ... normal speed D parametrized by the local mean surface curvature x. Its solution provides a relation D = D(x) which determines the evolution of the front
Method for producing strain tolerant multifilamentary oxide superconducting wire
Finnemore, D.K.; Miller, T.A.; Ostenson, J.E.; Schwartzkopf, L.A.; Sanders, S.C.
1994-07-19
A strain tolerant multifilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments. 6 figs.
Method for producing strain tolerant multifilamentary oxide superconducting wire
Finnemore, Douglas K.; Miller, Theodore A.; Ostenson, Jerome E.; Schwartzkopf, Louis A.; Sanders, Steven C.
1994-07-19
A strain tolerant multifilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments.
Alloy nanoparticle synthesis using ionizing radiation
Nenoff, Tina M [Sandia Park, NM]; Powers, Dana A [Albuquerque, NM]; Zhang, Zhenyuan [Durham, NC]
2011-08-16
A method of forming stable nanoparticles comprising substantially uniform alloys of metals. A high dose of ionizing radiation is used to generate high concentrations of solvated electrons and optionally radical reducing species that rapidly reduce a mixture of metal ion source species to form alloy nanoparticles. The method can make uniform alloy nanoparticles from normally immiscible metals by overcoming the thermodynamic limitations that would preferentially produce core-shell nanoparticles.
Calvo, Natalia L; Arias, Juan M; Altabef, Aída Ben; Maggio, Rubén M; Kaufman, Teodoro S
2016-09-10
Albendazole (ALB) is a broad-spectrum anthelmintic, which exhibits two solid-state forms (Forms I and II). Form I is the metastable crystal at room temperature, while Form II is the stable one. Because the drug has poor aqueous solubility and Form II is less soluble than Form I, it is desirable to have a method to assess the solid-state form of the drug employed for manufacturing purposes. Therefore, a Partial Least Squares (PLS) model was developed for the determination of Form I of ALB in its mixtures with Form II. For model development, both solid-state forms of ALB were prepared and characterized by microscopic (optical, with normal and polarized light), thermal (DSC) and spectroscopic (ATR-FTIR, Raman) techniques. Mixtures of solids in different ratios were prepared by weighing and mechanical mixing of the components. Their Raman spectra were acquired and subjected to peak smoothing, normalization, standard normal variate correction and de-trending before performing the PLS calculations. The optimal spectral region (1396-1280 cm(-1)) and number of latent variables (LV=3) were obtained employing a moving window of variable size strategy. The method was internally validated by means of the leave-one-out procedure, providing satisfactory statistics (r(2)=0.9729 and RMSD=5.6%) and figures of merit (LOD=9.4% and MDDC=1.4). Furthermore, the method's performance was also evaluated by analysis of two validation sets. Validation set I was used for assessment of linearity and range, and Validation set II to demonstrate accuracy and precision (Recovery=101.4% and RSD=2.8%). Additionally, a third set of spiked commercial samples was evaluated, exhibiting excellent recoveries (94.2±6.4%). The results suggest that the combination of Raman spectroscopy with multivariate analysis could be applied to the assessment of the main crystal form and its quantitation in samples of ALB bulk drug in the routine quality control laboratory.
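One of the preprocessing steps named above, standard normal variate (SNV) correction, is simple to sketch: each spectrum is centred and scaled by its own mean and standard deviation, removing multiplicative intensity and offset differences between scans. The synthetic band below is invented for illustration, not ALB data.

```python
import numpy as np

# Standard normal variate (SNV) correction of spectra: per-spectrum centring
# and scaling, so that two scans of the same material that differ only by an
# intensity scale and a baseline offset become identical.

def snv(spectra):
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, ddof=1, keepdims=True)
    return (spectra - mean) / std

wavenumbers = np.linspace(1280, 1396, 200)           # cm^-1, the PLS region
peak = np.exp(-((wavenumbers - 1340) / 8.0) ** 2)    # one synthetic Raman band
raw = np.vstack([1.0 * peak + 0.10,                  # same band shape at
                 2.5 * peak + 0.40])                 # different scale/offset
corrected = snv(raw)
```

After SNV the two scans coincide, which is exactly why the correction helps a PLS model focus on composition rather than acquisition conditions.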
Hawking radiation and classical tunneling: A ray phase space approach
NASA Astrophysics Data System (ADS)
Tracy, E. R.; Zhigunov, D.
2016-01-01
Acoustic waves in fluids undergoing the transition from sub- to supersonic flow satisfy governing equations similar to those for light waves in the immediate vicinity of a black hole event horizon. This acoustic analogy has been used by Unruh and others as a conceptual model for "Hawking radiation." Here, we use variational methods, originally introduced by Brizard for the study of linearized MHD, and ray phase space methods, to analyze linearized acoustics in the presence of background flows. The variational formulation endows the evolution equations with natural Hermitian and symplectic structures that prove useful for later analysis. We derive a 2 × 2 normal form governing the wave evolution in the vicinity of the "event horizon." This shows that the acoustic model can be reduced locally (in ray phase space) to a standard (scalar) tunneling process weakly coupled to a unidirectional non-dispersive wave (the "incoming wave"). Given the normal form, the Hawking "thermal spectrum" can be derived by invoking standard tunneling theory, but only by ignoring the coupling to the incoming wave. Deriving the normal form requires a novel extension of the modular ray-based theory used previously to study tunneling and mode conversion in plasmas. We also discuss how ray phase space methods can be used to change representation, which brings the problem into a form where the wave functions are less singular than in the usual formulation, a fact that might prove useful in numerical studies.
Zhang, Lijun; Danesh, Jennifer; Tannan, Anjali; Phan, Vivian; Yu, Fei; Hamilton, D Rex
2015-10-01
To evaluate the difference in corneal biomechanical waveform parameters between manifest keratoconus, forme fruste keratoconus, and healthy eyes with a second-generation biomechanical waveform analyzer (Ocular Response Analyzer 2). Jules Stein Eye Institute, University of California, Los Angeles, California, USA. Retrospective chart review. The biomechanical waveform analyzer was used to obtain corneal hysteresis (CH), corneal resistance factor (CRF), and 37 biomechanical waveform parameters in manifest keratoconus eyes, forme fruste keratoconus eyes, and healthy eyes. Useful distinguishing parameters were found using t tests and a multivariable logistic regression model with stepwise variable selection. Potential confounders were controlled for. The study included 68 manifest keratoconus eyes, 64 forme fruste keratoconus eyes, and 249 healthy eyes. There was a statistical difference in the mean CRF between the normal group (10.2 mm Hg ± 1.7 [SD]) and keratoconus group (6.3 ± 1.9 mm Hg) (P = .003), and between the normal group and the forme fruste keratoconus group (7.8 ± 1.4 mm Hg) (P < .0001). There was no statistical difference in the mean CH between the normal group and the keratoconus group or the forme fruste keratoconus group. The CRF, height of peak 1 (P1) (P = .001), downslope of P1 (dslope1) (P = .027), upslope of peak 2 (P2) (P = .004), and downslope of P2 (P = .006) distinguished the normal group from the keratoconus groups. The CRF, downslope of P2 derived from upper 50% of applanation peak (P = .035), dslope1 (P = .014), and upslope of P1 (P = .008) distinguished the normal group from the forme fruste keratoconus group. Differences in multiple biomechanical waveform parameters can differentiate between healthy and diseased conditions and might improve early diagnosis of keratoconus and forme fruste keratoconus. No author has a financial or proprietary interest in any material or method mentioned.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.
1993-04-01
The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three-dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed-loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open-loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
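The normalized-quaternion orientation error used in such tracking schemes can be sketched numerically. The following is a minimal illustration of one common convention (the error quaternion as the product of the desired orientation with the conjugate of the actual orientation), not necessarily the exact formulation of the report:

```python
import numpy as np

def quat_mult(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_error(q_desired, q_actual):
    # Error quaternion equals the identity [1, 0, 0, 0] when the
    # orientations coincide; its vector part is a 3-component
    # orientation error signal usable for feedback.
    q_err = quat_mult(q_desired, quat_conj(q_actual))
    return q_err[1:]  # vector part
```

For a small rotation of angle θ about one axis, the error vector reduces to sin(θ/2) times that axis, which is the usual small-angle feedback behavior.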
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, i.e. that returns are realizations of an IID sequence of random variables. Consequently, the weak form of the efficient market hypothesis can be verified with distribution tests, among others tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, the test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study.
Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically feature remote data points and additional types of deviations from normality. This study also discusses results of simulation power studies of these tests for normality against selected alternatives. Based on the outcome of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
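The zero breakdown value of the moment-based Jarque-Bera test is easy to demonstrate with SciPy's standard implementation on simulated data: a single extreme "return" drives the sample kurtosis, and with it the test statistic, to arbitrarily large values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=1000)  # simulated IID daily returns

# Classical Jarque-Bera test on clean, normally distributed data
stat_clean, p_clean = stats.jarque_bera(returns)

# One extreme observation (e.g. a crash day) dominates the moment-based
# skewness and kurtosis estimates and makes the statistic explode.
contaminated = np.append(returns, 0.25)
stat_out, p_out = stats.jarque_bera(contaminated)

print(f"clean data:  JB = {stat_clean:.2f}, p = {p_clean:.3f}")
print(f"one outlier: JB = {stat_out:.2f}, p = {p_out:.3g}")
```

This is exactly the sensitivity to remote data points that motivates the robust procedures discussed above.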
Brief Daily Periods of Unrestricted Vision Can Prevent Form-Deprivation Amblyopia
Wensveen, Janice M.; Harwerth, Ronald S.; Hung, Li-Fang; Ramamirtham, Ramkumar; Kee, Chea-su; Smith, Earl L.
2006-01-01
PURPOSE To characterize how the mechanisms that produce unilateral form-deprivation amblyopia integrate the effects of normal and abnormal vision over time, the effects of brief daily periods of unrestricted vision on the spatial vision losses produced by monocular form deprivation were investigated in infant monkeys. METHODS Beginning at 3 weeks of age, unilateral form deprivation was initiated in 18 infant monkeys by securing a diffuser spectacle lens in front of one eye and a clear plano lens in front of the fellow eye. During the treatment period (18 weeks), three infants wore the diffusers continuously. For the other experimental infants, the diffusers were removed daily and replaced with clear, zero-powered lenses for 1 (n = 5), 2 (n = 6), or 4 (n = 4) hours. Four infants reared with binocular zero-powered lenses and four normally reared monkeys provided control data. RESULTS The degree of amblyopia varied significantly with the daily duration of unrestricted vision. Continuous form deprivation caused severe amblyopia. However, 1 hour of unrestricted vision reduced the degree of amblyopia by 65%, 2 hours reduced the deficits by 90%, and 4 hours preserved near-normal spatial contrast sensitivity. CONCLUSIONS The severely amblyogenic effects of form deprivation in infant primates are substantially reduced by relatively short daily periods of unrestricted vision. The manner in which the mechanisms responsible for amblyopia integrate the effects of normal and abnormal vision over time promotes normal visual development and has important implications for the management of human infants with conditions that potentially cause amblyopia. PMID:16723458
Measurement of aspheric mirror by nanoprofiler using normal vector tracing
NASA Astrophysics Data System (ADS)
Kitayama, Takao; Shiraji, Hiroki; Yamamura, Kazuya; Endo, Katsuyoshi
2016-09-01
Aspheric or free-form optics with high accuracy are necessary in many fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography. The demand for methods that measure aspheric or free-form surfaces with nanometer accuracy is therefore increasing. The purpose of our study is to develop a non-contact technology that measures aspheric or free-form surfaces directly with high repeatability. To this end we have developed a three-dimensional Nanoprofiler that detects the normal vectors of a sample surface. The measurement principle is based on the straightness of laser light and the accurate motion of rotational goniometers. The machine consists of four rotational stages, one translational stage, and an optical head comprising a quadrant photodiode (QPD) and a laser source. In this method, we make the reflected beam coincide with the incident beam by controlling the five stages, and we determine the normal vectors and the coordinates of the surface from the signals of the goniometers, the translational stage, and the QPD. A three-dimensional figure is then obtained from the normal vectors and their coordinates by a surface-reconstruction algorithm. To evaluate the performance of the machine we measured a concave aspheric mirror 150 mm in diameter and successfully profiled the full 150 mm aperture. We also observed the influence of the machine's systematic errors, simulated this influence, and subtracted it from the measurement result.
Soares, Marcelo B.; Efstratiadis, Argiris
1997-01-01
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library.
Soares, M.B.; Efstratiadis, A.
1997-06-10
This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. 4 figs.
Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.
Sznitman, Sharon R; Taubman, Danielle S
2016-09-01
Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.
NASA Astrophysics Data System (ADS)
Holmes, Philip J.
1981-06-01
We study the instabilities known to aeronautical engineers as flutter and divergence. Mathematically, these states correspond to bifurcations to limit cycles and multiple equilibrium points in a differential equation. Making use of the center manifold and normal form theorems, we concentrate on the situation in which flutter and divergence become coupled, and show that there are essentially two ways in which this is likely to occur. In the first case the system can be reduced to an essential model which takes the form of a single degree of freedom nonlinear oscillator. This system, which may be analyzed by conventional phase-plane techniques, captures all the qualitative features of the full system. We discuss the reduction and show how the nonlinear terms may be simplified and put into normal form. Invariant manifold theory and the normal form theorem play a major role in this work and this paper serves as an introduction to their application in mechanics. Repeating the approach in the second case, we show that the essential model is now three dimensional and that far more complex behavior is possible, including nonperiodic and ‘chaotic’ motions. Throughout, we take a two degree of freedom system as an example, but the general methods are applicable to multi- and even infinite degree of freedom problems.
Method of making V₃Ga superconductors
Dew-Hughes, David
1980-01-01
An improved method for producing a vanadium-gallium superconductor wire having aluminum as a component thereof is disclosed, said wire being encased in a gallium bearing copper sheath. The superconductors disclosed herein may be fabricated under normal atmospheres and room temperatures by forming a tubular shaped billet having a core composed of an alloy of vanadium and aluminum and an outer sheath composed of an alloy of copper, gallium and aluminum. Thereafter the entire billet is swage reduced to form a wire therefrom and heat treated to form a layer of V₃Ga in the interior of the wire.
Method for the fabrication of three-dimensional microstructures by deep X-ray lithography
Sweatt, William C.; Christenson, Todd R.
2005-04-05
A method for the fabrication of three-dimensional microstructures by deep X-ray lithography (DXRL) comprises a masking process that uses a patterned mask with inclined mask holes and off-normal exposures with a DXRL beam aligned with the inclined mask holes. Microstructural features that are oriented in different directions can be obtained by using multiple off-normal exposures through additional mask holes having different orientations. Various methods can be used to block the non-aligned mask holes from the beam when using multiple exposures. A method for fabricating a precision 3D X-ray mask comprises forming an intermediate mask and a master mask on a common support membrane.
Normal forms for Hopf-Zero singularities with nonconservative nonlinear part
NASA Astrophysics Data System (ADS)
Gazor, Majid; Mokhtari, Fahimeh; Sanders, Jan A.
In this paper we are concerned with the simplest normal form computation of the systems ẋ = 2xf(x, y² + z²), ẏ = z + yf(x, y² + z²), ż = −y + zf(x, y² + z²), where f is a formal function with real coefficients and without any constant term. These are the classical normal forms of a larger family of systems with Hopf-Zero singularity. Indeed, they are defined such that this family is a Lie subalgebra of the space of all classical normal form vector fields with Hopf-Zero singularity. The simplest normal forms and simplest orbital normal forms of this family with nonzero quadratic part are computed. We also obtain the simplest parametric normal form of any non-degenerate perturbation of this family within the Lie subalgebra. The symmetry group of the simplest normal forms is also discussed. This is part of our results on decomposing the normal forms of Hopf-Zero singular systems into systems with a first integral and nonconservative systems.
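A direct computation on this family shows that d(y² + z²)/dt = 2(y² + z²)f while ẋ = 2xf, so the ratio x/(y² + z²) is a first integral for any admissible f. This can be checked numerically with an arbitrary, assumed choice f(x, ρ) = −(x + ρ) (any f without a constant term would do):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, rho):
    # Hypothetical choice of formal function without constant term
    return -(x + rho)

def rhs(t, u):
    # The classical normal form family from the abstract
    x, y, z = u
    rho = y**2 + z**2
    return [2*x*f(x, rho), z + y*f(x, rho), -y + z*f(x, rho)]

u0 = [0.5, 0.3, 0.4]
sol = solve_ivp(rhs, (0.0, 5.0), u0, rtol=1e-10, atol=1e-12)

# x / (y^2 + z^2) should stay constant along the flow
ratio = sol.y[0] / (sol.y[1]**2 + sol.y[2]**2)
print(ratio[0], ratio[-1])
```

The conserved ratio is one concrete trace of the conservative part in the conservative/nonconservative decomposition the abstract mentions.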
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
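qsmooth itself interpolates smoothly between global and group-level quantile normalization with a data-driven weight; the following sketch (not the authors' implementation) shows only the group-level limiting case, where each sample is quantile-normalized against samples in the same biological group so that global differences between groups survive:

```python
import numpy as np

def quantile_normalize(mat):
    """Force every column (sample) of `mat` to share one distribution:
    the mean of the sorted columns."""
    order = np.argsort(mat, axis=0)
    ranks = np.argsort(order, axis=0)          # rank of each entry per column
    reference = np.sort(mat, axis=0).mean(axis=1)
    return reference[ranks]

def groupwise_quantile_normalize(mat, groups):
    """Quantile-normalize each biological group separately, preserving
    distributional differences between groups (qsmooth's limiting case)."""
    out = np.empty_like(mat, dtype=float)
    for g in np.unique(groups):
        cols = np.where(groups == g)[0]
        out[:, cols] = quantile_normalize(mat[:, cols])
    return out

# Toy matrix: rows are features, columns are samples;
# samples 0-1 form one biological group, samples 2-3 the other.
mat = np.array([[5.0, 4.0, 9.0, 1.0],
                [2.0, 1.0, 7.0, 5.0],
                [3.0, 6.0, 8.0, 3.0],
                [4.0, 2.0, 6.0, 4.0]])
groups = np.array([0, 0, 1, 1])
gq = groupwise_quantile_normalize(mat, groups)
```

After the group-wise step, samples within a group share a distribution, but the two groups' distributions remain distinct.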
Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20
Michael, J. Robert; Volkov, Anatoliy
2015-03-01
The widely used pseudoatom formalism in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens. It was shown that the analytical form for normalization coefficients is available primarily forl ≤ 4. Only in very special cases it is possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7.more » In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle–Coppens method in the Wolfram Mathematicasoftware to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate a Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.« less
FACTORS INFLUENCING THE ABILITY OF ISOLATED CELL NUCLEI TO FORM GELS IN DILUTE ALKALI
Dounce, Alexander L.; Monty, Kenneth J.
1955-01-01
1. Known methods for isolating cell nuclei are divided into two classes, depending on whether or not the nuclei are capable of forming gels in dilute alkali or strong saline solutions. Methods which produce nuclei that can form gels apparently prevent the action of an intramitochondrial enzyme capable of destroying the gel-forming capacity of the nuclei. Methods in the other class are believed to permit this enzyme to act on the nuclei during the isolation procedure, causing detachment of DNA from some nuclear constituent (probably protein). 2. It is shown that heating in alkaline solution and x-irradiation can destroy nuclear gels. Heating in acid or neutral solutions can destroy the capacity of isolated nuclei to form gels. 3. Chemical and biological evidence is summarized in favor of the hypothesis that DNA is normally bound firmly to some nuclear component by non-ionic linkages. PMID:14381437
NASA Technical Reports Server (NTRS)
Goodwin, Thomas J. (Inventor); Wolf, David A. (Inventor); Spaulding, Glenn F. (Inventor); Prewett, Tracey L. (Inventor)
1996-01-01
Normal mammalian tissue and the culturing process has been developed for the three groups of organ, structural, and blood tissue. The cells are grown in vitro under microgravity culture conditions and form three dimensional cells aggregates with normal cell function. The microgravity culture conditions may be microgravity or simulated microgravity created in a horizontal rotating wall culture vessel.
Normalization of metabolomics data with applications to correlation maps.
Jauhiainen, Alexandra; Madhu, Basetti; Narita, Masako; Narita, Masashi; Griffiths, John; Tavaré, Simon
2014-08-01
In metabolomics, the goal is to identify and measure the concentrations of different metabolites (small molecules) in a cell or a biological system. The metabolites form an important layer in the complex metabolic network, and the interactions between different metabolites are often of interest. It is crucial to perform proper normalization of metabolomics data, but current methods may not be applicable when estimating interactions in the form of correlations between metabolites. We propose a normalization approach based on a mixed model, with simultaneous estimation of a correlation matrix. We also investigate how the common use of a calibration standard in nuclear magnetic resonance (NMR) experiments affects the estimation of correlations. We show with both real and simulated data that our proposed normalization method is robust and has good performance when discovering true correlations between metabolites. The standardization of NMR data is shown in simulation studies to affect our ability to discover true correlations to a small extent. However, comparing standardized and non-standardized real data does not result in any large differences in correlation estimates. Source code is freely available at https://sourceforge.net/projects/metabnorm/. Contact: alexandra.jauhiainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
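Why normalization matters for correlation maps can be illustrated with a crude stand-in for the sample effect (per-sample mean removal rather than the authors' mixed model; all numbers are simulated): a shared technical effect, such as dilution, shifts every metabolite in a sample together and so manufactures correlation where none exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_metab = 200, 50

# Independent "true" log-abundances: no real metabolite-metabolite correlation
truth = rng.normal(0.0, 1.0, size=(n_samples, n_metab))

# A per-sample technical effect (e.g. dilution) shifts all metabolites in a
# sample by the same amount, inducing spurious positive correlations.
sample_effect = rng.normal(0.0, 2.0, size=(n_samples, 1))
observed = truth + sample_effect

# Crude normalization: remove each sample's mean before correlating
normalized = observed - observed.mean(axis=1, keepdims=True)

r_raw = np.corrcoef(observed, rowvar=False)
r_norm = np.corrcoef(normalized, rowvar=False)
off = ~np.eye(n_metab, dtype=bool)
print("mean off-diagonal correlation, raw:       ", r_raw[off].mean())
print("mean off-diagonal correlation, normalized:", r_norm[off].mean())
```

The raw data show a strong artifactual correlation between all metabolite pairs, which normalization largely removes; the mixed-model approach of the paper estimates the sample effect and the correlation matrix jointly instead of in two steps.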
Implementation of the block-Krylov boundary flexibility method of component synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Abdallah, Ayman A.; Hucklebridge, Arthur A.
1993-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based on the boundary flexibility vectors of the component. This algorithm is not load-dependent, is applicable to both fixed and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. This methodology was implemented in the MSC/NASTRAN normal modes solution sequence using DMAP. The accuracy is found to be comparable to that of component synthesis based upon normal modes. The block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem.
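The block-Krylov recurrence can be sketched in a few lines of linear algebra. This is a simplified illustration, not the MSC/NASTRAN DMAP implementation: the matrices are assumed toy values, the seed stands in for the boundary flexibility vectors, and plain QR orthonormalization replaces mass-orthogonalization. The point it demonstrates is that every step is a static solve, so no eigenproblem is needed.

```python
import numpy as np

def block_krylov_ritz(K, M, seed, n_blocks):
    """Generate a block-Krylov basis of static Ritz vectors:
        X_0 = K^{-1} seed,   X_{j+1} = K^{-1} M X_j,
    then orthonormalize the stacked blocks. Each recurrence step is a
    static solution with the stiffness matrix K."""
    blocks = []
    X = np.linalg.solve(K, seed)
    for _ in range(n_blocks):
        blocks.append(X)
        X = np.linalg.solve(K, M @ X)
    basis, _ = np.linalg.qr(np.hstack(blocks))  # orthonormalize the basis
    return basis

# Toy 2-DOF example (assumed stiffness/mass, for illustration only)
K = np.array([[4.0, -1.0],
              [-1.0, 3.0]])
M = np.eye(2)
seed = np.array([[1.0], [0.0]])  # stand-in for a boundary flexibility vector
basis = block_krylov_ritz(K, M, seed, n_blocks=2)
```

In a component-synthesis setting the resulting basis replaces the truncated set of normal eigenvectors when reducing the component model.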
Code of Federal Regulations, 2010 CFR
2010-07-01
... Money and Finance: Treasury Office of the Secretary of the Treasury TERRORISM RISK INSURANCE PROGRAM... using normal business practices, including forms and methods of communication used to communicate...
Predicting financial market crashes using ghost singularities.
Smug, Damian; Ashwin, Peter; Sornette, Didier
2018-01-01
We analyse the behaviour of a non-linear model of coupled stock and bond prices exhibiting periodically collapsing bubbles. By using the formalism of dynamical system theory, we explain what drives the bubbles and how foreshocks or aftershocks are generated. A dynamical phase space representation of that system coupled with standard multiplicative noise rationalises the log-periodic power law singularity pattern documented in many historical financial bubbles. The notion of 'ghosts of finite-time singularities' is introduced and used to estimate the end of an evolving bubble, using finite-time singularities of an approximate normal form near the bifurcation point. We test the forecasting skill of this method on different stochastic price realisations and compare with Monte Carlo simulations of the full system. Remarkably, the approximate normal form is significantly more precise and less biased. Moreover, the method of ghosts of singularities is less sensitive to the noise realisation, thus providing more robust forecasts. PMID:29596485
Selected retinoids: determination by isocratic normal-phase HPLC.
Klvanova, J; Brtko, J
2002-09-01
Retinol (ROL), retinal (RAL) and retinoic acid (RA) are physiologically active forms of vitamin A. All-trans retinoic acid (ATRA) can be formed by oxidation from all-trans retinal (ATRAL). Isomerization of RA is considered to be an important metabolic pathway of retinoids. RA isomers transactivate various response pathways via their cognate nuclear receptors that act as ligand inducible transcription factors. The aim of this study was to establish a rapid and simple method for determination of ATRA, 13-cis retinoic acid (13CRA) and ATRAL by HPLC. In our laboratory, we slightly modified the method of Miyagi et al. (2001) and separated ATRA, 13CRA and ATRAL by simple isocratic normal phase HPLC. Both retinoic acid isomers and ATRAL were eluted within 13 min and all components were well resolved. The coefficients of variation (C.V.) for RAs and RAL were from 3.0 to 5.4 %.
A quasiparticle-based multi-reference coupled-cluster method.
Rolik, Zoltán; Kállay, Mihály
2014-10-07
The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form where the parameters, holding exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.
Low melting high lithia glass compositions and methods
Jantzen, Carol M.; Pickett, John B.; Cicero-Herman, Connie A.; Marra, James C.
2003-09-23
The invention relates to methods of vitrifying waste and for lowering the melting point of glass forming systems by including lithia formers in the glass forming composition in significant amounts, typically from about 0.16 wt % to about 11 wt %, based on the total glass forming oxides. The lithia is typically included as a replacement for alkali oxide glass formers that would normally be present in a particular glass forming system. Replacement can occur on a mole percent or weight percent basis, and typically results in a composition wherein lithia forms about 10 wt % to about 100 wt % of the alkali oxide glass formers present in the composition. The present invention also relates to the high lithia glass compositions formed by these methods. The invention is useful for stabilization of numerous types of waste materials, including aqueous waste uranium oxides. The decrease in melting point achieved by the present invention desirably prevents volatilization of hazardous or radioactive species during vitrification.
A python module to normalize microarray data by the quantile adjustment method.
Baber, Ibrahima; Tamby, Jean Philippe; Manoukis, Nicholas C; Sangaré, Djibril; Doumbia, Seydou; Traoré, Sekou F; Maiga, Mohamed S; Dembélé, Doulaye
2011-06-01
Microarray technology is widely used for gene expression research targeting the development of new drug treatments. In the case of a two-color microarray, the process starts with labeling DNA samples with fluorescent markers (cyanine 635 or Cy5 and cyanine 532 or Cy3), then mixing and hybridizing them on a chemically treated glass printed with probes, or fragments of genes. The level of hybridization between a strand of labeled DNA and a probe present on the array is measured by scanning the fluorescence of spots in order to quantify the expression based on the quality and number of pixels for each spot. The intensity data generated from these scans are subject to errors due to differences in fluorescence efficiency between Cy5 and Cy3, as well as variation in human handling and quality of the sample. Consequently, data have to be normalized to correct for variations which are not related to the biological phenomena under investigation. Among many existing normalization procedures, we have implemented the quantile adjustment method using the python computer language, and produced a module which can be run via an HTML dynamic form. This module is composed of different functions for data files reading, intensity and ratio computations and visualization. The current version of the HTML form allows the user to visualize the data before and after normalization. It also gives the option to subtract background noise before normalizing the data. The output results of this module are in agreement with the results of other normalization tools. Published by Elsevier B.V.
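The quantile-matching idea behind such a module can be sketched for the Cy5/Cy3 dye bias the abstract describes; this is a hedged illustration with simulated intensities and hypothetical background levels, not the module's actual code:

```python
import numpy as np

def quantile_adjust(cy5, cy3):
    """Map the Cy5 intensities onto the Cy3 distribution by matching
    quantiles, removing the systematic dye-efficiency difference."""
    ranks = np.argsort(np.argsort(cy5))  # rank of each Cy5 spot
    return np.sort(cy3)[ranks]           # assign the same-rank Cy3 value

rng = np.random.default_rng(1)
cy3 = rng.lognormal(mean=7.0, sigma=1.0, size=2000)
cy5 = 1.6 * rng.lognormal(mean=7.0, sigma=1.0, size=2000)  # simulated dye bias

# Optional background subtraction before normalization, as in the module
cy5_bg, cy3_bg = 40.0, 35.0   # hypothetical per-channel background levels
cy5c = np.clip(cy5 - cy5_bg, 1.0, None)
cy3c = np.clip(cy3 - cy3_bg, 1.0, None)

cy5_adj = quantile_adjust(cy5c, cy3c)
M_raw = np.log2(cy5c / cy3c)      # log-ratios before adjustment
M_adj = np.log2(cy5_adj / cy3c)   # log-ratios after adjustment
print("median log-ratio before:", np.median(M_raw))
print("median log-ratio after: ", np.median(M_adj))
```

After adjustment the two channels share one intensity distribution, so the systematic offset in the log-ratios disappears while spot-level differences remain.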
NASA Astrophysics Data System (ADS)
Hou, Peng-Fei; Zhang, Yang
2017-09-01
Because most piezoelectric functional devices, including sensors, actuators and energy harvesters, are in the form of a piezoelectric coated structure, it is valuable to present an accurate and efficient method for obtaining the electro-mechanical coupling fields of this coated structure under mechanical and electrical loads. With this aim, the two-dimensional Green’s function for a normal line force and line charge on the surface of a coated structure, which is a combination of an orthotropic piezoelectric coating and an orthotropic elastic substrate, is presented in the form of elementary functions based on the general solution method. The corresponding electro-mechanical coupling fields of this coated structure under arbitrary mechanical and electrical loads can then be obtained by the superposition principle and Gauss integration. Numerical results show that the presented method has high computational precision, efficiency and stability. It can be used to design the best coating thickness in functional devices, improve the sensitivity of sensors, and improve the efficiency of actuators and energy harvesters. This method could be an efficient tool for engineers in engineering applications.
NASA Technical Reports Server (NTRS)
Cole, G. L.; Willoh, R. G.
1975-01-01
A linearized mathematical analysis is presented for determining the response of normal shock position and subsonic duct pressures to flow-field perturbations upstream of the normal shock in mixed-compression supersonic inlets. The inlet duct cross-sectional area variation is approximated by constant-area sections; this approximation results in one-dimensional wave equations. A movable normal shock separates the supersonic and subsonic flow regions, and a choked exit is assumed for the inlet exit condition. The analysis leads to a closed-form matrix solution for the shock position and pressure transfer functions. Analytical frequency response results are compared with experimental data and a method of characteristics solution.
Adaptive form-finding method for form-fixed spatial network structures
NASA Astrophysics Data System (ADS)
Lan, Cheng; Tu, Xi; Xue, Junqing; Briseghella, Bruno; Zordan, Tobia
2018-02-01
An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced along with the example of designing an ellipsoidal network dome with bar length variations as small as possible. A typical spherical geodesic network is selected as an initial state, having bar lengths in a limited number of groups. Next, this network is transformed into the desired ellipsoidal shape by applying compressions on bars according to the bar length variations caused by the transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions by applying residual forces. During the form-finding process, the boundary condition of constraining nodes on the ellipsoid surface is considered as reactions along the normal direction of the surface at the node positions, which balance the components, in the reverse direction, of the nodal forces induced by compressions on bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. The optimal solution is then selected from the time history of states by properly choosing convergence criteria, and the presented form-finding procedure is shown to be applicable to form-fixed problems.
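The dynamic relaxation step described above can be sketched in Python. This is a minimal single-node illustration under assumed values: a toy network of three bars anchored at fixed points, assumed stiffness, time step and velocity-decay (kinetic) damping, and no surface-projection constraint, so it is a sketch of the integrator only, not the paper's full adaptive scheme.

```python
import numpy as np

# Three bars connect one free node to three fixed anchors (assumed toy data)
anchors = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
target_len = np.array([1.3, 1.3, 1.1])   # desired bar lengths
x = np.array([1.0, 0.5])                 # free node, initial guess
v = np.zeros(2)
k, dt, damping = 10.0, 0.02, 0.9         # stiffness, step, velocity decay

for _ in range(2000):
    d = x - anchors                      # bar vectors (anchor -> node)
    L = np.linalg.norm(d, axis=1)        # current bar lengths
    # Residual force: each bar pushes/pulls the node toward its target length
    F = (k * (target_len - L)[:, None] * d / L[:, None]).sum(axis=0)
    v = damping * (v + dt * F)           # explicit step with kinetic damping
    x = x + dt * v

residual = np.linalg.norm(F)             # residual force at the final state
```

With the damping applied every step the velocity decays geometrically, so the node settles where the residual force vanishes; by symmetry of the anchors the equilibrium stays on the x = 1 axis.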
NASA Astrophysics Data System (ADS)
Xu, Ye; Sonka, Milan; McLennan, Geoffrey; Guo, Junfeng; Hoffman, Eric
2005-04-01
Lung parenchyma evaluation via multidetector-row CT (MDCT) has significantly altered clinical practice in the early detection of lung disease. Our goal is to enhance our texture-based tissue classification ability to differentiate early pathologic processes by extending our 2-D Adaptive Multiple Feature Method (AMFM) to 3-D AMFM. We performed MDCT on 34 human volunteers in five categories: emphysema in severe chronic obstructive pulmonary disease (COPD) (EC), emphysema in mild COPD (MC), normal-appearing lung in COPD (NC), non-smokers with normal lung function (NN), and smokers with normal function (NS). We volumetrically excluded the airway and vessel regions, calculated 24 volumetric texture features for each Volume of Interest (VOI), and used Bayesian rules for discrimination. Leave-one-out and half-half methods were used for testing. Sensitivity, specificity and accuracy were calculated. The accuracy of the leave-one-out method for the four-class classification in the form of 3-D/2-D is: EC: 84.9%/70.7%; MC: 89.8%/82.7%; NC: 87.5%/49.6%; NN: 100.0%/60.0%. The accuracy of the leave-one-out method for the two-class classification in the form of 3-D/2-D is: NN: 99.3%/71.6%; NS: 99.7%/74.5%. We conclude that 3-D AMFM analysis of the lung parenchyma improves discrimination compared to 2-D analysis of the same images.
Method for distinguishing normal and transformed cells using G1 kinase inhibitors
Crissman, Harry A.; Gadbois, Donna M.; Tobey, Robert A.; Bradbury, E. Morton
1993-01-01
A G1 phase kinase inhibitor is applied in a low concentration to a population of normal and transformed mammalian cells. The concentration of G1 phase kinase inhibitor is selected to reversibly arrest normal mammalian cells in the G1 cell cycle without arresting growth of transformed cells. The transformed cells may then be selectively identified and/or cloned for research or diagnostic purposes. The transformed cells may also be selectively killed by therapeutic agents that do not affect normal cells in the G1 phase, suggesting that such G1 phase kinase inhibitors may form an effective adjuvant for use with chemotherapeutic agents in cancer therapy for optimizing the killing dose of chemotherapeutic agents while minimizing undesirable side effects on normal cells.
Normal forms for Poisson maps and symplectic groupoids around Poisson transversals
NASA Astrophysics Data System (ADS)
Frejlich, Pedro; Mărcuț, Ioan
2018-03-01
Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.
NCC-RANSAC: a fast plane extraction method for 3-D range data segmentation.
Qian, Xiangfei; Ye, Cang
2014-12-01
This paper presents a new plane extraction (PE) method based on the random sample consensus (RANSAC) approach. The generic RANSAC-based PE algorithm may over-extract a plane, and it may fail in case of a multistep scene where the RANSAC procedure results in multiple inlier patches that form a slant plane straddling the steps. The CC-RANSAC PE algorithm successfully overcomes the latter limitation if the inlier patches are separate. However, it fails if the inlier patches are connected. A typical scenario is a stairway with a stair wall, where the RANSAC plane-fitting procedure results in inlier patches in the tread, riser, and stair wall planes. They connect together and form a plane. The proposed method, called normal-coherence CC-RANSAC (NCC-RANSAC), performs a normal coherence check on all data points of the inlier patches and removes the data points whose normal directions are contradictory to that of the fitted plane. This process results in separate inlier patches, each of which is treated as a candidate plane. A recursive plane clustering process is then executed to grow each of the candidate planes until all planes are extracted in their entireties. The RANSAC plane-fitting and the recursive plane clustering processes are repeated until no more planes are found. A probabilistic model is introduced to predict the success probability of the NCC-RANSAC algorithm and validated with real data from a 3-D time-of-flight camera (SwissRanger SR4000). Experimental results demonstrate that the proposed method extracts more accurate planes with less computational time than the existing RANSAC-based methods.
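The normal-coherence check at the heart of NCC-RANSAC can be sketched as follows. The function name, the 60° angular threshold, and the toy floor/wall points are illustrative assumptions; the RANSAC plane fitting and the recursive clustering steps are omitted.

```python
import numpy as np

def normal_coherence_filter(points, point_normals, plane_normal, angle_deg=60.0):
    """Keep only the points whose local surface normal agrees with the fitted
    plane's normal (a sketch of the coherence test in NCC-RANSAC).

    `point_normals` are assumed to be unit vectors; the sign is ignored so
    normals estimated on either side of the surface still count as coherent.
    """
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    cos_thresh = np.cos(np.radians(angle_deg))
    coherent = np.abs(point_normals @ plane_normal) >= cos_thresh
    return points[coherent], coherent

# Toy example: floor points (normal +z) mixed with wall points (normal +x),
# all labeled inliers by a plane fitted to the combined patch
pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 0, 1], [2, 0, 2]], float)
nrm = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1], [1, 0, 0], [1, 0, 0]], float)
kept, mask = normal_coherence_filter(pts, nrm, np.array([0.0, 0.0, 1.0]))
```

The wall points are rejected because their normals are perpendicular to the candidate plane's normal, leaving a clean floor patch to grow in the clustering stage.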
Katsumata, Yuriko; Mathews, Melissa; Abner, Erin L.; Jicha, Gregory A.; Caban-Holt, Allison; Smith, Charles D.; Nelson, Peter T.; Kryscio, Richard J.; Schmitt, Frederick A.; Fardo, David W.
2015-01-01
Background The Boston Naming Test (BNT) is a commonly used neuropsychological test of confrontation naming that aids in determining the presence and severity of dysnomia. Many short versions of the original 60-item test have been developed and are routinely administered in clinical/research settings. Because of the common need to translate similar measures within and across studies, it is important to evaluate the operating characteristics and agreement of different BNT versions. Methods We analyzed longitudinal data of research volunteers (n = 681) from the University of Kentucky Alzheimer’s Disease Center longitudinal cohort. Conclusions With the notable exception of the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) 15-item BNT, short forms were internally consistent and highly correlated with the full version; these measures varied by diagnosis and generally improved from normal to mild cognitive impairment (MCI) to dementia. All short forms retained the ability to discriminate between normal subjects and those with dementia. The ability to discriminate between normal and MCI subjects was less strong for the short forms than the full BNT, but they exhibited similar patterns. These results have important implications for researchers designing longitudinal studies, who must consider that the statistical properties of even closely related test forms may be quite different. PMID:25613081
Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi
2006-06-01
This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is interesting and useful especially when data are incomplete and inaccurate. The effectiveness and practicability of the proposed method are demonstrated by its implementation in a mobile commerce system.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
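A minimal sketch of the first of the compared approaches, constructing stabilized inverse probability weights for a continuous exposure from normal densities with constant variance, is shown below. The simulated confounder/exposure model and sample size are assumptions for illustration, not the paper's empirical cohort.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
L = rng.normal(size=n)                  # confounder
A = 0.5 * L + rng.normal(size=n)        # continuous exposure depending on L

# Fit the conditional exposure model A ~ L by ordinary least squares
X = np.column_stack([np.ones(n), L])
beta, *_ = np.linalg.lstsq(X, A, rcond=None)
resid = A - X @ beta
sd_cond = resid.std(ddof=2)             # homoscedastic residual SD

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Stabilized weight: marginal density of A over its conditional density
num = normal_pdf(A, A.mean(), A.std(ddof=1))
den = normal_pdf(A, X @ beta, sd_cond)
w = num / den
```

When both density models are correctly specified, the stabilized weights have expectation one, which is a useful diagnostic before fitting the weighted marginal structural model.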
The Existence of Periodic Orbits and Invariant Tori for Some 3-Dimensional Quadratic Systems
Jiang, Yanan; Han, Maoan; Xiao, Dongmei
2014-01-01
We use the normal form theory, averaging method, and integral manifold theorem to study the existence of limit cycles in Lotka-Volterra systems and the existence of invariant tori in quadratic systems in ℝ³. PMID:24982980
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management field. We derive a full set of numerical characteristics and variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach uses more of the variance information in the original data than the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of risk-return tradeoff in China's stock market.
Improvements in Block-Krylov Ritz Vectors and the Boundary Flexibility Method of Component Synthesis
NASA Technical Reports Server (NTRS)
Carney, Kelly Scott
1997-01-01
A method of dynamic substructuring is presented which utilizes a set of static Ritz vectors as a replacement for normal eigenvectors in component mode synthesis. This set of Ritz vectors is generated in a recurrence relationship, proposed by Wilson, which has the form of a block-Krylov subspace. The initial seed to the recurrence algorithm is based upon the boundary flexibility vectors of the component. Improvements have been made in the formulation of the initial seed to the Krylov sequence, through the use of block-filtering. A method to shift the Krylov sequence to create Ritz vectors that will represent the dynamic behavior of the component at target frequencies, the target frequency being determined by the applied forcing functions, has been developed. A method to terminate the Krylov sequence has also been developed. Various orthonormalization schemes have been developed and evaluated, including the Cholesky/QR method. Several auxiliary theorems and proofs which illustrate issues in component mode synthesis and loss of orthogonality in the Krylov sequence have also been presented. The resulting methodology is applicable to both fixed- and free-interface boundary components, and results in a general component model appropriate for any type of dynamic analysis. The accuracy is found to be comparable to that of component synthesis based upon normal modes, using fewer generalized coordinates. In addition, the block-Krylov recurrence algorithm is a series of static solutions and so requires significantly less computation than solving the normal eigenspace problem. The requirement for fewer vectors to form the component, coupled with the lower computational expense of calculating these Ritz vectors, combines to create a method more efficient than traditional component mode synthesis.
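The Wilson-style recurrence of static solutions can be sketched for the simplest case, a block size of one with Gram-Schmidt mass-orthonormalization. The spring-mass chain, the load vector, and the plain one-pass orthonormalization are assumptions for illustration; the paper's block-filtering, shifting, and Cholesky/QR variants are not reproduced here.

```python
import numpy as np

def krylov_ritz_basis(K, M, F, n_vec):
    """Static Ritz vectors from the recurrence K x_{i+1} = M x_i, seeded by
    the static response to the load F and M-orthonormalized at each step.
    (A single-vector sketch of the block-Krylov recurrence.)"""
    basis = []
    x = np.linalg.solve(K, F)                  # static seed: flexibility * load
    for _ in range(n_vec):
        for b in basis:                        # Gram-Schmidt vs earlier vectors
            x = x - (b @ (M @ x)) * b
        x = x / np.sqrt(x @ (M @ x))           # M-normalize
        basis.append(x)
        x = np.linalg.solve(K, M @ x)          # next static solution
    return np.array(basis).T

# Small spring-mass chain as an assumed example system
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness (SPD)
M = np.eye(n)                                          # lumped mass
F = np.zeros(n); F[-1] = 1.0                           # tip load
V = krylov_ritz_basis(K, M, F, 3)
G = V.T @ M @ V        # should be the 3x3 identity (M-orthonormal basis)
```

Each new vector costs one back-substitution against the already-factored stiffness matrix, which is the source of the efficiency claim relative to solving the eigenproblem.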
TRASYS form factor matrix normalization
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn T.
1992-01-01
A method has been developed for adjusting a TRASYS enclosure form factor matrix to unity. This approach is not limited to closed geometries; in fact, it is primarily intended for use with open geometries. The purpose of this approach is to prevent optimistic form factors to space. In this method, nodal form factor sums are calculated within 0.05 of unity using TRASYS, although deviations as large as 0.10 may be acceptable, and then a process is employed to distribute the difference amongst the nodes. A specific example has been analyzed with this method, and a comparison was performed with a standard approach for calculating radiation conductors. In this comparison, hot and cold case temperatures were determined. Exterior nodes exhibited temperature differences as large as 7 °C and 3 °C for the hot and cold cases, respectively, when compared with the standard approach, while interior nodes demonstrated temperature differences from 0 °C to 5 °C. These results indicate that temperature predictions can be artificially biased if the form factor computation error is lumped into the individual form factors to space.
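The row-sum adjustment can be sketched as below. Proportional rescaling of each node's row is a simple stand-in for the actual TRASYS redistribution process, which the abstract does not specify in detail; the 0.10 tolerance and the toy three-node enclosure are taken as assumptions.

```python
import numpy as np

def normalize_form_factors(F, tol=0.10):
    """Scale each node's form-factor row so it sums to one, distributing the
    deviation proportionally across the row (a stand-in for the TRASYS
    adjustment).  Rows deviating from unity by more than `tol` are left
    untouched for re-inspection."""
    F = F.copy()
    sums = F.sum(axis=1)
    ok = np.abs(sums - 1.0) <= tol
    F[ok] = F[ok] / sums[ok, None]
    return F, ok

# Toy 3-node enclosure: row sums 0.97 and 1.04 are adjustable,
# the 0.80 row exceeds the tolerance and is flagged instead
F = np.array([[0.00, 0.50, 0.47],
              [0.52, 0.00, 0.52],
              [0.40, 0.40, 0.00]])
Fn, ok = normalize_form_factors(F)
```

For an open geometry the residual (one minus the row sum) is exactly the optimistic form factor to space that the adjustment is meant to eliminate.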
Redox-Modulated Phenomena and Radiation Therapy: The Central Role of Superoxide Dismutases
Holley, Aaron K.; Miao, Lu; St. Clair, Daret K.
2014-01-01
Significance: Ionizing radiation is a vital component in the oncologist's arsenal for the treatment of cancer. Approximately 50% of all cancer patients will receive some form of radiation therapy as part of their treatment regimen. DNA is considered the major cellular target of ionizing radiation and can be damaged directly by radiation or indirectly through reactive oxygen species (ROS) formed from the radiolysis of water, enzyme-mediated ROS production, and ROS resulting from altered aerobic metabolism. Recent Advances: ROS are produced as a byproduct of oxygen metabolism, and superoxide dismutases (SODs) are the chief scavengers. ROS contribute to the radioresponsiveness of normal and tumor tissues, and SODs modulate the radioresponsiveness of tissues, thus affecting the efficacy of radiotherapy. Critical Issues: Despite its prevalent use, radiation therapy suffers from certain limitations that diminish its effectiveness, including tumor hypoxia and normal tissue damage. Oxygen is important for the stabilization of radiation-induced DNA damage, and tumor hypoxia dramatically decreases radiation efficacy. Therefore, auxiliary therapies are needed to increase the effectiveness of radiation therapy against tumor tissues while minimizing normal tissue injury. Future Directions: Because of the importance of ROS in the response of normal and cancer tissues to ionizing radiation, methods that differentially modulate the ROS scavenging ability of cells may prove to be an important method to increase the radiation response in cancer tissues and simultaneously mitigate the damaging effects of ionizing radiation on normal tissues. Altering the expression or activity of SODs may prove valuable in maximizing the overall effectiveness of ionizing radiation. Antioxid. Redox Signal. 20, 1567–1589. PMID:24094070
A Recursive Approach to Compute Normal Forms
NASA Astrophysics Data System (ADS)
Hsu, L.; Min, L. J.; Favretto, L.
2001-06-01
Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.
Generations of orthogonal surface coordinates
NASA Technical Reports Server (NTRS)
Blottner, F. G.; Moreno, J. B.
1980-01-01
Two generation methods were developed for three dimensional flows where the computational domain normal to the surface is small. With this restriction the coordinate system requires orthogonality only at the body surface. The first method uses the orthogonal condition in finite-difference form to determine the surface coordinates with the metric coefficients and curvature of the coordinate lines calculated numerically. The second method obtains analytical expressions for the metric coefficients and for the curvature of the coordinate lines.
Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)
NASA Astrophysics Data System (ADS)
Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi
2017-03-01
The formation of the optimal portfolio is a method that can help investors to minimize risks and optimize profitability. One model for constructing the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction about the return of the portfolio, as a basis for preparing the asset weighting models. The BL model has two fundamental problems: the assumption of normality, and the estimation of parameters in the market Bayesian prior framework when returns do not come from a normal distribution. This study provides an alternative solution in which the stock returns and investor views of the BL model are modelled with a non-normal distribution.
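For reference, the standard (normality-based) BL posterior return that this study departs from can be sketched as follows. The two-asset covariance, equilibrium returns, view matrix, and the choice Ω = P(τΣ)Pᵀ are all illustrative assumptions, not the paper's bank data.

```python
import numpy as np

# Standard Black-Litterman posterior under normality (illustrative numbers)
Sigma = np.array([[0.040, 0.006],
                  [0.006, 0.090]])          # asset covariance
pi = np.array([0.05, 0.07])                 # equilibrium (prior) returns
tau = 0.05                                  # prior scaling
P = np.array([[1.0, -1.0]])                 # one view: asset 1 minus asset 2
Q = np.array([0.01])                        # view value: outperform by 1%
Omega = P @ (tau * Sigma) @ P.T             # view uncertainty (assumed choice)

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
```

With Ω chosen equal to P(τΣ)Pᵀ, prior and view carry equal confidence, so the posterior view value P·μ lands exactly halfway between P·π and Q; that midpoint property makes a convenient sanity check.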
Fabrication method for cores of structural sandwich materials including star shaped core cells
Christensen, Richard M.
1997-01-01
A method for fabricating structural sandwich materials having a core pattern which utilizes star and non-star shaped cells. Sheets of material are bonded together (or a single folded sheet is used) and bonded or welded at specific locations in a flat configuration; they are then mechanically pulled or expanded normal to the plane of the sheets, expanding to form the cells. This method can be utilized to fabricate other geometric cell arrangements than the star/non-star shaped cells. Four sheets of material (either a pair of bonded sheets or a single folded sheet) are bonded so as to define an area therebetween, which forms the star shaped cell when expanded.
van Albada, S J; Robinson, P A
2007-04-15
Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures, which are highly non-normal, from repeat recordings of 32 subjects. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
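For a continuous sample, a transformation of this kind reduces to normal scores: y = Φ⁻¹(F̂(x)), with F̂ the empirical CDF. The sketch below is a generic version of that idea, not the authors' exact construction; the (rank + 0.5)/n plotting position is an assumed convention used to keep probabilities strictly inside (0, 1).

```python
import numpy as np
from statistics import NormalDist

def to_normal_scores(x):
    """Map a continuous sample to normal scores y = Phi^{-1}(F_n(x)).

    The empirical CDF is evaluated at (rank + 0.5) / n so no probability
    hits 0 or 1.  Ranks are exact for distinct values.
    """
    x = np.asarray(x, float)
    ranks = np.argsort(np.argsort(x))
    p = (ranks + 0.5) / len(x)
    nd = NormalDist()
    return np.array([nd.inv_cdf(pi) for pi in p])

# Strongly right-skewed (log-normal) sample
rng = np.random.default_rng(1)
x = np.exp(rng.normal(size=1000))
y = to_normal_scores(x)
```

The transform is monotone, so it preserves the ordering of the data while forcing the marginal distribution onto the Gaussian, which is exactly the property exploited in the reliability comparison.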
Space flight and bone formation.
Doty, St B
2004-12-01
Major physiological changes which occur during spaceflight include bone loss, muscle atrophy, and cardiovascular and immune response alterations. When trying to determine why bone loss occurs during spaceflight, one must remember that all these other changes in physiology and metabolism may also have an impact on the skeletal system. For bone, however, the role of normal weight bearing is a major concern, and we have found no adequate substitute for weight bearing which can prevent bone loss. During the study of this problem, we have learned a great deal about bone physiology and increased our knowledge about how normal bone is formed and maintained. Presently, we do not have adequate ground-based models which can mimic the tissue loss that occurs in spaceflight, but this condition closely resembles the bone loss seen with osteoporosis. Although a normal bone structure will respond to application of mechanical force and weight bearing by forming new bone, a weakened osteoporotic bone may have a tendency to fracture. The study of the skeletal system during weightless conditions will eventually produce preventative measures and form a basis for protecting the crew during long-term space flight. The added benefit from these studies will be methods to treat bone loss conditions which occur here on earth.
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
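The kind of closed-form construction the asymptotic normality enables can be illustrated on a toy two-spin fully visible BM with a single coupling w, where P(x₁, x₂) ∝ exp(w·x₁·x₂). In this assumed toy model the MPLE itself has a closed form, and a Wald interval follows from a sandwich (Godambe) variance; this is an illustration of the technique, not the brief's exact construction.

```python
import numpy as np

# Toy two-spin BM: the pseudolikelihood depends on the data only through
# s_i = x1_i * x2_i in {-1, +1}, with P(s = +1) = sigmoid(2 w).
rng = np.random.default_rng(42)
w_true = 0.5
n = 20000
p_agree = 1.0 / (1.0 + np.exp(-2.0 * w_true))
s = np.where(rng.random(n) < p_agree, 1.0, -1.0)

# Closed-form MPLE: l(w) = 2 * sum log sigmoid(2 w s_i) is maximized at
# w_hat = 0.5 * log(n_plus / n_minus)
n_plus = np.sum(s == 1)
n_minus = n - n_plus
w_hat = 0.5 * np.log(n_plus / n_minus)

# Sandwich variance: H = -l''(w_hat), J = sum of squared per-obs scores
sig = 1.0 / (1.0 + np.exp(-2.0 * w_hat * s))   # sigmoid(2 w_hat s_i)
H = 8.0 * np.sum(sig * (1.0 - sig))            # observed information
g = 4.0 * s * (1.0 - sig)                      # per-observation score
se = np.sqrt(np.sum(g ** 2)) / H               # sandwich standard error
ci = (w_hat - 1.96 * se, w_hat + 1.96 * se)
```

The sandwich form matters here: the pseudolikelihood counts each observation in both conditionals, so the naive inverse-information variance would understate the sampling variability of the estimator.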
Shamash, Jana; Rienstein, Shlomit; Wolf-Reznik, Haike; Pras, Elon; Dekel, Michal; Litmanovitch, Talia; Brengauz, Masha; Goldman, Boleslav; Yonath, Hagith; Dor, Jehoshua; Levron, Jacob; Aviram-Goldring, Ayala
2011-01-01
Preimplantation genetic diagnosis using fluorescence in-situ hybridization (PGD-FISH) is currently the most common reproductive solution for translocation carriers. However, this technique usually does not differentiate between embryos carrying the balanced form of the translocation and those carrying the homologous normal chromosomes. We developed a new application of preimplantation genetic haplotyping (PGH) that can identify and distinguish between all forms of translocation status in cleavage-stage embryos prior to implantation. Polymorphic markers were used to identify and differentiate between the alleles that carry the translocation and those on the normal homologous chromosomes. Embryos from two families of Robertsonian translocation carriers were successfully analyzed using polymorphic-marker haplotyping. Our preliminary results indicate that PGH is capable of distinguishing between normal, balanced, and unbalanced translocation-carrier embryos. This method will improve PGD and will enable translocation carriers to avoid transmitting the translocation and its associated medical complications to offspring.
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probability and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probability. The methods are illustrated using two examples.
Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.
Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang
2014-01-01
Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication. PMID:24977185
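Once a channel (or correlation) matrix is in hand, the characteristic parameters mentioned above, the condition number and the capacity, follow from standard formulas. The sketch below assumes a deterministic channel with equal power allocation and the usual log-det capacity expression; it is not derived from the paper's GBSBCM model:

```python
import numpy as np

def channel_metrics(H, snr=10.0):
    """Condition number of H and capacity C = log2 det(I + (snr/Nt) H H^H)
    for a deterministic MIMO channel with equal power allocation."""
    nr, nt = H.shape
    s = np.linalg.svd(H, compute_uv=False)  # singular values, descending
    cond = s[0] / s[-1]                     # well-conditioned channels -> cond near 1
    G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    capacity = np.log2(np.linalg.det(G).real)
    return cond, capacity
```

For an identity 2x2 channel at linear SNR 10, for example, the condition number is 1 and the capacity is 2·log2(1 + 10/2), the two parallel subchannels the correlation analysis aims to preserve.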
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabbro, D.E.
1988-01-01
Kinetic and immunologic techniques were developed to investigate the nature of the acid β-glucosidase (β-Glc) defects which result in human and canine Gaucher disease (GD). Two new affinity columns, using the potent β-Glc inhibitors N-alkyl-deoxynojirimycins as affinity ligands, were synthesized, and methods were developed to obtain homogeneous β-Glc from normal human placenta. Polyclonal and monoclonal antibodies (representing 14 different epitopes from 18 clones) were produced against the pure normal β-Glc. Monospecific polyclonal IgG and tritiated bromo-conduritol B epoxide ([³H]Br-CBE), a specific covalent active-site-directed inhibitor of β-Glc, were used to quantitate the functional catalytic sites in normal and Type 1 Ashkenazi Jewish GD (AJGD) enzyme preparations. The kcat values for several new substrates with the mutant enzymes from spleen were about 1.5-fold less than those of the respective normal enzyme, indicating a nearly normal catalytic capacity of the mutant enzymes. Immunoblotting studies with polyclonal or several monoclonal antibodies indicated three molecular forms of β-Glc (Mr = 67,000; 62,000 to 65,000; and 58,000) in fibroblast extracts from normals and Type 1 AJGD patients. In comparison, only one form of cross-reacting immunologic material (CRIM) was detected in fibroblast extracts from Types 2 and 3 or several non-Jewish Type 1 GD patients.
Kimura, Yoshifumi; Fukuda, Masanori; Suda, Kayo; Terazima, Masahide
2010-09-16
Fluorescence dynamics of 4'-N,N-diethylamino-3-hydroxyflavone (DEAHF) and its methoxy derivative (DEAMF) in various room temperature ionic liquids (RTILs) have been studied mainly by an optical Kerr gate method. DEAMF showed a single band fluorescence whose peak shifted with time by the solvation dynamics. The averaged solvation time determined by the fluorescence peak shift was proportional to the viscosity of the solvent except for tetradecyltrihexylphosphonium bis(trifluoromethanesulfonyl)amide. The solvation times were consistent with reported values determined with different probe molecules. DEAHF showed dual fluorescence due to the normal and tautomer forms produced by the excited state intramolecular proton transfer (ESIPT), and the relative intensities were dependent on the time and the solvent cation or anion species. By using the information of the fluorescence spectrum of DEAMF, the fluorescence spectrum of DEAHF at each delay time after the photoexcitation was decomposed into the normal and the tautomer fluorescence components, respectively. The normal component showed a very fast decay simulated by a biexponential function (2-3 and 20-30 ps) with an additional slower decay component. The tautomer component showed a rise with the time constants corresponding to the faster decay of the normal form with an additional instantaneous rise. The faster dynamics of the normal and tautomer population changes were assigned to the ESIPT process, while the slower decay of the fluorescence was attributed to the population decay from the excited state through the radiative and nonradiative processes. The average ESIPT time was much faster than the averaged solvation time of RTILs. Basically, the ESIPT kinetics in RTILs is similar to those in conventional liquid solvents like acetonitrile (Chou et al. J. Phys. Chem. A 2005, 109, 3777). 
The faster ESIPT is interpreted in terms of the activation barrierless process from the Franck-Condon state before the solvation of the normal state in the electronic excited state. With the advance of the solvation in the excited state, the normal form becomes relatively more stable than the tautomer form, which makes the ESIPT become an activation process.
The effect of tissue depth variation on craniofacial reconstructions.
Starbuck, John M; Ward, Richard E
2007-10-25
We examined the effect of tissue depth variation on the reconstruction of facial form through application of the American method, using published tissue depth measurements for emaciated, normal, and obese faces. In this preliminary study, three reconstructions were created on reproductions of the same skull, one for each set of tissue depth measurements. The resulting morphological variation was measured quantitatively using the anthropometric craniofacial variability index (CVI). This method employs 16 standard craniofacial anthropometric measurements, and the results reflect "pattern variation" or facial harmony. We report no appreciable variation in the quantitative measure of pattern facial form obtained from the three different sets of tissue depths. Facial similarity was assessed qualitatively using surveys of photographs of the three reconstructions. Surveys indicated that subjects frequently perceived the reconstructions as representing different individuals. This disagreement indicates that the size of the face may blind observers to similarities in facial form. This research is significant because it illustrates the confounding effect that normal human variation contributes to the successful recognition of individuals from a representational three-dimensional facial reconstruction. The results suggest that successful identification could be increased if multiple reconstructions were created to reflect a wide range of possible outcomes for facial form. The creation of multiple facial images from a single skull will be facilitated as computerized versions of facial reconstruction are further developed and refined.
Normal modes of a small gamelan gong.
Perrin, Robert; Elford, Daniel P; Chalmers, Luke; Swallowe, Gerry M; Moore, Thomas R; Hamdan, Sinin; Halkon, Benjamin J
2014-10-01
Studies have been made of the normal modes of a 20.7 cm diameter steel gamelan gong. A finite-element model has been constructed and its predictions for normal modes compared with experimental results obtained using electronic speckle pattern interferometry. Agreement was reasonable in view of the lack of precision in the manufacture of the instrument. The results agree with expectations for an axially symmetric system subject to small symmetry breaking. The extent to which the results obey Chladni's law is discussed. Comparison with vibrational and acoustical spectra enabled the identification of the small number of modes responsible for the sound output when played normally. Evidence of non-linear behavior was found, mainly in the form of subharmonics of true modes. Experiments using scanning laser Doppler vibrometry gave satisfactory agreement with the other methods.
Energy-Based Metrics for Arthroscopic Skills Assessment.
Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa
2017-08-05
Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
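A minimal sketch of one plausible normalized energy-based metric of the kind described above, assuming kinetic energy integrated along a sampled 3-D tool path and an idealized constant-velocity straight line between the same endpoints as the reference; the authors' actual metric definitions and classifier pipeline may differ:

```python
import numpy as np

def kinetic_energy(path, dt, mass=1.0):
    """Kinetic energy integrated along a sampled 3-D tool path."""
    v = np.diff(path, axis=0) / dt                  # finite-difference velocity
    return 0.5 * mass * np.sum(np.sum(v ** 2, axis=1)) * dt

def normalized_energy(path, dt, mass=1.0):
    """Energy of the actual path divided by that of an idealized
    constant-velocity straight line between the same endpoints
    (the assumed 'ideal performance' reference)."""
    n = path.shape[0]
    ideal = np.linspace(path[0], path[-1], n)       # straight-line reference motion
    return kinetic_energy(path, dt, mass) / kinetic_energy(ideal, dt, mass)
```

An efficient, expert-like motion yields a ratio near 1, while a wandering novice path yields a ratio well above 1; such ratios are the kind of feature one could feed into an SVM or neural-network classifier as in the study.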
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
The way to uncover community structure with core and diversity
NASA Astrophysics Data System (ADS)
Chang, Y. F.; Han, S. K.; Wang, X. D.
2018-07-01
Communities are ubiquitous in nature and society. Individuals that share common properties often self-organize to form communities. To avoid the shortcomings of high computational complexity, reliance on pre-given information, and unstable results across different runs, in this paper we propose a simple and efficient method to deepen our understanding of the emergence and diversity of communities in complex systems. By introducing rational random selection, our method reveals the hidden deterministic and normal diverse community states of community structure. To demonstrate this method, we test it on real-world systems. The results show that our method can not only detect community structure with high sensitivity and reliability, but also provide instructive information about the hidden deterministic community world and the real normal diverse community world by giving the core-community, the real-community, the tide, and the diversity. This is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in complex systems.
Method for hot press forming articles
Baker, Robert R.; Hartsock, Dale L.
1982-01-01
This disclosure relates to an improved method for achieving the best bond strength and for minimizing distortion and cracking of hot pressed articles. In particular, in a method for hot press forming both an outer facing circumferential surface of and an inner portion of a hub, and of bonding that so-formed outer facing circumferential surface to an inner facing circumferential surface of a pre-formed ring thereby to form an article, the following improvement is made. Normally, in this method, the outside ring is restrained by a restraining sleeve of ring-shaped cross-section having an inside diameter. A die member, used to hot press form the hub, is so formed as to have an outside diameter sized to engage the inside diameter of the restraining sleeve in a manner permitting relative movement therebetween. In the improved method, several pairs of matched restraining sleeve and die member are formed, each matched pair having a predetermined diameter that differs from that of the other matched pairs by stepped increments. The largest inside diameter of a restraining sleeve is equal to the diameter of the outer facing circumferential surface of the hub. Each pair of matched restraining sleeve and die member is used to form an article in which an inside hub is bonded to an outside ring. The several samples so formed are evaluated to determine which sample has the best bond between the hub and the ring with the least (or no) cracking or distortion in the ring portion of the article. Thereafter, the matched restraining sleeve and die member which formed the article having the best bonding characteristics and the least distortion or cracking are used for repeated formation of articles.
Fabrication method for cores of structural sandwich materials including star shaped core cells
Christensen, R.M.
1997-07-15
A method for fabricating structural sandwich materials having a core pattern which utilizes star and non-star shaped cells is disclosed. Sheets of material are bonded together, or a single folded sheet is used and bonded or welded at specific locations, in a flat configuration; the sheets are then mechanically pulled or expanded normal to their plane to form the cells. This method can also be used to fabricate geometric cell arrangements other than the star/non-star shaped cells. Four sheets of material (either a pair of bonded sheets or a single folded sheet) are bonded so as to define an area therebetween, which forms the star shaped cell when expanded. 3 figs.
Wang, Lei; Zhai, Shen-Qiang; Wang, Feng-Jiao; Liu, Jun-Qi; Liu, Shu-Man; Zhuo, Ning; Zhang, Chuan-Jin; Wang, Li-Jun; Liu, Feng-Qi; Wang, Zhan-Guo
2016-12-01
The design, fabrication, and characterization of a polarization-dependent, normal-incidence quantum cascade detector coupled via complementary split-ring metamaterial resonators in the infrared regime are presented. The metamaterial structure is designed using a three-dimensional finite-difference time-domain method and fabricated on the top metal contact, which forms a double-metal waveguide together with the metallic ground plane. Under normal incidence, significant enhancements of the photocurrent response are obtained at the metamaterial resonances compared with a 45°-polished edge-coupled device. The photocurrent response enhancements exhibit clear polarization dependence, and the largest enhancement factor, 165%, is obtained for incident light polarized parallel to the split-ring gap.
NASA Astrophysics Data System (ADS)
Li, Lanlan; Wei, Wei; Jia, Wen-Juan; Zhu, Yongchang; Zhang, Yan; Chen, Jiang-Huai; Tian, Jiaqi; Liu, Huanxiang; He, Yong-Xing; Yao, Xiaojun
2017-12-01
Conformational conversion of the normal cellular prion protein, PrPC, into the misfolded isoform, PrPSc, is considered to be a central event in the development of fatal neurodegenerative diseases. Stabilization of the prion protein in its normal cellular form (PrPC) with small molecules is a rational and efficient strategy for the treatment of prion-related diseases. However, few compounds have been identified as potent prion inhibitors that bind to the normal conformation of the prion protein. In this work, to rationally screen for inhibitors capable of stabilizing the cellular form of the prion protein, multiple approaches combining docking-based virtual screening, steady-state fluorescence quenching, surface plasmon resonance, and a thioflavin T fluorescence assay were used to discover new compounds that interrupt the PrPC to PrPSc conversion. Compound 3253-0207, which can bind to PrPC with micromolar affinity and inhibit prion fibrillation, was identified from small-molecule databases. Molecular dynamics simulation indicated that compound 3253-0207 binds to the hotspot residues in the binding pocket composed of β1, β2, and α2, which are significant structural moieties in the conversion from PrPC to PrPSc.
Nenadić, Dane B; Pavlović, Miloš D; Motrenko, Tatjana
2015-08-01
The Nugent score is still the gold standard in the great majority of studies assessing vaginal flora and diagnosing bacterial vaginosis (BV). The aim of this study was to show that analysis of Gram-stained vaginal samples under the microscope at a magnification of ×200 (a novel microscopic method, NMM), a fast and simple tool easily applicable in everyday practice, better reflects the complexity of the vaginal microflora than the Nugent methodology (×1000). Gram-stained vaginal smears from 394 asymptomatic pregnant women (24-28 weeks of pregnancy) were classified according to the Nugent microscopic criteria (immersion, magnification ×1000). The smears were then reexamined under immersion at magnification ×200. All samples were classified into 6 groups according to a semiquantitative assessment of numbers (cellularity) and the ratio of rod (length < 1.5 µm) and small bacterial (< 1.5 µm) forms: hypercellular (normal full, NF), moderately cellular (normal mid, NM), hypocellular (normal empty, NE), bacterial vaginosis full (BVF), bacterial vaginosis mid (BVM), and bacterial vaginosis empty (BVE). Yeasts, cocci, bifido and lepto bacterial forms, as well as polymorphonuclear (PMN) leukocytes, were also identified. According to the Nugent scoring, BV was found in 78 patients, intermediate findings in 63, and yeasts in 48. By our criteria, BV was confirmed in 88 patients (37 BVF, 24 BVM, and 27 BVE). Overall, the two tools proved highly concordant for the diagnosis of BV (Lin's concordance correlation coefficient = 0.9852). In 40% of the women, mixed flora was found: yeasts in 126 (32%), cocci in 145 (37%), bifido forms in 32 (8%), and lepto forms in 20 (5%). Almost half of the BV patients also had yeasts (39/88). Elevated PMN numbers were found in 102 (33%) patients with normal flora and in 36 (41%) women with BV. The newly described methodology is simpler to apply and much better reflects the diversity of the vaginal microflora. As such, it may be more valuable to molecular biologists in their attempts, based on quantitative polymerase chain reaction (PCR), to define formulas for the molecular diagnosis of bacterial vaginosis.
NASA Astrophysics Data System (ADS)
Shevchenko, I. I.
2008-05-01
The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.
Diagonalization and Jordan Normal Form--Motivation through "Maple"[R
ERIC Educational Resources Information Center
Glaister, P.
2009-01-01
Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package…
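The note develops these ideas in Maple; the same computation can be sketched in a freely available CAS such as SymPy. The example matrix below is hypothetical, chosen to be defective (a repeated eigenvalue with only one independent eigenvector), so that its Jordan normal form differs from a diagonal matrix:

```python
from sympy import Matrix

# A defective matrix: double eigenvalue 3, but (M - 3I) has rank 1, so there
# is only one eigenvector and M is not diagonalizable. Its Jordan form
# therefore has a single 2x2 block with an off-diagonal 1.
M = Matrix([[5, 4],
            [-1, 1]])

P, J = M.jordan_form()   # returns P, J with M = P * J * P**-1
```

Here J comes out as [[3, 1], [0, 3]], which makes the contrast with diagonalization concrete: the best one can do for this matrix is "diagonal plus a superdiagonal 1".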
Test method for telescopes using a point source at a finite distance
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zissa, D. E.; Korsch, D.
1985-01-01
A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.
Moderate severity heart failure does not involve a downregulation of myocardial fatty acid oxidation
2004-10-01
malonyl-CoA-sensitive form of carnitine palmitoyltransferase is not localized exclusively in the outer membrane of rat liver mitochondria. J Biol...for the isolation of fresh mitochondria, both subsarcolemmal and interfibrillar. Analytic methods. Detailed analytic methods have been previously cited...populations of mitochondria, the subsarcolemmal and interfibrillar, were isolated from hearts of normal and HF dogs using the procedure of Palmer et al
Low melting high lithia glass compositions and methods
Jantzen, Carol M.; Pickett, John B.; Cicero-Herman, Connie A.; Marra, James C.
2004-11-02
The invention relates to methods of vitrifying waste and for lowering the melting point of glass forming systems by including lithia formers in the glass forming composition in significant amounts, typically from about 0.16 wt % to about 11 wt %, based on the total glass forming oxides. The lithia is typically included as a replacement for alkali oxide glass formers that would normally be present in a particular glass forming system. Replacement can occur on a mole percent or weight percent basis, and typically results in a composition wherein lithia forms about 10 wt % to about 100 wt % of the alkali oxide glass formers present in the composition. The present invention also relates to the high lithia glass compositions formed by these methods. The invention is useful for stabilization of numerous types of waste materials, including aqueous waste streams, sludge solids, mixtures of aqueous supernate and sludge solids, combinations of spent filter aids from waste water treatment and waste sludges, supernate alone, incinerator ash, incinerator offgas blowdown, or combinations thereof, geological mine tailings and sludges, asbestos, inorganic filter media, cement waste forms in need of remediation, spent or partially spent ion exchange resins or zeolites, contaminated soils, lead paint, etc. The decrease in melting point achieved by the present invention desirably prevents volatilization of hazardous or radioactive species during vitrification.
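The mole-percent replacement described above amounts to simple molar-mass arithmetic. A sketch, with standard molar masses; the batch weight in the usage note is hypothetical, not a composition from the patent:

```python
# Replacing an alkali oxide glass former (here Na2O) with Li2O mole-for-mole:
# the same number of moles of Li2O weighs less, so the wt% contributed by the
# replacement oxide to the total glass-forming oxides drops.
M_NA2O = 61.98   # g/mol (2 * 22.99 + 16.00)
M_LI2O = 29.88   # g/mol (2 * 6.94 + 16.00)

def mole_basis_replacement(wt_na2o):
    """Grams of Li2O that replace `wt_na2o` grams of Na2O on a mole basis."""
    moles = wt_na2o / M_NA2O
    return moles * M_LI2O
```

For example, replacing a hypothetical 10 g of Na2O in a batch requires only about 4.8 g of Li2O, which is why mole-basis and weight-basis replacement yield different final compositions.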
Low melting high lithia glass compositions and methods
Jantzen, Carol M.; Pickett, John B.; Cicero-Herman, Connie A.; Marra, James C.
2003-10-07
The invention relates to methods of vitrifying waste and for lowering the melting point of glass forming systems by including lithia formers in the glass forming composition in significant amounts, typically from about 0.16 wt % to about 11 wt %, based on the total glass forming oxides. The lithia is typically included as a replacement for alkali oxide glass formers that would normally be present in a particular glass forming system. Replacement can occur on a mole percent or weight percent basis, and typically results in a composition wherein lithia forms about 10 wt % to about 100 wt % of the alkali oxide glass formers present in the composition. The present invention also relates to the high lithia glass compositions formed by these methods. The invention is useful for stabilization of numerous types of waste materials, including aqueous waste streams, sludge solids, mixtures of aqueous supernate and sludge solids, combinations of spent filter aids from waste water treatment and waste sludges, supernate alone, incinerator ash, incinerator offgas blowdown, or combinations thereof, geological mine tailings and sludges, asbestos, inorganic filter media, cement waste forms in need of remediation, spent or partially spent ion exchange resins or zeolites, contaminated soils, lead paint, etc. The decrease in melting point achieved by the present invention desirably prevents volatilization of hazardous or radioactive species during vitrification.
Low melting high lithia glass compositions and methods
Jantzen, Carol M.; Pickett, John B.; Cicero-Herman, Connie A.; Marra, James C.
2000-01-01
The invention relates to methods of vitrifying waste and for lowering the melting point of glass forming systems by including lithia formers in the glass forming composition in significant amounts, typically from about 0.16 wt % to about 11 wt %, based on the total glass forming oxides. The lithia is typically included as a replacement for alkali oxide glass formers that would normally be present in a particular glass forming system. Replacement can occur on a mole percent or weight percent basis, and typically results in a composition wherein lithia forms about 10 wt % to about 100 wt % of the alkali oxide glass formers present in the composition. The present invention also relates to the high lithia glass compositions formed by these methods. The invention is useful for stabilization of numerous types of waste materials, including aqueous waste streams, sludge solids, mixtures of aqueous supernate and sludge solids, combinations of spent filter aids from waste water treatment and waste sludges, supernate alone, incinerator ash, incinerator offgas blowdown, or combinations thereof, geological mine tailings and sludges, asbestos, inorganic filter media, cement waste forms in need of remediation, spent or partially spent ion exchange resins or zeolites, contaminated soils, lead paint, etc. The decrease in melting point achieved by the present invention desirably prevents volatilization of hazardous or radioactive species during vitrification.
Methods of vitrifying waste with low melting high lithia glass compositions
Jantzen, Carol M.; Pickett, John B.; Cicero-Herman, Connie A.; Marra, James C.
2001-01-01
The invention relates to methods of vitrifying waste and for lowering the melting point of glass forming systems by including lithia formers in the glass forming composition in significant amounts, typically from about 0.16 wt % to about 11 wt %, based on the total glass forming oxides. The lithia is typically included as a replacement for alkali oxide glass formers that would normally be present in a particular glass forming system. Replacement can occur on a mole percent or weight percent basis, and typically results in a composition wherein lithia forms about 10 wt % to about 100 wt % of the alkali oxide glass formers present in the composition. The present invention also relates to the high lithia glass compositions formed by these methods. The invention is useful for stabilization of numerous types of waste materials, including aqueous waste streams, sludge solids, mixtures of aqueous supernate and sludge solids, combinations of spent filter aids from waste water treatment and waste sludges, supernate alone, incinerator ash, incinerator offgas blowdown, or combinations thereof, geological mine tailings and sludges, asbestos, inorganic filter media, cement waste forms in need of remediation, spent or partially spent ion exchange resins or zeolites, contaminated soils, lead paint, etc. The decrease in melting point achieved by the present invention desirably prevents volatilization of hazardous or radioactive species during vitrification.
How Social Network Position Relates to Knowledge Building in Online Learning Communities
ERIC Educational Resources Information Center
Wang, Lu
2010-01-01
Social Network Analysis, Statistical Analysis, Content Analysis and other research methods were used to research online learning communities at Capital Normal University, Beijing. Analysis of the two online courses resulted in the following conclusions: (1) Social networks of the two online courses form typical core-periphery structures; (2)…
Volz-Köster, S; Volz, J; Kiefer, A; Biesalski, H K
2000-01-01
The appearance of the cervical mucosa is regulated by different factors including retinoic acid. Hormone-dependent alteration of the cervix uteri mucosa is accompanied by a decrease or increase of cytoplasmatic retinoic-acid-binding protein (CRABP). To elucidate whether this hormone-dependent alteration of CRABP is preserved in the case of neoplasms of the cervix uteri, we measured the level of total and apo-CRABP in normal and neoplastically transformed cervical cells. In a prospective pilot study, standardised biopsies of normal epithelium and cervical intra-epithelial neoplasm grade 3 (CIN III) were taken from 24 patients. A newly developed method was used to determine the intra-epithelial level of apo- and total CRABP. The concentration of total CRABP in normal squamous epithelium compared with that in intra-epithelial neoplasm grade 3 is very significantly lower in the CIN III areas (normal: 3.66 +/- 1.46 pmol/mg wet weight +/- SD; CIN III: 1.43 +/- 0.59 pmol/mg; P < 0.01). In addition CRABP in the apo form is lower in normal than in neoplastic epithelium (Wilcoxon test for paired non-parametric values: P < 0.05; mean for all patients: normal: 1.65 +/- 0.82 pmol/mg; CIN III: 1.14 +/- 0.23 pmol/mg). From our results we conclude that, in neoplastically transformed cells, the hormone-dependent CRABP cycle is interrupted. Whether this has consequences for the further development of the neoplastic cells has to be elucidated.
Huang, Juan; Hung, Li-Fang
2011-01-01
Purpose. The purpose of this study was to determine whether visual signals from the fovea contribute to the changes in the pattern of peripheral refractions associated with form deprivation myopia in monkeys. Methods. Monocular form-deprivation was produced in 18 rhesus monkeys by securing diffusers in front of their treated eyes between 22 ± 2 and 155 ± 17 days of age. In eight of these form-deprived monkeys, the fovea and most of the perifovea of the treated eye were ablated by laser photocoagulation at the start of the diffuser-rearing period. Each eye's refractive status was measured by retinoscopy along the pupillary axis and at 15° intervals along the horizontal meridian to eccentricities of 45°. Control data were obtained from 12 normal monkeys and five monkeys that had monocular foveal ablations and were subsequently reared with unrestricted vision. Results. Foveal ablation, by itself, did not produce systematic alterations in either the central or peripheral refractive errors of the treated eyes. In addition, foveal ablation did not alter the patterns of peripheral refractions in monkeys with form-deprivation myopia. The patterns of peripheral refractive errors in the two groups of form-deprived monkeys, either with or without foveal ablation, were qualitatively similar (treated eyes: F = 0.31, P = 0.74; anisometropia: F = 0.61, P = 0.59), but significantly different from those found in the normal monkeys (F = 8.46 and 9.38 respectively, P < 0.05). Conclusions. Central retinal signals do not contribute in an essential way to the alterations in eye shape that occur during the development of vision-induced axial myopia. PMID:21693598
NASA Astrophysics Data System (ADS)
George, Koshy
2017-02-01
Context. Star-forming blue early-type galaxies at low redshift can give insight into the stellar mass growth of L⋆ elliptical galaxies in the local Universe. Aims: We wish to understand the reason for star formation in these otherwise passively evolving red and dead stellar systems. The fuel for star formation can be acquired through recent accretion events such as mergers or flybys. The signatures of such events should be evident from a structural analysis of the galaxy image. Methods: We carried out structural analysis on SDSS r-band imaging data of 55 star-forming blue elliptical galaxies, derived the structural parameters, analysed the residuals from the best fit to the surface brightness distribution, and constructed the galaxy scaling relations. Results: We found that star-forming blue early-type galaxies are bulge-dominated systems with axial ratio > 0.5 and surface brightness profiles fitted by Sérsic profiles with index (n) mostly > 2. Twenty-three galaxies are found to have n < 2; these could be hosting a disc component. The residual images of the 32 galaxy surface brightness profile fits show structural features indicative of recent interactions. The star-forming blue elliptical galaxies follow the Kormendy relation and show the characteristics of normal elliptical galaxies as far as structural analysis is concerned. There is a general trend for high-luminosity galaxies to display interaction signatures and high star formation rates. Conclusions: The star-forming population of blue early-type galaxies at low redshifts could be normal ellipticals that might have undergone a recent gas-rich minor merger event. The star formation in these galaxies will shut down once the recently acquired fuel is consumed, following which the galaxy will evolve to a normal early-type galaxy.
Effects of Foveal Ablation on Emmetropization and Form-Deprivation Myopia
Smith, Earl L.; Ramamirtham, Ramkumar; Qiao-Grider, Ying; Hung, Li-Fang; Huang, Juan; Kee, Chea-su; Coats, David; Paysse, Evelyn
2009-01-01
Purpose Because of the prominence of central vision in primates, it has generally been assumed that signals from the fovea dominate refractive development. To test this assumption, the authors determined whether an intact fovea was essential for either normal emmetropization or the vision-induced myopic errors produced by form deprivation. Methods In 13 rhesus monkeys at 3 weeks of age, the fovea and most of the perifovea in one eye were ablated by laser photocoagulation. Five of these animals were subsequently allowed unrestricted vision. For the other eight monkeys with foveal ablations, a diffuser lens was secured in front of the treated eyes to produce form deprivation. Refractive development was assessed along the pupillary axis by retinoscopy, keratometry, and A-scan ultrasonography. Control data were obtained from 21 normal monkeys and three infants reared with plano lenses in front of both eyes. Results Foveal ablations had no apparent effect on emmetropization. Refractive errors for both eyes of the treated infants that were allowed unrestricted vision were within the control range throughout the observation period, and there were no systematic interocular differences in refractive error or axial length. In addition, foveal ablation did not prevent form deprivation myopia; six of the eight infants that experienced monocular form deprivation developed myopic axial anisometropias outside the control range. Conclusions Visual signals from the fovea are not essential for normal refractive development or the vision-induced alterations in ocular growth produced by form deprivation. Conversely, the peripheral retina, in isolation, can regulate emmetropizing responses and produce anomalous refractive errors in response to abnormal visual experience. These results indicate that peripheral vision should be considered when assessing the effects of visual experience on refractive development. PMID:17724167
Loudness growth in 1/2-octave bands (LGOB)--a procedure for the assessment of loudness.
Allen, J B; Hall, J L; Jeng, P S
1990-08-01
In this paper, a method that has been developed for the assessment and quantification of loudness perception in normal-hearing and hearing-impaired persons is described. The method has been named LGOB, which stands for loudness growth in 1/2-octave bands. The method uses 1/2-octave bands of noise, centered at 0.25, 0.5, 1.0, 2.0, and 4.0 kHz, with subjective levels between a subject's threshold of hearing and the "too loud" level. The noise bands are presented to the subject, randomized over frequency and level, and the subject is asked to respond with a loudness rating (one of: VERY SOFT, SOFT, OK, LOUD, VERY LOUD, TOO LOUD). Subject responses (normal and hearing-impaired) are then compared to the average responses of a group of normal-hearing subjects. This procedure allows one to estimate the subject's loudness growth relative to normals, as a function of frequency and level. The results may be displayed either as isoloudness contours or as recruitment curves. In its present form, the measurements take less than 30 min. The signal presentation and analysis are done using a PC and a PC plug-in board having a digital-to-analog converter.
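The presentation scheme described above, randomized over frequency and level with a six-category rating scale, can be sketched as follows. This is a minimal sketch: the level grid and the per-(band, rating) comparison are illustrative assumptions, not the paper's exact protocol.

```python
import random

RATINGS = ("VERY SOFT", "SOFT", "OK", "LOUD", "VERY LOUD", "TOO LOUD")

def make_trial_list(bands=(250, 500, 1000, 2000, 4000),
                    levels=tuple(range(20, 101, 10)), seed=0):
    """Build one (band_Hz, level_dB) trial per combination and shuffle,
    randomizing presentation over frequency and level as in LGOB."""
    trials = [(f, l) for f in bands for l in levels]
    random.Random(seed).shuffle(trials)
    return trials

def loudness_shift(subject_levels, normal_levels):
    """For each (band, rating) key, the level at which the subject gave that
    rating minus the normal-group mean level: a crude stand-in for estimating
    loudness growth relative to normals per frequency and rating category."""
    return {k: subject_levels[k] - normal_levels[k]
            for k in subject_levels if k in normal_levels}
```

A positive shift at a given band and rating would indicate that the subject needs a higher level than normals to reach the same loudness category.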
Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...
2013-01-01
A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of nonideal flow systems.
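A minimal numerical sketch of the comparison described above: build a two-parameter gamma PDF, normalize sampled curves, and score them with a mean absolute deviation. Peak normalization and the parameter names here are illustrative assumptions; the paper's RTD is a four-parameter joint form, not reproduced here.

```python
import math

def gamma_pdf(t, shape, scale):
    """Two-parameter gamma PDF, the building block of the RTD above."""
    if t <= 0.0:
        return 0.0
    return (t ** (shape - 1.0) * math.exp(-t / scale)
            / (math.gamma(shape) * scale ** shape))

def normalize(curve):
    """Scale a sampled RTD so its peak is 1, for curve-to-curve comparison
    (one possible normalization; the paper does not specify this choice here)."""
    peak = max(curve)
    return [v / peak for v in curve]

def mean_absolute_deviation(a, b):
    """MAD between two equally sampled, normalized RTD curves."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

With a measured tracer curve in place of one of the gamma curves, the same MAD score reproduces the style of comparison quoted in the abstract (0.16 vs 0.26).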
Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20.
Michael, J Robert; Volkov, Anatoliy
2015-03-01
The widely used pseudoatom formalism [Stewart (1976). Acta Cryst. A32, 565-574; Hansen & Coppens (1978). Acta Cryst. A34, 909-921] in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens [Acta Cryst. (1988), A44, 6-7]. It was shown that the analytical form for normalization coefficients is available primarily for l ≤ 4 [Hansen & Coppens, 1978; Paturle & Coppens, 1988; Coppens (1992). International Tables for Crystallography, Vol. B, Reciprocal space, 1st ed., edited by U. Shmueli, ch. 1.2. Dordrecht: Kluwer Academic Publishers; Coppens (1997). X-ray Charge Densities and Chemical Bonding. New York: Oxford University Press]. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7 (Paturle & Coppens, 1988). In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens (Paturle & Coppens, 1988) method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.
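The two normalization conventions contrasted in the title can be made concrete by brute-force quadrature over the unit sphere: density normalization requires ∫|N f| dΩ = 2 for l > 0, wavefunction normalization requires ∫(N f)² dΩ = 1. For the l = 1 harmonic z/r the analytic answers are 1/π and √(3/4π). This is a numerical sketch, not the Paturle-Coppens derivation.

```python
import math

def sphere_integral(g, n=200):
    """Midpoint-rule quadrature of g(x, y, z) over the unit sphere."""
    total = 0.0
    h = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * h
        st, ct = math.sin(theta), math.cos(theta)
        for j in range(2 * n):
            phi = (j + 0.5) * h
            total += g(st * math.cos(phi), st * math.sin(phi), ct) * st
    return total * h * h  # dOmega = sin(theta) dtheta dphi

def density_norm_coeff(f, n=200):
    """N such that the density normalization  integral |N f| dOmega = 2  holds (l > 0)."""
    return 2.0 / sphere_integral(lambda x, y, z: abs(f(x, y, z)), n)

def wavefunction_norm_coeff(f, n=200):
    """N such that the wavefunction normalization  integral (N f)^2 dOmega = 1  holds."""
    return 1.0 / math.sqrt(sphere_integral(lambda x, y, z: f(x, y, z) ** 2, n))
```

The absolute value in the density convention is exactly what destroys closed-form answers for most higher-l harmonics, which is why the paper resorts to high-precision numerics.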
A Novel Method of Preparation of Inorganic Glasses by Microwave Irradiation
NASA Astrophysics Data System (ADS)
Vaidhyanathan, B.; Ganguli, Munia; Rao, K. J.
1994-12-01
Microwave heating is shown to provide an extremely facile and automatically temperature-controlled route to the synthesis of glasses. Glass-forming compositions of several traditional and novel glasses were melted in a kitchen microwave oven, typically within 5 min and quenched into glasses. This is only a fraction of the time required in normal glass preparation methods. The rapidity of melting minimizes undesirable features such as loss of components of the glass, variation of oxidation states of metal ions, and oxygen loss leading to reduced products in the glass such as metal particles. This novel procedure of preparation is applicable when at least one of the components of the glass-forming mixture absorbs microwaves.
Method for making a monolithic integrated high-T.sub.c superconductor-semiconductor structure
NASA Technical Reports Server (NTRS)
Burns, Michael J. (Inventor); de la Houssaye, Paul R. (Inventor); Russell, Stephen D. (Inventor); Garcia, Graham A. (Inventor); Barfknecht, Andrew T. (Inventor); Clayton, Stanley R. (Inventor)
2000-01-01
A method for the fabrication of active semiconductor and high-temperature superconducting devices on the same substrate to form a monolithically integrated semiconductor-superconductor (MISS) structure is disclosed. A common insulating substrate, preferably sapphire or yttria-stabilized zirconia, is used for deposition of semiconductor and high-temperature superconductor substructures. Both substructures are capable of operation at a common temperature of at least 77 K. The separate semiconductor and superconductive regions may be electrically interconnected by normal metals, refractory metal silicides, or superconductors. Circuits and devices formed in the resulting MISS structures display operating characteristics which are equivalent to those of circuits and devices prepared on separate substrates.
Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang
2017-01-01
Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for the analysis of complex samples and thus is increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide application of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized using the gamma distribution, and a new peak detection algorithm using the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches, namely the fast Fourier transform (FFT) and the first-order and second-order delta methods (D1 and D2), are introduced. The applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs best in terms of both computational expense and peak detection performance.
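The core difficulty noted above — the sum of a normal and a gamma random variable has no closed-form density — can be made concrete by computing that density with a direct numerical convolution. The paper instead uses FFT and delta-method approximations; the parameter values below are arbitrary, chosen only to exercise the sketch.

```python
import math

def norm_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_pdf(x, shape, scale):
    if x <= 0.0:
        return 0.0
    return x ** (shape - 1.0) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

def normal_gamma_density(ts, mu, sigma, shape, scale, du=0.05, umax=40.0):
    """Density of N(mu, sigma^2) + Gamma(shape, scale) by discrete convolution:
    f(t) = integral of f_Gamma(u) * f_Normal(t - u) du over the gamma's support."""
    us = [du * (k + 0.5) for k in range(int(umax / du))]
    gam = [gamma_pdf(u, shape, scale) for u in us]
    return [sum(g * norm_pdf(t - u, mu, sigma) for u, g in zip(us, gam)) * du
            for t in ts]
```

The resulting curve integrates to one and has mean mu + shape·scale, which provides a simple sanity check on any faster FFT- or delta-method-based approximation.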
Beattie, Louise; Espie, Colin A; Kyle, Simon D; Biello, Stephany M
2015-06-01
There appears to be some inconsistency in how normal sleepers (controls) are selected and screened for participation in research studies for comparison with insomnia patients. The purpose of the current study is to assess and compare methods of identifying normal sleepers in insomnia studies, with reference to published standards. We systematically reviewed the literature on insomnia patients, which included control subjects. The resulting 37 articles were systematically reviewed with reference to the five criteria for normal sleep specified by Edinger et al. In summary, these criteria are as follows: evidence of sleep disruption, sleep scheduling, general health, substance/medication use, and other sleep disorders. We found sleep diaries, polysomnography (PSG), and clinical screening examinations to be widely used with both control subjects and insomnia participants. However, there are differences between research groups in the precise definitions applied to the components of normal sleep. We found that none of the reviewed studies applied all of the Edinger et al. criteria, and 16% met four criteria. In general, screening is applied most rigorously at the level of a clinical disorder, whether physical, psychiatric, or sleep. While the Edinger et al. criteria seem to be applied in some form by most researchers, there is scope to improve standards and definitions in this area. Ideally, different methods such as sleep diaries and questionnaires would be used concurrently with objective measures to ensure normal sleepers are identified, and descriptive information for control subjects would be reported. Here, we have devised working criteria and methods to be used for the assessment of normal sleepers. This would help clarify the nature of the control group, in contrast to insomnia subjects and other patient groups. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.
2018-05-01
Indentation of an elastic half-space with functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses. Tangential stresses are represented in a form of Fourier series. The problem is reduced to the solution of two dual integral equations over even and odd functions describing distribution of unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximated analytical expressions for contact normal stresses are provided.
Methods for Scaling to Doubly Stochastic Form,
1981-06-26
Frobenius-König Theorem (Marcus and Minc [1964], p. 97): A nonnegative n x n matrix without support contains an s x t zero submatrix where s + t = n + 1. … that YA(k) has row sums 1. Then normalize the columns by a diagonal similarity transform defined as follows: Let x = (x_1, ..., x_n) be a left Perron vector
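The report's subject — scaling a nonnegative matrix to doubly stochastic form — is most commonly realized by alternating row and column normalization (the Sinkhorn-Knopp iteration). A minimal sketch, assuming a strictly positive matrix so that convergence is guaranteed:

```python
def sinkhorn(a, iters=200):
    """Alternately normalize rows and columns of a positive square matrix;
    for matrices with total support this converges to doubly stochastic form."""
    m = [row[:] for row in a]
    n = len(m)
    for _ in range(iters):
        for row in m:                       # scale each row to sum 1
            s = sum(row)
            for j in range(n):
                row[j] /= s
        for j in range(n):                  # scale each column to sum 1
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m
```

Equivalently, the limit is D1·A·D2 for diagonal matrices D1, D2, which is the diagonal-scaling formulation the report's fragment alludes to.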
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
NASA Technical Reports Server (NTRS)
Penskiy, Ivan (Inventor); Charalambides, Alexandros (Inventor); Bergbreiter, Sarah (Inventor)
2018-01-01
At least one tactile sensor includes an insulating layer and a conductive layer formed on the surface of the insulating layer. The conductive layer defines at least one group of flexible projections extending orthogonally from the surface of the insulating layer. The flexible projections include a major projection extending a distance orthogonally from the surface and at least one minor projection that is adjacent to and separate from the major projection wherein the major projection extends a distance orthogonally that is greater than the distance that the minor projection extends orthogonally. Upon a compressive force normal to, or a shear force parallel to, the surface, the major projection and the minor projection flex such that an electrical contact resistance is formed between the major projection and the minor projection. A capacitive tactile sensor is also disclosed that responds to the normal and shear forces.
Perturbative Normal Form Theory for the 2D Random-Field Ising Model
NASA Astrophysics Data System (ADS)
Hayden, Lorien; Raju, Archishman; Sethna, James
Bifurcation theory is important to explain scaling in many systems. For the equilibrium random-field Ising model (RFIM) in 2D, the exponentially diverging correlation length can be derived directly from the RG flows, which form a pitchfork bifurcation: dw/dl = -(ε/2)w + w^3 (Bray and Moore 1985). Our perturbative normal form theory (PNFT) predicts a term w^5 to be critical in describing the behavior - it cannot be removed through an analytic change of coordinates. The new form of the correlation length produced has been observed to occur in leading order without explanation (Meinke and Middleton 2005). Performing simulations of the non-equilibrium RFIM on a Voronoi lattice uncovers a transcritical bifurcation of the form dw/dl = -(ε/2)w + w^2 + Bw^3. The RG flows determined by PNFT in this case lead directly to a form for the appropriate invariant scaling combination: s exp(-1/(σνw)) (1/w + B)^(C + B/(σν)). Using this scaling combination yields a collapse which was not possible to achieve using standard methods such as Widom scaling arguments. Further, the scaling extends over a decade in the magnitude of the disorder and explains behavior down to avalanche sizes of three, the edge of complexity. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153 and a Cornell Fellowship.
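The exponential divergence quoted above can be reproduced numerically: at the critical point (ε = 0) the pitchfork flow reduces to dw/dl = w³, so the RG "time" to reach w of order one is l* ≈ 1/(2w₀²) and the correlation length scales as exp(1/(2w₀²)). A sketch via Euler integration, where the step size and stopping value are arbitrary choices:

```python
def rg_time_to_strong_disorder(w0, w_stop=1.0, dl=1e-4):
    """Euler-integrate dw/dl = w^3 from w0 and return the RG time l* at which
    w first reaches w_stop; exp(l*) then estimates the correlation length.
    Analytically, l* = (1/w0^2 - 1/w_stop^2) / 2."""
    w, l = w0, 0.0
    while w < w_stop:
        w += w ** 3 * dl
        l += dl
    return l
```

Halving the bare disorder w₀ roughly quadruples l*, i.e. the correlation length grows as the exponential of an inverse square, which is the leading-order form the w⁵ normal-form term then corrects.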
NASA Astrophysics Data System (ADS)
Alcolea Palafox, M.; Kattan, D.; Afseth, N. K.
2018-04-01
A theoretical and experimental vibrational study of the anti-HIV d4T (stavudine or Zerit) nucleoside analogue was carried out. The predicted spectra of the three most stable conformers in the biologically active anti-form of the isolated state were compared. A comparison of the conformers with those of the natural nucleoside thymidine was carried out. The calculated spectra were scaled by using different scaling procedures and three DFT methods. The TLSE procedure leads to the lowest error and is thus recommended for scaling. With the population of these conformers, the IR gas-phase spectra were predicted. The crystal unit cells of the different polymorphic forms of d4T were simulated through dimer forms by using DFT methods. The scaled spectra of these dimer forms were compared. The FT-IR spectrum was recorded in the solid state in the 400-4000 cm-1 range. The respective vibrational bands were analyzed and assigned to different normal modes of vibration by comparison with the scaled vibrational values of the different dimer forms. Through this comparison, the polymorphic form of the solid-state sample was identified. The study indicates that d4T exists only in the ketonic form in the solid state. The results obtained are in agreement with those determined in related anti-HIV nucleoside analogues.
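Scaling procedures of the kind compared above typically fit a linear relation between calculated and experimental wavenumbers. A minimal least-squares sketch follows; this generic linear fit merely stands in for the specific TLSE procedure, whose exact form is not given in the abstract.

```python
def fit_linear_scaling(calc, expt):
    """Least-squares fit  nu_expt ~= a + b * nu_calc, a common way of scaling
    DFT-calculated wavenumbers to experiment (a stand-in for TLSE)."""
    n = len(calc)
    mx = sum(calc) / n
    my = sum(expt) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(calc, expt))
         / sum((x - mx) ** 2 for x in calc))
    a = my - b * mx
    return a, b

def rms_error(calc, expt, a, b):
    """RMS residual of the scaled wavenumbers, the error each procedure minimizes."""
    return (sum((a + b * x - y) ** 2 for x, y in zip(calc, expt)) / len(calc)) ** 0.5
```

Comparing the RMS residuals of competing scaling procedures over a band set is exactly the kind of test that would single one procedure out as "lowest error".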
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that found by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
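The bottom-up parsing that Synapse builds on can be illustrated with the classic CYK recognizer for a grammar in plain Chomsky Normal Form; the extended forms A→B used by Synapse are not handled in this sketch, and the aⁿbⁿ grammar below is just a familiar test case.

```python
def cyk(word, terminal_rules, binary_rules, start="S"):
    """CYK bottom-up recognizer for a grammar in Chomsky Normal Form.
    terminal_rules: {terminal: set of A with A -> terminal}
    binary_rules:   {(B, C): set of A with A -> B C}"""
    n = len(word)
    # table[i][span] = nonterminals deriving word[i : i + span]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = set(terminal_rules.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for b in table[i][split]:
                    for c in table[i + split][span - split]:
                        table[i][span] |= binary_rules.get((b, c), set())
    return start in table[0][n]
```

In Synapse's setting, the gaps this table leaves for a positive sample are precisely what the "bridging" rule generation fills in with new candidate productions.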
Method for heat treating and sintering metal oxides with microwave radiation
Holcombe, Cressie E.; Dykes, Norman L.; Meek, Thomas T.
1989-01-01
A method for microwave sintering of materials, primarily metal oxides, is described. Metal oxides that do not normally absorb microwave radiation at temperatures ranging from about room temperature to several hundred degrees centigrade are sintered with microwave radiation without the use of the heretofore required sintering aids. This sintering is achieved by enclosing a compact of the oxide material in a housing or capsule formed of an oxide which has microwave coupling properties from room temperature up to at least the microwave coupling temperature of the oxide material forming the compact. The heating of the housing effects the initial heating of the oxide material forming the compact by heat transference, and the housing then functions as a thermal insulator for the encased oxide material after the oxide material reaches a sufficient temperature to adequately absorb or couple with microwave radiation for heating thereof to sintering temperature.
Method of making a modular off-axis solar concentrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plesniak, Adam P.; Hall, John C.
A method of making a solar concentrator may include forming a receiving wall having an elongated wall, a first side wall and a second side wall; attaching the first side wall and the second side wall to a reflecting wall to form a housing having an internal volume with an opening; forming a lip on the receiving wall and the reflecting wall; attaching a cover to the receiving wall and the reflecting wall at the lip to seal the opening into the internal volume, thereby creating a rigid structure; and mounting at least one receiver having at least one photovoltaic cell on the elongated wall to receive solar radiation entering the housing and reflected by the receiving wall, the receiver having an axis parallel with a surface normal of the photovoltaic cell, such that the axis is disposed at a non-zero angle relative to the vertical axis of the opening.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Abd El-Rahman, Mohamed K.
2015-03-01
Normalized spectra have great power in resolving the spectral overlap of the challenging Orphenadrine (ORP) and Paracetamol (PAR) binary mixture. Four smart techniques utilizing normalized spectra were used in this work, namely, amplitude modulation (AM), simultaneous area ratio subtraction (SARS), simultaneous derivative spectrophotometry (S1DD) and the ratio H-point standard addition method (RHPSAM). In AM, the peak amplitude at 221.6 nm of the division spectra was measured for both ORP and PAR determination, while in SARS, the concentration of ORP was determined using the area under the curve from 215 nm to 222 nm of the regenerated ORP zero-order absorption spectra; in S1DD, the concentration of ORP was determined using the peak amplitude at 224 nm of the first-derivative ratio spectra. The PAR concentration was determined directly at 288 nm in the division spectra obtained during the manipulation steps of the previous three methods. The last technique, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 216 nm and 226 nm. The RH point is the intersection of the two calibration lines, where ORP and PAR concentrations were directly determined from the coordinates of the RH point. The proposed methods were applied successfully for the determination of ORP and PAR in their dosage form.
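The manipulation common to these techniques — dividing the mixture spectrum by a normalized spectrum of one component and reading a constant amplitude where the other component does not absorb — can be sketched with synthetic Gaussian bands. The band positions, widths, and the plateau wavelength below are invented for illustration and do not correspond to the real ORP/PAR spectra.

```python
import math

def gauss_band(wl, center, width, height):
    """Synthetic Gaussian absorption band."""
    return height * math.exp(-((wl - center) ** 2) / (2.0 * width ** 2))

# Hypothetical single-component spectra on a 200-319.5 nm grid:
wavelengths = [200.0 + 0.5 * i for i in range(240)]
par = [gauss_band(w, 245.0, 18.0, 1.0) for w in wavelengths]  # "PAR"-like band
orp = [gauss_band(w, 220.0, 10.0, 0.8) for w in wavelengths]  # "ORP"-like band

def ratio_spectrum(mixture, divisor):
    """Divide a mixture spectrum by the peak-normalized divisor spectrum;
    where the divisor's component is the only absorber, the ratio plateaus at
    a value proportional to that component's concentration."""
    dmax = max(divisor)
    return [m / (d / dmax) if d / dmax > 1e-6 else 0.0
            for m, d in zip(mixture, divisor)]
```

For a mixture 2·PAR + 3·ORP, the ratio against the PAR spectrum plateaus near 2 at wavelengths where the ORP band has decayed, which is the amplitude one would read off in the AM-style step.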
Normalization in Lie algebras via mould calculus and applications
NASA Astrophysics Data System (ADS)
Paul, Thierry; Sauzin, David
2017-11-01
We establish Écalle's mould calculus in an abstract Lie-theoretic setting and use it to solve a normalization problem, which covers several formal normal form problems in the theory of dynamical systems. The mould formalism allows us to reduce the Lie-theoretic problem to a mould equation, the solutions of which are remarkably explicit and can be fully described by means of a gauge transformation group. The dynamical applications include the construction of Poincaré-Dulac formal normal forms for a vector field around an equilibrium point, a formal infinite-order multiphase averaging procedure for vector fields with fast angular variables (Hamiltonian or not), or the construction of Birkhoff normal forms both in classical and quantum situations. As a by-product we obtain, in the case of harmonic oscillators, the convergence of the quantum Birkhoff form to the classical one, without any Diophantine hypothesis on the frequencies of the unperturbed Hamiltonians.
Nakkeeran, K
2001-10-01
We consider a family of N coupled nonlinear Schrödinger equations which govern the simultaneous propagation of N fields in the normal dispersion regime of an optical fiber with various important physical effects. The linear eigenvalue problem associated with the integrable form of all the equations is constructed with the help of the Ablowitz-Kaup-Newell-Segur method. Using the Hirota bilinear method, exact dark soliton solutions are explicitly derived.
Cosmological perturbations in the DGP braneworld: Numeric solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoso, Antonio; Koyama, Kazuya; Silva, Fabio P.
2008-04-15
We solve for the behavior of cosmological perturbations in the Dvali-Gabadadze-Porrati (DGP) braneworld model using a new numerical method. Unlike some other approaches in the literature, our method uses no approximations other than linear theory and is valid on large scales. We examine the behavior of late-universe density perturbations for both the self-accelerating and normal branches of DGP cosmology. Our numerical results can form the basis of a detailed comparison between the DGP model and cosmological observations.
NASA Astrophysics Data System (ADS)
Eichner, J. F.; Steuer, M.; Loew, P.
2016-12-01
Past natural catastrophes offer valuable information for present-day risk assessment. To make use of historic loss data, one has to find a setting that enables comparison (over place and time) of historic events as if they occurred under today's conditions. By means of loss data normalization, the influence of socio-economic development, the fundamental driver in this context, can be eliminated; the data then permits the deduction of risk-relevant information and allows the study of other driving factors such as influences from climate variability and climate change or changes of vulnerability. Munich Re's NatCatSERVICE database includes, for each historic loss event, the geographic coordinates of all locations and regions that were affected in a relevant way. These locations form the basis for what is known as the loss footprint of an event. Here we introduce a state-of-the-art, robust method for global loss data normalization. The presented peril-specific loss footprint normalization method adjusts direct economic loss data for the influence of economic growth within each loss footprint (using gross cell product data as a proxy for local economic growth) and makes loss data comparable over time. To achieve a comparative setting for supra-regional economic differences, we categorize the normalized loss values (together with information on fatalities), based on the World Bank income groups, into five catastrophe classes, from minor to catastrophic. The data treated in such a way allows (a) the study of the influence of improved reporting of small-scale loss events over time and (b) the application of standard (stationary) extreme value statistics (here: the peaks-over-threshold method) to compile estimates of extreme and extrapolated loss magnitudes, such as a "100-year event", on a global scale. Examples of such results will be shown.
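The normalization step described above amounts to rescaling each historic loss by the growth of the local economic proxy between event year and reference year, then binning into catastrophe classes. In this sketch the growth index and the class thresholds are placeholders; the real method uses gross cell product per footprint and combines loss with fatalities and World Bank income groups.

```python
def normalize_loss(loss, event_year, ref_year, growth):
    """Scale a historic direct loss to reference-year conditions using an
    economic-growth index (growth: {year: index value}, a proxy such as
    gross cell product aggregated over the event's loss footprint)."""
    return loss * growth[ref_year] / growth[event_year]

def catastrophe_class(normalized_loss, thresholds=(1e6, 1e7, 1e8, 1e9)):
    """Map a normalized loss to one of five classes (1 = minor ... 5 =
    catastrophic). The threshold values here are invented placeholders."""
    for k, t in enumerate(thresholds):
        if normalized_loss < t:
            return k + 1
    return 5
```

Once all events are expressed in reference-year terms and classed, standard peaks-over-threshold statistics can be applied to the normalized series.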
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines methods for developing and making a multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, when knowledge of the parameters and variables needed to optimize this structure is insufficient. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters specified by their variation ranges. The approach is based on the combined use of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form from the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values; the membership functions express the degree of proximity of the designed system's realization to the efficient, or Pareto-optimal, variants. The study analyzes an example of choosing the optimal variant of a complex system using heterogeneous quality criteria.
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
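The treatment of the transformation factor as a parameter estimated by maximum likelihood can be sketched as follows. This is a generic illustration of Box-Cox profile-likelihood estimation, not the authors' QTL-mapping code; the grid-search range is an assumption.

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox transform: (x**lam - 1)/lam for lam != 0, log(x) for lam == 0
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(x)
    return (x**lam - 1.0) / lam

def boxcox_profile_loglik(x, lam):
    # Profile log-likelihood of lam under a normal model for the transformed
    # data (up to a constant), including the Jacobian term of the transform
    x = np.asarray(x, dtype=float)
    n = len(x)
    y = boxcox(x, lam)
    sigma2 = y.var()  # MLE of the residual variance
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.log(x).sum()

def boxcox_mle(x, lams=np.linspace(-2.0, 2.0, 401)):
    # Grid search for the transformation factor maximizing the likelihood;
    # in the paper this is estimated jointly with the QTL parameters
    ll = [boxcox_profile_loglik(x, lam) for lam in lams]
    return lams[int(np.argmax(ll))]
```

For log-normally distributed trait values, the estimated factor should land near 0 (the log transform), illustrating the "objective choice of data transformation" the abstract describes.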
Some new exact solitary wave solutions of the van der Waals model arising in nature
NASA Astrophysics Data System (ADS)
Bibi, Sadaf; Ahmed, Naveed; Khan, Umar; Mohyud-Din, Syed Tauseef
2018-06-01
This work applies two well-known methods, the exponential rational function method (ERFM) and the generalized Kudryashov method (GKM), to seek new exact solutions of the van der Waals normal form for fluidized granular matter, which is linked with natural phenomena and industrial applications. New soliton solutions, such as kink, periodic, and solitary wave solutions, are established, together with 2D and 3D graphical patterns that clarify their physical features. Our comparison reveals that these methods outperform several existing ones. The worked-out solutions show that the suggested methods are simple and reliable compared with many other approaches to nonlinear equations stemming from the applied sciences.
An energy-saving nonlinear position control strategy for electro-hydraulic servo systems.
Baghestan, Keivan; Rezaei, Seyed Mehdi; Talebi, Heidar Ali; Zareinejad, Mohammad
2015-11-01
The electro-hydraulic servo system (EHSS) demonstrates numerous advantages in size and performance compared to other actuation methods. Oftentimes, its utilization in industrial and machinery settings is limited by its inferior efficiency. In this paper, a nonlinear backstepping control algorithm with an energy-saving approach is proposed for position control in the EHSS. To achieve improved efficiency, two control valves including a proportional directional valve (PDV) and a proportional relief valve (PRV) are used to achieve the control objectives. To design the control algorithm, the state space model equations of the system are transformed to their normal form and the control law through the PDV is designed using a backstepping approach for position tracking. Then, another nonlinear set of laws is derived to achieve energy-saving through the PRV input. This control design method, based on the normal form representation, imposes internal dynamics on the closed-loop system. The stability of the internal dynamics is analyzed in special cases of operation. Experimental results verify that both tracking and energy-saving objectives are satisfied for the closed-loop system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Antioxidant and hypolipidemic activity of Kumbhajatu in hypercholesterolemic rats
Ghosh, Rumi; Kadam, Parag P.; Kadam, Vilasrao J.
2010-01-01
Objective: To study the efficacy of Kumbhajatu in reducing the cholesterol levels and as an antioxidant in hypercholesterolemic rats. Materials and Methods: Hypercholesterolemia was induced in normal rats by including 2% w/w cholesterol, 1% w/w sodium cholate and 2.5% w/w coconut oil in the normal diet. Powdered form of Kumbhajatu was administered as feed supplement at 250 and 500 mg/kg dose levels to the hypercholesterolemic rats. Plasma lipid profile, hepatic superoxide dismutase (SOD) activity, catalase activity, reduced glutathione and extent of lipid peroxidation in the form of malondialdehyde were estimated using standard methods. Results: Feed supplementation with 250 and 500 mg/kg of Kumbhajatu resulted in a significant decline in plasma lipid profiles. The feed supplementation increased the concentration of catalase, SOD, glutathione and HDL-c significantly in both the experimental groups (250 and 500 mg/kg). On the other hand, the concentration of malondialdehyde, cholesterol, triglycerides, LDL-c and VLDL in these groups (250 and 500 mg/kg) were decreased significantly. Conclusion: The present study demonstrates that addition of Kumbhajatu powder at 250 and 500 mg/kg level as a feed supplement reduces the plasma lipid levels and also decreases lipid peroxidation. PMID:21170207
The measurement of an aspherical mirror by three-dimensional nanoprofiler
NASA Astrophysics Data System (ADS)
Tokuta, Yusuke; Okita, Kenya; Okuda, Kohei; Kitayama, Takao; Nakano, Motohiro; Nakatani, Shun; Kudo, Ryota; Yamamura, Kazuya; Endo, Katsuyoshi
2015-09-01
Aspherical optical elements with high accuracy are important in several fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography. Consequently, demand is rising for methods of measuring aspherical or free-form surfaces with nanometer resolution. Our purpose is to develop a non-contact profiler that measures free-form surfaces directly with a figure-error repeatability of less than 1 nm PV. To achieve this, we have developed a three-dimensional nanoprofiler that traces the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accuracy of a rotational goniometer. The machine consists of four rotational stages, one translational stage, and an optical head carrying a quadrant photodiode (QPD) and a laser head at optically equivalent positions. In this measurement method, we align the incident beam with the reflected beam by controlling the five stages, and determine the normal vectors and the coordinates of the surface from the signals of the goniometers, the translational stage, and the QPD. A reconstruction algorithm then yields the three-dimensional figure from the normal vectors and the coordinates. To evaluate the performance of this machine, we measured a concave aspherical mirror ten times, calculated the measurement repeatability from the ten results, and evaluated the measurement uncertainty by comparing the result with that measured by an interferometer. The repeatability of the measurement was 2.90 nm (σ) and the difference between the two profiles was +/-20 nm. We conclude that the two profiles correspond, considering the systematic errors of each machine.
Parse Completion: A Study of an Inductive Domain
1987-07-01
for Right Linear and Chomsky Normal Form grammars in detail. These two grammar classes were chosen as they can capture the classes of Regular and...Linear and Chomsky Normal Form grammars the allowed RHS formats could be divided into those which introduced new non-terminals and those which reused... Chomsky Normal Form grammars can both be shown to define a partial order over the set of grammars consistent with the examples. (Note that this is a
2006-09-01
is that it is universally applicable. That is, it can be used to parse an instance of any Chomsky Normal Form context-free grammar . This relative... Chomsky -Normal-Form grammar corresponding to the vehicle-specific data format, use of the Cocke-Younger- Kasami algorithm to generate a parse tree...05). The productions of a Chomsky Normal Form context-free grammar have three significant characteristics: • There are no useless symbols (i.e
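The Cocke-Younger-Kasami procedure mentioned in the snippet above relies on the Chomsky-Normal-Form restriction that every production is either A → BC or A → a. A minimal recognizer sketch (using a toy aⁿbⁿ grammar; the vehicle-specific grammar from the report is not reproduced here, and all names are assumptions) might look like:

```python
def cyk_recognize(tokens, unary, binary, start="S"):
    # chart[i][j] holds the set of nonterminals deriving tokens[i:j]
    n = len(tokens)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):            # A -> a productions fill width-1 cells
        for lhs, rhs in unary:
            if rhs == tok:
                chart[i][i + 1].add(lhs)
    for span in range(2, n + 1):                # A -> B C productions combine cells
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):           # every possible split point
                for lhs, (b, c) in binary:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(lhs)
    return start in chart[0][n]

# Toy CNF grammar for the language a^n b^n (n >= 1):
# S -> A T | A B, T -> S B, A -> 'a', B -> 'b'
UNARY = [("A", "a"), ("B", "b")]
BINARY = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B"))]
```

The full CYK algorithm additionally records back-pointers in each cell so that a parse tree, not just a yes/no answer, can be recovered.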
Retinal degeneration increases susceptibility to myopia in mice
Park, Hanna; Tan, Christopher C.; Faulkner, Amanda; Jabbar, Seema B.; Schmid, Gregor; Abey, Jane; Iuvone, P. Michael
2013-01-01
Purpose Retinal diseases are often associated with refractive errors, suggesting the importance of normal retinal signaling during emmetropization. For instance, retinitis pigmentosa, a disease characterized by severe photoreceptor degeneration, is associated with myopia; however, the underlying link between these conditions is not known. This study examines the influence of photoreceptor degeneration on refractive development by testing two mouse models of retinitis pigmentosa under normal and form deprivation visual conditions. Dopamine, a potential stop signal for refractive eye growth, was assessed as a potential underlying mechanism. Methods Refractive eye growth in mice that were homozygous for a mutation in Pde6b, Pde6brd1/rd1 (rd1), or Pde6brd10/rd10 (rd10) was measured weekly from 4 to 12 weeks of age and compared to age-matched wild-type (WT) mice. Refractive error was measured using an eccentric infrared photorefractor, and axial length was measured with partial coherence interferometry or spectral domain ocular coherence tomography. A cohort of mice received head-mounted diffuser goggles to induce form deprivation from 4 to 6 weeks of age. Dopamine and 3,4-dihydroxyphenylacetic acid (DOPAC) levels were measured with high-performance liquid chromatography in each strain after exposure to normal or form deprivation conditions. Results The rd1 and rd10 mice had significantly greater hyperopia relative to the WT controls throughout normal development; however, axial length became significantly longer only in WT mice starting at 7 weeks of age. After 2 weeks of form deprivation, the rd1 and rd10 mice demonstrated a faster and larger myopic shift (−6.14±0.62 and −7.38±1.46 diopter, respectively) compared to the WT mice (−2.41±0.47 diopter). 
Under normal visual conditions, the DOPAC levels and DOPAC/dopamine ratios, a measure of dopamine turnover, were significantly lower in the rd1 and rd10 mice compared to the WT mice, while the dopamine levels were similar or higher than WT in the rd10 mice. Lower basal levels of DOPAC were highly correlated with increasing myopic shifts. Conclusions Refractive development under normal visual conditions was disrupted toward greater hyperopia from 4 to 12 weeks of age in these photoreceptor degeneration models, despite significantly lower DOPAC levels. However, the retinal degeneration models with low basal levels of DOPAC had increased susceptibility to form deprivation myopia. These results indicate that photoreceptor degeneration may alter dopamine metabolism, leading to increased susceptibility to myopia with an environmental visual challenge. PMID:24146540
... people experience while drifting off to sleep. These simple forms of myoclonus occur in normal, healthy persons ...
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Because local polynomial estimation is a non-parametric technique, the form of the heteroscedastic function need not be known, so the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal, based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
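The improved two-stage scheme described above can be sketched with a local-constant (Nadaraya-Watson) smoother standing in for the paper's multivariate local polynomial fit; the bandwidth and all names are assumptions.

```python
import numpy as np

def nw_smooth(x, y, x0, h):
    # Nadaraya-Watson (local-constant) smoother with a Gaussian kernel;
    # a simplified stand-in for local polynomial estimation
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def two_stage_gls(X, y, x_smooth, h=0.1):
    # Stage 1: OLS fit, then smooth the squared residuals to estimate the
    # heteroscedastic variance function sigma^2(x), with no parametric form assumed
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid2 = (y - X @ beta_ols) ** 2
    sigma2 = np.maximum(nw_smooth(x_smooth, resid2, x_smooth, h), 1e-8)
    # Stage 2: generalized (weighted) least squares with weights 1/sigma^2(x)
    w = 1.0 / np.sqrt(sigma2)
    beta_gls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta_gls
```

No test for heteroscedasticity is needed: if the variance is in fact constant, the estimated weights are roughly equal and the second stage reduces to ordinary least squares.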
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmenting the nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with a texton representation to form the spatio-colour-texture map of a breast histology image. A new weighted-distance similarity measure is then used to generate a graph, and the final segmentation is obtained with the normalized cuts method. Extensive experiments show that the proposed algorithm can segment the nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation, a ground-truth database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed, showing that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Method for preparing hydride configurations and reactive metal surfaces
Silver, Gary L.
1988-08-16
A method for preparing highly hydrogen-reactive surfaces on metals which normally require substantial heating, high pressures, or an extended induction period, which involves pretreatment of said surfaces with either a non-oxidizing acid or hydrogen gas to form a hydrogen-bearing coating on said surfaces, and subsequently heating said coated metal in the absence of moisture and oxygen for a period sufficient to decompose said coating and cooling said metal to room temperature. Surfaces so treated will react almost instantaneously with hydrogen gas at room temperature and low pressure. The method is particularly applicable to uranium, thorium, and lanthanide metals.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
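The core idea above, comparing an empirical characteristic function against a candidate model's CF instead of relying on a closed-form density, can be sketched as follows. This is a univariate stand-in for the authors' bivariate sub-Gaussian testing; the names and the frequency grid are assumptions.

```python
import numpy as np

def ecf(data, t):
    # Empirical characteristic function: (1/n) * sum_k exp(i * t * X_k)
    return np.exp(1j * np.outer(t, data)).mean(axis=1)

def cf_distance(data, model_cf, t):
    # Mean squared distance between the empirical CF and a model CF on a
    # grid of frequencies t; the model needs no closed-form density
    return np.mean(np.abs(ecf(data, t) - model_cf(t)) ** 2)
```

For Gaussian data the distance to the Gaussian CF exp(-t²/2) should be much smaller than the distance to, say, the Cauchy CF exp(-|t|), which is the kind of discrimination the testing method exploits.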
29 CFR 4044.73 - Lump sums and other alternative forms of distribution in lieu of annuities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... distribution is the present value of the normal form of benefit provided by the plan payable at normal... 29 Labor 9 2010-07-01 2010-07-01 false Lump sums and other alternative forms of distribution in... Benefits and Assets Non-Trusteed Plans § 4044.73 Lump sums and other alternative forms of distribution in...
NASA Astrophysics Data System (ADS)
Ünaldı, Tevfik; Mızrak, İbrahim; Kadir, Selahattin
2013-12-01
Physicochemical characterisation of natural K-clinoptilolite and its heavy-metal (Ag+, Cd2+, Cr3+ and Co3+) forms, prepared through batch ion exchange, was accomplished by X-ray diffraction (XRD), X-ray fluorescence (XRF), infrared spectroscopy (FT-IR), differential thermal analysis-thermogravimetry (DTA-TG) and scanning electron microscopy (SEM). For the heavy-metal forms, increasing the normality of the exchange solution resulted in a decrease in crystallinity and increases in unit-cell volume, rate of ion exchange, and percentage of ion selectivity. In this study, the order of ion-selectivity percentages (rather than ion selectivity) of the heavy-metal forms was determined to be Ag+ > Cd2+ > Cr3+ > Co3+. This finding is consistent with the results of worldwide research on the order of ion selectivity in modified clinoptilolite.
Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results.
1981-06-01
present discussion we will assume that phrases have one or two daughters, or more formally, that the grammar is in Chomsky Normal Form [1].) This... grammar point of view, these pairs contrast Chomsky Normal Form [1] with Categorial Grammars [2], and from a representational point of view, these pairs...chart(i, k) * chart(k, j) bottom-up ( Chomsky Normal Form) (9) chart(k, j) = chart(i, ) top-down (Categorial Grammars )chart(i, k) Earley’s Algorithm [8
NASA Technical Reports Server (NTRS)
Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)
2001-01-01
A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Graves, R. A., Jr.
1973-01-01
A method for the rapid calculation of the inviscid shock layer about blunt axisymmetric bodies at an angle of attack of 0 deg has been developed. The procedure is of an inverse nature: a shock wave is assumed, calculations proceed along rays normal to the shock, and the solution is iterated until the given body is recovered. The flow-field solution procedure is programmed at the Langley Research Center for the Control Data 6600 computer. The geometries specified in the program are spheres, ellipsoids, paraboloids, and hyperboloids, which may have conical afterbodies. The normal momentum equation is replaced with an approximate algebraic expression, a simplification that significantly reduces machine computation time. Comparisons of the present results with shock shapes and surface pressure distributions obtained by more exact methods indicate that the program provides reasonably accurate results for smooth bodies in axisymmetric flow. However, further research is required to establish the proper approximate form of the normal momentum equation for the two-dimensional case.
Prnjavorac, Besim; Irejiz, Nedzada; Kurbasic, Zahid; Krajina, Katarina; Deljkic, Amina; Sinanovic, Albina; Fejzic, Jasmin
2015-04-01
Appropriate vitamin D turnover is essential for many physiological functions. Knowledge of its function has improved over the last two decades, with growing scientific confirmation and understanding of its overall importance. In addition to the classical (skeletal) roles of vitamin D, many other, non-classical functions outside bone and calcium-phosphate metabolism are well defined today. The aim was to analyze blood vitamin D levels in dialysis and pre-dialysis patients and to evaluate the efficacy of supplementation therapy with vitamin D supplements. Vitamin D3, in the form of 25-hydroxyvitamin D3, was measured in dialysis and pre-dialysis patients using a combination of an enzyme immunoassay competition method with final fluorescent detection (ELFA). Parathormone was measured by ELISA; other parameters were measured by colorimetric methods. Statistical analysis was done by nonparametric methods because of the dispersion of the vitamin D and parathormone results. Thirty-eight dialysis patients were analyzed. Of these, 35 (92%) presented vitamin D deficiency, whether or not they took supplementation; in only 3 patients was the deficiency not severe. Vitamin D status was evaluated in 42 pre-dialysis patients: 19 (45%) had a satisfactory level of more than 30 ng/ml, 16 (38%) had moderate deficiency, 5 (12%) had severe deficiency, and two (5%) had very severe deficiency of less than 5 ng/ml. Parathormone was within the normal range (9.5-75 pg/mL) in 13 patients (34%), below the normal range in one subject (2%), and above the normal range in 24 (63%). Vitamin D3 deficiency was registered in most hemodialysis patients, regardless of whether supplemental therapy was given regularly. More appropriate vitamin D3 supplementation should be considered for dialysis as well as pre-dialysis patients; among pre-dialysis patients, moderate deficiency was found in half, but severe deficiency in only two.
Calculation of singlet oxygen formation from one photon absorbing photosensitizers used in PDT
NASA Astrophysics Data System (ADS)
Potasek, M.; Parilov, Evgueni; Beeson, K.
2013-03-01
Advances in biophotonic medicine require new information on photodynamic mechanisms. In photodynamic therapy (PDT), a photosensitizer (PS) is injected into the body and accumulates at higher concentrations in diseased tissue than in normal tissue. The PS absorbs light from a light source, generating excited triplet states of the PS, which can then react with ground-state molecular oxygen to form excited singlet-state oxygen or other highly reactive species. These reactive species react with living cells, resulting in cell death. The treatment is used against many forms of cancer, including those of the prostate, head and neck, lungs, bladder, and esophagus, and certain skin cancers. We developed a novel numerical method to model the photophysical and photochemical processes in the PS and the subsequent energy transfer to O2, improving the understanding of these processes at the molecular level. Our numerical method simulates light propagation and photophysics in the PS using methods that build on techniques previously developed for optical communications and nonlinear optics applications.
Three-Dimensional Model of the Scatterer Distribution in Cirrhotic Liver
NASA Astrophysics Data System (ADS)
Yamaguchi, Tadashi; Nakamura, Keigo; Hachiya, Hiroyuki
2003-05-01
Ultrasonic B-mode images are affected by changes in scatterer distribution. It is difficult to estimate the relationship between an ultrasonic image and the tissue structure quantitatively, because the continuous stages of cirrhotic liver tissue, particularly the beginning stage, cannot be observed clinically. In this paper, we propose a three-dimensional method for modeling the scatterer distribution of normal and cirrhotic livers, in order to assess how changes in the form of the scatterer distribution influence echo information. The algorithm includes parameters that determine the expansion of nodules and fibers. Using B-mode images obtained from these scatterer distributions, we analyze the relationship between changes in the form of the biological tissue and changes in the B-mode images as liver cirrhosis progresses.
Method of forming and starting a sodium sulfur battery
Paquette, David G.
1981-01-01
A method of forming a sodium sulfur battery and of starting the reactive capability of that battery when heated to a temperature suitable for battery operation is disclosed. An anodic reaction zone is constructed in a manner that sodium is hermetically sealed therein, part of the hermetic seal including fusible material which closes up openings through the container of the anodic reaction zone. The hermetically sealed anodic reaction zone is assembled under normal atmospheric conditions with a suitable cathodic reaction zone and a cation-permeable barrier. When the entire battery is heated to an operational temperature, the fusible material of the hermetically sealed anodic reaction zone is fused, thereby allowing molten sodium to flow from the anodic reaction zone into reactive engagement with the cation-permeable barrier.
Expression of SET Protein in the Ovaries of Patients with Polycystic Ovary Syndrome
Boqun, Xu; Xiaonan, Dai; YuGui, Cui; Lingling, Gao; Xue, Dai; Gao, Chao; Feiyang, Diao; Jiayin, Liu; Gao, Li; Li, Mei; Zhang, Yuan; Ma, Xiang
2013-01-01
Background. We previously found that expression of SET gene was up-regulated in polycystic ovaries by using microarray. It suggested that SET may be an attractive candidate regulator involved in the pathophysiology of polycystic ovary syndrome (PCOS). In this study, expression and cellular localization of SET protein were investigated in human polycystic and normal ovaries. Method. Ovarian tissues, six normal ovaries and six polycystic ovaries, were collected during transsexual operation and surgical treatment with the signed consent form. The cellular localization of SET protein was observed by immunohistochemistry. The expression levels of SET protein were analyzed by Western Blot. Result. SET protein was expressed predominantly in the theca cells and oocytes of human ovarian follicles in both PCOS ovarian tissues and normal ovarian tissues. The level of SET protein expression in polycystic ovaries was triple higher than that in normal ovaries (P < 0.05). Conclusion. SET was overexpressed in polycystic ovaries more than that in normal ovaries. Combined with its localization in theca cells, SET may participate in regulating ovarian androgen biosynthesis and the pathophysiology of hyperandrogenism in PCOS. PMID:23861679
Trojan dynamics well approximated by a new Hamiltonian normal form
NASA Astrophysics Data System (ADS)
Páez, Rocío Isabel; Locatelli, Ugo
2015-10-01
We revisit a classical perturbative approach to the Hamiltonian related to the motions of Trojan bodies, in the framework of the planar circular restricted three-body problem, by introducing a number of key new ideas in the formulation. In some sense, we adapt the approach of Garfinkel to the context of the normal form theory and its modern techniques. First, we make use of Delaunay variables for a physically accurate representation of the system. Therefore, we introduce a novel manipulation of the variables so as to respect the natural behaviour of the model. We develop a normalization procedure over the fast angle which exploits the fact that singularities in this model are essentially related to the slow angle. Thus, we produce a new normal form, i.e. an integrable approximation to the Hamiltonian. We emphasize some practical examples of the applicability of our normalizing scheme, e.g. the estimation of the stable libration region. Finally, we compare the level curves produced by our normal form with surfaces of section provided by the integration of the non-normalized Hamiltonian, with very good agreement. Further precision tests are also provided. In addition, we give a step-by-step description of the algorithm, allowing for extensions to more complicated models.
NASA Astrophysics Data System (ADS)
Stewart, L. K.
1997-11-01
An analytical method for determining amounts of cleavage-normal dissolution and cleavage-parallel shear movement that occurred between adjacent microlithons during crenulation cleavage seam formation within a deformed slate is developed for the progressive bulk inhomogeneous shortening (PBIS) mechanism of crenulation cleavage formation. The method utilises structural information obtained from samples where a diverging bed and vein are offset by a crenulation cleavage seam. Several samples analysed using this method produced ratios of relative, cleavage-parallel movement of microlithons to the material thickness removed by dissolution typically in the range of 1.1-3.4:1. The mean amount of solution shortening attributed to the formation of the cleavage seams examined is 24%. The results indicate that a relationship may exist between the width of microlithons and the amount of cleavage-parallel intermicrolithon-movement. The method presented here has the potential to help determine whether crenulation cleavage seams formed by the progressive bulk inhomogeneous shortening mechanism or by that involving cleavage-normal pressure solution alone.
Exact Delaunay normalization of the perturbed Keplerian Hamiltonian with tesseral harmonics
NASA Astrophysics Data System (ADS)
Mahajan, Bharat; Vadali, Srinivas R.; Alfriend, Kyle T.
2018-03-01
A novel approach for the exact Delaunay normalization of the perturbed Keplerian Hamiltonian with tesseral and sectorial spherical harmonics is presented in this work. It is shown that the exact solution for the Delaunay normalization can be reduced to quadratures by the application of Deprit's Lie-transform-based perturbation method. Two different series representations of the quadratures, one in powers of the eccentricity and the other in powers of the ratio of the Earth's angular velocity to the satellite's mean motion, are derived. The latter series representation produces expressions for the short-period variations that are similar to those obtained from the conventional method of relegation. Alternatively, the quadratures can be evaluated numerically, resulting in more compact expressions for the short-period variations that are valid for an elliptic orbit with an arbitrary value of the eccentricity. Using the proposed methodology for the Delaunay normalization, generalized expressions for the short-period variations of the equinoctial orbital elements, valid for an arbitrary tesseral or sectorial harmonic, are derived. The result is a compact unified artificial satellite theory for the sub-synchronous and super-synchronous orbit regimes, which is nonsingular for the resonant orbits, and is closed-form in the eccentricity as well. The accuracy of the proposed theory is validated by comparison with numerical orbit propagations.
Emadzadeh, Ehsan; Sarker, Abeed; Nikfarjam, Azadeh; Gonzalez, Graciela
2017-01-01
Social networks, such as Twitter, have become important sources for active monitoring of user-reported adverse drug reactions (ADRs). Automatic extraction of ADR information can be crucial for healthcare providers, drug manufacturers, and consumers. However, because of the non-standard nature of social media language, automatically extracted ADR mentions need to be mapped to standard forms before they can be used by operational pharmacovigilance systems. We propose a modular natural language processing pipeline for mapping (normalizing) colloquial mentions of ADRs to their corresponding standardized identifiers. We seek to accomplish this task and enable customization of the pipeline so that distinct unlabeled free text resources can be incorporated to use the system for other normalization tasks. Our approach, which we call Hybrid Semantic Analysis (HSA), sequentially employs rule-based and semantic matching algorithms for mapping user-generated mentions to concept IDs in the Unified Medical Language System vocabulary. The semantic matching component of HSA is adaptive in nature and uses a regression model to combine various measures of semantic relatedness and resources to optimize normalization performance on the selected data source. On a publicly available corpus, our normalization method achieves 0.502 recall and 0.823 precision (F-measure: 0.624). Our proposed method outperforms a baseline based on latent semantic analysis and another that uses MetaMap.
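The two-stage matching idea (rule-based lookup first, semantic matching as a fallback) can be sketched as below. This is a minimal illustration, not the authors' HSA pipeline: the mini-lexicon, the concept IDs shown, and the use of `difflib` string similarity as a stand-in for the regression-combined semantic-relatedness measures are all assumptions for the sake of the example.

```python
from difflib import SequenceMatcher

# Hypothetical mini-lexicon mapping standard ADR terms to concept IDs
# (a tiny stand-in for the UMLS vocabulary used in the paper).
LEXICON = {
    "headache": "C0018681",
    "nausea": "C0027497",
    "insomnia": "C0917801",
}

def normalize_adr(mention, threshold=0.75):
    """Map a colloquial ADR mention to a concept ID.

    Step 1 mimics a rule-based exact match; step 2 is a crude
    stand-in for the semantic-similarity component.
    """
    key = mention.lower().strip()
    if key in LEXICON:                      # rule-based exact match
        return LEXICON[key]
    # Fallback: pick the lexicon entry with the highest string similarity.
    best, score = None, 0.0
    for term, cid in LEXICON.items():
        s = SequenceMatcher(None, key, term).ratio()
        if s > score:
            best, score = cid, s
    return best if score >= threshold else None

print(normalize_adr("Nausea"))      # exact match -> C0027497
print(normalize_adr("headaches"))   # fuzzy match -> C0018681
```

In the real system the fallback stage would score candidates with learned semantic relatedness rather than surface similarity; the modular structure, however, is the same.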
Neural image analysis in the process of quality assessment: domestic pig oocytes
NASA Astrophysics Data System (ADS)
Boniecki, P.; Przybył, J.; Kuzimska, T.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.
2014-04-01
The questions related to quality classification of animal oocytes are explored by numerous scientific and research centres. This research is important, particularly in the context of improving the breeding value of farm animals. The methods leading to the stimulation of normal development of a larger number of fertilised animal oocytes in extracorporeal conditions are of special importance. Growing interest in the techniques of supported reproduction resulted in searching for new, increasingly effective methods for quality assessment of mammalian gametes and embryos. Progress in the production of in vitro animal embryos in fact depends on proper classification of obtained oocytes. The aim of this paper was the development of an original method for quality assessment of oocytes, performed on the basis of their graphical presentation in the form of microscopic digital images. The classification process was implemented on the basis of the information coded in the form of microphotographic pictures of the oocytes of domestic pig, using the modern methods of neural image analysis.
Nikolaeva, S S; Chkhol, K Z; Bykov, V A; Roshchina, A A; Iakovleva, L V; Koroleva, O A; Omel'ianenko, N P; Rebrov, L B
2000-01-01
The content of different forms of tissue water was studied in normal and osteoarthrotic articular cartilage and in its structural components: collagen, potassium hyaluronate, sodium chondroitin sulphate, and their complexes. The components of the cartilage matrix contain several fractions of bound water that differ in binding strength. At maximal humidity, all water in collagen is bound to the active groups of the biopolymer, whereas the glycosaminoglycans contain, in addition to bound water, at least two crystalline forms of freezing (free) water. The amount of free water in the collagen-chondroitin sulphate membrane increases with increasing chondroitin sulphate content. In the collagen-hyaluronate complex, a free-water fraction is found only at low concentrations of potassium hyaluronate. It was shown that in hyaline cartilage, unlike other connective tissues (skin, Achilles tendon), most of the water is free, and its amount increases in osteoarthrosis. It is suggested that the redistribution of bound- and free-water fractions in osteoarthrosis results from a deficiency of hyaluronic acid, which may be relevant to improving methods of treatment. This methodological approach yields information on the forms and binding energies of water absorbed by biological tissues from fluids and the vapour phase, and characterizes the associated pathological changes.
Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Weng, Yi-Hsin
2015-01-01
Purpose. We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [99mTc]-TRODAT with SPECT imaging. Procedures. A normal [99mTc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. Results. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. Conclusions. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients. PMID:26366413
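The two quantities at the heart of this abstract, the SRR and the ROC analysis, can be sketched as follows. The VOI means and SRR scores below are invented, and the simple mean ratio and the Mann-Whitney form of the AUC are common definitions rather than the paper's exact formulas.

```python
# Hypothetical VOI statistics from spatially normalized TRODAT scans;
# a plain mean ratio is used here for illustration only.
def srr(striatal_mean, reference_mean):
    return striatal_mean / reference_mean

# Synthetic SRR scores: PD patients show reduced striatal uptake.
srr_pd  = [1.10, 1.25, 1.18, 1.32]
srr_npd = [1.85, 2.10, 1.66, 1.95]

def roc_auc(pos, neg):
    """Mann-Whitney form of the ROC AUC: the probability that a randomly
    chosen PD scan has a lower SRR than a randomly chosen non-PD scan."""
    pairs = [(p, n) for p in pos for n in neg]
    return sum(p < n for p, n in pairs) / len(pairs)

print(srr(820.0, 400.0))         # 2.05
print(roc_auc(srr_pd, srr_npd))  # 1.0 for these synthetic scores
```

With perfectly separated synthetic scores the AUC is 1.0; the paper's reported AUC of 0.94 reflects realistic overlap between groups.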
Diagnosing a Strong-Fault Model by Conflict and Consistency
Zhou, Gan; Feng, Wenquan
2018-01-01
The diagnosis method for a weak-fault model, in which only the normal behavior of each component is modeled, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. A strong-fault model is difficult to diagnose due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, since such consistency indicates probably normal components. This paper addresses the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. First, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS then reasons over the CNF to find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently propose the best candidates based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where they significantly outperform best-first and conflict-directed A* search methods. PMID:29596302
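The core ideas (fault modes with their own behaviors, consistency checking against an observation, and minimal diagnoses) can be shown on a toy system. The two-inverter circuit, the "stuck0" fault mode, and the brute-force enumeration below are invented for illustration and stand in for the paper's CNF encoding and LTMS reasoning.

```python
from itertools import product

# Toy strong-fault model: two inverters A and B in series (x -> y -> z).
# Each component is either "ok" (inverts its input) or in the specific
# fault mode "stuck0" (always outputs 0) -- the defining feature of a
# strong-fault model is that fault modes have behaviors too.
def behave(mode, inp):
    return (1 - inp) if mode == "ok" else 0

def consistent(modes, x, z_obs):
    y = behave(modes["A"], x)
    return behave(modes["B"], y) == z_obs

# Observation x=1, z=0 conflicts with the all-normal prediction z=1.
candidates = [dict(zip("AB", m)) for m in product(["ok", "stuck0"], repeat=2)]
good = [m for m in candidates if consistent(m, x=1, z_obs=0)]
faulted = [sorted(c for c, v in m.items() if v != "ok") for m in good]
minimal = [f for f in faulted if not any(set(g) < set(f) for g in faulted)]
print(minimal)   # [['B']]: B stuck-at-0 is the minimal diagnosis
```

A real LTMS avoids this exhaustive enumeration by propagating conflicts and consistencies through the CNF, which is what makes the paper's search approaches efficient.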
ERIC Educational Resources Information Center
Koskela, Anne; Vehkalahti, Kaisa
2017-01-01
This article shows the importance of paying attention to the role of professional devices, such as standardised forms, as producers of normality and deviance in the history of education. Our case study focused on the standardised forms used by teachers during child guidance clinic referrals and transfers to special education in northern Finland,…
Feedback linearization of singularly perturbed systems based on canonical similarity transformations
NASA Astrophysics Data System (ADS)
Kabanov, A. A.
2018-05-01
This paper discusses the problem of feedback linearization of a singularly perturbed system in state-dependent coefficient form. The result is based on the introduction of a canonical similarity transformation. The transformation matrix is constructed from separate blocks for the fast and slow parts of the original singularly perturbed system. The transformed singularly perturbed system has a linear canonical form, which significantly simplifies the control design problem. The proposed similarity transformation linearizes the system without introducing a virtual output (as the normal form method requires), and the transition from the phase coordinates of the transformed system back to the state variables of the original system is simpler. The application of the proposed approach is illustrated through an example.
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.
Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2015-10-01
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
NASA Astrophysics Data System (ADS)
Ahmadov, A. I.; Naeem, Maria; Qocayeva, M. V.; Tarverdiyeva, V. A.
2018-01-01
In this paper, the bound-state solution of the modified radial Schrödinger equation is obtained for the Manning-Rosen plus Hulthén potential by using a newly developed scheme to overcome the centrifugal part. The energy eigenvalues and corresponding radial wave functions are defined for any l≠0 angular momentum case via the Nikiforov-Uvarov (NU) and supersymmetric quantum mechanics (SUSY QM) methods. Both methods yield equivalent expressions for the energy eigenvalues, and the transformation between the two forms of the radial wave functions is presented. The energy levels and the corresponding normalized eigenfunctions are represented in terms of the Jacobi polynomials for arbitrary l states. A closed form of the normalization constant of the wave functions is also found. It is shown that the energy eigenvalues and eigenfunctions are sensitive to the radial quantum number n_r and the orbital quantum number l.
Methods for implementing microbeam radiation therapy
Dilmanian, F. Avraham; Morris, Gerard M.; Hainfeld, James F.
2007-03-20
A method of performing radiation therapy includes delivering a therapeutic dose such as X-ray only to a target (e.g., tumor) with a continuous (or in-effect continuous) broad beam using arrays of parallel planes of radiation (microbeams/microplanar beams). Microbeams spare normal tissues, and when interlaced at a tumor, form a broad beam for tumor ablation. Bidirectional interlaced microbeam radiation therapy (BIMRT) uses two orthogonal arrays with inter-beam spacing equal to beam thickness. Multidirectional interlaced MRT (MIMRT) includes irradiations of arrays from several angles, which interleave at the target. Contrast agents, such as tungsten and gold, are administered to preferentially increase the target dose relative to the dose in normal tissue. Lighter elements, such as iodine and gadolinium, are used as scattering agents in conjunction with non-interleaving geometries of array(s) (e.g., unidirectional or cross-fired (intersecting)) to generate a broad-beam effect only within the target by preferentially increasing the valley dose within the tumor.
Ahn, Sung Hee; Bae, Yong Jin; Moon, Jeong Hee; Kim, Myung Soo
2013-09-17
We propose to divide matrix suppression in matrix-assisted laser desorption ionization into two parts, normal and anomalous. In quantification of peptides, the normal effect can be accounted for by constructing the calibration curve in the form of peptide-to-matrix ion abundance ratio versus concentration. The anomalous effect forbids reliable quantification and is noticeable when matrix suppression is larger than 70%. With this 70% rule, matrix suppression becomes a guideline for reliable quantification, rather than a nuisance. A peptide in a complex mixture can be quantified even in the presence of large amounts of contaminants, as long as matrix suppression is below 70%. The theoretical basis for the quantification method using a peptide as an internal standard is presented together with its weaknesses. A systematic method to improve quantification of high concentration analytes has also been developed.
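The ratio-based calibration and the 70% criterion can be sketched numerically. The abundances below are invented, and the suppression formula used (one minus the matrix ion abundance relative to matrix alone) is one common definition rather than necessarily the authors' exact measure.

```python
import numpy as np

# Hypothetical calibration data: peptide concentration (pmol) vs
# peptide and matrix ion abundances (arbitrary counts).
conc         = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
peptide_ions = np.array([120., 610., 1150., 2300., 5600.])
matrix_ions  = np.array([9800., 9100., 8400., 7000., 4500.])
matrix_alone = 10000.0   # matrix ion abundance with no analyte

# Calibration in the form advocated by the paper:
# peptide-to-matrix ion abundance ratio vs concentration.
ratio = peptide_ions / matrix_ions
slope, intercept = np.polyfit(conc, ratio, 1)

# The normal effect is absorbed by the ratio; the 70% rule flags the
# regime where the anomalous effect forbids reliable quantification.
suppression = 1.0 - matrix_ions / matrix_alone
print(bool((suppression < 0.70).all()))   # True: max suppression is 55%
```

Any concentration point whose suppression exceeded 0.70 would be excluded from the calibration under the proposed rule.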
Network Analysis: Applications for the Developing Brain
Chu-Shore, Catherine J.; Kramer, Mark A.; Bianchi, Matt T.; Caviness, Verne S.; Cash, Sydney S.
2011-01-01
Development of the human brain follows a complex trajectory of age-specific anatomical and physiological changes. The application of network analysis provides an illuminating perspective on the dynamic interregional and global properties of this intricate and complex system. Here, we provide a critical synopsis of methods of network analysis with a focus on developing brain networks. After discussing basic concepts and approaches to network analysis, we explore the primary events of anatomical cortical development from gestation through adolescence. Upon this framework, we describe early work revealing the evolution of age-specific functional brain networks in normal neurodevelopment. Finally, we review how these relationships can be altered in disease and perhaps even rectified with treatment. While this method of description and inquiry remains in early form, there is already substantial evidence that the application of network models and analysis to understanding normal and abnormal human neural development holds tremendous promise for future discovery. PMID:21303762
Evaluation of normalization methods in mammalian microRNA-Seq data
Garmire, Lana Xia; Subramaniam, Shankar
2012-01-01
Simple total tag count normalization is inadequate for microRNA sequencing data generated by next-generation sequencing technology. However, systematic evaluation of normalization methods on microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from tests of differential expression (DE) on microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
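Quantile normalization, one of the two recommended methods, has a simple standard construction: sort each sample, replace the sorted values with the row-wise mean across samples, and restore the original order. The toy count matrix below is invented for illustration.

```python
import numpy as np

def quantile_normalize(counts):
    """Quantile-normalize a (miRNAs x samples) matrix: force every
    sample to share the same empirical distribution (the row-wise
    mean of the column-sorted matrix)."""
    order = np.argsort(counts, axis=0)        # per-sample ranks
    reference = np.sort(counts, axis=0).mean(axis=1)
    out = np.empty_like(counts, dtype=float)
    for j in range(counts.shape[1]):
        out[order[:, j], j] = reference       # put reference back in rank order
    return out

# Toy miRNA-Seq matrix: 4 miRNAs x 3 samples with different depths.
x = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
xn = quantile_normalize(x)
print(np.sort(xn, axis=0))   # identical columns after normalization
```

After normalization every sample has exactly the same distribution of values, which is why the method removes depth and scale differences between libraries.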
Application of agglomerative clustering for analyzing phylogenetically on bacterium of saliva
NASA Astrophysics Data System (ADS)
Bustamam, A.; Fitria, I.; Umam, K.
2017-07-01
Analyzing populations of Streptococcus bacteria is important since these species can cause dental caries, periodontal disease, halitosis (bad breath) and other problems. This paper discusses the phylogenetic relations between Streptococcus bacteria in saliva using a phylogenetic tree built by agglomerative clustering methods. Starting from Streptococcus DNA sequences obtained from GenBank, characteristic features are extracted from the sequences. The extraction result is a feature matrix, which is normalized using min-max normalization; genetic distances are then calculated using the Manhattan distance. The agglomerative clustering techniques considered are single linkage, complete linkage and average linkage. The algorithm starts with as many groups as there are individual species; the most similar species are merged successively, as similarity decreases, until a single group is formed. The result of the grouping is a phylogenetic tree whose branches join at an established distance level: the smaller the distance, the greater the similarity between species. The implementation uses R, an open-source program.
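The pipeline (min-max normalization, Manhattan distance, agglomerative merging) can be sketched end to end. The feature matrix below is invented, and only single linkage is shown; the paper also uses complete and average linkage, and its implementation is in R rather than Python.

```python
import numpy as np

# Hypothetical feature matrix: 5 Streptococcus sequences x 3 extracted
# characteristics (a stand-in for the GenBank-derived features).
X = np.array([[0.2, 10., 300.],
              [0.3, 12., 310.],
              [0.9, 40., 900.],
              [0.8, 38., 880.],
              [0.5, 25., 600.]])

# Min-max normalization so no feature dominates the distance.
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Manhattan "genetic distance" matrix.
D = np.abs(Xn[:, None, :] - Xn[None, :, :]).sum(-1)

# Single-linkage agglomeration: start with one cluster per sequence
# and repeatedly merge the closest pair; the merge history is the tree.
clusters = [{i} for i in range(len(X))]
merges = []
while len(clusters) > 1:
    best = None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            dab = min(D[i, j] for i in clusters[a] for j in clusters[b])
            if best is None or dab < best[0]:
                best = (dab, a, b)
    dab, a, b = best
    merges.append((sorted(clusters[a]), sorted(clusters[b]), round(dab, 3)))
    clusters[a] |= clusters[b]
    del clusters[b]

for m in merges:
    print(m)   # merge history, leaves upward = the phylogenetic tree
```

With these synthetic features the two most similar sequence pairs merge first, reproducing the "smaller distance, greater similarity" reading of the dendrogram.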
Human evaluation in association to the mathematical analysis of arch forms: Two-dimensional study.
Zabidin, Nurwahidah; Mohamed, Alizae Marny; Zaharim, Azami; Marizan Nor, Murshida; Rosli, Tanti Irawati
2018-03-01
The aims were to evaluate the relationship between human evaluation of the dental-arch form and mathematical analysis via two different methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation. This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For the human evaluation, a convenience sample of orthodontic practitioners ranked photo images of the dental casts from the most tapered to the least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the objective mathematical analyses were evaluated. Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable. The use of the fourth-order polynomial equation may be facilitative in obtaining a smooth curve, which can produce a template for an individual arch that represents all potential tooth positions for the dental arch. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.
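Fitting a fourth-order polynomial to arch coordinates is a standard least-squares problem. The arch-like curve below is synthetic (its coefficients are invented, not taken from the study), chosen so the fit can be checked exactly.

```python
import numpy as np

# Synthetic arch-like curve (mm): x runs across the arch, y is the
# depth; the coefficients are illustrative only.
x = np.linspace(-25.0, 25.0, 11)
y = 30.0 - 0.04 * x**2 - 1.28e-5 * x**4

# Fourth-order polynomial interpolation of the arch form, as in the paper.
coeffs = np.polyfit(x, y, deg=4)
arch = np.poly1d(coeffs)

# The fitted polynomial yields a smooth template for any position
# along the arch.
print(round(float(arch(0.0)), 3))    # 30.0 (midline depth)
print(round(float(arch(25.0)), 3))   # ~0.0 (arch ends)
```

On digitised cast landmarks the fit would not be exact, but the same call produces the smooth template curve described in the abstract.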
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of analytic normalization and the normal form of finite-dimensional complete analytic integrable dynamical systems. In more detail, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalue of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Meanwhile, we also prove that any complete analytic integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve those in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.
An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.
Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca
2002-09-01
In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed, for example, at several points in time, and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods, an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. It is illustrated how the corresponding models can be estimated conveniently via MCMC simulation, in particular 'Gibbs sampling', using the freely available software BUGS. In addition, how the measurement error may influence the estimates of the corresponding coefficients is explored. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
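Regression dilution bias is easy to reproduce by simulation. The sketch below uses ordinary least squares on synthetic data with invented parameters, not the paper's Bayesian/BUGS model; with equal true-covariable and error variances, the classical attenuation factor is 0.5 and the slope is halved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 10_000, 2.0

x_true = rng.normal(0.0, 1.0, n)
y = true_slope * x_true + rng.normal(0.0, 0.5, n)

# Covariable observed with measurement error (variance 1.0).
x_obs = x_true + rng.normal(0.0, 1.0, n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Attenuation factor = var(x_true) / (var(x_true) + var(error)) = 0.5
print(ols_slope(x_true, y))   # ~2.0
print(ols_slope(x_obs, y))    # ~1.0: slope halved by measurement error
```

Accounting for the measurement-error model (as the MCMC approach in the paper does) recovers an unbiased estimate of the slope.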
Rowan, Neil J.; Anderson, John G.
1998-01-01
The thermotolerances of two different cell forms of Listeria monocytogenes (serotype 4b) grown at 37 and 42.8°C in commercially pasteurized and laboratory-tyndallized whole milk (WM) were investigated. Test strains, after growth at 37 or 42.8°C, were suspended in WM at concentrations of approximately 1.5 × 10⁸ to 3.0 × 10⁸ cells/ml and were then heated at 56, 60, and 63°C for various exposure times. Survival was determined by enumeration on tryptone-soya-yeast extract agar and Listeria selective agar, and D values (decimal reduction times) and Z values (numbers of degrees Celsius required to cause a 10-fold change in the D value) were calculated. Higher average recovery and higher D values (i.e., seen as a 2.5- to 3-fold increase in thermotolerance) were obtained when cells were grown at 42.8°C prior to heat treatment. A relationship was observed between thermotolerance and cell morphology of L. monocytogenes. Atypical Listeria cell types (consisting predominantly of long cell chains measuring up to 60 μm in length) associated with rough (R) culture variants were shown to be 1.2-fold more thermotolerant than the typical dispersed cell form associated with normal smooth (S) cultures (P ≤ 0.001). The thermal death-time (TDT) curves of R-cell forms contained a tail section in addition to the shoulder section characteristic of TDT curves of normal single to paired cells (i.e., S form). The factors shown to influence the thermoresistance of suspended Listeria cells (P ≤ 0.001) were as follows: growth and heating temperatures, type of plating medium, recovery method, and cell morphology. Regression analysis of nonlinear data can underestimate survival of L. monocytogenes; the end point recovery method was shown to be a better method for determining thermotolerance because it takes both shoulders and tails into consideration. Despite their enhanced heat resistance, atypical R-cell forms of L. monocytogenes were unable to survive the low-temperature, long-time pasteurization process when freely suspended and heated in WM. PMID:9603815
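The D and Z values defined above follow directly from log-linear survival curves. The counts below are synthetic straight lines (no shoulders or tails), invented so the definitions can be verified exactly; they are not the paper's data, which explicitly required end-point methods because of shoulders and tails.

```python
import numpy as np

times = np.array([0., 2., 4., 6., 8.])   # exposure times (min)

def d_value(times, log10_counts):
    """D = time for a 10-fold drop = -1/slope of log10(N) vs time."""
    slope, _ = np.polyfit(times, log10_counts, 1)
    return -1.0 / slope

# Synthetic log10 survivor counts at three temperatures:
log_n_56 = 8.0 - times / 4.0    # D = 4.0 min at 56°C
log_n_60 = 8.0 - times / 1.0    # D = 1.0 min at 60°C
log_n_63 = 8.0 - times / 0.4    # D = 0.4 min at 63°C

temps = np.array([56., 60., 63.])
d_vals = np.array([d_value(times, ln) for ln in (log_n_56, log_n_60, log_n_63)])

# Z = degrees C for a 10-fold change in D = -1/slope of log10(D) vs T.
z_slope, _ = np.polyfit(temps, np.log10(d_vals), 1)
print(round(-1.0 / z_slope, 1))   # Z value in °C for these synthetic data
```

On real curves with shoulders and tails, a single straight-line fit would overstate inactivation, which is the abstract's argument for the end-point recovery method.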
Wang, Xiaofan; Zhao, Xu; Gu, Liqiang; Lv, Chunxiao; He, Bosai; Liu, Zhenzhen; Hou, Pengyi; Bi, Kaishun; Chen, Xiaohui
2014-03-15
A simple and rapid ultra-high performance liquid chromatography-tandem mass spectrometry (uHPLC-MS/MS) method has been developed for the simultaneous determination of five free flavonoids (amentoflavone, isorhamnetin, naringenin, kaempferol and quercetin) and their total (free and conjugated) forms, and to compare the pharmacokinetics of these active ingredients in normal and hyperlipidemic rats. The free and total forms of these flavonoids were extracted by liquid-liquid extraction with ethyl acetate. The conjugated flavonoids were deconjugated by the enzymes β-glucuronidase and sulfatase. Chromatographic separation was accomplished on a ZORBAX Eclipse XDB-C8 USP L7 column using gradient elution. Detection was performed on a 4000Q uHPLC-MS/MS system from AB Sciex using negative ion mode in the multiple reaction monitoring (MRM) mode. The lower limits of quantification were 2.0-5.0 ng/mL for all the analytes. Intra-day and inter-day precision were less than 15%, accuracy ranged from -9.3% to 11.0%, and the mean extraction recoveries of the analytes and internal standard (IS) from rat plasma were all more than 81.7%. The validated method was successfully applied to a comparative pharmacokinetic study of the five free and total analytes in rat plasma. The results indicated that the absorption of the five total flavonoids in the hyperlipidemia group was significantly higher than in the normal group, with similar concentration-time curves. Copyright © 2014 Elsevier B.V. All rights reserved.
Hydrodynamics based transfection in normal and fibrotic rats
Yeikilis, Rita; Gal, Shunit; Kopeiko, Natalia; Paizi, Melia; Pines, Mark; Braet, Filip; Spira, Gadi
2006-01-01
AIM: Hydrodynamics based transfection (HBT), the injection of a large volume of naked plasmid DNA in a short time, is a relatively simple, efficient and safe method for in vivo transfection of liver cells. Though used for quite some time, the mechanism of gene transfection has not yet been elucidated. METHODS: A luciferase-encoding plasmid was injected using the hydrodynamics based procedure into normal and thioacetamide-induced fibrotic Sprague Dawley rats. Scanning and transmission electron microscopy images were taken. The consequence of a dual injection of Ringer solution and luciferase pDNA was followed. Halofuginone, an inhibitor of collagen type I synthesis, was used to reduce ECM load in fibrotic rats prior to the hydrodynamic injection. RESULTS: Large endothelial gaps formed as soon as 10 min following hydrodynamic injection; these gradually returned to normal 10 d post injection. Hydrodynamic administration of Ringer solution 10 or 30 min prior to moderate injection of plasmid did not result in efficient transfection, suggesting that endothelial gaps by themselves are not sufficient for gene expression. Gene transfection following hydrodynamic injection in thioacetamide-induced fibrotic rats was diminished, coinciding with the level of fibrosis. Halofuginone, a specific collagen type I inhibitor, alleviated this effect. CONCLUSION: The hydrodynamic pressure formed following HBT results in the formation of large endothelial gaps. These gaps, though important in the transfer of DNA molecules from the blood to the space of Disse, are not enough to provide the appropriate conditions for hepatocyte transfection. Hydrodynamics based injection is applicable in fibrotic rats provided that ECM load is reduced. PMID:17036386
Reconstruction of normal forms by learning informed observation geometries from data.
Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G
2017-09-19
The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.
Partial differential equations of 3D boundary layer and their numerical solutions in turbomachinery
NASA Astrophysics Data System (ADS)
Zhang, Guoqing; Hua, Yaonan; Wu, Chung-Hua
1991-08-01
This paper studies the 3D boundary layer equations (3DBLE) and their numerical solutions in turbomachinery: (1) the general form of the 3DBLE in turbomachines, with rotational and curvature effects, is derived under the semiorthogonal coordinate system, in which the normal pressure gradient is not equal to zero; (2) the method of solution of the 3DBLE is discussed; (3) the 3D boundary layers on the rotating blade surface, IGV endwall, and rotor endwall (with a relatively moving boundary) are numerically solved, and the predictions correlate well with the measured data; and (4) a comparison is made between the numerical results of the 3DBLE with and without the normal pressure gradient.
NASA Astrophysics Data System (ADS)
Shimanovskii, A. V.
A method for calculating the plane bending of elastic-plastic filaments of finite stiffness is proposed on the basis of plastic flow theory. The problem considered is shown to reduce to relations similar to the Kirchhoff equations of the elastic case. Expressions, which contain the beam-theory equations as a particular case, are obtained for determining the normalized stiffness characteristics of the cross section of a filament with plastic regions. A study is made of the effect of the plastic region size on the position of the elastic deformation-unloading interface and on the normalized stiffness of the filament cross section. Calculation results are presented in graphic form.
Yang, Dong-Ping; Robinson, P A
2017-04-01
A physiologically based corticothalamic model of large-scale brain activity is used to analyze critical dynamics of transitions from normal arousal states to epileptic seizures, which correspond to Hopf bifurcations. This relates an abstract normal form quantitatively to underlying physiology that includes neural dynamics, axonal propagation, and time delays. Thus, a bridge is constructed that enables normal forms to be used to interpret quantitative data. The normal form of the Hopf bifurcations with delays is derived using Hale's theory, the center manifold theorem, and normal form analysis, and it is found to be explicitly expressed in terms of transfer functions and the sensitivity matrix of a reduced open-loop system. It can be applied to understand the effect of each physiological parameter on the critical dynamics and determine whether the Hopf bifurcation is supercritical or subcritical in instabilities that lead to absence and tonic-clonic seizures. Furthermore, the effects of thalamic and cortical nonlinearities on the bifurcation type are investigated, with implications for the roles of underlying physiology. The theoretical predictions about the bifurcation type and the onset dynamics are confirmed by numerical simulations and provide physiologically based criteria for determining bifurcation types from first principles. The results are consistent with experimental data from previous studies, imply that new regimes of seizure transitions may exist in clinical settings, and provide a simplified basis for control-systems interventions. Using the normal form, and the full equations from which it is derived, more complex dynamics, such as quasiperiodic cycles and saddle cycles, are discovered near the critical points of the subcritical Hopf bifurcations.
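In its generic form (the symbols here are schematic and are not the paper's specific physiological parameters), the Hopf normal form on the center manifold can be sketched as:

```latex
\dot{z} = (\mu + i\omega)\, z + \ell_1 \lvert z \rvert^{2} z + O(\lvert z \rvert^{5}),
\qquad z \in \mathbb{C},
```

where μ is the bifurcation parameter and ℓ₁ is the first Lyapunov coefficient: the bifurcation is supercritical (a stable limit cycle is born) when Re ℓ₁ < 0 and subcritical when Re ℓ₁ > 0, which is the distinction the abstract relates to the onset dynamics of absence and tonic-clonic seizures.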
[Investigation of the recrystallization of trehalose as a good glass-former excipient].
Katona, Gábor; Orsolya, Jójártné Laczkovich; Szabóné, Révész Piroska
2014-01-01
An amorphous form of trehalose is easy to prepare by using a solvent method. The recrystallization kinetics can be followed well, which is important because of the occurrence of polymorphic forms of trehalose. This is especially significant in the case of dry powder inhalers. Spray-drying was used as the preparation method, this being one of the most efficient technologies with which to obtain an amorphous form. This method can result in the required particle size and a monodisperse distribution with excellent flowability and, moreover, considerable amorphization. In our work, trehalose was applied as a technological auxiliary agent, and literature data relating to the spray-drying technology of trehalose were collected. Studies were made of the influence of the spraying process on the amorphization of trehalose and on the recrystallization of amorphous trehalose during storage. Amorphous samples were investigated under 3 different conditions over 3 months. The recrystallization process was followed by differential scanning calorimetry and X-ray powder diffraction. The results demonstrated the complete amorphization of trehalose during the spray-drying process. The glass transition temperature was well measurable in the samples and proved to be the same as the literature data. Recrystallization under normal conditions was very slow, but at high relative humidity the process was greatly accelerated. Amorphous trehalose gave rise to dihydrate forms (gamma- and h-trehaloses) during recrystallization, and beta-trehalose was also identified as an anhydrous form.
Paleologos, E K; Kontominas, M G
2005-06-10
A method using normal-phase high performance liquid chromatography (NP-HPLC) with UV detection was developed for the analysis of acrylamide and methacrylamide. The method relies on the chromatographic separation of these analytes on a polar HPLC column designed for the separation of organic acids. Identification of acrylamide and methacrylamide is approached dually, that is, directly in their protonated forms and, for confirmation, as their hydrolysis products acrylic and methacrylic acid, respectively. Detection and quantification are performed at 200 nm. The method is simple, allowing for clear resolution of the target peaks from any interfering substances. Detection limits of 10 microg L(-1) were obtained for both analytes, with the inter- and intra-day RSD for standard analysis lying below 1.0%. Use of acetonitrile in the elution solvent lowers detection limits and retention times without impairing resolution of peaks. The method was applied for the determination of acrylamide and methacrylamide in spiked food samples without native acrylamide, yielding recoveries between 95 and 103%. Finally, commercial samples of french fries, roasted fries, cookies, cocoa and coffee were analyzed to assess the applicability of the method towards acrylamide, giving results similar to those reported in the literature.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.
Genomic Changes in Normal Breast Tissue in Women at Normal Risk or at High Risk for Breast Cancer
Danforth, David N.
2016-01-01
Sporadic breast cancer develops through the accumulation of molecular abnormalities in normal breast tissue, resulting from exposure to estrogens and other carcinogens beginning at adolescence and continuing throughout life. These molecular changes may take a variety of forms, including numerical and structural chromosomal abnormalities, epigenetic changes, and gene expression alterations. To characterize these abnormalities, a review of the literature has been conducted to define the molecular changes in each of the above major genomic categories in normal breast tissue considered to be either at normal risk or at high risk for sporadic breast cancer. This review indicates that normal risk breast tissues (such as reduction mammoplasty) contain evidence of early breast carcinogenesis including loss of heterozygosity, DNA methylation of tumor suppressor and other genes, and telomere shortening. In normal tissues at high risk for breast cancer (such as normal breast tissue adjacent to breast cancer or the contralateral breast), these changes persist, and are increased and accompanied by aneuploidy, increased genomic instability, a wide range of gene expression differences, development of large cancerized fields, and increased proliferation. These changes are consistent with early and long-standing exposure to carcinogens, especially estrogens. A model for the breast carcinogenic pathway in normal risk and high-risk breast tissues is proposed. These findings should clarify our understanding of breast carcinogenesis in normal breast tissue and promote development of improved methods for risk assessment and breast cancer prevention in women. PMID:27559297
Sauer, G
1976-04-01
Mandibular movements of test persons with eugnathic and dysgnathic dentitions were compared with each other by kinematographic tests and evaluation of single photographs. No essential differences in coarse movements are discernible. Neither the form nor the extent of the masticatory movements is significantly influenced by any dysgnathia or by the type of food chewed. The testing method applied does not furnish any information on movements in the occlusion phase.
Lamie, Nesrine T; Yehia, Ali M
2015-01-01
Simultaneous determination of the Dimenhydrinate (DIM) and Cinnarizine (CIN) binary mixture was carried out with simple procedures. Three ratio-manipulating spectrophotometric methods were proposed. A normalized spectrum was utilized as a divisor for simultaneous determination of both drugs with minimum manipulation steps. The proposed methods were simultaneous constant center (SCC), simultaneous derivative ratio spectrophotometry (S(1)DD) and the ratio H-point standard addition method (RHPSAM). Peak amplitudes at the isoabsorptive point in the ratio spectra were measured for determination of the total concentration of DIM and CIN. For subsequent determination of the DIM concentration, the difference between peak amplitudes at 250 nm and 267 nm was used in SCC, while the peak amplitude at 275 nm of the first derivative ratio spectra was used in S(1)DD; subtraction of the DIM concentration from the total one then provided the CIN concentration. The last method, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 220 nm and 230 nm. The coordinates of the intersection point between the two calibration lines correspond to the DIM and CIN concentrations. The proposed methods were successfully applied to combined dosage form analysis. Moreover, a statistical comparison between the proposed and reported spectrophotometric methods was performed. Copyright © 2015 Elsevier B.V. All rights reserved.
Mosier-Boss, P A; Lieberman, S H
2003-09-01
The use of normal Raman spectroscopy and surface-enhanced Raman spectroscopy (SERS) of cationic-coated silver and gold substrates to detect polyatomic anions in aqueous environments is examined. For normal Raman spectroscopy, using near-infrared excitation, linear concentration responses were observed. Detection limits varied from 84 ppm for perchlorate to 2600 ppm for phosphate. In general, detection limits in the ppb to ppm concentration range for the polyatomic anions were achieved using cationic-coated SERS substrates. Adsorption of the polyatomic anions on the cationic-coated SERS substrates was described by a Frumkin isotherm. The SERS technique could not be used to detect dichromate, as this anion reacted with the coatings to form thiol esters. A competitive complexation method was used to evaluate the interaction of chloride ion with the cationic coatings. Hydrogen bonding and pi-pi interactions play significant roles in the selectivity of the cationic coatings.
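The Frumkin isotherm invoked above relates fractional surface coverage to bulk concentration while accounting for lateral interactions between adsorbed anions. A minimal numerical sketch (hypothetical K and g values, one common sign convention; not the study's fitted parameters):

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical parameter values, chosen only for illustration
K = 50.0  # adsorption equilibrium constant (L/mol)
g = 0.8   # Frumkin lateral-interaction parameter (g > 0: attractive)

def coverage(c):
    """Solve K*c = theta/(1-theta) * exp(-2*g*theta) for the
    fractional surface coverage theta at bulk concentration c (mol/L)."""
    f = lambda theta: theta / (1.0 - theta) * np.exp(-2.0 * g * theta) - K * c
    return brentq(f, 1e-12, 1.0 - 1e-12)

# Coverage rises monotonically with concentration
thetas = [coverage(c) for c in (1e-4, 1e-3, 1e-2)]
```

At g = 0 the expression reduces to the Langmuir isotherm, so the interaction parameter is what distinguishes the fit reported in the abstract.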
One-step fabrication of multifunctional micromotors
NASA Astrophysics Data System (ADS)
Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y.
2015-08-01
Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications. Electronic supplementary information (ESI) available: Videos S1-S4 and Fig. S1-S3. See DOI: 10.1039/c5nr03574k
An Integrable Approximation for the Fermi Pasta Ulam Lattice
NASA Astrophysics Data System (ADS)
Rink, Bob
This contribution presents a review of results obtained from computations of approximate equations of motion for the Fermi-Pasta-Ulam lattice. These approximate equations are obtained as a finite-dimensional Birkhoff normal form. It turns out that in many cases, the Birkhoff normal form is suitable for application of the KAM theorem. In particular, this proves Nishida's 1971 conjecture stating that almost all low-energetic motions of the anharmonic Fermi-Pasta-Ulam lattice with fixed endpoints are quasi-periodic. The proof is based on the formal Birkhoff normal form computations of Nishida, the KAM theorem and discrete symmetry considerations.
Barnett, Allen M.; Masi, James V.; Hall, Robert B.
1980-12-16
A solar cell having a copper-bearing absorber is provided with a composite transparent encapsulating layer specifically designed to prevent oxidation of the copper sulfide. In a preferred embodiment, the absorber is a layer of copper sulfide and the composite layer comprises a thin layer of copper oxide formed on the copper sulfide and a layer of encapsulating glass formed on the oxide. It is anticipated that such devices, when exposed to normal operating conditions of various terrestrial applications, can be maintained at energy conversion efficiencies greater than one-half the original conversion efficiency for periods as long as thirty years.
Monolithic integrated high-Tc superconductor-semiconductor structure
NASA Technical Reports Server (NTRS)
Barfknecht, Andrew T. (Inventor); Garcia, Graham A. (Inventor); Russell, Stephen D. (Inventor); Burns, Michael J. (Inventor); de la Houssaye, Paul R. (Inventor); Clayton, Stanley R. (Inventor)
2000-01-01
A method for the fabrication of active semiconductor and high-temperature superconducting devices on the same substrate, to form a monolithically integrated semiconductor-superconductor (MISS) structure, is disclosed. A common insulating substrate, preferably sapphire or yttria-stabilized zirconia, is used for deposition of the semiconductor and high-temperature superconductor substructures. Both substructures are capable of operation at a common temperature of at least 77 K. The separate semiconductor and superconductive regions may be electrically interconnected by normal metals, refractory metal silicides, or superconductors. Circuits and devices formed in the resulting MISS structures display operating characteristics which are equivalent to those of circuits and devices prepared on separate substrates.
Bonded ultrasonic transducer and method for making
Dixon, Raymond D.; Roe, Lawrence H.; Migliori, Albert
1995-01-01
An ultrasonic transducer is formed as a diffusion bonded assembly of piezoelectric crystal, backing material, and, optionally, a ceramic wear surface. The mating surfaces of each component are silver films that are diffusion bonded together under the application of pressure and heat. Each mating surface may also be coated with a reactive metal, such as hafnium, to increase the adhesion of the silver films to the component surfaces. Only thin silver films are deposited, e.g., a thickness of about 0.00635 mm, to form a substantially non-compliant bond between surfaces. The resulting transducer assembly is substantially free of self-resonances over normal operating ranges for taking resonant ultrasound measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg
Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the evaluation of an MRI template and the calculation of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms ("linear elastic energy," "membrane energy," and "bending energy"). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but the results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity. Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.
Fuchsia : A tool for reducing differential equations for Feynman master integrals to epsilon form
NASA Astrophysics Data System (ADS)
Gituliar, Oleksandr; Magerya, Vitaly
2017-10-01
We present Fuchsia, an implementation of the Lee algorithm, which for a given system of ordinary differential equations with rational coefficients ∂_x J(x, ε) = A(x, ε) J(x, ε) finds a basis transformation T(x, ε), i.e., J(x, ε) = T(x, ε) J′(x, ε), such that the system turns into the epsilon form: ∂_x J′(x, ε) = ε S(x) J′(x, ε), where S(x) is a Fuchsian matrix. A system of this form can be trivially solved in terms of polylogarithms as a Laurent series in the dimensional regulator ε. That makes the construction of the transformation T(x, ε) crucial for obtaining solutions of the initial system. In principle, Fuchsia can deal with any regular system; however, its primary task is to reduce differential equations for Feynman master integrals, whose properties ensure that solutions contain only regular singularities. Program Files doi: http://dx.doi.org/10.17632/zj6zn9vfkh.1 Licensing provisions: MIT. Programming language: Python 2.7. Nature of problem: Feynman master integrals may be calculated from solutions of a linear system of differential equations with rational coefficients. Such a system can be easily solved as an ε-series when its epsilon form is known. Hence, a tool which is able to find the epsilon form transformation can be used to evaluate Feynman master integrals. Solution method: The solution method is based on the Lee algorithm (Lee, 2015), which consists of three main steps: fuchsification, normalization, and factorization. During the fuchsification step a given system of differential equations is transformed into the Fuchsian form with the help of the Moser method (Moser, 1959). Next, during the normalization step the system is transformed to the form where the eigenvalues of all residues are proportional to the dimensional regulator ε. Finally, the system is factorized to the epsilon form by finding an unknown transformation which satisfies a system of linear equations.
Additional comments including Restrictions and Unusual features: Systems of single-variable differential equations are considered. A system needs to be reducible to Fuchsian form and eigenvalues of its residues must be of the form n + m ɛ, where n is integer. Performance depends upon the input matrix, its size, number of singular points and their degrees. It takes around an hour to reduce an example 74 × 74 matrix with 20 singular points on a PC with a 1.7 GHz Intel Core i5 CPU. An additional slowdown is to be expected for matrices with complex and/or irrational singular point locations, as these are particularly difficult for symbolic algebra software to handle.
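As a sketch of why the epsilon form is so useful (standard reasoning, not output of Fuchsia itself): expanding the solution as a power series in ε decouples the system order by order,

```latex
J'(x,\epsilon) = \sum_{n \ge 0} \epsilon^{n} J'_{n}(x), \qquad
\partial_x J'_{n}(x) = S(x)\, J'_{n-1}(x),
```

so that each order is the iterated integral J′_n(x) = J′_n(x₀) + ∫_{x₀}^{x} S(t) J′_{n-1}(t) dt, which for a Fuchsian S(x) evaluates to polylogarithms, yielding the ε-series mentioned above.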
NASA Astrophysics Data System (ADS)
Hu, Xuanyu
2017-11-01
We propose a definition for the normal gravity fields and normal figures of small objects in the solar system, such as asteroids, cometary nuclei, and planetary moons. Their gravity fields are represented as series of ellipsoidal harmonics, ensuring more robust field evaluation in the proximity of an arbitrary, convex shape than using spherical harmonics. The normal gravity field, approximate to the actual field, can be described by a finite series of three terms, that is, degree zero, and the zonal and sectoral harmonics of degree two. The normal gravity is that of an equipotential ellipsoid, defined as the normal ellipsoid of the body. The normal ellipsoid may be distinct from the actual figure. We present a rationale for specifying and a numerical method for determining the parameters of the normal ellipsoid. The definition presented here generalizes the convention of the normal spheroid of a large, hydrostatically equilibrated planet, such as Earth. Modeling the normal gravity and the normal ellipsoid is relevant to studying the formation of the “rubble pile” objects, which may have been accreted, or reorganized after disruption, under self-gravitation. While the proposed methodology applies to convex, approximately ellipsoidal objects, those bi-lobed objects can be treated as contact binaries comprising individual convex subunits. We study an exemplary case of the nearly ellipsoidal Martian moon, Phobos, subject to strong tidal influence in its present orbit around Mars. The results allude to the formation of Phobos via gravitational accretion at some further distance from Mars.
Fundamentals of Research Data and Variables: The Devil Is in the Details.
Vetter, Thomas R
2017-10-01
Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is comprised of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess if a set of data have a normal distribution (display "normality").
The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
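The normality check and data transformation described above can be illustrated with scipy on synthetic data (the sample sizes and distribution parameters here are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=50.0, scale=5.0, size=200)
skewed_sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Shapiro-Wilk: the null hypothesis is that the sample is Gaussian,
# so a small p-value flags a departure from normality.
_, p_norm = stats.shapiro(normal_sample)
_, p_skew = stats.shapiro(skewed_sample)

# A log transformation of right-skewed data can restore normality,
# as the abstract's remedy (1) suggests.
_, p_log = stats.shapiro(np.log(skewed_sample))
```

Here the log-normal sample fails the test while its log-transformed version does not, which is exactly the "judicious data transformation" route preferred over falling back to a less powerful nonparametric test.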
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
NASA Astrophysics Data System (ADS)
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.
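The truncated generalized gamma distribution at the heart of mrpy can be sketched with scipy's gengamma (generic shape parameters for illustration; this is not mrpy's own API, nor fitted MRP values):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Generic shape parameters, chosen only for illustration
a, c, scale = 1.8, 0.9, 1.0
m_min = 0.5  # lower truncation mass

dist = stats.gengamma(a, c, scale=scale)
norm = dist.sf(m_min)  # survival function renormalizes the kept tail

def tggd_pdf(m):
    """pdf of the generalized gamma truncated below m_min."""
    return dist.pdf(m) / norm if m >= m_min else 0.0

# The truncated pdf integrates to unity over [m_min, inf)
area, _ = quad(tggd_pdf, m_min, np.inf)
```

Truncation plus renormalization by the survival function is the generic construction; mrpy's TGGD class additionally exposes the MRP-specific parameterization, moments, and fitting machinery described above.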
THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.
2012-08-01
The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiucus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.
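The thermal part of the Jeans-mass selection criterion can be sketched as follows (illustrative cloud conditions; the paper's criterion also includes a turbulent term, which is omitted here):

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6726e-27     # hydrogen mass, kg
M_sun = 1.989e30     # solar mass, kg

def thermal_jeans_mass(T, n, mu=2.33):
    """Thermal Jeans mass (kg) for gas at temperature T (K) and
    number density n (cm^-3) with mean molecular weight mu."""
    rho = n * 1e6 * mu * m_H  # mass density, kg/m^3
    return ((5.0 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3.0 / (4.0 * math.pi * rho)) ** 0.5)

# Typical dense molecular-cloud conditions give a few solar masses
M_J = thermal_jeans_mass(T=10.0, n=1e4) / M_sun
```

Applied hierarchically from small to large scales with the local density of the simulated cloud, a criterion of this type selects which substructures collapse into cores.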
Compensation for Phase Anisotropy of a Metal Reflector
NASA Technical Reports Server (NTRS)
Hong, John
2007-01-01
A method of compensation for the polarization-dependent phase anisotropy of a metal reflector has been proposed. The essence of the method is to coat the reflector with multiple thin alternating layers of two dielectrics that have different indices of refraction, so as to introduce an opposing polarization-dependent phase anisotropy. The anisotropy in question is a phenomenon that occurs in reflection of light at other than normal incidence: For a given plane wave having components polarized parallel (p) and perpendicular (s) to the plane of incidence, the phase of s-polarized reflected light differs from the phase of p-polarized light by an amount that depends on the angle of incidence and the complex index of refraction of the metal. The magnitude of the phase difference is zero at zero angle of incidence (normal incidence) and increases with the angle of incidence. This anisotropy is analogous to a phase anisotropy that occurs in propagation of light through a uniaxial dielectric crystal. In such a case, another uniaxial crystal that has the same orientation but opposite birefringence can be used to cancel the phase anisotropy. Although it would be difficult to prepare a birefringent material in a form suitable for application to the curved surface of a typical metal reflector in an optical instrument, it should be possible to effect the desired cancellation of phase anisotropy by exploiting the form birefringence of multiple thin dielectric layers. (The term "form birefringence" can be defined loosely as birefringence arising, in part, from a regular array of alternating subwavelength regions having different indices of refraction.)
An inkjet-printed microfluidic device for liquid-liquid extraction.
Watanabe, Masashi
2011-04-07
A microfluidic device for liquid-liquid extraction was quickly produced using an office inkjet printer. An advantage of this method is that normal end users, who are not familiar with microfabrication, can produce their original microfluidic devices by themselves. In this method, the printer draws a line on a hydrophobic and oil repellent surface using hydrophilic ink. This line directs a fluid, such as water or xylene, to form a microchannel along the printed line. Using such channels, liquid-liquid extraction was successfully performed under concurrent and countercurrent flow conditions. © The Royal Society of Chemistry 2011
Controllable assembly and disassembly of nanoparticle systems via protein and DNA agents
Lee, Soo-Kwan; Gang, Oleg; van der Lelie, Daniel
2014-05-20
The invention relates to the use of peptides, proteins, and other oligomers to provide a means by which normally quenched nanoparticle fluorescence may be recovered upon detection of a target molecule. Further, the inventive technology provides a structure and method to carry out detection of target molecules without the need to label the target molecules before detection. In another aspect, a method for forming arbitrarily shaped two- and three-dimensional protein-mediated nanoparticle structures and the resulting structures are described. Proteins mediating structure formation may themselves be functionalized with a variety of useful moieties, including catalytic functional groups.
[Ultrasonic methods and semiotics in patients with vasculogenic erectile dysfunction].
Zhukov, O B; Zubarev, A R
2001-01-01
The authors developed criteria for ultrasonic assessment of the cavernous bodies and of arterial and venous circulation in normal penile vessels and in erectile dysfunction in 125 patients; described modern ultrasound modalities for the differential diagnosis of various forms of vasculogenic erectile dysfunction, based on experience with 92 patients; and validated the hydrodynamic role of the tunica albuginea in the pathogenesis of venocorporal dysfunction and pathological venous drainage. Early ischemic signs of arterial insufficiency were revealed.
Confidence regions of planar cardiac vectors
NASA Technical Reports Server (NTRS)
Dubin, S.; Herr, A.; Hunt, P.
1980-01-01
A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
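The construction described above can be sketched as follows, assuming the standard bivariate-normal confidence region: the ellipse is centered at the sample mean, and its semi-axes follow from the eigenvalues of the sample covariance scaled by the chi-square quantile with 2 degrees of freedom (function names are illustrative, not taken from the paper):

```python
import math

def chi2_quantile_2dof(p):
    """Inverse CDF of chi-square with 2 dof: F(x) = 1 - exp(-x/2)."""
    return -2.0 * math.log(1.0 - p)

def confidence_ellipse(points, p=0.95):
    """Center and semi-axes of the p-confidence ellipse for a sample
    of (x, y) pairs assumed drawn from a bivariate normal distribution."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # eigenvalues of the 2x2 covariance matrix in closed form
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    k = chi2_quantile_2dof(p)
    return (mx, my), (math.sqrt(l1 * k), math.sqrt(max(l2, 0.0) * k))
```

Plotting the 90%, 95%, and 99% regions then amounts to calling this with p = 0.90, 0.95, 0.99 and orienting the axes along the covariance eigenvectors.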
The Shock and Vibration Digest, Volume 16, Number 10
1984-10-01
shaped, and a general polygonal-shaped membrane … Fourier expansion-collocation method and the finite … without symmetry. They also derived, with the help … geometry is not applicable; therefore, a Fourier sine series expansion technique. The method was applied … not much work on the dynamic behavior of skew- … particular mode are obtained. This normal mode expansion approach has recently been used in a series of … form of deflection surface. The stability of motion …
Higher Order Thinking in the Australian Army Suite of Logistic Officer Courses
2006-12-15
normal curriculum. They can target subject-specific learning such as science, mathematics, geography; or they can be infused across the curriculum by … some form of didactic, explicit, or direct instruction. On the other hand, if the focus is on procedural knowledge, it is likely that modeling and … socialization and the teaching method of cooperative learning. Learning the process of critical thinking might be best facilitated by a combination of didactic
NASA Astrophysics Data System (ADS)
Muthuvelu, K.; Shanmugam, Sivabalan; Koteeswaran, Dornadula; Srinivasan, S.; Venkatesan, P.; Aruna, Prakasarao; Ganesan, Singaravelu
2011-03-01
In this study, the diagnostic potential of the synchronous luminescence spectroscopy (SLS) technique was evaluated for the characterization of normal and different pathological conditions of the cervix, viz., moderately differentiated squamous cell carcinoma (MDSCC), poorly differentiated squamous cell carcinoma (PDSCC), and well differentiated squamous cell carcinoma (WDSCC). Synchronous fluorescence spectra were measured for 70 abnormal cases and 30 normal subjects. Characteristic, highly resolved peaks and significant spectral differences between the blood formed elements of normal subjects and of MDSCC, PDSCC, and WDSCC patients were obtained. The synchronous luminescence spectra of the formed elements of normal subjects and cervical cancer patients were subjected to statistical analysis. Synchronous luminescence spectroscopy provides 90% sensitivity and 92.6% specificity.
Rapid Fabrication of Cell-Laden Alginate Hydrogel 3D Structures by Micro Dip-Coating.
Ghanizadeh Tabriz, Atabak; Mills, Christopher G; Mullins, John J; Davies, Jamie A; Shu, Wenmiao
2017-01-01
Development of a simple, straightforward 3D fabrication method to culture cells in 3D, without relying on any complex fabrication methods, remains a challenge. In this paper, we describe a new technique that allows fabrication of scalable 3D cell-laden hydrogel structures easily, without complex machinery: the technique can be done using only apparatus already available in a typical cell biology laboratory. The fabrication method involves micro dip-coating of cell-laden hydrogels covering the surface of a metal bar, into the cross-linking reagents calcium chloride or barium chloride to form hollow tubular structures. This method can be used to form single layers with thickness ranging from 126 to 220 µm or multilayered tubular structures. This fabrication method uses alginate hydrogel as the primary biomaterial and a secondary biomaterial can be added depending on the desired application. We demonstrate the feasibility of this method, with survival rate over 75% immediately after fabrication and normal responsiveness of cells within these tubular structures using mouse dermal embryonic fibroblast cells and human embryonic kidney 293 cells containing a tetracycline-responsive, red fluorescent protein (tHEK cells).
Sharma, S C; Tsai, C
1991-01-01
In normal goldfish, optic axons innervate only the contralateral optic tectum. When one eye was enucleated and the optic nerve of the other eye crushed, the regenerating optic axons innervated both optic tecta. We studied the presence of bilaterally projecting retinal ganglion cells by double retrograde cell labeling methods using Nuclear Yellow and True Blue dyes. About 10% of the retinal ganglion cells were double labeled, and these cells were found throughout the retina. In addition, HRP application to the ipsilateral tectum revealed retrogradely labeled retinal ganglion cells of all morphological types. These results suggest that induced ipsilateral projections are formed by regenerating axon collaterals and that all cell types are involved in the generation of normal mirror-image topography.
Chattopadhyay, Saurabh; Kessler, Sean P; Colucci, Juliana Almada; Yamashita, Michifumi; Senanayake, Preenie deS; Sen, Ganes C
2014-01-01
Angiotensin-converting enzyme (ACE) regulates normal blood pressure and fluid homeostasis through its action in the renin-angiotensin-system (RAS). Ace-/- mice are smaller in size, have low blood pressure and defective kidney structure and functions. All of these defects are cured by transgenic expression of somatic ACE (sACE) in vascular endothelial cells of Ace-/- mice. sACE is expressed on the surface of vascular endothelial cells and undergoes a natural cleavage secretion process to generate a soluble form in the body fluids. Both the tissue-bound and the soluble forms of ACE are enzymatically active, and generate the vasoactive octapeptide Angiotensin II (Ang II) with equal efficiency. To assess the relative physiological roles of the secreted and the cell-bound forms of ACE, we expressed, in the vascular endothelial cells of Ace-/- mice, the ectodomain of sACE, which corresponded to only the secreted form of ACE. Our results demonstrated that the secreted form of ACE could normalize kidney functions and RAS integrity, growth and development of Ace-/- mice, but not their blood pressure. This study clearly demonstrates that the secreted form of ACE cannot replace the tissue-bound ACE for maintaining normal blood pressure; a suitable balance between the tissue-bound and the soluble forms of ACE is essential for maintaining all physiological functions of ACE.
Overlap junctions for high coherence superconducting qubits
NASA Astrophysics Data System (ADS)
Wu, X.; Long, J. L.; Ku, H. S.; Lake, R. E.; Bal, M.; Pappas, D. P.
2017-07-01
Fabrication of sub-micron Josephson junctions is demonstrated using standard processing techniques for high-coherence, superconducting qubits. These junctions are made in two separate lithography steps with normal-angle evaporation. Most significantly, this work demonstrates that it is possible to achieve high coherence with junctions formed on aluminum surfaces cleaned in situ by Ar plasma before junction oxidation. This method eliminates the angle-dependent shadow masks typically used for small junctions. Therefore, this is conducive to the implementation of typical methods for improving margins and yield using conventional CMOS processing. The current method uses electron-beam lithography and an additive process to define the top and bottom electrodes. Extension of this work to optical lithography and subtractive processes is discussed.
Zou, Leilei; Liu, Rui; Zhang, Xiaohui; Chu, Renyuan; Dai, Jinhui; Zhou, Hao
2014-01-01
Purpose Scleral remodeling is an important mechanism underlying the development of myopia. Atropine, an antagonist of G protein-coupled muscarinic receptors, is currently used as an off-label treatment for myopia. Regulator of G-protein signaling 2 (RGS2) functions as an intracellular selective inhibitor of muscarinic receptors. In this study we measured scleral RGS2 expression and scleral remodeling in an animal model of myopia in the presence or absence of atropine treatment. Methods Guinea pigs were assigned to four groups: normal (free of form deprivation), form deprivation myopia (FDM) for 4 weeks, FDM treated with saline, and FDM treated with atropine. Biometric measurements were then performed. RGS2 expression levels and scleral remodeling, including scleral thickness and collagen type I expression, were compared among the four groups. Results Compared with normal eyes and contralateral control eyes, the FDM eyes had the most prominent changes in refraction, axial length, and scleral remodeling, indicating myopia. There was no significant difference between control and normal eyes. Hematoxylin and eosin staining showed that the scleral thickness was significantly thinner in the posterior pole region of FDM eyes compared to normal eyes. Real-time PCR and western blot analysis showed a significant decrease in posterior scleral collagen type I mRNA and protein expression in the FDM eyes compared to the normal eyes. The FDM eyes also had increased levels of RGS2 mRNA and protein expression in the sclera. Atropine treatment attenuated the FDM-induced changes in refraction, axial length, and scleral remodeling. Interestingly, atropine treatment significantly increased collagen type I mRNA expression but decreased RGS2 mRNA and protein expression in the sclera of the FDM eyes. Conclusions We identified a significant RGS2 upregulation and collagen type I downregulation in the sclera of FDM eyes, which could be partially attenuated by atropine treatment. 
Our data suggest that targeting dysregulated RGS2 may provide a novel strategy for development of therapeutic agents to suppress myopia progression. PMID:25018620
A systematic evaluation of normalization methods in quantitative label-free proteomics.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2018-01-01
To date, mass spectrometry (MS) data remain inherently biased for reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for this bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis, and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization also performed systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
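For concreteness, one of the simplest normalization strategies of the kind compared in such evaluations, global median alignment of log-intensities, can be sketched as follows; this is an illustrative baseline, not the Vsn method favored by the study:

```python
def median(xs):
    """Sample median of a list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def median_normalize(samples):
    """Shift each sample (a list of log-intensities) so that every
    sample's median equals the global median across all samples,
    making the samples comparable at the center of their distributions."""
    target = median([x for s in samples for x in s])
    return [[x - median(s) + target for x in s] for s in samples]
```

After this step, between-sample differences in overall intensity level are removed, while within-sample relative differences are preserved.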
ERIC Educational Resources Information Center
Carpenter, Donald A.
2008-01-01
Confusion exists among database textbooks as to the goal of normalization as well as to which normal form a designer should aspire. This article discusses such discrepancies with the intention of simplifying normalization for both teacher and student. This author's industry and classroom experiences indicate such simplification yields quicker…
Method for detection and imaging over a broad spectral range
Yefremenko, Volodymyr; Gordiyenko, Eduard; Pishko, Olga (legal representative); Novosad, Valentyn; Pishko, Vitalii (deceased)
2007-09-25
A method of controlling the coordinate sensitivity in a superconducting microbolometer employs localized light, heating or magnetic field effects to form normal or mixed state regions on a superconducting film and to control the spatial location. Electron beam lithography and wet chemical etching were applied as pattern transfer processes in epitaxial Y--Ba--Cu--O films. Two different sensor designs were tested: (i) a 3 millimeter long and 40 micrometer wide stripe and (ii) a 1.25 millimeters long, and 50 micron wide meandering-like structure. Scanning the laser beam along the stripe leads to physical displacement of the sensitive area, and, therefore, may be used as a basis for imaging over a broad spectral range. Forming the superconducting film as a meandering structure provides the equivalent of a two-dimensional detector array. Advantages of this approach are simplicity of detector fabrication, and simplicity of the read-out process requiring only two electrical terminals.
Foam and gel methods for the decontamination of metallic surfaces
Nunez, Luis; Kaminski, Michael Donald
2007-01-23
Decontamination of nuclear facilities is necessary to reduce the radiation field during normal operations and decommissioning of complex equipment. In this invention, we discuss gel- and foam-based diphosphonic acid (HEDPA) chemical solutions that are unique in that these solutions can be applied at room temperature, provide protection to the base metal for continued use of the equipment, and reduce the final waste form production to one step. The HEDPA gels and foams are formulated with benign chemicals, including various solvents, such as ionic liquids, and reducing and complexing agents such as hydroxamic acids and formaldehyde sulfoxylate. Gel- and foam-based HEDPA processes allow for decontamination of difficult-to-reach surfaces that are unmanageable with traditional aqueous process methods. Also, the gel and foam components are optimized to maximize the dissolution rate and assist in the chemical transformation of the gel and foam to a stable waste form.
Theory of strong turbulence by renormalization
NASA Technical Reports Server (NTRS)
Tchen, C. M.
1981-01-01
The hydrodynamical equations of turbulent motions are inhomogeneous and nonlinear in their inertia and force terms and will generate a hierarchy. A kinetic method was developed to transform the hydrodynamic equations into a master equation governing the velocity distribution as a function of time, position, and velocity as an independent variable. The master equation presents the advantage of being homogeneous and having fewer nonlinear terms, and is therefore simpler for the investigation of closure. After closure by means of a cascade scaling procedure, the kinetic equation is derived; it possesses a memory which represents the non-Markovian character of turbulence. The kinetic equation is transformed back to the hydrodynamical form to yield an energy balance in cascade form. Normal and anomalous transports are analyzed. The theory is described for incompressible, compressible, and plasma turbulence. Applications of the method to problems relating to sound generation and the propagation of light in a nonfrozen turbulence are considered.
Decorin and biglycan of normal and pathologic human corneas
NASA Technical Reports Server (NTRS)
Funderburgh, J. L.; Hevelone, N. D.; Roth, M. R.; Funderburgh, M. L.; Rodrigues, M. R.; Nirankari, V. S.; Conrad, G. W.
1998-01-01
PURPOSE: Corneas with scars and certain chronic pathologic conditions contain highly sulfated dermatan sulfate, but little is known of the core proteins that carry these atypical glycosaminoglycans. In this study the proteoglycan proteins attached to dermatan sulfate in normal and pathologic human corneas were examined to identify primary genes involved in the pathobiology of corneal scarring. METHODS: Proteoglycans from human corneas with chronic edema, bullous keratopathy, and keratoconus and from normal corneas were analyzed using sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), quantitative immunoblotting, and immunohistology with peptide antibodies to decorin and biglycan. RESULTS: Proteoglycans from pathologic corneas exhibit increased size heterogeneity and binding of the cationic dye alcian blue compared with those in normal corneas. Decorin and biglycan extracted from normal and diseased corneas exhibited similar molecular size distribution patterns. In approximately half of the pathologic corneas, the level of biglycan was elevated an average of seven times above normal, and decorin was elevated approximately three times above normal. The increases were associated with highly charged molecular forms of decorin and biglycan, indicating modification of the proteins with dermatan sulfate chains of increased sulfation. Immunostaining of corneal sections showed an abnormal stromal localization of biglycan in pathologic corneas. CONCLUSIONS: The increased dermatan sulfate associated with chronic corneal pathologic conditions results from stromal accumulation of decorin and particularly of biglycan in the affected corneas. These proteins bear dermatan sulfate chains with increased sulfation compared with normal stromal proteoglycans.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-02-01
The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10⁻⁹ hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.
Lawlor, Shawn P [Bellevue, WA]; Novaresi, Mark A [San Diego, CA]; Cornelius, Charles C [Kirkland, WA]
2008-02-26
A gas compressor based on the use of a driven rotor having an axially oriented compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which forms a supersonic shockwave axially, between adjacent strakes. In using this method to compress inlet gas, the supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the gas compression ramp on a strake and the shock capture lip on the adjacent strake, and captures the resultant pressure within the stationary external housing while providing a diffuser downstream of the compression ramp.
Bonded ultrasonic transducer and method for making
Dixon, R.D.; Roe, L.H.; Migliori, A.
1995-11-14
An ultrasonic transducer is formed as a diffusion bonded assembly of piezoelectric crystal, backing material, and, optionally, a ceramic wear surface. The mating surfaces of each component are silver films that are diffusion bonded together under the application of pressure and heat. Each mating surface may also be coated with a reactive metal, such as hafnium, to increase the adhesion of the silver films to the component surfaces. Only thin silver films are deposited, e.g., a thickness of about 0.00635 mm, to form a substantially non-compliant bond between surfaces. The resulting transducer assembly is substantially free of self-resonances over normal operating ranges for taking resonant ultrasound measurements. 12 figs.
Wu, D.; Lei, J.; Zheng, B.; Tang, X.; Wang, M.; Hu, Jiawen; Li, S.; Wang, B.; Finkelman, R.B.
2011-01-01
Three hundred and six coal samples were taken from the main coal mines of twenty-six provinces, autonomous regions, and municipalities in China, according to the resource distribution and coal-forming periods as well as the coal ranks and coal yields. Nitrogen was determined using the Kjeldahl method at the U.S. Geological Survey (USGS); the measured contents exhibit a normal frequency distribution. The nitrogen contents of over 90% of Chinese coals vary from 0.52% to 1.41%, and the average nitrogen content is recommended to be 0.98%. Nitrogen in coal exists primarily in organic form. There is a slight positive relationship between nitrogen content and coal rank. © 2011 Science Press, Institute of Geochemistry, CAS and Springer Berlin Heidelberg.
Lai, J.-S.; Nowinski, C.J.; Victorson, D.; Peterman, A.; Miller, D.; Bethoux, F.; Heinemann, A.; Rubin, S.; Cavazos, J.E.; Reder, A.T.; Sufit, R.; Simuni, T.; Holmes, G.L.; Siderowf, A.; Wojna, V.; Bode, R.; McKinney, N.; Podrabsky, T.; Wortman, K.; Choi, S.; Gershon, R.; Rothrock, N.; Moy, C.
2012-01-01
Objective: To address the need for brief, reliable, valid, and standardized quality of life (QOL) assessment applicable across neurologic conditions. Methods: Drawing from larger calibrated item banks, we developed short measures (8–9 items each) of 13 different QOL domains across physical, mental, and social health and evaluated their validity and reliability. Three samples were utilized during short form development: general population (Internet-based, n = 2,113); clinical panel (Internet-based, n = 553); and clinical outpatient (clinic-based, n = 581). All short forms are expressed as T scores with a mean of 50 and SD of 10. Results: Internal consistency (Cronbach α) of the 13 short forms ranged from 0.85 to 0.97. Correlations between short form and full-length item bank scores ranged from 0.88 to 0.99 (0.82–0.96 after removing common items from banks). Online respondents were asked whether they had any of 19 different chronic health conditions, and whether or not those reported conditions interfered with ability to function normally. All short forms, across physical, mental, and social health, were able to separate people who reported no health condition from those who reported 1–2 or 3 or more. In addition, scores on all 13 domains were worse for people who acknowledged being limited by the health conditions they reported, compared to those who reported conditions but were not limited by them. Conclusion: These 13 brief measures of self-reported QOL are reliable and show preliminary evidence of concurrent validity inasmuch as they differentiate people based upon number of reported health conditions and whether those reported conditions impede normal function. PMID:22573626
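The T-score metric used above (mean 50, SD 10) amounts to a linear rescaling of standardized scores. A minimal sketch follows; note that real instruments such as these short forms standardize against a calibration population, whereas here we standardize against the sample itself purely for illustration:

```python
import math

def t_scores(raw):
    """Rescale raw scores to T scores (mean 50, SD 10) by standardizing
    against the sample itself (illustrative; calibrated instruments use a
    reference population's mean and SD instead)."""
    n = len(raw)
    m = sum(raw) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in raw) / (n - 1))
    return [50.0 + 10.0 * (x - m) / sd for x in raw]
```

With a fixed reference mean and SD substituted for the sample values, the same formula yields scores comparable across studies.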
Time dependent inflow-outflow boundary conditions for 2D acoustic systems
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Myers, Michael K.
1989-01-01
An analysis of the number and form of the required inflow-outflow boundary conditions for the full two-dimensional time-dependent nonlinear acoustic system in subsonic mean flow is performed. The explicit predictor-corrector method of MacCormack (1969) is used. The methodology is tested on both uniform and sheared mean flows with plane and nonplanar sources. Results show that the acoustic system requires three physical boundary conditions on the inflow and one on the outflow boundary. The most natural choice for the inflow boundary conditions is judged to be a specification of the vorticity, the normal acoustic impedance, and a pressure gradient-density gradient relationship normal to the boundary. Specification of the acoustic pressure at the outflow boundary along with these inflow boundary conditions is found to give consistent, reliable results. A set of boundary conditions developed earlier, which were intended to be nonreflecting, is tested using the current method and is shown to yield unstable results for nonplanar acoustic waves.
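The explicit MacCormack predictor-corrector scheme referenced above can be illustrated on the simpler 1D linear advection equation; the paper applies it to the full 2D acoustic system, so this sketch only shows the predictor-corrector structure, with periodic rather than inflow-outflow boundaries:

```python
def maccormack_advect(u, c, dt, dx, steps):
    """MacCormack predictor-corrector for u_t + c u_x = 0 on a periodic
    grid: forward difference in the predictor, backward difference on the
    predicted values in the corrector, then average."""
    n = len(u)
    r = c * dt / dx  # Courant number; stable for |r| <= 1
    u = list(u)
    for _ in range(steps):
        # predictor: forward difference
        up = [u[i] - r * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # corrector: backward difference on the predicted field
        u = [0.5 * (u[i] + up[i] - r * (up[i] - up[(i - 1) % n])) for i in range(n)]
    return u
```

On a periodic grid the scheme conserves total mass exactly; at inflow-outflow boundaries the extra conditions analyzed in the paper replace the periodic wrap-around.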
Taylor, Adam G.
2018-01-01
New solutions of potential functions for the bilinear vertical traction boundary condition are derived and presented. The discretization and interpolation of higher-order tractions and the superposition of the bilinear solutions provide a method of forming approximate and continuous solutions for the equilibrium state of a homogeneous and isotropic elastic half-space subjected to arbitrary normal surface tractions. Past experimental measurements of contact pressure distributions in granular media are reviewed in conjunction with the application of the proposed solution method to analysis of elastic settlement in shallow foundations. A numerical example is presented for an empirical ‘saddle-shaped’ traction distribution at the contact interface between a rigid square footing and a supporting soil medium. Non-dimensional soil resistance is computed as the reciprocal of normalized surface displacements under this empirical traction boundary condition, and the resulting internal stresses are compared to classical solutions to uniform traction boundary conditions. PMID:29892456
NASA Astrophysics Data System (ADS)
Ahmadov, A. I.; Naeem, Maria; Qocayeva, M. V.; Tarverdiyeva, V. A.
2018-02-01
In this paper, the bound state solution of the modified radial Schrödinger equation is obtained for the Manning-Rosen plus Hulthén potential by implementing the novel improved scheme to surmount the centrifugal term. The energy eigenvalues and corresponding radial wave functions are defined for any l ≠ 0 angular momentum case via the Nikiforov-Uvarov (NU) and supersymmetric quantum mechanics (SUSYQM) methods. By using these two different methods, equivalent expressions are obtained for the energy eigenvalues, and the transformation of the radial wave functions into each other is demonstrated. The energy levels are worked out and the corresponding normalized eigenfunctions are represented in terms of the Jacobi polynomials for arbitrary l states. A closed form of the normalization constant of the wave functions is also found. It is shown that the energy eigenvalues and eigenfunctions are sensitive to the radial quantum number nr and the orbital quantum number l.
Zinc-chlorine battery plant system and method
Whittlesey, Curtis C.; Mashikian, Matthew S.
1981-01-01
A zinc-chlorine battery plant system and method of redirecting the electrical current around a failed battery module. The battery plant includes a power conditioning unit, a plurality of battery modules connected electrically in series to form battery strings, a plurality of battery strings electrically connected in parallel to the power conditioning unit, and a bypass switch for each battery module in the battery plant. The bypass switch includes a normally open main contact across the power terminals of the battery module, and a set of normally closed auxiliary contacts for controlling the supply of reactants electrochemically transformed in the cells of the battery module. Upon the determination of a failure condition, the bypass switch for the failed battery module is energized to close the main contact and open the auxiliary contacts. Within a short time, the electrical current through the battery module will substantially decrease due to the cutoff of the supply of reactants, and the electrical current flow through the battery string will be redirected through the main contact of the bypass switch.
NASA Astrophysics Data System (ADS)
Arı, Hatice; Özpozan, Talat; Büyükmumcu, Zeki; Kabacalı, Yiğit; Saçmaci, Mustafa
2016-10-01
A carbamate compound having tricarbonyl groups, methyl-2-(4-methoxybenzoyl)-3-(4-methoxyphenyl)-3-oxopropanoylcarbamate (BPOC), was investigated from a theoretical and vibrational spectroscopic point of view employing quantum chemical methods. Hybrid density functionals (B3LYP, X3LYP and B3PW91) with the 6-311G(d,p) basis set were used for the calculations. Rotational barrier and conformational analyses were performed to find the most stable conformers of the keto and enol forms of the molecule. Three transition states for keto-enol tautomerism in the gas phase were determined. The results of the calculations show that the enol-1 form of BPOC is more stable than the keto and enol-2 forms. A hydrogen bonding investigation, including natural bond orbital (NBO) analysis, was performed for all the tautomeric structures to compare intra-molecular interactions. The energies of the HOMO and LUMO molecular orbitals for all tautomeric forms of BPOC were predicted. Normal Coordinate Analysis (NCA) was carried out for the enol-1 form to assign the vibrational bands of the IR and Raman spectra. The scaling factors were calculated as 0.9721, 0.9697 and 0.9685 for the B3LYP, X3LYP and B3PW91 methods, respectively. Correlation graphs of experimental versus calculated vibrational wavenumbers were plotted, and the X3LYP method gave better frequency agreement than the others.
Graphite pellicles, methods of formation and properties
NASA Astrophysics Data System (ADS)
Topala, P.; Marin, L.; Besliu, V.; Stoicev, P.; Ojegov, A.; Cosovschii, P.
2015-11-01
The paper presents the results of experimental investigations aimed at establishing the composition and functional properties of graphite pellicles formed on metal surfaces by the action of plasma in air at normal pressure, applying pulsed electrical discharges (EDI). It is shown that the pellicles have the same behavioral characteristics as fullerene, avoiding the stick effect between metal surfaces, and between metal and liquid glass, at temperatures of the order of 400-1200 °C.
Oscillations of manometric tubular springs with rigid end
NASA Astrophysics Data System (ADS)
Cherentsov, D. A.; Pirogov, S. P.; Dorofeev, S. M.; Ryabova, Y. S.
2018-05-01
The paper presents a mathematical model of the damped oscillations of manometric tubular springs (MTS), taking into account the rigid tip. The dynamic MTS model is presented in the form of a thin-walled curved rod oscillating in the plane of curvature of its central axis. The equations of MTS oscillation are obtained from the d'Alembert principle in projections onto the normal and tangential directions, and the Bubnov-Galerkin method is used to solve them.
Thin wing corrections for phase-change heat-transfer data.
NASA Technical Reports Server (NTRS)
Hunt, J. L.; Pitts, J. I.
1971-01-01
Since no methods are available for determining the magnitude of the errors incurred when the semi-infinite slab assumption is violated, a computer program was developed to calculate the heat-transfer coefficients on both sides of a finite, one-dimensional slab subject to the boundary conditions ascribed to the phase-change coating technique. The results have been correlated in the form of correction factors to the semi-infinite slab solutions in terms of parameters normally used with the technique.
NASA Astrophysics Data System (ADS)
Yin, Ruiyuan; Li, Yue; Sun, Yu; Wen, Cheng P.; Hao, Yilong; Wang, Maojun
2018-06-01
We report the effect of the gate recess process and the as-etched GaN surface on gate oxide quality, and first reveal the correlation between border traps and exposed surface properties in normally-off Al2O3/GaN MOSFETs. Inductively coupled plasma (ICP) dry etching of the gate recess causes large damage and presents a rough, active surface that is prone to form detrimental GaxO, as validated by atomic force microscopy and X-ray photoelectron spectroscopy. Lower drain current noise spectral density of the 1/f form and less dispersive ac transconductance are observed in GaN MOSFETs fabricated with oxygen-assisted wet etching compared with devices based on ICP dry etching. A border trap density one decade lower is extracted in devices with wet etching according to the carrier number fluctuation model, consistent with the result from the ac transconductance method. Both methods show that the density of border traps is skewed towards the interface, indicating that GaxO located close to the interface has a higher trap density than the bulk gate oxide and is the major location of border traps. The damage-free, oxidation-assisted wet etching gate recess technique presents a relatively smooth and stable surface, resulting in a lower border trap density, which should lead to better MOS channel quality and improved device reliability.
Birkhoff Normal Form for Some Nonlinear PDEs
NASA Astrophysics Data System (ADS)
Bambusi, Dario
We consider the problem of extending to PDEs the Birkhoff normal form theorem on Hamiltonian systems close to nonresonant elliptic equilibria. As a model problem we take the nonlinear wave equation
Closed-form solutions of performability. [modeling of a degradable buffer/multiprocessor system
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1981-01-01
Methods which yield closed form performability solutions for continuous valued variables are developed. The models are similar to those employed in performance modeling (i.e., Markovian queueing models) but are extended so as to account for variations in structure due to faults. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. To avoid known difficulties associated with exact transient solutions, an approximate decomposition of the model is employed permitting certain submodels to be solved in equilibrium. These solutions are then incorporated in a model with fewer transient states and by solving the latter, a closed form solution of the system's performability is obtained. In conclusion, some applications of this solution are discussed and illustrated, including an example of design optimization.
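The equilibrium step at the heart of this decomposition reduces, for each submodel, to solving a stationary Markov-chain equation. A minimal sketch with an illustrative three-state degradable system; the states, rates, and reward structure here are hypothetical, not the paper's buffer/multiprocessor model:

```python
import numpy as np

def stationary_distribution(Q):
    # Equilibrium solution of a continuous-time Markov chain with generator Q:
    # solve pi Q = 0 subject to sum(pi) = 1 (the in-equilibrium submodel step).
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    rhs = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi

# Hypothetical degradable system: 2 units up, 1 up, all down,
# with failure rate lam and repair rate mu (illustrative numbers only).
lam, mu = 0.1, 1.0
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])
pi = stationary_distribution(Q)
rates = np.array([1.0, 0.5, 0.0])   # normalized throughput per structure state
expected_throughput = pi @ rates
```

Weighting the per-state performance rates by the stationary probabilities gives the equilibrium contribution that the paper then folds into a smaller transient model.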
An initial study of void formation during solidification of aluminum in normal and reduced-gravity
NASA Technical Reports Server (NTRS)
Chiaramonte, Francis P.; Foerster, George; Gotti, Daniel J.; Neumann, Eric S.; Johnston, J. C.; De Witt, Kenneth J.
1992-01-01
Void formation due to volumetric shrinkage during aluminum solidification was observed in real time using a radiographic viewing system in normal and reduced gravity. An end-chill directional solidification furnace with water quench was developed to solidify aluminum samples during the approximately 16 seconds of reduced gravity (+/- 0.02g) achieved by flying an aircraft through a parabolic trajectory. Void formation was recorded for two cases: first, a nonwetting system; and second, a wetting system where wetting occurs between the aluminum and crucible lid. The void formation in the nonwetting case is similar in normal and reduced gravity, with a single vapor cavity forming at the top of the crucible. In the wetting case in reduced gravity, surface tension causes two voids to form in the top corners of the crucible, but in normal gravity only one large void forms across the top.
[The clinic skill in fixed appliance based on characteristics of Chinese normal occlusion].
Bai, Ding; Luo, Song-jiao; Chen, Yang-xi; Xiao, Li-wei
2005-02-01
To study bracket placement and arch wire bending based on the ethnic and individual differences of normal occlusion, the prominence, tip, torque, upper first molar offset of crown, and arch form were compared between Chinese and Caucasian normal occlusion. The results showed ethnic differences in all of these characteristics. The placement of the bracket is influenced by crown morphology. Adjustments of bracket placement and arch wire bending with Edgewise and pre-adjusted appliances are necessary to adapt to ethnic and individual differences.
Normal forms for reduced stochastic climate models
Majda, Andrew J.; Franzke, Christian; Crommelin, Daan
2009-01-01
The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from applied mathematics are utilized to systematically derive normal forms for reduced stochastic climate models for low-frequency variables. The use of a few Empirical Orthogonal Functions (EOFs) (also known as Principal Component Analysis, Karhunen–Loève and Proper Orthogonal Decomposition) depending on observational data to span the low-frequency subspace requires the assessment of dyad interactions besides the more familiar triads in the interaction between the low- and high-frequency subspaces of the dynamics. It is shown below that the dyad and multiplicative triad interactions combine with the climatological linear operator interactions to simultaneously produce both strong nonlinear dissipation and Correlated Additive and Multiplicative (CAM) stochastic noise. For a single low-frequency variable the dyad interactions and climatological linear operator alone produce a normal form with CAM noise from advection of the large scales by the small scales and simultaneously strong cubic damping. These normal forms should prove useful for developing systematic strategies for the estimation of stochastic models from climate data. As an illustrative example the one-dimensional normal form is applied below to low-frequency patterns such as the North Atlantic Oscillation (NAO) in a climate model. The results here also illustrate the shortcomings of a recent linear scalar CAM noise model proposed elsewhere for low-frequency variability. PMID:19228943
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
1996-01-01
We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant-coefficient ODEs which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same; there are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant-coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODEs is given in parametric form, and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
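For the case of real, distinct eigenvalues (one of the five representations mentioned above), the explicit per-cell solution can be sketched as follows; the field and initial point here are illustrative, not taken from the report:

```python
import numpy as np

def tangent_curve(A, b, x0, t):
    # Explicit solution of x'(t) = A x(t) + b for a linear vector field,
    # assuming A is invertible and diagonalizable with real eigenvalues
    # (one of the five cases the method distinguishes).
    xp = -np.linalg.solve(A, b)        # fixed point: A xp + b = 0
    w, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0 - xp)    # initial condition in the eigenbasis
    return xp + V @ (c * np.exp(w * t))

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
b = np.array([1.0, 2.0])
x0 = np.array([2.0, 3.0])
# With both eigenvalues negative, the curve decays toward the fixed point (1, 1).
x_far = tangent_curve(A, b, x0, 10.0)
```

In the full method, this parametric form would be intersected with each cell edge to find the exit point before continuing into the neighboring cell.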
Zhou, Xiangtian; Ji, Fengtao; An, Jianhong; Zhao, Fuxin; Shi, Fanjun; Huang, Furong; Li, Yuan; Jiao, Shiming; Yan, Dongsheng; Chen, Xiaoyan; Chen, JiangFan
2012-01-01
Purpose To investigate whether myopia development is associated with changes of scleral DNA methylation in cytosine-phosphate-guanine (CpG) sites in the collagen 1A1 (COL1A1) promoter and messenger RNA (mRNA) levels following murine form deprivation myopia. Methods Fifty-seven C57BL/6 mice (postnatal day 23) were randomly assigned to four groups: (1) monocular form deprivation (MD) in which a diffuser lens was placed over one eye for 28 days; (2) normal controls without MD; (3) MD recovery in which the diffuser lens was removed for seven days; and (4) MD recovery normal controls. The DNA methylation pattern in COL1A1 promoter and exon 1 was determined by bisulfite DNA sequencing, and the COL1A1 mRNA level in sclera was determined by quantitative PCR. Results MD was found to induce myopia in the treated eyes. Six CpG sites in the promoter and exon 1 region of COL1A1 were methylated with significantly higher frequency in the treated eyes than normal control eyes (p<0.05), with CpG island methylation in MD-contralateral eyes being intermediate. Consistent with the CpG methylation, scleral COL1A1 mRNA was reduced by 57% in the MD-treated eyes compared to normal controls (p<0.05). After seven days of MD recovery, CpG methylation was significantly reduced (p=0.01). The methylation patterns returned to near normal level in five CpG sites, but the sixth was hypomethylated compared to normal controls. Conclusions In parallel with the development of myopia and the reduced COL1A1 mRNA, the frequency of methylation in CpG sites of the COL1A1 promoter/exon 1 increased during MD and returned to near normal during recovery. Thus, hypermethylation of CpG sites in the promoter/exon 1 of COL1A1 may underlie reduced collagen synthesis at the transcriptional level in myopic scleras. PMID:22690110
Nahajevszky, Sarolta; Andrikovics, Hajnalka; Batai, Arpad; Adam, Emma; Bors, Andras; Csomor, Judit; Gopcsa, Laszlo; Koszarska, Magdalena; Kozma, Andras; Lovas, Nora; Lueff, Sandor; Matrai, Zoltan; Meggyesi, Nora; Sinko, Janos; Sipos, Andrea; Varkonyi, Andrea; Fekete, Sandor; Tordai, Attila; Masszi, Tamas
2011-01-01
Background Prognostic risk stratification according to acquired or inherited genetic alterations has received increasing attention in acute myeloid leukemia in recent years. A germline Janus kinase 2 haplotype designated as the 46/1 haplotype has been reported to be associated with an inherited predisposition to myeloproliferative neoplasms, and also to acute myeloid leukemia with normal karyotype. The aim of this study was to assess the prognostic impact of the 46/1 haplotype on disease characteristics and treatment outcome in acute myeloid leukemia. Design and Methods Janus kinase 2 rs12343867 single nucleotide polymorphism tagging the 46/1 haplotype was genotyped by LightCycler technology applying melting curve analysis with the hybridization probe detection format in 176 patients with acute myeloid leukemia under 60 years diagnosed consecutively and treated with curative intent. Results The morphological subtype of acute myeloid leukemia with maturation was less frequent among 46/1 carriers than among non-carriers (5.6% versus 17.2%, P=0.018, cytogenetically normal subgroup: 4.3% versus 20.6%, P=0.031), while the morphological distribution shifted towards the myelomonocytoid form in 46/1 haplotype carriers (28.1% versus 14.9%, P=0.044, cytogenetically normal subgroup: 34.0% versus 11.8%, P=0.035). In cytogenetically normal cases of acute myeloid leukemia, the 46/1 carriers had a considerably lower remission rate (78.7% versus 94.1%, P=0.064) and more deaths in remission or in aplasia caused by infections (46.8% versus 23.5%, P=0.038), resulting in the 46/1 carriers having shorter disease-free survival and overall survival compared to the 46/1 non-carriers. In multivariate analysis, the 46/1 haplotype was an independent adverse prognostic factor for disease-free survival (P=0.024) and overall survival (P=0.024) in patients with a normal karyotype. Janus kinase 2 46/1 haplotype had no impact on prognosis in the subgroup with abnormal karyotype. 
Conclusions Janus kinase 2 46/1 haplotype influences morphological distribution, increasing the predisposition towards an acute myelomonocytoid form. It may be a novel, independent unfavorable risk factor in acute myeloid leukemia with a normal karyotype. PMID:21791467
Determination of foveal location using scanning laser polarimetry.
VanNasdale, Dean A; Elsner, Ann E; Weber, Anke; Miura, Masahiro; Haggerty, Bryan P
2009-03-25
The fovea is the retinal location responsible for our most acute vision. There are several methods used to localize the fovea, but the fovea is not always easily identifiable. Landmarks used to determine the foveal location are variable in normal subjects and localization becomes even more difficult in instances of retinal disease. In normal subjects, the photoreceptor axons that make up the Henle fiber layer are cylindrical and the radial orientation of these fibers is centered on the fovea. The Henle fiber layer exhibits form birefringence, which predictably changes polarized light in scanning laser polarimetry imaging. In this study 3 graders were able to repeatably identify the fovea in 35 normal subjects using near infrared image types with differing polarization content. There was little intra-grader, inter-grader, and inter-image variability in the graded foveal position for 5 of the 6 image types examined, with accuracy sufficient for clinical purposes. This study demonstrates that scanning laser polarimetry imaging can localize the fovea by using structural properties inherent in the central macula.
Normalized Index of Synergy for Evaluating the Coordination of Motor Commands
Togo, Shunta; Imamizu, Hiroshi
2015-01-01
Humans perform various motor tasks by coordinating the redundant motor elements in their bodies. The coordination of motor outputs is produced by motor commands as well as by properties of the musculoskeletal system. The aim of this study was to dissociate the coordination of motor commands from that of motor outputs. First, we conducted simulation experiments in which the total elbow torque was generated by a simple model of the human right and left elbows with redundant muscles. The results demonstrated that muscle tension with signal-dependent noise formed a coordinated structure of trial-to-trial variability of muscle tension. Therefore, the removal of signal-dependent noise effects was required to evaluate the coordination of motor commands. We proposed a method to evaluate the coordination of motor commands that removes signal-dependent noise from the measured variability of muscle tension. We used uncontrolled manifold analysis to calculate a normalized index of synergy. Simulation experiments confirmed that the proposed method could appropriately represent the coordinated structure of the variability of motor commands. We also conducted experiments in which subjects performed the same task as in the simulation experiments. The normalized index of synergy revealed that the subjects coordinated their motor commands to achieve the task. Finally, the normalized index of synergy was applied to a motor learning task to determine the utility of the proposed method. We hypothesized that a large part of the change in the coordination of motor outputs through learning was due to changes in motor commands. In a motor learning task, subjects tracked a target trajectory of the total torque. The change in the coordination of muscle tension through learning was dominated by that of the motor commands, which supported the hypothesis.
We conclude that the normalized index of synergy can be used to evaluate the coordination of motor commands independently from the properties of the musculoskeletal system. PMID:26474043
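The uncontrolled manifold (UCM) decomposition underlying such a synergy index can be sketched as below. This is only the standard UCM variance decomposition; the paper's additional step of removing signal-dependent noise from the measured variability is omitted, and the two-muscle task is illustrative:

```python
import numpy as np

def ucm_synergy_index(tensions, jacobian):
    # tensions: (n_trials, n_elements) trial-to-trial muscle tensions
    # jacobian: (n_task, n_elements) maps element outputs to the task variable
    demeaned = tensions - tensions.mean(axis=0)
    n_el = tensions.shape[1]
    n_task = jacobian.shape[0]
    # Orthonormal basis of the Jacobian's null space (the UCM) via SVD
    _, _, vt = np.linalg.svd(jacobian)
    ucm_basis = vt[n_task:]                        # (n_el - n_task, n_el)
    proj_ucm = demeaned @ ucm_basis.T
    v_ucm = proj_ucm.var(axis=0, ddof=1).sum() / (n_el - n_task)
    v_tot = demeaned.var(axis=0, ddof=1).sum() / n_el
    v_ort = (n_el * v_tot - (n_el - n_task) * v_ucm) / n_task
    return (v_ucm - v_ort) / v_tot                 # > 0: task-stabilizing synergy

rng = np.random.default_rng(1)
# Two "muscles" whose tensions covary negatively, so total torque stays stable
common = rng.normal(size=500)
tensions = np.column_stack([1.0 + common,
                            1.0 - common + 0.1 * rng.normal(size=500)])
dv = ucm_synergy_index(tensions, np.array([[1.0, 1.0]]))
```

A positive index means more per-dimension variability lies along directions that leave the task variable (here, total torque) unchanged than along directions that perturb it.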
NASA Astrophysics Data System (ADS)
Luo, G. W.; Chu, Y. D.; Zhang, Y. L.; Zhang, J. G.
2006-11-01
A multidegree-of-freedom system having symmetrically placed rigid stops and subjected to periodic excitation is considered. The system consists of linear components, but the maximum displacement of one of the masses is limited to a threshold value by the symmetrical rigid stops. Repeated impacts usually occur in the vibratory system due to the rigid amplitude constraints. Such models play an important role in the studies of mechanical systems with clearances or gaps. Double Neimark-Sacker bifurcation of the system is analyzed by using the center manifold and normal form method of maps. The period-one double-impact symmetrical motion and homologous disturbed map of the system are derived analytically. A center manifold theorem technique is applied to reduce the Poincaré map to a four-dimensional one, and the normal form map associated with double Neimark-Sacker bifurcation is obtained. The bifurcation sets for the normal-form map are illustrated in detail. Local behavior of the vibratory systems with symmetrical rigid stops, near the points of double Neimark-Sacker bifurcations, is reported by the presentation of results for a three-degree-of-freedom vibratory system with symmetrical stops. The existence and stability of period-one double-impact symmetrical motion are analyzed explicitly. Also, local bifurcations at the points of change in stability are analyzed, thus giving some information on dynamical behavior near the points of double Neimark-Sacker bifurcations. Near the value of double Neimark-Sacker bifurcation there exist period-one double-impact symmetrical motion and quasi-periodic impact motions. The quasi-periodic impact motions are represented by the closed circle and "tire-like" attractor in projected Poincaré sections. With change of system parameters, the quasi-periodic impact motions usually lead to chaos via "tire-like" torus doubling.
Negative Lens–Induced Myopia in Infant Monkeys: Effects of High Ambient Lighting
Smith, Earl L.; Hung, Li-Fang; Arumugam, Baskar; Huang, Juan
2013-01-01
Purpose. To determine whether high light levels, which have a protective effect against form-deprivation myopia, also retard the development of lens-induced myopia in primates. Methods. Hyperopic defocus was imposed on 27 monkeys by securing −3 diopter (D) lenses in front of one eye. The lens-rearing procedures were initiated at 24 days of age and continued for periods ranging from 50 to 123 days. Fifteen of the treated monkeys were exposed to normal laboratory light levels (∼350 lux). For the other 12 lens-reared monkeys, auxiliary lighting increased the illuminance to 25,000 lux for 6 hours during the middle of the daily 12 hour light cycle. Refractive development, corneal power, and axial dimensions were assessed by retinoscopy, keratometry, and ultrasonography, respectively. Data were also obtained from 37 control monkeys, four of which were exposed to high ambient lighting. Results. In normal- and high-light-reared monkeys, hyperopic defocus accelerated vitreous chamber elongation and produced myopic shifts in refractive error. The high light regimen did not alter the degree of myopia (high light: −1.69 ± 0.84 D versus normal light: −2.08 ± 1.12 D; P = 0.40) or the rate at which the treated eyes compensated for the imposed defocus. Following lens removal, the high light monkeys recovered from the induced myopia. The recovery process was not affected by the high lighting regimen. Conclusions. In contrast to the protective effects that high ambient lighting has against form-deprivation myopia, high artificial lighting did not alter the course of compensation to imposed defocus. These results indicate that the mechanisms responsible for form-deprivation myopia and lens-induced myopia are not identical. PMID:23557736
2014-01-01
Purpose Universal design (UD) is oriented to creating products, buildings, outdoor spaces and services for use by all people to the fullest extent possible according to principles of enabling equal citizenship. Nevertheless its theoretical basis has been under-explored, a critique that has also been leveled at rehabilitation. This commentary explores parallels between UD and dominant rehabilitation discourses that risk privileging or discrediting particular ways of being and doing. Methods Commentary. Results Drawing from examples that explore the intersection of bodies, places and technologies with disabled people, I examined how practices of normalization risk reproducing the universalized body and legitimated forms of mobility, and in so doing perpetuates the “othering” of difference. To address these limitations, I explored the postmodern notion of multiple creative “assemblages” that are continually made and broken over time and space. Assemblages resist normalization tendencies by acknowledging and fostering multiple productive dependencies between human and non-human elements that include diverse bodies, not just those labeled disabled. Conclusion In exploring the potential of enhancing creative assemblages and multiple dependencies, space opens up in UD and rehabilitation for acknowledging, developing, and promoting a multiplicity of bodily forms and modes of mobility. Implications for Rehabilitation Universal design and rehabilitation both risk perpetuating particular ideas about what disabled people should be, do, and value, that privilege a limited range of particular bodily forms. The notion of “assemblages” provides a conceptual tool for rethinking negative views of dependence and taken for granted independence goals. In exploring the potential of enhancing various dependencies, space opens up for reconsidering disability, mobility and multiple ways of “doing-in-the-world”. PMID:24564357
Detection of Alzheimer's disease using group lasso SVM-based region selection
NASA Astrophysics Data System (ADS)
Sun, Zhuo; Fan, Yong; Lelieveldt, Boudewijn P. F.; van de Giessen, Martijn
2015-03-01
Alzheimer's disease (AD) is one of the most frequent forms of dementia and an increasingly challenging public health problem. In the last two decades, structural magnetic resonance imaging (MRI) has shown potential in distinguishing patients with Alzheimer's disease from elderly controls (CN). To obtain AD-specific biomarkers, previous research used either statistical testing to find statistically significantly different regions between the two clinical groups, or l1 sparse learning to select isolated features in the image domain. In this paper, we propose a new framework that uses structural MRI to simultaneously distinguish the two clinical groups and find the biomarkers of AD, using a group lasso support vector machine (SVM). The group lasso term (mixed l1-l2 norm) introduces anatomical information from the image domain into the feature domain, such that the resulting set of selected voxels is more meaningful than that of the l1 sparse SVM. Because of large inter-structure size variation, we introduce a group-specific normalization factor to deal with the structure size bias. Experiments have been performed on a well-designed AD vs. CN dataset to validate our method. Compared to the l1 sparse SVM approach, our method achieved better classification performance and a more meaningful biomarker selection. When we varied the training set, the regions selected by our method were more stable than those of the l1 sparse SVM. Classification experiments showed that our group normalization led to higher classification accuracy with fewer selected regions than the non-normalized method. Compared to state-of-the-art AD vs. CN classification methods, our approach not only obtains a high accuracy on the same dataset, but more importantly, simultaneously finds the brain anatomies that are closely related to the disease.
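The mixed l1-l2 penalty and the group-wise shrinkage it induces can be sketched as follows; the sqrt(group size) scaling is an assumption standing in for the paper's group-specific normalization factor, and the weights and groups are illustrative:

```python
import numpy as np

def group_lasso_penalty(w, groups):
    # Mixed l1-l2 norm: sum over groups of the l2 norm of that group's weights,
    # scaled by sqrt(group size) to offset structure-size bias (assumed form).
    return sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in groups)

def prox_group_lasso(w, groups, lam):
    # Proximal (block soft-thresholding) step used to optimize such penalties:
    # each group is shrunk toward zero, and dropped entirely once its norm
    # falls below the threshold, giving region-level selection.
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        thr = lam * np.sqrt(len(g))
        out[g] = 0.0 if norm <= thr else w[g] * (1 - thr / norm)
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
w_new = prox_group_lasso(w, groups, lam=0.5)
# The strong first group survives (shrunk); the weak second group is zeroed.
```

Because whole groups are zeroed together, the selected voxels form anatomically contiguous regions rather than the isolated features an l1 penalty tends to pick.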
Turner, Richard; Joseph, Adrian; Titchener-Hooker, Nigel; Bender, Jean
2017-08-04
Cell harvesting is the separation or retention of cells and cellular debris from the supernatant containing the target molecule. Selection of the harvest method strongly depends on the type of cells, mode of bioreactor operation, process scale, and characteristics of the product and cell culture fluid. Most traditional harvesting methods use some form of filtration, centrifugation, or a combination of both for cell separation and/or retention. Filtration methods include normal flow depth filtration and tangential flow microfiltration. The ability to scale down the selected harvest method predictably helps to ensure successful production and is critical for conducting small-scale characterization studies to confirm parameter targets and ranges. In this chapter we describe centrifugation and depth filtration harvesting methods, share strategies for harvest optimization, present recent developments in centrifugation scale-down models, and review alternative harvesting technologies.
NASA Astrophysics Data System (ADS)
Coghlan, Leslie; Singleton, H. R.; Dell'Italia, L. J.; Linderholm, C. E.; Pohost, G. M.
1995-05-01
We have developed a method for measuring the detailed in vivo three-dimensional geometry of the left and right ventricles using cine magnetic resonance imaging. From data in the form of digitized short-axis outlines, the normal vectors, principal curvatures and directions, and wall thickness were computed. The method was evaluated on simulated ellipsoids and on human MRI data. Measurements of normal vectors and of wall thickness were very accurate in simulated data and appeared appropriate in patient data. On simulated data, measurements of the principal curvature k1 (corresponding approximately to the short-axis direction of the left ventricle) and of principal directions were quite accurate, but measurements of the other principal curvature (k2) were less accurate. The reasons behind this are considered. We expect improvements in the accuracy with thinner slices and improved representation of the surface data. Gradient echo images were acquired from 8 dogs with a 1.5T system (Philips Gyroscan) at baseline and four months after closed-chest experimentally produced mitral regurgitation (MR). The product (k1 + k2) × wall thickness averaged over all slices at end-diastole was significantly lower after surgery (n = 8, p < 0.005). These geometry changes were consistent with the expected increase in wall stress after MR.
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Huckle, Thomas
1989-01-01
In recent years, a number of results on the relationships between the inertias of Hermitian matrices and the inertias of their principal submatrices have appeared in the literature. We study restricted congruence transformations of Hermitian matrices M which, at the same time, induce a congruence transformation of a given principal submatrix A of M. Such transformations lead to the concept of the restricted signature normal form of M. In particular, by means of this normal form, we obtain short proofs of most of the known inertia theorems and also derive some new results of this type. For some applications, a special class of almost unitary restricted congruence transformations turns out to be useful. We show that, with such transformations, M can be reduced to a quasi-diagonal form which, in particular, displays the eigenvalues of A. Finally, applications of this quasi-spectral decomposition to generalized inverses and Hermitian matrix pencils are discussed.
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option for normalizing datasets from untargeted LC-MS experiments. PMID:23808607
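As a concrete example of a data-driven method in this family, quantile normalization (a standard technique, not necessarily one of those evaluated in the article) maps each run's intensities onto a common distribution without internal standards:

```python
import numpy as np

# Quantile normalization sketch: replace each value in a feature-by-run
# intensity matrix by the mean of the values sharing its within-column rank,
# so that every run (column) ends up with an identical distribution.
# The toy matrix below is invented and tie-free for simplicity.

def quantile_normalize(X):
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # within-column ranks
    means = np.sort(X, axis=0).mean(axis=1)            # mean value at each rank
    return means[ranks]

X = np.array([[5.0, 4.0, 3.0],    # rows = metabolite features
              [2.0, 1.0, 4.0],    # cols = LC-MS runs
              [3.0, 3.0, 6.0],
              [4.0, 2.0, 8.0]])
Q = quantile_normalize(X)
print(Q)
```

After normalization the sorted values of every column are identical, which removes run-to-run distributional shifts while preserving within-run rank order.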
Zeyneloglu, H B; Baltaci, V; Ege, S; Haberal, A; Batioglu, S
2000-04-01
If randomly selected immotile spermatozoa are used for intracytoplasmic sperm injection (ICSI), pregnancy rates are significantly decreased. The hypo-osmotic swelling test (HOST) is the only method available to detect viable but immotile spermatozoa for ICSI. However, evidence is still lacking on chromosomal abnormalities in normal-looking but immotile spermatozoa that are positive for HOST. Sperm samples from 20 infertile men with normal chromosomal constitution were obtained. After Percoll separation, morphologically normal but immotile spermatozoa were transported individually into HOST solution for 1 min using micropipettes. Cells that showed tail curling with swelling in HOST were then transferred back into human tubal fluid solution to allow reversal of swelling. These sperm cells were fixed and processed for multi-colour fluorescence in-situ hybridization (FISH) for chromosomes X, Y and 18. The same FISH procedure was applied to the motile spermatozoa from the same cohort, which formed the control group. The average aneuploidy rates were 1.70 and 1.54% in 1000 HOST-positive immotile and motile spermatozoa respectively, detected by FISH for each patient. Our results indicate that morphologically normal, immotile but viable spermatozoa have an aneuploidy rate similar to that of normal motile spermatozoa.
Earle A. Cross; John C. Moser
1975-01-01
Two male and 2 female forms of a new, dimorphic species of Pyemotes from the scolytid Phleosinus canadensis Swaine are described and life history notes are presented. Only one type of female was found to be phoretic. Normal and phoretomorphic females can produce both normal and phoretomorphic daughters. Two species groups in
Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements
NASA Astrophysics Data System (ADS)
Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.
2014-11-01
We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.
Parametric Stiffness Control of Flexible Structures
NASA Technical Reports Server (NTRS)
Moon, F. C.; Rand, R. H.
1985-01-01
An unconventional method for control of flexible space structures using feedback control of certain elements of the stiffness matrix is discussed. The advantage of using this method of configuration control is that it can be accomplished in practical structures by changing the initial stress state in the structure. The initial stress state can be controlled hydraulically or by cables. The method leads, however, to nonlinear control equations. In particular, a long slender truss structure under cable-induced initial compression is examined. Both analytical and numerical analyses are presented. Nonlinear analysis using center manifold theory and normal form theory is used to determine criteria on the nonlinear control gains for stable or unstable operation. The analysis is made possible by the use of the computer algebra system MACSYMA.
Concentration analysis of breast tissue phantoms with terahertz spectroscopy
Truong, Bao C. Q.; Fitzgerald, Anthony J.; Fan, Shuting; Wallace, Vincent P.
2018-01-01
Terahertz imaging has previously been shown to be capable of distinguishing normal breast tissue from its cancerous form, indicating its applicability to breast-conserving surgery. The heterogeneous composition of breast tissue is among the main challenges to progressing this research towards a practical application. In this paper, two concentration analysis methods are proposed for analyzing phantoms mimicking breast tissue. The dielectric properties and the double Debye parameters were used to determine the phantom composition. The first method is wholly based on the conventional effective medium theory, while the second combines this theoretical model with empirical polynomial models. Through assessing the accuracy of these methods, their potential for application to quantifying breast tissue pathology was confirmed. PMID:29541525
NASA Technical Reports Server (NTRS)
Friedrich, R.; Drewelow, W.
1978-01-01
An algorithm is described that is based on the method of breaking the Laplace transform down into partial fractions, which are then inverse-transformed separately. The sum of the resulting partial functions is the desired time function. Numerical problems caused by the form of the equation system are largely limited by appropriate normalization using an auxiliary parameter. The practical limits of program application are reached when the degree of the denominator of the Laplace transform is seven to eight.
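The partial-fraction strategy the abstract describes can be sketched for the case of simple poles: factor the denominator, evaluate the residue at each pole, and sum the inverse transforms of the individual fractions. This is a generic illustration, not the original program (which handled conditioning via an auxiliary normalization parameter):

```python
import numpy as np

# Partial-fraction inversion of a rational Laplace transform num/den with
# simple poles. Coefficient arrays follow numpy's highest-degree-first
# convention; repeated poles are not handled in this sketch.

def invert_laplace(num, den, t):
    poles = np.roots(den)
    dden = np.polyder(den)
    # Residue at a simple pole p is num(p) / den'(p).
    residues = [np.polyval(num, p) / np.polyval(dden, p) for p in poles]
    # Each fraction c/(s - p) inverse-transforms to c * exp(p*t); sum them.
    f = sum(c * np.exp(p * t) for c, p in zip(residues, poles))
    return np.real(f)  # imaginary parts cancel for real-coefficient transforms

# F(s) = 1 / (s (s + 1))  ->  f(t) = 1 - exp(-t)
t = np.linspace(0.0, 5.0, 6)
print(invert_laplace([1.0], [1.0, 1.0, 0.0], t))
```

For the test transform, the residues at the poles 0 and -1 are 1 and -1, reproducing the known time function 1 - exp(-t).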
Stability and Hopf bifurcation for a delayed SLBRS computer virus model.
Zhang, Zizhen; Yang, Huizhong
2014-01-01
By incorporating the time delay due to the period that computers use antivirus software to clean the virus into the SLBRS model, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is presented to verify our analytical results.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods examined for estimating the effect of the trainer.
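The CCSNM formula itself is not given in the abstract, but three of the four baseline normalizations it is compared against are standard and can be sketched (line-base normalization is omitted, as its definition varies between sources):

```python
import numpy as np

# Three standard normalizations used as baselines in the study.
# The sample values below are invented for illustration.

def min_max(x):            # rescale to the interval [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):            # zero mean, unit standard deviation
    return (x - x.mean()) / x.std()

def decimal_scaling(x):    # divide by 10^k so that all |values| <= 1
    k = np.ceil(np.log10(np.abs(x).max()))
    return x / 10 ** k

x = np.array([120.0, 260.0, 310.0, 450.0])
print(min_max(x))          # approx [0, 0.424, 0.576, 1]
print(z_score(x).mean())   # ~0
print(decimal_scaling(x))  # [0.12, 0.26, 0.31, 0.45]
```

Each method removes a different kind of scale effect: min-max fixes the range, z-score fixes the first two moments, and decimal scaling only bounds the magnitude.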
Contact resistance and normal zone formation in coated yttrium barium copper oxide superconductors
NASA Astrophysics Data System (ADS)
Duckworth, Robert Calvin
2001-11-01
This project presents a systematic study of contact resistance and normal zone formation in silver coated YBa2CU3Ox (YBCO) superconductors. A unique opportunity exists in YBCO superconductors because of the ability to use oxygen annealing to influence the interfacial properties and the planar geometry of this type of superconductor to characterize the contact resistance between the silver and YBCO. The interface represents a region that current must cross when normal zones form in the superconductor and a high contact resistance could impede the current transfer or produce excess Joule heating that would result in premature quench or damage of the sample. While it has been shown in single-crystalline YBCO processing methods that the contact resistance of the silver/YBCO interface can be influenced by post-process oxygen annealing, this has not previously been confirmed for high-density films, nor for samples with complete layers of silver deposited on top of the YBCO. Both the influence of contact resistance and the knowledge of normal zone formation on conductor sized samples is essential for their successful implementation into superconducting applications such as transmission lines and magnets. While normal zone formation and propagation have been studied in other high temperature superconductors, the amount of information with respect to YBCO has been very limited. This study establishes that the processing method for the YBCO does not affect the contact resistance and mirrors the dependence of contact resistance on oxygen annealing temperature observed in earlier work. It has also been experimentally confirmed that the current transfer length provides an effective representation of the contact resistance when compared to more direct measurements using the traditional four-wire method. 
Finally, for samples with low contact resistance, a combination of experiments and modeling demonstrates an accurate understanding of the key role of silver thickness and substrate thickness in the stability of silver-coated YBCO Rolling Assisted Bi-Axially Textured Substrates conductors. Both the experimental measurements and the one-dimensional model show that increasing the silver thickness results in an increased thermal runaway current; that is, the current above which normal zones continue to grow due to insufficient local cooling.
a Weighted Closed-Form Solution for Rgb-D Data Registration
NASA Astrophysics Data System (ADS)
Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.
2016-06-01
Existing 3D indoor mapping methods for RGB-D data are predominantly point-based or feature-based. In most cases, iterative closest point (ICP) and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid body motion. Dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step: it does not need good initial estimates and substantially decreases the demand for computer resources in contrast to iterative methods. First, our method exploits RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm jointly with a statistical dispersion measure called the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. Loop closure consists of recognizing when the sensor revisits a region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map the indoor environment with an absolute accuracy of around 1.5% of the length of the trajectory traveled.
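A weighted closed-form rigid registration can be illustrated as follows. The paper uses dual-number quaternions; the SVD-based (Kabsch) solution below minimizes the same weighted least-squares cost in one step and is shown here as a simpler stand-in, with synthetic point clouds:

```python
import numpy as np

# Weighted closed-form rigid registration via SVD (Kabsch method), a
# stand-in for the dual-quaternion formulation in the paper. Finds R, t
# minimizing sum_i w_i ||R @ P[i] + t - Q[i]||^2 without iteration.

def weighted_rigid_transform(P, Q, w):
    w = w / w.sum()
    cp, cq = w @ P, w @ Q                       # weighted centroids
    H = (P - cp).T @ (np.diag(w) @ (Q - cq))    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic test: a known rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = weighted_rigid_transform(P, Q, np.ones(50))
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

As in the paper's solution, the weights let points with smaller theoretical random errors dominate the estimate, and no initial transformation is required.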
[CONTEMPORARY MOLECULAR-GENETIC METHODS USED FOR ETIOLOGIC DIAGNOSTICS OF SEPSIS].
Gavrilov, S N; Skachkova, T S; Shipulina, O Yu; Savochkina, Yu A; Shipulin, G A; Maleev, V V
2016-01-01
Etiologic diagnostics of sepsis is one of the most difficult problems of contemporary medicine due to the wide variety of sepsis causative agents, many of which are components of normal human microflora. Disadvantages of the contemporary "gold standard" of microbiologic diagnostics of sepsis etiology, seeding of blood for sterility, are the duration of cultivation, limitations in the detection of non-cultivable forms of microorganisms, and the significant effect of preliminary empiric antibiotic therapy on the results of the analysis. Methods of molecular diagnostics, actively developed and integrated during the last decade, are free of these disadvantages. The main contemporary methods of molecular-biological diagnostics are examined in the review, and actual data on their diagnostic characteristics are provided. Special attention is given to methods of PCR diagnostics, including novel Russian developments. Methods of nucleic acid hybridization and proteomic analysis are examined in a comparative aspect. An evaluation of the application and perspectives of development of methods of molecular diagnostics of sepsis is given.
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
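The two normalization strategies differ only in the axis the operation is applied along. A minimal sketch with unit-length (vector) normalization, the variant the abstract reports minimized SVM fitting time when applied to files, using an invented toy matrix:

```python
import numpy as np

# Normalizing samples (rows/files) versus features (columns/genes) with
# unit-length (L2 / vector) normalization. The matrix is invented.

X = np.array([[1.0, 2.0, 2.0],    # rows = patient files
              [3.0, 0.0, 4.0]])   # cols = genes

by_sample = X / np.linalg.norm(X, axis=1, keepdims=True)   # each file -> unit length
by_feature = X / np.linalg.norm(X, axis=0, keepdims=True)  # each gene -> unit length

print(np.linalg.norm(by_sample, axis=1))   # [1. 1.]
print(np.linalg.norm(by_feature, axis=0))  # [1. 1. 1.]
```

The choice of axis changes what is made comparable: per-file normalization removes differences in overall sequencing depth between patients, while per-gene normalization equalizes the scale of each gene across the cohort.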
Schönbichler, S A; Bittner, L K H; Weiss, A K H; Griesser, U J; Pallua, J D; Huck, C W
2013-08-01
The aim of this study was to evaluate the ability of near-infrared chemical imaging (NIR-CI), near-infrared (NIR), Raman and attenuated-total-reflectance infrared (ATR-IR) spectroscopy to quantify three polymorphic forms (I, II, III) of furosemide in ternary powder mixtures. For this purpose, partial least-squares (PLS) regression models were developed, and different data preprocessing algorithms such as normalization, standard normal variate (SNV), multiplicative scatter correction (MSC) and 1st to 3rd derivatives were applied to reduce the influence of systematic disturbances. The performance of the methods was evaluated by comparison of the standard error of cross-validation (SECV), R(2), and the ratio performance deviation (RPD). Limits of detection (LOD) and limits of quantification (LOQ) of all methods were determined. For NIR-CI, a SECVcorr-spec and a SECVsingle-pixel corrected were calculated to assess the loss of accuracy by taking advantage of the spatial information. NIR-CI showed a SECVcorr-spec (SECVsingle-pixel corrected) of 2.82% (3.71%), 3.49% (4.65%), and 4.10% (5.06%) for form I, II, III. NIR had a SECV of 2.98%, 3.62%, and 2.75%, and Raman reached 3.25%, 3.08%, and 3.18%. The SECV of the ATR-IR models were 7.46%, 7.18%, and 12.08%. This study proves that NIR-CI, NIR, and Raman are well suited to quantify forms I-III of furosemide in ternary mixtures. Because of the pressure-dependent conversion of form II to form I, ATR-IR was found to be less appropriate for an accurate quantification of the mixtures. In this study, the capability of NIR-CI for the quantification of polymorphic ternary mixtures was compared with conventional spectroscopic techniques for the first time. For this purpose, a new way of spectra selection was chosen, and two kinds of SECVs were calculated to achieve a better comparability of NIR-CI to NIR, Raman, and ATR-IR. Copyright © 2013 Elsevier B.V. All rights reserved.
MRI and clinical features of maple syrup urine disease: preliminary results in 10 cases
Cheng, Ailan; Han, Lianshu; Feng, Yun; Li, Huimin; Yao, Rong; Wang, Dengbin; Jin, Biao
2017-01-01
PURPOSE We aimed to evaluate the magnetic resonance imaging (MRI) and clinical features of maple syrup urine disease (MSUD). METHODS This retrospective study consisted of 10 MSUD patients confirmed by genetic testing. All patients underwent brain MRI. Phenotype, genotype, and areas of brain injury on MRI were retrospectively reviewed. RESULTS Six patients (60%) had the classic form of MSUD with BCKDHB mutation, three patients (30%) had the intermittent form (two with BCKDHA mutations and one with DBT mutation), and one patient (10%) had the thiamine-responsive form with DBT mutation. On diffusion-weighted imaging, nine cases presented restricted diffusion in myelinated areas, and one intermittent case with DBT mutation was normal. The classic form of MSUD involved the basal ganglia in six cases; the cerebellum, mesencephalon, pons, and supratentorial area in five cases; and the thalamus in four cases, respectively. The intermittent form involved the cerebellum, pons, and supratentorial area in two cases. The thiamine-responsive form involved the basal ganglia and supratentorial area. CONCLUSION Our preliminary results indicate that patients with MSUD presented more commonly in classic form with BCKDHB mutation and displayed extensive brain injury on MRI. PMID:28830848
Najafian, Aida; Fallahi, Soghra; Khorgoei, Tahereh; Ghahiri, Ataollah; Alavi, Azin; Rajaei, Minoo; Eftekhaari, Tasnim Eqbal
2015-01-01
The incidence of cesarean section has increased. About 3-30% of women who undergo cesarean experience surgical site infections (SSIs). Many methods have been used to decrease the incidence of SSIs, but despite much effort, no definite efficacious method has been established. In this parallel, single-blinded, randomized controlled trial, 56 women with post-surgical superficial wound dehiscence were divided into two groups in a 1:1 ratio. One group was irrigated with normal saline for irrigation and Firooz® baby soap, and the other with normal saline for irrigation and povidone-iodine. Formation of granulation tissue was monitored in both groups. Also, the reason for surgery, length of wound dehiscence, duration of hospitalization, and wound union were compared in both groups. The soap-group patients were irrigated for 4.18 ± 1.96 days compared to 5.36 ± 2.83 days for the patients in the povidone-iodine group (P = 0.414). Granulation tissue formed after 3.88 ± 1.94 days in the soap group compared to 4.48 ± 2.92 days in the other group (P = 0.391), and the duration of hospitalization was 5.48 ± 2.04 days in the soap group compared to 6.3 ± 2.95 days in the other group (P = 0.423). So, no differences were observed between the two groups. It can be concluded that, since there is no difference between the results of the two groups, irrigation with normal saline and soap is safe, easy, and causes no harm or allergy compared with povidone-iodine and normal saline.
Eddy-current inversion in the thin-skin limit: Determination of depth and opening for a long crack
NASA Astrophysics Data System (ADS)
Burke, S. K.
1994-09-01
A method for crack size determination using eddy-current nondestructive evaluation is presented for the case of a plate containing an infinitely long crack of uniform depth and uniform crack opening. The approach is based on the approximate solution to Maxwell's equations for nonmagnetic conductors in the limit of small skin depth and relies on least-squares polynomial fits to a normalized coil impedance function as a function of skin depth. The method is straightforward to implement and is relatively insensitive to both systematic and random errors. The procedure requires the computation of two functions: a normalizing function, which depends both on the coil parameters and the skin depth, and a crack-depth function, which depends only on the coil parameters in addition to the crack depth. The practical performance of the method was tested using a set of simulated cracks in the form of electro-discharge machined slots in aluminum alloy plates. The crack depths and crack openings deduced from the eddy-current measurements agree with the actual crack dimensions to within 10% or better. Recommendations concerning the optimum conditions for crack sizing are also made.
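The fitting step at the core of the inversion is an ordinary least-squares polynomial fit of the normalized impedance against skin depth. The actual impedance function is not given in the abstract, so a hypothetical quadratic is used below purely to illustrate the machinery:

```python
import numpy as np

# Least-squares polynomial fit of a normalized impedance function versus
# skin depth. Both the functional form and the coefficients are invented
# for illustration; only the fitting step mirrors the abstract.

delta = np.linspace(0.1, 1.0, 20)                # skin depth (arbitrary units)
z_norm = 0.5 + 1.2 * delta - 0.3 * delta ** 2    # hypothetical normalized impedance

coeffs = np.polyfit(delta, z_norm, deg=2)        # least-squares quadratic fit
print(coeffs)  # approx [-0.3, 1.2, 0.5], highest degree first
```

In the actual procedure, such fitted polynomials stand in for the normalizing and crack-depth functions, so that measured impedance values can be inverted for depth without re-solving Maxwell's equations.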
Sontag, Timothy J.; Chellan, Bijoy; Bhanvadia, Clarissa V.; Getz, Godfrey S.; Reardon, Catherine A.
2015-01-01
Macrophage conversion to atherosclerotic foam cells is partly due to the balance of uptake and efflux of cholesterol. Cholesterol efflux from cells by HDL and its apoproteins for subsequent hepatic elimination is known as reverse cholesterol transport. Numerous methods have been developed to measure in vivo macrophage cholesterol efflux. Most methods do not allow for macrophage recovery for analysis of changes in cellular cholesterol status. We describe a novel method for measuring cellular cholesterol balance using the in vivo entrapment of macrophages in alginate, which retains incorporated cells while being permeable to lipoproteins. Recipient mice were injected subcutaneously with CaCl2, forming a bubble into which a macrophage/alginate suspension was injected, entrapping the macrophages. Cells were recovered after 24 h. Cellular free and esterified cholesterol masses were determined enzymatically and normalized to cellular protein. Both normal and cholesterol-loaded macrophages undergo measurable changes in cell cholesterol when injected into WT and apoA-I-, LDL-receptor-, or apoE-deficient mice. Cellular cholesterol balance is dependent on initial cellular cholesterol status, macrophage cholesterol transporter expression, and apolipoprotein deficiency. Alginate entrapment allows for the in vivo measurement of macrophage cholesterol homeostasis and is a novel platform for investigating the role of genetics and therapeutic interventions in atherogenesis. PMID:25465389
Weinstein, M J; Chute, L E; Schmitt, G W; Hamburger, R H; Bauer, K A; Troll, J H; Janson, P; Deykin, D
1985-01-01
Factor VIII antigen (VIII:CAg) exhibits molecular weight heterogeneity in normal plasma. We have compared the relative quantities of VIII:CAg forms present in normal individuals (n = 22) with VIII:CAg forms in renal dysfunction patients (n = 19) and in patients with disseminated intravascular coagulation (DIC; n = 7). In normal plasma, the predominant VIII:CAg form, detectable by sodium dodecyl sulfate polyacrylamide gel electrophoresis, was of molecular weight 2.4 × 10^5, with minor forms ranging from 8 × 10^4 to 2.6 × 10^5 D. A high proportion of VIII:CAg in renal dysfunction patients, in contrast, was of 1 × 10^5 mol wt. The patients' high 1 × 10^5 mol wt VIII:CAg level correlated with increased concentrations of serum creatinine, F1+2 (a polypeptide released upon prothrombin activation), and with von Willebrand factor. Despite the high proportion of the 1 × 10^5 mol wt VIII:CAg form, which suggests VIII:CAg proteolysis, the ratio of Factor VIII coagulant activity to total VIII:CAg concentration was normal in renal dysfunction patients. These results could be simulated in vitro by thrombin treatment of normal plasma, which yielded similar VIII:CAg gel patterns and Factor VIII coagulant activity to antigen ratios. DIC patients with high F1+2 levels but no evidence of renal dysfunction had a VIII:CAg gel pattern distinct from renal dysfunction patients. DIC patients had elevated concentrations of both the 1 × 10^5 and 8 × 10^4 mol wt VIII:CAg forms. We conclude that an increase in a particular VIII:CAg form correlates with the severity of renal dysfunction. The antigen abnormality may be the result of VIII:CAg proteolysis by a thrombin-like enzyme and/or prolonged retention of proteolyzed VIII:CAg fragments. PMID:3932466
A bulk viscosity approach for shock capturing on unstructured grids
NASA Astrophysics Data System (ADS)
Shoeybi, Mohammad; Larsson, Nils Johan; Ham, Frank; Moin, Parviz
2008-11-01
The bulk viscosity approach for shock capturing (Cook and Cabot, JCP, 2005) augments the bulk part of the viscous stress tensor. The intention is to capture shock waves without dissipating turbulent structures. The present work extends and modifies this method for unstructured grids. We propose a method that properly scales the bulk viscosity with the grid spacing normal to the shock for unstructured grids, on which the shock is not necessarily aligned with the grid. The magnitude of the strain rate tensor used in the original formulation is replaced with the dilatation, which appears to be more appropriate in the vortical turbulent flow regions (Mani et al., 2008). The original form of the model is found to have an impact on dilatational motions away from the shock wave, which is eliminated by a proposed localization of the bulk viscosity. Finally, to allow for grid adaptation around shock waves, an explicit/implicit time advancement scheme has been developed that adaptively identifies the stiff regions. The full method has been verified with several test cases, including 2D shock-vorticity/entropy interaction, homogeneous isotropic turbulence, and turbulent flow over a cylinder.
Extraction of skin-friction fields from surface flow visualizations as an inverse problem
NASA Astrophysics Data System (ADS)
Liu, Tianshu
2013-12-01
Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals, and it clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
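As a hedged illustration of the contrast drawn above, the sketch below compares a normal-theory (Sobel-type) interval for the product of two regression coefficients with a Monte Carlo approximation of the generally asymmetric distribution of the product; the coefficient values, standard errors, and draw count are made up for illustration.

```python
import numpy as np

def product_ci(a, se_a, b, se_b, alpha=0.05, n_draws=200_000, seed=0):
    """CI for the indirect effect a*b: normal-theory (Sobel) interval vs. a
    Monte Carlo approximation of the distribution of the product."""
    rng = np.random.default_rng(seed)
    # Sobel first-order standard error of the product a*b
    se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = 1.959963984540054  # ~ Phi^{-1}(0.975)
    sobel = (a * b - z * se_ab, a * b + z * se_ab)
    # Distribution-of-the-product interval via simulation
    draws = rng.normal(a, se_a, n_draws) * rng.normal(b, se_b, n_draws)
    mc = tuple(np.quantile(draws, [alpha / 2, 1 - alpha / 2]))
    return sobel, mc
```

The Sobel interval is symmetric about `a*b` by construction, while the simulated interval reflects the skewness and kurtosis of the product distribution, which is what drives the coverage differences reported above.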
Algebraic grid generation with corner singularities
NASA Technical Reports Server (NTRS)
Vinokur, M.; Lombard, C. K.
1983-01-01
A simple noniterative algebraic procedure is presented for generating smooth computational meshes on a quadrilateral topology. Coordinate distribution and normal derivative are provided on all boundaries, one of which may include a slope discontinuity. The boundary conditions are sufficient to guarantee continuity of global meshes formed of joined patches generated by the procedure. The method extends to 3-D. The procedure involves a synthesis of prior techniques - stretching functions, cubic blending functions, and transfinite interpolation - to which is added the functional form of the corner solution. The procedure introduces the concept of generalized blending, which is implemented as an automatic scaling of the boundary derivatives for effective interpolation. Some implications of the treatment at boundaries for techniques solving elliptic PDEs are discussed in an Appendix.
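The transfinite-interpolation ingredient mentioned above can be sketched with plain linear blending; the paper's generalized cubic blending with boundary normal derivatives and a corner solution is more elaborate, so treat this as the textbook base case only.

```python
import numpy as np

def tfi(bottom, top, left, right):
    """Linear transfinite interpolation on a logically quadrilateral patch.
    bottom/top: (ni, 2) arrays of boundary points; left/right: (nj, 2).
    The four boundary curves must share their corner points."""
    bottom, top = np.asarray(bottom, float), np.asarray(top, float)
    left, right = np.asarray(left, float), np.asarray(right, float)
    ni, nj = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]    # (ni,1,1)
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]   # (1,nj,1)
    # Sum of the two 1-D interpolants minus the bilinear corner correction
    grid = ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])
    return grid  # (ni, nj, 2) mesh of physical coordinates
```

On a unit square with uniformly distributed boundary points, this reproduces the uniform Cartesian mesh exactly, which is a convenient sanity check for any TFI implementation.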
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent workflow of data analysis more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiencies of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the different normalized data sets were compared. The results indicated that choosing the normalization method was difficult, because the commonly accepted rules were easy to fulfill but different normalization methods had unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection was recommended.
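Of the five methods compared, probabilistic quotient normalization is easy to sketch; the default reference spectrum below (the median across samples) is a common convention and an assumption here, not necessarily the one used in the study.

```python
import numpy as np

def pqn(X, reference=None):
    """Probabilistic quotient normalization.
    X: samples x features intensity matrix (positive values).
    Each sample is divided by the median of its feature-wise quotients
    against a reference spectrum (default: median spectrum across samples)."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference                # feature-wise ratios per sample
    factors = np.median(quotients, axis=1)   # one dilution factor per sample
    return X / factors[:, None]
```

A sample that is a uniformly diluted copy of another is mapped onto the same normalized spectrum, which is exactly the dilution effect PQN is designed to remove.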
NASA Astrophysics Data System (ADS)
Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos
2017-06-01
Rough machining is aimed at shaping a workpiece toward its final form. This process takes up a large proportion of the machining time because it removes the bulk of the material, which affects the total machining time. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect areas that are admissible for machining. While previous research detected the CBV area using a pair of normal vectors, in this research we simplify the process by detecting the CBV area with a slicing line for each point cloud formed. The method consists of three steps: 1. triangulation from the CAD design model, 2. development of CC points from the point cloud, 3. the slicing-line evaluation of each point-cloud position (under the CBV or outside it). The result of this evaluation method can be used as a tool for orientation set-up at each CC point position of feasible areas in rough machining.
NASA Astrophysics Data System (ADS)
Widodo, S. B.; Hamdani; Rizal, T. A.; Pambudi, N. A.
2018-02-01
In Langsa, fisheries are the leading sector, with capture fisheries supplying about 6,050 tons per year and fish aquaculture averaging 1,200 tons per year. Fish processing draws on both catches and aquaculture. The facilities in which this processing takes place are divided into an ice factory unit, a gutting and cutting unit, a drying unit and a curing unit. However, energy and electricity costs during the production process have become a major constraint on increasing the fishermen's production and income. In this study, the potential and cost-effectiveness of a photovoltaic solar power plant to meet the energy demands of the fish processing units have been analysed. The energy requirement of the fish processing units is estimated at 130 kW, while the proposed solar photovoltaic generation design is 200 kW over an area of 0.75 hectares. In this analysis, given the closeness between the location of the processing units and the fish supply auctions, photovoltaic plants installed on the roof of the building (OTR) are compared with solar power plants installed outside the location (OTL). The results show that the levelized cost of the OTR installation is IDR 1.115 per kWh, considering a 25-year plant life-span at a 10% discount rate, with a simple payback period of 13.2 years. The OTL levelized cost, on the other hand, is IDR 997.5 per kWh with a simple payback period of 9.6 years.

Blood is an essential component of living creatures, contained in the vascular space. Possible diseases can be identified through a blood test, in part from the form of the red blood cells. The normal and abnormal morphology of a patient's red blood cells is very helpful to doctors in detecting disease. Advances in digital image processing technology can be used to identify normal and abnormal blood cells of a patient.
This research used the self-organizing map method to classify normal and abnormal forms of red blood cells in digital images. The self-organizing map neural network classified the normal and abnormal forms of red blood cells in the input images with a testing accuracy of 93.78%.
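A minimal sketch of a self-organizing map, assuming simple numeric feature vectors rather than the paper's actual image-derived red-blood-cell features; the unit count, decay schedules, and 1-D map topology are illustrative choices.

```python
import numpy as np

def train_som(data, n_units=4, epochs=100, lr0=0.5, sigma0=1.0, seed=0):
    """Train a minimal 1-D self-organizing map on feature vectors."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize units at randomly chosen data points (plus tiny jitter).
    w = data[rng.choice(len(data), n_units)] \
        + 0.01 * rng.standard_normal((n_units, data.shape[1]))
    grid = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # learning rate decays to ~0
        sigma = sigma0 * (1.0 - t / epochs) + 1e-3  # neighbourhood radius shrinks
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            w += lr * h[:, None] * (x - w)          # pull units toward the sample
    return w
```

After training, classification proceeds by labelling each unit (e.g. "normal" or "abnormal" by majority vote of the training samples it wins) and assigning a new sample the label of its best-matching unit.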
Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan
2017-06-01
Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
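The per-subset orientation step described above (best-fit plane, then normal, dip and dip direction) can be sketched as follows; the geographic convention (z up, y north, azimuth measured clockwise from north) is an assumption, not something the abstract specifies.

```python
import numpy as np

def plane_orientation(points):
    """Best-fit plane of a point subset: returns (unit normal, dip, dip direction)
    in degrees. The normal is the direction of least variance of the centered
    coordinates, obtained from the SVD."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Last right-singular vector = normal of the least-squares plane
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:  # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    # Dip direction = azimuth of the horizontal projection of the upward normal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return n, dip, dip_dir
```

Clustering these per-subset normals (here with FA-initialized FCM in the paper) then groups planes into discontinuity sets.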
Method for rapid isolation of sensitive mutants
Freyer, James P.
1997-01-01
Sensitive mammalian cell mutants are rapidly isolated using flow cytometry. A first population of clonal spheroids is established to contain both normal and mutant cells. The population may be naturally occurring or may arise from mutagenized cells. The first population is then flow sorted by size to obtain a second population of clonal spheroids of a first uniform size. The second population is then exposed to a DNA-damaging agent that is being investigated. The exposed second population is placed in a growth medium to form a third population of clonal spheroids comprising spheroids of increased size from the mammalian cells that are resistant to the DNA-damaging agent and spheroids of substantially the first uniform size formed from the mammalian cells that are sensitive to the DNA-damaging agent. The third population is now flow sorted to differentiate the spheroids formed from resistant mammalian cells from spheroids formed from sensitive mammalian cells. The spheroids formed from sensitive mammalian cells are now treated to recover viable sensitive cells from which a sensitive cell line can be cloned.
Method for rapid isolation of sensitive mutants
Freyer, J.P.
1997-07-29
Sensitive mammalian cell mutants are rapidly isolated using flow cytometry. A first population of clonal spheroids is established to contain both normal and mutant cells. The population may be naturally occurring or may arise from mutagenized cells. The first population is then flow sorted by size to obtain a second population of clonal spheroids of a first uniform size. The second population is then exposed to a DNA-damaging agent that is being investigated. The exposed second population is placed in a growth medium to form a third population of clonal spheroids comprising spheroids of increased size from the mammalian cells that are resistant to the DNA-damaging agent and spheroids of substantially the first uniform size formed from the mammalian cells that are sensitive to the DNA-damaging agent. The third population is now flow sorted to differentiate the spheroids formed from resistant mammalian cells from spheroids formed from sensitive mammalian cells. The spheroids formed from sensitive mammalian cells are now treated to recover viable sensitive cells from which a sensitive cell line can be cloned. 15 figs.
The dynamics of the de Sitter resonance
NASA Astrophysics Data System (ADS)
Celletti, Alessandra; Paita, Fabrizio; Pucacco, Giuseppe
2018-02-01
We study the dynamics of the de Sitter resonance, namely the stable equilibrium configuration of the first three Galilean satellites. We clarify the relation between this family of configurations and the more general Laplace resonant states. In order to describe the dynamics around the de Sitter stable equilibrium, a one-degree-of-freedom Hamiltonian normal form is constructed and exploited to identify initial conditions leading to the two families. The normal form Hamiltonian is used to check the accuracy in the location of the equilibrium positions. It also gives a measure of how sensitive the equilibrium is to the different perturbations acting on the system. By looking at the phase plane of the normal form, we can identify a Laplace-like configuration, which highlights many substantial aspects of the observed one.
Wang, Wei Bu; Liang, Yu; Zhang, Jing; Wu, Yi Dong; Du, Jian Jun; Li, Qi Ming; Zhu, Jian Zhuo; Su, Ji Guo
2018-06-22
Intra-molecular energy transport between distant functional sites plays important roles in allosterically regulating the biochemical activity of proteins. How to identify the specific intra-molecular signaling pathway from protein tertiary structure remains a challenging problem. In the present work, a non-equilibrium dynamics method based on the elastic network model (ENM) was proposed to simulate the energy propagation process and identify the specific signaling pathways within proteins. In this method, a given residue was perturbed and the propagation of energy was simulated by non-equilibrium dynamics in the normal modes space of the ENM. After that, the simulation results were transformed from the normal modes space to the Cartesian coordinate space to identify the intra-protein energy transduction pathways. The proposed method was applied to myosin and the third PDZ domain (PDZ3) of PSD-95 as case studies. For myosin, two signaling pathways were identified, which mediate the energy transduction from the nucleotide binding site to the 50 kDa cleft and the converter subdomain, respectively. For PDZ3, one specific signaling pathway was identified, through which the intra-protein energy was transduced from the ligand binding site to the distant opposite side of the protein. It is also found that, compared with the commonly used cross-correlation analysis method, the proposed method can identify the anisotropic energy transduction pathways more effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigase, Yves
2007-07-01
Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
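A minimal sketch of the kind of log-normal arithmetic the abstract alludes to: independent multiplicative uncertainty factors combine by adding their log-variances. The exact estimator used in the paper is not given, so treat the interface and the 2-sigma interval below as assumptions.

```python
import math

def combine_lognormal(factors):
    """Combine independent multiplicative uncertainties, each given as
    (best_estimate, geometric_sd), under a log-normal model.
    Returns the combined estimate and a ~95% multiplicative interval."""
    log_mu = sum(math.log(m) for m, _ in factors)
    # Log-variances of independent factors add
    log_var = sum(math.log(g) ** 2 for _, g in factors)
    gsd = math.exp(math.sqrt(log_var))   # combined geometric SD
    est = math.exp(log_mu)
    return est, (est / gsd ** 2, est * gsd ** 2)  # exp(mu +/- 2 sigma)
```

The interval is asymmetric about the estimate (a factor of `gsd**2` each way), which is exactly the 'intervals that make sense' property for strictly positive quantities such as activities.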
Strain tolerant microfilamentary superconducting wire
Finnemore, D.K.; Miller, T.A.; Ostenson, J.E.; Schwartzkopf, L.A.; Sanders, S.C.
1993-02-23
A strain tolerant microfilamentary wire capable of carrying superconducting currents is provided comprising a plurality of discontinuous filaments formed from a high temperature superconducting material. The discontinuous filaments have a length at least several orders of magnitude greater than the filament diameter and are sufficiently strong while in an amorphous state to withstand compaction. A normal metal is interposed between and binds the discontinuous filaments to form a normal metal matrix capable of withstanding heat treatment for converting the filaments to a superconducting state. The geometry of the filaments within the normal metal matrix provides substantial filament-to-filament overlap, and the normal metal is sufficiently thin to allow supercurrent transfer between the overlapped discontinuous filaments but is also sufficiently thick to provide strain relief to the filaments.
Vibro-acoustography and multifrequency image compounding.
Urban, Matthew W; Alizad, Azra; Fatemi, Mostafa
2011-08-01
Vibro-acoustography is an ultrasound based imaging modality that can visualize normal and abnormal soft tissue through mapping the acoustic response of the object to a harmonic radiation force at frequency Δf induced by focused ultrasound. In this method, the ultrasound energy is converted from high ultrasound frequencies to a low acoustic frequency (acoustic emission) that is often two orders of magnitude smaller than the ultrasound frequency. The acoustic emission is normally detected by a hydrophone. Depending on the setup, this low frequency sound may reverberate by object boundaries or other structures present in the acoustic paths before it reaches the hydrophone. This effect produces an artifact in the image in the form of gradual variations in image intensity that may compromise image quality. The use of tonebursts with finite length yields acoustic emission at Δf and at sidebands centered about Δf. Multiple images are formed by selectively applying bandpass filters on the acoustic emission at Δf and the associated sidebands. The data at these multiple frequencies are compounded through both coherent and incoherent processes to reduce the acoustic emission reverberation artifacts. Experimental results from a urethane breast phantom are described. The coherent and incoherent compounding of multifrequency data show, both qualitatively and quantitatively, the efficacy of this reverberation reduction method. This paper presents theory describing the physical origin of this artifact and use of image data created using multifrequency vibro-acoustography for reducing reverberation artifacts. Copyright © 2011 Elsevier B.V. All rights reserved.
Vibro-acoustography and Multifrequency Image Compounding
Urban, Matthew W.; Alizad, Azra; Fatemi, Mostafa
2011-01-01
Vibro-acoustography is an ultrasound based imaging modality that can visualize normal and abnormal soft tissue through mapping the acoustic response of the object to a harmonic radiation force at frequency Δf induced by focused ultrasound. In this method, the ultrasound energy is converted from high ultrasound frequencies to a low acoustic frequency (acoustic emission) that is often two orders of magnitude smaller than the ultrasound frequency. The acoustic emission is normally detected by a hydrophone. Depending on the setup, this low frequency sound may reverberate by object boundaries or other structures present in the acoustic paths before it reaches the hydrophone. This effect produces an artifact in the image in the form of gradual variations in image intensity that may compromise image quality. The use of tonebursts with finite length yields acoustic emission at Δf and at sidebands centered about Δf. Multiple images are formed by selectively applying bandpass filters on the acoustic emission at Δf and the associated sidebands. The data at these multiple frequencies are compounded through both coherent and incoherent processes to reduce the acoustic emission reverberation artifacts. Experimental results from a urethane breast phantom are described. The coherent and incoherent compounding of multifrequency data show, both qualitatively and quantitatively, the efficacy of this reverberation reduction method. This paper presents theory describing the physical origin of this artifact and use of image data created using multifrequency vibro-acoustography for reducing reverberation artifacts. PMID:21377181
Di Donato, Guido; Laufer-Amorim, Renée; Palmieri, Chiara
2017-10-01
Ten normal prostates, 22 benign prostatic hyperplasias (BPH) and 29 prostate cancers (PC) were morphometrically analyzed with regard to mean nuclear area (MNA), mean nuclear perimeter (MNP), mean nuclear diameter (MND), coefficient of variation of the nuclear area (NACV), mean maximum nuclear diameter (MDx), mean minimum nuclear diameter (MDm), mean nuclear form ellipse (MNFe) and form factor (FF). The relationships between nuclear morphometric parameters and histological type, Gleason score, method of sample collection, presence of metastases and survival time of canine PC were also investigated. Overall, nuclei from neoplastic cells were larger, with greater variation in nuclear size and shape, compared to normal and hyperplastic cells. Significant differences were found between more (small acinar/ductal) and less (cribriform, solid) differentiated PCs with regard to FF (p<0.05). MNA, MNP, MND, MDx, and MDm were significantly correlated with the Gleason score of PC (p<0.05). MNA, MNP, MDx and MNFe may also have important prognostic implications in canine prostatic cancer, since they are negatively correlated with survival time. Biopsy specimens contained nuclei that were smaller and more irregular than those in prostatectomy and necropsy specimens, and therefore factors associated with tissue sampling and processing may influence the overall morphometric evaluation. The results indicate that nuclear morphometric analysis in combination with the Gleason score can help in canine prostate cancer grading, thus contributing to a more precise prognosis and patient management. Copyright © 2017 Elsevier Ltd. All rights reserved.
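The form factor (FF) used above is conventionally defined as 4πA/P²; the paper does not spell out its formula, so the definition below is the standard morphometric one rather than a confirmed quote.

```python
import math

def form_factor(area, perimeter):
    """Nuclear form factor FF = 4*pi*A / P**2: 1.0 for a perfect circle,
    smaller for elongated or irregular nuclear outlines."""
    return 4.0 * math.pi * area / perimeter ** 2
```

A circle of radius r (A = πr², P = 2πr) gives FF = 1, and a square gives FF = π/4 ≈ 0.785, so decreasing FF tracks increasing nuclear irregularity, consistent with its use above to separate differentiation grades.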
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated by dementia and cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was small compared with the other templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
A simple method for quantitating the propensity for calcium oxalate crystallization in urine
NASA Technical Reports Server (NTRS)
Wabner, C. L.; Pak, C. Y.
1991-01-01
To assess the propensity for spontaneous crystallization of calcium oxalate in urine, the permissible increment in oxalate is calculated. The previous method required visual observation of crystallization upon the addition of oxalate; this demanded a large volume of urine and sacrificed accuracy in defining differences between small incremental changes of added oxalate. Therefore, this method has been miniaturized, and spontaneous crystallization is detected from the depletion of radioactive oxalate. The new "micro" method demonstrated a marked decrease (p < 0.001) in the permissible increment in oxalate in the urine of stone formers versus normal subjects. Moreover, crystallization inhibitors added to urine, in vitro (heparin or diphosphonate) or in vivo (potassium citrate administration), substantially increased the permissible increment in oxalate. Thus, the "micro" method has proven reliable and accurate in discriminating stone-forming from control urine and in distinguishing changes of inhibitory activity.
Asad, A H; Chan, S; Cryer, D; Burrage, J W; Siddiqui, S A; Price, R I
2015-11-01
The proton beam energy of an isochronous 18 MeV cyclotron was determined using a novel version of the stacked copper-foils technique. This simple method used stacked foils of natural copper forming 'thick' targets to produce Zn radioisotopes by the well-documented (p,x) monitor-reactions. Primary beam energy was calculated using the (65)Zn activity vs. depth profile in the target, with the results obtained using (62)Zn and (63)Zn (as comparators) in close agreement. Results from separate measurements using foil thicknesses of 100, 75, 50 or 25 µm to form the stacks also concurred closely. Energy was determined by iterative least-squares comparison of the normalized measured activity profile in a target-stack with the equivalent calculated normalized profile, using 'energy' as the regression variable. The technique exploits the uniqueness of the shape of the activity vs. depth profile of the monitor isotope in the target stack for a specified incident energy. The energy using (65)Zn activity profiles and 50-µm foils alone was 18.03±0.02 [SD] MeV (95%CI=17.98-18.08), and 18.06±0.12 MeV (95%CI=18.02-18.10; NS) when combining results from all isotopes and foil thicknesses. When the beam energy was re-measured using (65)Zn and 50-µm foils only, following a major upgrade of the ion sources and nonmagnetic beam controls, the results were 18.11±0.05 MeV (95%CI=18.00-18.23; NS compared with 'before'). Since measurement of only one Zn monitor isotope is required to determine the normalized activity profile, this indirect yet precise technique does not require a direct beam-current measurement or a gamma-spectroscopy efficiency calibrated with standard sources, though a characteristic photopeak must be identified. It has some advantages over published methods using the ratio of cross sections of monitor reactions, including the ability to determine energies across a broader range and without the need for customized beam degraders. Copyright © 2015 Elsevier Ltd. All rights reserved.
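The least-squares idea above can be sketched as a simple scan over candidate energies that minimizes the misfit between normalized measured and modeled activity-depth profiles; the toy excitation function and linear energy loss below are purely illustrative, not the monitor-reaction data and stopping powers the paper uses, and the paper iterates rather than scanning.

```python
import numpy as np

def estimate_energy(depths, measured, profile_model, candidates):
    """Least-squares incident-energy estimate: compare the normalized measured
    activity-vs-depth profile with normalized model profiles over a grid of
    candidate energies and return the best fit."""
    best_e, best_sse = None, np.inf
    m = measured / measured.sum()
    for e in candidates:
        p = profile_model(e, depths)
        p = p / p.sum()
        sse = np.sum((m - p) ** 2)
        if sse < best_sse:
            best_e, best_sse = e, sse
    return best_e

# Toy model (purely illustrative): beam energy falls linearly with depth and
# the monitor reaction has a Gaussian excitation function peaked at 12 MeV.
def toy_profile(e0, depths, loss_per_um=0.05, peak=12.0, width=2.0):
    energy = e0 - loss_per_um * depths
    return np.exp(-((energy - peak) ** 2) / (2.0 * width ** 2))
```

The shape of the normalized profile shifts with incident energy, which is the uniqueness property the technique exploits; only relative activities enter, so no absolute efficiency calibration is needed.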
Three-dimensional wideband electromagnetic modeling on massively parallel computers
NASA Astrophysics Data System (ADS)
Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.
1996-01-01
A method is presented for modeling the wideband, frequency domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permittivity and magnetic permeability. The use of the modified form of the Helmholtz equation allows perfectly matched layer (PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations for which the matrix is sparse and complex symmetric. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) method with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning for its stability. In order to simulate larger, more realistic models than was previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme is correctly able to model the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries, while below this, normal (real) grid stretching can be employed.
Mosaic structure in epitaxial thin films having large lattice mismatch
NASA Astrophysics Data System (ADS)
Srikant, V.; Speck, J. S.; Clarke, D. R.
1997-11-01
Epitaxial films having a large lattice mismatch with their substrate invariably form a mosaic structure of slightly misoriented sub-grains. The mosaic structure is usually characterized by its x-ray rocking curve on a surface normal reflection but this is limited to the out-of-plane component unless off-axis or transmission experiments are performed. A method is presented by which the in-plane component of the mosaic misorientation can be determined from the rocking curves of substrate normal and off-axis reflections. Results are presented for two crystallographically distinct heteroepitaxial systems, ZnO, AlN, and GaN (wurtzite crystal structure) on c-plane sapphire and MgO (rock salt crystal structure) on (001) GaAs. The differences in the mosaic structure of these films are attributed to the crystallographic nature of their lattice dislocations.
Sensor placement for diagnosability in space-borne systems - A model-based reasoning approach
NASA Technical Reports Server (NTRS)
Chien, Steve; Doyle, Richard; Rouquette, Nicolas
1992-01-01
This paper presents an approach to evaluating sensor placements on the basis of how well they are able to discriminate between a given fault and normal operating modes and/or other fault modes. In this approach, a model of the system in both normal operations and fault modes is used to evaluate possible sensor placements on the basis of three criteria. Discriminability measures how much of a divergence in expected sensor readings the two system modes can be expected to produce. Accuracy measures confidence in the particular model predictions. Timeliness measures how long after the fault occurrence the expected divergence will take place. These three metrics can then be used to form a recommendation for a sensor placement. This paper describes how these measures can be computed and illustrates these methods with a brief example.
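Two of the three criteria can be illustrated with a small sketch; the definitions below (divergence measured in noise units, timeliness as the first threshold crossing) are plausible stand-ins, since the abstract does not give explicit formulas.

```python
import numpy as np

def evaluate_placement(normal_trace, fault_trace, noise_sd, threshold=2.0):
    """Score one candidate sensor location from model-predicted time traces of
    the normal mode and a fault mode.
    Discriminability: peak divergence between the traces, in noise units.
    Timeliness: first time step at which divergence exceeds `threshold` noise
    units (None if it never does)."""
    div = np.abs(np.asarray(fault_trace) - np.asarray(normal_trace)) / noise_sd
    discriminability = float(div.max())
    exceed = np.nonzero(div > threshold)[0]
    timeliness = int(exceed[0]) if exceed.size else None
    return discriminability, timeliness
```

Accuracy, the third criterion, would weight these scores by confidence in the model predictions themselves; combining all three per candidate location yields the placement recommendation.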
Fuchs, F; Guillot, E; Cordier, A-G; Chis, C; Raynal, P; Panel, P
2008-04-01
Pregnancy in the rudimentary horn of a unicornuate uterus is an extremely rare form of ectopic gestation associated with a high risk of uterine rupture. We report the case of a pregnancy that developed in a non-communicating rudimentary horn of a unicornuate uterus, complicated by horn rupture at 23 weeks of amenorrhea presenting as acute abdominal pain and massive hemoperitoneum. The patient's uterine abnormality was already known, as she had delivered a healthy boy at term by cesarean section two years earlier. That pregnancy was located in the normal horn, and the non-communicating rudimentary horn appeared normal at the time. This uterine malformation is presented together with its gynecological and obstetrical implications, as well as methods that could prevent such an outcome.
Kitanov, Petko M.; Langford, William F.
2017-01-01
In 1665, Huygens observed that two identical pendulum clocks, weakly coupled through a heavy beam, soon synchronized with the same period and amplitude but with the two pendula swinging in opposite directions. This behaviour is now called anti-phase synchronization. This paper presents an analysis of the behaviour of a large class of coupled identical oscillators, including Huygens' clocks, using methods of equivariant bifurcation theory. The equivariant normal form for such systems is developed and the possible solutions are characterized. The transformation of the physical system parameters to the normal form parameters is given explicitly and applied to the physical values appropriate for Huygens' clocks, and to those of more recent studies. It is shown that Huygens' physical system could only exhibit anti-phase motion, explaining why Huygens observed exclusively this. By contrast, some more recent researchers have observed in-phase or other more complicated motion in their own experimental systems. Here, it is explained which physical characteristics of these systems allow for the existence of these other types of stable solutions. The present analysis not only accounts for these previously observed solutions in a unified framework, but also introduces behaviour not classified by other authors, such as a synchronized toroidal breather and a chaotic toroidal breather. PMID:28989780
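For two identical weakly coupled oscillators with exchange symmetry, the truncated equivariant Hopf normal form has a familiar textbook shape, sketched below. This is a generic illustration of the kind of normal form meant, not necessarily the exact one derived in the paper:

```latex
\dot{z}_1 = (\lambda + i\omega)\, z_1 + \bigl(a\,|z_1|^2 + b\,|z_2|^2\bigr) z_1 + c\,\bar{z}_1 z_2^2,
\qquad
\dot{z}_2 = (\lambda + i\omega)\, z_2 + \bigl(a\,|z_2|^2 + b\,|z_1|^2\bigr) z_2 + c\,\bar{z}_2 z_1^2.
```

The in-phase branch lives on the invariant subspace $z_1 = z_2$ and the anti-phase branch on $z_1 = -z_2$; the signs and magnitudes of the cubic coefficients, obtained from the physical parameters, determine which branch is stable.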
Targeted transplantation of mitochondria to hepatocytes
Gupta, Nidhi; Wu, Catherine H; Wu, George Y
2016-01-01
Background Mitochondrial defects in hepatocytes can result in liver dysfunction and death. Hepatocytes have cell-surface asialoglycoprotein receptors (AsGRs) which internalize AsGs within endosomes. The aim of this study was to determine whether mitochondria could be targeted to hepatocytes by AsGR-mediated endocytosis. Materials and methods An AsG, AsOR, was linked to polylysine to create a conjugate, AsOR-PL, and complexed with healthy and functional mitochondria (defined by normal morphology, cytochrome c assays, and oxygen-consumption rates). Huh7 (AsGR+) and SK Hep1 (AsGR−) cells were treated with a mitochondrial toxin to form Huh7-Mito− and SK Hep1-Mito− cells, lacking detectable mitochondrial DNA. An endosomolytic peptide, LLO, was coupled to AsOR to form AsOR-LLO. A lysosomal inhibitor, amantadine, was used in mitochondria-uptake studies as a control for nonspecific endosomal release. Results Coincubation of complexed mitochondria and AsOR-LLO with Huh7-Mito− cells increased mitochondrial DNA to >9,700-fold over control at 7 days (P<0.001), and increased mitochondrial oxygen-consumption rates to >90% of control by 10 days. Conclusion Rescue of mitochondria-damaged hepatocytes can be achieved by targeted uptake of normal mitochondria through receptor-mediated endocytosis. PMID:27942238
Antioxidant and hypolipidemic activity of Kumbhajatu in hypercholesterolemic rats.
Ghosh, Rumi; Kadam, Parag P; Kadam, Vilasrao J
2010-07-01
To study the efficacy of Kumbhajatu in reducing the cholesterol levels and as an antioxidant in hypercholesterolemic rats. Hypercholesterolemia was induced in normal rats by including 2% w/w cholesterol, 1% w/w sodium cholate and 2.5% w/w coconut oil in the normal diet. Powdered form of Kumbhajatu was administered as feed supplement at 250 and 500 mg/kg dose levels to the hypercholesterolemic rats. Plasma lipid profile, hepatic superoxide dismutase (SOD) activity, catalase activity, reduced glutathione and extent of lipid peroxidation in the form of malondialdehyde were estimated using standard methods. Feed supplementation with 250 and 500 mg/kg of Kumbhajatu resulted in a significant decline in plasma lipid profiles. The feed supplementation increased the concentration of catalase, SOD, glutathione and HDL-c significantly in both the experimental groups (250 and 500 mg/kg). On the other hand, the concentration of malondialdehyde, cholesterol, triglycerides, LDL-c and VLDL in these groups (250 and 500 mg/kg) were decreased significantly. The present study demonstrates that addition of Kumbhajatu powder at 250 and 500 mg/kg level as a feed supplement reduces the plasma lipid levels and also decreases lipid peroxidation.
Identification of nonlinear modes using phase-locked-loop experimental continuation and normal form
NASA Astrophysics Data System (ADS)
Denis, V.; Jossic, M.; Giraud-Audine, C.; Chomette, B.; Renault, A.; Thomas, O.
2018-06-01
In this article, we address the model identification of nonlinear vibratory systems, with a specific focus on systems modeled with distributed nonlinearities, such as geometrically nonlinear mechanical structures. The proposed strategy theoretically relies on the concept of nonlinear modes of the underlying conservative unforced system and the use of normal forms. Within this framework, it is shown that in the absence of internal resonance, a valid reduced-order model for a nonlinear mode is a single Duffing oscillator. We then propose an efficient experimental strategy to measure the backbone curve of a particular nonlinear mode and use it to identify the free parameters of the reduced-order model. The experimental part relies on a phase-locked loop (PLL) and enables a robust and automatic measurement of backbone curves as well as forced responses. It is theoretically and experimentally shown that the PLL is able to stabilize the unstable part of Duffing-like frequency responses, thus enabling their robust experimental measurement. Finally, the whole procedure is tested on three experimental systems: a circular plate, a Chinese gong, and a piezoelectric cantilever beam. This enables the procedure to be validated by comparison with available theoretical models as well as with other experimental identification methods.
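For reference, the backbone curve of the single Duffing oscillator that serves as the reduced-order model can be sketched with the standard first-order perturbation result. The parameter values below are illustrative assumptions, not quantities identified from the experiments in the paper.

```python
import math

def backbone_frequency(a, w0, gamma):
    """First-order backbone curve of the undamped Duffing oscillator
    x'' + w0**2 * x + gamma * x**3 = 0: nonlinear-mode frequency
    (rad/s) as a function of vibration amplitude a."""
    return w0 + 3.0 * gamma * a**2 / (8.0 * w0)

w0 = 2 * math.pi * 100.0   # assumed linear natural frequency (rad/s)
gamma = 1e9                # assumed hardening cubic stiffness coefficient

amps = [0.0, 1e-3, 2e-3]
freqs = [backbone_frequency(a, w0, gamma) for a in amps]
```

For a hardening nonlinearity (gamma > 0) the backbone bends toward higher frequencies with increasing amplitude; this is the curve the PLL-based procedure tracks experimentally.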
NASA Astrophysics Data System (ADS)
Xu, Rong; Ayers, Brenda; Cowburn, David; Muir, Tom W.
1999-01-01
A convenient in vitro chemical ligation strategy has been developed that allows folded recombinant proteins to be joined together. This strategy permits segmental, selective isotopic labeling of the product. The src homology type 3 and 2 domains (SH3 and SH2) of Abelson protein tyrosine kinase, which constitute the regulatory apparatus of the protein, were individually prepared in reactive forms that can be ligated together under normal protein-folding conditions to form a normal peptide bond at the ligation junction. This strategy was used to prepare NMR sample quantities of the Abelson protein tyrosine kinase SH(32) domain pair, in which only one of the domains was labeled with 15N. Mass spectrometry and NMR analyses were used to confirm the structure of the ligated protein, which was also shown to have appropriate ligand-binding properties. The ability to prepare recombinant proteins with selectively labeled segments having a single-site mutation, by using a combination of expression of fusion proteins and chemical ligation in vitro, will increase the size limits for protein structural determination in solution with NMR methods. In vitro chemical ligation of expressed protein domains will also provide a combinatorial approach to the synthesis of linked protein domains.
Torii, Daisuke; Konishi, Kiyoshi; Watanabe, Nobuyuki; Goto, Shinichi; Tsutsui, Takeki
2015-01-01
The periodontal ligament (PDL) consists of a group of specialized connective tissue fibers embedded in the alveolar bone and cementum that are believed to contain progenitors for mineralized tissue-forming cell lineages. These progenitors may contribute to regenerative cell therapy or tissue engineering methods aimed at recovery of tissue formation and functions lost in periodontal degenerative changes. Some reports using immortal clonal cell lines of cementoblasts, which are cells containing mineralized tissue-forming cell lineages, have shown that their phenotypic alteration and gene expression are associated with mineralization. Immortal, multipotential PDL-derived cell lines may be useful biological tools for evaluating differentiation-inducing agents. In this study, we confirmed the gene expression and mineralization potential of primary and immortal human PDL cells and characterized their immunophenotype. Following incubation with mineralization induction medium containing β-glycerophosphate, ascorbic acid, and dexamethasone, normal human PDL (Pel) cells and cells of an immortal derivative line (Pelt) showed higher levels of mineralization compared with cells grown in normal growth medium. Both cell types were positive for putative surface antigens of mesenchymal cells (CD44, CD73, CD90, and CD105). They were also positive for stage-specific embryonic antigen-3, a marker of multipotential stem cells. Furthermore, PDL cells expressed cementum attachment protein and cementum protein 1 when cultured with recombinant human bone morphogenetic protein-2 or -7. The results suggest that normal and immortal human PDL cells contain multipotential mesenchymal stem cells with cementogenic potential.
Human preocular mucins reflect changes in surface physiology
Berry, M; Ellingham, R B; Corfield, A P
2004-01-01
Background/aims: Mucin function is associated with both peptide core and glycosylation characteristics. The authors assessed whether structural alterations occurring during mucin residence in the tear film reflect changes in ocular surface physiology. Methods: Ocular surface mucus was collected from normal volunteers as N-acetyl cysteine (NAcCys) washes or directly from the speculum after cataract surgery. To assess the influence of surface health on mucins, NAcCys washings were also obtained from patients with symptoms, but no clinical signs of dry eye (symptomatics). Mucins were extracted in guanidine hydrochloride (GuHCl) with protease inhibitors. Buoyant density of mucin species, a correlate of glycosylation density, was followed by reactivity with anti-peptide core antibodies. Mucin hydrodynamic volume was assessed by gel filtration on Sepharose CL2B. Results: Surface fluid and mucus contained soluble forms of MUC1, MUC2, MUC4, and MUC5AC and also the same species requiring DTT solubilisation. Reactivity with antibodies to MUC2 and MUC5AC peaked at 1.3–1.5 g/ml in normals, while dominated by underglycosylated forms in symptomatics. Surface mucins were predominantly smaller than intracellular species. MUC2 size distributions were different in symptomatics and normals, while those of MUC5AC were similar in these two groups. Conclusions: A reduction in surface mucin size indicates post-secretory cleavage. Dissimilarities in surface mucin glycosylation and individual MUC size distributions in symptomatics suggest changes in preocular mucin that might precede dry eye signs. PMID:14977773
Mineralization of wastes of human vital activity and plants to be used in a Life Support System.
Kudenko YuA; Gribovskaya, I V; Pavlenko, R A
1997-08-01
Available methods for mineralizing wastes of human activity and the inedible biomass of plants, used both in this country and abroad, are divided into two types: dry mineralization at high temperatures of up to 1270 K with subsequent partial dissolution of the ash, and wet oxidation by acids, in which mineralization is performed at a temperature of 460-470 K and a pressure of 220-270 atmospheres in pure oxygen, with the output of a mineral solution and insoluble sediments in the form of scale. The drawback of the first method is the formation of dioxins, CO, SO2, NO2 and other toxic compounds. The second method is too sophisticated and is presently confined to bench testing. The method proposed here to mineralize the wastes occupies a middle position between the thermal and physical-chemical methods. At a temperature of 80-90 degrees C, the mixture is exposed to a controlled electromagnetic field at normal atmospheric pressure. The method has the merits of simplicity and reliability, and produces no insoluble sediment or emissions noxious to humans or plants. Its basic difference from the methods above is the use of atomic oxygen as the oxidizer, including its active forms such as OH radicals, with hydrogen peroxide as the source. Hydrogen peroxide can be produced with electric power from water inside the Life Support System (LSS).
Robinson, C; Kirkham, J; Percival, R; Shore, R C; Bonass, W A; Brookes, S J; Kusa, L; Nakagaki, H; Kato, K; Nattress, B
1997-01-01
The study of plaque biofilms in the oral cavity is difficult as plaque removal inevitably disrupts biofilm integrity precluding kinetic studies involving the penetration of components and metabolism of substrates in situ. A method is described here in which plaque is formed in vivo under normal (or experimental) conditions using a collection device which can be removed from the mouth after a specified time without physical disturbance to the plaque biofilm, permitting site-specific analysis or exposure of the undisturbed plaque to experimental conditions in vitro. Microbiological analysis revealed plaque flora which was similar to that reported from many natural sources. Analytical data can be related to plaque volume rather than weight. Using this device, plaque fluoride concentrations have been shown to vary with plaque depth and in vitro short-term exposure to radiolabelled components may be carried out, permitting important conclusions to be drawn regarding the site-specific composition and dynamics of dental plaque.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, W.L.; Eddy, M.M.; Hammond, R.B.
1991-12-10
This patent describes a method for producing a superconducting article comprising an oriented metal oxide superconducting layer containing thallium, optionally calcium, barium and copper, the layer being at least 30 Å thick and having its c-axis oriented normal to a crystalline substrate surface. It comprises coating the crystalline substrate surface with a solution of thallium, optionally calcium, barium and copper carboxylate soaps dispersed in a medium of hydrocarbons or halohydrocarbons with a stoichiometric metal ratio to form the oxide superconducting layer; prepyrolyzing the soaps coated on the substrate at a temperature of 350 °C or less in an oxygen-containing atmosphere; and pyrolyzing the soaps at a temperature in the range of 800-900 °C in the presence of oxygen and an overpressure of thallium for a sufficient time to produce the superconducting layer on the substrate, wherein usable portions of the superconducting layer are epitaxial to the substrate.
Single-step fabrication of quantum funnels via centrifugal colloidal casting of nanoparticle films
Kim, Jin Young; Adinolfi, Valerio; Sutherland, Brandon R.; Voznyy, Oleksandr; Kwon, S. Joon; Kim, Tae Wu; Kim, Jeongho; Ihee, Hyotcherl; Kemp, Kyle; Adachi, Michael; Yuan, Mingjian; Kramer, Illan; Zhitomirsky, David; Hoogland, Sjoerd; Sargent, Edward H.
2015-01-01
Centrifugal casting of composites and ceramics has been widely employed to improve the mechanical and thermal properties of functional materials. This powerful method has yet to be deployed in the context of nanoparticles, yet size-effect tuning of quantum dots is among their most distinctive and application-relevant features. Here we report the first gradient nanoparticle films to be constructed in a single step. By creating a stable colloid of nanoparticles that are capped with electronic-conduction-compatible ligands, we were able to leverage centrifugal casting for thin-film devices. This new method, termed centrifugal colloidal casting, is demonstrated to form films in a bandgap-ordered manner with efficient carrier funnelling towards the lowest energy layer. We constructed the first quantum-gradient photodiode to be formed in a single deposition step and, as a result of the gradient-enhanced electric field, experimentally measured the highest normalized detectivity of any colloidal quantum dot photodetector. PMID:26165185
Experimental studies of braking of an elastic tired wheel under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
The paper analyzes the braking of a vehicle wheel subjected to disturbances in the form of normal load variations. Experimental tests were performed using test modes in which sinusoidal force disturbances were applied to the normal wheel load, with measurement of both digital and analogue signals. Stabilization of vehicle wheel braking under such disturbances is a topical issue; the paper proposes a method for analyzing wheel braking processes under disturbances of normal load variations, together with a method to control these braking processes.
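The sinusoidal test mode described above can be sketched as a static load plus a harmonic disturbance. The static load, disturbance amplitude, and frequency below are illustrative assumptions, not the values used in the experiments.

```python
import math

def normal_load(t, f_static=4000.0, amplitude=800.0, freq_hz=2.0):
    """Normal wheel load F_z(t), in newtons, under a sinusoidal disturbance:
    F_z(t) = F0 + dF * sin(2*pi*f*t). All parameter values are assumed."""
    return f_static + amplitude * math.sin(2 * math.pi * freq_hz * t)

# one second of the test mode, sampled at 100 Hz
samples = [normal_load(k / 100.0) for k in range(100)]
```

Sweeping the disturbance frequency and amplitude of such a signal is one way to build the family of test modes the experimental method requires.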
Direct-to-digital holography and holovision
Thomas, Clarence E.; Baylor, Larry R.; Hanson, Gregory R.; Rasmussen, David A.; Voelkl, Edgar; Castracane, James; Simkulet, Michelle; Clow, Lawrence
2000-01-01
Systems and methods for direct-to-digital holography are described. An apparatus includes a laser; a beamsplitter optically coupled to the laser; a reference beam mirror optically coupled to the beamsplitter; an object optically coupled to the beamsplitter, a focusing lens optically coupled to both the reference beam mirror and the object; and a digital recorder optically coupled to the focusing lens. A reference beam is incident upon the reference beam mirror at a non-normal angle, and the reference beam and an object beam are focused by the focusing lens at a focal plane of the digital recorder to form an image. The systems and methods provide advantages in that computer assisted holographic measurements can be made.
Virtual mask digital electron beam lithography
Baylor, L.R.; Thomas, C.E.; Voelkl, E.; Moore, J.A.; Simpson, M.L.; Paulus, M.J.
1999-04-06
Systems and methods for direct-to-digital holography are described. An apparatus includes a laser; a beamsplitter optically coupled to the laser; a reference beam mirror optically coupled to the beamsplitter; an object optically coupled to the beamsplitter, a focusing lens optically coupled to both the reference beam mirror and the object; and a digital recorder optically coupled to the focusing lens. A reference beam is incident upon the reference beam mirror at a non-normal angle, and the reference beam and an object beam are focused by the focusing lens at a focal plane of the digital recorder to form an image. The systems and methods provide advantages in that computer assisted holographic measurements can be made. 5 figs.
One-step fabrication of multifunctional micromotors.
Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y
2015-09-07
Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications.
Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.
Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan
2013-01-01
Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behaviors of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometry data (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsal, cuneiform, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot-ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other.
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones in the prosthetic ankle-foot complex compared to the normal one. The predicted plantar pressures and von Mises stress distributions for a normal foot were consistent with other FE models given in the literature. The present study aims to open new approaches for the development of ankle prostheses.
Axline, S. G.
1968-01-01
The acid phosphatase activity of normal alveolar and BCG-induced alveolar macrophages has been examined. Five electrophoretically distinct forms of acid phosphatase have been identified in both normal and BCG-induced macrophages. The acid phosphatases can be divided into two major categories. One category, containing four distinct forms, is readily solubilized after repeated freezing and thawing or mechanical disruption. The second category, containing one form, is firmly bound to the lysosomal membrane and can be solubilized by treatment of the lysosomal fraction with Triton X-100. The Triton-extractable acid phosphatase and the predominant aqueous-soluble acid phosphatase have been shown to differ in the degree of membrane binding, in solubility, in net charge, and in molecular weight. The two predominant phosphatases possess identical pH optima and do not differ in response to enzyme inhibitors. BCG stimulation has been shown to result in a nearly twofold increase in acid phosphatase activity. A nearly proportionate increase in the major acid phosphatase forms has been observed. PMID:4878908
Li, Jun; Tibshirani, Robert
2015-01-01
We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
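One simple resampling idea for unequal sequencing depths is to binomially down-sample each sample's counts to a common depth so the samples become directly comparable. The sketch below illustrates that general idea only; it is not the paper's exact procedure, and the count data are invented.

```python
import random

def downsample(counts, target_depth, seed=0):
    """Binomially thin a vector of feature counts so that its expected
    total equals target_depth (keep each read with probability p)."""
    rng = random.Random(seed)
    total = sum(counts)
    p = target_depth / total
    return [sum(1 for _ in range(c) if rng.random() < p) for c in counts]

# invented counts: sample B was sequenced at three times the depth of A
samples = {"A": [100, 50, 10], "B": [300, 150, 30]}
depth = min(sum(v) for v in samples.values())
normalized = {k: downsample(v, depth) for k, v in samples.items()}
```

After thinning, both samples have roughly the same total read count, so differences between features reflect composition rather than sequencing depth.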
Callahan, Damien L; De Souza, David; Bacic, Antony; Roessner, Ute
2009-07-01
Highly polar metabolites, such as sugars and most amino acids, are not retained by conventional RP LC columns. Without sufficient retention, low-concentration compounds are not detected due to ion suppression, and structural isomers are not resolved. In contrast, hydrophilic interaction chromatography (HILIC) and aqueous normal phase chromatography (ANP) retain compounds based on their hydrophilicity and therefore provide a means of separating highly polar compounds. Here, an ANP method based on the diamond hydride stationary phase is presented for profiling biological small molecules by LC. A rapid separation system based upon a fast gradient that delivers reproducible chromatography is presented. Approximately 1000 compounds were reproducibly detected in human urine samples and clear differences between these samples were identified. This chromatography was also applied to xylem fluid from soyabean (Glycine max) plants, in which 400 compounds were detected. This method greatly increases the metabolite coverage over RP-only metabolite profiling in biological samples. We show that both forms of chromatography are necessary for untargeted comprehensive metabolite profiling and that the diamond hydride stationary phase provides a good option for polar metabolite analysis.
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION
Allen, Genevera I.; Tibshirani, Robert
2015-01-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823
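The row-plus-column mean structure of the mean-restricted model can be illustrated with a toy imputation sketch: each entry is modeled as a grand mean plus a row effect plus a column effect, and a missing entry is filled from those estimates. This shows only the mean structure; the actual method additionally estimates regularized row and column covariance matrices via EM-type algorithms, which is omitted here.

```python
def impute(matrix):
    """Fill None entries with grand mean + row effect + column effect,
    all estimated from the observed entries."""
    obs = [(i, j, v) for i, row in enumerate(matrix)
           for j, v in enumerate(row) if v is not None]
    grand = sum(v for _, _, v in obs) / len(obs)
    nrow, ncol = len(matrix), len(matrix[0])
    row_eff = [0.0] * nrow
    col_eff = [0.0] * ncol
    for i in range(nrow):
        vals = [v for r, _, v in obs if r == i]
        row_eff[i] = sum(vals) / len(vals) - grand if vals else 0.0
    for j in range(ncol):
        vals = [v for _, c, v in obs if c == j]
        col_eff[j] = sum(vals) / len(vals) - grand if vals else 0.0
    return [[v if v is not None else grand + row_eff[i] + col_eff[j]
             for j, v in enumerate(row)] for i, row in enumerate(matrix)]

X = [[1.0, 2.0, 3.0],
     [2.0, 3.0, None]]
filled = impute(X)
```

Observed entries are passed through untouched; only the missing cell is replaced by its additive-model estimate.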
Sound Emission of Rotor Induced Deformations of Generator Casings
NASA Technical Reports Server (NTRS)
Polifke, W.; Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
The casing of large electrical generators can be deformed slightly by the rotor's magnetic field. The sound emission produced by these periodic deformations, which could possibly exceed guaranteed noise emission limits, is analysed analytically and numerically. From the deformation of the casing, the normal velocity of the generator's surface is computed. Taking into account the corresponding symmetry, an analytical solution for the acoustic pressure outside the generator is found in terms of the Hankel function of second order. The normal velocity of the generator surface provides the required boundary condition for the acoustic pressure and determines the magnitude of pressure oscillations. For the numerical simulation, the nonlinear 2D Euler equations are formulated in a perturbation form for low Mach number Computational Aeroacoustics (CAA). The spatial derivatives are discretized by the classical sixth-order central interior scheme and a third-order boundary scheme. Spurious high-frequency oscillations are damped by a characteristic-based artificial compression method (ACM) filter. The time derivatives are approximated by the classical 4th-order Runge-Kutta method. The numerical results are in excellent agreement with the analytical solution.
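The cylindrical radiation solution referred to above typically takes the following textbook form (a sketch only: the outgoing-wave convention, mode number $m$, and amplitude $A$ are assumptions, not taken from the paper; $m = 2$ would correspond to the second-order Hankel function mentioned):

```latex
p(r, \theta, t) = A\, H_m^{(2)}(k r)\, e^{\,i(\omega t - m\theta)},
\qquad k = \frac{\omega}{c},
```

where $H_m^{(2)}$ is the Hankel function of the second kind (outgoing waves under the $e^{i\omega t}$ convention) and $A$ is fixed by matching the radial particle velocity at the casing radius to the computed normal surface velocity.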
Surface Curvatures Computation from Equidistance Contours
NASA Astrophysics Data System (ADS)
Tanaka, Hiromi T.; Kling, Olivier; Lee, Daniel T. L.
1990-03-01
The subject of our research is the 3D shape representation problem for a special class of range image, one where the natural mode of the acquired range data is in the form of equidistance contours, as exemplified by a moire interferometry range system. In this paper we present a novel surface curvature computation scheme that directly computes the surface curvatures (the principal curvatures, Gaussian curvature and mean curvature) from the equidistance contours without any explicit computations or implicit estimates of partial derivatives. We show how the special nature of the equidistance contours, specifically the dense information of the surface curves in the 2D contour plane, turns into an advantage for the computation of the surface curvatures. The approach is based on using simple geometric construction to obtain the normal sections and the normal curvatures. This method is general and can be extended to any dense range image data. We show in detail how this computation is formulated and give an analysis of the error bounds of the computation steps, showing that the method is stable. Computation results on real equidistance range contours are also shown.
Quaternion normalization in spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Galal, Ken
1992-01-01
Methods are presented to normalize the attitude quaternion in two extended Kalman filters (EKF), namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). It is concluded that all the normalization methods work well and yield comparable results. In the AEKF, normalization is not essential, since the data chosen for the test do not have a rapidly varying attitude. In the MEKF, normalization is necessary to avoid divergence of the attitude estimate. All of the methods behave similarly when the spacecraft experiences low angular rates.
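The simplest of the normalization options discussed for the AEKF rescales the estimated quaternion to unit length after each additive update, since the additive correction violates the unit-norm constraint. A minimal sketch (the paper's specific filter equations are not reproduced here):

```python
import math

def normalize_quaternion(q):
    """Rescale an attitude quaternion (q0, q1, q2, q3) to unit norm,
    restoring the constraint ||q|| = 1 after an additive EKF update."""
    norm = math.sqrt(sum(c * c for c in q))
    if norm == 0.0:
        raise ValueError("zero quaternion cannot be normalized")
    return tuple(c / norm for c in q)

q = normalize_quaternion((0.9, 0.1, -0.2, 0.3))  # post-update estimate
```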
The Functionality of Facial Appearance and Its Importance to a Korean Population
Kim, Young Jun; Park, Jang Wan; Kim, Jeong Min; Park, Sun Hyung; Hwang, Jae Ha; Lee, Sam Yong; Shin, Jun Ho
2013-01-01
Background Many people have an interest in the correction of facial scars or deformities caused by trauma. The increasing ability to correct such flaws has been one of the reasons for the increase in the popularity of facial plastic surgery. In addition to its roles in communication, breathing, eating, olfaction and vision, the appearance of the face also plays an important role in human interactions, including during social activities. However, studies on the importance of the functional role of facial appearance are scarce. Therefore, in the present study, we evaluated the importance of the functions of the face in Korea. Methods We conducted an online panel survey of 300 participants (age range, 20-70 years). Each respondent was administered the demographic data form, Facial Function Assessment Scale, Rosenberg Self-Esteem Scale, and standard gamble questionnaires. Results In the evaluation on the importance of facial functions, a normal appearance was considered as important as communication, breathing, speech, and vision. Of the 300 participants, 85% stated that a normal appearance is important in social activities. Conclusions The results of this survey involving a cross-section of the Korean population indicated that a normal appearance was considered one of the principal facial functions. A normal appearance was considered more important than the functions of olfaction and expression. Moreover, a normal appearance was determined to be an important facial function for leading a normal life in Korea. PMID:24286044
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S
2017-02-01
To assess the effect of the previously reported optical coherence tomography (OCT) signal normalization method on reducing the discrepancies in image appearance among spectral-domain OCT (SD-OCT) devices. Healthy eyes and eyes with various retinal pathologies were scanned at the macular region using similar volumetric scan patterns with at least two out of three SD-OCT devices at the same visit (Cirrus HD-OCT, Zeiss, Dublin, CA; RTVue, Optovue, Fremont, CA; and Spectralis, Heidelberg Engineering, Heidelberg, Germany). All the images were processed with the signal normalization. A set of images formed a questionnaire with 24 pairs of cross-sectional images from each eye with any combination of the three SD-OCT devices, either both pre- or post-signal normalization. Observers were asked to evaluate the similarity of the two displayed images based on the image appearance. The effects on reducing the differences in image appearance before and after processing were analyzed. Twenty-nine researchers familiar with OCT images participated in the survey. Image similarity was significantly improved after signal normalization for all three combinations (P ≤ 0.009), with the Cirrus and RTVue combination becoming the most similar pair, followed by Cirrus and Spectralis, and RTVue and Spectralis. The signal normalization successfully minimized the disparities in the image appearance among multiple SD-OCT devices, allowing clinical interpretation and comparison of OCT images regardless of the device differences. The signal normalization would enable direct comparison of OCT images without concern about device differences and would broaden OCT usage by enabling long-term follow-ups and data sharing.
Amal, Haitham; Ding, Lu; Liu, Bin-bin; Tisch, Ulrike; Xu, Zhen-qin; Shi, Da-you; Zhao, Yan; Chen, Jie; Sun, Rui-xia; Liu, Hu; Ye, Sheng-Long; Tang, Zhao-you; Haick, Hossam
2012-01-01
Background: Hepatocellular carcinoma (HCC) is a common and aggressive form of cancer. Due to a high rate of postoperative recurrence, the prognosis for HCC is poor. Subclinical metastasis is the major cause of tumor recurrence and patient mortality. Currently, there is no reliable prognostic method of invasion. Aim: To investigate the feasibility of fingerprints of volatile organic compounds (VOCs) for the in-vitro prediction of metastasis. Methods: Headspace gases were collected from 36 cell cultures (HCC with high and low metastatic potential and normal cells) and analyzed using nanomaterial-based sensors. Predictive models were built by employing discriminant factor analysis pattern recognition, and the classification success was determined using leave-one-out cross-validation. The chemical composition of each headspace sample was studied using gas chromatography coupled with mass spectrometry (GC-MS). Results: Excellent discrimination was achieved using the nanomaterial-based sensors between (i) all HCC and normal controls; (ii) low metastatic HCC and normal controls; (iii) high metastatic HCC and normal controls; and (iv) high and low HCC. Several HCC-related VOCs that could be associated with biochemical cellular processes were identified through GC-MS analysis. Conclusion: The presented results constitute a proof-of-concept for the in-vitro prediction of the metastatic potential of HCC from VOC fingerprints using nanotechnology. Further studies on a larger number of more diverse cell cultures are needed to evaluate the robustness of the VOC patterns. These findings could benefit the development of a fast and potentially inexpensive laboratory test for subclinical HCC metastasis. PMID:22888249
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaic is an indispensable link in surface measurement and digital terrain map generation. To address the mosaic problem of local unorganized cloud points with rough registration and many mismatched points, a new mosaic method for 3-D surfaces based on RANSAC is proposed. Each iteration of this method proceeds sequentially through random sampling with an additional shape constraint, data normalization of the cloud points, absolute orientation, data denormalization of the cloud points, inlier counting, etc. After N random sample trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points which form a triangle. The shape of the triangle is considered in random sample selection in order to make the sample selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid singularity. The whole rotation transformation between the two coordinate systems can be solved by two successive rotations expressed by Euler angle vectors, each rotation having an explicit physical meaning. Both simulation and real data are used to prove the correctness and validity of this mosaic method. This method has better noise immunity due to its robust estimation property, and has high accuracy since the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. This method is applicable for high-precision measurement of three-dimensional surfaces and also for 3-D terrain mosaics.
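The RANSAC loop described above (random sample, model fit, consensus count, final re-estimation from the largest consensus set) can be sketched in miniature. To stay short, this sketch uses a 2D translation model estimated from a single point, whereas the paper's method estimates a full 3D rigid transform from a 3-point absolute orientation; the structure of the loop is the same:

```python
import random

def ransac_translation(src, dst, n_trials=200, tol=0.1, seed=0):
    """Simplified RANSAC: estimate a 2D translation mapping src -> dst
    despite mismatched (outlier) correspondences."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(n_trials):
        i = rng.randrange(len(src))                  # random minimal sample
        t = (dst[i][0] - src[i][0], dst[i][1] - src[i][1])
        inliers = [j for j in range(len(src))
                   if abs(src[j][0] + t[0] - dst[j][0]) < tol
                   and abs(src[j][1] + t[1] - dst[j][1]) < tol]
        if len(inliers) > len(best_inliers):         # keep largest consensus set
            best_t, best_inliers = t, inliers
    # final re-estimation using all points in the selected subset
    tx = sum(dst[j][0] - src[j][0] for j in best_inliers) / len(best_inliers)
    ty = sum(dst[j][1] - src[j][1] for j in best_inliers) / len(best_inliers)
    return (tx, ty), best_inliers

src = [(0, 0), (1, 0), (0, 1), (2, 2)]
dst = [(1, 1), (2, 1), (1, 2), (9, 9)]   # last pair is a gross mismatch
t, inliers = ransac_translation(src, dst)
```

The mismatched fourth pair is rejected; the consensus set contains only the three consistent correspondences.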
Use of history science methods in exposure assessment for occupational health studies
Johansen, K; Tinnerberg, H; Lynge, E
2005-01-01
Aims: To show the power of history science methods for exposure assessment in occupational health studies, using the dry cleaning industry in Denmark around 1970 as the example. Methods: Exposure data and other information on exposure status were searched for in unconventional data sources such as the Danish National Archives, the Danish Royal Library, archives of Statistics Denmark, the National Institute of Occupational Health, Denmark, and the Danish Labor Inspection Agency. Individual census forms were retrieved from the Danish National Archives. Results: It was estimated that in total 3267 persons worked in the dry cleaning industry in Denmark in 1970. They typically worked in small shops with an average size of 3.5 persons. Of these, 2645 persons were considered exposed to solvents as they were dry cleaners or worked very close to the dry cleaning process, while 622 persons were office workers, drivers, etc in shops with 10 or more persons. It was estimated that tetrachloroethylene constituted 85% of the dry cleaning solvent used, and that a shop would normally have two machines using 4.6 tons of tetrachloroethylene annually. Conclusion: The history science methods, including retrieval of material from the Danish National Archives and a thorough search in the Royal Library for publications on dry cleaning, turned out to be a very fruitful approach for collection of exposure data on dry cleaning work in Denmark. The history science methods proved to be a useful supplement to the exposure assessment methods normally applied in epidemiological studies. PMID:15961618
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
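For context, the Still-equation pairwise energy that the interpolation concerns has the well-known generalized-Born form: the solvation energy is the double sum of q_i q_j / f_GB scaled by -(1/2)(1/ε_in - 1/ε_out), where f_GB = sqrt(r² + R_i R_j exp(-r²/(4 R_i R_j))) and R_i are effective Born radii. A minimal sketch in arbitrary units (a generic GB evaluation, not the paper's BIBEE code):

```python
import math

def f_gb(r, Ri, Rj):
    """Still's effective interaction distance between two charges with
    effective Born radii Ri, Rj separated by distance r."""
    return math.sqrt(r * r + Ri * Rj * math.exp(-r * r / (4.0 * Ri * Rj)))

def gb_energy(charges, radii, positions, eps_in=1.0, eps_out=80.0):
    """Generalized-Born solvation energy (Still form), arbitrary units."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    total = 0.0
    for i in range(len(charges)):
        for j in range(len(charges)):
            dx = [positions[i][k] - positions[j][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            total += pref * charges[i] * charges[j] / f_gb(r, radii[i], radii[j])
    return total

# Single-charge limit: f_GB reduces to the Born radius R, recovering the
# Born formula -(q^2 / 2R) * (1/eps_in - 1/eps_out).
e = gb_energy([1.0], [2.0], [(0.0, 0.0, 0.0)])
```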
Carlisle, J F
1987-01-01
Currently popular systems for classification of spelling words or errors emphasize the learning of phoneme-grapheme correspondences and memorization of irregular words, but do not take into account the morphophonemic nature of the English language. This study is based on the premise that knowledge of the morphological rules of derivational morphology is acquired developmentally and is related to the spelling abilities of both normal and learning-disabled (LD) students. It addresses three issues: 1) how the learning of derivational morphology and the spelling of derived words by LD students compares to that of normal students; 2) whether LD students learn derived forms rulefully; and 3) the extent to which LD and normal students use knowledge of relationships between base and derived forms to spell derived words (e.g. "magic" and "magician"). The results showed that LD ninth graders' knowledge of derivational morphology was equivalent to that of normal sixth graders, following similar patterns of mastery of orthographic and phonological rules, but that their spelling of derived forms was equivalent to that of the fourth graders. Thus, they know more about derivational morphology than they use in spelling. In addition, they were significantly more apt to spell derived words as whole words, without regard for morphemic structure, than even the fourth graders. Nonetheless, most of the LD spelling errors were phonetically acceptable, suggesting that their misspellings cannot be attributed primarily to poor knowledge of phoneme-grapheme correspondences.
Variation in form of mandibular, light, round, preformed NiTi archwires.
Saze, Naomi; Arai, Kazuhito
2016-09-01
To evaluate the variation in form of nickel-titanium (NiTi) archwires by comparing them with the dental arch form of normal Japanese subjects before and after placing them in the first molar tubes. The mandibular dental casts of 30 normal subjects were scanned, and the dental arch depths and widths from the canine to the first molar were measured. Standardized images of 34 types of 0.016-inch preformed NiTi archwires were also taken in a 37°C environment, and the widths were measured and then classified by cluster analysis. Images of these archwires placed in a custom jig with brackets attached at the mean locations of the normal mandibular central incisors and first molars were additionally taken. The widths of the pooled and classified archwires were then compared with the normal dental arch widths before and after placement in the jig and among the groups (P < .05). The archwires were classified into three groups: small, medium, and large. The archwire widths in the small and medium groups were narrower than the corresponding dental arch widths at all examined teeth, except at the premolars in the medium group. After placement in the jig, the pooled archwire widths were found to be significantly narrower at the canine and wider at the second premolar than the dental arch, but not in the individual comparisons between groups. The variation observed in the mandibular NiTi archwire forms significantly decreased following fitting into the normal positions of the first molars.
Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng
2016-12-13
In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data.
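One of the top performers named above, probabilistic quotient normalization (PQN), divides each sample by the median of its feature-wise ratios to a reference profile, which removes sample-wide dilution effects. A minimal sketch of generic PQN (not the authors' MetaPre implementation, and using the feature-wise median across samples as the reference):

```python
from statistics import median

def pqn_normalize(samples):
    """Probabilistic quotient normalization: each sample is divided by
    the median ratio of its features to a reference profile."""
    n_features = len(samples[0])
    reference = [median(s[i] for s in samples) for i in range(n_features)]
    normalized = []
    for s in samples:
        quotients = [s[i] / reference[i] for i in range(n_features)
                     if reference[i] != 0]
        factor = median(quotients)   # most probable dilution factor
        normalized.append([v / factor for v in s])
    return normalized

# Sample 1 is a uniform 2x dilution of sample 0; PQN removes the dilution
# so both samples coincide after normalization.
data = [[1.0, 2.0, 4.0], [2.0, 4.0, 8.0]]
out = pqn_normalize(data)
```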
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
Kusano, Masahiro; Takizawa, Shota; Sakai, Tetsuya; Arao, Yoshihiko; Kubouchi, Masatoshi
2018-01-01
Since thermosetting resins have excellent resistance to chemicals, fiber reinforced plastics composed of such resins and reinforcement fibers are widely used as construction materials for equipment in chemical plants. Such equipment is usually used for several decades under severe corrosive conditions so that failure due to degradation may result. One of the degradation behaviors in thermosetting resins under chemical solutions is "corrosion-layer-forming" degradation. In this type of degradation, surface resins in contact with a solution corrode, and some of them remain as a corrosion layer on the pristine part. It is difficult to precisely measure the thickness of the pristine part of such degradation type materials by conventional pulse-echo ultrasonic testing, because the sound velocity depends on the degree of corrosion of the polymeric material. In addition, the ultrasonic reflection interface between the pristine part and the corrosion layer is obscure. Thus, we propose a pitch-catch method using a pair of normal and angle probes to measure four parameters: the thicknesses of the pristine part and the corrosion layer, and their respective sound velocities. The validity of the proposed method was confirmed by measuring a two-layer sample and a sample including corroded parts. The results demonstrate that the pitch-catch method can successfully measure the four parameters and evaluate the residual thickness of the pristine part in the corrosion-layer-forming sample. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Akdemir, Bayram; Güneş, Salih; Yosunkaya, Şebnem
Sleep disorders are very common but widely unrecognized illnesses among the public. Obstructive Sleep Apnea Syndrome (OSAS) is characterized by decreased oxygen saturation levels and repetitive upper respiratory tract obstruction episodes during full night sleep. In the present study, we propose a novel data normalization method called the Line Based Normalization Method (LBNM) to evaluate OSAS using a real data set obtained from a polysomnography device, as a diagnostic tool in patients clinically suspected of suffering from OSAS. Here, we combine LBNM with classification methods comprising the C4.5 decision tree classifier and an Artificial Neural Network (ANN) to diagnose OSAS. Firstly, each clinical feature in the OSAS dataset is scaled by LBNM into the range [0,1]. Secondly, the normalized OSAS dataset is classified using different classifier algorithms, including the C4.5 decision tree classifier and ANN, respectively. The proposed normalization method was compared with the min-max normalization, z-score normalization, and decimal scaling methods existing in the literature on the diagnosis of OSAS. LBNM produced very promising results in the assessment of OSAS. This method could also be applied to other biomedical datasets.
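The abstract does not define LBNM itself, but the two baseline methods it is compared against are standard. Min-max normalization scales each feature into [0, 1] (the same target range LBNM uses), and z-score normalization standardizes to zero mean and unit standard deviation. A minimal sketch of the baselines:

```python
from statistics import mean, pstdev

def min_max_scale(xs):
    """Scale a feature's values into [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def z_score(xs):
    """Standardize a feature to zero mean, unit (population) std dev."""
    mu, sigma = mean(xs), pstdev(xs)
    return [(x - mu) / sigma for x in xs]

# Hypothetical clinical feature values, e.g. oxygen saturation readings.
feature = [92.0, 95.0, 88.0, 97.0]
scaled = min_max_scale(feature)
standardized = z_score(feature)
```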
Instantaneous Normal Modes and the Protein Glass Transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, Roland; Krishnan, Marimuthu; Daidone, Isabella
2009-01-01
In the instantaneous normal mode method, normal mode analysis is performed at instantaneous configurations of a condensed-phase system, leading to modes with negative eigenvalues. These negative modes provide a means of characterizing local anharmonicities of the potential energy surface. Here, we apply instantaneous normal mode to analyze temperature-dependent diffusive dynamics in molecular dynamics simulations of a small protein (a scorpion toxin). Those characteristics of the negative modes are determined that correlate with the dynamical (or glass) transition behavior of the protein, as manifested as an increase in the gradient with T of the average atomic mean-square displacement at ~220 K. The number of negative eigenvalues shows no transition with temperature. Further, although filtering the negative modes to retain only those with eigenvectors corresponding to double-well potentials does reveal a transition in the hydration water, again, no transition in the protein is seen. However, additional filtering of the protein double-well modes, so as to retain only those that, on energy minimization, escape to different regions of configurational space, finally leads to clear protein dynamical transition behavior. Partial minimization of instantaneous configurations is also found to remove nondiffusive imaginary modes. In summary, examination of the form of negative instantaneous normal modes is shown to furnish a physical picture of local diffusive dynamics accompanying the protein glass transition.
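At its core, the instantaneous normal mode method diagonalizes the Hessian of the potential at a snapshot configuration (not a minimum) and counts the negative eigenvalues, which correspond to imaginary-frequency modes. A minimal sketch using a 2x2 symmetric Hessian, for which the eigenvalues have a closed form (a toy illustration, not the paper's protein analysis):

```python
import math

def eigenvalues_2x2(a, b, c):
    """Eigenvalues of the symmetric 2x2 Hessian [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    root = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + root, tr / 2.0 - root

def count_negative_modes(hessian_abc):
    """Number of imaginary (negative-eigenvalue) instantaneous normal
    modes; nonzero away from a local minimum of the potential."""
    return sum(1 for lam in eigenvalues_2x2(*hessian_abc) if lam < 0.0)

# Hessian of V = x^2 + y^2 at the minimum: no negative modes.
n_min = count_negative_modes((2.0, 0.0, 2.0))
# Hessian of V = x^2 - y^2 at the saddle: one negative mode.
n_saddle = count_negative_modes((2.0, 0.0, -2.0))
```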
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
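The least power estimation (LPE) underlying the p-normal methods replaces the least-squares criterion with minimizing the sum of |x_i - m|^p; p = 2 recovers the arithmetic mean and p = 1 the median, with intermediate p trading efficiency against robustness. A minimal sketch, using ternary search since the objective is convex for p ≥ 1 (a generic LPE, not the authors' geostatistical variants):

```python
def lpe_location(xs, p, iters=200):
    """Least power estimate: the value m minimizing sum(|x - m|**p).
    The objective is convex for p >= 1, so ternary search over
    [min(xs), max(xs)] converges to the minimizer."""
    def cost(m):
        return sum(abs(x - m) ** p for x in xs)
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

data = [1.0, 2.0, 3.0, 10.0]      # one disorganized (outlier) measurement
m2 = lpe_location(data, 2.0)      # p = 2 recovers the arithmetic mean
m1 = lpe_location(data, 1.1)      # p near 1 is robust, pulled toward the median
```

The outlier drags the p = 2 estimate upward, while the p = 1.1 estimate stays near the bulk of the data, illustrating the robustness the abstract attributes to the p-normal methods.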
Arnold diffusion for smooth convex systems of two and a half degrees of freedom
NASA Astrophysics Data System (ADS)
Kaloshin, V.; Zhang, K.
2015-08-01
In the present note we announce a proof of a strong form of Arnold diffusion for smooth convex Hamiltonian systems. Let $\mathbb{T}^2$ be a 2-dimensional torus and $B^2$ be the unit ball around the origin in $\mathbb{R}^2$. Fix $\rho > 0$. Our main result says that for a 'generic' time-periodic perturbation of an integrable system of two degrees of freedom, $H_0(p) + \varepsilon H_1(\theta, p, t)$, $\theta \in \mathbb{T}^2$, $p \in B^2$, $t \in \mathbb{T} = \mathbb{R}/\mathbb{Z}$, with a strictly convex $H_0$, there exists a $\rho$-dense orbit $(\theta_\varepsilon, p_\varepsilon, t)(t)$ in $\mathbb{T}^2 \times B^2 \times \mathbb{T}$, namely, a $\rho$-neighborhood of the orbit contains $\mathbb{T}^2 \times B^2 \times \mathbb{T}$. Our proof is a combination of geometric and variational methods. The fundamental elements of the construction are the usage of crumpled normally hyperbolic invariant cylinders from [9], flower and simple normally hyperbolic invariant manifolds from [36], as well as their kissing property at a strong double resonance. This allows us to build a 'connected' net of three-dimensional normally hyperbolic invariant manifolds. To construct diffusing orbits along this net we employ a version of the Mather variational method [41] equipped with weak KAM theory [28], proposed by Bernard in [7].
26 CFR 1.401(a)(4)-12 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the form of an annual benefit commencing at normal retirement age of an employee who continues in service until normal retirement age. Thus, for example, the benefit formula does not include the accrual... employee who terminates employment before normal retirement age. For purposes of this definition, a change...
Vortex dynamics and surface pressure fluctuations on a normal flat plate
NASA Astrophysics Data System (ADS)
Hemmati, Arman; Wood, David H.; Martinuzzi, Robert J.; Ferrari, Simon W.; Hu, Yaoping
2016-11-01
The effect of vortex formation and interactions on surface pressure fluctuations is examined in the wake of a normal flat plate by analyzing Direct Numerical Simulations at Re = 1200. A novel local maximum score-based 3D method is used to track vortex development in the region close to the plate where the major contributions to the surface pressure are generated. Three distinct vortex shedding regimes are identified by changes in the lift and drag fluctuations. The instances of maximum drag coincide with the impingement of newly formed vortices on the plate. This results in large and concentrated areas of rotational and strain contributions to the generation of pressure fluctuations. Streamwise vortex straining and chordwise stretching are correlated with large ratios of streamwise to chordwise normal stresses and with regions of significant rotational contribution to the pressure. In contrast, at minimum drag the vorticity field close to the plate is disorganized, and vortex roll-up occurs farther downstream, leading to a uniform distribution of pressure. This study was supported by Alberta Innovates Technology Futures (AITF) and the Natural Sciences and Engineering Research Council of Canada (NSERC).
Normally-off p-GaN/AlGaN/GaN high electron mobility transistors using hydrogen plasma treatment
NASA Astrophysics Data System (ADS)
Hao, Ronghui; Fu, Kai; Yu, Guohao; Li, Weiyi; Yuan, Jie; Song, Liang; Zhang, Zhili; Sun, Shichuang; Li, Xiajun; Cai, Yong; Zhang, Xinping; Zhang, Baoshun
2016-10-01
In this letter, we report a method that introduces hydrogen plasma treatment to realize normally-off p-GaN/AlGaN/GaN HEMT devices. Instead of using etching technology, hydrogen plasma was adopted to compensate holes in the p-GaN above the two-dimensional electron gas (2DEG) channel, releasing electrons into the 2DEG channel and forming a high-resistivity area that reduces leakage current and increases gate control capability. The fabricated p-GaN/AlGaN/GaN HEMT exhibits normally-off operation with a threshold voltage of 1.75 V, a subthreshold swing of 90 mV/dec, a maximum transconductance of 73.1 mS/mm, an ON/OFF ratio of 1 × 10^7, a breakdown voltage of 393 V, and a maximum drain current density of 188 mA/mm at a gate bias of 6 V. A comparison of the two processes, hydrogen plasma treatment and p-GaN etching, is also made in this work.
Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie
2016-01-01
The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
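A dictionary lookup normalizer of the kind used as the baseline here can be sketched in a few lines. The lexicon entries, concept IDs, and preprocessing steps below are illustrative assumptions, not the paper's actual resources:

```python
import string

# Hypothetical surface-form -> concept-ID lexicon (illustrative entries only).
LEXICON = {
    "breast cancer": "MESH:D001943",
    "breast carcinoma": "MESH:D001943",
    "alzheimer disease": "MESH:D000544",
}

def normalize_mention(mention):
    """Case-fold, turn punctuation into spaces, collapse whitespace, then look up."""
    key = "".join(ch if ch not in string.punctuation else " "
                  for ch in mention.lower())
    key = " ".join(key.split())
    return LEXICON.get(key)  # None when the mention is not in the dictionary
```

Real systems layer abbreviation expansion, stemming, and fuzzy matching on top of this exact-match core, which is what "various techniques in improving the normalization results" refers to.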
Fluorescent-Antibody Measurement Of Cancer-Cell Urokinase
NASA Technical Reports Server (NTRS)
Morrison, Dennis R.
1993-01-01
Combination of laboratory techniques provides measurements of amounts of urokinase in and between normal and cancer cells. Includes use of fluorescent antibodies specific against different forms of urokinase-type plasminogen activator, (uPA), fluorescence microscopy, quantitative analysis of images of sections of tumor tissue, and flow cytometry of different uPA's and deoxyribonucleic acid (DNA) found in suspended-tumor-cell preparations. Measurements provide statistical method for indicating or predicting metastatic potentials of some invasive tumors. Assessments of metastatic potentials based on such measurements used in determining appropriate follow-up procedures after surgical removal of tumors.
Stability and Bifurcation Analysis in a Maglev System with Multiple Delays
NASA Astrophysics Data System (ADS)
Zhang, Lingling; Huang, Jianhua; Huang, Lihong; Zhang, Zhizhou
This paper considers time-delayed feedback control for a Maglev system with two discrete time delays. We determine constraints on the feedback time delays which ensure the stability of the Maglev system. An algorithm is developed for drawing a two-parameter bifurcation diagram with respect to the two delays τ1 and τ2. The direction and stability of periodic solutions are also determined using the normal form method and center manifold theory of Hassard. The complex dynamical behavior of the Maglev system near the domain of stability is confirmed by exhaustive numerical simulation.
ARC DISCHARGE AND METHOD OF PRODUCING THE SAME
Neidigh, R.V.
1960-03-15
A device for producing an energetic gas arc discharge between spaced electrodes in an evacuated chamber and within a magnetic field is described. Gas is fed into the arc in a direction normal to a refluxing stream of electrons and at a pressure higher than the pressure within the chamber to establish a pressure gradient along the arc discharge formed between the electrodes. This pressure gradient establishes rotating, time-varying, radial electrical fields in the volume surrounding the discharge, causing the discharge to rotate about the arc center line.
The simultaneous integration of many trajectories using nilpotent normal forms
NASA Technical Reports Server (NTRS)
Grayson, Matthew A.; Grossman, Robert
1990-01-01
Taylor's formula shows how to approximate a certain class of functions by polynomials. The approximations are arbitrarily good in some neighborhood whenever the function is analytic and they are easy to compute. The main goal is to give an efficient algorithm to approximate a neighborhood of the configuration space of a dynamical system by a nilpotent, explicitly integrable dynamical system. The major areas covered include: an approximating map; the generalized Baker-Campbell-Hausdorff formula; the Picard-Taylor method; the main theorem; simultaneous integration of trajectories; and examples.
The influence of learning and updating speed on the growth of commercial websites
NASA Astrophysics Data System (ADS)
Wan, Xiaoji; Deng, Guishi; Bai, Yang; Xue, Shaowei
2012-08-01
In this paper, we study the competition model of commercial websites with learning and updating speed, and further analyze the influence of learning and updating speed on the growth of commercial websites from a nonlinear dynamics perspective. Using the center manifold theory and the normal form method, we give the explicit formulas determining the stability and periodic fluctuation of commercial sites. Numerical simulations reveal that sites periodically fluctuate as the speed of learning and updating crosses one threshold. The study provides reference and evidence for website operators to make decisions.
Bifurcation Analysis and Chaos Control in a Modified Finance System with Delayed Feedback
NASA Astrophysics Data System (ADS)
Yang, Jihua; Zhang, Erli; Liu, Mei
2016-06-01
We investigate the effect of delayed feedback on the finance system, which describes the time variation of the interest rate, for establishing the fiscal policy. By local stability analysis, we theoretically prove the existences of Hopf bifurcation and Hopf-zero bifurcation. By using the normal form method and center manifold theory, we determine the stability and direction of a bifurcating periodic solution. Finally, we give some numerical solutions, which indicate that when the delay passes through certain critical values, chaotic oscillation is converted into a stable equilibrium or periodic orbit.
Effect of water depth on wind-wave frequency spectrum I. Spectral form
NASA Astrophysics Data System (ADS)
Wen, Sheng-Chang; Guan, Chang-Long; Sun, Shi-Cai; Wu, Ke-Jian; Zhang, Da-Cuo
1996-06-01
Wen et al.'s method, developed to obtain the wind-wave frequency spectrum in deep water, was used to derive the spectrum in water of finite depth. The spectrum S(ω) (ω being angular frequency), when normalized with the zeroth moment m_0 and the peak frequency, contains in addition to the peakness factor a depth parameter η = (2π m_0)^{1/2} / d (d being water depth), so the spectrum behavior can be studied for different wave growth stages and water depths.
Stability and Hopf Bifurcation for a Delayed SLBRS Computer Virus Model
Yang, Huizhong
2014-01-01
By incorporating the time delay due to the period that computers use antivirus software to clean the virus into the SLBRS model, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is presented to verify our analytical results. PMID:25202722
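The delay-induced Hopf mechanism analyzed in abstracts like this one can be illustrated on a generic toy equation (not the SLBRS model itself): for the scalar delay equation x'(t) = -x(t - τ), the characteristic equation λ + e^{-λτ} = 0 acquires a purely imaginary root λ = i exactly at the critical delay τ = π/2, which is where a Hopf bifurcation occurs.

```python
import numpy as np

def char_eq(lam, tau):
    """Characteristic function of x'(t) = -x(t - tau): roots satisfy
    lam + exp(-lam * tau) = 0."""
    return lam + np.exp(-lam * tau)

tau_c = np.pi / 2   # critical delay for this toy equation
root = 1j           # purely imaginary root sitting on the imaginary axis
# |char_eq(root, tau_c)| vanishes, confirming the Hopf crossing at tau = pi/2;
# at a sub-critical delay such as tau = 1 the same lambda is not a root.
```

The normal form and center manifold computations cited in the abstract then determine whether the periodic orbit born at such a crossing is stable and in which direction it bifurcates.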
Method for the purification of bis (2-ethyl-hexyl)phosphoric acid
Schulz, W.W.
1974-02-19
Foreign products, including the neutral organophosphorus compounds and the iron salts normally present in commercial bis(2-ethylhexyl)phosphoric acid (HDEHP), and the radiolytic degradation products formed on exposure of HDEHP to beta and gamma irradiation, are removed from HDEHP containing one or more of such products by contacting it with a macroreticular anion exchange resin in base form, whereby the DEHP- ion of HDEHP exchanges with the anion of the resin and is thus adsorbed on the resin, while the foreign products are not adsorbed and pass through a bed of particles of the resin. The adsorbed DEHP- ion is then eluted from the resin and acidified to form and recover the purified HDEHP. (auth)
Wright, Aidan G C; Hallquist, Michael N
2014-01-01
Studying personality and its pathology as it changes, develops, or remains stable over time offers exciting insight into the nature of individual differences. Researchers interested in examining personal characteristics over time have a number of time-honored analytic approaches at their disposal. In recent years there have also been considerable advances in person-oriented analytic approaches, particularly longitudinal mixture models. In this methodological primer we focus on mixture modeling approaches to the study of normative and individual change in the form of growth mixture models and ipsative change in the form of latent transition analysis. We describe the conceptual underpinnings of each of these models, outline approaches for their implementation, and provide accessible examples for researchers studying personality and its assessment.
Spatial organization of surface nanobubbles and its implications in their formation process.
Lhuissier, Henri; Lohse, Detlef; Zhang, Xuehua
2014-02-21
We study the size and spatial distribution of surface nanobubbles formed by the solvent exchange method to gain insight into the mechanism of their formation. The analysis of Atomic Force Microscopy (AFM) images of nanobubbles formed on a hydrophobic surface reveals that the nanobubbles are not randomly located, which we attribute to the role of the history of nucleation during the formation. Moreover, the size of each nanobubble is found to be strongly correlated with the area of the bubble-depleted zone around it. The precise correlation suggests that the nanobubbles grow by diffusion of the gas from the bulk rather than by diffusion of the gas adsorbed on the surface. Lastly, the size distribution of the nanobubbles is found to be well described by a log-normal distribution.
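Checking a log-normal size distribution of the kind reported here is straightforward: the logarithms of the sizes should be normally distributed, so their sample mean and standard deviation recover the log-normal parameters. A minimal sketch with synthetic data (the parameter values are illustrative, not the measured nanobubble values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sizes" drawn from a log-normal law with chosen parameters.
mu_true, sigma_true = 4.0, 0.5
sizes = rng.lognormal(mean=mu_true, sigma=sigma_true, size=10_000)

# Fit: log-transform, then take sample moments of the logs.
log_sizes = np.log(sizes)
mu_hat, sigma_hat = log_sizes.mean(), log_sizes.std()
# mu_hat and sigma_hat recover mu_true and sigma_true to within sampling error
```

A quantile-quantile plot of log(size) against a normal distribution is the usual visual check that accompanies such a fit.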
Intermetallic alloy welding wires and method for fabricating the same
Santella, M.L.; Sikka, V.K.
1996-06-11
Welding wires for welding together intermetallic alloys of nickel aluminides, nickel-iron aluminides, iron aluminides, or titanium aluminides, and preferably including additional alloying constituents are fabricated as two-component, clad structures in which one component contains the primary alloying constituent(s) except for aluminum and the other component contains the aluminum constituent. This two-component approach for fabricating the welding wire overcomes the difficulties associated with mechanically forming welding wires from intermetallic alloys which possess high strength and limited ductilities at elevated temperatures normally employed in conventional metal working processes. The composition of the clad welding wires is readily tailored so that the welding wire composition when melted will form an alloy defined by the weld deposit which substantially corresponds to the composition of the intermetallic alloy being joined. 4 figs.
The role of fungi in diseases of the nose and sinuses
Schlosser, Rodney J.
2012-01-01
Background: Human exposure to fungal elements is inevitable, with normal respiration routinely depositing fungal hyphae within the nose and paranasal sinuses. Fungal species can cause sinonasal disease, with clinical outcomes ranging from mild symptoms to intracranial invasion and death. There has been much debate regarding the precise role fungal species play in sinonasal disease and optimal treatment strategies. Methods: A literature review of fungal diseases of the nose and sinuses was conducted. Results: The presentation, diagnosis, and current management strategies of each recognized form of fungal rhinosinusitis were reviewed. Conclusion: Each form of fungal rhinosinusitis has a characteristic presentation and clinical course, with the immune status of the host playing a critical pathophysiological role. Accurate diagnosis and targeted treatment strategies are necessary to achieve optimal outcomes. PMID:23168148
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. The aim was to test the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not of second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli with normal processing for simple (i.e., first-order) form stimuli suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Leong, S K
1980-08-04
The present study shows that 3--5 days following lesions of the dentate and interposed nuclei in normal adult rats, degenerating axons and axon terminals can be detected in the contralateral pontine gray. The degenerating axon terminals form Gray's type I axo-dendritic contacts with fine and intermediate dendrites measuring 0.8--2.4 microns. The present study also investigates, by electron microscopy, the synaptic rearrangement of the sensorimotor corticopontine projections following neonatal left cerebellar hemispherectomy. Following neonatal left cerebellar hemispherectomy, the right sensorimotor and adjacent cortex (SMC) presents a very dense ipsilateral and a modest amount of contralateral corticopontine projections, in contrast with the predominantly ipsilateral corticopontine projection seen in the normal adult rat. As with the ipsilateral corticopontine projection seen in the normal adult animal, the bilateral corticopontine projections seen in the experimental animals form contacts with dendrites suggestive of Gray's type I synapses. While the corticopontine projections in normal control animals form synapses with fine dendrites measuring 0.2--1.2 microns, the corticopontine projections in the experimental animals form synaptic relations with fine dendrites and with intermediate dendrites measuring 0.2--2.4 microns. As the normal cerebellopontine fibers from the dentate and interposed nuclei also form axo-dendritic synapses on fine and intermediate dendrites, and the contacts formed are also Gray's type I synapses, it is possible that some of the newly formed corticopontine fibers in the experimental animals might have replaced the cerebellopontine fibers synapsing on intermediate dendrites. Synaptic rearrangement appears to take place, as suggested by the presence of synaptic complexes in which one axon terminal contacts two or more dendrites or two or more axon terminals contact one dendrite.
Such complexes are frequently seen to undergo degeneration following the right SMC lesion in the experimental animals. Other complex synaptic structures are also present in both the right and left pontine gray in the experimental animals. They are not seen to undergo degeneration following the right SMC lesions. Occasional features of neuronal reaction could still be seen in both sides of the pontine gray for as long as 3--6 months after the neonatal cerebellar lesions.
Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter
2011-01-01
The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489
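The contrast between the two normalization philosophies can be sketched as a toy computation. This is an illustrative simplification (the authors' GAIN procedure is more involved): it assumes MVC normalization divides by a maximal reference, while a gain-style scheme maps EMG to absolute force through a calibration slope fitted from sub-maximal trials, so no maximal exertion is needed.

```python
import numpy as np

def mvc_normalize(emg, mvc_emg):
    """Relative measure: activity as a fraction of the maximal reference."""
    return emg / mvc_emg

def gain_normalize(emg, calib_emg, calib_force):
    """Absolute measure: scale EMG by a gain (N per EMG unit) fitted from
    sub-maximal calibration trials, so no MVC is required."""
    gain = np.polyfit(calib_emg, calib_force, 1)[0]
    return gain * emg

# Hypothetical sub-maximal calibration trials: EMG amplitude vs measured force.
calib_emg = np.array([0.1, 0.2, 0.3])
calib_force = np.array([50.0, 100.0, 150.0])
```

In this picture the MVC output is a unitless fraction of maximum (suppressing between-subject differences), while the gain output is an estimated absolute force (preserving them), mirroring the trade-off described above.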
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Well formed. 51.3060 Section 51.3060 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Florida Avocados Definitions § 51.3060 Well formed. Well formed means that the avocado has the normal shape characteristic of the variety. ...
7 CFR 51.1007 - Fairly well formed.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Fairly well formed. 51.1007 Section 51.1007 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....1007 Fairly well formed. Fairly well formed means that the fruit shows normal characteristic shape for...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Well formed. 51.3060 Section 51.3060 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Florida Avocados Definitions § 51.3060 Well formed. Well formed means that the avocado has the normal shape characteristic of the variety. ...
7 CFR 51.1007 - Fairly well formed.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Fairly well formed. 51.1007 Section 51.1007 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards....1007 Fairly well formed. Fairly well formed means that the fruit shows normal characteristic shape for...
Development and Standardization of the Air Force Officer Qualifying Test Form L.
ERIC Educational Resources Information Center
Miller, Robert E.
In accordance with the normal replacement cycle, a new form of the Air Force Officer Qualifying Test (AFOQT) was developed for implementation in Fiscal Year 1972. The new form is designated Form L. It resembles other recent forms in type of content, organization, and norming strategy. Like other forms, it yields pilot, navigation-technical,…
Zeng, Z. S.; Guillem, J. G.
1998-01-01
Experimental in vitro and animal data support an important role for matrix metalloproteinases (MMPs) in cancer invasion and metastasis via proteolytic degradation of the extracellular matrix (ECM). Our previous data have shown that MMP-9 mRNA is localized to the interface between liver metastasis and normal liver tissue, indicating that MMP-9 may play an important role in liver metastasis formation. In the present study, we analysed the cellular enzymatic expression of MMP-9 in 18 human colorectal cancer (CRC) liver metastasis specimens by enzyme-linked immunosorbent assay (ELISA) and zymography. ELISA analysis reveals that the latent form of MMP-9 is present in both liver metastasis and paired adjacent normal liver tissue. The mean level of the latent form of MMP-9 is 580+/-270 ng per mg total tissue protein (mean+/-s.e.) in liver metastasis vs 220+/-90 in normal liver tissue; however, this difference is not statistically significant (P = 0.26). Using gelatin zymography, the 92-kDa band representative of the latent form is present in both liver metastasis and normal liver tissue. However, the 82-kDa band, representative of the active form of MMP-9, was seen only in liver metastasis. This was confirmed by Western blot analysis. Our observation of the unique presence of the active form of MMP-9 within liver metastasis suggests that proMMP-9 activation may be a pivotal event during CRC liver metastasis formation. Images: Figure 3, Figure 4. PMID:9703281
Evaluation of retrieval methods of daytime convective boundary layer height based on lidar data
NASA Astrophysics Data System (ADS)
Li, Hong; Yang, Yi; Hu, Xiao-Ming; Huang, Zhongwei; Wang, Guoyin; Zhang, Beidou; Zhang, Tiejun
2017-04-01
The atmospheric boundary layer height is a basic parameter in describing the structure of the lower atmosphere. Because of their high temporal resolution, ground-based lidar data are widely used to determine the daytime convective boundary layer height (CBLH), but the currently available retrieval methods have their advantages and drawbacks. In this paper, four methods of retrieving the CBLH (i.e., the gradient method, the idealized backscatter method, and two forms of the wavelet covariance transform method) from lidar normalized relative backscatter are evaluated, using two artificial cases (an idealized profile and a case similar to a real profile), to test their stability and accuracy. The results show that the gradient method is suitable for high signal-to-noise ratio conditions. The idealized backscatter method is less sensitive to the first estimate of the CBLH; however, it is computationally expensive. The results obtained from the two forms of the wavelet covariance transform method are influenced by the selection of the initial input value of the wavelet amplitude. Further sensitivity analysis using real profiles under different orders of magnitude of background counts shows that when different initial input values are set, the idealized backscatter method always obtains a consistent CBLH, whereas the two wavelet methods yield different CBLHs as the wavelet amplitude increases when noise is significant. Finally, the CBLHs measured by three lidar-based methods are evaluated against those measured from L-band soundings; the boundary layer heights from the two instruments agree within ±200 m in most situations.
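The wavelet covariance transform evaluated here can be sketched with a Haar wavelet. This is a generic illustration (the profile, height grid, and dilation value are assumptions, not the study's settings): the CBLH is taken at the height where the transform, which measures the local drop in backscatter, is largest.

```python
import numpy as np

def haar_wct(profile, z, a, b):
    """Haar wavelet covariance transform at dilation a and translation b:
    +1 over [b - a/2, b), -1 over [b, b + a/2], zero elsewhere."""
    h = np.where((z >= b - a / 2) & (z < b), 1.0,
        np.where((z >= b) & (z <= b + a / 2), -1.0, 0.0))
    dz = z[1] - z[0]
    return np.sum(profile * h) * dz / a

def cblh(profile, z, a=200.0):
    """CBLH estimate: the translation b maximizing the transform."""
    candidates = z[(z > z.min() + a) & (z < z.max() - a)]
    w = [haar_wct(profile, z, a, b) for b in candidates]
    return candidates[int(np.argmax(w))]

# Synthetic backscatter: high in the mixed layer, dropping near 1000 m.
z = np.arange(0.0, 3000.0, 10.0)
profile = 0.2 + 0.8 / (1.0 + np.exp((z - 1000.0) / 50.0))
```

The sensitivity the authors report corresponds to the choice of the dilation a (the "wavelet amplitude" input): too small and noise dominates, too large and the transform smears the entrainment zone.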
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, J.C.; Lee, W.R.; Chang, S.H.
1992-01-01
To study mechanisms for dominance of phenotype, eight ENU- and four x-ray-induced mutations at the alcohol dehydrogenase (Adh) locus were analyzed for partial dominance in their interaction with normal alleles. All ENU and one of the x-ray mutations were single base substitutions; the other three x-ray mutations were 9-21 base deletions. All but one of the 12 mutant alleles were selected for this study because they produced detectable mutant polypeptides, but seven of the 11 producing a peptide could not form dimers with the normal peptide, and the enzyme activity of heterozygotes was about half that of normal homozygotes. Four mutations formed dimers with a decreased catalytic efficiency and two of these were near the limit of detectability; these two also inhibited the formation of normal homodimers. The mutant alleles therefore show multiple mechanisms leading to partial enzyme expression in heterozygotes and a wide range of dominance, from almost completely recessive to nearly dominant. All amino acid changes in mutant peptides that form dimers are located between amino acids 182 and 194, so this region is not critical for dimerization. It may, however, be an important surface domain for catalysis. 34 refs., 8 figs., 2 tabs.
Multivariate classification of the infrared spectra of cell and tissue samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haaland, D.M.; Jones, H.D.; Thomas, E.V.
1997-03-01
Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF2 windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF2 windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. © 1997 Society for Applied Spectroscopy
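The principal-component step underlying the PCR and discriminant analyses described above can be sketched with an SVD. This is a minimal illustration of score extraction, not the authors' pipeline:

```python
import numpy as np

def pca_scores(X, k):
    """Scores of the first k principal components (rows of X are spectra)."""
    Xc = X - X.mean(axis=0)                      # center each spectral channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # project onto top-k loadings

# Rank-1 toy "spectra": one latent factor times a fixed loading vector,
# so a single principal component captures all the centered variance.
t = np.arange(10.0)
X = np.outer(t, np.array([1.0, 2.0, 3.0]))
scores = pca_scores(X, 1)
```

A classifier (linear discriminant analysis, or a regression on class labels as in PCR/PLS) is then trained on these low-dimensional scores instead of the raw high-dimensional spectra.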
A Novel Approach for Adaptive Signal Processing
NASA Technical Reports Server (NTRS)
Chen, Ya-Chin; Juang, Jer-Nan
1998-01-01
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If higher-order statistical stationarity is assumed, then the equivalent normal equations involve higher-order signal moments. In either case, the cross moments (second or higher order) are needed, which renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and considerable implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant departures from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
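The constant-modulus idea referenced above can be sketched with the classic CMA update for a real-valued one-tap gain. This is a generic illustration under assumed signals, not the report's algorithm: the update needs no reference signal, only the prior that the source has unit modulus.

```python
import numpy as np

def cma_step(w, x, mu=0.05, R2=1.0):
    """One constant-modulus update: blindly push |y|^2 toward R2."""
    y = w * x
    e = y * (R2 - y ** 2)   # constant-modulus error term (real signals)
    return w + mu * e * x

# A unit-modulus BPSK source observed through a gain-0.5 channel; the blind
# update drives the equalizer tap toward 2, restoring unit output modulus.
w = 0.1
rng = np.random.default_rng(1)
for _ in range(2000):
    s = rng.choice([-1.0, 1.0])
    w = cma_step(w, 0.5 * s)
```

Because only the output modulus is penalized, no cross moments between the desired and observed signals are required, which is exactly the "blind" property the abstract contrasts with normal-equation-based predictors.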
Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane
2018-02-01
Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA) and is generally interpreted together with the timing of activation. EMG amplitude comparisons between individuals, muscles, or days need normalization, and there is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. The normalization method inspired by the isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. EMG of the lower limb muscles (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7 ± 6.2 years, BMI 22.7 ± 3.3 kg m⁻²), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method requiring no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
[Primary culture of human normal epithelial cells].
Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun
2017-11-28
Traditional primary-culture methods for human normal epithelial cells suffer from low activity of the cultured cells, low culture success rates and complicated procedures. To address these problems, researchers have studied the culture process of human normal primary epithelial cells extensively. In this paper, we mainly introduce methods used for the separation and purification of human normal epithelial cells, such as tissue separation, enzyme-digestion separation, mechanical brushing, red blood cell lysis, and Percoll density-gradient separation. We also review methods used for culture and subculture, including serum-free medium combined with low-mass-fraction serum culture, mouse-tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of human normal epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting culture are summarized: aseptic operation, the conditions of the extracellular environment during culture, the number of differential-adhesion steps, and the selection and dosage of additives.
Bengtsson, Henrik; Hössjer, Ola
2006-03-01
Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, with a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to study the different normalization methods in more detail, and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. The focus is on two-channel comparative studies, but the findings generalize to single- and multi-channel data as well, and the discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, quantile normalization, and also dye-swap normalization are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and ensures the symmetry between negative and positive log-ratios that is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails.
All methods are made available in the aroma package, which is a platform-independent package for R.
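The affine model's core idea can be sketched in a few lines: each observed channel is treated as y = a + b·x (offset a, scale b), and normalization removes the relative offset and scale between channels so that non-differentially expressed genes fall on the identity line. The crude median-based estimator below is only an illustration; the aroma package uses a more careful robust multi-dimensional estimator:

```python
import numpy as np

def affine_normalize(R, G):
    """Two-channel affine normalization sketch: estimate the relative
    scale b from robust spreads (median absolute deviation) and the
    offset a from robust centers, then map channel G onto channel R's
    affine scale."""
    R = np.asarray(R, float)
    G = np.asarray(G, float)
    b = np.median(np.abs(R - np.median(R))) / np.median(np.abs(G - np.median(G)))
    a = np.median(R) - b * np.median(G)
    Gn = a + b * G                      # G expressed on R's scale
    return R, Gn
```

After this mapping, log-ratios log2(R/Gn) of non-differentially expressed genes scatter symmetrically around zero, which is the symmetry property discussed in the abstract.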
An exercise in rational taxonomy.
Ho, M W
1990-11-07
The quest for a rational taxonomy of living forms began in the 17th century. Since the general acceptance of Darwin's theory of descent with modification, however, students of morphology have become preoccupied with a systematics based on the genealogy of groups, and the rise of molecular phylogenies in recent years has resulted in a further decline of the science of morphology. Reconstructing phylogenies by itself brings us no closer to the goal of rational taxonomy, which is to uncover the natural order inherent in the forms of living things. It is proposed that the rational taxonomy of forms should be derived from a study of development, much as von Baer had envisaged. To illustrate the method, a set of segmentation abnormalities in Drosophila larvae (previously exposed to ether vapour) is considered; these can be individually classified as distinct disturbances in the process responsible for establishing the normal segmental pattern. The process consists of a hierarchy of four successive bifurcations dividing the embryo's body first into two parts, then four, eight, and finally 16 subdivisions or segments. This gives rise to a taxonomic map of all possible transformations which contains the "phylogeny" of the actual forms and provides a natural system for classifying them. Attempts to recover the "true" phylogeny by various numerical methods are summarized and their implications for the validity of the basic assumptions of contemporary systematics discussed.
Ding, Yue; Peng, Ming; Zhang, Tong; Tao, Jian-Sheng; Cai, Zhen-Zhen; Zhang, Yong
2013-10-01
Glucuronidation and sulfation represent two major pathways in phase II drug metabolism in humans and other mammalian species. The great majority of drugs, for example, polyphenols, flavonoids and anthraquinones, could be transformed into sulfated and glucuronidated conjugates simultaneously and extensively in vivo. The pharmacological activities of drug conjugations are normally decreased compared with those of their free forms. However, some drug conjugates may either bear biological activities themselves or serve as excellent sources of biologically active compounds. As the bioactivities of drugs are thought to be relevant to the kinetics of their conjugates, it is essential to study the pharmacokinetic behaviors of the conjugates in more detail. Unfortunately, the free forms of drugs cannot be detected directly in most cases if their glucuronides and sulfates are the predominant forms in biological samples. Nevertheless, an initial enzymatic hydrolysis step using β-glucuronidase and/or sulfatase is usually performed to convert the glucuronidated and/or sulfated conjugates to their free forms prior to the extraction, purification and other subsequent analysis steps in the literature. This review provides fundamental information on drug metabolism pathways, the bio-analytical strategies for the quantification of various drug conjugates, and the applications of the analytical methods to pharmacokinetic studies. Copyright © 2013 John Wiley & Sons, Ltd.
[Clinical and MRI Findings in Patients with Congenital Anosmia].
Ogawa, Takao; Kato, Tomohisa; Ono, Mayu; Shimizu, Takeshi
2015-08-01
The clinical characteristics of 16 patients with congenital anosmia were examined retrospectively. MRI (magnetic resonance imaging) was used to assess the morphological changes in the olfactory bulbs and olfactory sulci according to the method of P. Rombaux (2009). Congenital anosmia was divided into two forms: syndromic forms in association with a syndrome, and isolated forms without evidence of other defects. Only three patients (19%) in our series had syndromic forms of congenital anosmia, such as the Kallmann syndrome. Most cases (13 patients, 81%) had isolated congenital anosmia. Psychophysical testing of the olfactory function included T&T olfactometry and the intravenous Alinamin test, which are widely used in Japan. In T&T olfactometry, detection and recognition thresholds for the five odorants are used to assign a diagnostic category representing the level of olfactory function. Most cases (14 patients, 88%) showed off-scale results on T&T olfactometry, and the Alinamin test resulted in no response in all 11 patients who underwent the test. Abnormal MRI findings of the olfactory bulbs and sulci were detected in 15 of 16 patients (94%). Olfactory bulbs were bilaterally absent in nine patients (56%), and two patients (13%) had unilateral olfactory bulbs. Four patients (25%) had bilateral hypoplastic olfactory bulbs, and only one patient had normal olfactory bulbs (6%). The olfactory sulcus was unilaterally absent in one patient (6%), and nine patients (56%) had bilaterally hypoplastic olfactory sulci. Two patients (13%) had a unilateral normal olfactory sulcus and hypoplastic olfactory sulcus. Three patients (19%) had normal olfactory sulci. Quantitative analysis showed that the volume of olfactory bulbs varied from 0 mm3 to 63.5 mm3, with a mean volume of 10.20 ± 18 mm3, and the mean depth of the olfactory sulcus varied from 0 mm to 12.22 mm, with a mean length of 4.85 ± 4.1 mm. Currently, there is no effective treatment for congenital anosmia. 
However, diagnosis of congenital anosmia is important, as its presence can lead to dangerous situations. Careful examination for hypogonadism is also required in people with anosmia. MRI examinations of the olfactory bulbs and sulci were useful for the diagnosis of congenital anosmia.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
Identification, recognition and misidentification syndromes: a psychoanalytical perspective
Thibierge, Stéphane; Morin, Catherine
2013-01-01
Misidentification syndromes are currently often understood as cognitive disorders of either the “sense of uniqueness” (Margariti and Kontaxakis, 2006) or the recognition of people (Ellis and Lewis, 2001). It is however, necessary to consider how a normal “sense of uniqueness” or normal person recognition are acquired by normal or neurotic subjects. It will be shown here that the normal conditions of cognition can be considered as one of the possible forms of a complex structure and not as just a setting for our sense and perception data. The consistency and the permanency of the body image in neurosis is what permits the recognition of other people and ourselves as unique beings. This consistency and permanency are related to object repression, as shown by neurological disorders of body image (somatoparaphrenia), which cause the object to come to the foreground in the patient’s words (Thibierge and Morin, 2010). In misidentification syndromes, as in other psychotic syndromes, one can also observe damage to the specular image as well as an absence of object repression. This leads us to question whether, in the psychiatric disorders related to a damaged specular image, disorders of cognition can be studied and managed using the same methods as for neurotic patients. PMID:24298262
Value of in vitro acoustic radiation force impulse application on uterine adenomyosis.
Bildaci, Tevfik Berk; Cevik, Halime; Yilmaz, Birnur; Desteli, Guldeniz Aksan
2017-11-24
Adenomyosis is the presence of endometrial glandular and stromal tissue in the myometrium. This phenomenon can be the cause of excessive bleeding and menstrual pain in premenopausal women. Diagnosis of adenomyosis may present difficulty with conventional methods such as ultrasound and magnetic resonance imaging. Frequently, diagnosis is accomplished retrospectively based on the hysterectomy specimen. This is a prospective case control study done in vitro on 90 patients' hysterectomy specimens. Acoustic radiation force impulse (ARFI) and color elastography were used to determine the elasticity of hysterectomy specimens of patients undergoing indicated surgeries. Based on histopathological examinations, two groups were formed: a study group (n = 28-with adenomyosis) and a control group (n = 62-without adenomyosis). Elasticity measurements of tissue with adenomyosis were observed to be significantly higher than measurements of normal myometrial tissue (p < 0.01). Uterine fibroids were found to have higher values on ARFI study compared to normal myometrial tissues (p < 0.01). The findings lead to the conclusion that adenomyosis tissue is significantly softer than the normal myometrium. ARFI was found to be beneficial in differentiating myometrial tissue with adenomyosis from normal myometrial tissue. It was found to be feasible and beneficial to implement ARFI in daily gynecology practice for diagnosis of adenomyosis.
Intra- and interpattern relations in letter recognition.
Sanocki, T
1991-11-01
Strings of 4 unrelated letters were backward masked at varying durations to examine 3 major issues. (a) One issue concerned relational features. Letters with abnormal relations but normal elements were created by interchanging elements between large and small normal letters. Overall accuracy was higher for letters with normal relations, consistent with the idea that relational features are important in recognition. (b) Interpattern relations were examined by mixing large and small letters within strings. Relative to pure strings, accuracy was reduced, but only for small letters and only when in mixed strings. This effect can be attributed to attentional priority for larger forms over smaller forms, which also explains global precedence with hierarchical forms. (c) Forced-choice alternatives were manipulated in Experiments 2 and 3 to test feature integration theory. Relational information was found to be processed at least as early as feature presence or absence.
Tissue grown in NASA Bioreactor
NASA Technical Reports Server (NTRS)
1998-01-01
Cells from kidneys lose some of their special features in conventional culture but form spheres replete with specialized cell microvilli (hair) and synthesize hormones that may be clinically useful. Ground-based research studies have demonstrated that both normal and neoplastic cells and tissues recreate many of the characteristics in the NASA bioreactor that they display in vivo. Proximal kidney tubule cells that normally have rich apically oriented microvilli with intercellular clefts in the kidney do not form any of these structures in conventional two-dimensional monolayer culture. However, when normal proximal renal tubule cells are cultured in three-dimensions in the bioreactor, both the microvilli and the intercellular clefts form. This is important because, when the morphology is recreated, the function is more likely also to be rejuvenated. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC).
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Well formed. 51.3747 Section 51.3747 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Definitions § 51.3747 Well formed. Well formed means that the melon has the normal shape characteristic of the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Well formed. 51.488 Section 51.488 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Cantaloups 1 Definitions § 51.488 Well formed. Well formed means that the cantaloup has the normal shape characteristic of the variety. ...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Well formed. 51.488 Section 51.488 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Cantaloups 1 Definitions § 51.488 Well formed. Well formed means that the cantaloup has the normal shape characteristic of the variety. ...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Well formed. 51.3747 Section 51.3747 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... Definitions § 51.3747 Well formed. Well formed means that the melon has the normal shape characteristic of the...
Kornerup, Linda S; Fedosov, Sergey N; Juul, Christian B; Greibe, Eva; Heegaard, Christian W; Nexo, Ebba
2018-06-01
Hydroxocobalamin (HOCbl) is the dominant Cbl form in food, whereas cyanocobalamin (CNCbl) is common in vitamin pills and oral supplements. This study compares single-dose absorption and distribution of oral HO[57Co]Cbl and CN[57Co]Cbl in Cbl-deficient and normal rats. Male Wistar rats (7 weeks old) were fed a 14-day diet with (n = 15) or without (n = 15) Cbl. We compared the uptakes of HO[57Co]Cbl (free or bound to bovine transcobalamin) and free CN[57Co]Cbl administered by gastric gavage (n = 5 in each diet group). Rats were sacrificed after 24 h. Blood, liver, kidney, brain, heart, spleen, intestines, skeletal muscle, 24-h urine and faeces were collected, and the content of [57Co]Cbl was measured. Endogenous Cbl in tissues and plasma was analysed by routine methods. Mean endogenous plasma Cbl was sevenfold lower in deficient vs. normal rats (190 vs. 1330 pmol/L, p < 0.0001). Cbl depletion increased endogenous Cbl ratios (tissue/plasma = k_in/k_out) in all organs except the kidney, where the ratio decreased considerably. Twenty-four-hour accumulation of labelled Cbl showed that HOCbl > CNCbl (liver) and CNCbl > HOCbl (brain, muscle and plasma). The Cbl status of rats and the administered Cbl form influence 24-h Cbl accumulation in tissues and plasma.
Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.
Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam
2018-06-01
The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem in which approximation and heuristic algorithms are usually used to find near-optimal folds of protein primary structures; approximation algorithms additionally provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here relies on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic (HP) model. Cellular automata are discrete computational models that rely on local rules to produce an overall global behavior. The known one-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts: they start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other ignores H's at odd positions, and blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve on these approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms; the CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only, and is proved by showing that the core and the folds of the protein have two identical sides for all short sequences.
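The objective all of these algorithms optimize, the number of H-H contacts of a 2D lattice fold, can be scored as follows. This is an illustrative scoring helper for the standard HP model, not the cellular-automaton folder from the paper:

```python
def hh_contacts(seq, coords):
    """Count hydrophobic-hydrophobic (H-H) contacts of a 2D lattice fold
    in the HP model: pairs of H residues that are non-consecutive in the
    sequence but occupy adjacent lattice sites. seq is a string over
    {'H','P'}; coords[i] is the (x, y) lattice site of residue i."""
    pos = {tuple(c): i for i, c in enumerate(coords)}
    score = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):      # right/up only: each edge counted once
            j = pos.get(nb)
            if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                score += 1
    return score
```

For example, the sequence HHHH folded into a unit square gains exactly one contact, between the two chain endpoints.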
Self-Esteem of Gifted, Normal, and Mild Mentally Handicapped Children.
ERIC Educational Resources Information Center
Chiu, Lian-Hwang
1990-01-01
Administered Coopersmith Self-Esteem Inventory (SEI) Form B to elementary school students (N=450) identified as gifted, normal, and mild mentally handicapped (MiMH). Results indicated that both the gifted and normal children had significantly higher self-esteem than did the MiMH children, but there were no differences between gifted and normal…
Sandstone-filled normal faults: A case study from central California
NASA Astrophysics Data System (ADS)
Palladino, Giuseppe; Alsop, G. Ian; Grippa, Antonio; Zvirtes, Gustavo; Phillip, Ruy Paulo; Hurst, Andrew
2018-05-01
Despite the potential of sandstone-filled normal faults to significantly influence fluid transmissivity within reservoirs and the shallow crust, they have to date been largely overlooked. Fluidized sand, forcefully intruded along normal fault zones, markedly enhances the transmissivity of faults and, in general, the connectivity between otherwise unconnected reservoirs. Here, we provide a detailed outcrop description and interpretation of sandstone-filled normal faults from different stratigraphic units in central California. Such faults commonly show limited fault throw, cm- to dm-wide apertures, poorly-developed fault zones and full or partial sand infill. Based on these features and inferences regarding their origin, we propose a general classification that defines two main types of sandstone-filled normal faults. Type 1 faults form as a consequence of the hydraulic failure of the host strata above a poorly-consolidated sandstone following a significant, rapid increase of pore-fluid overpressure. Type 2 sandstone-filled normal faults form as a result of regional tectonic deformation. These structures may play a significant role in the connectivity of siliciclastic reservoirs, and may therefore be crucial not just for the investigation of basin evolution but also in hydrocarbon exploration.
NASA Astrophysics Data System (ADS)
Mukherjee, V.; Singh, N. P.; Yadav, R. A.
2013-04-01
A vibrational spectroscopic study has been made of the serotonin molecule and its deprotonated form. The infrared and Raman spectra of these two molecules at their optimized geometries are calculated using density functional theory, and the normal modes are assigned using potential energy distributions (PEDs) computed by the normal coordinate analysis method. The vibrational frequencies of the two molecules are reported and compared, and the effect of removing the hydrogen atom from the serotonin molecule on its geometry and vibrational frequencies is studied. The electronic structures of the two molecules are also studied using natural bond orbital (NBO) analysis. Theoretical Raman spectra of serotonin at different exciting laser frequencies and at different temperatures are obtained and the results are discussed. The present study reveals that some incorrect assignments had been made for the serotonin molecule in an earlier study.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
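The two ingredients of such a fit, a log-normal surface density in orbital separation and a Markov chain Monte Carlo exploration of the parameter likelihood, can be sketched as below. Parameter names and values are illustrative assumptions, not the paper's posterior estimates:

```python
import numpy as np

def lognormal_surface_density(a, mu=1.0, sigma=1.0, A=1.0):
    """Log-normal orbital surface density: dN/dlog(a) proportional to
    exp(-(ln a - mu)^2 / (2 sigma^2)), peaking at a = exp(mu)."""
    return A * np.exp(-(np.log(a) - mu) ** 2 / (2 * sigma ** 2))

def metropolis(loglike, theta0, steps=2000, scale=0.1, rng=None):
    """Minimal Metropolis sampler of the kind used to explore the
    likelihood of the density parameters (e.g. mu, sigma, A)."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    ll = loglike(theta)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        llp = loglike(prop)
        if np.log(rng.random()) < llp - ll:   # accept with prob min(1, L'/L)
            theta, ll = prop, llp
        chain.append(theta.copy())
    return np.asarray(chain)
```

Given a likelihood built from the survey point estimates, the chain's samples give the per-parameter probability distributions and the maximum-likelihood solution reported in the abstract.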
Biofilms of vaginal Lactobacillus in vitro test.
Wei, Xiao-Yu; Zhang, Rui; Xiao, Bing-Bing; Liao, Qin-Ping
2017-01-01
This paper focuses on biofilms of Lactobacillus spp., a component of the normal flora isolated from the healthy vaginas of women of childbearing age, and thereby broadens the scope of research on vaginal normal flora. The static slide culture method was adopted to grow biofilms, which were marked by specific fluorescence staining. Laser scanning confocal microscopy and scanning electron microscopy were used to observe the microstructure of the biofilms, and photographs of the microstructure were analysed to calculate biofilm density. The Lactobacillus cells, stained red, appeared yellow where they interacted with the green-stained extracellular polysaccharides. The structure of the biofilm and the water channels within it were imaged, and Lactobacillus density increased over time. This study provides convincing evidence that Lactobacillus can form biofilms and grow over time in vitro, establishing an important and necessary condition for selecting proper strains for the pharmaceutics of vaginal ecology.
The tongue: deglutition, orofacial functions and craniofacial growth.
Landouzy, Jean-Marie; Sergent Delattre, Anne; Fenart, Raphaël; Delattre, Benoît; Claire, Jacques; Biecq, Marion
2009-09-01
So-called "primary" or "infantile" forms of deglutition, also termed lingual dyspraxia, are treated in different ways by orthodontists using various appliances to correct the condition and are also managed by speech-therapists and physiotherapists. The results obtained are often unstable. We have developed a more holistic approach to this disorder by attempting to grasp the underlying mechanisms in order to achieve more satisfactory correction. By establishing normal salivary deglutition more rapidly, this manual osteopathic technique complements the methods which use voluntary rehabilitation to impress upon the body's physical reflexes the "motor image" of the act to be accomplished. In order to render this article more lively and accessible, we have chosen to let the tongue speak in the first person--which, after all, is only normal! Copyright (c) 2009 Collège Européen d'Orthodontie. Published by Elsevier Masson SAS. All rights reserved.
Maintaining normality and support are central issues when receiving chemotherapy for ovarian cancer.
Ekman, Inger; Bergbom, Ingegerd; Ekman, Tor; Berthold, Harrieth; Mahsneh, Sawsan Majali
2004-01-01
The aim of this study was to enrich the understanding of patients' perspective of being diagnosed and treated for ovarian cancer. A qualitative approach was used to obtain knowledge and insight into patients' experiences and thoughts. Ten Swedish women, diagnosed with ovarian cancer, participated in a total of 23 interviews on 3 occasions: at the time of diagnosis, during chemotherapy, and after completion of chemotherapy. The results of the interpretation of the interviews were formulated in the form of 3 themes: (1) feeling the same despite radical castrating surgery, (2) accepting chemotherapy, and (3) maintaining normality and support. Suggestions of caring implications from our interpretation of the interview data underscore the need to support these women in learning to cope with their feelings of weakness and anxiety. The findings further indicate the potential in narrative methods to identify important issues in comprehensive cancer care.
Rapin, Nicolas; Bagger, Frederik Otzen; Jendholm, Johan; Mora-Jensen, Helena; Krogh, Anders; Kohlmann, Alexander; Thiede, Christian; Borregaard, Niels; Bullinger, Lars; Winther, Ole; Theilgaard-Mönch, Kim; Porse, Bo T
2014-02-06
Gene expression profiling has been used extensively to characterize cancer, identify novel subtypes, and improve patient stratification. However, it has largely failed to identify transcriptional programs that differ between cancer and corresponding normal cells and has not been efficient in identifying expression changes fundamental to disease etiology. Here we present a method that facilitates the comparison of any cancer sample to its nearest normal cellular counterpart, using acute myeloid leukemia (AML) as a model. We first generated a gene expression-based landscape of the normal hematopoietic hierarchy, using expression profiles from normal stem/progenitor cells, and next mapped the AML patient samples to this landscape. This allowed us to identify the closest normal counterpart of individual AML samples and determine gene expression changes between cancer and normal. We find the cancer vs normal method (CvN method) to be superior to conventional methods in stratifying AML patients with aberrant karyotype and in identifying common aberrant transcriptional programs with potential importance for AML etiology. Moreover, the CvN method uncovered a novel poor-outcome subtype of normal-karyotype AML, which allowed for the generation of a highly prognostic survival signature. Collectively, our CvN method holds great potential as a tool for the analysis of gene expression profiles of cancer patients.
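The central step of a cancer-vs-normal comparison of this kind can be sketched as follows: map each cancer expression profile to its most similar normal reference profile, then take the expression difference against that counterpart. This is a hypothetical minimal sketch of the CvN idea using plain correlation; the paper itself maps AML samples onto a gene expression landscape of the normal hematopoietic hierarchy:

```python
import numpy as np

def cvn_differential(cancer, normals):
    """For one cancer sample (vector of gene expression values), find the
    most correlated profile among normal stem/progenitor references
    (rows of `normals`) and return its index together with the
    cancer-minus-nearest-normal expression changes."""
    cancer = np.asarray(cancer, float)
    normals = np.asarray(normals, float)          # shape: (n_refs, n_genes)
    cors = [np.corrcoef(cancer, ref)[0, 1] for ref in normals]
    nearest = int(np.argmax(cors))
    return nearest, cancer - normals[nearest]
```

The returned differences isolate expression changes relative to the closest normal counterpart rather than to an arbitrary bulk-normal average.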
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-09
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.
Time-dependent summary receiver operating characteristics for meta-analysis of prognostic studies.
Hattori, Satoshi; Zhou, Xiao-Hua
2016-11-20
Prognostic studies are widely conducted to examine whether biomarkers are associated with patients' prognoses and play important roles in medical decisions. Because findings from one prognostic study may be very limited, meta-analyses may be useful to obtain sound evidence. However, prognostic studies are often analyzed by relying on a study-specific cut-off value, which can lead to difficulty in applying the standard meta-analysis techniques. In this paper, we propose two methods to estimate a time-dependent version of the summary receiver operating characteristics curve for meta-analyses of prognostic studies with a right-censored time-to-event outcome. We introduce a bivariate normal model for the pair of time-dependent sensitivity and specificity and propose a method to form inferences based on summary statistics reported in published papers. This method provides a valid inference asymptotically. In addition, we consider a bivariate binomial model. To draw inferences from this bivariate binomial model, we introduce a multiple imputation method. The multiple imputation is found to be approximately proper multiple imputation, and thus the standard Rubin's variance formula is justified from a Bayesian viewpoint. Our simulation study and application to a real dataset revealed that both methods work well with a moderate or large number of studies and that the bivariate binomial model coupled with the multiple imputation outperforms the bivariate normal model with a small number of studies. Copyright © 2016 John Wiley & Sons, Ltd.
Use of history science methods in exposure assessment for occupational health studies.
Johansen, K; Tinnerberg, H; Lynge, E
2005-07-01
To show the power of history science methods for exposure assessment in occupational health studies, using the dry cleaning industry in Denmark around 1970 as the example. Exposure data and other information on exposure status were searched for in unconventional data sources such as the Danish National Archives, the Danish Royal Library, archives of Statistics Denmark, the National Institute of Occupational Health, Denmark, and the Danish Labor Inspection Agency. Individual census forms were retrieved from the Danish National Archives. It was estimated that in total 3267 persons worked in the dry cleaning industry in Denmark in 1970. They typically worked in small shops with an average size of 3.5 persons. Of these, 2645 persons were considered exposed to solvents as they were dry cleaners or worked very close to the dry cleaning process, while 622 persons were office workers, drivers, etc in shops with 10 or more persons. It was estimated that tetrachloroethylene constituted 85% of the dry cleaning solvent used, and that a shop would normally have two machines using 4.6 tons of tetrachloroethylene annually. The history science methods, including retrieval of material from the Danish National Archives and a thorough search in the Royal Library for publications on dry cleaning, turned out to be a very fruitful approach for collection of exposure data on dry cleaning work in Denmark. The history science methods proved to be a useful supplement to the exposure assessment methods normally applied in epidemiological studies.
Masturbation, sexuality, and adaptation: normalization in adolescence.
Shapiro, Theodore
2008-03-01
During adolescence the central masturbation fantasy that is formulated during childhood takes its final form and paradoxically must now be directed outward for appropriate object finding and pair matching in the service of procreative aims. This is a step in adaptation that requires a further developmental landmark that I have called normalization. The path toward airing these private fantasies is facilitated by chumship relationships as a step toward further exposure to the social surround. Hartmann's structuring application of adaptation within psychoanalysis is used as a framework for understanding the process that simultaneously serves intrapsychic and social demands and permits goals that follow evolutionary principles. Variations in the normalization process from masturbatory isolation to a variety of forms of sexual socialization are examined in sociological data concerning current adolescent sexual behavior and in case examples that indicate some routes to normalized experience and practice.
Gaudreau, Éric; Bérubé, René; Bienvenu, Jean-François; Fleury, Normand
2016-06-01
Data on the stability of monohydroxy polycyclic aromatic hydrocarbons (OH-PAHs; metabolites of PAHs) in urine are needed in order to effectively study the effects of PAHs in the body, but the relevant data are not available in the literature. Therefore, in this work, we investigated the stability of OH-PAHs in urine. For each OH-PAH studied, the free form (as opposed to the conjugated form) comprised <10 % of the total OH-PAH in urine samples obtained from a normal population, except for 9-OH-phenanthrene (where the free form represented 22.2 % of the total 9-OH-phenanthrene). 1-Naphthol and 9-OH-phenanthrene were found to be less stable in their free forms in urine than in their conjugated forms when the urine samples were stored at 4 °C or room temperature. Free 3-OH-fluoranthene was also very unstable at 4 °C or room temperature. The conjugated forms of the OH-PAHs were more stable than their corresponding free forms. However, the free and conjugated forms of all the OH-PAHs were stable in urine at -20 °C and -80 °C. A freeze and thaw assay also revealed that freezing and thawing had minimal impact on the stability of the OH-PAHs in urine. For the derivatized extracts, storing the samples under an argon atmosphere at 4 °C was found to maintain sample integrity. In order to measure the stabilities of 19 hydroxylated metabolites of PAHs in urine, we developed a method with sensitivity in the low pg/mL range using nine labeled internal standards. This method combined enzymatic deconjugation with liquid-liquid extraction, derivatization with N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA), and gas chromatography/tandem mass spectrometry (GC-MS/MS). Graphical abstract: Stability of the conjugated forms of the OH-PAHs versus free forms (e.g. 1-naphthol).
Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie
2016-01-01
The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for the disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.
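The dictionary lookup stage of such a normalization pipeline can be sketched as below. The preprocessing steps, the toy dictionary, and the concept IDs are illustrative assumptions, not the authors' actual configuration.

```python
import re

def normalize_mention(mention):
    """Canonicalize a disease mention: lowercase, replace punctuation
    with spaces, collapse whitespace (illustrative preprocessing only)."""
    mention = mention.lower()
    mention = re.sub(r"[^\w\s]", " ", mention)
    return re.sub(r"\s+", " ", mention).strip()

def lookup(mention, dictionary):
    """Map a recognized mention to a concept ID via exact match on the
    normalized surface form; return None when out of vocabulary."""
    return dictionary.get(normalize_mention(mention))

# Toy dictionary keyed on normalized surface forms (IDs for illustration)
disease_dict = {
    "beta thalassemia": "D017086",
    "ovarian cancer": "D010051",
}
print(lookup("Beta-Thalassemia", disease_dict))
```

In practice the lookup is combined with abbreviation expansion and fuzzy matching, which is where most of the reported F-measure gains come from.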
7 CFR 51.1007 - Fairly well formed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Fairly well formed. 51.1007 Section 51.1007... STANDARDS) United States Standards for Persian (Tahiti) Limes Definitions § 51.1007 Fairly well formed. Fairly well formed means that the fruit shows normal characteristic shape for the Persian variety and is...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Well formed. 51.2653 Section 51.2653 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades for Sweet Cherries 1 Definitions § 51.2653 Well formed. Well formed means that the cherry has the normal shape characteristic of the variety, except that...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Well formed. 51.2653 Section 51.2653 Agriculture... Standards for Grades for Sweet Cherries 1 Definitions § 51.2653 Well formed. Well formed means that the cherry has the normal shape characteristic of the variety, except that mature well developed doubles...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Well formed. 51.2653 Section 51.2653 Agriculture... Standards for Grades for Sweet Cherries 1 Definitions § 51.2653 Well formed. Well formed means that the cherry has the normal shape characteristic of the variety, except that mature well developed doubles...
7 CFR 51.1007 - Fairly well formed.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Fairly well formed. 51.1007 Section 51.1007... STANDARDS) United States Standards for Persian (Tahiti) Limes Definitions § 51.1007 Fairly well formed. Fairly well formed means that the fruit shows normal characteristic shape for the Persian variety and is...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Well formed. 51.2653 Section 51.2653 Agriculture..., CERTIFICATION, AND STANDARDS) United States Standards for Grades for Sweet Cherries 1 Definitions § 51.2653 Well formed. Well formed means that the cherry has the normal shape characteristic of the variety, except that...
Visual attention and flexible normalization pools
Schwartz, Odelia; Coen-Cagli, Ruben
2013-01-01
Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
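The divisive-normalization-with-attention computation can be sketched as follows, in the spirit of the Reynolds & Heeger (2009) model the abstract cites. This is a heavily simplified sketch: the gain values and the stability constant sigma are assumptions, and the real model includes spatial pooling and response exponents.

```python
def normalized_response(drives, attn_gains, sigma=0.5):
    """Divisive normalization with attentional gain: each unit's drive is
    scaled by its attention gain, then divided by the summed attended
    drive of the normalization pool plus a stability constant sigma."""
    excitatory = [a * d for a, d in zip(attn_gains, drives)]
    pool = sum(excitatory)
    return [e / (sigma + pool) for e in excitatory]

# Attending to unit 0 (gain 2.0) boosts its response relative to the
# uniform-attention case, while suppressing the unattended units.
uniform = normalized_response([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
attended = normalized_response([1.0, 1.0, 1.0], [2.0, 1.0, 1.0])
```

The paper's key extension is that membership in the normalization pool itself is gated by inferred statistical dependence between center and surround, which this sketch does not model.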
NASA Astrophysics Data System (ADS)
Chen, Yi-Zhe; Liu, Wei; Yuan, Shi-Jian
2015-05-01
Normally, the strength and formability of aluminum alloys can be increased substantially by severe plastic deformation and heat treatment. However, many plastic deformation processes are more suitable for making raw material than for forming parts. In this article, an experimental study of thermomechanical treatment using the sheet hydroforming process was developed to improve both the mechanical strength and the formability of aluminum alloys when forming complex parts. The limiting drawing ratio, thickness, and strain distribution of complex parts formed by sheet hydroforming were investigated to study the formability and sheet-deformation behavior. Based on the optimal formed parts, the tensile strength, microhardness, grain structure, and strengthening precipitates were analyzed to identify the strengthening effect of the thermomechanical treatment. The results show that in the solution state, the limiting drawing ratio of cylindrical parts could be increased by 10.9% compared with the traditional deep drawing process. The peak values of tensile stress and microhardness of the formed parts are 18.0% and 12.5% higher than those in the T6 state. This investigation shows that thermomechanical treatment by sheet hydroforming is a promising method for manufacturing aluminum alloy products with high strength and good formability.
Metal Complexation in Xylem Fluid 1
White, Michael C.; Chaney, Rufus L.; Decker, A. Morris
1981-01-01
The capacity of ligands in xylem fluid to form metal complexes was tested with a series of in vitro experiments using paper electrophoresis and radiographs. The xylem fluid was collected hourly for 8 hours from soybean (Glycine max L. Merr.) and tomato (Lycopersicon esculentum Mill.) plants grown in normal and Zn-phytotoxic nutrient solutions. Metal complexation was assayed by anodic or reduced cathodic movement of radionuclides (63Ni, 65Zn, 109Cd, 54Mn) that were presumed to have formed negatively charged complexes. Electrophoretic migration of Ni, Zn, Cd, and Mn added to xylem exudate and spotted on KCl- or KNO3-wetted paper showed that stable Ni, Zn, and Cd metal complexes were formed by exudate ligands. No anodic Mn complexes were observed in this test system. Solution pH, plant species, exudate collection time, and Zn phytotoxicity all affected the amount of metal complex formed in exudate. As the pH increased, there was increased anodic metal movement. Soybean exudate generally bound more of each metal than did tomato exudate. Metal binding usually decreased with increasing exudate collection time, and less metal was bound by the high-Zn exudate. Ni, Zn, Cd, and Mn in exudate added to exudate-wetted paper demonstrated the effect of ligand concentration on stable metal complex formation. Complexes for each metal were demonstrable with this method. Cathodic metal movement increased with time of exudate collection, and it was greater in the high-Zn exudate than in the normal-Zn exudate. A model study illustrated the effect of ligand concentration on metal complex stability in the electrophoretic field. Higher ligand (citric acid) concentrations increased the stability for all metals tested. PMID:16661666
ERAP1 Reduces Accumulation of Aberrant and Disulfide-Linked Forms of HLA-B27 on the Cell Surface
Tran, Tri; Hong, Sohee; Edwan, Jehad; Colbert, Robert A.
2016-01-01
Objective Endoplasmic reticulum (ER) aminopeptidase 1 (ERAP1) variants contribute to the risk of ankylosing spondylitis in HLA-B27 positive individuals, implying a disease-related interaction between these gene products. The aim of this study was to determine whether reduced ERAP1 expression would alter the cell surface expression of HLA-B27 and the formation of aberrant disulfide-linked forms that have been implicated in the pathogenesis of spondyloarthritis. Methods ERAP1 expression was knocked down in monocytic U937 cells expressing HLA-B27 and endogenous HLA class I. The effect of ERAP1 knockdown on the accumulation of HLA-B alleles (B18, B51, and B27) was assessed using immunoprecipitation, isoelectric focusing, and immunoblotting, as well as flow cytometry with antibodies specific for different forms of HLA-B27. Cell surface expression of aberrant disulfide-linked HLA-B27 dimers was assessed by immunoprecipitation and electrophoresis on non-reducing polyacrylamide gels. Results ERAP1 knockdown increased the accumulation of HLA-B27 on the cell surface, including disulfide-linked dimers, but had no effect on levels of HLA-B18 or -B51. Antibodies with unique specificity for HLA-B27 confirmed increased cell surface expression of complexes shown previously to contain long peptides. IFN-γ treatment resulted in striking increases in the expression of disulfide-linked HLA-B27 heavy chains, even in cells with normal ERAP1 expression. Conclusions Our results suggest that normal levels of ERAP1 reduce the accumulation of aberrant and disulfide-linked forms of HLA-B27 in monocytes, and thus help to maintain the integrity of cell surface HLA-B27 complexes. PMID:27107845
Koplas, P A; Rosenberg, R L; Oxford, G S
1997-05-15
Capsaicin (Cap) is a pungent extract of the Capsicum pepper family, which activates nociceptive primary sensory neurons. Inward current and membrane potential responses of cultured neonatal rat dorsal root ganglion neurons to capsaicin were examined using whole-cell and perforated patch recording methods. The responses exhibited strong desensitization operationally classified as acute (diminished response during constant Cap exposure) and tachyphylaxis (diminished response to successive applications of Cap). Both acute desensitization and tachyphylaxis were greatly diminished by reductions in external Ca2+ concentration. Furthermore, chelation of intracellular Ca2+ by addition of either EGTA or bis(2-aminophenoxy)ethane-N,N,N',N'-tetra-acetic acid to the patch pipette attenuated both forms of desensitization even in normal Ca2+. Release of intracellular Ca2+ by caffeine triggered acute desensitization in the absence of extracellular Ca2+, and barium was found to effectively substitute for calcium in supporting desensitization. Cap activated inward current at an ED50 of 728 nM, exhibiting cooperativity (Hill coefficient, 2.2); however, both forms of desensitization were only weakly dependent on [Cap], suggesting a dissociation between activation of Cap-sensitive channels and desensitization. Removal of ATP and GTP from the intracellular solutions resulted in nearly complete tachyphylaxis even with intracellular Ca2+ buffered to low levels, whereas changes in nucleotide levels did not significantly alter the acute form of desensitization. These data suggest a key role for intracellular Ca2+ in desensitization of Cap responses, perhaps through Ca2+-dependent dephosphorylation at a locus that normally sustains Cap responsiveness via ATP-dependent phosphorylation. It also seems that the signaling mechanisms underlying the two forms of desensitization are not identical in detail.
Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review
Chung, Stephanie T.; Chacko, Shaji K.; Sunehag, Agneta L.
2015-01-01
Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotopes methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. PMID:26604176
Atmospheric constituent density profiles from full disk solar occultation experiments
NASA Technical Reports Server (NTRS)
Lumpe, J. D.; Chang, C. S.; Strickland, D. J.
1991-01-01
Mathematical methods are described which permit the derivation of number density profiles of atmospheric constituents from solar occultation measurements. The algorithm is first applied to measurements corresponding to an arbitrary solar-intensity distribution to calculate the normalized absorption profile. The application of a Fourier transform to the integral equation yields a precise expression for the corresponding number density, and the solution is employed with the data given in the form of Laguerre polynomials. The algorithm is then used to calculate results for the case of a uniform distribution of solar intensity, and these results demonstrate the convergence properties of the method. The algorithm can effectively reproduce representative model density profiles with constant and altitude-dependent scale heights.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods, and, finally, help them select the method most appropriate for the goal at hand.
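The two normalization schemes contrasted in this study can be sketched as below. The category values, reference totals, and weights are made-up numbers, chosen only to show how the ranking of alternatives can flip between internal and external normalization.

```python
def internal_normalize(alternatives):
    """Internal normalization: divide each impact category by the maximum
    value of that category across the alternatives being compared."""
    n_cat = len(next(iter(alternatives.values())))
    maxima = [max(v[c] for v in alternatives.values()) for c in range(n_cat)]
    return {k: [v[c] / maxima[c] for c in range(n_cat)]
            for k, v in alternatives.items()}

def external_normalize(alternatives, reference):
    """External normalization: divide each impact category by an external
    reference total (e.g. regional per-capita impacts)."""
    return {k: [v[c] / reference[c] for c in range(len(reference))]
            for k, v in alternatives.items()}

def single_index(normalized, weights):
    """Weighted sum of normalized category scores -> one index per alternative."""
    return {k: sum(w * x for w, x in zip(weights, v))
            for k, v in normalized.items()}

# Two hypothetical buildings, two impact categories (e.g. GWP, acidification)
buildings = {"A": [100.0, 2.0], "B": [80.0, 4.0]}
weights = [0.6, 0.4]
internal = single_index(internal_normalize(buildings), weights)
external = single_index(external_normalize(buildings, [1000.0, 100.0]), weights)
```

With these numbers, building A scores better than B under internal normalization but worse under external normalization, illustrating why the study found that the choice of integration method changed the ranking of the five buildings.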
A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel-intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
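As a rough illustration of contrast processing in flash thermography, a generic normalized image contrast can be computed as the rise of a test pixel above its pre-flash level relative to the rise of a sound (defect-free) reference region. This is a common generic formulation, not necessarily the paper's exact definition, and the intensity values are made up.

```python
def normalized_contrast(i_test, i_ref, i_test0, i_ref0):
    """Generic normalized image contrast: ratio of the test pixel's
    post-flash rise to the sound-reference rise, minus 1 so that a
    defect-free pixel reads ~0 (illustrative definition only)."""
    return (i_test - i_test0) / (i_ref - i_ref0) - 1.0

# A subsurface flaw slows local cooling, so the flawed pixel stays
# brighter than the sound reference and the contrast is positive.
flawed = normalized_contrast(120.0, 110.0, 20.0, 20.0)
sound = normalized_contrast(110.0, 110.0, 20.0, 20.0)
```

Normalizing by a reference region is what suppresses multiplicative effects such as non-uniform flash heating, which is the motivation for contrast processing in the first place.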
2011-01-01
Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components were input as features to the SVM to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
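The image-formation idea can be sketched as follows: each detected photon contributes a small normal probability density (width set by the position uncertainty) rather than a count in one pixel, and the particle localization is then just the mean of the photon positions. This is a toy sketch with assumed parameters, not the authors' implementation.

```python
import math

def peds_image(photons, grid, sigma):
    """Sum one 2-D normal density per photon position (uncertainty sigma)
    over a square grid, instead of binning photons into pixels."""
    norm = 1.0 / (2.0 * math.pi * sigma ** 2)
    img = [[0.0] * len(grid) for _ in grid]
    for px, py in photons:
        for iy, gy in enumerate(grid):
            for ix, gx in enumerate(grid):
                d2 = (gx - px) ** 2 + (gy - py) ** 2
                img[iy][ix] += norm * math.exp(-d2 / (2.0 * sigma ** 2))
    return img

def localize(photons):
    """Multi-variate-normal particle localization: the maximum likelihood
    center is the mean of the photon positions, with no pixel grid."""
    n = len(photons)
    return (sum(p[0] for p in photons) / n, sum(p[1] for p in photons) / n)

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
photons = [(0.45, 0.5), (0.55, 0.5), (0.5, 0.45), (0.5, 0.55)]
image = peds_image(photons, grid, sigma=0.2)
center = localize(photons)
```

Because the image is a sum of smooth densities rather than discrete bins, structure remains visible at photon counts where a pixel histogram would look like noise, which is the advantage the abstract describes.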
NASA Astrophysics Data System (ADS)
Eissa, Maya S.; Abou Al Alamein, Amal M.
2018-03-01
Different innovative spectrophotometric methods were introduced for the first time for the simultaneous quantification of sacubitril/valsartan in their binary mixture and in their combined dosage form without prior separation, through two manipulation approaches. These approaches were developed and based either on two-wavelength selection in zero-order absorption spectra, namely the dual wavelength method (DWL) at 226 nm and 275 nm for valsartan, the induced dual wavelength method (IDW) at 226 nm and 254 nm for sacubitril, and advanced absorbance subtraction (AAS) based on their iso-absorptive point at 246 nm (λiso) and 261 nm (sacubitril shows equal absorbance values at the two selected wavelengths); or on ratio spectra using their normalized spectra, namely the ratio difference spectrophotometric method (RD) at 225 nm and 264 nm for both of them in their ratio spectra, the first derivative of ratio spectra (DR1) at 232 nm for valsartan and 239 nm for sacubitril, and mean centering of ratio spectra (MCR) at 260 nm for both of them. Both sacubitril and valsartan showed linearity upon application of these methods in the range of 2.5-25.0 μg/mL. The developed spectrophotometric methods were successfully applied to the analysis of their combined tablet dosage form ENTRESTO™. The adopted spectrophotometric methods were also validated according to ICH guidelines. The results obtained from the proposed methods were statistically compared to a reported HPLC method using Student's t-test and F-test, and a comparative study was also developed with one-way ANOVA, showing no significant difference with respect to precision and accuracy.
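The dual-wavelength idea can be illustrated with a generic Beer-Lambert sketch: the two wavelengths are chosen so the interfering component absorbs equally at both, so the absorbance difference depends only on the analyte. The absorptivities and concentrations below are made-up numbers, not the paper's calibration data.

```python
def absorbance(eps_analyte, eps_interf, c_analyte, c_interf):
    """Beer-Lambert absorbance of a two-component mixture (1 cm path)."""
    return eps_analyte * c_analyte + eps_interf * c_interf

# Hypothetical absorptivities; the interferent's value is equal at the
# two wavelengths, which is the dual-wavelength selection rule.
E1_AN, E2_AN = 0.050, 0.020   # analyte at wavelength 1 and 2
E_INT = 0.030                 # interferent, same at both wavelengths

def analyte_conc(a1, a2):
    """Recover the analyte concentration from the absorbance difference;
    the interferent term cancels because it is equal at both wavelengths."""
    return (a1 - a2) / (E1_AN - E2_AN)

a1 = absorbance(E1_AN, E_INT, c_analyte=10.0, c_interf=7.0)
a2 = absorbance(E2_AN, E_INT, c_analyte=10.0, c_interf=7.0)
```

The same cancellation logic underlies the other zero-order methods in the paper; the ratio-spectra methods instead divide by a normalized spectrum of one component before taking differences or derivatives.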
Chen, Li; Reeve, James; Zhang, Lujun; Huang, Shengbing; Wang, Xuefeng; Chen, Jun
2018-01-01
Normalization is the first critical step in microbiome sequencing data analysis, used to account for variable library sizes. Current RNA-Seq based normalization methods that have been adapted for microbiome data fail to consider the unique characteristics of microbiome data, which contain a vast number of zeros due to the physical absence or under-sampling of the microbes. Normalization methods that specifically address the zero-inflation remain largely undeveloped. Here we propose the geometric mean of pairwise ratios, a simple but effective normalization method, for zero-inflated sequencing data such as microbiome data. Simulation studies and real dataset analyses demonstrate that the proposed method is more robust than competing methods, leading to more powerful detection of differentially abundant taxa and higher reproducibility of the relative abundances of taxa.
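The pairwise-ratio idea can be sketched in a few lines: for each pair of samples, take the median of count ratios over taxa observed in both (sidestepping the zeros), then set each sample's size factor to the geometric mean of its pairwise medians. This is a minimal pure-Python sketch with toy counts, not the authors' implementation.

```python
import math
from statistics import median

def gmpr_size_factors(counts):
    """Size factors for a samples-by-taxa count matrix via the geometric
    mean of pairwise median ratios, using only taxa that are non-zero
    in both samples of each pair."""
    n = len(counts)
    factors = []
    for i in range(n):
        log_medians = []
        for j in range(n):
            if i == j:
                continue
            ratios = [counts[i][k] / counts[j][k]
                      for k in range(len(counts[i]))
                      if counts[i][k] > 0 and counts[j][k] > 0]
            if ratios:
                log_medians.append(math.log(median(ratios)))
        factors.append(math.exp(sum(log_medians) / len(log_medians)))
    return factors

# Toy data: sample 0 is sample 1 sequenced twice as deeply, with one zero
counts = [[2.0, 4.0, 0.0, 6.0],
          [1.0, 2.0, 3.0, 3.0]]
factors = gmpr_size_factors(counts)
```

Dividing each sample's counts by its size factor puts the samples on a common scale; the zero entry never enters any ratio, which is what makes the estimator robust to zero-inflation.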
Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada
NASA Astrophysics Data System (ADS)
Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.
2017-12-01
Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults in these zones were either parallel or perpendicular to the larger faults.
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is widely acknowledged that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested for the level of normality they provide. The results suggest that transformation methods are better than elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results.
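The lambda = -1 transformation singled out in the abstract is the reciprocal power transform. A minimal sketch, with illustrative Ex-Gaussian parameters of our own choosing (not the paper's simulation settings), shows how it reduces the positive skew of RT-like data:

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    # Sample skewness: mean of the cubed standardized values.
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# Ex-Gaussian sample: a normal component plus an exponential tail
# (parameters are illustrative, not taken from the paper).
rt = rng.normal(400.0, 40.0, 10_000) + rng.exponential(150.0, 10_000)

# Power transform with lambda = -1 (reciprocal); the negation
# preserves the ordering of the original values.
transformed = -1.0 / rt

print(skewness(rt), skewness(transformed))
```

The raw sample is strongly right-skewed, while the transformed values are far closer to symmetric.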
Personality and trajectories of posttraumatic psychopathology: A latent change modelling approach.
Fletcher, Susan; O'Donnell, Meaghan; Forbes, David
2016-08-01
Survivors of traumatic events may develop a range of psychopathology across the internalizing and externalizing dimensions of disorder and their associated personality traits. However, research into personality-based internalizing and externalizing trauma responses has been limited to cross-sectional investigations of PTSD comorbidity. Personality typologies may present an opportunity to identify and selectively intervene with survivors at risk of posttraumatic disorder. Therefore, this study examined whether personality prospectively influences the trajectory of disorder in a broader trauma-exposed sample. During hospitalization for a physical injury, 323 Australian adults completed the Multidimensional Personality Questionnaire-Brief Form and the Structured Clinical Interview for DSM-IV, with the latter readministered 3 and 12 months later. Latent profile analysis conducted on baseline personality scores identified subgroups of participants, while latent change modelling examined differences in disorder trajectories. Three classes (internalizing, externalizing, and normal personality) were identified. The internalizing class showed a high risk of developing all disorders. Unexpectedly, however, the normal personality class was not always at lowest risk of disorder. Rather, the externalizing class, while more likely than the normal personality class to develop substance use disorders, was less likely to develop PTSD and depression. Results suggest that personality is an important mechanism influencing the development and form of psychopathology after trauma, with internalizing and externalizing subtypes identifiable in the early aftermath of injury. These findings suggest that early intervention using a personality-based transdiagnostic approach may be an effective method of predicting and ultimately preventing much of the burden of posttraumatic disorder. Copyright © 2016 Elsevier Ltd. All rights reserved.
Guruprasad, Yadavalli; Jose, Maji; Saxena, Kartikay; K, Deepa; Prabhu, Vishnudas
2014-01-01
Background: Oral cancer is one of the most debilitating diseases afflicting mankind. Consumption of tobacco in various forms constitutes one of the most important etiological factors in the initiation of oral cancer. When the focus of today's research is to determine early genotoxic changes in human cells, the micronucleus (MN) assay provides a simple yet reliable indicator of genotoxic damage. Aims and Objectives: To identify and quantify micronuclei in the exfoliated cells of oral mucosa in individuals with different tobacco-related habits and a control group, and to compare the genotoxicity of the different tobacco-related habits between groups and with the control group. Patients and Methods: In the present study, buccal smears of 135 individuals with different tobacco-related habits and buccal smears of 45 age- and sex-matched controls were obtained, stained using Giemsa stain, and observed under 100X magnification to identify and quantify micronuclei in the exfoliated cells of oral mucosa. Results: The mean micronucleus (MN) count in individuals with a smoking habit was 3.11, while the counts were 0.50, 2.13, and 1.67 in the normal control, smoking with betel quid, and smokeless tobacco groups, respectively. The MN count in the smokers group was 2.6 times higher than in normal controls. MN counts were also higher in the other groups than in the normal control, but to a lesser extent. Conclusion: From our study we conclude that tobacco in any form is genotoxic, that smokers in particular are at higher risk, and that the micronucleus assay can be used as a simple yet reliable marker for genotoxic evaluation. PMID:24995238
Macro creatine kinase: determination and differentiation of two types by their activation energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, W.; Bohner, J.; Steinhart, R.
1982-01-01
Determination of the MB isoenzyme of creatine kinase in patients with acute myocardial infarction may be disturbed by the presence of macro creatine kinase. The relative molecular mass of this form of creatine kinase in human serum is at least threefold that of the ordinary enzyme, and it is more thermostable. Here we describe our method for determination of macro creatine kinases and an easy-to-perform test for differentiating two forms of macro creatine kinase, based on their distinct activation energies. The activation energies of serum enzymes are mostly in the range of 40-65 kJ/mol of substrate. Unlike normal cytoplasmic creatine kinases and IgG-linked CK-BB (macro creatine kinase type 1), a second form of macro creatine kinase (macro creatine kinase type 2) shows activation energies greater than 80 kJ/mol of substrate. The exact composition of macro creatine kinase type 2 is still unknown, but there is good reason to believe that it is of mitochondrial origin.
de la Fuente-Gonzalo, Félix; Nieto, Jorge M; Velasco, Diego; Cela, Elena; Pérez, Germán; Fernández-Teijeiro, Ana; Escudero, Antonio; Villegas, Ana; González-Fernández, Fernando A; Ropero, Paloma
2016-04-01
Structural hemoglobinopathies do not usually have a clinical impact, but they can interfere with the analytical determination of some parameters, such as glycated hemoglobin in diabetic patients. Thalassemias represent a serious health problem in areas where their incidence is high. Defects in post-translational modifications produce hyper-unstable hemoglobins that are not detected by most of the electrophoretic or chromatographic methods available so far. We studied seven patients belonging to six unrelated families. The first two families were studied because an abnormal hemoglobin (Hb) peak was found during routine analytical assays. The other four families were studied because they had microcytosis and hypochromia with normal HbA2 and HbF without iron deficiency. HbA2 and HbF quantification and abnormal Hb separation were performed by chromatographic and electrophoretic methods. The molecular characterization was performed by specific sequencing. Hb Puerta del Sol presents electrophoretic mobility and HPLC elution that are different from HbA and similar to HbS. The electrophoretic and chromatographic profiles of the four other variants are normal and do not show any anomalies, and their identification was only possible with sequencing. Some variants, such as Hb Valdecilla, Hb Gran Vía, Hb Macarena and Hb El Retiro, have a significant clinical impact when they are associated with other forms of α-thalassemia, which could lead to more serious forms of this group of pathologies, such as HbH disease. Therefore, it is important to maintain an adequate program for screening these diseases in countries where the prevalence is high to prevent the occurrence of severe forms.
Musculus uvulae and velopharyngeal status.
Ijaduola, G T; Williams, O O
1987-06-01
A study of velopharyngeal status after partial excision of the musculus uvulae, as in total uvulectomy, has been carried out in 15 adults with normally formed soft palates. Fifteen volunteers matched for age and sex with normal palates, who had not had total uvulectomy, were used as controls. Four assessment techniques were used: air escape with a modified tongue anchor technique; production of speech sounds; transnasal nasopharyngoscopy; and radiological screening. Even though Azzam and Kuehn (1977) have stressed the importance of the musculus uvulae in velopharyngeal closure, all assessments showed that partial excision of the musculus uvulae, as in total uvulectomy, has no statistically significant effect on the velopharyngeal status in subjects with a normally formed soft palate.
Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.
Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M
2017-02-27
RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
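The assumption-dependence the review emphasizes can be illustrated with the median-of-ratios method (as used in DESeq-style analyses), which assumes most genes are not differentially expressed. A minimal sketch with made-up counts for two samples, where one gene violates the assumption:

```python
import numpy as np

# Toy count matrix: rows are genes, columns are samples.
# The last gene is truly differential; the others differ only
# by sequencing depth (sample 2 was sequenced twice as deeply).
counts = np.array([[100, 200],
                   [ 50, 100],
                   [ 30,  60],
                   [ 10, 400]], dtype=float)

# Pseudo-reference: per-gene geometric mean across samples.
log_geo_mean = np.log(counts).mean(axis=1)

# Size factor per sample: median ratio of its counts to the reference.
# The median makes the estimate robust to a minority of DE genes.
log_ratios = np.log(counts) - log_geo_mean[:, None]
size_factors = np.exp(np.median(log_ratios, axis=0))

normalized = counts / size_factors
print(size_factors)   # depth difference recovered despite the DE gene
```

Because the median ignores the one differential gene, the size factors reflect only the two-fold depth difference, and the non-differential genes line up after normalization; if most genes were differential, the assumption (and the method) would break down.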
Cultured normal mammalian tissue and process
NASA Technical Reports Server (NTRS)
Goodwin, Thomas J. (Inventor); Prewett, Tacey L. (Inventor); Wolf, David A. (Inventor); Spaulding, Glenn F. (Inventor)
1993-01-01
Normal mammalian tissue and its culturing process have been developed for three groups of tissue: organ, structural, and blood. The cells are grown in vitro under microgravity culture conditions and form three-dimensional cell aggregates with normal cell function. The microgravity culture conditions may be actual microgravity or simulated microgravity created in a horizontal rotating-wall culture vessel.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Unlike existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to preservation of histological information. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
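The nonparametric reference-limit computation the abstract mentions (recommended when n ≥ 40) can be sketched as follows. The data are synthetic, and the bootstrap CIs stand in for rank-based CIs; this is an illustration of the idea, not Reference Value Advisor's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference sample (n >= 40, as the nonparametric
# method requires); values and their scale are illustrative only.
values = rng.normal(100.0, 10.0, 120)

# Nonparametric reference interval: the central 95% of the sample.
lo, hi = np.percentile(values, [2.5, 97.5])

# 90% CI of each reference limit via a simple percentile bootstrap.
boots = rng.choice(values, size=(2000, values.size), replace=True)
lo_ci = np.percentile(np.percentile(boots, 2.5, axis=1), [5, 95])
hi_ci = np.percentile(np.percentile(boots, 97.5, axis=1), [5, 95])

print(lo, hi, lo_ci, hi_ci)
```

Reporting the CIs alongside the limits, as the program does, makes plain how imprecise the limits are for small reference sample groups.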
Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro
2013-12-01
To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
A Review of Depth and Normal Fusion Algorithms
Štolc, Svorad; Pock, Thomas
2018-01-01
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
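The least-squares formulation reviewed above can be sketched in one dimension: a noisy depth profile is fused with slope information (as would be derived from surface normals via slope = -n_x / n_z) by penalizing both deviation from the measured depths and deviation of the finite-difference gradient from the measured slopes. The weighting and synthetic data are illustrative assumptions, not the paper's generalized method:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

true_depth = np.sin(2 * np.pi * x)

# Noisy depth measurements plus the surface gradient at cell
# midpoints (here synthesized analytically instead of from normals).
depth = true_depth + rng.normal(0.0, 0.05, n)
mid = (x[:-1] + x[1:]) / 2
grad = 2 * np.pi * np.cos(2 * np.pi * mid)

# Forward-difference operator D mapping depths to midpoint slopes.
D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h

# Least-squares fusion: min ||z - depth||^2 + w * ||D z - grad||^2,
# solved via its normal equations (I + w D^T D) z = depth + w D^T grad.
w = 10.0
A = np.eye(n) + w * (D.T @ D)
b = depth + w * (D.T @ grad)
fused = np.linalg.solve(A, b)

fused_err = np.abs(fused - true_depth).mean()
noisy_err = np.abs(depth - true_depth).mean()
print(fused_err, noisy_err)
```

The slope term acts as a data-driven smoother: the fused profile tracks the true surface far more closely than the raw depth measurements.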
Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Kim, Sun Kwon; Gonzalez, Juan M.; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.
2010-01-01
Objective To describe a novel and simple technique (STAR: Simple Targeted Arterial Rendering) to visualize the fetal cardiac outflow tracts from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed a technique to image the outflow tracts by drawing three dissecting lines through the four-chamber view of the heart contained in a STIC volume dataset. Each line generated the following plane: 1) Line 1: ventricular septum “en face” with both great vessels (pulmonary artery anterior to the aorta); 2) Line 2: pulmonary artery with continuation into the longitudinal view of the ductal arch; and 3) Line 3: long axis view of the aorta arising from the left ventricle. The pattern formed by all 3 lines intersecting approximately through the crux of the heart resembles a “star”. The technique was then tested in 50 normal hearts (15.3 – 40.4 weeks of gestation). To determine if the technique could identify planes that departed from the normal images, we tested the technique in 4 cases with proven congenital heart defects (ventricular septal defect, transposition of great vessels, tetralogy of Fallot, and pulmonary atresia with intact ventricular septum). Results The STAR technique was able to generate the intended planes in all 50 normal cases. In the abnormal cases, the STAR technique allowed identification of the ventricular septal defect, demonstrated great vessel anomalies, and displayed views that deviated from what was expected from the examination of normal hearts. Conclusions This novel and simple technique can be used to visualize the outflow tracts and ventricular septum “en face” in normal fetal hearts. The inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease involving the great vessels and/or the ventricular septum. 
The STAR technique may simplify examination of the fetal heart and could reduce operator dependency. PMID:20878672
Assessing model uncertainty using hexavalent chromium and ...
Introduction: The National Research Council recommended quantitative evaluation of uncertainty in effect estimates for risk assessment. This analysis considers uncertainty across model forms and model parameterizations with hexavalent chromium [Cr(VI)] and lung cancer mortality as an example. The objective of this analysis is to characterize model uncertainty by evaluating the variance in estimates across several epidemiologic analyses. Methods: This analysis compared 7 publications analyzing two different chromate production sites in Ohio and Maryland. The Ohio cohort consisted of 482 workers employed from 1940-72, while the Maryland site employed 2,357 workers from 1950-74. Cox and Poisson models were the only model forms considered by study authors to assess the effect of Cr(VI) on lung cancer mortality. All models adjusted for smoking and included a 5-year exposure lag; however, other latency periods and model covariates such as age and race were also considered. Published effect estimates were standardized to the same units and normalized by their variances to produce a standardized metric for comparing variability in estimates across and within model forms. A total of 7 similarly parameterized analyses were considered across model forms, and 23 analyses with alternative parameterizations were considered within model form (14 Cox; 9 Poisson). Results: Across Cox and Poisson model forms, adjusted cumulative exposure coefficients for 7 similar analyses ranged from 2.47
Comparative Testis Tissue Proteomics Using 2-Dye Versus 3-Dye DIGE Analysis.
Holland, Ashling
2018-01-01
Comparative tissue proteomics aims to analyze alterations of the proteome in response to a stimulus. Two-dimensional difference gel electrophoresis (2D-DIGE) is a modified and advanced form of 2D gel electrophoresis. DIGE is a powerful biochemical method that compares two or three protein samples on the same analytical gel, and can be used to establish differentially expressed protein levels between healthy normal and diseased pathological tissue sample groups. Minimal DIGE labeling can be used via a 2-dye system with Cy3 and Cy5 or a 3-dye system with Cy2, Cy3, and Cy5 to fluorescently label samples with CyDye fluors pre-electrophoresis. DIGE circumvents gel-to-gel variability by multiplexing samples onto a single gel and through the use of a pooled internal standard for normalization. This form of quantitative high-resolution proteomics facilitates the comparative analysis and evaluation of tissue protein compositions. Comparing tissue groups under different conditions is crucially important for advancing the biomedical field by characterization of cellular processes, understanding pathophysiological development and tissue biomarker discovery. This chapter discusses 2D-DIGE as a comparative tissue proteomic technique and describes in detail the experimental steps required for comparative proteomic analysis employing both options of 2-dye and 3-dye DIGE minimal labeling.
Enhancement of biocompatibility of 316LVM stainless steel by cyclic potentiodynamic passivation.
Shahryari, Arash; Omanovic, Sasha; Szpunar, Jerzy A
2009-06-15
Passivation of stainless steel implants is a common procedure used to increase their biocompatibility. The results presented in this work demonstrate that the electrochemical cyclic potentiodynamic polarization (CPP) of a biomedical grade 316LVM stainless steel surface is a very efficient passivation method that can be used to significantly improve the material's general corrosion resistance and thus its biocompatibility. The influence of a range of experimental parameters on the passivation/corrosion protection efficiency is discussed. The passive film formed on a 316LVM surface by using the CPP method offers a significantly higher general corrosion resistance than the naturally grown passive film. The corresponding relative corrosion protection efficiency measured in saline during a 2-month period was 97% +/- 1%, which demonstrates a very high stability of the CPP-formed passive film. Its high corrosion protection efficiency was confirmed also at temperatures and chloride concentrations well above normal physiological levels. It was also shown that the CPP is a significantly more effective passivation method than some other surface-treatment methods commonly used to passivate biomedical grade stainless steels. In addition, the CPP-passivated 316LVM surface showed an enhanced biocompatibility in terms of preosteoblast (MC3T3) cells attachment. An increased thickness of the CPP-formed passive film and its enrichment with Cr(VI) and oxygen was determined to be the origin of the material's increased general corrosion resistance, whereas the increased surface roughness and surface (Volta) potential were suggested to be the origin of the enhanced preosteoblast cells attachment. Copyright 2008 Wiley Periodicals, Inc.
Normalized Temperature Contrast Processing in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further developments in the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel-intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data-acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
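One common way to form a normalized contrast in flash thermography is to take the deviation of a flaw pixel's cooling curve from a sound (defect-free) reference region, divided by the reference signal; the peak of that contrast curve is then used for flaw characterization. The cooling curves and the specific contrast definition below are illustrative assumptions, not the paper's exact formulation or data:

```python
import numpy as np

# Time axis after the flash (seconds) and synthetic cooling curves:
# an ideal 1/sqrt(t) decay for the sound region, and a flaw pixel
# whose trapped heat produces a transient bump (illustrative model).
t = np.linspace(0.05, 5.0, 500)
sound = 1.0 / np.sqrt(t)
flaw = sound * (1.0 + 0.3 * np.exp(-0.5 * np.log(t) ** 2))

# Normalized contrast: relative deviation from the sound reference.
contrast = (flaw - sound) / sound

# The peak-contrast time is a classic indicator of flaw depth.
peak_time = t[np.argmax(contrast)]
print(peak_time)
```

Normalizing by the reference curve removes the shared flash decay, so the contrast isolates the flaw signature from the overall cooling.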
NASA Astrophysics Data System (ADS)
Sunnquist, Ben
2018-06-01
Throughout the lifetime of WFC3, a growing number of 'blobs' (small, circular regions with slightly decreased sensitivity) have appeared in WFC3/IR images. In this report, we present the current workflow used for identifying, characterizing and flagging new IR blobs. We also describe the methods currently used to monitor the repeatability of the channel select mechanism (CSM) movements as a way to ensure that the CSM is still operating normally as these new blobs form. A full listing of all known blobs, which incorporates the work from past blob monitoring efforts, is presented in the Appendix as well as all of the IR bad pixel tables generated to include the strongest of these blobs. These tables, along with all of the other relevant figures and tables in this report, will be continuously updated as new blobs form.
Fast angular synchronization for phase retrieval via incomplete information
NASA Astrophysics Data System (ADS)
Viswanathan, Aditya; Iwen, Mark
2015-08-01
We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^* / |x_j x_k^*|, j,k ∈ {1,...,d}, and where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem - where we recover an unknown complex vector from phaseless (or magnitude) measurements.
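The eigenvector approach analyzed in the abstract can be demonstrated in the simplest setting: with a complete, noiseless set of phase-difference measurements, the leading eigenvector of the measurement matrix recovers the phases exactly (up to a global phase). The paper's focus is the harder incomplete, banded case; this dense sketch only shows the principle:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

# Ground-truth unit-modulus phases to be recovered.
x = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, d))

# Complete set of normalized phase-difference measurements
# M[j, k] = x_j x_k^* / |x_j x_k^*|.
M = np.outer(x, x.conj())
M = M / np.abs(M)

# M is Hermitian; its leading eigenvector is proportional to x.
eigvals, eigvecs = np.linalg.eigh(M)
est = eigvecs[:, -1]
est = est / np.abs(est)          # project entries back to unit modulus
est = est * (x[0] / est[0])      # fix the global phase ambiguity

print(np.max(np.abs(est - x)))
```

With noisy or incomplete measurements the matrix is no longer rank one, but the leading eigenvector remains a robust estimate, which is the regime the paper studies.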
Musumeci, G.; Loreto, C.; Carnazza, M.L.; Coppolino, F.; Cardile, V.; Leonardi, R.
2011-01-01
Osteoarthritis (OA) is characterized by degenerative changes within joints that involve quantitative and/or qualitative alterations of cartilage and of synovial fluid lubricin, a mucinous glycoprotein secreted by synovial fibroblasts and chondrocytes. Modern therapeutic methods, including tissue-engineering techniques, have been used to treat mechanical damage of the articular cartilage, but to date there is no specific and effective treatment. This study aimed at investigating lubricin immunohistochemical expression in cartilage explants from normal and OA patients and in cartilage constructs formed by poly(ethylene glycol) diacrylate (PEG-DA) hydrogels encapsulating OA chondrocytes. The expression levels of lubricin were studied by immunohistochemistry: i) in tissue explanted from OA and normal human cartilage; ii) in chondrocytes encapsulated in PEG-DA hydrogel from OA and normal human cartilage. Moreover, immunocytochemical and western blot analyses were performed on monolayer cells from OA and normal cartilage. The results showed an increased expression of lubricin in explanted tissue and in monolayer cells from normal cartilage, and a decreased expression of lubricin in OA cartilage. The chondrocytes from OA cartilage, after 5 weeks of culture in PEG-DA hydrogels, showed an increased expression of lubricin compared with the control cartilage. The present study demonstrated that OA chondrocytes encapsulated in PEG-DA grew in the scaffold and were able to restore lubricin biosynthesis. Thus our results suggest the possibility of applying autologous cell transplantation in conjunction with scaffold materials for repairing cartilage lesions in patients with OA to reduce at least the progression of the disease. PMID:22073377
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at a different time, which can result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built with no normalization processing.
We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
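The two-step pipeline described above (intensity scaling to the [LIR, HIR] range, then histogram matching against the reference) can be sketched as follows. This is a minimal illustration of the general idea, assuming a CDF-matching implementation; the function names and parameter choices are ours, not the authors':

```python
import numpy as np

def intensity_scale(img, lir, hir):
    """Step 1 (IS): linearly rescale image intensities to the [LIR, HIR] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (hir - lir) + lir

def histogram_match(src, ref):
    """Step 2 (HN): map the histogram of `src` onto that of `ref` via CDF matching."""
    s_vals, s_idx = np.unique(src.ravel(), return_inverse=True)
    s_cdf = np.cumsum(np.bincount(s_idx)).astype(float)
    s_cdf /= s_cdf[-1]
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    r_cdf = np.cumsum(r_counts).astype(float)
    r_cdf /= r_cdf[-1]
    # for each source quantile, look up the reference intensity at the same quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(src.shape)
```

Applying `intensity_scale` to the reference first guarantees the matched output also lies between LIR and HIR, since CDF matching cannot produce values outside the reference range.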
Joshua, P. Patric; Valli, C.; Balakrishnan, V.
2016-01-01
Background and Aim: Nanoparticles can bypass conventional physiological routes of nutrient distribution and transport across tissue and cell membranes, as well as protect compounds against destruction before they reach their targets. In ovo administration of nanoparticles may be seen as a new method of nano-nutrition, providing embryos with an additional quantity of nutrients. The aim of the study was to examine the effect of in ovo supplementation of nano forms of zinc, copper, and selenium on the hatchability and post-hatch performance of broiler chicken. Materials and Methods: The nano form of zinc at 20, 40, 60, and 80 µg/egg, the nano form of copper at 4, 8, 12, and 16 µg/egg, and the nano form of selenium at 0.075, 0.15, 0.225, and 0.3 µg/egg were supplemented in ovo (18th day of incubation, amniotic route) in fertile broiler eggs. A control group in ovo fed with normal saline alone was also maintained. Each treatment had thirty replicates. Parameters such as hatchability, hatch weight, and post-hatch performance were studied. Results: In ovo feeding of nano minerals was not harmful to the developing embryo and did not influence hatchability. The significantly (p<0.05) best feed efficiency for the nano forms of zinc (2.16), copper (2.46), and selenium (2.51) was observed when 40, 4, and 0.225 µg/egg, respectively, were supplemented in ovo. Except for the nano form of copper at 12 µg/egg, which had the significantly (p<0.05) highest breast muscle percentage, there was no distinct trend to indicate that dressing percentage or breast muscle yield was influenced in the other treatments. Conclusion: Nano forms of zinc, copper, and selenium can be prepared under laboratory conditions. In ovo feeding of these nano forms on the 18th day of incubation through the amniotic route neither harms the developing embryo nor affects hatchability. PMID:27057113
NASA Astrophysics Data System (ADS)
Jorge, Marco G.; Brennand, Tracy A.
2017-07-01
Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based; of these, the variant based on a hydrological relief model from a multiple-direction flow-routing algorithm performed best. Overall, the normalized closed contour method outperformed the landform elements mask method. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.
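A normalized local relief model of the kind the first method contours can be sketched with standard moving-window filters. This is only a generic moving-window variant; the window size is an assumed parameter, and the hydrology-based relief models the paper prefers are not reproduced here:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def normalized_local_relief(dem, window=15):
    """Rescale a DEM to [0, 1] relative to the local relief in a moving window,
    so bedforms on slopes are expressed independently of the regional trend."""
    zmax = maximum_filter(dem, size=window)
    zmin = minimum_filter(dem, size=window)
    relief = np.maximum(zmax - zmin, 1e-9)  # avoid divide-by-zero on flat areas
    return (dem - zmin) / relief
```

Closed contours extracted from this surface at a fixed level would then serve as the LSB-candidate objects to be filtered by the morphometric ruleset.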
Li, Peipei; Piao, Yongjun; Shon, Ho Sun; Ryu, Keun Ho
2015-10-28
Recently, rapid improvements in technology and decreases in sequencing costs have made RNA-Seq a widely used technique to quantify gene expression levels. Various normalization approaches have been proposed, owing to the importance of normalization in the analysis of RNA-Seq data. A comparison of recently proposed normalization methods is required to generate suitable guidelines for selecting the most appropriate approach for future experiments. In this paper, we compared eight non-abundance (RC, UQ, Med, TMM, DESeq, Q, RPKM, and ERPKM) and two abundance estimation normalization methods (RSEM and Sailfish). The experiments were based on real Illumina high-throughput RNA-Seq of 35- and 76-nucleotide sequences produced in the MAQC project and on simulated reads. Reads were mapped to the human genome obtained from the UCSC Genome Browser Database. For precise evaluation, we investigated the Spearman correlation between the normalization results from RNA-Seq and MAQC qRT-PCR values for 996 genes. Based on this work, we showed that of the eight non-abundance estimation normalization methods, RC, UQ, Med, TMM, DESeq, and Q gave similar normalization results for all data sets. For RNA-Seq of a 35-nucleotide sequence, RPKM showed the highest correlation, but for RNA-Seq of a 76-nucleotide sequence, it showed the lowest correlation of all the methods. ERPKM did not improve on RPKM. Between the two abundance estimation normalization methods, for RNA-Seq of a 35-nucleotide sequence, higher correlation was obtained with Sailfish than with RSEM, and this was better than not using abundance estimation methods. However, for RNA-Seq of a 76-nucleotide sequence, the results achieved by RSEM were similar to those without abundance estimation, and much better than those with Sailfish. Furthermore, we found that adding a poly-A tail increased alignment numbers but did not improve normalization results.
Spearman correlation analysis revealed that RC, UQ, Med, TMM, DESeq, and Q did not noticeably improve gene expression normalization, regardless of read length. Other normalization methods were more efficient when alignment accuracy was low; Sailfish with RPKM gave the best normalization results. When alignment accuracy was high, RC was sufficient for gene expression calculation. We also suggest ignoring the poly-A tail during differential gene expression analysis.
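As a concrete reference, the RPKM measure compared above divides raw read counts by both library size (in millions of reads) and gene length (in kilobases); a minimal sketch (the function name is ours):

```python
import numpy as np

def rpkm(counts, gene_lengths):
    """Reads Per Kilobase of transcript per Million mapped reads.
    counts: raw read counts per gene; gene_lengths: lengths in bases."""
    counts = np.asarray(counts, dtype=float)
    per_million = counts.sum() / 1e6          # library-size scaling factor
    kilobases = np.asarray(gene_lengths, dtype=float) / 1e3
    return counts / per_million / kilobases   # length- and depth-normalized
```

The RC method discussed above corresponds to dropping the gene-length division and keeping only the library-size scaling.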
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured under different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, it cannot be applied to dynamic scenes because it assumes that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimating the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated at a single pixel's photodiode among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
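The classical three-image recovery that the method builds on solves a small linear system per pixel: under a Lambertian assumption, with known unit light directions L and observed intensities I, the albedo-scaled normal g satisfies L g = I. A minimal sketch of that baseline (not the authors' multi-tap implementation):

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Recover unit surface normals and albedo from k >= 3 images.
    intensities: (k, n_pixels) observed intensities per pixel;
    lights: (k, 3) unit light-direction vectors."""
    # least-squares solve of L g = I for the albedo-scaled normal g
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, n_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return normals, albedo
```

With exactly three non-coplanar lights the system is square and the solution is unique; extra images make the estimate a least-squares fit.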
Using Formal Methods to Assist in the Requirements Analysis of the Space Shuttle GPS Change Request
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Roberts, Larry W.
1996-01-01
We describe a recent NASA-sponsored pilot project intended to gauge the effectiveness of using formal methods in Space Shuttle software requirements analysis. Several Change Requests (CR's) were selected as promising targets to demonstrate the utility of formal methods in this application domain. A CR to add new navigation capabilities to the Shuttle, based on Global Positioning System (GPS) technology, is the focus of this report. Carried out in parallel with the Shuttle program's conventional requirements analysis process was a limited form of analysis based on formalized requirements. Portions of the GPS CR were modeled using the language of SRI's Prototype Verification System (PVS). During the formal methods-based analysis, numerous requirements issues were discovered and submitted as official issues through the normal requirements inspection process. Shuttle analysts felt that many of these issues were uncovered earlier than would have occurred with conventional methods. We present a summary of these encouraging results and conclusions we have drawn from the pilot project.
Normalization Of Thermal-Radiation Form-Factor Matrix
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn T.
1994-01-01
Report describes algorithm that adjusts form-factor matrix in TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as sun.
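The report summary does not spell out the algorithm, but the usual constraint enforced on such matrices is energy conservation: in a closed enclosure, each row of the form-factor (view-factor) matrix must sum to 1. A minimal row-scaling sketch of that kind of adjustment, not the TRASYS algorithm itself:

```python
import numpy as np

def normalize_form_factors(F):
    """Scale each row of a form-factor matrix so it sums to 1 (energy
    conservation in a closed enclosure). Assumes every row is nonzero."""
    row_sums = F.sum(axis=1, keepdims=True)
    return F / row_sums
```

A fuller adjustment would also enforce reciprocity (A_i F_ij = A_j F_ji for surface areas A), which simple row scaling alone does not guarantee.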
Griffith, D P; Osborne, C A
1987-01-01
Infection-induced stones in man probably form solely as a consequence of ureolysis, which is catalyzed by the bacterial protein urease. Urease stones composed of struvite and carbonate-apatite may form primarily, or as secondary stones on pre-existent metabolic stones. Struvite stones form and grow rapidly owing to (a) supersaturation of urine with stone-forming salts, (b) 'salting out' of poorly soluble organic substances normally dissolved in urine, and (c) ammonia-induced destruction of the normally protective urothelial glycosaminoglycan layer. Immature (predominantly organic) matrix stones mature into densely mineralized stones. Curative treatment is possible only by eliminating all of the stone and by eradicating all urinary and parenchymal infection. A variety of operative and pharmaceutical approaches are available. Patient treatment must be individualized inasmuch as some patients are better candidates for one type of treatment than another.
Ghassemi, Rezwan; Brown, Robert; Narayanan, Sridar; Banwell, Brenda; Nakamura, Kunio; Arnold, Douglas L
2015-01-01
Intensity variation between magnetic resonance images (MRI) hinders comparison of tissue intensity distributions in multicenter MRI studies of brain diseases. The available intensity normalization techniques generally work well in healthy subjects but not in the presence of pathologies that affect tissue intensity. One such disease is multiple sclerosis (MS), which is associated with lesions that prominently affect white matter (WM). Our aim was to develop a T1-weighted (T1w) image intensity normalization method that is independent of WM intensity, and to quantitatively evaluate its performance. We calculated the median intensity of grey matter and intraconal orbital fat on T1w images. Using these two reference tissue intensities, we calculated a linear normalization function and applied it to the T1w images to produce normalized T1w (NT1) images. We assessed the performance of our normalization method with respect to interscanner, interprotocol, and longitudinal variability, and evaluated its utility for lesion analyses in clinical trials. Statistical modeling showed marked decreases in T1w intensity differences after normalization (P < .0001). We developed a WM-independent T1w MRI normalization method and tested its performance. This method is suitable for longitudinal multicenter clinical studies for the assessment of the recovery or progression of disease affecting WM. Copyright © 2014 by the American Society of Neuroimaging.
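A linear normalization function derived from two reference tissues amounts to the unique line that maps the two observed tissue medians onto two fixed target values; a minimal sketch (function name and target values are ours):

```python
import numpy as np

def two_point_linear_normalize(img, obs1, obs2, target1, target2):
    """Linear map a*x + b sending the two observed reference-tissue
    intensities (obs1, obs2) onto the target values (target1, target2)."""
    a = (target2 - target1) / (obs2 - obs1)
    b = target1 - a * obs1
    return a * img + b
```

Here `obs1`/`obs2` would be the per-scan median intensities of the two reference tissues (e.g. grey matter and orbital fat), so the map is recomputed for every scan while the targets stay fixed across the study.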
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the receiver operating characteristic (ROC) curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
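One way to realize the band-wise energy scaling described above is a difference-of-Gaussians decomposition whose band energies are rescaled to reference values before reconstruction. This is a simplified global sketch (non-localized and non-iterative, unlike the paper's method); the scales and the use of standard deviation as the energy measure are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bands(img, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussians band decomposition.
    Returns (bands, low-pass residual); their sum reconstructs the image."""
    bands, prev = [], img.astype(float)
    for s in sigmas:
        smooth = gaussian_filter(img.astype(float), s)
        bands.append(prev - smooth)
        prev = smooth
    return bands, prev

def normalize_band_energy(img, ref_energies, sigmas=(1, 2, 4, 8)):
    """Rescale each band so its energy (std) matches a reference, then reconstruct."""
    bands, low = dog_bands(img, sigmas)
    out = low
    for band, target in zip(bands, ref_energies):
        out = out + band * (target / (band.std() + 1e-12))
    return out
```

When the reference energies equal the image's own band energies the reconstruction returns the original image, which is a convenient sanity check.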
Influence of phase inversion on the formation and stability of one-step multiple emulsions.
Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J
2009-07-21
A novel method of preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60 day test period. Previously reported multiple emulsions made by the one-step method were abnormal systems that formed at the inversion point of simple emulsions (where there is an incompatibility between the Ostwald and Bancroft theories; typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point in multiple emulsion formation by one-step phase inversion processes. Although the significance of ultralow surface tension has been speculated on previously, to the best of our knowledge this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature but also upon the component ratios; therefore both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not the result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are the result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.
Injection of thermal and suprathermal seed particles into coronal shocks of varying obliquity
NASA Astrophysics Data System (ADS)
Battarbee, M.; Vainio, R.; Laitinen, T.; Hietala, H.
2013-10-01
Context. Diffusive shock acceleration in the solar corona can accelerate solar energetic particles to very high energies. Acceleration efficiency is increased by entrapment through self-generated waves, which is highly dependent on the amount of accelerated particles. This, in turn, is determined by the efficiency of particle injection into the acceleration process. Aims: We present an analysis of the injection efficiency at coronal shocks of varying obliquity. We assessed injection through reflection and downstream scattering, including the effect of a cross-shock potential. Both quasi-thermal and suprathermal seed populations were analysed. We present results on the effect of cross-field diffusion downstream of the shock on the injection efficiency. Methods: Using analytical methods, we present applicable injection speed thresholds that were compared with both semi-analytical flux integration and Monte Carlo simulations, which do not resort to binary thresholds. Shock-normal angle θBn and shock-normal velocity Vs were varied to assess the injection efficiency with respect to these parameters. Results: We present evidence of a significant bias of thermal seed particle injection at small shock-normal angles. We show that downstream isotropisation methods affect the θBn-dependence of this result. We show a non-negligible effect caused by the cross-shock potential, and that the effect of downstream cross-field diffusion is highly dependent on boundary definitions. Conclusions: Our results show that for Monte Carlo simulations of coronal shock acceleration a full distribution function assessment with downstream isotropisation through scatterings is necessary to realistically model particle injection. Based on our results, seed particle injection at quasi-parallel coronal shocks can result in significant acceleration efficiency, especially when combined with varying field-line geometry. Appendices are available in electronic form at http://www.aanda.org
Homoclinic Bifurcation in an SIQR Model for Childhood Diseases
NASA Astrophysics Data System (ADS)
Wu, Lih-Ing; Feng, Zhilan
2000-11-01
We consider a system of ODEs which describes the transmission dynamics of childhood diseases. A center manifold reduction at a bifurcation point has the normal form x′ = y, y′ = axy + bx²y + O(4), indicating a bifurcation of codimension greater than two. A three-parameter unfolding of the normal form is studied to capture possible complex dynamics of the original system, which is subjected to certain constraints on the state space due to biological considerations. It is shown that the perturbed system produces homoclinic bifurcation.
Three-dimensional reconstruction of Roman coins from photometric image sets
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay; Moitinho de Almeida, Vera; Hess, Mona
2017-01-01
A method is presented for increasing the spatial resolution of the three-dimensional (3-D) digital representation of coins by combining fine photometric detail derived from a set of photographic images with accurate geometric data from a 3-D laser scanner. 3-D reconstructions were made of the obverse and reverse sides of two ancient Roman denarii by processing sets of images captured under directional lighting in an illumination dome. Surface normal vectors were calculated by a "bounded regression" technique, excluding both shadow and specular components of reflection from the metallic surface. Because of the known difficulty in achieving geometric accuracy when integrating photometric normals to produce a digital elevation model, the low spatial frequencies were replaced by those derived from the point cloud produced by a 3-D laser scanner. The two datasets were scaled and registered by matching the outlines and correlating the surface gradients. The final result was a realistic rendering of the coins at a spatial resolution of 75 pixels/mm (13-μm spacing), in which the fine detail modulated the underlying geometric form of the surface relief. The method opens the way to obtain high quality 3-D representations of coins in collections to enable interactive online viewing.
Immunogold staining procedure for the localisation of regulatory peptides.
Varndell, I M; Tapia, F J; Probert, L; Buchan, A M; Gu, J; De Mey, J; Bloom, S R; Polak, J M
1982-01-01
The use of protein A- and IgG-conjugated colloidal gold staining methods for the immuno-localisation of peptide hormones and neurotransmitters at the light- and electron-microscope levels is described and discussed. Bright-field and dark-ground illumination modes have been used to visualise the gold-labelled antigenic sites at the light microscope level. Immunogold staining procedures at the ultrastructural level using region-specific antisera have been adopted to localise specific molecular forms of peptides, including gastrin (G17 and G34), glucagon and pro-glucagon, and insulin and pro-insulin, in normal tissue and in tumours of the gastroenteropancreatic system. Similar methods have been used to demonstrate the heterogeneity of p-type nerves in the enteric nervous system. Vasoactive intestinal polypeptide (VIP) has been localised to granular sites (mean ± S.D. granule diameter = 98 ± 19 nm) in nerve terminals of the enteric plexuses and in tumour cells of diarrhoeogenic VIP-producing neoplasias (mean ± S.D. granule diameter = 126 ± 37 nm) using immunogold procedures applied to ultraviolet-cured ultrathin sections. Co-localisation of amines and peptides in carotid body type I cells and in chromaffin cells of normal adrenal medulla and phaeochromocytomas has also been demonstrated. Advantages of the immunogold procedures over alternative immunocytochemical techniques are discussed.
A thin-plate spline analysis of the face and tongue in obstructive sleep apnea patients.
Pae, E K; Lowe, A A; Fleetham, J A
1997-12-01
The shape characteristics of the face and tongue in obstructive sleep apnea (OSA) patients were investigated using thin-plate (TP) splines. A relatively new analytic tool, the TP spline method provides a means of size normalization and image analysis. When shape is one's main concern, the various sizes of a biologic structure may be a source of statistical noise; more seriously, a strong size effect could mask the underlying, actual attributes of the disease. A set of size-normalized data in the form of coordinates was generated from cephalograms of 80 male subjects. The TP spline method visualized the differences in the shape of the face and tongue between OSA patients and nonapneic subjects, and those between the upright and supine body positions. In accordance with OSA severity, the hyoid bone and the submental region were positioned more inferiorly, and the fourth vertebra was relocated posteriorly with respect to the mandible. This caused a fanlike configuration of the lower part of the face and neck in the sagittal plane in both upright and supine body positions. TP splines revealed tongue deformations caused by a change in body position. Overall, the new morphometric tool adopted here was found to be viable for the analysis of morphologic changes.
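A thin-plate spline interpolant is built from the radial basis U(r) = r² log r plus an affine part, with coefficients solved from landmark correspondences; a minimal 2-D sketch of the standard construction (not the authors' cephalometric pipeline):

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log r, with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r * r * np.log(r), 0.0)

def fit_tps(src, dst):
    """Solve for TPS coefficients mapping (n, 2) source landmarks onto
    (n, 2) target landmarks; returns a warp function for arbitrary points."""
    n = len(src)
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T    # standard TPS system
    b = np.vstack([dst, np.zeros((3, 2))])
    coeff = np.linalg.solve(A, b)
    w, a = coeff[:n], coeff[n:]
    def warp(pts):
        d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
        return tps_kernel(d) @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp
```

The side conditions (the zero rows in `b`) make the bending energy finite, so purely affine differences, such as overall size, are absorbed by the affine part; this is the sense in which TP splines separate size from shape.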
NASA Astrophysics Data System (ADS)
Guennoun, L.; El jastimi, J.; Guédira, F.; Marakchi, K.; Kabbaj, O. K.; El Hajji, A.; Zaydoun, S.
2011-01-01
The 3,5-diamino-1,2,4-triazole (guanazole) was investigated by vibrational spectroscopy and quantum chemical methods. The solid-phase FT-IR and FT-Raman spectra were recorded in the regions 4000-400 cm⁻¹ and 3600-50 cm⁻¹, respectively, and the band assignments were supported by deuteration effects. The results of energy calculations have shown that the most stable form is 1H-3,5-diamino-1,2,4-triazole under C₁ symmetry. For this form, the molecular structure, harmonic vibrational wavenumbers, infrared intensities, and Raman activities were calculated by the ab initio/HF and DFT/B3LYP methods using the 6-31G* basis set. The calculated geometrical parameters of the guanazole molecule using the B3LYP methodology are in good agreement with the previously reported X-ray data, and the scaled vibrational wavenumber values are in good agreement with the experimental data. The normal vibrations were characterized in terms of potential energy distributions (PEDs) using the VEDA 4 program.
Herrero Latorre, C; Barciela García, J; García Martín, S; Peña Crecente, R M
2013-12-04
Selenium is an essential element for the normal cellular function of living organisms. However, selenium is toxic at concentrations only three to five times higher than the essential concentration. The inorganic forms (mainly selenite and selenate) present in environmental water generally exhibit higher toxicity (up to 40 times) than the organic forms. Therefore, the determination of low levels of different inorganic selenium species in water is an analytical challenge. Solid-phase extraction has been used as a separation and/or preconcentration technique prior to the determination of selenium species, owing to the need for accurate measurements of Se species in water at extremely low levels. The present paper provides a critical review of the published methods for inorganic selenium speciation in water samples using solid-phase extraction as a preconcentration procedure. On the basis of more than 75 references, the different speciation strategies used for this task are highlighted and classified. The solid-phase extraction sorbents and the performance and analytical characteristics of the developed methods for Se speciation are also discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
The mechanisms of temporal inference
NASA Technical Reports Server (NTRS)
Fox, B. R.; Green, S. R.
1987-01-01
The properties of a temporal language are determined by its constituent elements: the temporal objects which it can represent, the attributes of those objects, the relationships between them, the axioms which define the default relationships, and the rules which define the statements that can be formulated. The methods of inference which can be applied to a temporal language are derived in part from a small number of axioms which define the meaning of equality and order and how those relationships can be propagated. More complex inferences involve detailed analysis of the stated relationships. Perhaps the most challenging area of temporal inference is reasoning over disjunctive temporal constraints. Simple forms of disjunction do not sufficiently increase the expressive power of a language while unrestricted use of disjunction makes the analysis NP-hard. In many cases a set of disjunctive constraints can be converted to disjunctive normal form and familiar methods of inference can be applied to the conjunctive sub-expressions. This process itself is NP-hard but it is made more tractable by careful expansion of a tree-structured search space.
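The DNF expansion mentioned above, and the blow-up that makes it NP-hard, can be seen in a few lines: a conjunction of disjunctive clauses expands into the cross-product of its clauses, after which each conjunctive sub-expression can be handled by ordinary (conjunctive) inference. The string-based constraint encoding here is purely illustrative; the paper's temporal language is not reproduced:

```python
from itertools import product

def cnf_to_dnf(cnf):
    """Expand a conjunction of disjunctive clauses into a disjunction of
    conjunctions. cnf: list of clauses, each a list of atomic constraints.
    Output size is the product of clause sizes, hence exponential in the
    number of disjunctive clauses."""
    return [list(term) for term in product(*cnf)]
```

For example, (A<B or B<A) and (B<C) expands to two conjunctive alternatives, each of which a tree-structured search can then explore independently.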
A Guided Tour of Mathematical Methods - 2nd Edition
NASA Astrophysics Data System (ADS)
Snieder, Roel
2004-09-01
Mathematical methods are essential tools for all physical scientists. This second edition provides a comprehensive tour of the mathematical knowledge and techniques that are needed by students in this area. In contrast to more traditional textbooks, all the material is presented in the form of problems. Within these problems the basic mathematical theory and its physical applications are well integrated. The mathematical insights that the student acquires are therefore driven by their physical insight. Topics that are covered include vector calculus, linear algebra, Fourier analysis, scale analysis, complex integration, Green's functions, normal modes, tensor calculus, and perturbation theory. The second edition contains new chapters on dimensional analysis, variational calculus, and the asymptotic evaluation of integrals. This book can be used by undergraduates and lower-level graduate students in the physical sciences. It can serve as a stand-alone text, or as a source of problems and examples to complement other textbooks. Mathematical insights are gained by getting the reader to develop answers themselves, and many applications of the mathematics are given.
Predicting the time of conversion to MCI in the elderly: role of verbal expression and learning.
Oulhaj, Abderrahim; Wilcock, Gordon K; Smith, A David; de Jager, Celeste A
2009-11-03
Increasing awareness that minimal or mild cognitive impairment (MCI) in the elderly may be a precursor of dementia has led to an increase in the number of people attending memory clinics. We aimed to develop a way of predicting the period of time before cognitive impairment occurs in community-dwelling elderly. The method is illustrated by the use of simple tests of different cognitive domains. A cohort of 241 normal elderly volunteers was followed for up to 20 years with regular assessments of cognitive abilities using the Cambridge Cognitive Examination (CAMCOG); 91 participants developed MCI. We used interval-censored survival analysis statistical methods to model which baseline cognitive tests best predicted the time to convert to MCI. Out of several baseline variables, only age and CAMCOG subscores for expression and learning/memory were predictors of the time to conversion. The time to conversion was 14% shorter for each 5 years of age, 17% shorter for each point lower in the expression score, and 15% shorter for each point lower in the learning score. We present in tabular form the probability of converting to MCI over intervals between 2 and 10 years for different combinations of expression and learning scores. In apparently normal elderly people, subtle measurable cognitive deficits that occur within the normal range on standard testing protocols reliably predict the time to clinically relevant cognitive impairment long before clinical symptoms are reported.
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
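The simplified affine model (translation, scaling, and rotation of the R-G-B cloud) can be estimated from the first two moments of the two color distributions. The following whitening-coloring sketch is our formulation of that moment-matching idea, not necessarily the paper's exact linear-algebra procedure:

```python
import numpy as np

def color_affine_normalize(src, ref):
    """Map the source RGB distribution onto the reference distribution by
    matching means and covariance matrices (translation + scaling + rotation).
    src, ref: (n_pixels, 3) arrays of RGB values."""
    mu_s, mu_r = src.mean(0), ref.mean(0)
    cov_s = np.cov(src, rowvar=False)
    cov_r = np.cov(ref, rowvar=False)
    # symmetric matrix square roots via eigendecomposition (covariances are SPD)
    def sqrtm(c):
        w, v = np.linalg.eigh(c)
        return v @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ v.T
    def inv_sqrtm(c):
        w, v = np.linalg.eigh(c)
        return v @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ v.T
    A = sqrtm(cov_r) @ inv_sqrtm(cov_s)   # whiten source, re-color as reference
    return (src - mu_s) @ A.T + mu_r
```

By construction the output has exactly the reference mean and covariance, which is the property the affine estimation needs; handling of the noise-sensitive skew-rotation component is deliberately omitted, matching the simplified model.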
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
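The normalizing-constant issue is easy to see numerically. The sketch below (Python; based on Efron's 1986 double Poisson forms, not the approximation method this paper proposes) evaluates the constant by brute-force truncated summation and compares it with the classical closed-form approximation:

```python
import math

def dp_log_unnormalized(y, mu, theta):
    """Log of Efron's unnormalized double Poisson density f~(y; mu, theta)."""
    logf = 0.5 * math.log(theta) - theta * mu
    if y > 0:
        logf += -y + y * math.log(y) - math.lgamma(y + 1)      # e^-y y^y / y!
        logf += theta * y * (1.0 + math.log(mu) - math.log(y))  # (e mu / y)^(theta y)
    return logf

def dp_norm_constant(mu, theta, y_max=1000):
    """Normalizing constant c(mu, theta) by direct truncated summation."""
    total = sum(math.exp(dp_log_unnormalized(y, mu, theta)) for y in range(y_max + 1))
    return 1.0 / total

def dp_norm_constant_approx(mu, theta):
    """Efron's closed form: 1/c ~ 1 + (1-theta)/(12 mu theta) * (1 + 1/(mu theta))."""
    return 1.0 / (1.0 + (1.0 - theta) / (12.0 * mu * theta) * (1.0 + 1.0 / (mu * theta)))
```

For θ = 1 the DP reduces to the ordinary Poisson, so the constant is exactly 1; for θ ≠ 1 the summed and closed-form values agree closely.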
Histamine inhalation challenge in normal horses and in horses with small airway disease.
Doucet, M Y; Vrins, A A; Ford-Hutchinson, A W
1991-01-01
A histamine inhalation challenge (HIC) procedure was developed to assess hyperreactive states in horses. Following clinical evaluation, percutaneous lung biopsies were performed on nine light breed mares aged 6 to 15 years. Five horses with normal small airways were classified as group A, and four subjects with small airway disease (SAD) lesions formed group B. Pulmonary mechanics parameters were monitored following an aerosol of 0.9% saline and every 5 min for up to 30 min after HIC with 0.5% w/v histamine diphosphate, administered through a face mask for 2.5 min. Tidal volume (VT) and airflow (V) values were obtained with a pneumotachograph. Transpulmonary pressure (delta Ppl) was measured by the esophageal balloon catheter method. Dynamic compliance (Cdyn), total pulmonary resistance (RL), end expiratory work of breathing (EEW) and respiratory rate (f) were calculated by a pulmonary mechanics computer. Group A horses had increases in RL and decreases in Cdyn, whereas horses in group B were hyperreactive and showed greater changes in EEW, Cdyn, and delta Ppl but with a relatively lower variation of RL. One horse in clinical remission from SAD, but with a high biopsy score (group B), and one clinically normal horse belonging to group A showed marked hyperreactivity, as shown by increases in EEW, maximum change in delta Ppl and RL and decreases in Cdyn. These results suggest that the HIC described can be used as a method to investigate airway hyperreactivity and SAD in horses. PMID:1889039
Sun, Yi; Luo, Deyi; Yang, Lu; Wei, Xin; Tang, Cai; Chen, Mei; Shen, Hong; Wei, Qiang
2017-12-01
To compare the efficacy of 2 different slings in normal weight and overweight women. Of 426 women, 220 (119 normal weight and 101 overweight) received the tension-free vaginal tape Abbrevo (TVT-A) and 206 (114 normal weight and 92 overweight) received the TVT Exact (TVT-E) procedure. Data collected included the subjective efficacy, objective efficacy, and scores on the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF), Incontinence Quality of Life Questionnaire (I-QOL), Pelvic Floor Impact Questionnaire-Short Form (PFIQ-7), Urogenital Distress Inventory-Short Form (UDI-6), and Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire-Short Form (PISQ-12) at 36 months after surgery. In the normal weight patients, the subjective and objective cure rates were high for both TVT-A and TVT-E (objective: 94.12% and 95.61%; subjective: 92.44% and 94.74%), and the I-QOL, PFIQ-7, and UDI-6 scores changed significantly (P < .00001 for each). In the overweight patients, the subjective and objective efficacy were better with TVT-E than with TVT-A. Moreover, the I-QOL, PFIQ-7, and UDI-6 scores of overweight women changed significantly only with TVT-E (P < .00001 for each), whereas neither procedure affected the PISQ-12 score (P = .063 and P = .180 for TVT-A and TVT-E, respectively). TVT-E might be a better choice than TVT-A for overweight patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Heart failure: when form fails to follow function.
Katz, Arnold M; Rolett, Ellis L
2016-02-01
Cardiac performance is normally determined by architectural, cellular, and molecular structures that determine the heart's form, and by physiological and biochemical mechanisms that regulate the function of these structures. Impaired adaptation of form to function in failing hearts contributes to two syndromes initially called systolic heart failure (SHF) and diastolic heart failure (DHF). In SHF, characterized by high end-diastolic volume (EDV), the left ventricle (LV) cannot eject a normal stroke volume (SV); in DHF, with normal or low EDV, the LV cannot accept a normal venous return. These syndromes are now generally defined in terms of ejection fraction (EF): SHF became 'heart failure with reduced ejection fraction' (HFrEF) while DHF became 'heart failure with normal or preserved ejection fraction' (HFnEF or HFpEF). However, EF is a chimeric index because it is the ratio between SV--which measures function, and EDV--which measures form. In SHF the LV dilates when sarcomere addition in series increases cardiac myocyte length, whereas sarcomere addition in parallel can cause concentric hypertrophy in DHF by increasing myocyte thickness. Although dilatation in SHF allows the LV to accept a greater venous return, it increases the energy cost of ejection and initiates a vicious cycle that contributes to progressive dilatation. In contrast, concentric hypertrophy in DHF facilitates ejection but impairs filling and can cause heart muscle to deteriorate. Differences in the molecular signals that initiate dilatation and concentric hypertrophy can explain why many drugs that improve prognosis in SHF have little if any benefit in DHF. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Shocks and metallicity gradients in normal star-forming galaxies
NASA Astrophysics Data System (ADS)
Ho, I.-Ting
Gas flow is one of the most fundamental processes driving galaxy evolution. This thesis explores gas flows in local galaxies by studying metallicity gradients and galactic-scale outflows in normal star-forming galaxies. This is made possible by new integral field spectroscopy data that simultaneously provide spatial and spectral information about galaxies. First, I measure metallicity gradients in isolated disk galaxies and show that their metallicity gradients are remarkably simple and universal. When the metallicity gradients are normalized to galaxy sizes, all 49 galaxies studied have virtually the same metallicity gradient. I model the common metallicity gradient using a simple chemical evolution model to understand its origin. The common metallicity gradient is a direct result of the coevolution of the gas and stellar disks while galactic disks build up their masses from the inside out. Tight constraints on the mass outflow and inflow rates can be placed by the chemical evolution model. Second, I investigate galactic winds in normal star-forming galaxies using data from an integral field spectroscopy survey. I demonstrate how to search for galactic winds by probing emission line ratios, shocks, and gas kinematics. Galactic winds are found to be common even in normal star-forming galaxies that were not expected to host winds. By comparing galaxies with and without winds, I show that galaxies with high star formation rate surface densities and bursty star formation histories are more likely to drive large-scale galactic winds. Finally, lzifu, a toolkit for fitting multiple emission lines simultaneously in integral field spectroscopy data, is developed in this thesis. I describe in detail the structure of the toolkit and demonstrate the capabilities of lzifu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaminski, Adam
A method and apparatus to generate harmonically related laser wavelengths includes a pair of lenses at opposing faces of a non-linear optical material. The lenses are configured to promote incoming and outgoing beams to be normal to each outer lens surface over a range of acceptance angles of the incoming laser beam. This reduces reflection loss for higher efficiency operation. Additionally, the lenses allow a wider range of wavelengths for lasers for more universal application. Examples of the lenses include plano-cylindrical and plano-spherical form factors.
Investigation of Lyapunov stability of a central configuration in the restricted four-body problem
NASA Astrophysics Data System (ADS)
Bardin, B. S.; Esipov, P. A.
2018-05-01
The planar restricted four-body problem is considered. It is supposed that one of the four bodies has infinitesimal mass and does not affect the motion of the other three. If two of the bodies have equal masses, then there exists a central configuration in which the three massive bodies are located at the vertices of an equilateral triangle and the fourth body, of infinitesimal mass, lies on the perpendicular bisector of the triangle. Using the method of normal forms and KAM theory, the Lyapunov stability of this configuration is studied.
Method for extracting lanthanides and actinides from acid solutions by modification of Purex solvent
Horwitz, E.P.; Kalina, D.G.
1984-05-21
A process has been developed for the extraction of multivalent lanthanide and actinide values from acidic waste solutions, and for the separation of these values from fission product and other values, which utilizes a new series of neutral bi-functional extractants, the alkyl(phenyl)-N,N-dialkylcarbamoylmethylphosphine oxides, in combination with a phase modifier to form an extraction solution. The addition of the extractant to the Purex process extractant, tri-n-butylphosphate in a normal paraffin hydrocarbon diluent, will permit the extraction of multivalent lanthanide and actinide values from 0.1 to 12.0 molar acid solutions.
Selective determination of ertapenem in the presence of its degradation product
NASA Astrophysics Data System (ADS)
Hassan, Nagiba Y.; Abdel-Moety, Ezzat M.; Elragehy, Nariman A.; Rezk, Mamdouh R.
2009-06-01
Stability-indicating determination of ertapenem (ERTM) in the presence of its β-lactam open-ring degradation product, which is also the metabolite, is investigated. The degradation product has been isolated via acid degradation, characterized, and elucidated. Selective quantification of ERTM, singly in bulk form, in pharmaceutical formulations, and/or in the presence of its major degradant, is demonstrated. Stability was assessed under conditions likely to be encountered during normal storage. Among the spectrophotometric methods adopted for quantification are first derivative (¹D), first derivative of ratio spectra (¹DD), and bivariate analysis.
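The selectivity of the ratio-derivative (¹DD) approach rests on a simple identity: dividing a mixture spectrum a·E(λ) + b·D(λ) by the degradant spectrum D(λ) turns the degradant contribution into a constant, which differentiation then removes. A generic numerical sketch (Python; synthetic sampled spectra, not the paper's measured data):

```python
def ratio_spectrum(mixture, divisor):
    """Point-wise ratio of a mixture spectrum to the divisor spectrum."""
    return [m / d for m, d in zip(mixture, divisor)]

def first_derivative(values, step):
    """Central-difference first derivative of a uniformly sampled spectrum."""
    return [(values[i + 1] - values[i - 1]) / (2 * step)
            for i in range(1, len(values) - 1)]
```

For a mixture a·E + b·D, the ratio against D is a·E/D + b, so its first derivative is independent of the degradant amount b and scales linearly with the analyte amount a; this is what makes the ¹DD signal selective for the analyte.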
Self-consistent-field perturbation theory for the Schrödinger equation
NASA Astrophysics Data System (ADS)
Goodson, David Z.
1997-06-01
A method is developed for using large-order perturbation theory to solve the systems of coupled differential equations that result from the variational solution of the Schrödinger equation with wave functions of product form. This is a noniterative, computationally efficient way to solve self-consistent-field (SCF) equations. Possible applications include electronic structure calculations using products of functions of collective coordinates that include electron correlation, vibrational SCF calculations for coupled anharmonic oscillators with selective coupling of normal modes, and ab initio calculations of molecular vibration spectra without the Born-Oppenheimer approximation.
Yoo, Youngjin; Tang, Lisa Y W; Brosch, Tom; Li, David K B; Kolind, Shannon; Vavasour, Irene; Rauscher, Alexander; MacKay, Alex L; Traboulsee, Anthony; Tam, Roger C
2018-01-01
Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches are selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier.
Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
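The patch-selection step described above (a voxel-wise two-sample t-test between patients and controls) can be sketched generically. This is a pure-Python toy over flat feature vectors; the actual pipeline operates on 3D patches and deep-learned features, and the threshold here is arbitrary:

```python
import statistics

def welch_t(xs, ys):
    """Welch's two-sample t statistic (unequal variances)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / ((vx / len(xs) + vy / len(ys)) ** 0.5)

def select_features(patient_feats, control_feats, t_threshold):
    """Keep feature indices whose |t| between groups exceeds the threshold,
    mimicking the voxel-wise t-test used to pick a common set of patches."""
    n_features = len(patient_feats[0])
    kept = []
    for j in range(n_features):
        t = welch_t([p[j] for p in patient_feats],
                    [c[j] for c in control_feats])
        if abs(t) > t_threshold:
            kept.append(j)
    return kept
```

Features that clearly separate the groups survive the test, while features with identical distributions in both groups are discarded before the LASSO step.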
Cho, Hanna; Kim, Jin Su; Choi, Jae Yong; Ryu, Young Hoon; Lyoo, Chul Hyoung
2014-01-01
We developed a new computed tomography (CT)-based spatial normalization method and CT template to demonstrate its usefulness in spatial normalization of positron emission tomography (PET) images with [(18)F] fluorodeoxyglucose (FDG) PET studies in healthy controls. Seventy healthy controls underwent a brain CT scan (120 keV, 180 mAs, and 3 mm slice thickness) and [(18)F] FDG PET scans using a PET/CT scanner. T1-weighted magnetic resonance (MR) images were acquired for all subjects. By averaging skull-stripped and spatially-normalized MR and CT images, we created skull-stripped MR and CT templates for spatial normalization. The skull-stripped MR and CT images were spatially normalized to each structural template. PET images were spatially normalized by applying the spatial transformation parameters used to normalize the skull-stripped MR and CT images. A conventional perfusion PET template was used for PET-based spatial normalization. Regional standardized uptake values (SUV) measured by overlaying the template volume of interest (VOI) were compared to those measured with FreeSurfer-generated VOI (FSVOI). All three spatial normalization methods underestimated regional SUV values by 0.3-20% compared to those measured with FSVOI. The CT-based method showed slightly greater underestimation bias. Regional SUV values derived from all three spatial normalization methods were correlated significantly (p < 0.0001) with those measured with FSVOI. CT-based spatial normalization may be an alternative method for structure-based spatial normalization of [(18)F] FDG PET when MR imaging is unavailable. Therefore, it is useful for PET/CT studies with various radiotracers whose uptake is expected to be limited to specific brain regions or highly variable within the study population.
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed, and it is widely acknowledged that this normality assumption is not met for RT data. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, the transformation with parameter λ = -1 leads to the best results. PMID:25709588
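The λ = -1 (reciprocal) result can be reproduced on synthetic data: an Ex-Gaussian sample is simply a Gaussian draw plus an exponential draw, and the reciprocal transform visibly reduces the sample skewness. A Python sketch with illustrative parameters (not the paper's simulation settings):

```python
import random
import statistics

def ex_gaussian_sample(n, mu, sigma, tau, seed=0):
    """Draw n Ex-Gaussian values: Gaussian(mu, sigma) + Exponential(tau)."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def inverse_transform(xs):
    """Power transform with lambda = -1; the sign flip preserves order."""
    return [-1.0 / x for x in xs]
```

With RT-like parameters (mu = 0.5 s, sigma = 0.05 s, tau = 0.2 s) the raw sample is strongly right-skewed, and the transformed sample's skewness is much closer to zero.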
Varga, R.J.; Faulds, J.E.; Snee, L.W.; Harlan, S.S.; Bettison-Varga, L.
2004-01-01
Recent studies demonstrate that rifts are characterized by linked tilt domains, each containing a consistent polarity of normal faults and stratal tilt directions, and that the transition between domains is typically through formation of accommodation zones and generally not through production of throughgoing transfer faults. The mid-Miocene Black Mountains accommodation zone of southern Nevada and western Arizona is a well-exposed example of an accommodation zone linking two regionally extensive and opposing tilt domains. In the southeastern part of this zone near Kingman, Arizona, east dipping normal faults of the Whipple tilt domain and west dipping normal faults of the Lake Mead domain coalesce across a relatively narrow region characterized by a series of linked, extensional folds. The geometry of these folds in this strike-parallel portion of the accommodation zone is dictated by the geometry of the interdigitating normal faults of opposed polarity. Synclines formed where normal faults of opposite polarity face away from each other whereas anticlines formed where the opposed normal faults face each other. Opposed normal faults with small overlaps produced short folds with axial trends at significant angles to regional strike directions, whereas large fault overlaps produce elongate folds parallel to faults. Analysis of faults shows that the folds are purely extensional and result from east/northeast stretching and fault-related tilting. The structural geometry of this portion of the accommodation zone mirrors that of the Black Mountains accommodation zone more regionally, with both transverse and strike-parallel antithetic segments. Normal faults of both tilt domains lose displacement and terminate within the accommodation zone northwest of Kingman, Arizona. 
However, isotopic dating of growth sequences and crosscutting relationships show that the initiation of the two fault systems in this area was not entirely synchronous and that west dipping faults of the Lake Mead domain began to form 1 m.y. to 0.2 m.y. prior to east dipping faults of the Whipple domain. The accommodation zone formed above an active and evolving magmatic center that, prior to rifting, produced intermediate-composition volcanic rocks and that, during rifting, produced voluminous rhyolite and basalt magmas. Copyright 2004 by the American Geophysical Union.
Zheng, Jingjing; Yu, Tao; Papajak, Ewa; Alecu, I M; Mielke, Steven L; Truhlar, Donald G
2011-06-21
Many methods for correcting harmonic partition functions for the presence of torsional motions employ some form of one-dimensional torsional treatment to replace the harmonic contribution of a specific normal mode. However, torsions are often strongly coupled to other degrees of freedom, especially other torsions and low-frequency bending motions, and this coupling can make assigning torsions to specific normal modes problematic. Here, we present a new class of methods, called multi-structural (MS) methods, that circumvents the need for such assignments by instead adjusting the harmonic results by torsional correction factors that are determined using internal coordinates. We present three versions of the MS method: (i) MS-AS based on including all structures (AS), i.e., all conformers generated by internal rotations; (ii) MS-ASCB based on all structures augmented with explicit conformational barrier (CB) information, i.e., including explicit calculations of all barrier heights for internal-rotation barriers between the conformers; and (iii) MS-RS based on including all conformers generated from a reference structure (RS) by independent torsions. In the MS-AS scheme, one has two options for obtaining the local periodicity parameters, one based on consideration of the nearly separable limit and one based on strongly coupled torsions. The latter involves assigning the local periodicities on the basis of Voronoi volumes. The methods are illustrated with calculations for ethanol, 1-butanol, and 1-pentyl radical as well as two one-dimensional torsional potentials. The MS-AS method is particularly interesting because it does not require any information about conformational barriers or about the paths that connect the various structures.
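The starting point of all three MS variants is a Boltzmann-weighted sum of harmonic partition functions over conformers; the torsional correction factors (the paper's actual contribution) then multiply each term. A minimal sketch of just that underlying conformer sum (Python; illustrative constants in hartree and cm⁻¹, torsional factors omitted):

```python
import math

KB_HARTREE_PER_K = 3.166811563e-6   # Boltzmann constant, hartree/K
CM1_TO_HARTREE = 4.556335e-6        # 1 cm^-1 in hartree

def q_harmonic(freqs_cm1, temperature):
    """Quantum harmonic-oscillator partition function (energy zero at the
    bottom of the well) for a list of normal-mode frequencies in cm^-1."""
    beta = 1.0 / (KB_HARTREE_PER_K * temperature)
    q = 1.0
    for nu in freqs_cm1:
        e = nu * CM1_TO_HARTREE
        q *= math.exp(-beta * e / 2) / (1.0 - math.exp(-beta * e))
    return q

def q_multi_structural(conformers, temperature):
    """Sum over conformers j: Q = sum_j exp(-U_j / kT) * Q_HO,j, where U_j
    is the conformer energy (hartree) relative to the global minimum."""
    beta = 1.0 / (KB_HARTREE_PER_K * temperature)
    return sum(math.exp(-beta * energy) * q_harmonic(freqs, temperature)
               for energy, freqs in conformers)
```

A single conformer at zero relative energy reduces to the plain harmonic result, and degenerate conformers simply multiply the partition function, which is the qualitative effect the MS methods capture.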
Rhythm-based heartbeat duration normalization for atrial fibrillation detection.
Islam, Md Saiful; Ammour, Nassim; Alajlan, Naif; Aboalsamh, Hatim
2016-05-01
Screening of atrial fibrillation (AF) for high-risk patients including all patients aged 65 years and older is important for prevention of risk of stroke. Different technologies such as modified blood pressure monitor, single lead ECG-based finger-probe, and smart phone using plethysmogram signal have been emerging for this purpose. All these technologies use irregularity of heartbeat duration as a feature for AF detection. We have investigated a normalization method of heartbeat duration for improved AF detection. AF is an arrhythmia in which heartbeat duration generally becomes irregularly irregular. From a window of heartbeat duration, we estimate the possible rhythm of the majority of heartbeats and normalize duration of all heartbeats in the window based on the rhythm so that we can measure the irregularity of heartbeats for both AF and non-AF rhythms in the same scale. Irregularity is measured by the entropy of distribution of the normalized duration. Then we classify a window of heartbeats as AF or non-AF by thresholding the measured irregularity. The effect of this normalization is evaluated by comparing AF detection performances using duration with the normalization, without normalization, and with other existing normalizations. Sensitivity and specificity of AF detection using normalized heartbeat duration were tested on two landmark databases available online and compared with results of other methods (with/without normalization) by receiver operating characteristic (ROC) curves. ROC analysis showed that the normalization was able to improve the performance of AF detection and it was consistent for a wide range of sensitivity and specificity for use of different thresholds. Detection accuracy was also computed for equal rates of sensitivity and specificity for different methods. Using normalized heartbeat duration, we obtained 96.38% accuracy which is more than 4% improvement compared to AF detection without normalization. 
The proposed normalization method was found useful for improving performance and robustness of AF detection. Incorporation of this method in a screening device could be crucial to reduce the risk of AF-related stroke. In general, the incorporation of the rhythm-based normalization in an AF detection method seems important for developing a robust AF screening device. Copyright © 2016 Elsevier Ltd. All rights reserved.
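The detection pipeline reduces to three small steps: estimate the window's dominant rhythm, normalize each beat duration by it, and threshold the entropy of the normalized durations. A toy Python sketch (the median stands in for the paper's majority-rhythm estimate; the bin edges and threshold are hypothetical):

```python
import math
import statistics

def rhythm_normalize(rr_intervals):
    """Normalize each heartbeat duration by the window's median duration,
    a simple stand-in for the majority-rhythm estimate."""
    med = statistics.median(rr_intervals)
    return [rr / med for rr in rr_intervals]

def irregularity_entropy(values, n_bins=8, lo=0.5, hi=1.5):
    """Shannon entropy (nats) of the histogram of normalized durations."""
    counts = [0] * n_bins
    for v in values:
        k = min(n_bins - 1, max(0, int((v - lo) / (hi - lo) * n_bins)))
        counts[k] += 1
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c)

def looks_like_af(rr_intervals, threshold=1.0):
    """Flag a window as AF-like when its normalized-duration entropy is high."""
    return irregularity_entropy(rhythm_normalize(rr_intervals)) > threshold
```

A perfectly regular window collapses into a single histogram bin (entropy 0), while an irregularly irregular window spreads across bins and exceeds the threshold.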
Zhang, Huiwei; Wu, Ping; Ziegler, Sibylle I; Guan, Yihui; Wang, Yuetao; Ge, Jingjie; Schwaiger, Markus; Huang, Sung-Cheng; Zuo, Chuantao; Förster, Stefan; Shi, Kuangyu
2017-02-01
In brain 18F-FDG PET data, intensity normalization is usually applied to control for unwanted factors confounding brain metabolism. However, it can be difficult to determine a proper intensity normalization region as a reference for the identification of abnormal metabolism in diseased brains. In neurodegenerative disorders, differentiating disease-related changes in brain metabolism from age-associated natural changes remains challenging. This study proposes a new data-driven method to identify proper intensity normalization regions in order to improve separation of age-associated natural changes from disease-related changes in brain metabolism. 127 female and 128 male healthy subjects (age: 20 to 79) with brain 18F-FDG PET/CT in the course of a whole-body cancer screening were included. Brain PET images were processed using SPM8 and were parcellated into 116 anatomical regions according to the AAL template. It is assumed that normal brain 18F-FDG metabolism has longitudinal coherency and that this coherency leads to better model fitting. The coefficient of determination R² was proposed as the coherence coefficient, and the total coherence coefficient (overall fitting quality) was employed as an index to assess proper intensity normalization strategies on single subjects and age-cohort averaged data. Age-associated longitudinal changes of normal subjects were derived using the identified intensity normalization method correspondingly. In addition, 15 subjects with clinically diagnosed Parkinson's disease were assessed to evaluate the clinical potential of the proposed new method. Intensity normalizations by paracentral lobule and cerebellar tonsil, both regions derived from the new data-driven coherency method, showed significantly better coherence coefficients than other intensity normalization regions, and especially better than the most widely used global mean normalization.
Intensity normalization by paracentral lobule was the most consistent method within both analysis strategies (subject-based and age-cohort averaging). In addition, the proposed new intensity normalization method using the paracentral lobule generates significantly higher differentiation from the age-associated changes than other intensity normalization methods. Proper intensity normalization can enhance the longitudinal coherency of normal brain glucose metabolism. The paracentral lobule followed by the cerebellar tonsil are shown to be the two most stable intensity normalization regions concerning age-dependent brain metabolism. This may provide the potential to better differentiate disease-related changes from age-related changes in brain metabolism, which is of relevance in the diagnosis of neurodegenerative disorders. Copyright © 2016 Elsevier Inc. All rights reserved.
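The coherence coefficient is the ordinary R² of a longitudinal model fit, and summing it over regions gives the overall fitting quality used to rank candidate normalization strategies. A sketch with a straight-line fit standing in for the paper's longitudinal model (Python; illustrative only):

```python
def r_squared(xs, ys):
    """Coefficient of determination R^2 of a least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def total_coherence(ages, regional_uptake_by_region):
    """Total coherence coefficient: sum of R^2 over all regions, one
    uptake trajectory per region, fitted against age."""
    return sum(r_squared(ages, uptakes) for uptakes in regional_uptake_by_region)
```

A normalization region that makes regional uptake vary smoothly with age yields R² near 1 in each region and hence a high total coherence, which is the selection criterion described above.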
Molecular cloud formation in high-shear, magnetized colliding flows
NASA Astrophysics Data System (ADS)
Fogerty, E.; Frank, A.; Heitsch, F.; Carroll-Nellenback, J.; Haig, C.; Adams, M.
2016-08-01
The colliding flows (CF) model is a well-supported mechanism for generating molecular clouds. However, to date most CF simulations have focused on the formation of clouds in the normal-shock layer between head-on colliding flows. We performed simulations of magnetized colliding flows that instead meet at an oblique-shock layer. Oblique shocks generate shear in the post-shock environment, and this shear creates inhospitable environments for star formation. As the degree of shear increases (i.e., as the obliquity of the shock increases), we find that it takes longer for sink particles to form, they form in lower numbers, and they tend to be less massive. With regard to magnetic fields, we find that even a weak field stalls gravitational collapse within forming clouds. Additionally, an initially oblique collision interface tends to reorient over time in the presence of a magnetic field, so that it becomes normal to the oncoming flows. This was demonstrated by our most oblique shock interface, which became fully normal by the end of the simulation.
Sewer, Alain; Gubian, Sylvain; Kogel, Ulrike; Veljkovic, Emilija; Han, Wanjiang; Hengstermann, Arnd; Peitsch, Manuel C; Hoeng, Julia
2014-05-17
High-quality expression data are required to investigate the biological effects of microRNAs (miRNAs). The goal of this study was, first, to assess the quality of miRNA expression data based on microarray technologies and, second, to consolidate it by applying a novel normalization method. Indeed, because of significant differences in platform designs, miRNA raw data cannot be normalized blindly with standard methods developed for gene expression. This fundamental observation motivated the development of a novel multi-array normalization method based on controllable assumptions, which uses the spike-in control probes to adjust the measured intensities across arrays. Raw expression data were obtained with the Exiqon dual-channel miRCURY LNA™ platform in the "common reference design" and processed as "pseudo-single-channel". They were used to apply several quality metrics based on the coefficient of variation and to test the novel spike-in controls based normalization method. Most of the considerations presented here could be applied to raw data obtained with other platforms. To assess the normalization method, it was compared with 13 other available approaches from both data quality and biological outcome perspectives. The results showed that the novel multi-array normalization method reduced the data variability in the most consistent way. Further, the reliability of the obtained differential expression values was confirmed based on a quantitative reverse transcription-polymerase chain reaction experiment performed for a subset of miRNAs. The results reported here support the applicability of the novel normalization method, in particular to datasets that display global decreases in miRNA expression similarly to the cigarette smoke-exposed mouse lung dataset considered in this study. 
Quality metrics to assess between-array variability were used to confirm that the novel spike-in controls based normalization method provided high-quality miRNA expression data suitable for reliable downstream analysis. The multi-array miRNA raw data normalization method was implemented in an R software package called ExiMiR and deposited in the Bioconductor repository. PMID:24886675
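The core idea of spike-in based multi-array normalization, shifting each array so that its spike-in control probes line up across arrays, can be sketched as follows. This is a minimal illustration, not the ExiMiR implementation: the probe indices and the simple mean-offset correction on the log scale are assumptions.

```python
import numpy as np

def spike_in_normalize(log_intensities, spike_rows):
    """Shift each array so the mean of its spike-in control probes
    matches the grand mean of the spike-ins across all arrays.

    log_intensities : (n_probes, n_arrays) log2 intensities
    spike_rows      : row indices of the spike-in control probes
    """
    spikes = log_intensities[spike_rows, :]        # control probes only
    per_array = spikes.mean(axis=0)                # spike-in level per array
    offset = per_array - per_array.mean()          # array-specific bias
    return log_intensities - offset                # broadcast over probes

# Simulated (probes x arrays) log2 intensities with a biased third array
rng = np.random.default_rng(0)
data = rng.normal(8.0, 1.0, size=(100, 4))
data[:, 2] += 0.7                                  # simulated technical bias
normalized = spike_in_normalize(data, spike_rows=np.arange(10))
```

Because the correction is anchored on the spike-in controls rather than on all probes, a global decrease in miRNA expression (as in the smoke-exposed lung dataset) is preserved instead of being normalized away.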
Rabanus, J P; Gelderblom, H R; Schuppan, D; Becker, J
1991-05-01
The ultrastructural localization of collagen types V and VI in normal human gingival mucosa was investigated by immunoelectron microscopy. Twenty biopsies were fixed in dimethylsuberimidate and shock-frozen in slush nitrogen. Collagen type V was mainly localized in meshworks of uniform non-striated microfibrils of 12 to 20 nm width, which preferentially appeared in larger spaces between cross-striated major collagen fibrils. Occasionally, single microfibrils of collagen type V fanned out from the ends of major collagen fibrils, which may indicate a role as a core fibril. Collagen type V was not found in the subepithelial basement membrane or the immediately adjacent stroma. Collagen type VI was detected in a loose reticular network of unbanded microfilaments that were morphologically distinguishable by knob-like protrusions every 100-110 nm. These microfilaments were found in the vicinity of, but not as an intrinsic component of, the subepithelial basement membrane. Single collagen type VI filaments appeared to form bridges between neighboring cross-striated major collagen fibrils, suggesting an interconnecting role for this collagen type. The method presented appears to be well suited to studying the normal and pathological supramolecular organization of the oral extracellular matrix.
Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma
NASA Astrophysics Data System (ADS)
Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.
2004-05-01
Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist, in the form of the expected structures and their shapes, HU values, and radiological characteristics, is also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.
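Fuzzy connectivity assigns each voxel the strength of its best path to a seed, where a path's strength is the weakest affinity along it. A minimal 2-D sketch follows; the exponential intensity-difference affinity and the Dijkstra-style propagation are illustrative assumptions, not the authors' algorithm.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=20.0):
    """Fuzzy connectedness map from a seed pixel (2-D sketch).

    Affinity between 4-neighbours decays with intensity difference;
    a pixel's connectedness is the best max-min path strength from
    the seed, computed with a Dijkstra-style best-first propagation.
    """
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                           # max-heap via negation
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[y, x]:
            continue                                # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                diff = abs(float(img[y, x]) - float(img[ny, nx]))
                cand = min(strength, np.exp(-diff / sigma))
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn

# Two flat regions separated by a sharp intensity step
img = np.zeros((5, 5))
img[:, 3:] = 100.0
conn = fuzzy_connectedness(img, seed=(2, 0))
```

Thresholding the resulting map separates the seed's region: pixels on the seed's side of the step stay near 1.0, while every path into the bright region must cross one weak-affinity edge and is penalized accordingly.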
Shafipour, Maryam; Sabbaghian, Marjan; Shahhoseini, Maryam; Sadighi Gilani, Mohammad Ali
2014-01-01
Background: Septins are an evolutionarily conserved group of GTP-binding, filament-forming proteins with diverse cellular roles. An increasing body of data implicates the septin family in the pathogenesis of diverse conditions including cancers, neurodegeneration, and male infertility. Objective: The objective of the study was to evaluate the expression pattern of Septin 14 in testis tissue of men with and without spermatogenic failure. Materials and Methods: Samples were collected, by convenience (accessible) sampling, from infertile men who underwent diagnostic testicular biopsy at the Royan Institute. Ten infertile men with obstructive azoospermia and normal spermatogenesis and 20 infertile men with non-obstructive azoospermia were recruited for real-time reverse transcription (RT)-PCR analysis of the testicular tissue. Total RNA was extracted with TRIzol reagent. Results: Comparison of the mRNA levels of Septin 14 revealed that in tissues with partial (n=10) or complete spermatogenesis (n=10), the expression of Septin 14 was significantly higher than in Sertoli cell-only tissues. Conclusion: The testicular tissues of men with hypospermatogenesis, maturation arrest, and Sertoli cell-only syndrome had lower levels of Septin 14 transcripts than those of normal men. These data indicate that the Septin 14 expression level is critical for human spermatogenesis. PMID:24799881
Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D
2018-01-01
Motivation: Gene-expression data obtained from high-throughput technologies are subject to various sources of noise, and accordingly the raw data are pre-processed before being formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems such as the cell cycle, the circadian clock, etc., the choice of normalization method may substantially impact the determination of a gene to be rhythmic. Thus rhythmicity of a gene can purely be an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to identify truly rhythmic genes that are robust to the choice of normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate it using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. This suggests that the proposed measure is robust to the choice of normalization method, and consequently that the rhythmicity of a gene is not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used to simulate data for genes participating in an oscillatory system using a reference dataset. 
Availability: A user-friendly code implemented in the R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html PMID:29456555
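The general recipe, scoring each gene with a rhythmicity measure and comparing it against a resampled null, might be sketched as follows. The cosinor R² measure and the permutation null here are illustrative stand-ins, not the authors' statistic.

```python
import numpy as np

def rhythmicity(expr, times, period=24.0):
    """Proxy rhythmicity measure (assumption): R^2 of a cosinor fit,
    i.e. the fraction of variance explained by a cosine of the given period."""
    X = np.column_stack([np.ones_like(times),
                         np.cos(2 * np.pi * times / period),
                         np.sin(2 * np.pi * times / period)])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)
    resid = expr - X @ beta
    return 1.0 - resid.var() / expr.var()

def bootstrap_pvalue(expr, times, n_boot=1000, seed=0):
    """Resampling null: shuffle the time labels, recompute the measure,
    and report how often the shuffled score matches the observed one."""
    rng = np.random.default_rng(seed)
    observed = rhythmicity(expr, times)
    null = np.array([rhythmicity(expr, rng.permutation(times))
                     for _ in range(n_boot)])
    p = (np.sum(null >= observed) + 1) / (n_boot + 1)
    return observed, p

# A clearly rhythmic gene sampled every 4 h over two 24 h cycles
rng = np.random.default_rng(1)
times = np.arange(0.0, 48.0, 4.0)
expr = np.cos(2 * np.pi * times / 24.0) + 0.1 * rng.normal(size=times.size)
obs, p = bootstrap_pvalue(expr, times, n_boot=200, seed=2)
```

Because the score is a ratio of variances, any normalization that rescales or shifts a gene's profile roughly uniformly leaves it nearly unchanged, which is consistent with the robustness the paper reports.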
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing a comparison of proportions to be presented with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology for obtaining dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study that tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure, and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, analysed with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method remains reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This has the effect of providing a practical understanding of the difference in means in terms of proportions.
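The distributional idea is that, for a normally distributed outcome, the proportion beyond a clinical cutpoint follows directly from the estimated mean and standard deviation, rather than from counting dichotomised observations. A minimal sketch, using made-up birthweight figures; the skew-normal extension discussed in the paper would swap in a skew-normal CDF:

```python
from statistics import NormalDist

def distributional_proportion(mean, sd, cutpoint):
    """Model-based estimate of P(outcome < cutpoint) under normality:
    Phi((cutpoint - mean) / sd)."""
    return NormalDist(mean, sd).cdf(cutpoint)

# Hypothetical birthweight summaries (grams) for two groups,
# with the conventional 2500 g low-birthweight cutpoint
p_control = distributional_proportion(3400, 500, 2500)
p_exposed = distributional_proportion(3200, 500, 2500)
risk_difference = p_exposed - p_control
```

Because the proportions are derived from the means and standard deviations, their precision tracks that of the comparison of means, avoiding the loss of information incurred by dichotomising the raw observations.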
DNA Secondary Structure at Chromosomal Fragile Sites in Human Disease
Thys, Ryan G; Lehman, Christine E; Pierce, Levi C. T; Wang, Yuh-Hwa
2015-01-01
DNA has the ability to form a variety of secondary structures that can interfere with normal cellular processes, and many of these structures have been associated with neurological diseases and cancer. Secondary structure-forming sequences are often found at chromosomal fragile sites, which are hotspots for sister chromatid exchange, chromosomal translocations, and deletions. Structures formed at fragile sites can lead to instability by disrupting normal cellular processes such as DNA replication and transcription. The instability caused by disruption of replication and transcription can lead to DNA breakage, resulting in gene rearrangements and deletions that cause disease. In this review, we discuss the role of DNA secondary structure at fragile sites in human disease. PMID:25937814
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distributions, along with their associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviour in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma centre, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and uncertainty calculations fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. The method is shown to produce convincing reconstructions in good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
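Because line-integral measurements are linear in the emissivity and the noise is Gaussian, conditioning a GP prior on the data yields a closed-form multivariate normal posterior. A minimal 1-D sketch follows; it uses a stationary squared-exponential kernel for brevity, whereas the paper's method is non-stationary, and the geometry matrix A standing in for the chord line integrals is an assumption.

```python
import numpy as np

def gp_linear_inverse(A, y, K, noise_var):
    """Analytic GP posterior for y = A f + e, e ~ N(0, noise_var * I),
    with prior f ~ N(0, K):
        mean = K A^T S^{-1} y,   cov = K - K A^T S^{-1} A K,
    where S = A K A^T + noise_var * I."""
    S = A @ K @ A.T + noise_var * np.eye(A.shape[0])
    KAt = K @ A.T
    mean = KAt @ np.linalg.solve(S, y)
    cov = K - KAt @ np.linalg.solve(S, KAt.T)
    return mean, cov

# Toy problem: recover a smooth 1-D profile from 10 coarse linear averages
n = 30
x = np.linspace(0.0, 1.0, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)   # SE kernel
truth = np.exp(-0.5 * (x - 0.5) ** 2 / 0.15 ** 2)              # "emissivity"
rng = np.random.default_rng(0)
A = rng.uniform(size=(10, n)) / n            # stand-in for chord integrals
y = A @ truth + 1e-3 * rng.normal(size=10)   # noisy measurements
mean, cov = gp_linear_inverse(A, y, K, noise_var=1e-6)
```

The diagonal of the posterior covariance gives pointwise error bars for free, which is the uncertainty-analysis advantage the abstract highlights; a non-stationary version would let the kernel length scale vary with position.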