Feizi, Sepehr; Delfazayebaher, Siamak; Ownagh, Vahid; Sadeghpour, Fatemeh
To evaluate the agreement between total corneal astigmatism calculated by vector summation of anterior and posterior corneal astigmatism (TCA_Vec) and total corneal astigmatism measured by ray tracing (TCA_Ray). This study enrolled 204 right eyes of 204 normal subjects. The eyes were measured using a Galilei double Scheimpflug analyzer. The measured parameters included simulated keratometric astigmatism using the keratometric index, anterior corneal astigmatism using the corneal refractive index, posterior corneal astigmatism, and TCA_Ray. TCA_Vec was derived by vector summation of the astigmatism on the anterior and posterior corneal surfaces. The magnitudes and axes of TCA_Vec and TCA_Ray were compared. The Pearson correlation coefficient and Bland-Altman plots were used to assess the relationship and agreement between TCA_Vec and TCA_Ray, respectively. The mean TCA_Vec and TCA_Ray magnitudes were 0.76±0.57 D and 1.00±0.78 D, respectively (P<0.001). The mean axis orientations were 85.12±30.26° and 89.67±36.76°, respectively (P=0.02). A strong correlation was found between the TCA_Vec and TCA_Ray magnitudes (r=0.96, P<0.001), and a moderate association between the TCA_Vec and TCA_Ray axes (r=0.75, P<0.001). Bland-Altman plots gave 95% limits of agreement of -0.33 to 0.82 D for the TCA_Vec and TCA_Ray magnitudes and -43.0° to 52.1° for their axes. The magnitudes and axes of astigmatism measured by the vector summation and ray tracing methods cannot be used interchangeably; there was a systematic error between the TCA_Vec and TCA_Ray magnitudes. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
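Vector summation of astigmatism is commonly done in a double-angle representation, in which the axis is doubled so that perpendicular astigmatisms cancel. The sketch below assumes that standard construction; the paper's exact convention is not reproduced here.

```python
import math

def astig_to_components(magnitude, axis_deg):
    """Map an astigmatism (magnitude, axis) to double-angle vector components."""
    rad = math.radians(2.0 * axis_deg)
    return magnitude * math.cos(rad), magnitude * math.sin(rad)

def components_to_astig(x, y):
    """Map double-angle components back to (magnitude, axis in [0, 180))."""
    magnitude = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2.0
    return magnitude, axis % 180.0

def vector_sum_astigmatism(ant_mag, ant_axis, post_mag, post_axis):
    """TCA_Vec-style vector summation of anterior and posterior astigmatism."""
    ax, ay = astig_to_components(ant_mag, ant_axis)
    px, py = astig_to_components(post_mag, post_axis)
    return components_to_astig(ax + px, ay + py)

mag, axis = vector_sum_astigmatism(1.00, 90.0, 0.30, 180.0)  # 0.70 D at axis 90
```

In the example, a 0.30 D against-the-rule posterior component partially cancels a 1.00 D with-the-rule anterior component, which is why the double-angle mapping is needed: axes 90° apart become antiparallel vectors.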
VecScreen_plus_taxonomy: imposing a tax(onomy) increase on vector contamination screening.
Schäffer, Alejandro A; Nawrocki, Eric P; Choi, Yoon; Kitts, Paul A; Karsch-Mizrachi, Ilene; McVeigh, Richard
2018-03-01
Nucleic acid sequences in public databases should not contain vector contamination, but many sequences in GenBank do (or did) contain vectors. The National Center for Biotechnology Information uses the program VecScreen to screen submitted sequences for contamination. Additional tools are needed to distinguish true-positive (contamination) from false-positive (not contamination) VecScreen matches. A principal reason for false-positive VecScreen matches is that the sequence and the matching vector subsequence originate from closely related or identical organisms (for example, both originate in Escherichia coli). We collected information on the taxonomy of sources of vector segments in the UniVec database used by VecScreen. We used that information in two overlapping software pipelines for retrospective analysis of contamination in GenBank and for prospective analysis of contamination in new sequence submissions. Using the retrospective pipeline, we identified and corrected over 8000 contaminated sequences in the nonredundant nucleotide database. The prospective analysis pipeline has been in production use since April 2017 to evaluate some new GenBank submissions. Data on the sources of UniVec entries were included in release 10.0 (ftp://ftp.ncbi.nih.gov/pub/UniVec/). The main software is freely available at https://github.com/aaschaffer/vecscreen_plus_taxonomy. aschaffe@helix.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Konstantakis, Konstantinos N.; Michaelides, Panayotis G.; Vouldis, Angelos T.
2016-06-01
As a result of domestic and international factors, the Greek economy faced a severe crisis which is directly comparable only to the Great Recession. In this context, a prominent victim of this situation was the country's banking system. This paper attempts to shed light on the determining factors of non-performing loans in the Greek banking sector. The analysis presents empirical evidence from the Greek economy, using aggregate data on a quarterly basis, in the time period 2001-2015, fully capturing the recent recession. In this work, we use a relevant econometric framework based on a real time Vector Autoregressive (VAR)-Vector Error Correction (VEC) model, which captures the dynamic interdependencies among the variables used. Consistent with international evidence, the empirical findings show that both macroeconomic and financial factors have a significant impact on non-performing loans in the country. Meanwhile, the deteriorating credit quality feeds back into the economy leading to a self-reinforcing negative loop.
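The error-correction mechanism underlying the VAR-VEC framework can be sketched in a few lines. This toy simulation imposes a known cointegrating vector (1, −1) and recovers the speed-of-adjustment coefficient; the paper's actual model is multivariate and would be estimated with standard econometric software.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Simulate a cointegrated pair: x is a random walk and y error-corrects
# toward it with a true speed-of-adjustment coefficient alpha = -0.3.
x = np.cumsum(rng.normal(size=T))
y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    y[t] = y[t - 1] - 0.3 * (y[t - 1] - x[t - 1]) + rng.normal(scale=0.5)

# Error-correction regression: dy_t = alpha * (y_{t-1} - x_{t-1}) + eps_t.
# The cointegrating vector (1, -1) is imposed here rather than estimated.
spread = (y - x)[:-1]
dy = np.diff(y)
alpha_hat = float(spread @ dy / (spread @ spread))
```

A significantly negative `alpha_hat` is the econometric signature of a feedback loop: deviations from the long-run relation are corrected in subsequent periods, which is what allows a VEC model to capture "self-reinforcing" dynamics between credit quality and the macroeconomy.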
Modeling Musical Context With Word2Vec
NASA Astrophysics Data System (ADS)
Herremans, Dorien; Chuan, Ching-Hua
2017-05-01
We present a semantic vector space model for capturing complex polyphonic musical context. A word2vec model based on a skip-gram representation with negative sampling was used to model slices of music from a dataset of Beethoven's piano sonatas. A visualization of the reduced vector space using t-distributed stochastic neighbor embedding shows that the resulting embedded vector space captures tonal relationships, even without any explicit information about the musical contents of the slices. Second, an excerpt of Beethoven's Moonlight Sonata was altered by replacing slices based on context similarity. The resulting music shows that a slice selected for its similar word2vec context also has a relatively short tonal distance from the original slice.
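Skip-gram with negative sampling (SGNS) can be sketched from scratch in a few dozen lines. The corpus below is a stand-in (repeating "slices" drawn from two unrelated triads), not the Beethoven dataset, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_sgns(tokens, dim=8, window=2, neg=5, epochs=30, lr=0.05, seed=0):
    """Minimal skip-gram with negative sampling, for illustration only."""
    rng = np.random.default_rng(seed)
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    ids = [idx[w] for w in tokens]
    V = len(vocab)
    W_in = rng.normal(scale=0.1, size=(V, dim))   # target ("input") vectors
    W_out = rng.normal(scale=0.1, size=(V, dim))  # context ("output") vectors
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        for i, c in enumerate(ids):
            for j in range(max(0, i - window), min(len(ids), i + window + 1)):
                if j == i:
                    continue
                o = ids[j]
                # positive pair: push the score W_in[c] . W_out[o] up
                g = sigmoid(W_in[c] @ W_out[o]) - 1.0
                grad_c = g * W_out[o]
                W_out[o] -= lr * g * W_in[c]
                # negative samples: push scores of random pairs down
                for n in rng.integers(0, V, size=neg):
                    if n == o:
                        continue
                    gn = sigmoid(W_in[c] @ W_out[n])
                    grad_c += gn * W_out[n]
                    W_out[n] -= lr * gn * W_in[c]
                W_in[c] -= lr * grad_c
    return vocab, idx, W_in, W_out

# Two "keys" whose slices never mix: co-occurring tokens become associated.
tokens = ["do", "mi", "sol"] * 30 + ["fa", "la", "ti"] * 30
vocab, idx, W_in, W_out = train_sgns(tokens)
```

After training, the target-context score of a co-occurring pair such as ("do", "mi") exceeds that of a never-co-occurring pair such as ("do", "la"), which is the property the paper exploits to measure context similarity between musical slices.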
Chinese Text Summarization Algorithm Based on Word2vec
NASA Astrophysics Data System (ADS)
Chengzhang, Xu; Dan, Liu
2018-02-01
In order to extract sentences that cover the topic of a Chinese article, a Chinese text summarization algorithm based on Word2vec is presented in this paper. Words in an article are represented as vectors trained by Word2vec; the weight of each word, the sentence vectors, and the weight of each sentence are then calculated by combining word-sentence relationships with a graph-based ranking model. Finally, the summary is generated on the basis of the final sentence vectors and sentence weights. Experimental results on real datasets show that the proposed algorithm achieves better summarization quality than TF-IDF and TextRank.
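The sentence-vector step can be sketched as follows. The paper combines word weights with a graph-based ranking model; as a simplified stand-in (an assumption, not the paper's algorithm), sentences here are scored by cosine similarity of their averaged word vectors to the whole-document vector.

```python
import numpy as np

def sentence_vector(words, word_vecs):
    """Average the word vectors of a sentence (unweighted, for simplicity)."""
    vecs = [word_vecs[w] for w in words if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else None

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_sentences(sentences, word_vecs, doc_words):
    """Score each sentence by similarity to the whole-document vector."""
    doc_vec = sentence_vector(doc_words, word_vecs)
    scored = []
    for s in sentences:
        sv = sentence_vector(s, word_vecs)
        if sv is not None:
            scored.append((cosine(sv, doc_vec), s))
    return sorted(scored, key=lambda t: t[0], reverse=True)

# Toy 2-D "embeddings" (hypothetical values, standing in for trained vectors)
word_vecs = {"cat": np.array([1.0, 0.0]),
             "dog": np.array([0.9, 0.1]),
             "car": np.array([0.0, 1.0])}
doc_words = ["cat", "dog", "cat"]
sentences = [["car"], ["cat", "dog"]]
ranked = rank_sentences(sentences, word_vecs, doc_words)
```

The top-ranked sentence is the one whose averaged vector points in the same direction as the document vector; a summary is then assembled from the top-k sentences.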
NASA Astrophysics Data System (ADS)
Zhang, Chi; Zhou, Kaile; Yang, Shanlin; Shao, Zhen
2017-05-01
Since the reform and opening up in 1978, China has experienced miraculous development. To investigate the transformation and upgrading of China's economy, this study focuses on the relationship between economic growth and the electricity consumption of the secondary and tertiary industries in China. The paper captures the dynamic interdependencies among the related variables using a theoretical framework based on a Vector Autoregressive (VAR)-Vector Error Correction (VEC) model. Using macroeconomic and electricity consumption data, the results show that, for the secondary industry, there is only a unidirectional Granger causality from electricity consumption to Gross Domestic Product (GDP) from 1980 to 2000, whereas for the tertiary industry there is only a unidirectional Granger causality from GDP to electricity consumption from 2001 to 2014. These conclusions are verified by impulse response functions and variance decomposition. The study is of great significance in revealing the relationship between industrial electricity consumption and the pattern of economic development. It further suggests that, since China joined the World Trade Organization (WTO) in 2001, a trend of economic transformation and upgrading has gradually emerged.
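Granger causality testing compares a restricted autoregression of the target series on its own lags against an unrestricted one that adds lags of the candidate cause. A minimal bivariate sketch follows; the study's actual tests are run inside a VAR-VEC framework with cointegration handled explicitly.

```python
import numpy as np

def granger_f_stat(y, x, lags=2):
    """F-statistic for 'x Granger-causes y': restricted vs unrestricted OLS."""
    T = len(y)
    Y = y[lags:]
    own = [y[lags - k:T - k] for k in range(1, lags + 1)]
    other = [x[lags - k:T - k] for k in range(1, lags + 1)]
    X_r = np.column_stack([np.ones(T - lags)] + own)       # y's own lags only
    X_u = np.column_stack([X_r] + other)                   # plus lags of x
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return float(r @ r)
    rss_r, rss_u = rss(X_r), rss(X_u)
    n, k = len(Y), X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / (n - k))

# Toy system in which x drives y with a one-period lag; z is irrelevant noise.
rng = np.random.default_rng(1)
T = 400
x, y, z = np.zeros(T), np.zeros(T), rng.normal(size=T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
f_xy = granger_f_stat(y, x)   # large: lags of x help predict y
f_zy = granger_f_stat(y, z)   # small: z carries no predictive content
```

If the F-statistic exceeds the F(lags, n − k) critical value, the null of no Granger causality is rejected; "unidirectional" causality means the test rejects in one direction but not the other.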
NASA Astrophysics Data System (ADS)
Alrasyid, Harun; Safi, Fahrudin; Iranata, Data; Chen-Ou, Yu
2017-11-01
This research presents predictions of the shear behavior of high-strength reinforced concrete columns using the finite element method. Experimental data from nine half-scale high-strength reinforced concrete columns were selected. These columns used a specified concrete compressive strength of 70 MPa and specified yield strengths of the longitudinal and transverse reinforcement of 685 and 785 MPa, respectively. The VecTor2 finite element software was used to simulate the shear-critical behavior of these columns under combined axial compression and monotonic lateral loading. VecTor2 is shown to provide accurate predictions of the load-deflection response up to the peak applied load and similar behavior at post-peak load. The shear strength predictions provided by VecTor2 are slightly conservative compared to the test results.
A simple method of equine limb force vector analysis and its potential applications.
Hobbs, Sarah Jane; Robinson, Mark A; Clayton, Hilary M
2018-01-01
Ground reaction forces (GRF) measured during equine gait analysis are typically evaluated by analyzing discrete values obtained from continuous force-time data for the vertical, longitudinal and transverse GRF components. This paper describes a simple, temporo-spatial method of displaying and analyzing sagittal plane GRF vectors. In addition, the application of statistical parametric mapping (SPM) is introduced to analyse differences between contra-lateral fore and hindlimb force-time curves throughout the stance phase. The overall aim of the study was to demonstrate alternative methods of evaluating functional (a)symmetry within horses. GRF and kinematic data were collected from 10 horses trotting over a series of four force plates (120 Hz). The kinematic data were used to determine clean hoof contacts. The stance phase of each hoof was determined using a 50 N threshold. Vertical and longitudinal GRF for each stance phase were plotted both as force-time curves and as force vector diagrams in which vectors originating at the centre of pressure on the force plate were drawn at intervals of 8.3 ms for the duration of stance. Visual evaluation was facilitated by overlay of the vector diagrams for different limbs. Summary vectors representing the magnitude (VecMag) and direction (VecAng) of the mean force over the entire stance phase were superimposed on the force vector diagram. Typical measurements extracted from the force-time curves (peak forces, impulses) were compared with VecMag and VecAng using partial correlation (controlling for speed). Paired-samples t-tests (left v. right diagonal pair comparison and high v. low vertical force diagonal pair comparison) were performed on discrete and vector variables using traditional methods and Hotelling's T2 tests on normalized stance phase data using SPM.
Evidence from traditional statistical tests suggested that VecMag is more influenced by the vertical force and impulse, whereas VecAng is more influenced by the longitudinal force and impulse. When used to evaluate mean data from the group of ten sound horses, SPM did not identify differences between the left and right contralateral limb pairs or between limb pairs classified according to directional asymmetry. When evaluating a single horse, three periods were identified during which differences in the forces between the left and right forelimbs exceeded the critical threshold (p < .01). Traditional statistical analysis of 2D GRF peak values, summary vector variables and visual evaluation of force vector diagrams gave harmonious results and both methods identified the same inter-limb asymmetries. As alpha was more tightly controlled using SPM, significance was only found in the individual horse although T2 plots followed the same trends as discrete analysis for the group. The techniques of force vector analysis and SPM hold promise for investigations of sidedness and asymmetry in horses.
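VecMag and VecAng can be computed directly from the stance-phase force traces. Below is a minimal sketch with a synthetic stance phase; the force magnitudes, sampling, and the angle convention (degrees from vertical, positive toward positive longitudinal force) are all illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def summary_vector(f_vert, f_long):
    """Magnitude (VecMag) and direction (VecAng) of the mean GRF over stance.

    VecAng is measured here in degrees from the vertical, positive toward
    positive (propulsive) longitudinal force -- an assumed convention.
    """
    mean_v = float(np.mean(f_vert))
    mean_l = float(np.mean(f_long))
    vec_mag = float(np.hypot(mean_v, mean_l))
    vec_ang = float(np.degrees(np.arctan2(mean_l, mean_v)))
    return vec_mag, vec_ang

# Toy stance phase: half-sine vertical force, braking-then-propulsion
# longitudinal force (negative = braking, an assumed sign convention).
t = np.linspace(0.0, 1.0, 40)                 # normalized stance time
f_vert = 5000.0 * np.sin(np.pi * t)           # N
f_long = -600.0 * np.sin(2.0 * np.pi * t)     # N
mag, ang = summary_vector(f_vert, f_long)
```

For a perfectly symmetric braking/propulsion profile the longitudinal components cancel, so VecAng is zero; a net braking or propulsive bias tilts the summary vector away from vertical, which is what makes VecAng useful for detecting inter-limb asymmetry.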
Dynamics relationship between stock prices and economic variables in Malaysia
NASA Astrophysics Data System (ADS)
Chun, Ooi Po; Arsad, Zainudin; Huen, Tan Bee
2014-07-01
Knowledge of linkages between stock prices and macroeconomic variables is essential in the formulation of effective monetary policy. This study investigates the relationship between stock prices in Malaysia (KLCI) and four selected macroeconomic variables, namely the industrial production index (IPI), quasi money supply (MS2), real exchange rate (REXR) and 3-month Treasury bill (TRB). The variables used in this study are monthly data from 1996 to 2012. A vector error correction (VEC) model and the Kalman filter (KF) technique are utilized to assess the impact of the macroeconomic variables on stock prices. The results from the cointegration test reveal that the stock prices and macroeconomic variables are cointegrated. Unlike the constant estimates from the static VEC model, the KF estimates noticeably exhibit time-varying attributes over the sample period. The varying estimates of the impact coefficients should better reflect the changing economic environment. Surprisingly, IPI is negatively related to the KLCI, with the impact estimates slowly increasing and becoming positive in recent years. TRB is found to be generally negatively related to the KLCI, with the impact fluctuating around the constant estimate of the VEC model. The KF estimates for REXR and MS2 show a mixture of positive and negative impacts on the KLCI. The coefficients of the error correction term (ECT) are negative for the majority of the sample period, signifying that stock prices respond to stabilize short-term deviations in the economic system. The findings from the KF model indicate that implications based on the usual static model may lead authorities to implement less appropriate policies.
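The Kalman filter side can be illustrated with a random-walk coefficient model, a common (here assumed) specification for time-varying regression impacts: y_t = x_t·β_t + v_t with β_t = β_{t−1} + w_t. The noise variances below are tuned to the toy data, not taken from the paper.

```python
import numpy as np

def kalman_tv_beta(y, x, q=0.05, r=0.09, beta0=0.0, p0=10.0):
    """Scalar Kalman filter for y_t = x_t*beta_t + v_t, beta_t a random walk.

    q: coefficient innovation variance; r: observation noise variance.
    """
    beta, p = beta0, p0
    path = []
    for yt, xt in zip(y, x):
        p = p + q                      # predict: beta unchanged, uncertainty grows
        s = xt * p * xt + r            # innovation variance
        k = p * xt / s                 # Kalman gain
        beta = beta + k * (yt - xt * beta)
        p = (1.0 - k * xt) * p
        path.append(beta)
    return np.array(path)

# Toy example: the "impact" flips sign halfway through the sample, which a
# static regression would average away but the filter tracks through time.
rng = np.random.default_rng(2)
T = 400
x = rng.normal(size=T)
beta_true = np.where(np.arange(T) < T // 2, 1.0, -1.0)
y = x * beta_true + 0.3 * rng.normal(size=T)
path = kalman_tv_beta(y, x)
```

The filtered path hovers near +1 before the break and near −1 after it, which is the kind of sign-switching impact (as reported for IPI) that a constant-coefficient VEC estimate cannot reveal.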
Large-scale Clinical-grade Retroviral Vector Production in a Fixed-Bed Bioreactor
Wang, Xiuyan; Olszewska, Malgorzata; Qu, Jinrong; Wasielewska, Teresa; Bartido, Shirley; Hermetet, Gregory; Sadelain, Michel
2015-01-01
The successful genetic engineering of patient T cells with γ-retroviral vectors expressing chimeric antigen receptors or T-cell receptors for phase II clinical trials and beyond requires the large-scale manufacture of high-titer vector stocks. The production of retroviral vectors from stable packaging cell lines using roller bottles or 10- to 40-layer cell factories is limited by a narrow harvest window, labor intensity, open-system operations, and the requirement for significant incubator space. To circumvent these shortcomings, we optimized the production of vector stocks in a disposable fixed-bed bioreactor using good manufacturing practice–grade packaging cell lines. High-titer vector stocks were harvested over 10 days, representing a much broader harvest window than the 3-day harvest afforded by cell factories. For PG13 and 293Vec packaging cells, the average vector titer and the vector stock yield in the bioreactor were higher by 3.2- to 7.3-fold and 5.6- to 13.1-fold, respectively, than those obtained in cell factories. Vector production was 10.4 and 18.6 times more efficient than in cell factories for PG13 and 293Vec cells, respectively. Furthermore, the vectors produced in the fixed-bed bioreactors passed the release test assays for clinical applications. Therefore, a single vector lot derived from 293Vec is suitable to produce cell doses for up to 500 patients in the context of large clinical trials using chimeric antigen receptors or T-cell receptors. These findings demonstrate for the first time that a robust fixed-bed bioreactor process can be used to produce γ-retroviral vector stocks scalable up to the commercialization phase. PMID:25751502
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2017-06-01
Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(\vec{x}) = -G ∫₀^∞ (∫₋₁^1 (∫₀^{2π} ρ(\vec{x}+\vec{q}) dψ) dγ) q dq, where \vec{q} = q(√(1-γ²) cos ψ, √(1-γ²) sin ψ, γ) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs.
Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface mass density distribution. Fortran 90 subroutines implementing these methods, together with test programs and sample outputs, are available from the author's web site: https://www.researchgate.net/profile/Toshio_Fukushima/
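To see the regularization at work, the quoted integral can be evaluated with a crude midpoint rule for a homogeneous unit sphere and checked against the analytic central potential Φ(0) = −2πGρR². This toy sketch uses plain midpoint sums, not the split double exponential quadrature of the papers, and the test body and units are assumptions.

```python
import numpy as np

G, RHO0, R = 1.0, 1.0, 1.0   # arbitrary units; uniform test sphere (assumed)

def rho(p):
    """Mass density at point p: homogeneous sphere of radius R."""
    return RHO0 if p @ p <= R * R else 0.0

def potential(x, nq=60, ng=24, npsi=24):
    """Midpoint-rule evaluation of
    Phi(x) = -G * int q dq int dgamma int dpsi rho(x + q_vec), with
    q_vec = q*(sqrt(1-g^2) cos(psi), sqrt(1-g^2) sin(psi), g).
    The single factor q (not 1/|x-x'|) shows the Newton kernel is regularized.
    """
    qmax = R + np.linalg.norm(x)        # density vanishes beyond this radius
    dq, dg, dpsi = qmax / nq, 2.0 / ng, 2.0 * np.pi / npsi
    total = 0.0
    for q in (np.arange(nq) + 0.5) * dq:
        for g in -1.0 + (np.arange(ng) + 0.5) * dg:
            s = np.sqrt(1.0 - g * g)
            for psi in (np.arange(npsi) + 0.5) * dpsi:
                offset = q * np.array([s * np.cos(psi), s * np.sin(psi), g])
                total += rho(x + offset) * q
    return -G * total * dq * dg * dpsi

phi0 = potential(np.zeros(3))                  # analytic: -2*pi*G*RHO0*R**2
phi_half = potential(np.array([0.5, 0.0, 0.0]))
```

Even this naive grid reproduces the central value essentially exactly (the integrand there is linear in q, for which the midpoint rule is exact), while the off-center value stays between the central value and zero, as it must for a potential well.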
Nonuniform continuum model for solvatochromism based on frozen-density embedding theory.
Shedge, Sapana Vitthal; Wesolowski, Tomasz A
2014-10-20
Frozen-density embedding theory (FDET) provides the formal framework for multilevel numerical simulations in which a selected subsystem is described at the quantum mechanical level, whereas its environment is described by means of its electron density (the frozen density, $\rho_{\rm B}(\vec r)$). The frozen density $\rho_{\rm B}(\vec r)$ is usually obtained from some lower-level quantum mechanical method applied to the environment, but FDET is not limited to such choices for $\rho_{\rm B}(\vec r)$. The present work concerns the application of FDET in which $\rho_{\rm B}(\vec r)$ is the statistically averaged electron density of the solvent, $\langle\rho_{\rm B}(\vec r)\rangle$. The specific solute-solvent interactions are represented in a statistical manner in $\langle\rho_{\rm B}(\vec r)\rangle$. A fully self-consistent treatment of a solvated chromophore thus involves a single geometry of the chromophore in a given state and the corresponding $\langle\rho_{\rm B}(\vec r)\rangle$. We show that the coupling between the two descriptors may be made in an approximate manner that is applicable to both absorption and emission. The proposed protocol leads to accurate descriptions (errors in the range of 0.05 eV) of the solvatochromic shifts in both absorption and emission. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santini, Danilo J.; Poyer, David A.
Vector error correction (VEC) was used to test the importance of a theoretical causal chain from transportation fuel cost to vehicle sales to macroeconomic activity. Real transportation fuel cost was broken into two cost components: real gasoline price (rpgas) and real personal consumption of gasoline and other goods (gas). Real personal consumption expenditure on vehicles (RMVE) represented vehicle sales. Real gross domestic product (rGDP) was used as the measure of macroeconomic activity. The VEC estimates used quarterly data from the third quarter of 1952 to the first quarter of 2014. Controlling for the financial causes of the recent Great Recession, real homeowners’ equity (equity) and real credit market instruments liability (real consumer debt, rcmdebt) were included. Results supported the primary hypothesis of the research, but also introduced evidence that another financial path through equity is important, and that use of the existing fleet of vehicles (not just sales of vehicles) is an important transport-related contributor to macroeconomic activity. Consumer debt reduction is estimated to be a powerful short-run force reducing vehicle sales. Findings are interpreted in the context of the recent Greene, Lee, and Hopson (2012) (hereafter GLH) estimation of the magnitude of three distinct macroeconomic damage effects that result from dependence on imported oil, the price of which is manipulated by the Organization of Petroleum Exporting Countries (OPEC). The three negative macroeconomic impacts are due to (1) dislocation (positive oil price shock), (2) high oil price levels, and (3) a high value of the quantity of oil imports times an oil price delta (cartel price less competitive price). The third of these is the wealth effect. The VEC model addresses the first two, but the software output from the model (impulse response plots) does not isolate them.
Nearly all prior statistical tests in the literature have used vector autoregression (VAR) and autoregressive distributed lag models that considered effects of oil price changes, but did not account for effects of oil price levels. Gasoline prices were rarely examined. The tests conducted in this report evaluate gasoline instead of oil.
Video2vec Embeddings Recognize Events When Examples Are Scarce.
Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G M
2017-10-01
This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlations between the words are utilized to learn a more effective representation by optimizing a joint objective balancing descriptiveness and predictability. We show how learning the Video2vec embedding using a multimodal predictability loss, including appearance, motion and audio features, results in a better predictable representation. We also propose an event-specific variant of Video2vec to learn a more accurate representation for the words that are indicative of the event, by introducing a term-sensitive descriptiveness loss. Our experiments on three challenging collections of web videos from the NIST TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets demonstrate: (i) the advantages of Video2vec over representations using attributes or alternative embeddings; (ii) the benefit of fusing video modalities by an embedding over common strategies; and (iii) the complementarity of term-sensitive descriptiveness and multimodal predictability for event recognition. By its ability to improve the predictability of present-day audio-visual video features while maximizing their semantic descriptiveness, Video2vec leads to state-of-the-art accuracy for both few- and zero-example recognition of events in video.
Engineering HSV-1 vectors for gene therapy.
Goins, William F; Huang, Shaohua; Cohen, Justus B; Glorioso, Joseph C
2014-01-01
Virus vectors have been employed as gene transfer vehicles for various preclinical and clinical gene therapy applications, and with the approval of Glybera (alipogene tiparvovec) as the first gene therapy product as a standard medical treatment (Yla-Herttuala, Mol Ther 20: 1831-1832, 2013), gene therapy has reached the status of being a part of standard patient care. Replication-competent herpes simplex virus (HSV) vectors that replicate specifically in actively dividing tumor cells have been used in Phase I-III human trials in patients with glioblastoma multiforme, a fatal form of brain cancer, and in malignant melanoma. In fact, T-VEC (talimogene laherparepvec, formerly known as OncoVex GM-CSF) displayed efficacy in a recent Phase III trial when compared to standard GM-CSF treatment alone (Andtbacka et al. J Clin Oncol 31: sLBA9008, 2013) and may soon become the second FDA-approved gene therapy product used in standard patient care. In addition to the replication-competent oncolytic HSV vectors like T-VEC, replication-defective HSV vectors have been employed in Phase I-II human trials and have been explored as delivery vehicles for disorders such as pain, neuropathy, and other neurodegenerative conditions. Research during the last decade on the development of HSV vectors has resulted in the engineering of recombinant vectors that are totally replication defective, nontoxic, and capable of long-term transgene expression in neurons. This chapter describes methods for the construction of recombinant genomic HSV vectors based on the HSV-1 replication-defective vector backbones, steps in their purification, and their small-scale production for use in cell culture experiments as well as preclinical animal studies.
In vivo Assembly in Escherichia coli of Transformation Vectors for Plastid Genome Engineering
Wu, Yuyong; You, Lili; Li, Shengchun; Ma, Meiqi; Wu, Mengting; Ma, Lixin; Bock, Ralph; Chang, Ling; Zhang, Jiang
2017-01-01
Plastid transformation for the expression of recombinant proteins and entire metabolic pathways has become a promising tool for plant biotechnology. However, large-scale application of this technology has been hindered by some technical bottlenecks, including lack of routine transformation protocols for agronomically important crop plants like rice or maize. Currently, there are no standard or commercial plastid transformation vectors available for the scientific community. Construction of a plastid transformation vector usually requires tedious and time-consuming cloning steps. In this study, we describe the adoption of an in vivo Escherichia coli cloning (iVEC) technology to quickly assemble a plastid transformation vector. The method enables simple and seamless build-up of a complete plastid transformation vector from five DNA fragments in a single step. The vector assembled for demonstration purposes contains an enhanced green fluorescent protein (GFP) expression cassette, in which the gfp transgene is driven by the tobacco plastid ribosomal RNA operon promoter fused to the 5′ untranslated region (UTR) from gene10 of bacteriophage T7 and the transcript-stabilizing 3′UTR from the E. coli ribosomal RNA operon rrnB. Successful transformation of the tobacco plastid genome was verified by Southern blot analysis and seed assays. High-level expression of the GFP reporter in the transplastomic plants was visualized by confocal microscopy and Coomassie staining, and GFP accumulation was ~9% of the total soluble protein. The iVEC method represents a simple and efficient approach for construction of plastid transformation vector, and offers great potential for the assembly of increasingly complex vectors for synthetic biology applications in plastids. PMID:28871270
NASA Astrophysics Data System (ADS)
de Senna, Viviane; Souza, Adriano Mendonça
2016-11-01
Since the 1988 Federal Constitution, social assistance has been a duty of the State and a right of every citizen, guaranteeing the population a dignified life. To ensure these rights, the federal government has created programs to supply the main needs of people in extreme poverty. Among the programs that provide social assistance to the population, the best known are the Bolsa Família Program (PBF) and the Continuous Cash Benefit (BPC). This research's main purpose is to analyze the relationship between the main macroeconomic variables and federal government spending on social welfare policy in the period from January 2004 to August 2014. The methodologies used are the vector autoregression (VAR) model and the vector error-correction (VEC) model. The conclusion was that there is a meaningful relationship between macroeconomic variables and social assistance programs, indicating that an abrupt government change to the existing programs would produce fluctuations in the main macroeconomic variables, interfering with the stability of the Brazilian domestic economy for up to twelve months.
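The VAR/VEC pair of models named above can be illustrated with a toy error-correction regression. The simulated series and all variable names below are illustrative, not the authors' data; a full analysis would use a dedicated package such as statsmodels.

```python
import numpy as np

# Toy illustration of the error-correction form behind a VEC model:
#   dy_t = Pi @ y_{t-1} + Gamma @ dy_{t-1} + e_t
# The two simulated series share one random-walk trend (hence are
# cointegrated) and stand in for a macro variable and welfare spending.
rng = np.random.default_rng(0)
T = 300
trend = np.cumsum(rng.normal(size=T))
y = np.column_stack([trend + rng.normal(scale=0.5, size=T),
                     0.8 * trend + rng.normal(scale=0.5, size=T)])

dy = np.diff(y, axis=0)
X = np.column_stack([y[1:-1], dy[:-1]])  # regressors: y_{t-1}, dy_{t-1}
Y = dy[1:]                               # target: dy_t
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
Pi, Gamma = coef[:2].T, coef[2:].T       # long-run vs short-run parts
print(Pi.round(2))                       # Pi is reduced-rank in theory
```

Under cointegration the long-run matrix Pi is of reduced rank (Pi = αβ'), which is what formal VEC estimators such as Johansen's procedure exploit.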
USDA-ARS?s Scientific Manuscript database
Dual luciferase reporter systems are valuable tools for functional genomic studies, but have not previously been developed for use in tick cell culture. We evaluated expression of available luciferase constructs in tick cell cultures derived from Rhipicephalus (Boophilus) microplus, an important vec...
Laboratory Validation of the Sand Fly Fever Virus Antigen Assay
2015-12-01
several commercially available assays from VecTOR Test Systems Inc. for malaria, West Nile virus, Rift Valley fever virus, dengue, chikungunya, and... Sabin AB. 1955. Recent advances in our knowledge of dengue and sandfly fever. Am J Trop Med Hyg 4:198–207. Sather GE. 1970. Catalogue of arthropod
Accelerating navigation in the VecGeom geometry modeller
NASA Astrophysics Data System (ADS)
Wenzel, Sandro; Zhang, Yang; for the VecGeom Developers
2017-10-01
The VecGeom geometry library is a relatively recent effort aiming to provide a modern and high performance geometry service for particle detector simulation in hierarchical detector geometries common to HEP experiments. One of its principal targets is the efficient use of vector SIMD hardware instructions to accelerate geometry calculations for single track as well as multi-track queries. Previously, excellent performance improvements compared to Geant4/ROOT could be reported for elementary geometry algorithms at the level of single shape queries. In this contribution, we will focus on the higher level navigation algorithms in VecGeom, which are the most important components as seen from the simulation engines. We will first report on our R&D effort and developments to implement SIMD enhanced data structures to speed up the well-known “voxelised” navigation algorithms, ubiquitously used for particle tracing in complex detector modules consisting of many daughter parts. Second, we will discuss complementary new approaches to improve navigation algorithms in HEP. These ideas are based on a systematic exploitation of static properties of the detector layout as well as automatic code generation and specialisation of the C++ navigator classes. Such specialisations reduce the overhead of generic- or virtual function based algorithms and enhance the effectiveness of the SIMD vector units. These novel approaches go well beyond the existing solutions available in Geant4 or TGeo/ROOT, achieve a significantly superior performance, and might be of interest for a wide range of simulation backends (GeantV, Geant4). We exemplify this with concrete benchmarks for the CMS and ALICE detectors.
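The multi-track idea can be sketched outside of C++/SIMD. Below, a NumPy stand-in (not VecGeom's actual API; the function name and box shape are illustrative) answers a basic navigation query, the distance to the boundary of a box, for a whole batch of tracks in one vectorized call:

```python
import numpy as np

# NumPy stand-in for a batched navigation query: distance from inside
# points to the surface of an axis-aligned box along given directions,
# computed for all tracks at once via the "slab" method.
def distance_to_out_batch(pos, dirs, half):
    """pos, dirs: (N, 3) arrays; half: (3,) box half-lengths."""
    with np.errstate(divide="ignore"):
        inv = 1.0 / dirs                  # axis-parallel dirs -> +/- inf
    # Exit distance along each axis is the larger slab root; the track
    # leaves the box at the smallest per-axis exit distance.
    t_exit = np.maximum((-half - pos) * inv, (half - pos) * inv)
    return np.min(t_exit, axis=1)

half = np.array([1.0, 1.0, 1.0])
pos = np.zeros((4, 3))                    # four tracks at the origin
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, -1.0],
                 [0.6, 0.8, 0.0]])        # unit direction vectors
print(distance_to_out_batch(pos, dirs, half))  # distances 1, 1, 1, 1.25
```

The point of the batched formulation is that one arithmetic instruction stream serves many tracks, which is exactly what SIMD vector units (and NumPy's vectorization) reward.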
Measurement of the beam asymmetry Σ for π 0 and η photoproduction on the proton at E γ = 9 GeV
Al Ghoul, H.; Anassontzis, E. G.; Austregesilo, A.; ...
2017-04-24
In this paper, we report measurements of the photon beam asymmetry $\Sigma$ for the reactions $\vec{\gamma}p \to p\pi^0$ and $\vec{\gamma}p \to p\eta$ from the GlueX experiment, using a 9 GeV linearly polarized, tagged photon beam incident on a liquid hydrogen target in Jefferson Lab's Hall D. The asymmetries, measured as a function of the proton momentum transfer, possess greater precision than previous $\pi^0$ measurements and are the first $\eta$ measurements in this energy regime. The results are compared with theoretical predictions based on $t$-channel, quasi-particle exchange and constrain the axial-vector component of the neutral meson production mechanism in these models.
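For orientation, a linear beam asymmetry enters the azimuthal yield of pseudoscalar photoproduction as $N(\phi) \propto 1 + P\Sigma\cos 2\phi$, with $P$ the degree of linear polarization. A toy extraction with invented numbers (this is not the GlueX analysis chain) looks like:

```python
import numpy as np

# Toy extraction of a beam asymmetry Sigma from binned azimuthal yields
# N(phi) ~ 1 + P*Sigma*cos(2*phi). All numbers are made up.
rng = np.random.default_rng(1)
P, Sigma_true = 0.4, 0.6                      # polarization, true Sigma
phi = np.linspace(0, 2 * np.pi, 36, endpoint=False)
yields = rng.poisson(1000 * (1 + P * Sigma_true * np.cos(2 * phi)))

# Least-squares fit of a + b*cos(2*phi); then Sigma = b / (a * P)
A = np.column_stack([np.ones_like(phi), np.cos(2 * phi)])
(a, b), *_ = np.linalg.lstsq(A, yields, rcond=None)
Sigma_fit = b / (a * P)
print(Sigma_fit)                              # close to 0.6
```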
On the image of AGS $^3$He$^{2+}$ $\vec{n}_0$ in the Blue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meot, F.; Huang, H.; Tsoupas, N.
This note addresses the transport of the Helion spin vector $\vec{n}_0$, from its periodic orientation in the AGS to the RHIC Blue ring injection kicker, via the AGS extraction system and the AtR line. The goal is to investigate the optimal injection energy into RHIC, in the matter of Helion spin matching, under the hypothesis of equal warm and cold snake strengths in the AGS. The study uses recently computed OPERA 3-D field maps of the AGS cold snake, including the possibility of independent solenoid and helix settings (as discussed in Tech. Note C-A/AP/485), together with the machinery of the AGS and AtR models developed in the stepwise ray-tracing code Zgoubi. Computing tools and methods employed are discussed as well, in order to facilitate possible further checks or investigations. They are, however, similar to those used in an earlier study regarding the image in RHIC Blue and Yellow of the AGS $\vec{n}_0$ via the AtR in the case of proton beam (Tech. Note C-A/AP/502), which can be referred to for additional details.
Efficacy versus health risks: An in vitro evaluation of power-driven scalers.
Graetz, Christian; Plaumann, Anna; Bielfeldt, Jule; Tillner, Anica; Sälzer, Sonja; Dörfer, Christof Edmund
2015-01-01
Power-driven instrumentation of root surfaces during supportive periodontal therapy is an alternative to hand instrumentation. The purpose of this pilot in vitro study was to investigate the efficacy of sub- and supragingival plaque removal with a sonic device (AIR: Synea, W and H, Bürmoos, Austria) and two ultrasonic devices (TIG: Tigon+, W and H, Bürmoos, Austria; VEC: Vector, Dürr, Bietigheim-Bissingen, Germany), as well as the health risk for dental professionals during treatment. The power-driven devices were utilized to remove plaque from model teeth in dummy heads. The percentage of residual artificial plaque after 2 min of supra- or subgingival instrumentation was calculated by means of image-processing techniques at four sites (n = 576) of each tooth. The Health-Risk-Index (HRI: spatter/residual plaque quotient) for the different power-driven devices was assessed during treatment. The smallest amounts of residual plaque were found for the sonic device AIR (8.89% ± 10.92%) and the ultrasonic scaler TIG (8.72% ± 12.02%) (P = 0.707). Significantly more plaque remained after the use of the ultrasonic scaler VEC (18.76% ± 18.07%) (P < 0.001). Irrespective of the scaler, efficacy was similar sub- (10.7% ± 11.6%) and supragingivally (13.5% ± 17.2%) (P = 0.901). AIR/TIG demonstrated equal residual amounts of plaque sub- (P = 0.831) as well as supragingivally (P = 0.510). However, the AIR/VEC and TIG/VEC comparisons were significantly in favor of AIR and TIG (P < 0.001). In contrast, the lowest HRI was found after using VEC (0.0043), and it differed considerably for AIR (0.2812) and TIG (0.0287). Sonic devices are as effective as ultrasonic devices in the removal of biofilm but bear a higher risk to the dental professional's health concerning the formation of spatter.
Uniform Interfaces for Distributed Systems.
1980-05-01
in data structures on stable storage (such as disk). The Virtual Terminals associated with a particular user (i.e., rM display terminal) are all...vec MESSAGESIZE let error = nil [S ReceiveAny (msg) // The copy is made so that lower-level routines may // munge the message template without losing
Prediction of enhancer-promoter interactions via natural language processing.
Zeng, Wanwen; Wu, Mengmeng; Jiang, Rui
2018-05-09
Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important for deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones, since the power of traditional experimental methods is limited by low resolution or low throughput. We propose a novel computational framework, EP2vec, to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method from natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, which outperforms existing methods. We prove the robustness of sequence embedding features by carrying out sensitivity analysis. Besides, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features by adopting an attention mechanism. Finally, we show that even superior performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features. EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPI identification.
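The core trick of mapping a variable-length DNA sequence to a fixed-length vector can be sketched with a much simpler stand-in than the paper's paragraph-vector embeddings: normalized k-mer counts give any sequence the same dimensionality (4^k). The function below is illustrative only, not EP2vec's method.

```python
from itertools import product

# Stand-in for a sequence embedding: every DNA sequence, whatever its
# length, maps to a fixed 4^k-dimensional vector of k-mer frequencies.
K = 3
KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def embed(seq, k=K):
    vec = [0.0] * len(KMERS)
    for i in range(len(seq) - k + 1):
        idx = KMERS.get(seq[i:i + k])   # skip k-mers with non-ACGT bases
        if idx is not None:
            vec[idx] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]     # normalized, fixed length

print(len(embed("ACGTACGTGG")))  # 64
```

A downstream classifier (gradient boosting in the paper) then consumes such fixed-length vectors for enhancer and promoter pairs; the unsupervised embedding step is what lets sequences of arbitrary length share one feature space.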
Hassan, Ghada S; Jacques, Danielle; D'Orléans-Juste, Pedro; Magder, Sheldon; Bkaily, Ghassan
2018-05-14
The interaction between vascular endothelial cells (VECs) and vascular smooth muscle cells (VSMCs) plays an important role in the modulation of vascular tone. There is, however, no information on whether direct physical communication regulates the intracellular calcium levels of human VECs (hVECs) and (or) human VSMCs (hVSMCs). Thus, the objective of the study is to verify whether co-culture of hVECs and hVSMCs modulates cytosolic ([Ca2+]c) and nuclear ([Ca2+]n) calcium levels via physical contact and (or) factors released by both cell types. Quantitative 3D confocal microscopy for [Ca2+]c and [Ca2+]n measurement was performed in cultured hVECs or hVSMCs or in co-cultures of hVECs-hVSMCs. Our results show that: (1) physical contact between hVECs-hVECs or hVSMCs-hVSMCs does not affect [Ca2+]c and [Ca2+]n in these 2 cell types; (2) physical contact between hVECs and hVSMCs induces a significant increase only of [Ca2+]n of hVECs, without affecting the level of [Ca2+]c and [Ca2+]n of hVSMCs; and (3) preconditioned culture medium of hVECs or hVSMCs does not affect [Ca2+]c and [Ca2+]n of both types of cells. We concluded that physical contact between hVECs and hVSMCs only modulates [Ca2+]n in hVECs. The increase of [Ca2+]n in hVECs may modulate nuclear functions that are calcium dependent.
Margined winner-take-all: New learning rule for pattern recognition.
Fukushima, Kunihiko
2018-01-01
The neocognitron is a deep (multi-layered) convolutional neural network that can be trained to recognize visual patterns robustly. In the intermediate layers of the neocognitron, local features are extracted from input patterns. In the deepest layer, input patterns are classified into classes based on the features extracted in the intermediate layers. A method called IntVec (interpolating-vector) is used for this purpose. This paper proposes a new learning rule called margined Winner-Take-All (mWTA) for training the deepest layer. Each time a training pattern is presented during learning, if recognition by WTA (Winner-Take-All) produces an error, a new cell is generated in the deepest layer. Here, a certain amount of margin is added to the WTA: only during learning, a handicap is given to cells of classes other than that of the training vector, and the winner is chosen under this handicap. By introducing the margin to the WTA, a compact set of cells can be generated with which a high recognition rate is obtained at a small computational cost. The ability of this mWTA is demonstrated by computer simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
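The margined-WTA idea above can be sketched compactly. The similarity measure, margin value, and function names below are illustrative choices, not the paper's exact formulation (which classifies via IntVec rather than plain inner products):

```python
import numpy as np

# Sketch of margined winner-take-all (mWTA) cell recruitment: during
# learning, cells of classes other than the training label get their
# similarity reduced by a margin; a new cell is created whenever the
# handicapped winner has the wrong class.
def train_mwta(samples, labels, margin=0.1):
    cells, cell_labels = [], []
    for x, y in zip(samples, labels):
        x = x / np.linalg.norm(x)
        if cells:
            sims = np.array([c @ x for c in cells])
            sims = sims - margin * (np.array(cell_labels) != y)
            if cell_labels[int(np.argmax(sims))] == y:
                continue               # recognized under handicap: skip
        cells.append(x)                # error: recruit a new cell
        cell_labels.append(y)
    return cells, cell_labels

def classify(x, cells, cell_labels):
    x = x / np.linalg.norm(x)
    sims = np.array([c @ x for c in cells])
    return cell_labels[int(np.argmax(sims))]   # plain WTA at test time

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = [0, 0, 1, 1]
cells, cell_labels = train_mwta(X, labels)
print(len(cells))  # 2 cells recruited for the 4 training vectors
```

At recognition time the handicap is dropped; the margin only makes the learning phase stricter, which is what keeps the recruited cell set compact.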
Uchide, Keiji; Sakon, Masato; Ariyoshi, Hideo; Nakamori, Syouji; Tokunaga, Masaru; Monden, Morito
2007-02-01
Cancer cell mediated vascular endothelial cell (vEC) retraction plays a pivotal role in cancer metastasis. The aim of this study is to clarify the biochemical character of the vEC retraction factor derived from the human breast cancer cell line MCF-7. To estimate vEC retracting activity, a transwell chamber assay system was employed. We first tested the effects of trypsin digestion as well as lipid extraction of culture medium (CM). Trypsin digestion of CM resulted in approximately 40% loss of vEC retracting activity, and lipid extraction of CM by the Bligh and Dyer method recovered approximately 60% of vEC retracting activity, suggesting that approximately 60% of the vEC retracting activity in MCF-7-derived CM is due to lipid. Although nordihydroguaiaretic acid (NDGA), a specific lipoxygenase inhibitor, suppressed vEC retracting activity in CM, acetylsalicylic acid (ASA), a specific cyclooxygenase inhibitor, did not affect the activity, suggesting that the lipid exerting vEC retracting activity in CM belongs to the lipoxygenase-mediated arachidonate metabolites. Thin layer chromatography clearly demonstrated that the Rf value of the lipid vEC retracting factor in CM is identical to that of 12-HETE. Authentic 12(S)HETE, but not 12(R)HETE, showed vEC retracting activity. After ultracentrifugation of CM, most lipid vEC retracting activity was recovered from the pellet fraction, and flow cytometric analysis using a specific antibody against 12(S)HETE clearly showed the association of 12(S)HETE with small particles in CM. These findings suggest the principal involvement of 12(S)HETE in cancer cell derived microparticles in cancer cell mediated vEC retraction.
Liu, Zan; Xu, Bo; Nameta, Masaaki; Zhang, Ying; Magdeldin, Sameh; Yoshida, Yutaka; Yamamoto, Keiko; Fujinaka, Hidehiko; Yaoita, Eishin; Tasaki, Masayuki; Nakagawa, Yuki; Saito, Kazuhide; Takahashi, Kota; Yamamoto, Tadashi
2013-06-01
Vascular endothelial cells (VECs) play crucial roles in physiological and pathologic conditions in tissues and organs. Most of these roles are related to VEC plasma membrane proteins. In the kidney, VECs are closely associated with structures and functions; however, plasma membrane proteins in kidney VECs remain to be fully elucidated. Rat kidneys were perfused with cationic colloidal silica nanoparticles (CCSN) to label the VEC plasma membrane. The CCSN-labeled plasma membrane fraction was collected by gradient ultracentrifugation. The VEC plasma membrane or whole-kidney lysate proteins were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis and digested with trypsin in gels for liquid chromatography-tandem mass spectrometry. Enrichment analysis was then performed. The VEC plasma membrane proteins were purified by the CCSN method with high yield (approximately 20 μg from 1 g of rat kidney). By Mascot search, 582 proteins were identified in the VEC plasma membrane fraction, and 1,205 proteins were identified in the kidney lysate. In addition to 16 VEC marker proteins such as integrin beta-1 and intercellular adhesion molecule-2 (ICAM-2), 8 novel proteins such as Deltex 3-like protein and phosphatidylinositol binding clathrin assembly protein (PICALM) were identified. As expected, many key functions of plasma membranes in general and of endothelial cells in particular (i.e., leukocyte adhesion) were significantly overrepresented in the proteome of CCSN-labeled kidney VEC fraction. The CCSN method is a reliable technique for isolation of VEC plasma membrane from the kidney, and proteomic analysis followed by bioinformatics revealed the characteristics of in vivo VECs in the kidney.
Jin, Cheng-Yun; Moon, Dong-Oh; Choi, Yung Hyun; Lee, Jae-Dong; Kim, Gi-Young
2007-08-01
Agaricus blazei is a medicinal mushroom that possesses antimetastatic, antitumor, antimutagenic, and immunostimulating effects. However, the molecular mechanisms involved in A. blazei-mediated apoptosis remain unclear. In the present study, to elucidate the role of the Bcl-2 in A. blazei-mediated apoptosis, U937 cells were transfected with either empty vector (U937/vec) or vector containing cDNA encoding full-length Bcl-2 (U937/Bcl-2). As compared with U937/vec, U937/Bcl-2 cells exhibited a 4-fold greater expression of Bcl-2. Treatment of U937/vec with 1.0-4.0 mg/ml of A. blazei extract (ABE) for 24 h resulted in a significant induction of morphologic features indicative of apoptosis. In contrast, U937/Bcl-2 exposed to the same ABE treatment only exhibited a slight induction of apoptotic features. ABE-induced apoptosis was accompanied by downregulation of antiapoptotic proteins such as X-linked inhibitor of apoptosis protein (XIAP), inhibitor of apoptosis protein (cIAP)-2 and Bcl-2, activation of caspase-3, and cleavage of poly(ADP-ribose)polymerase (PARP). Ectopic expression of Bcl-2 was associated with significantly induced expression of antiapoptotic proteins, such as cIAP-2 and Bcl-2, but not XIAP. Ectopic expression of Bcl-2 also reduced caspase-3 activation and PARP cleavage in ABE treated U937 cells. Furthermore, treatment with the caspase-3 inhibitor z-DEVD-fmk was sufficient to restore cell viability following ABE treatment. This increase in viability was ascribed to downregulation of caspase-3 and blockage of PARP and PLC-gamma cleavage. ABE also triggered the downregulation of Akt, and combined treatment with LY294002 (an inhibitor of Akt) significantly decreased cell viability. The results indicated that major regulators of ABE-induced apoptosis in human leukemic U937 cells are Bcl-2 and caspase-3, which are associated with dephosphorylation of the Akt signal pathway.
Hu, Zhiwei; Cheng, Jijun; Xu, Jie; Ruf, Wolfram; Lockwood, Charles J
2017-02-01
Identification of target molecules specific for angiogenic vascular endothelial cells (VEC), the inner layer of pathological neovasculature, is critical for discovery and development of neovascular-targeting therapy for angiogenesis-dependent human diseases, notably cancer, macular degeneration and endometriosis, in which vascular endothelial growth factor (VEGF) plays a central pathophysiological role. Using VEGF-stimulated vascular endothelial cells (VECs) isolated from microvessels, venous and arterial blood vessels as in vitro angiogenic models and unstimulated VECs as a quiescent VEC model, we examined the expression of tissue factor (TF), a membrane-bound receptor on the angiogenic VEC models compared with quiescent VEC controls. We found that TF is specifically expressed on angiogenic VECs in a time-dependent manner in microvessels, venous and arterial vessels. TF-targeted therapeutic agents, including factor VII (fVII)-IgG1 Fc and fVII-conjugated photosensitizer, can selectively bind angiogenic VECs, but not the quiescent VECs. Moreover, fVII-targeted photodynamic therapy can selectively and completely eradicate angiogenic VECs. We conclude that TF is an angiogenic-specific receptor and the target molecule for fVII-targeted therapeutics. This study supports clinical trials of TF-targeted therapeutics for the treatment of angiogenesis-dependent diseases such as cancer, macular degeneration and endometriosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaowinn, Sirichat; Cho, Il-Rae; Moon, Jeong
2015-04-03
Pancreatic adenocarcinoma upregulated factor (PAUF), a novel oncogene, plays a crucial role in the development of pancreatic cancer, including its metastasis and proliferation. Therefore, PAUF-expressing pancreatic cancer cells could be important targets for oncolytic virus-mediated treatment. Panc-1 cells expressing PAUF (Panc-PAUF) showed relative resistance to parvovirus H-1 infection compared with Panc-1 cells expressing an empty vector (Panc-Vec). Of interest, expression of type I IFN-α receptor (IFNAR) was higher in Panc-PAUF cells than in Panc-Vec cells. Increased expression of IFNAR in turn increased the activation of Stat1 and Tyk2 in Panc-PAUF cells compared with that in Panc-Vec cells. Suppression of Tyk2 and Stat1, which are important downstream molecules for IFN-α signaling, sensitized pancreatic cancer cells to parvovirus H-1-mediated apoptosis. Further, constitutive suppression of PAUF sensitized Bxpc3 pancreatic cancer cells to parvovirus H-1 infection. Taken together, these results suggested that PAUF conferred resistance to pancreatic cancer cells against oncolytic parvovirus H-1 infection through IFNAR-mediated signaling. - Highlights: • PAUF confers resistance against oncolytic parvovirus H-1 infection. • PAUF enhances the expression of IFNAR in Panc-1 cells. • Increased activation of Tyk2 or Stat1 by PAUF provides resistance to parvovirus H-1-mediated apoptosis. • Constitutive inhibition of PAUF enhances parvovirus H-1-mediated oncolysis of Bxpc3 pancreatic cancer cells.
Ginsberg, Michael; James, Daylon; Ding, Bi-Sen; Nolan, Daniel; Geng, Fuqiang; Butler, Jason M; Schachterle, William; Pulijaal, Venkat R; Mathew, Susan; Chasen, Stephen T; Xiang, Jenny; Rosenwaks, Zev; Shido, Koji; Elemento, Olivier; Rabbany, Sina Y; Rafii, Shahin
2012-01-01
ETS transcription factors ETV2, FLI1 and ERG1 specify pluripotent stem cells into endothelial cells (ECs). However, these ECs are unstable and drift towards non-vascular cell fates. We show that human mid-gestation c-Kit− lineage-committed amniotic cells (ACs) can be readily reprogrammed into induced vascular endothelial cells (iVECs). Transient ETV2 expression in ACs generated proliferative but immature iVECs, while co-expression with FLI1/ERG1 endowed iVECs with a vascular repertoire and morphology matching mature stable ECs. Brief TGFβ-inhibition functionalized VEGFR2 signaling, augmenting specification of ACs to iVECs. Genome-wide transcriptional analyses showed that iVECs are similar to adult ECs in which vascular-specific genes are turned on and non-vascular genes are silenced. Functionally, iVECs form long-lasting patent vasculature in Matrigel plugs and regenerating livers. Thus, short-term ETV2 expression and TGFβ-inhibition along with constitutive ERG1/FLI1 co-expression reprogram mature ACs into durable and functional iVECs with clinical-scale expansion potential. Public banking of HLA-typed iVECs would establish a vascular inventory for treatment of genetically diverse disorders. PMID:23084400
Cuy, Janet L; Beckstead, Benjamin L; Brown, Chad D; Hoffman, Allan S; Giachelli, Cecilia M
2003-11-01
Stable endothelialization of a tissue-engineered heart valve is essential for proper valve function, although adhesive characteristics of the native valve endothelial cell (VEC) have rarely been explored. This research evaluated VEC adhesive qualities and attempted to enhance VEC growth on the biopolymer chitosan, a novel tissue-engineering scaffold material with promising biological and chemical properties. Aortic VEC cultures were isolated and found to preferentially adhere to fibronectin, collagen types IV and I over laminin and osteopontin in a dose-dependent manner. Seeding of VEC onto comparison substrates revealed VEC growth and morphology to be preferential in the order: tissue culture polystyrene > gelatin, poly(DL-lactide-co-glycolide), chitosan > poly(hydroxy alkanoate). Adhesive protein precoating of chitosan did not significantly enhance VEC growth, despite equivalent protein adsorption as to polystyrene. Initial cell adhesion to protein-precoated chitosan, however, was higher than for polystyrene. Composite chitosan/collagen type IV films were investigated as an alternative to simple protein precoatings, and were shown to improve VEC growth and morphology over chitosan alone. These findings suggest potential manipulation of chitosan properties to improve amenability to valve tissue-engineering applications. Copyright 2003 Wiley Periodicals, Inc.
VE-Cadherin–Mediated Epigenetic Regulation of Endothelial Gene Expression
Morini, Marco F.; Giampietro, Costanza; Corada, Monica; Pisati, Federica; Lavarone, Elisa; Cunha, Sara I.; Conze, Lei L.; O’Reilly, Nicola; Joshi, Dhira; Kjaer, Svend; George, Roger; Nye, Emma; Ma, Anqi; Jin, Jian; Mitter, Richard; Lupia, Michela; Cavallaro, Ugo; Pasini, Diego; Calado, Dinis P.; Dejana, Elisabetta; Taddei, Andrea
2018-01-01
Rationale: The mechanistic foundation of vascular maturation is still largely unknown. Several human pathologies are characterized by deregulated angiogenesis and unstable blood vessels. Solid tumors, for instance, get their nourishment from newly formed structurally abnormal vessels which present wide and irregular interendothelial junctions. Expression and clustering of the main endothelial-specific adherens junction protein, VEC (vascular endothelial cadherin), upregulate genes with key roles in endothelial differentiation and stability. Objective: We aim at understanding the molecular mechanisms through which VEC triggers the expression of a set of genes involved in endothelial differentiation and vascular stabilization. Methods and Results: We compared a VEC-null cell line with the same line reconstituted with VEC wild-type cDNA. VEC expression and clustering upregulated endothelial-specific genes with key roles in vascular stabilization including claudin-5, vascular endothelial-protein tyrosine phosphatase (VE-PTP), and von Willebrand factor (vWf). Mechanistically, VEC exerts this effect by inhibiting polycomb protein activity on the specific gene promoters. This is achieved by preventing nuclear translocation of FoxO1 (Forkhead box protein O1) and β-catenin, which contribute to PRC2 (polycomb repressive complex-2) binding to promoter regions of claudin-5, VE-PTP, and vWf. VEC/β-catenin complex also sequesters a core subunit of PRC2 (Ezh2 [enhancer of zeste homolog 2]) at the cell membrane, preventing its nuclear translocation. Inhibition of Ezh2/VEC association increases Ezh2 recruitment to claudin-5, VE-PTP, and vWf promoters, causing gene downregulation. RNA sequencing comparison of VEC-null and VEC-positive cells suggested a more general role of VEC in activating endothelial genes and triggering a vascular stability-related gene expression program. 
In pathological angiogenesis of human ovarian carcinomas, reduced VEC expression paralleled decreased levels of claudin-5 and VE-PTP. Conclusions: These data extend the knowledge of polycomb-mediated regulation of gene expression to endothelial cell differentiation and vessel maturation. The identified mechanism opens novel therapeutic opportunities to modulate endothelial gene expression and induce vascular normalization through pharmacological inhibition of the polycomb-mediated repression system. PMID:29233846
Fleeman, Nigel; Bagust, Adrian; Boland, Angela; Beale, Sophie; Richardson, Marty; Krishan, Ashma; Stainthorpe, Angela; Abdulla, Ahmed; Kotas, Eleanor; Banks, Lindsay; Payne, Miranda
2017-10-01
The National Institute for Health and Care Excellence (NICE) invited the manufacturer (Amgen) of talimogene laherparepvec (T-VEC) to submit clinical and cost-effectiveness evidence for previously untreated advanced (unresectable or metastatic) melanoma as part of the Institute's Single Technology Appraisal process. The Liverpool Reviews and Implementation Group (LRiG) at the University of Liverpool was commissioned to act as the Evidence Review Group (ERG). This article presents a summary of the company's submission of T-VEC, the ERG review and the resulting NICE guidance (TA410), issued in September 2016. T-VEC is an oncolytic virus therapy granted a marketing authorisation by the European Commission for the treatment of adults with unresectable melanoma that is regionally or distantly metastatic (stage IIIB, IIIC and IVM1a) with no bone, brain, lung or other visceral disease. Clinical evidence for T-VEC versus granulocyte-macrophage colony-stimulating factor (GM-CSF) was derived from the multinational, open-label randomised controlled OPTiM trial [Oncovex (GM-CSF) Pivotal Trial in Melanoma]. In accordance with T-VEC's marketing authorisation, the company's submission focused primarily on 249 patients with stage IIIB to stage IV/M1a disease who constituted 57% of the overall trial population (T-VEC, n = 163 and GM-CSF, n = 86). Results from analyses of durable response rate, objective response rate, time to treatment failure and overall survival all showed marked and statistically significant improvements for patients treated with T-VEC compared with those treated with GM-CSF. However, GM-CSF is not used to treat melanoma in clinical practice. It was not possible to compare treatment with T-VEC with an appropriate comparator using conventionally accepted methods due to the absence of comparative head-to-head data or trials with sufficient common comparators. 
Therefore, the company compared T-VEC with ipilimumab using what it described as modified Korn and two-step Korn methods. Results from these analyses suggested that treatment with T-VEC was at least as effective as treatment with ipilimumab. Using the discounted patient access scheme (PAS) price for T-VEC and list price for ipilimumab, the company reported incremental cost-effectiveness ratios (ICERs) per quality-adjusted life-year (QALY) gained. For the comparison of treatment with T-VEC versus ipilimumab, the ICER per QALY gained was -£16,367 using the modified Korn method and -£60,271 using the two-step Korn method. The NICE Appraisal Committee (AC) agreed with the ERG that the company's methods for estimating clinical effectiveness of T-VEC versus ipilimumab were flawed and therefore produced unreliable results for modelling progression in stage IIIB to stage IVM1a melanoma. The AC concluded that the clinical and cost effectiveness of treatment with T-VEC compared with ipilimumab is unknown in patients with stage IIIB to stage IV/M1a disease. However, the AC considered that T-VEC may be a reasonable option for treating patients who are unsuitable for treatment with systemically administered immunotherapies (such as ipilimumab). T-VEC was therefore recommended by NICE as a treatment option for adults with unresectable, regionally or distantly metastatic (stage IIIB to stage IVM1a) melanoma that has not spread to bone, brain, lung or other internal organs, only if treatment with systemically administered immunotherapies is not suitable and the company provides T-VEC at the agreed discounted PAS price.
Pharmacokinetic drug evaluation of talimogene laherparepvec for the treatment of advanced melanoma.
Burke, Erin E; Zager, Jonathan S
2018-04-01
Current treatment of advanced melanoma is rapidly changing with the introduction of new and effective therapies, including systemic as well as locoregional therapies. An example of one such locoregional therapy is intralesional injection with talimogene laherparepvec (T-VEC). Areas covered: T-VEC has been shown in a number of studies to be an effective treatment for patients with stage IIIB, IIIC and IVM1a melanoma. In this article the effectiveness, pharmacokinetics and safety profile of T-VEC are reviewed. Additionally, new research looking at combinations of T-VEC and systemic immunotherapies is reviewed. Expert opinion: Overall, T-VEC is an easily administered, safe, well tolerated and effective oncolytic viral therapy for the treatment of stage IIIB, IIIC and IVM1a unresectable and injectable metastatic melanoma. Recently published studies are showing promising results when T-VEC is combined with systemic therapy, and such combinations may soon shape how we treat metastatic melanoma. Continued work on the use of T-VEC with other systemic agents will provide new and more effective treatment strategies for advanced melanoma.
Learning atoms for materials discovery.
Zhou, Quan; Tang, Peizhe; Liu, Shenxiu; Pan, Jinbo; Yan, Qimin; Zhang, Shou-Cheng
2018-06-26
Exciting advances have been made in artificial intelligence (AI) during recent decades. Among them, applications of machine learning (ML) and deep learning techniques have brought human-competitive performance to tasks in a variety of fields, including image recognition, speech recognition, and natural language understanding. Even in Go, the ancient game of profound complexity, the AI player has already beaten human world champions convincingly, with and without learning from human play. In this work, we show that our unsupervised machines (Atom2Vec) can learn the basic properties of atoms by themselves from an extensive database of known compounds and materials. These learned properties are represented as high-dimensional vectors, and clustering of atoms in the vector space classifies them into meaningful groups consistent with human knowledge. We use the atom vectors as basic input units for neural networks and other ML models designed and trained to predict materials properties, which demonstrate significant accuracy. Copyright © 2018 the Author(s). Published by PNAS.
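The core idea, that an atom is characterized by the environments it occurs in across known compounds, can be illustrated with a minimal sketch (this is not the authors' Atom2Vec code; the toy compound list and helper names below are illustrative only):

```python
from collections import defaultdict
from math import sqrt

# Toy "database" of known binary compounds (illustrative, not exhaustive).
compounds = [("Na", "Cl"), ("K", "Cl"), ("Na", "Br"), ("K", "Br"),
             ("Mg", "O"), ("Ca", "O"), ("Mg", "S"), ("Ca", "S")]

# Each atom's vector counts the partner environments it appears with.
vectors = defaultdict(lambda: defaultdict(int))
for a, b in compounds:
    vectors[a][b] += 1  # atom a seen in environment b
    vectors[b][a] += 1  # and vice versa

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Atoms from the same group share environments, so they cluster together:
print(cosine(vectors["Na"], vectors["K"]))  # alkali metals: very similar
print(cosine(vectors["Na"], vectors["O"]))  # alkali vs chalcogen: dissimilar
```

In the real method the count matrix is far larger and a low-dimensional embedding is learned from it, but the grouping of chemically similar atoms already emerges from raw co-occurrence.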
Comparisons and Selections of Features and Classifiers for Short Text Classification
NASA Astrophysics Data System (ADS)
Wang, Ye; Zhou, Zhi; Jin, Shan; Liu, Debin; Lu, Mi
2017-10-01
Short text is considerably different from traditional long text documents due to its shortness and conciseness, which hinders the application of conventional machine learning and data mining algorithms to short text classification. Following traditional artificial intelligence methods, we divide short text classification into three steps, namely preprocessing, feature selection and classifier comparison. In this paper, we illustrate step by step how we approached our goals. Specifically, in feature selection, we compared the performance and robustness of four methods: one-hot encoding, tf-idf weighting, word2vec and paragraph2vec; in the classification part, we chose and compared Naive Bayes, Logistic Regression, Support Vector Machine, K-nearest Neighbor and Decision Tree as our classifiers. We then compared and analysed the classifiers horizontally with each other and vertically against the feature selections. Regarding the datasets, we crawled more than 400,000 short text files from the Shanghai and Shenzhen Stock Exchanges and manually labeled them at two granularities: a coarse ("big") level with eight labels and a fine ("small") level with 59 labels.
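Of the compared feature schemes, tf-idf weighting is simple enough to sketch directly; the toy documents below are illustrative and not from the paper's stock-exchange corpus:

```python
from collections import Counter
from math import log

# Three tiny pre-tokenized documents (illustrative).
docs = [["profit", "rose", "sharply"],
        ["profit", "fell"],
        ["board", "meeting", "held"]]

# Document frequency: number of documents containing each term.
df = Counter(t for doc in docs for t in set(doc))
N = len(docs)

def tfidf(doc):
    """Map each term in doc to its tf-idf weight."""
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * log(N / df[t]) for t in tf}

weights = tfidf(docs[0])
# "profit" appears in 2 of the 3 documents, so it is down-weighted
# relative to terms unique to this document, such as "rose".
assert weights["profit"] < weights["rose"]
```

The resulting sparse weight vectors are what a classifier such as Naive Bayes or an SVM would consume in the comparison described above.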
Alloying and Properties of C14–NbCr2 and A15–Nb3X (X = Al, Ge, Si, Sn) in Nb–Silicide-Based Alloys
Tsakiropoulos, Panos
2018-01-01
The oxidation of Nb–silicide-based alloys is improved with Al, Cr, Ge or Sn addition(s). Depending on addition(s) and its(their) concentration(s), alloyed C14-AB2 Laves and A15-A3X phases can be stable in the microstructures of the alloys. In both phases, A is the transition metal(s), and B and X respectively can be Cr, Al, Ge, Si or Sn, and Al, Ge, Si or Sn. The alloying, creep and hardness of these phases were studied using the composition weighted differences in electronegativity (∆χ), average valence electron concentrations (VEC) and atomic sizes. For the Laves phase (i) the VEC and ∆χ were in the ranges 4.976 < VEC < 5.358 and −0.503 < ∆χ < −0.107; (ii) the concentration of B (=Al + Cr + Ge + Si + Sn) varied from 50.9 to 64.5 at %; and (iii) the Cr concentration was in the range of 35.8 < Cr < 51.6 at %. Maps of ∆χ versus Cr, ∆χ versus VEC, and VEC versus atomic size separated the alloying behaviours of the elements. Compared with unalloyed NbCr2, the VEC decreased and ∆χ increased in Nb(Cr,Si)2, and the changes in both parameters increased when Nb was substituted by Ti, and Cr by Si and Al, or Si and Ge, or Si and Sn. For the A15 phase (i) the VEC and ∆χ were in the ranges 4.38 < VEC < 4.89 and 0.857 < ∆χ < 1.04, with no VEC values between 4.63 and 4.72 and (ii) the concentration of X (=Al + Ge + Si + Sn) varied from 16.3 to 22.7 at %. The VEC versus ∆χ map separated the alloying behaviours of elements. The hardness of A15-Nb3X was correlated with the parameters ∆χ and VEC. The hardness increased with increases in ∆χ and VEC. Compared with Nb3Sn, the ∆χ and hardness of Nb3(Si,Sn) increased. The substitution of Nb by Cr had the same effect on ∆χ and hardness as Hf or Ti. The ∆χ and hardness increased with Ti concentration. The addition of Al in Nb3(Si,Sn,Al) decreased the ∆χ and increased the hardness. When Ti and Hf, or Ti, Hf and Cr, were simultaneously present with Al, the ∆χ was decreased and the hardness was unchanged. 
The better creep of Nb(Cr,Si)2 compared with the unalloyed Laves phase was related to the decrease in the VEC and ∆χ parameters. PMID:29518920
The large-scale gravitational bias from the quasi-linear regime.
NASA Astrophysics Data System (ADS)
Bernardeau, F.
1996-08-01
It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression for the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients $C_{p,q}$ defined by $\langle\delta^{p}(\vec{x}_{1})\,\delta^{q}(\vec{x}_{2})\rangle_{c} = C_{p,q}\,\langle\delta^{2}(\vec{x})\rangle^{p+q-2}\,\langle\delta(\vec{x}_{1})\delta(\vec{x}_{2})\rangle$, where $\delta(\vec{x})$ is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large-separation approximation in an explicit numerical Monte Carlo integration of the $C_{2,1}$ parameter as a function of $|\vec{x}_{1}-\vec{x}_{2}|$. A straightforward application is the calculation of the large-scale "bias" properties of over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.
Identification of Sources with Unknown Wavefronts.
1988-03-31
Liu, Ya-rong; Chen, Jun-jun; Dai, Min
2014-01-01
Aim: Paeonol (2′-hydroxy-4′-methoxyacetophenone) from Cortex moutan root is a potential therapeutic agent for atherosclerosis. This study sought to investigate the mechanisms underlying the anti-inflammatory effects of paeonol in rat vascular endothelial cells (VECs) in vitro. Methods: VECs were isolated from rat thoracic aortas. The cells were pretreated with paeonol for 24 h, and then stimulated with ox-LDL for another 24 h. The expression of microRNA-21 (miR-21) and PTEN in VECs was analyzed using qRT-PCR. The expression of PTEN protein was detected by Western blotting. TNF-α release by VECs was measured by ELISA. Results: Ox-LDL treatment inhibited VEC growth in a dose- and time-dependent manner (the IC50 value was about 20 mg/L at 24 h). Furthermore, ox-LDL (20 mg/L) significantly increased miR-21 expression and inhibited the expression of PTEN, one of the downstream target genes of miR-21, in VECs. In addition, ox-LDL (20 mg/L) significantly increased the release of TNF-α from VECs. Pretreatment with paeonol increased the survival rate of ox-LDL-treated VECs in a dose- and time-dependent manner. Moreover, paeonol (120 μmol/L) prevented the ox-LDL-induced increases in miR-21 expression and TNF-α release, and the ox-LDL-induced inhibition of PTEN expression. A dual-luciferase reporter assay showed that miR-21 bound directly to PTEN's 3′-UTR, thus inhibiting PTEN expression. In ox-LDL-treated VECs, transfection with a miR-21 mimic significantly increased miR-21 expression, inhibited PTEN expression, and attenuated the protective effects of paeonol pretreatment, whereas transfection with a miR-21 inhibitor significantly decreased miR-21 expression and increased PTEN expression, thus enhancing the protective effects of paeonol pretreatment. Conclusion: miR-21 is an important target of paeonol for its protective effects against ox-LDL-induced VEC injury, which may play critical roles in the development of atherosclerosis. PMID:24562307
Neuronal Effects of Sugammadex in combination with Rocuronium or Vecuronium
Aldasoro, Martin; Jorda, Adrian; Aldasoro, Constanza; Marchio, Patricia; Guerra-Ojeda, Sol; Gimeno-Raga, Marc; Mauricio, Mª Dolores; Iradi, Antonio; Obrador, Elena; Vila, Jose Mª; Valles, Soraya L.
2017-01-01
Rocuronium (ROC) and Vecuronium (VEC) are the most commonly used steroidal non-depolarizing neuromuscular blocking (NMB) agents. Sugammadex (SUG) rapidly reverses steroidal NMB agents after anaesthesia. The present study was conducted to evaluate the neuronal effects of SUG alone and in combination with both ROC and VEC. Using MTT assays, CASP-3 activity, and Western blot, we determined the toxicity of SUG, ROC or VEC in neurons in primary culture. SUG induces apoptosis/necrosis in neurons in primary culture and increases cytochrome C (CytC), apoptosis-inducing factor (AIF), Smac/Diablo and Caspase 3 (CASP-3) protein expression. Our results also demonstrated that both ROC and VEC prevent these SUG effects. The protective role of both ROC and VEC could be explained by the fact that SUG encapsulates NMB drugs. Under conditions of impaired blood-brain barrier (BBB), it would be desirable to control SUG doses to prevent an excess of free SUG in plasma that may induce neuronal damage. A balance between SUG and ROC or VEC would be necessary to prevent the risk of cell damage. PMID:28367082
Relation extraction for biological pathway construction using node2vec.
Kim, Munui; Baek, Seung Han; Song, Min
2018-06-13
Systems biology is an important field for understanding whole biological mechanisms composed of interactions between biological components. One approach for understanding complex and diverse mechanisms is to analyze biological pathways. However, because these pathways consist of important interactions and information on these interactions is disseminated in a large number of biomedical reports, text-mining techniques are essential for extracting these relationships automatically. In this study, we applied node2vec, an algorithmic framework for feature learning in networks, for relationship extraction. To this end, we extracted genes from paper abstracts using pkde4j, a text-mining tool for detecting entities and relationships. Using the extracted genes, a co-occurrence network was constructed and node2vec was used with the network to generate a latent representation. To demonstrate the efficacy of node2vec in extracting relationships between genes, performance was evaluated for gene-gene interactions involved in a type 2 diabetes pathway. Moreover, we compared the results of node2vec to those of baseline methods such as co-occurrence and DeepWalk. Node2vec outperformed existing methods in detecting relationships in the type 2 diabetes pathway, demonstrating that this method is appropriate for capturing the relatedness between pairs of biological entities involved in biological pathways. The results demonstrated that node2vec is useful for automatic pathway construction.
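The heart of node2vec is a second-order biased random walk governed by a return parameter p and an in-out parameter q; the collected walk sequences are then fed to a skip-gram (word2vec-style) model to produce the latent representation. A minimal sketch of the walk under those assumptions (the toy gene graph is illustrative; the real framework precomputes alias tables for efficient sampling):

```python
import random

# Toy undirected gene co-occurrence graph as an adjacency dict.
graph = {"A": ["B", "C"], "B": ["A", "C", "D"],
         "C": ["A", "B"], "D": ["B"]}

def node2vec_walk(start, length, p=1.0, q=1.0, rng=random):
    """One second-order biased random walk of the given length."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:  # first step: uniform choice
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        # Bias: 1/p to return to prev, 1 to stay near prev, 1/q to explore.
        weights = [1 / p if x == prev else
                   (1.0 if prev in graph[x] else 1 / q) for x in nbrs]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

random.seed(0)
walks = [node2vec_walk(n, 5) for n in graph for _ in range(10)]
# These sequences would be passed to a skip-gram model (word2vec)
# to learn the node embeddings used for relation extraction.
print(walks[0])
```

Small q biases the walk outward (structural exploration, DeepWalk-like when p = q = 1), while small p keeps it local; this is the knob that let node2vec outperform plain co-occurrence in the comparison above.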
New structure of high-pressure body-centered orthorhombic Fe2SiO4
Yamanaka, Takamitsu; Kyono, Atsushi; Nakamoto, Yuki; ...
2015-08-01
Here, a structural change in Fe2SiO4 spinel and the structure of a new high-pressure phase are determined by Rietveld profile fitting of x-ray diffraction data up to 64 GPa at ambient temperature. The compression curve of the spinel is discontinuous at approximately 20 GPa. Fe Kβ x-ray emission measurements at high pressure show that the transition from a high-spin (HS) to an intermediate-spin (IS) state begins at 17 GPa in the spinel phase. The IS electronic state is gradually enhanced with pressure, which results in an isostructural phase transition. A transition from the cubic spinel structure to a body-centered orthorhombic phase (I-Fe2SiO4) with space group Imma and Z=4 was observed at approximately 34 GPa. The structure of I-Fe2SiO4 has two crystallographically distinct FeO6 octahedra, which are arranged in layers parallel to (101) and (011) and are very similar to the layers of FeO6 octahedra that constitute the spinel structure. Silicon also exists in six-fold coordination in I-Fe2SiO4. The transformation to the new high-pressure phase is reversible under decompression at ambient temperature. A martensitic transformation of each slab of the spinel structure with translation vector [1/8 1/8 1/8] generates the I-Fe2SiO4 structure. Laser heating of I-Fe2SiO4 at 1500 K results in decomposition of the material to rhombohedral FeO and SiO2 stishovite.
Schvartsman, Gustavo; Perez, Kristen; Flynn, Jill E; Myers, Jeffrey N; Tawbi, Hussein
2017-01-01
Immunotherapy plays a key role in the treatment of metastatic melanoma. Patients with autoimmune conditions and/or on immunosuppressive therapy due to orthotopic transplants, however, are systematically excluded from clinical trials. Talimogene laherparepvec (T-VEC) is the first oncolytic virus to be approved by the FDA for cancer therapy. To our knowledge, this is the first report of T-VEC being administered to an organ transplant recipient. Here we present the case of a patient with recurrent locally advanced cutaneous melanoma receiving salvage T-VEC therapy in the setting of orthotopic heart transplantation. After 5 cycles of therapy, no evidence of graft rejection has been observed to date; the patient achieved a complete remission and is currently off therapy. This case advocates for further investigation of the safety and efficacy of immunotherapeutic approaches, such as T-VEC, in solid organ transplant recipients.
Rosenthal, Gideon; Váša, František; Griffa, Alessandra; Hagmann, Patric; Amico, Enrico; Goñi, Joaquín; Avidan, Galia; Sporns, Olaf
2018-06-05
Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections. The functional roles of nodes are defined by their direct and indirect connectivity with the rest of the network. However, the network context is not directly accessible at the level of individual nodes. Similar problems in language processing have been addressed with algorithms such as word2vec that create embeddings of words and their relations in a meaningful low-dimensional vector space. Here we apply this approach to create embedded vector representations of brain networks or connectome embeddings (CE). CE can characterize correspondence relations among brain regions, and can be used to infer links that are lacking from the original structural diffusion imaging, e.g., inter-hemispheric homotopic connections. Moreover, we construct predictive deep models of functional and structural connectivity, and simulate network-wide lesion effects using the face processing system as our application domain. We suggest that CE offers a novel approach to revealing relations between connectome structure and function.
BioVEC: a program for biomolecule visualization with ellipsoidal coarse-graining.
Abrahamsson, Erik; Plotkin, Steven S
2009-09-01
Biomolecule Visualization with Ellipsoidal Coarse-graining (BioVEC) is a tool for visualizing molecular dynamics simulation data while allowing coarse-grained residues to be rendered as ellipsoids. BioVEC reads in configuration files, which may be output from molecular dynamics simulations that include orientation output in either quaternion or ANISOU format, and can render frames of the trajectory in several common image formats for subsequent concatenation into a movie file. The BioVEC program is written in C++, uses the OpenGL API for rendering, and is open source. It is lightweight, allows for user-defined display and texture settings, and runs on either Windows or Linux platforms.
Circulating fibrocytes stabilize blood vessels during angiogenesis in a paracrine manner.
Li, Jinqing; Tan, Hong; Wang, Xiaolin; Li, Yuejun; Samuelson, Lisa; Li, Xueyong; Cui, Caibin; Gerber, David A
2014-02-01
Accumulating evidence supports that circulating fibrocytes play important roles in angiogenesis. However, the specific role of fibrocytes in angiogenesis and the underlying mechanisms remain unclear. In this study, we found that fibrocytes stabilized newly formed blood vessels in a mouse wound-healing model by inhibiting angiogenesis during the proliferative phase and inhibiting blood vessel regression during the remodeling phase. Fibrocytes also inhibited angiogenesis in a Matrigel mouse model. In vitro study showed that fibrocytes inhibited both the apoptosis and proliferation of vascular endothelial cells (VECs) in a permeable support (Transwell) co-culture system. In a three-dimensional collagen gel, fibrocytes stabilized the VEC tubes by decreasing VEC tube density on stimulation with growth factors and preventing VEC tube regression on withdrawal of growth factors. Further mechanistic investigation revealed that fibrocytes expressed many prosurvival factors that are responsible for the prosurvival effect of fibrocytes on VECs and blood vessels. Fibrocytes also expressed angiogenesis inhibitors, including thrombospondin-1 (THBS1). THBS1 knockdown partially blocked the fibrocyte-induced inhibition of VEC proliferation in the Transwell co-culture system and recovered the fibrocyte-induced decrease of VEC tube density in collagen gel. Purified fibrocytes transfected with THBS1 siRNA partially recovered the fibrocyte-induced inhibition of angiogenesis in both the wound-healing and Matrigel models. In conclusion, our findings reveal that fibrocytes stabilize blood vessels via prosurvival factors and anti-angiogenic factors, including THBS1. Copyright © 2014 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
Guo, Jianyou; Li, ChangYu; Wang, Jie; Liu, Yongmei; Zhang, Jiahui
2011-01-01
This article studies a contemporary treatment approach toward both diabetes and depression management by vanadium-enriched Cordyceps sinensis (VECS). Streptozotocin-induced hyperglycemic rats were used in the study. After the rats were administered with VECS, a significant reduction in blood glucose levels was seen (P < .05) and the levels of serum insulin increased significantly (P < .05). At the same time, the study revealed a significant decrease in immobility with a corresponding increase in the swimming and climbing behavior in hyperglycemic rats following VECS treatment. The results described herein demonstrate that VECS is a contemporary treatment approach that advocates an aggressive stance toward both diabetes and depression management. PMID:21799679
NASA Astrophysics Data System (ADS)
Mashood, K. K.; Singh, Vijay A.
2012-09-01
Student difficulties regarding the angular velocity ($\vec{\omega}$) and angular acceleration ($\vec{\alpha}$) of a particle have remained relatively unexplored in contrast to their linear counterparts. We present an inventory comprising multiple-choice questions aimed at probing misconceptions and eliciting ill-suited reasoning patterns. The development of the inventory was based on interactions with students, teachers and experts. We report misconceptions, some of which parallel those found earlier in linear kinematics. Fixations on inappropriate prototypes were uncovered. Many students and even teachers mistakenly assume that all rotational motion is necessarily circular. A persistent notion exists that the direction of $\vec{\omega}$ and $\vec{\alpha}$ should be 'along' the motion. Instances of indiscriminate usage of equations were identified.
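The directional misconception noted above can be countered with the defining relations for a particle with position $\vec{r}$ and velocity $\vec{v}$ (a standard statement, added here for context rather than quoted from the inventory):

```latex
\vec{\omega} \;=\; \frac{\vec{r}\times\vec{v}}{r^{2}},
\qquad
\vec{\alpha} \;=\; \frac{d\vec{\omega}}{dt}
```

Both vectors are perpendicular to the plane of motion, along the rotation axis by the right-hand rule, never tangent to ("along") the trajectory; and since the definition uses only $\vec{r}$ and $\vec{v}$, it applies to any moving particle, not just to circular motion.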
Li, Yanan; Liu, Fangfang; Zhang, Zhiqiang; Zhang, Mingle; Cao, Shanjin; Li, Yachai; Zhang, Lin; Huang, Xianghua; Xu, Yanfang
2015-11-01
Grafting material for vaginal reconstruction commonly includes the bowel, peritoneum, skin, and amniotic membrane. Bone marrow mesenchymal stem cells (MSCs) have the potential of multilineage differentiation into a variety of cells and have been widely explored in tissue engineering. In the current study, we examined whether MSCs could be differentiated to vaginal epithelial cells (VECs) upon co-culturing with VECs. We also examined whether Wnt/β-catenin signaling pathway is implicated in such differentiation. Co-culture of MSCs with VECs using a transwell insert system (with no direct contact) induced the expression of VECs marker AE1/AE3 in MSCs. MSCs combined with small intestinal submucosa (SIS) scaffold were implanted in place of the native vagina in rats to observe the implications for vaginal reconstruction in vivo. Anatomic repair of neovagina was assessed by histological staining for H/E and Masson's Trichrome. GSK-3β and β-catenin, main members of Wnt/β-catenin signaling pathway, in MSCs were increased upon co-culturing with VECs. Exposure of co-cultured MSCs to a Wnt/β-catenin signaling activator, lithium chloride (LiCl, 20 µM) increased phosphorylated GSK-3β and β-catenin and enhanced expression of AE1/AE3. In vivo-grafted cells displayed significant matrix infiltration and expressed epithelial markers in neovagina. These findings suggest that MSCs could acquire the phenotype of VECs when co-cultured with VECs, possibly via activation of Wnt/β-catenin signaling. MSCs provide an alternative cell source for potential use in vaginal tissue engineering. © 2015 International Federation for Cell Biology.
Mikhail, Marianne; Vachon, Pierre H; D'Orléans-Juste, Pedro; Jacques, Danielle; Bkaily, Ghassan
2017-10-01
Our previous work showed the presence of endothelin-1 (ET-1) receptors, ET(A) and ET(B), in human vascular endothelial cells (hVECs). In this study, we wanted to verify whether ET-1 plays a role in the survival of hVECs via the activation of its receptors ET(A) and/or ET(B) (ET(A)R and ET(B)R, respectively). Our results showed that treatment of hVECs with ET-1 prevented apoptosis induced by genistein, an effect that was mimicked by treatment with the ET(B)R-specific agonist IRL1620. Furthermore, blockade of ET(B)R with the selective ET(B)R antagonist A-192621 prevented the anti-apoptotic effect of ET-1 in hVECs. However, activation of the ET(A) receptor alone did not seem to contribute to the anti-apoptotic effect of ET-1. In addition, the anti-apoptotic effect of ET(B)R was found to be associated with caspase 3 inhibition and does not depend on the density of this type of receptor. In conclusion, our results showed that ET-1 possesses an anti-apoptotic effect in hVECs and that this effect is mediated, to a great extent, via the activation of ET(B)R. This study revealed a new role for ET(B)R in the survival of hVECs.
Burcoglu-O'Ral, Arsinur; Erkan, Doruk; Asherson, Ronald
2002-09-01
To define at the molecular level the vascular endothelial cell (VEC) injury characteristics of catastrophic antiphospholipid syndrome (CAPS) and to report successful therapeutic use of a VEC modulator, defibrotide. We describe a 55-year-old man with primary APS with an intractable prothrombotic state (CAPS) resistant to combined therapy with heparin, warfarin, aspirin, and dipyridamole. Treatment with defibrotide was conducted in the context of an investigational phase II protocol where the dose was regulated and individualized by disease/patient-specific molecular and clinical markers. The patient entered complete remission with defibrotide treatment. During treatment, dose dependent pharmacological actions of defibrotide and key stress markers for VEC injury were identified. Evidence of defibrotide's polypharmacology included downregulation of cytokines, notably tumor necrosis factor-alpha, as the earliest effect, cellular differentiation of VEC, possibly with direct regulatory effect over cellular genes, and the reversal of platelet consumption and prothrombotic state. Von Willebrand antigen levels were used as the sole marker to guide therapy. This case demonstrates effective remission of CAPS with defibrotide treatment. In contrast to theories that CAPS is triggered by ischemic and thrombotic tissue damage, these data present VEC injury as the primary and representative lesion of CAPS. The pathogenesis may involve concurrent impairment of different VEC functions. Achieving remission may require a polypharmacologic approach, represented here by use of defibrotide.
Global phase diagram and quantum spin liquids in a spin- 1 2 triangular antiferromagnet
Gong, Shou-Shu; Zhu, Wei; Zhu, Jianxin; ...
2017-08-09
For this research, we study the spin-1/2 Heisenberg model on the triangular lattice with nearest-neighbor J1 > 0 and next-nearest-neighbor J2 > 0 Heisenberg interactions, and an additional scalar chiral interaction $J_{\chi}(\vec{S}_{i}\times\vec{S}_{j})\cdot\vec{S}_{k}$ for the three spins in all the triangles, using large-scale density matrix renormalization group calculations on cylinder geometry. With increasing J2 (J2/J1 ≤ 0.3) and Jχ (Jχ/J1 ≤ 1.0) interactions, we establish a quantum phase diagram with the magnetically ordered 120°, stripe, and noncoplanar tetrahedral phases. In between these magnetically ordered phases, we find a chiral spin liquid (CSL) phase, which is identified as a ν = 1/2 bosonic fractional quantum Hall state with possible spontaneous rotational symmetry breaking. By switching on the chiral interaction, we find that the previously identified spin liquid in the J1-J2 triangular model (0.08 ≲ J2/J1 ≲ 0.15) shows a phase transition to the CSL phase at very small Jχ. We also compute the spin triplet gap in both spin liquid phases, and our finite-size results suggest a large gap in the odd topological sector but a small or vanishing gap in the even sector. Lastly, we discuss the implications of our results on the nature of the spin liquid phases.
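For reference, the model described above takes the standard form (reconstructed from the abstract's description, not quoted from the paper):

```latex
H \;=\; J_{1}\sum_{\langle ij\rangle}\vec{S}_{i}\cdot\vec{S}_{j}
\;+\; J_{2}\sum_{\langle\langle ij\rangle\rangle}\vec{S}_{i}\cdot\vec{S}_{j}
\;+\; J_{\chi}\sum_{\triangle}\left(\vec{S}_{i}\times\vec{S}_{j}\right)\cdot\vec{S}_{k}
```

where the first two sums run over nearest- and next-nearest-neighbor bonds and the last over all elementary triangles, with the spins (i, j, k) taken in a fixed orientation around each triangle so that the chiral term picks a handedness.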
Shroff, Ankit; Sequeira, Roicy; Reddy, Kudumula Venkata Rami
2017-04-01
Autophagy plays an important role in clearance of intracellular pathogens. However, no information is available on its involvement in vaginal infections such as vulvo-vaginal candidiasis (VVC). VVC is intimately associated with the immune status of the human vaginal epithelial cells (VECs). The objective of our study is to decipher if autophagy process is involved during Candida albicans infection of VECs. In this study, C. albicans infection system was established using human VEC line (VK2/E6E7). Infection-induced change in the expression of autophagy markers like LC3 and LAMP-1 were analyzed by RT-PCR, q-PCR, Western blot, immunofluorescence and transmission electron microscopy (TEM) studies were carried out to ascertain the localization of autophagosomes. Multiplex ELISA was carried out to determine the cytokine profiles. Analysis of LC3 and LAMP-1 expression at mRNA and protein levels at different time points revealed up-regulation of these markers 6 hours post C. albicans infection. LC3 and LAMP-1 puncti were observed in infected VECs after 12 hours. TEM studies showed C. albicans entrapped in autophagosomes. Cytokines-TNF-α and IL-1β were up-regulated in culture supernatants of VECs at 12 hours post-infection. The results suggest that C. albicans invasion led to the activation of autophagy as a host defense mechanism of VECs. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhenyu; Lin, Yu; Wang, Xueyi
The eigenmode stability properties of three-dimensional lower-hybrid-drift instabilities (LHDI) in a Harris current sheet with a small but finite guide magnetic field have been systematically studied by employing the gyrokinetic electron and fully kinetic ion (GeFi) particle-in-cell (PIC) simulation model with a realistic ion-to-electron mass ratio $m_i/m_e$. In contrast to the fully kinetic PIC simulation scheme, the fast electron cyclotron motion and plasma oscillations are systematically removed in the GeFi model, and hence one can employ the realistic $m_i/m_e$. The GeFi simulations are benchmarked against and show excellent agreement with both the fully kinetic PIC simulation and the analytical eigenmode theory. Our studies indicate that, for small wavenumbers, $k_y$, along the current direction, the most unstable eigenmodes are peaked at the location where $\vec{k} \cdot \vec{B} = 0$, consistent with previous analytical and simulation studies. Here, $\vec{B}$ is the equilibrium magnetic field and $\vec{k}$ is the wavevector perpendicular to the nonuniformity direction. As $k_y$ increases, however, the most unstable eigenmodes are found to be peaked at $\vec{k} \cdot \vec{B} \neq 0$. Additionally, the simulation results indicate that varying $m_i/m_e$, the current sheet width, and the guide magnetic field can affect the stability of LHDI. Simulations with the varying mass ratio confirm the lower hybrid frequency and wave number scalings.
2016-07-07
Peptide-modified PELCL electrospun membranes for regulation of vascular endothelial cells.
Zhou, Fang; Jia, Xiaoling; Yang, Yang; Yang, Qingmao; Gao, Chao; Zhao, Yunhui; Fan, Yubo; Yuan, Xiaoyan
2016-11-01
The efficiency of biomaterials used in small vascular repair depends greatly on their ability to interact with vascular endothelial cells (VECs). Rapid endothelialization of vascular grafts is a promising way to prevent thrombosis and intimal hyperplasia. In this work, modification of electrospun membranes of poly(ethylene glycol)-b-poly(l-lactide-co-ε-caprolactone) (PELCL) with three different peptides for the regulation of VECs was studied in order to obtain ideal bioactive biomaterials for small-diameter vascular grafts. QK (a mimetic peptide of vascular endothelial growth factor), Arg-Glu-Asp-Val (REDV, a specific adhesive peptide for VECs) and Val-Ala-Pro-Gly (VAPG, a specific adhesive peptide for vascular smooth muscle cells) were investigated. Surface properties of the modified membranes and the response of VECs were verified. It was found that protein adsorption and platelet adhesion were effectively suppressed by the introduction of QK, REDV or VAPG peptides on the PELCL electrospun membranes. Both QK- and REDV-modified electrospun membranes could accelerate the proliferation of VECs in the first 9 days, and the QK-modified electrospun membrane promoted cell proliferation more significantly than the REDV-modified one. The REDV-modified PELCL membrane was more favorable for VEC adhesion than the QK- and VAPG-modified membranes. It is suggested that QK- or REDV-modified PELCL electrospun membranes may have great potential in cardiovascular biomaterials for rapid endothelialization in situ. Copyright © 2016 Elsevier B.V. All rights reserved.
Multiscale Architectures and Parallel Algorithms for Video Object Tracking
2011-10-01
NASA Astrophysics Data System (ADS)
Monthus, Cécile; Garel, Thomas
2007-03-01
We consider the low temperature T
Santos-Ciminera, Patricia D; Acheé, Nicole L; Quinnan, Gerald V; Roberts, Donald R
2004-09-01
We evaluated polymerase chain reaction (PCR) to confirm immunoassays for malaria parasites in mosquito pools after a failure to detect malaria with PCR during an outbreak in which pools tested positive using VecTest and enzyme-linked immunosorbent assay (ELISA). We combined VecTest, ELISA, and PCR to detect Plasmodium falciparum and Plasmodium vivax VK 210. Each mosquito pool, prepared in triplicate, consisted of 1 exposed Anopheles stephensi and up to 9 unfed mosquitoes. The results of VecTest and ELISA were concordant. DNA from a subset of the pools, 1 representative of each ratio of infected to uninfected mosquitoes, was extracted and used as template in PCR. All P. vivax pools were PCR positive but some needed additional processing for removal of apparent inhibitors before positive results were obtained. One of the pools selected for P. falciparum was negative by PCR, probably because of losses or contamination during DNA extraction; 2 remaining pools at this ratio were PCR positive. Testing pools by VecTest, ELISA, and PCR is feasible, and PCR is useful for confirmation of immunoassays. An additional step might be needed to remove potential inhibitors from pools prior to PCR.
Generalized Jastrow Variational Method for Liquid HELIUM-3-HELIUM-4 Mixtures at T = 0 K.
NASA Astrophysics Data System (ADS)
Mirabbaszadeh, Kavoos
Microscopic theory of dilute liquid $^3$He-$^4$He mixtures is of great interest, because it provides a physical realization of a nearly degenerate weakly interacting fermion system. An understanding of the properties of the mixtures has received considerable attention both theoretically and experimentally over the past thirty years. We present here a variational procedure based on the Jastrow function for the ground state of $^3$He-$^4$He mixtures by minimizing the total energy of the mixture using the hypernetted-chain (HNC) approximation and the Percus-Yevick (PY) approximation for the two-body correlation functions. Our goal is to compute from first principles the internal energy of the system and the various two-body correlation functions at various densities and compare the results with experiment. The Jastrow variational method for the ground state energy of liquid $^4$He consists of the following ansatz for the wave function: $\Psi_\alpha(\vec r_{1\alpha}, \vec r_{2\alpha}, \ldots, \vec r_{N_\alpha}) = \prod_{i<j} f_{\alpha\alpha}(r_{ij})$. For a $^3$He system the corresponding ansatz is $\Psi_\beta(\vec r_{1\beta}, \vec r_{2\beta}, \ldots, \vec r_{N_\beta}) = \left[\prod_{i<j} f_{\beta\beta}(r_{ij})\right] \Phi(\vec r_{1\beta}, \vec r_{2\beta}, \ldots, \vec r_{N_\beta})$, where $\Phi$ is a Slater determinant of plane waves for the ground state of the fermion system. The total energy per particle can be written in the form $E = x_\alpha^2 E_{\alpha\alpha} + x_\beta^2 E_{\beta\beta} + 2 x_\alpha x_\beta E_{\alpha\beta}$, where $E_{\alpha\alpha}$, $E_{\beta\beta}$, $E_{\alpha\beta}$ are unknown parameters to be determined from a microscopic theory. Using the Jastrow wave function $\Psi$ for the mixture, a general expression is given for the ground state energy in terms of the two-body potential and the two- and three-body correlation functions. The Kirkwood superposition approximation (KSA) is used for the three-body correlation functions.
The antisymmetry of the wave function for Fermions is incorporated following the procedure given earlier by Lado, Inguva and Smith. This procedure for treating the antisymmetry of the wave function simplifies the equations for the two-body correlation functions considerably. The equations for the correlation functions are solved in the hypernetted-chain approximation. Once the two-particle correlation functions for the mixture ( ^3He-^4He) have been obtained, the energy is minimized with respect to the variational parameters involved in the Jastrow wave function. The binding energy and the optimal correlation functions are then obtained as a function of the concentration of ^3He atoms in the mixture. (Abstract shortened with permission of author.).
The Natural Protective Mechanism Against Hyperglycemia in Vascular Endothelial Cells
Riahi, Yael; Sin-Malia, Yoav; Cohen, Guy; Alpert, Evgenia; Gruzman, Arie; Eckel, Juergen; Staels, Bart; Guichardant, Michel; Sasson, Shlomo
2010-01-01
OBJECTIVE Vascular endothelial cells (VECs) downregulate their rate of glucose uptake in response to hyperglycemia by decreasing the expression of their typical glucose transporter GLUT-1. Previously, we discovered critical roles for the protein calreticulin and the arachidonic acid–metabolizing enzyme 12-lipoxygenase in this autoregulatory process. The hypothesis that 4-hydroxydodeca-(2E,6Z)-dienal (4-HDDE), the peroxidation product of 12-lipoxygenase, mediates this downregulatory mechanism by activating peroxisome proliferator–activated receptor (PPAR) δ was investigated. RESEARCH DESIGN AND METHODS Effects of 4-HDDE and PPARδ on the glucose transport system and calreticulin expression in primary bovine aortic endothelial cells were evaluated by pharmacological and molecular interventions. RESULTS Using GW501516 (PPARδ agonist) and GSK0660 (PPARδ antagonist), we discovered that high-glucose–induced downregulation of the glucose transport system in VECs is mediated by PPARδ. A PPAR-sensitive luciferase reporter assay in VECs revealed that high glucose markedly increased luciferase activity, while GSK0660 abolished it. High-performance liquid chromatography analysis showed that high-glucose incubation substantially elevated the generation of 4-HDDE in VECs. Treatment of VECs, exposed to normal glucose, with 4-HDDE mimicked high glucose and downregulated the glucose transport system and increased calreticulin expression. Like high glucose, 4-HDDE significantly activated PPARδ in cells overexpressing human PPAR (hPPAR)δ but not hPPARα, -γ1, or -γ2. Moreover, silencing of PPARδ prevented high-glucose–dependent alterations in GLUT-1 and calreticulin expression. Finally, specific binding of PPARδ to a PPAR response element in the promoter region of the calreticulin gene was identified by utilizing a specific chromatin immunoprecipitation assay.
CONCLUSIONS Collectively, our data show that 4-HDDE plays a central role in the downregulation of glucose uptake in VECs by activating PPARδ. PMID:20107107
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
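Method (1), grid coarsening, amounts to applying a fixed aggregation matrix to the native-resolution state vector so that each coarse element averages a block of adjacent native elements. A minimal numpy sketch (the matrix name `gamma` and uniform block averaging are illustrative assumptions, not details from the paper):

```python
import numpy as np

def coarsen(x_native, block):
    """Merge adjacent state vector elements by averaging fixed-size blocks."""
    n = x_native.size
    m = n // block
    # Aggregation matrix: row i averages native elements i*block..(i+1)*block-1
    gamma = np.zeros((m, n))
    for i in range(m):
        gamma[i, i * block:(i + 1) * block] = 1.0 / block
    return gamma @ x_native

x = np.arange(8, dtype=float)   # native-resolution state vector
print(coarsen(x, 2))            # [0.5 2.5 4.5 6.5]
```

The aggregation error arises because the prior relationship within each block (here, equal weighting) is imposed rather than optimized against the observations.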
Balancing aggregation and smoothing errors in inverse models
NASA Astrophysics Data System (ADS)
Turner, A. J.; Jacob, D. J.
2015-01-01
Balancing aggregation and smoothing errors in inverse models
NASA Astrophysics Data System (ADS)
Turner, A. J.; Jacob, D. J.
2015-06-01
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, a combined wavelet filtering and VEC model, to study the dynamic relationship among financial time series. A wavelet filter has been used to remove noise from the daily data set of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. After that, the returns of the series generated by the wavelet filter and of the original series are analyzed by a cointegration test and the VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC model) outperforms the traditional model (VEC model) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
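The wavelet filtering step can be illustrated with a one-level Haar transform: decompose the return series into approximation and detail coefficients, soft-threshold the detail (noise) coefficients, and invert. This is a minimal sketch of wavelet denoising in general, not the study's exact DWT configuration (wavelet family, decomposition depth, and threshold rule are assumptions here):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising (length must be even)."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (noise-prone)
    # Soft-threshold the detail coefficients to suppress noise
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    # Inverse Haar transform
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2)
    y[1::2] = (approx - detail) / np.sqrt(2)
    return y

returns = np.array([0.01, 0.012, -0.02, -0.018, 0.005, 0.004, 0.0, 0.001])
print(haar_denoise(returns, thresh=0.005))
```

With a zero threshold the transform reconstructs the series exactly; the denoised returns would then feed the cointegration test and VEC estimation.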
Virtue Existential Career Model: A Dialectic and Integrative Approach Echoing Eastern Philosophy
Liu, Shu-Hui; Hung, Jui-Ping; Peng, Hsin-I; Chang, Chia-Hui; Lu, Yi-Jen
2016-01-01
Our Virtue Existential Career (VEC) model aims at complementing western modernist and postmodernist career theories with eastern philosophy. With dialectical philosophy and virtue-practice derived from the Classic of Changes, the VEC theoretical foundation incorporates merits from the Holland typology, the Minnesota Theory of Work Adjustment, Social Cognitive Career Theory, Meaning Therapy, Narrative Approach Career Counseling, and Happenstance Learning Theory. While modernism considers a matched job an ideal career vision and prefers rational strategies (controlling and realizing) to achieve job security, and postmodernism prefers appreciating and adapting strategies toward openness and appreciates multiple possible selves and occupations, our model pursues a blending of security and openness via controlling-and-realizing and appreciating-and-adapting interwoven with each other in a dialectical and harmonious way. Our VEC counseling prototype aims at a secular goal of living on the earth with ways and harmony (安身以法以和) and an ultimate end of spiraling up to the wisdom of living up to the way of heaven (天道) with mind and virtue (立命以心以德). A VEC counseling process of five major career strategies, metaphorical stories of qian and kun, and experiential activities were developed to deliver VEC concepts. The VEC model and prototype presented in this research are the product of action research following Lewin's (1946) top-down model. Situated structure analyses were conducted to further investigate the adequacy of this version of the VEC model and prototype. Data from two groups (one of stranded college graduates and the other of growing college students) revealed empirical support. The yang type of career praxes tends to induce actualization, resulting in realistic goals and concrete action plans; the yin type tends to increase self-efficacy, resulting in a positive attitude toward the current situatedness and future development.
Acceptance and dialectic thinking often result from yin-yang-blending career praxes. Growing developers benefit from a strategy sequence of yang-yin-synthesized; stranded developers from a sequence of yin-yang-synthesized. Our contributions and limitations are discussed in the context of developing indigenous career theories and practices for a globalized and ever-changing world. PMID:27895604
Ginsberg, Michael; Schachterle, William; Shido, Koji; Rafii, Shahin
2016-01-01
Endothelial cells (ECs) have essential roles in organ development and regeneration, and therefore they could be used for regenerative therapies. However, generation of abundant functional endothelium from pluripotent stem cells has been difficult because ECs generated by many existing strategies have limited proliferative potential and display vascular instability. The latter difficulty is of particular importance because cells that lose their identity over time could be unsuitable for therapeutic use. Here, we describe a 3-week platform for directly converting human mid-gestation lineage-committed amniotic fluid–derived cells (ACs) into a stable and expandable population of vascular ECs (rAC-VECs) without using pluripotency factors. By transient expression of the ETS transcription factor ETV2 for 2 weeks and constitutive expression of the ETS transcription factors FLI1 and ERG1, concomitant with TGF-β inhibition for 3 weeks, epithelial and mesenchymal ACs are converted, with high efficiency, into functional rAC-VECs. These rAC-VECs maintain their vascular repertoire and morphology over numerous passages in vitro, and they form functional vessels when implanted in vivo. rAC-VECs can be detected in recipient mice months after implantation. Thus, rAC-VECs can be used to establish a cellular platform to uncover the molecular determinants of vascular development and heterogeneity and potentially represent ideal ECs for the treatment of regenerative disorders. PMID:26540589
Aortic calcified particles modulate valvular endothelial and interstitial cells.
van Engeland, Nicole C A; Bertazzo, Sergio; Sarathchandra, Padmini; McCormack, Ann; Bouten, Carlijn V C; Yacoub, Magdi H; Chester, Adrian H; Latif, Najma
Normal and calcified human valve cusps, coronary arteries, and aortae harbor spherical calcium phosphate microparticles of identical composition and crystallinity, and their role remains unknown. The objective was to examine the direct effects of isolated calcified particles on human valvular cells. Calcified particles were isolated from healthy and diseased aortae, characterized, quantitated, and applied to valvular endothelial cells (VECs) and interstitial cells (VICs). Cell differentiation, viability, and proliferation were analyzed. Particles were heterogeneous, differing in size and shape, and were crystallized as calcium phosphate. Diseased donors had significantly more calcified particles than healthy donors (P<.05), but there were no differences in the composition of particles from healthy and diseased donors. VECs treated with calcified particles showed a significant decrease in CD31 and VE-cadherin and an increase in von Willebrand factor expression, P<.05. Treated VICs showed significantly increased α-SMA and osteopontin (P<.05); VEC and VIC viability was significantly decreased (P<.05); and the number of terminal deoxynucleotidyl transferase dUTP nick end labeling-positive VECs was significantly increased (P<.05), indicating apoptosis, upon treatment with the calcified particles. Isolated calcified particles from human aortae are not innocent bystanders but induce a phenotypical and pathological change of VECs and VICs characteristic of activated and pathological cells. Therapy tailored to reduce these calcified particles should be investigated. Copyright © 2017 Elsevier Inc. All rights reserved.
Side-Specific Endothelial-Dependent Regulation of Aortic Valve Calcification
Richards, Jennifer; El-Hamamsy, Ismail; Chen, Si; Sarang, Zubair; Sarathchandra, Padmini; Yacoub, Magdi H.; Chester, Adrian H.; Butcher, Jonathan T.
2014-01-01
Arterial endothelial cells maintain vascular homeostasis and vessel tone in part through the secretion of nitric oxide (NO). In this study, we determined how aortic valve endothelial cells (VEC) regulate aortic valve interstitial cell (VIC) phenotype and matrix calcification through NO. Using an anchored in vitro collagen hydrogel culture system, we demonstrate that three-dimensionally cultured porcine VIC do not calcify in osteogenic medium unless under mechanical stress. Co-culture with porcine VEC, however, significantly attenuated VIC calcification through inhibition of myofibroblastic activation, osteogenic differentiation, and calcium deposition. Incubation with the NO donor DETA-NO inhibited VIC osteogenic differentiation and matrix calcification, whereas incubation with the NO blocker l-NAME augmented calcification even in 3D VIC–VEC co-culture. Aortic VEC, but not VIC, expressed endothelial NO synthase (eNOS) in both porcine and human valves, which was reduced in osteogenic medium. eNOS expression was reduced in calcified human aortic valves in a side-specific manner. Porcine leaflets exposed to the soluble guanylyl cyclase inhibitor ODQ increased osteocalcin and α-smooth muscle actin expression. Finally, side-specific shear stress applied to porcine aortic valve leaflet endothelial surfaces increased cGMP production in VEC. Valve endothelial-derived NO is a natural inhibitor of the early phases of valve calcification and therefore may be an important regulator of valve homeostasis and pathology. PMID:23499458
47 CFR Appendix 2 to Part 97 - VEC Regions
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICE Pt. 97, App. 2 Appendix 2 to Part 97—VEC Regions 1. Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. 2. New Jersey and New York. 3. Delaware, District of Columbia, Maryland... and Virginia. 5. Arkansas, Louisiana, Mississippi, New Mexico, Oklahoma and Texas. 6. California. 7...
Quantum Computational Geodesics
2010-01-01
$\ldots \int_0^T dt\, U(t)^\dagger\, \mathrm{unvec}\!\left[\kappa_t \int_0^t dr\, \kappa_r^{-1}\, \mathrm{vec}\, C(r)\right] U(t). \quad (209)$ If $J(T) = 0$ in equation 209, then $\frac{d}{dt} J(0) = j_T^{-1} \int_0^T dt\, U(t)^\dagger\, \mathrm{unvec}\!\left[\kappa_t \int_0^t dr\, \kappa_r^{-1}\, \mathrm{vec}\, C(r)\right] U(t). \quad (210)$ From equation 211, one obtains the so-called geodesic derivative (1): $\frac{d}{dq} H_q(0) = j_T^{-1} \int_0^T dt\, U(t)^\dagger\, \mathrm{unvec}\!\left[\kappa_t \int_0^t dr\, \kappa_r^{-1}\, \mathrm{vec}\, C(r)\right] U(t).$
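The vec and unvec operations in this excerpt are the standard column-stacking map between matrices and vectors and its inverse; that the report uses the usual column-major convention is an assumption here. A short numpy sketch, including the identity vec(AXB) = (Bᵀ ⊗ A) vec(X) that underlies this kind of matrix-equation manipulation:

```python
import numpy as np

# vec stacks the columns of a matrix into a vector; unvec inverts it.
# Column-major ('F') ordering is the conventional definition of vec.

def vec(A):
    return A.reshape(-1, order="F")

def unvec(v, shape):
    return v.reshape(shape, order="F")

A = np.array([[1.0, 3.0], [2.0, 4.0]])
print(vec(A))                   # [1. 2. 3. 4.]  (columns stacked)
print(unvec(vec(A), A.shape))   # recovers A

# Identity: vec(A X B) = (B^T kron A) vec(X)
X = np.array([[1.0, 0.5], [0.25, 2.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))    # True
```

This identity is what lets matrix-valued differential equations like the ones above be rewritten as ordinary linear ODEs on the vectorized state.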
Photoproduction of Λ and Σ 0 hyperons using linearly polarized photons
Paterson, C. A.; Ireland, D. G.; Livingston, K.; ...
2016-06-08
Measurements of polarization observables for the reactions $\vec{\gamma} p \rightarrow K^+ \Lambda$ and $\vec{\gamma} p \rightarrow K^+ \Sigma^0$ have been performed. This is part of a programme of measurements designed to study the spectrum of baryon resonances. The accurate measurement of several polarization observables provides tight constraints for phenomenological fits. Beam-recoil observables for the $\vec{\gamma} p \rightarrow K^+ \Sigma^0$ reaction have not been reported before now. Furthermore, the measurements were carried out using linearly polarized photon beams and the CLAS detector at the Thomas Jefferson National Accelerator Facility. The energy range of the results is 1.71 GeV.
Are Bred Vectors The Same As Lyapunov Vectors?
NASA Astrophysics Data System (ADS)
Kalnay, E.; Corazza, M.; Cai, M.
Regional loss of predictability is an indication of the instability of the underlying flow, where small errors in the initial conditions (or imperfections in the model) grow to large amplitudes in finite times. The stability properties of evolving flows have been studied using Lyapunov vectors (e.g., Alligood et al, 1996, Ott, 1993, Kalnay, 2002), singular vectors (e.g., Lorenz, 1965, Farrell, 1988, Molteni and Palmer, 1993), and, more recently, with bred vectors (e.g., Szunyogh et al, 1997, Cai et al, 2001). Bred vectors (BVs) are, by construction, closely related to Lyapunov vectors (LVs). In fact, after an infinitely long breeding time, and with the use of infinitesimal amplitudes, bred vectors are identical to leading Lyapunov vectors. In practical applications, however, bred vectors are different from Lyapunov vectors in two important ways: a) bred vectors are never globally orthogonalized and are intrinsically local in space and time, and b) they are finite-amplitude, finite-time vectors. These two differences are very significant in a dynamical system whose size is very large. For example, the atmosphere is large enough to have "room" for several synoptic scale instabilities (e.g., storms) to develop independently in different regions (say, North America and Australia), and it is complex enough to have several different possible types of instabilities (such as barotropic, baroclinic, convective, and even Brownian motion). Bred vectors share some of their properties with leading LVs (Corazza et al, 2001a, 2001b, Toth and Kalnay, 1993, 1997, Cai et al, 2001). For example, 1) Bred vectors are independent of the norm used to define the size of the perturbation. Corazza et al. (2001) showed that bred vectors obtained using a potential enstrophy norm were indistinguishable from bred vectors obtained using a streamfunction squared norm, in contrast with singular vectors.
2) Bred vectors are independent of the length of the rescaling period as long as the perturbations remain approximately linear (for example, for atmospheric models the interval for rescaling could be varied between a single time step and 1 day without qualitatively affecting the characteristics of the bred vectors).

However, the finite amplitude, finite time, and lack of orthogonalization of the BVs introduce important differences with LVs:

1) In regions that undergo strong instabilities, the bred vectors tend to be locally dominated by simple, low-dimensional structures. Patil et al. (2001) showed that the BV-dim (appendix) gives a good estimate of the number of dominant directions (shapes) of the k local bred vectors. For example, if half of them are aligned in one direction, and half in a different direction, the BV-dim is about two. If the majority of the bred vectors are aligned predominantly in one direction and only a few are aligned in a second direction, then the BV-dim is between 1 and 2. Patil et al. (2001) showed that the regions with low dimensionality cover about 20% of the atmosphere. They also found that these low-dimensionality regions have a very well defined vertical structure, and a typical lifetime of 3-7 days. The low dimensionality identifies regions where the instability of the basic flow has manifested itself in a low number of preferred directions of perturbation growth.

2) Using a quasi-geostrophic simulation system of data assimilation developed by Morss (1999), Corazza et al. (2001a, b) found that bred vectors have structures that closely resemble the background (short forecasts used as first guess) errors, which in turn dominate the local analysis errors. This is especially true in regions of low dimensionality, which is not surprising if these are unstable regions where errors grow in preferred shapes.

3) The number of bred vectors needed to represent the unstable subspace in the QG system is small (about 6-10).
This was shown by computing the local BV-dim as a function of the number of independent bred vectors. Convergence in the local dimension starts to occur at about 6 BVs, and is essentially complete when the number of vectors is about 10-15 (Corazza et al., 2001a). This should be contrasted with the results of Snyder and Joly (1998) and Palmer et al. (1998), who showed that hundreds of Lyapunov vectors with positive Lyapunov exponents are needed to represent the attractor of the system in quasi-geostrophic models.

4) Since only a few bred vectors are needed, and background errors project strongly onto the subspace of bred vectors, Corazza et al. (2001b) were able to develop cost-efficient methods to improve the 3D-Var data assimilation by adding to the background error covariance terms proportional to the outer product of the bred vectors, thus representing the "errors of the day". This approach led to a reduction of analysis error variance of about 40% at very low cost.

5) The fact that BVs have finite amplitude provides a natural way to filter out instabilities present in the system that have fast growth, but saturate nonlinearly at such small amplitudes that they are irrelevant for ensemble perturbations. As shown by Lorenz (1996), Lyapunov vectors (and singular vectors) of models including these physical phenomena would be dominated by the fast but small-amplitude instabilities, unless they are explicitly excluded from the linearized models. Bred vectors, on the other hand, through the choice of an appropriate size for the perturbation, provide a natural filter based on nonlinear saturation of fast but irrelevant instabilities.

6) Every bred vector is qualitatively similar to the *leading* LV. LVs beyond the leading LV are obtained by orthogonalization after each time step with respect to the previous LVs' subspace. The orthogonalization requires the introduction of a norm.
With an enstrophy norm, the successive LVs have larger and larger horizontal scales, whereas a choice of a streamfunction norm would lead to successively smaller scales in the LVs. Beyond the first few LVs, there is little qualitative similarity between the background errors and the LVs.

In summary, in a system like the atmosphere with enough physical space for several independent local instabilities, BVs and LVs share some properties but they also have significant differences. BVs are finite-amplitude, finite-time, and because they are not globally orthogonalized, they have local properties in space. Bred vectors are akin to the leading LV, but bred vectors derived from different arbitrary initial perturbations remain distinct from each other, instead of collapsing into a single leading vector, presumably because the nonlinear terms and physical parameterizations introduce sufficient stochastic forcing to avoid such convergence. As a result, there is no need for global orthogonalization, and the number of bred vectors required to describe the natural instabilities in an atmospheric system (from a local point of view) is much smaller than the number of Lyapunov vectors with positive Lyapunov exponents. The BVs are independent of the norm, whereas the LVs beyond the first one do depend on the choice of norm: for example, they become larger in scale with a vorticity norm, and smaller with a streamfunction norm.

These properties of BVs result in significant advantages for data assimilation and ensemble forecasting for the atmosphere. Errors in the analysis have structures very similar to bred vectors, and it is found that they project very strongly on the subspace of a few bred vectors. This is not true for either Lyapunov vectors beyond the leading LV, or for singular vectors unless they are constructed with a norm based on the analysis error covariance matrix (or a bred vector covariance).
The similarity between bred vectors and analysis errors leads to the ability to include "errors of the day" in the background error covariance and a significant improvement of the analysis beyond 3D-Var at a very low cost (Corazza et al., 2001b).

References

Alligood, K. T., T. D. Sauer and J. A. Yorke, 1996: Chaos: An Introduction to Dynamical Systems. Springer-Verlag, New York.
Buizza, R., J. Tribbia, F. Molteni and T. Palmer, 1993: Computation of optimal unstable structures for numerical weather prediction models. Tellus, 45A, 388-407.
Cai, M., E. Kalnay and Z. Toth, 2001: Potential impact of bred vectors on ensemble forecasting and data assimilation in the Zebiak-Cane model. Submitted to J. of Climate.
Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. Yorke, 2001: Use of the breeding technique to determine the structure of the "errors of the day". Submitted to Nonlinear Processes in Geophysics.
Corazza, M., E. Kalnay, D. J. Patil, E. Ott, J. Yorke, I. Szunyogh and M. Cai, 2001: Use of the breeding technique in the estimation of the background error covariance matrix for a quasigeostrophic model. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002.
Farrell, B., 1988: Small error dynamics and the predictability of atmospheric flow. J. Atmos. Sci., 45, 163-172.
Kalnay, E., 2002: Atmospheric Modeling, Data Assimilation and Predictability. Chapter 6. Cambridge University Press, UK. In press.
Kalnay, E. and Z. Toth, 1994: Removing growing errors in the analysis. Preprints, Tenth Conference on Numerical Weather Prediction, pp. 212-215. Amer. Meteor. Soc., July 18-22, 1994.
Lorenz, E. N., 1965: A study of the predictability of a 28-variable atmospheric model. Tellus, 21, 289-307.
Lorenz, E. N., 1996: Predictability: A problem partly solved. Proceedings of the ECMWF Seminar on Predictability, Reading, England, Vol. 1, 1-18.
Molteni, F.
and T. N. Palmer, 1993: Predictability and finite-time instability of the northern winter circulation. Q. J. Roy. Meteorol. Soc., 119, 269-298.
Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. Ph.D. Thesis, Massachusetts Institute of Technology, 225 pp.
Ott, E., 1993: Chaos in Dynamical Systems. Cambridge University Press, New York.
Palmer, T. N., R. Gelaro, J. Barkmeijer and R. Buizza, 1998: Singular vectors, metrics and adaptive observations. J. Atmos. Sci., 55, 633-653.
Patil, D. J., B. R. Hunt, E. Kalnay, J. Yorke and E. Ott, 2001: Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett., 86, 5878.
Patil, D. J., I. Szunyogh, B. R. Hunt, E. Kalnay, E. Ott and J. Yorke, 2001: Using large member ensembles to isolate local low dimensionality of atmospheric dynamics. AMS Symposium on Observations, Data Assimilation and Predictability, Preprints volume, Orlando, FL, 14-17 January 2002.
Snyder, C. and A. Joly, 1998: Development of perturbations within growing baroclinic waves. Q. J. Roy. Meteor. Soc., 124, 1961.
Szunyogh, I., E. Kalnay and Z. Toth, 1997: A comparison of Lyapunov and singular vectors in a low resolution GCM. Tellus, 49A, 200-227.
Toth, Z. and E. Kalnay, 1993: Ensemble forecasting at NMC: the generation of perturbations. Bull. Amer. Meteorol. Soc., 74, 2317-2330.
Toth, Z. and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297-3319.

* Corresponding author address: Eugenia Kalnay, Meteorology Department, University of Maryland, College Park, MD 20742-2425, USA; email: ekalnay@atmos.umd.edu

Appendix: BV-dimension

Patil et al. (2001) defined local bred vectors around a point in the 3-dimensional grid of the model by taking the 24 closest horizontal neighbors. If there are k bred vectors available, and N model variables for each grid point, the k local bred vectors form the columns of a 25N x k matrix B. The k x k covariance matrix is C = B^T B.
Its eigenvalues are positive, and its eigenvectors v(i) are the singular vectors of the local bred vector subspace. The bred vector dimension (BV-dim) measures the local effective dimension:

BV-dim(s(1), s(2), ..., s(k)) = {SUM_i s(i)}^2 / SUM_i s(i)^2,

where s(i) are the square roots of the eigenvalues of the covariance matrix.
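The appendix's BV-dimension can be computed directly from the singular values of the local bred-vector matrix B. A minimal sketch, with random test vectors standing in for real bred vectors:

```python
import numpy as np

def bv_dimension(B):
    """BV-dimension of local bred vectors (Patil et al., 2001).

    B : (m, k) matrix whose columns are the k local bred vectors
        (m = 25*N for 25 grid points with N variables each).
    Returns (sum_i s_i)^2 / sum_i s_i^2, where the s_i are the singular
    values of B, i.e. the square roots of the eigenvalues of C = B^T B.
    """
    s = np.linalg.svd(B, compute_uv=False)
    return s.sum() ** 2 / (s ** 2).sum()

rng = np.random.default_rng(1)
u, v = rng.standard_normal(50), rng.standard_normal(50)
# four bred vectors aligned with u and four with v: two dominant directions
B = np.column_stack([u, u, u, u, v, v, v, v])
print(bv_dimension(B))  # close to 2, as in the half-and-half example above
```

If all eight columns were aligned with u, the BV-dim would instead be 1, matching the text's description of a single preferred direction.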
Effect of Herbal Medicine on Vaginal Epithelial Cells: A Systematic Review and Meta-analysis.
Rahmani, Yousef; Chaleh, Khadijeh Chaleh; Shahmohammadi, Afshar; Safari, Shahla
2018-04-01
The present meta-analysis aimed to assess the effect of herbal medicine on the vaginal epithelial cells (VECs) of menopausal subjects. The literature related to VECs exposed to various herbal medicines in menopausal women was searched on three databases: MEDLINE (1966-August 2017), Scopus (1990-August 2017) and the Cochrane Library (Cochrane Central Register of Controlled Trials; 2014). In total, the meta-analysis was conducted on 11 randomised controlled trials. Based on the findings, the standardized mean difference (SMD) of maturation value (MV) was elevated by 0.48 (95% confidence interval [CI], 0.108-0.871; P = 0.012), and heterogeneity was high (I2 = 84%; P < 0.001). The MV revealed a significant increase in the soy group (SMD, 0.358; 95% CI, 0.073-0.871; P = 0.014) compared to the control group. The herbal medicines exhibited a statistically significant effect on the VECs. A significant effect on the VECs was also found in the subgroup analysis of the patients who received soy. However, further and more extensive studies are required to achieve reliable outcomes.
Patel, Mickey V; Fahey, John V; Rossoll, Richard M; Wira, Charles R
2013-05-01
Vaginal epithelial cells (VEC) are the first line of defense against incoming pathogens in the female reproductive tract. Their ability to produce the anti-HIV molecules elafin and HBD2 under hormonal stimulation is unknown. Vaginal epithelial cells were recovered using a menstrual cup and cultured overnight prior to treatment with estradiol (E₂), progesterone (P₄) or a panel of selective estrogen response modulators (SERMs). Conditioned media were recovered and analyzed for protein concentration and anti-HIV activity. E₂ significantly decreased the secretion of HBD2 and elafin by VEC over 48 hrs, while P₄ and the SERMs (tamoxifen, PHTTP, ICI or Y134) had no effect. VEC conditioned media from E₂-treated cells had no anti-HIV activity, while that from E₂/P₄-treated cells significantly inhibited HIV-BaL infection. The menstrual cup allows for effective recovery of primary VEC. Their production of HBD2 and elafin is sensitive to E₂, suggesting that innate immune protection varies in the vagina across the menstrual cycle. © 2013 John Wiley & Sons A/S.
Li, Yunlun; Zhang, Xinya; Yang, Wenqing; Li, Chao; Chu, Yanjun; Jiang, Haiqiang; Shen, Zhenzhen
2017-01-01
The aim of the present study was to investigate the effect and the underlying mechanism of the combined treatment of rhynchophylla total alkaloids (RTA) and sinapine thiocyanate for protection against a prothrombotic state (PTS) associated with the tumor necrosis factor-alpha (TNF-α)-induced inflammatory injury of vascular endothelial cells (VECs). A TNF-α-induced VEC inflammatory injury model was established, and the cell morphology of VECs was evaluated using scanning electron microscopy. In addition, reverse transcription-quantitative polymerase chain reaction and western blot analysis were performed to examine the mRNA and protein expression of coagulation-related factors, including nuclear factor-κB (NF-κB), transforming growth factor-β1 (TGF-β1), tissue factor (TF), plasminogen activator inhibitor-1 (PAI-1), protease-activated receptor-1 (PAR-1) and protein kinase C-α (PKC-α) in VECs. Combined treatment with RTA and sinapine thiocyanate was demonstrated to reduce, to a varying extent, the mRNA and protein expression of NF-κB, TGF-β1, TF, PAR-1, PKC-α and PAI-1. Furthermore, combined treatment with RTA and sinapine thiocyanate was able to downregulate the expression of coagulation-related factors in injured VECs, thereby inhibiting the PTS induced by vascular endothelial injury. The underlying mechanism is partially associated with the TF-mediated activation of the thrombin-receptor signaling pathway that suppresses coagulation during inflammation and balances fibrinolysis in order to inhibit fibrin generation and deposition. PMID:28587383
Tisato, Veronica; Zauli, Giorgio; Rimondi, Erika; Gianesini, Sergio; Brunelli, Laura; Menegatti, Erica; Zamboni, Paolo; Secchiero, Paola
2013-01-01
Large vein endothelium plays important roles in clinical diseases such as chronic venous disease (CVD) and thrombosis; thus, characterizing CVD vein endothelial cells (VEC) has a strategic role in identifying specific therapeutic targets. On this basis, we evaluated the effect of the natural anti-inflammatory compounds α-lipoic acid and Ginkgoselect phytosome on cytokines/chemokines released by CVD patient-derived VEC. For this purpose, we characterized the levels of a panel of cytokines/chemokines (n = 31) in CVD patients' plasma compared to healthy controls and their release by VEC purified from the same patients, in unstimulated and TNF-α-stimulated conditions. Among the cytokines/chemokines released by VEC, which recapitulated the systemic profile (IL-8, TNF-α, GM-CSF, IFN-α2, G-CSF, MIP-1β, VEGF, EGF, eotaxin, MCP-1, CXCL10, PDGF, and RANTES), we identified those targeted by ex vivo treatment with α-lipoic acid and/or Ginkgoselect phytosome (GM-CSF, G-CSF, CXCL10, PDGF, and RANTES). Finally, by investigating the intracellular pathways involved in promoting the VEC release of cytokines/chemokines targeted by natural anti-inflammatory compounds, we documented that α-lipoic acid significantly counteracted TNF-α-induced NF-κB and p38/MAPK activation, while the effects of Ginkgo biloba appeared to be predominantly mediated by Akt. Our data provide new insights into the molecular mechanisms of CVD pathogenesis, highlighting new potential therapeutic targets. PMID:24489443
Vaginal innate immune mediators are modulated by a water extract of Houttuynia cordata Thunb.
Satthakarn, Surada; Hladik, Florian; Promsong, Aornrutai; Nittayananta, Wipawee
2015-06-16
Vaginal epithelial cells (VECs) produce antimicrobial peptides, including human β-defensin 2 (hBD2) and secretory leukocyte protease inhibitor (SLPI), as well as cytokines and chemokines that play vital roles in the mucosal innate immunity of the female reproductive tract. Houttuynia cordata Thunb (H. cordata), a herbal plant found in Asia, possesses various activities, including antimicrobial and anti-inflammatory activity. As inflammation and infection are commonly found in the female reproductive tract, we aimed to investigate the effects of an H. cordata water extract in modulating innate immune factors produced by VECs. Primary human VECs were cultured and treated with H. cordata at concentrations ranging from 25-200 μg/ml for 6 or 18 h. After treatment, the cells and culture supernatants were harvested. The expression of hBD2 and SLPI mRNA was evaluated by quantitative real-time reverse transcription PCR. Levels of secreted hBD2 and SLPI as well as cytokines and chemokines in the supernatants were measured by ELISA and Luminex assay, respectively. Cytotoxicity of the extract on VECs was assessed by the CellTiter-Blue Cell Viability Assay. H. cordata did not cause measurable toxicity on VECs after exposure for 18 h. The expression of hBD2 and SLPI mRNA as well as the secreted hBD2 protein were increased in response to H. cordata exposure for 18 h when compared to the untreated controls. However, treatment with the extract for 6 h had only slight effects on the mRNA expression of hBD2 and SLPI. The secretion of IL-2 and IL-6 proteins by VECs was also increased, while the secretion of CCL5 was decreased after treatment with the extract for 18 h. Treatment with H. cordata extract had some effects on the secretion of IL-4, IL-8, CCL2, and TNF-α, but these were not statistically significant. H. cordata water extract modulates the expression of antimicrobial peptides and cytokines produced by VECs, which play an important role in the mucosal innate immunity of the female reproductive tract.
Our findings suggest that H. cordata may have immunomodulatory effects on the vaginal mucosa. Further studies should be performed in vivo to determine if it can enhance mucosal immune defenses against microbial pathogens.
An online analytical processing multi-dimensional data warehouse for malaria data
Madey, Gregory R; Vyushkov, Alexander; Raybaud, Benoit; Burkot, Thomas R; Collins, Frank H
2017-01-01
Abstract Malaria is a vector-borne disease that contributes substantially to the global burden of morbidity and mortality. The management of malaria-related data from heterogeneous, autonomous, and distributed data sources poses unique challenges and requirements. Although online data storage systems exist that address specific malaria-related issues, a globally integrated online resource to address different aspects of the disease does not exist. In this article, we describe the design, implementation, and applications of a multi-dimensional, online analytical processing data warehouse, named the VecNet Data Warehouse (VecNet-DW). It is the first online, globally-integrated platform that provides efficient search, retrieval and visualization of historical, predictive, and static malaria-related data, organized in data marts. Historical and static data are modelled using star schemas, while predictive data are modelled using a snowflake schema. The major goals, characteristics, and components of the DW are described along with its data taxonomy and ontology, the external data storage systems and the logical modelling and physical design phases. Results are presented as screenshots of a Dimensional Data browser, a Lookup Tables browser, and a Results Viewer interface. The power of the DW emerges from integrated querying of the different data marts and structuring those queries to the desired dimensions, enabling users to search, view, analyse, and store large volumes of aggregated data, and responding better to the increasing demands of users. Database URL https://dw.vecnet.org/datawarehouse/ PMID:29220463
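The star-schema organization described above (a central fact table joined to dimension tables, rolled up along the desired dimensions) can be sketched with an in-memory relational database. The table names, columns, and case counts below are invented for illustration; they are not the actual VecNet-DW schema or data.

```python
import sqlite3

# A minimal star schema: one fact table referencing two dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_location (location_id INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE dim_time     (time_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_cases   (location_id INTEGER, time_id INTEGER,
                           reported_cases INTEGER);
INSERT INTO dim_location VALUES (1, 'Kenya'), (2, 'Tanzania');
INSERT INTO dim_time     VALUES (1, 2015), (2, 2016);
INSERT INTO fact_cases   VALUES (1, 1, 120), (1, 2, 90), (2, 1, 200), (2, 2, 160);
""")

# An OLAP-style roll-up: aggregate the fact table along the location dimension.
rows = con.execute("""
SELECT l.country, SUM(f.reported_cases)
FROM fact_cases f JOIN dim_location l USING (location_id)
GROUP BY l.country ORDER BY l.country
""").fetchall()
print(rows)  # [('Kenya', 210), ('Tanzania', 360)]
```

A snowflake schema, used for the predictive data marts, would further normalize the dimension tables (e.g., splitting location into country and region tables).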
Visualization of Circuit Card EM Fields.
1998-03-31
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, X.; Adhikari, K. P.; Bosted, P.
We report measurements of target- and double-spin asymmetries for the exclusive channel $\vec e \vec p \to e \pi^+ (n)$ in the nucleon resonance region at Jefferson Lab using the CEBAF Large Acceptance Spectrometer (CLAS). These asymmetries were extracted from data obtained using a longitudinally polarized NH$_3$ target and a longitudinally polarized electron beam with energies 1.1, 1.3, 2.0, 2.3 and 3.0 GeV. The new results are consistent with previous CLAS publications but are extended to a low $Q^2$ range from $0.0065$ to $0.35$ (GeV$/c$)$^2$. The $Q^2$ access was made possible by a custom-built Cherenkov detector that allowed the detection of electrons for scattering angles as low as $6^\circ$. These results are compared with the unitary isobar models JANR and MAID, the partial-wave analysis prediction from SAID and the dynamic model DMT. In many kinematic regions our results, in particular results on the target asymmetry, help to constrain the polarization-dependent components of these models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dieterich, Sonja
2002-05-01
There has been a longstanding issue concerning possible nucleon modifications in a (dense) nuclear medium. Polarization transfer data for exclusive quasielastic electron scattering are sensitive to the ratio of the electric and magnetic nucleon form factors in the medium. Although proper interpretation of the results requires accounting for such effects as final state interactions and meson exchange currents, their effect on polarization transfer is predicted to be small. Studies of model dependencies, e.g., the off-shell current operator and spinor distortions, have been done. Final results of a measurement of polarization transfer in the $^4$He($\vec{e}$,e$'\vec{p}$)$^3$H reaction will be discussed. The experiments were carried out at MAMI, Mainz, at a $Q^2$ of 0.4 GeV$^2$ and at the Thomas Jefferson Lab, Newport News, Virginia, at the $Q^2$ values 0.5, 1.0, 1.6 and 2.6 GeV$^2$. Measured values of the transferred and induced polarization are compared with various theoretical calculations. The experiment showed a difference from the fully relativistic model, which may indicate medium modifications of the form factor.
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
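The three-stage decision described above (lexicality judgment against frequency norms, then a phonological-similarity check, then a semantic-similarity check over word vectors) can be sketched as follows. The lexicon, the tiny 3-d "word vectors", the string-similarity measure, and the thresholds are toy assumptions standing in for SUBTLEXus, word2vec, and the study's actual criteria.

```python
import difflib
import numpy as np

# Toy stand-ins (assumptions): a frequency lexicon and 3-d "word vectors".
LEXICON = {"cat", "dog", "hat", "sofa", "couch"}
VEC = {"cat": np.array([0.9, 0.1, 0.0]), "dog": np.array([0.8, 0.2, 0.1]),
       "hat": np.array([0.0, 0.9, 0.1]), "sofa": np.array([0.1, 0.0, 0.9]),
       "couch": np.array([0.2, 0.1, 0.9])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(target, response, phon_thresh=0.6, sem_thresh=0.8):
    """Classify a paraphasia: neologistic, formal, semantic, mixed, or unrelated."""
    if response not in LEXICON:                     # stage 1: lexicality judgment
        return "neologistic"
    # stage 2: phonological similarity (string ratio as a crude proxy)
    phon = difflib.SequenceMatcher(None, target, response).ratio() >= phon_thresh
    # stage 3: semantic similarity over the word vectors
    sem = cosine(VEC[target], VEC[response]) >= sem_thresh
    if phon and sem:
        return "mixed"
    if phon:
        return "formal"
    if sem:
        return "semantic"
    return "unrelated"

print(classify("cat", "blick"))   # neologistic: not in the lexicon
print(classify("sofa", "couch"))  # semantic: similar vectors, dissimilar form
print(classify("cat", "hat"))     # formal: similar form, dissimilar meaning
```

In the study itself, phonological relatedness was defined over phonemes rather than letters; the string ratio here is only a placeholder for that criterion.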
A Novel Recommendation System to Match College Events and Groups to Students
NASA Astrophysics Data System (ADS)
Qazanfari, K.; Youssef, A.; Keane, K.; Nelson, J.
2017-10-01
With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting from a user a few keywords and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users' keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like GloVe and word2vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website.
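The pipeline described above (weighted user keywords matched against tf-idf item vectors by cosine similarity) can be sketched in a few lines. The corpus, the smoothed idf formula, and the omission of the near-synonym expansion step are all illustrative assumptions, not the paper's modified tf-idf definition.

```python
import math
from collections import Counter

# Toy corpus of item descriptions (assumed data, not from the Agnes app).
ITEMS = {
    "robotics club": "build and program robots with sensors and motors",
    "poetry night": "read and share poems with fellow writers",
    "ml seminar": "talks on machine learning models and program analysis",
}

def tfidf_vectors(docs):
    """tf-idf vector for each document over the shared vocabulary."""
    tokenized = {k: v.split() for k, v in docs.items()}
    n = len(docs)
    df = Counter(t for toks in tokenized.values() for t in set(toks))
    return {k: {t: c / len(toks) * math.log((1 + n) / (1 + df[t]))
                for t, c in Counter(toks).items()}
            for k, toks in tokenized.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(keywords, items):
    """User model: weighted keywords (near-synonym expansion omitted here)."""
    vecs = tfidf_vectors(items)
    return max(vecs, key=lambda k: cosine(keywords, vecs[k]))

print(recommend({"robots": 1.0, "sensors": 0.8}, ITEMS))  # robotics club
```

In the full system, each keyword would first be expanded into weighted near-synonyms, so that an item mentioning "droids" could still match a "robots" keyword.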
On the Alloying and Properties of Tetragonal Nb₅Si₃ in Nb-Silicide Based Alloys.
Tsakiropoulos, Panos
2018-01-04
The alloying of Nb₅Si₃ modifies its properties. Actual compositions of (Nb,TM)₅X₃ silicides in developmental alloys, where X = Al + B + Ge + Si + Sn and TM is a transition and/or refractory metal, were used to calculate the composition weighted differences in electronegativity (Δχ) and an average valence electron concentration (VEC) and the solubility range of X to study the alloying and properties of the silicide. The calculations gave 4.11 < VEC < 4.45, 0.103 < Δχ < 0.415 and 33.6 < X < 41.6 at.%. In the silicide in Nb-24Ti-18Si-5Al-5Cr alloys with single addition of 5 at.% B, Ge, Hf, Mo, Sn and Ta, the solubility range of X decreased compared with the unalloyed Nb₅Si₃ or exceeded 40.5 at.% when B was with Hf or Mo or Sn and the Δχ decreased with increasing X. The Ge concentration increased with increasing Ti and the Hf concentration increased and decreased with increasing Ti or Nb respectively. The B and Sn concentrations respectively decreased and increased with increasing Ti and also depended on other additions in the silicide. The concentration of Sn was related to VEC and the concentrations of B and Ge were related to Δχ. The alloying of Nb₅Si₃ was demonstrated in Δχ versus VEC maps. Effects of alloying on the coefficient of thermal expansion (CTE) anisotropy, Young's modulus, hardness and creep data were discussed. Compared with the hardness of binary Nb₅Si₃ (1360 HV), the hardness increased in silicides with Ge and dropped below 1360 HV when Al, B and Sn were present without Ge. The Al effect on hardness depended on other elements substituting Si. Sn reduced the hardness. Ti or Hf reduced the hardness more than Cr in Nb₅Si₃ without Ge. The (Nb,Hf)₅(Si,Al)₃ had the lowest hardness. VEC differentiated the effects of additions on the hardness of Nb₅Si₃ alloyed with Ge. Deterioration of the creep of alloyed Nb₅Si₃ was accompanied by decrease of VEC and increase or decrease of Δχ depending on alloying addition(s).
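The composition-weighted parameters used above can be computed from standard element data. A minimal sketch, assuming VEC is the concentration-weighted mean of per-element valence electron counts and Δχ is the concentration-weighted RMS deviation from the mean Pauling electronegativity (a common alloy-design convention; the paper's exact definitions may differ in detail).

```python
import math

# Standard tabulated values: Pauling electronegativities and valence
# electron counts for a few of the elements discussed in the text.
CHI = {"Nb": 1.60, "Si": 1.90, "Ti": 1.54, "Al": 1.61, "Cr": 1.66}
VEC_E = {"Nb": 5, "Si": 4, "Ti": 4, "Al": 3, "Cr": 6}

def mean_vec(comp):
    """Concentration-weighted valence electron concentration."""
    return sum(c * VEC_E[e] for e, c in comp.items())

def delta_chi(comp):
    """Concentration-weighted RMS electronegativity difference
    (one common definition; assumed here, not taken from the paper)."""
    chi_bar = sum(c * CHI[e] for e, c in comp.items())
    return math.sqrt(sum(c * (CHI[e] - chi_bar) ** 2 for e, c in comp.items()))

# Unalloyed Nb5Si3: atomic fractions 5/8 Nb and 3/8 Si.
comp = {"Nb": 5 / 8, "Si": 3 / 8}
print(mean_vec(comp), delta_chi(comp))
```

Substituting Ti for Nb or Al for Si in `comp` lowers the computed VEC, consistent with the alloyed silicides falling below the binary value.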
Wheel speed management control system for spacecraft
NASA Technical Reports Server (NTRS)
Goodzeit, Neil E. (Inventor); Linder, David M. (Inventor)
1991-01-01
A spacecraft attitude control system uses at least four reaction wheels. In order to minimize reaction wheel speed and therefore power, a wheel speed management system is provided. The management system monitors the wheel speeds and generates a wheel speed error vector. The error vector is integrated, and the error vector and its integral are combined to form a correction vector. The correction vector is summed with the attitude control torque command signals for driving the reaction wheels.
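The loop described above (monitor wheel speeds, form a speed error vector, integrate it, combine error and integral into a correction summed with the torque commands) can be sketched as a simple proportional-integral update. The gains, control period, and example wheel speeds are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Sketch of the wheel-speed management loop: form a wheel speed error vector,
# integrate it, and combine error and integral into a correction vector that
# is summed with the attitude-control torque commands. Gains are assumptions.

N_WHEELS = 4
KP, KI = 0.1, 0.01          # proportional / integral gains (assumed)
DT = 0.1                    # control period, seconds (assumed)

integral = np.zeros(N_WHEELS)

def wheel_speed_correction(wheel_speeds, target_speeds):
    """One update of the management loop: returns the torque correction
    vector to be summed with the attitude-control torque commands."""
    global integral
    error = target_speeds - wheel_speeds          # wheel speed error vector
    integral += error * DT                        # integral of the error
    return KP * error + KI * integral             # combined correction vector

speeds = np.array([120.0, -80.0, 95.0, -100.0])   # rad/s, example readings
targets = np.zeros(N_WHEELS)                      # bias wheels toward zero speed
correction = wheel_speed_correction(speeds, targets)
print(correction)
```

Biasing the targets toward zero speed mirrors the patent's goal of minimizing wheel speed and therefore power.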
FBST for Cointegration Problems
NASA Astrophysics Data System (ADS)
Diniz, M.; Pereira, C. A. B.; Stern, J. M.
2008-11-01
In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To avoid it, one can work with differenced series or use multivariate models like VAR or VEC models. In the latter case, the analysed series present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature on inference for VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated data sets. A standard non-informative prior is assumed.
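The notion of cointegration underlying the rank-selection problem can be illustrated with a tiny simulation: two I(1) series sharing a common random-walk trend have a linear combination that is stationary, and the cointegrating coefficient can be recovered by ordinary least squares. This sketch illustrates the concept only, not the FBST procedure itself; the noise scales and true coefficient are arbitrary assumptions.

```python
import numpy as np

# Two cointegrated series: each is a random walk (nonstationary), but
# y - 2x is stationary because both share the same stochastic trend.

rng = np.random.default_rng(0)
T = 500
trend = np.cumsum(rng.normal(size=T))                # common random-walk trend
x = trend + rng.normal(scale=0.5, size=T)            # I(1) series
y = 2.0 * trend + rng.normal(scale=0.5, size=T)      # cointegrated with x

# OLS of y on x estimates the cointegrating coefficient (true value 2.0);
# such estimates are superconsistent under cointegration.
beta = np.sum(x * y) / np.sum(x * x)
residual = y - beta * x                              # approximately stationary

print(round(float(beta), 2))
print(float(residual.std()), float(y.std()))
```

The residual's spread is far below that of y itself, which is the visible signature of the long-run relation.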
Exact solutions to force-free electrodynamics in black hole backgrounds
NASA Astrophysics Data System (ADS)
Brennan, T. Daniel; Gralla, Samuel E.; Jacobson, Ted
2013-10-01
A shared property of several of the known exact solutions to the equations of force-free electrodynamics is that their charge-current four-vector is null. We examine the general properties of null-current solutions and then focus on the principal congruences of the Kerr black hole spacetime. We obtain a large class of exact solutions, which are in general time-dependent and non-axisymmetric. These solutions include waves that, surprisingly, propagate without scattering on the curvature of the black hole’s background. They may be understood as generalizations to Robinson’s solutions to vacuum electrodynamics associated with a shear-free congruence of null geodesics. When stationary and axisymmetric, our solutions reduce to those of Menon and Dermer, the only previously known solutions in Kerr. In Kerr, all of our solutions have null electromagnetic fields (\\vec{E} \\cdot \\vec{B} = 0 and E2 = B2). However, in Schwarzschild or flat spacetime there is freedom to add a magnetic monopole field, making the solutions magnetically dominated (B2 > E2). This freedom may be used to reproduce the various flat-spacetime and Schwarzschild-spacetime (split) monopole solutions available in the literature (due to Michel and later authors), and to obtain a large class of time-dependent, non-axisymmetric generalizations. These generalizations may be used to model the magnetosphere of a conducting star that rotates with arbitrary prescribed time-dependent rotation axis and speed. We thus significantly enlarge the class of known exact solutions, while organizing and unifying previously discovered solutions in terms of their null structure.
The Involvement of Lipid Peroxide-Derived Aldehydes in Aluminum Toxicity of Tobacco Roots
Yin, Lina; Mano, Jun'ichi; Wang, Shiwen; Tsuji, Wataru; Tanaka, Kiyoshi
2010-01-01
Oxidative injury of the root elongation zone is a primary event in aluminum (Al) toxicity in plants, but the injuring species remain unidentified. We verified the hypothesis that lipid peroxide-derived aldehydes, especially highly electrophilic α,β-unsaturated aldehydes (2-alkenals), participate in Al toxicity. Transgenic tobacco (Nicotiana tabacum) overexpressing Arabidopsis (Arabidopsis thaliana) 2-alkenal reductase (AER-OE plants), wild-type SR1, and an empty vector-transformed control line (SR-Vec) were exposed to AlCl3 on their roots. Compared with the two controls, AER-OE plants suffered less retardation of root elongation under AlCl3 treatment and showed more rapid regrowth of roots upon Al removal. Under AlCl3 treatment, the roots of AER-OE plants accumulated Al and H2O2 to the same levels as did the sensitive controls, while they accumulated lower levels of aldehydes and suffered less cell death than SR1 and SR-Vec roots. In SR1 roots, AlCl3 treatment markedly increased the contents of the highly reactive 2-alkenals acrolein, 4-hydroxy-(E)-2-hexenal, and 4-hydroxy-(E)-2-nonenal and other aldehydes such as malondialdehyde and formaldehyde. In AER-OE roots, accumulation of these aldehydes was significantly less. Growth of the roots exposed to 4-hydroxy-(E)-2-nonenal and (E)-2-hexenal was retarded more in SR1 than in AER-OE plants. Thus, the lipid peroxide-derived aldehydes, formed downstream of reactive oxygen species, injured root cells directly. Their suppression by AER provides a new defense mechanism against Al toxicity. PMID:20023145
Virtual Evidence Cart - RP (VEC-RP).
Liu, Fang; Fontelo, Paul; Muin, Michael; Ackerman, Michael
2005-01-01
VEC-RP (Virtual Evidence Cart) is an open, Web-based, searchable collection of clinical questions and relevant references from MEDLINE/PubMed for healthcare professionals. The architecture consists of four parts: clinical questions, relevant articles from MEDLINE/PubMed, "bottom-line" answers, and peer reviews of entries. Only registered users can add reviews, but unregistered users can read them. Feedback from physicians who tested the system, mostly in the Philippines (RP), was positive.
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potentials δa = e δA… it then follows that the correlation of the normalized vector potential errors is given by ⟨δa_x(z₁) δa_x(z₂)⟩ = … ∫ dz′ ∫ dz″ ⟨δB_x(z′) δB_x(z″)⟩ … Throughout the following, terms of order O(z_c/z) will be neglected. Similarly, for the y-component of the normalized vector potential errors, one…
Commensal Bacteria Modulate Innate Immune Responses of Vaginal Epithelial Cell Multilayer Cultures
Rose, William A.; McGowin, Chris L.; Spagnuolo, Rae Ann; Eaves-Pyles, Tonyia D.; Popov, Vsevolod L.; Pyles, Richard B.
2012-01-01
The human vaginal microbiome plays a critical but poorly defined role in reproductive health. Vaginal microbiome alterations are associated with increased susceptibility to sexually-transmitted infections (STI) possibly due to related changes in innate defense responses from epithelial cells. Study of the impact of commensal bacteria on the vaginal mucosal surface has been hindered by current vaginal epithelial cell (VEC) culture systems that lack an appropriate interface between the apical surface of stratified squamous epithelium and the air-filled vaginal lumen. Therefore we developed a reproducible multilayer VEC culture system with an apical (luminal) air-interface that supported colonization with selected commensal bacteria. Multilayer VEC developed tight-junctions and other hallmarks of the vaginal mucosa including predictable proinflammatory cytokine secretion following TLR stimulation. Colonization of multilayers by common vaginal commensals including Lactobacillus crispatus, L. jensenii, and L. rhamnosus led to intimate associations with the VEC exclusively on the apical surface. Vaginal commensals did not trigger cytokine secretion but Staphylococcus epidermidis, a skin commensal, was inflammatory. Lactobacilli reduced cytokine secretion in an isolate-specific fashion following TLR stimulation. This tempering of inflammation offers a potential explanation for increased susceptibility to STI in the absence of common commensals and has implications for testing of potential STI preventatives. PMID:22412914
node2vec: Scalable Feature Learning for Networks
Grover, Aditya; Leskovec, Jure
2016-01-01
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
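The biased random walk at the heart of node2vec can be sketched in a few lines: a second-order walk whose transition weights depend on the return parameter p (bias toward revisiting the previous node) and the in-out parameter q (bias toward outward exploration). The tiny graph and parameter values below are illustrative assumptions.

```python
import random

# Minimal sketch of node2vec's second-order biased random walk.
# Unnormalized weight for candidate next node: 1/p if it is the previous
# node (distance 0), 1 if it is also a neighbor of the previous node
# (distance 1), and 1/q otherwise (distance 2).

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}  # toy adjacency lists

def node2vec_walk(start, length, p=1.0, q=1.0, seed=42):
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))     # first step: uniform
            continue
        prev = walk[-2]
        weights = []
        for nxt in nbrs:
            if nxt == prev:
                weights.append(1.0 / p)       # return to previous node
            elif nxt in graph[prev]:
                weights.append(1.0)           # stays close (distance 1)
            else:
                weights.append(1.0 / q)       # moves outward (distance 2)
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

walk = node2vec_walk(0, 10, p=0.5, q=2.0)
print(walk)
```

In the full method, many such walks are generated per node and fed to a skip-gram-style model to learn the embeddings; small p makes walks more BFS-like, small q more DFS-like.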
Superrotation charge and supertranslation hair on black holes
Hawking, Stephen W.; Perry, Malcolm J.; Strominger, Andrew
2017-05-31
It is shown that black hole spacetimes in classical Einstein gravity are characterized by, in addition to their ADM mass M, momentum $$\\vec{P}$$, angular momentum $$\\vec{J}$$ and boost charge $$\\vec{K}$$, an infinite head of supertranslation hair. Furthermore, the distinct black holes are distinguished by classical superrotation charges measured at infinity. Solutions with supertranslation hair are diffeomorphic to the Schwarzschild spacetime, but the diffeomorphisms are part of the BMS subgroup and act nontrivially on the physical phase space. It is shown that a black hole can be supertranslated by throwing in an asymmetric shock wave. We derive a leading-order Bondi-gauge expression for the linearized horizon supertranslation charge, which is shown to generate, via the Dirac bracket, supertranslations on the linearized phase space of gravitational excitations of the horizon. The considerations of this paper are largely classical, augmented by comments on their implications for the quantum theory.
Testing neoclassical and turbulent effects on poloidal rotation in the core of DIII-D
Chrystal, Colin; Burrell, Keith H.; Grierson, Brian A.; ...
2014-07-09
Experimental tests of ion poloidal rotation theories have been performed on DIII-D using a novel impurity poloidal rotation diagnostic. These tests show significant disagreements with theoretical predictions in various conditions, including L-mode plasmas with internal transport barriers (ITB), H-mode plasmas, and QH-mode plasmas. The theories tested include standard neoclassical theory, turbulence driven Reynolds stress, and fast-ion friction on the thermal ions. Poloidal rotation is observed to spin up at the formation of an ITB and makes a significant contribution to the measurement of the $$\\vec{E}$$ × $$\\vec{B}$$ shear that forms the ITB. In ITB cases, neoclassical theory agrees quantitatively with the experimental measurements only in the steep gradient region. Significant quantitative disagreement with neoclassical predictions is seen in the cores of ITB, QH-, and H-mode plasmas, demonstrating that neoclassical theory is an incomplete description of poloidal rotation. The addition of turbulence driven Reynolds stress does not remedy this disagreement; linear stability calculations and Doppler backscattering measurements show that disagreement increases as turbulence levels decline. Furthermore, the effect of fast-ion friction, by itself, does not lead to improved agreement; in QH-mode plasmas, neoclassical predictions are closest to experimental results in plasmas with the largest fast ion friction. Finally, predictions from a new model that combines all three effects show somewhat better agreement in the H-mode case, but discrepancies well outside the experimental error bars remain.
Evolution of aerosol vertical distribution during particulate pollution events in Shanghai
NASA Astrophysics Data System (ADS)
Zhang, Yunwei; Zhang, Qun; Leng, Chunpeng; Zhang, Deqin; Cheng, Tiantao; Tao, Jun; Zhang, Renjian; He, Qianshan
2015-06-01
A set of micro pulse lidar (MPL) systems operating at 532 nm was used for ground-based observation of aerosols in Shanghai in 2011. Three typical particulate pollution events (e.g., haze) were examined to determine the evolution of aerosol vertical distribution and the planetary boundary layer (PBL) during these pollution episodes. The aerosol vertical extinction coefficient (VEC) at any given measured altitude was prominently larger during haze periods than that before or after the associated event. Aerosols originating from various source regions exerted forcing to some extent on aerosol loading and vertical layering, leading to different aerosol vertical distribution structures. Aerosol VECs were always maximized near the surface owing to the potential influence of local pollutant emissions. Several peaks in aerosol VECs were found at altitudes above 1 km during the dust- and bioburning-influenced haze events. Aerosol VECs decreased with increasing altitude during the local-polluted haze event, with a single maximum in the surface atmosphere. PM2.5 increased slowly while PBL and visibility decreased gradually in the early stages of haze events; subsequently, PM2.5 accumulated and was exacerbated until serious pollution bursts occurred in the middle and later stages. The results reveal that aerosols from different sources impact aerosol vertical distributions in the atmosphere and that the relationship between PBL and pollutant loadings may play an important role in the formation of pollution.
Dai, Hanjun; Umarov, Ramzan; Kuwahara, Hiroyuki; Li, Yu; Song, Le; Gao, Xin
2017-11-15
An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these hidden Markov models into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA datasets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods. Our program is freely available at https://github.com/ramzan1990/sequence2vec. xin.gao@kaust.edu.sa or lsong@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Gui-bo; Qin, Meng; Ye, Jing-xue
Atherosclerosis (AS) is a state of heightened oxidative stress characterized by lipid and protein oxidation in vascular walls. Oxidative stress-induced vascular endothelial cell (VEC) injury is a major factor in the pathogenesis of AS. Myricitrin, a natural flavonoid isolated from the root bark of Myrica cerifera, was recently found to have a strong antioxidative effect. However, its use for treating cardiovascular diseases, especially AS, is still unreported. Consequently, we evaluated the cytoprotective effect of myricitrin on AS by assessing oxidative stress-induced VEC damage. The in vivo study using an ApoE−/− mouse model of AS demonstrated that myricitrin treatment protects against VEC damage and inhibits early AS plaque formation. This effect is associated with the antioxidative effect of myricitrin, as observed in a hydrogen peroxide (H2O2)-induced rat model of artery endothelial injury and primary cultured human VECs. Myricitrin treatment also prevents and attenuates H2O2-induced endothelial injury. Further investigation of the cytoprotective effects of myricitrin demonstrated that myricitrin exerts its function by scavenging reactive oxygen species, as well as reducing lipid peroxidation, blocking NO release, and maintaining mitochondrial transmembrane potential. Myricitrin treatment also significantly decreased H2O2-induced apoptosis in VECs, which was associated with significant inhibition of p53 gene expression, activation of caspase-3 and the MAPK signaling pathway, and alteration of the patterns of pro-apoptotic and anti-apoptotic gene expression. The resulting significantly increased bcl-2/bax ratio indicates that myricitrin may prevent the apoptosis induced by oxidative stress injury. - Highlights: • Myricitrin prevents early atherosclerosis in ApoE−/− mice. • Myricitrin protects endothelial cells from H2O2-induced injury in rats and HUVECs. • Myricitrin enhances NO release and upregulates eNOS activity in HUVECs. • Myricitrin downregulates p53 expression and MAPK phosphorylation in HUVECs.
Measurement of the Neutron Electric Form Factor GEn
NASA Astrophysics Data System (ADS)
McCormick, Kathy
2003-01-01
Experiment E02-013 at Thomas Jefferson National Accelerator Facility (Jefferson Lab) will measure the neutron electric form factor GEn at the high four-momentum transfer values of Q2 ≈ 1.3, 2.4 and 3.4 (GeV/c)2 via a measurement of the cross section asymmetry AT in the reaction $^{3}\vec{\text{He}}(\vec{e}, e'n)pp$. This measurement was approved for 32 days of running by Jefferson Lab PAC 21 in January 2002.
Lorenz, Dominic; Knöpfle, Anna; Akil, Youssef; Saake, Bodo
2017-11-01
The chemical structures obtained by the modification of arabinoxylans with the cyclic carbonates propylene carbonate (PC) and 4-vinyl-1,3-dioxolan-2-one (VEC) with varying degrees of substitution were investigated. To this end, a new analytical method was developed that is based on microwave-assisted hydrolysis of the polysaccharides with trifluoroacetic acid and reductive amination with 2-aminobenzoic acid. The peak assignment was achieved by HPLC-MS and the carbohydrate derivatives were quantified by HPLC-fluorescence. The maximum molar substitution obtained for PC-derivatized xylan (XHP) was 1.8; the molar substitution of VEC-derivatized xylan (XHVE) was 2.3. Investigations of xylose- and arabinose-based mono- and disubstituted derivatives revealed a preferred reaction of the cyclic carbonates with arabinose. Conversion rates were up to 2.4 times higher for monosubstitution and up to 3.0 times higher for disubstitution compared with xylose. Furthermore, the reaction with VEC was preferred due to the higher reactivity of the newly introduced side chains. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Michelet-Habchi, C.; Barberet, Ph.; Dutta, R. K.; Guiet-Bara, A.; Bara, M.; Moretto, Ph.
2003-09-01
Regulation of vascular tone in the fetal extracorporeal circulation most likely depends on circulating hormones, local paracrine mechanisms and changes in membrane potential of vascular smooth muscle cells (VSMCs) and of vascular endothelial cells (VECs). The membrane potential is a function of the physiological activities of ionic channels (particularly K+ and Ca2+ channels in these cells). These channels regulate the ionic distribution in these cells. Micro-particle induced X-ray emission (PIXE) analysis was applied to determine the ionic composition of VSMCs and of VECs in the placental human allantochorial vessels in a physiological survival medium (Hanks' solution) modified by the addition of acetylcholine (ACh, which opens the calcium-sensitive K+ channels, KCa) and of a high concentration of K+ (which blocks the voltage-sensitive K+ channels, Kdf). In VSMCs (media layer), the addition of ACh induced no modification of the Na, K, Cl, P, S, Mg and Ca concentrations, whereas the high-K+ medium significantly increased the Cl and K concentrations, the other ion concentrations remaining constant. In the endothelium (VECs), ACh addition induced a significant increase in Na and K concentrations, and the high-K+ medium a significant increase in Cl and K concentrations. These results indicate the importance of the Kdf, KCa and KATP channels in the regulation of the intracellular K+ distribution in VSMCs and VECs and the possible intervention of a Na-K-2Cl cotransport, and corroborate the previous electrophysiological data.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
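The scheme above (feed the output error back into the output neurons, anneal the feedback as the overall error falls) can be sketched with a one-parameter recurrent neuron. The dynamics, gains, annealing rule, and the known "true" parameter used to generate the target trajectory are all illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

# Teacher-forcing sketch for a recurrent neuron y[t+1] = tanh(w * y[t]):
# during each training cycle the error (target minus output) is fed back
# into the output, and the feedback strength lam shrinks with the error.

def train(targets, steps=300, lr=0.2):
    w, lam = 0.0, 1.0
    for _ in range(steps):
        y = targets[0]                      # initial output
        total_err, grad = 0.0, 0.0
        for t in range(1, len(targets)):
            y_next = np.tanh(w * y)
            err = targets[t] - y_next
            total_err += err ** 2
            # gradient of the squared error, treating the fed-back output
            # as given (which is what teacher forcing justifies)
            grad += -2.0 * err * (1.0 - y_next ** 2) * y
            # corrective feedback: push the fed-back output toward target
            y = y_next + lam * err
        w -= lr * grad / (len(targets) - 1)
        # anneal forcing as the overall error shrinks (floor keeps a little)
        lam = max(0.1, min(1.0, total_err))
    return float(w)

# Target trajectory generated by a known parameter w_true = 0.8 (assumed).
w_true = 0.8
targets = [0.9]
for _ in range(10):
    targets.append(np.tanh(w_true * targets[-1]))

w_hat = train(np.array(targets))
print(round(w_hat, 2))
```

With full forcing the fed-back output equals the target, so each cycle fits clean one-step predictions; as the error shrinks, the network runs increasingly on its own output, as the patent describes.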
Alloying and Hardness of Eutectics with Nbss and Nb5Si3 in Nb-silicide Based Alloys
Tsakiropoulos, Panos
2018-01-01
In Nb-silicide based alloys, eutectics can form that contain the Nbss and Nb5Si3 phases. The Nb5Si3 can be rich or poor in Ti, the Nb can be substituted with other transition and refractory metals, and the Si can be substituted with simple metal and metalloid elements. For the production of directionally solidified in situ composites of multi-element Nb-silicide based alloys, data about eutectics with Nbss and Nb5Si3 is essential. In this paper, the alloying behaviour of eutectics observed in Nb-silicide based alloys was studied using the parameters ΔHmix, ΔSmix, VEC (valence electron concentration), δ (related to atomic size), Δχ (related to electronegativity), and Ω (= Tm ΔSmix/|ΔHmix|). The values of these parameters were in the ranges −41.9 < ΔHmix <−25.5 kJ/mol, 4.7 < ΔSmix < 15 J/molK, 4.33 < VEC < 4.89, 6.23 < δ < 9.44, 0.38 < Ω < 1.35, and 0.118 < Δχ < 0.248, with a gap in Δχ values between 0.164 and 0.181. Correlations between ΔSmix, Ω, ΔSmix, and VEC were found for all of the eutectics. The correlation between ΔHmix and δ for the eutectics was the same as that of the Nbss, with more negative ΔHmix for the former. The δ versus Δχ map separated the Ti-rich eutectics from the Ti-poor eutectics, with a gap in Δχ values between 0.164 and 0.181, which is within the Δχ gap of the Nbss. Eutectics were separated according to alloying additions in the Δχ versus VEC, Δχ versus
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the adaptive filter reaches the steady state, the update procedure is no longer performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
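For context, the baseline affine projection update that the selective-update scheme builds on can be sketched as follows: each iteration reuses the K most recent input vectors and projects the error back through them. This is the standard APA for a system-identification setup, without the paper's selection or state-decision logic; filter length, projection order, step size, and regularization are illustrative assumptions.

```python
import numpy as np

# Baseline APA sketch: identify an unknown FIR system w_true from its
# noise-free output, updating with the K most recent input vectors.

rng = np.random.default_rng(1)
L, K, mu, delta = 8, 4, 0.5, 1e-6      # taps, projection order, step, reg.
w_true = rng.normal(size=L)            # unknown system to identify
w = np.zeros(L)

x = np.zeros(L)                        # input delay line
X = np.zeros((L, K))                   # last K input vectors (columns)
d = np.zeros(K)                        # corresponding desired outputs

for n in range(2000):
    x = np.roll(x, 1)
    x[0] = rng.normal()                # new input sample
    X = np.roll(X, 1, axis=1)
    X[:, 0] = x
    d = np.roll(d, 1)
    d[0] = w_true @ x                  # noise-free desired signal
    e = d - X.T @ w                    # a priori error vector
    # APA update: project the error back through the K input vectors
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)

print(round(float(np.linalg.norm(w - w_true)), 4))
```

The selective-update idea in the paper would wrap this loop: choose how many of the K columns to use based on the MSE, and skip the update entirely once the filter reaches steady state.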
Kaowinn, Sirichat; Jun, Seung Won; Kim, Chang Seok; Shin, Dong-Myeong; Hwang, Yoon-Hwae; Kim, Kyujung; Shin, Bosung; Kaewpiboon, Chutima; Jeong, Hyeon Hee; Koh, Sang Seok; Krämer, Oliver H; Johnston, Randal N; Chung, Young-Hwa
2017-12-01
Previously, it has been found that the cancer upregulated gene 2 (CUG2) and the epidermal growth factor receptor (EGFR) both contribute to drug resistance of cancer cells. Here, we explored whether CUG2 may confer anticancer drug resistance by increasing the expression of EGFR. EGFR expression was assessed using Western blotting, immunofluorescence and capacitance assays in A549 lung cancer and immortalized bronchial BEAS-2B cells, respectively, stably transfected with a CUG2 expression vector (A549-CUG2; BEAS-CUG2) or an empty control vector (A549-Vec; BEAS-Vec). After siRNA-mediated EGFR, Stat1 and HDAC4 silencing, antioxidant and multidrug resistance protein and mRNA levels were assessed using Western blotting and RT-PCR. In addition, the respective cells were treated with doxorubicin, after which apoptosis and reactive oxygen species (ROS) levels were measured. Stat1 acetylation was assessed by immunoprecipitation. We found that exogenous CUG2 overexpression induced EGFR upregulation in A549 and BEAS-2B cells, whereas EGFR silencing sensitized these cells to doxorubicin-induced apoptosis. In addition, we found that exogenous CUG2 overexpression reduced the formation of ROS during doxorubicin treatment by enhancing the expression of antioxidant and multidrug resistance proteins such as MnSOD, Foxo1, Foxo4, MRP2 and BCRP, whereas EGFR silencing congruently increased the levels of ROS by decreasing the expression of these proteins. We also found that EGFR silencing and its concomitant Akt, ERK, JNK and p38 MAPK inhibition resulted in a decreased Stat1 phosphorylation and, thus, a decreased activation. Since acetylation can also affect Stat1 activation via a phospho-acetyl switch, HDAC inhibition may sensitize cells to doxorubicin-induced apoptosis. Interestingly, we found that exogenous CUG2 overexpression upregulated HDAC4, but not HDAC2 or HDAC3.
Conversely, we found that HDAC4 silencing sensitized the cells to doxorubicin by decreasing Stat1 phosphorylation and EGFR expression, thus indicating an interplay between HDAC4, Stat1 and EGFR. Taken together, we conclude that CUG2-induced EGFR upregulation confers doxorubicin resistance to lung (cancer) cells through Stat1-HDAC4 signaling.
Efficient Parallel Formulations of Hierarchical Methods and Their Applications
NASA Astrophysics Data System (ADS)
Grama, Ananth Y.
1996-01-01
Hierarchical methods such as the Fast Multipole Method (FMM) and Barnes-Hut (BH) are used for rapid evaluation of potential (gravitational, electrostatic) fields in particle systems. They are also used for solving integral equations using boundary element methods. The linear systems arising from these methods are dense and are solved iteratively. Hierarchical methods reduce the complexity of the core matrix-vector product from O(n^2) to O(n log n) and the memory requirement from O(n^2) to O(n). We have developed highly scalable parallel formulations of a hybrid FMM/BH method that are capable of handling arbitrarily irregular distributions. We apply these formulations to astrophysical simulations of Plummer and Gaussian galaxies. We have used our parallel formulations to solve the integral form of the Laplace equation. We show that our parallel hierarchical mat-vecs yield high efficiency and overall performance even on relatively small problems. A problem containing approximately 200K nodes takes under a second to compute on 256 processors and yet yields over 85% efficiency. The efficiency and raw performance are expected to increase for bigger problems. For the 200K node problem, our code delivers about 5 GFLOPS of performance on a 256 processor T3D. This is impressive considering the fact that the problem has floating point divides and roots, and very little locality resulting in poor cache performance. A dense matrix-vector product of the same dimensions would require about 0.5 TeraBytes of memory and about 770 TeraFLOPS of computing speed. Clearly, if the loss in accuracy resulting from the use of hierarchical methods is acceptable, our code yields significant savings in time and memory. We also study the convergence of a GMRES solver built around this mat-vec. We accelerate the convergence of the solver using three preconditioning techniques: diagonal scaling, block-diagonal preconditioning, and inner-outer preconditioning.
We study the performance and parallel efficiency of these preconditioned solvers. Using this solver, we solve dense linear systems with hundreds of thousands of unknowns. Solving a 105K unknown problem takes about 10 minutes on a 64 processor T3D. Until very recently, boundary element problems of this magnitude could not even be generated, let alone solved.
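The O(n log n) hierarchical matrix-vector idea can be sketched with a one-dimensional Barnes-Hut-style tree code using a monopole far-field approximation; the geometry, opening criterion and accuracy threshold below are illustrative assumptions, not the paper's hybrid FMM/BH formulation.

```python
import numpy as np

class Node:
    __slots__ = ("i0", "i1", "xmin", "xmax", "mass", "com", "left", "right")

def build(xs, ms, i0, i1, leaf=8):
    """Binary tree over the sorted source positions xs[i0:i1]."""
    n = Node()
    n.i0, n.i1 = i0, i1
    n.xmin, n.xmax = xs[i0], xs[i1 - 1]
    n.mass = ms[i0:i1].sum()
    n.com = (xs[i0:i1] * ms[i0:i1]).sum() / n.mass     # center of mass
    if i1 - i0 <= leaf:
        n.left = n.right = None
    else:
        mid = (i0 + i1) // 2
        n.left, n.right = build(xs, ms, i0, mid, leaf), build(xs, ms, mid, i1, leaf)
    return n

def potential(n, x, xs, ms, theta=0.25):
    """Barnes-Hut estimate of sum_j m_j / |x - x_j| with a monopole far field."""
    if n.left is None:                                  # leaf: direct summation
        return (ms[n.i0:n.i1] / np.abs(x - xs[n.i0:n.i1])).sum()
    size, dist = n.xmax - n.xmin, abs(x - n.com)
    if size < theta * dist:                             # well separated: O(1) term
        return n.mass / dist
    return potential(n.left, x, xs, ms, theta) + potential(n.right, x, xs, ms, theta)

rng = np.random.default_rng(1)
xs = np.sort(rng.uniform(0.0, 1.0, 2000))
ms = rng.uniform(0.5, 1.5, 2000)
root = build(xs, ms, 0, len(xs))
targets = rng.uniform(1.1, 2.0, 50)                     # targets kept off the sources
approx = np.array([potential(root, t, xs, ms) for t in targets])
exact = np.array([(ms / np.abs(t - xs)).sum() for t in targets])
```

Each target touches only O(log n) tree cells instead of all n sources, which is the complexity reduction the abstract refers to; production codes replace the monopole term with higher-order multipole expansions to control the approximation error.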
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that, during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
Zhu, Yongjun; Yan, Erjia; Wang, Fei
2017-07-03
Understanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec's ability in deriving semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec. We download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subsets. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. Performance of models trained on different subsets is compared to examine recency, size, and section effects. Models trained on recent datasets did not boost the performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in the relatedness task (from 368 at the 10% level to 494 at the 100% level) and the similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 503 in the relatedness task). Increasing the size of the dataset does not always enhance the performance. Increasing the size of datasets can result in the identification of more relations of biomedical terms even though it does not guarantee better precision.
As summaries of research articles, compared with article bodies, abstracts excel in accuracy but lose in coverage of identifiable relations.
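The core scoring step, cosine similarity between embedding vectors, can be sketched as follows; the tiny hand-made vectors stand in for trained word2vec embeddings and are purely hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, the relatedness score computed between term vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d embeddings standing in for trained word2vec vectors
emb = {
    "aspirin":   np.array([0.9, 0.1, 0.0, 0.2]),
    "ibuprofen": np.array([0.8, 0.2, 0.1, 0.3]),
    "femur":     np.array([0.0, 0.9, 0.8, 0.1]),
}
s_related = cosine(emb["aspirin"], emb["ibuprofen"])   # related pair scores high
s_unrelated = cosine(emb["aspirin"], emb["femur"])     # unrelated pair scores low
```

In the study, scores like these, computed from models trained on different PubMed/PMC subsets, are correlated against expert-rated reference standards to judge each subset's quality.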
Li, Zhan-Chun; Tang, Lu-Min; Shao, Jiang; Li, He
2017-01-01
In this study, to investigate the effects of naringin on vascular endothelial cell (VEC) function, proliferation, apoptosis, and angiogenesis, rat VECs were cultured in vitro and randomly divided into four groups: control, serum-starved, low-concentration naringin treatment, and high-concentration naringin treatment. MTT assay was used to detect cell proliferation while Hoechst 33258 staining and flow cytometry were used to detect apoptosis. Changes in the expression of apoptosis-associated proteins [GRP78, CHOP, caspase-12, and cytochrome c (Cyt.c)] were detected using western blotting. JC-1 staining was employed to detect changes in mitochondrial membrane potential. Intracellular caspase-3, -8, and -9 activity was determined by spectrophotometry. ELISA was used to detect endothelin (ET), and a Griess assay was used to detect changes in the expression of nitric oxide (NO) in culture medium. The study further divided an ovariectomized (OVX) rat model of osteoporosis randomly into four groups: OVX, sham-operated, low-concentration naringin treatment (100 mg/kg), and high-concentration naringin treatment (200 mg/kg). After 3 months of treatment, changes in serum ET and NO expression, bone mineral density (BMD), and microvessel density of the distal femur (using CD34 labeling of VECs) were determined. At each concentration, naringin promoted VEC proliferation in a time- and dose-dependent manner. Naringin also significantly reduced serum starvation-induced apoptosis in endothelial cells, inhibited the expression of GRP78, CHOP, caspase-12, and Cyt.c proteins, and reduced mitochondrial membrane potential as well as reduced the activities of caspase-3 and -9. Furthermore, naringin suppressed ET in vitro and in vivo while enhancing NO synthesis. Distal femoral microvascular density assessment showed that the naringin treatment groups had a significantly higher number of microvessels than the OVX group, and that microvascular density was positively correlated with BMD. 
In summary, naringin inhibits apoptosis in VECs by blocking the endoplasmic reticulum (ER) stress- and mitochondrial-mediated pathways. Naringin also regulates endothelial cell function and promotes angiogenesis to exert its anti-osteoporotic effect. PMID:29039439
Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.
ERIC Educational Resources Information Center
Taghva, Kazem; And Others
1996-01-01
Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, SMART vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
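A minimal vector space model sketch illustrates how OCR character errors break term matching and depress ranking scores; the toy documents and query are assumptions for illustration, not the study's test collection.

```python
import math
from collections import Counter

def idf_table(docs):
    """Inverse document frequency for every term in the collection."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    return {t: math.log(n / c) for t, c in df.items()}

def tfidf(tokens, idf):
    """Sparse tf-idf vector; terms unseen in the collection are dropped."""
    tf = Counter(tokens)
    return {t: tf[t] * idf[t] for t in tf if t in idf}

def cosine(u, v):
    dot = sum(wt * v.get(t, 0.0) for t, wt in u.items())
    nu = math.sqrt(sum(wt * wt for wt in u.values()))
    nv = math.sqrt(sum(wt * wt for wt in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "optical character recognition improves retrieval".split(),
    "0ptical character rec0gnition improves retrieval".split(),  # OCR-corrupted copy
    "wind field maps from scatterometer data".split(),
]
idf = idf_table(docs)
vecs = [tfidf(d, idf) for d in docs]
query = tfidf("optical recognition".split(), idf)
scores = [cosine(query, v) for v in vecs]  # clean doc outranks its corrupted copy
```

The corrupted copy shares none of the query terms after OCR substitutes characters, so its score collapses to zero, the basic failure mode whose effect on ranking and feedback the article quantifies.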
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. 
Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
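The idea that a timetag bias becomes observable through the rotation rate can be sketched with a one-state Kalman filter; the spin rate, noise levels and scalar measurement model below are illustrative assumptions, far simpler than the paper's full attitude-quaternion filter.

```python
import numpy as np

rng = np.random.default_rng(2)
omega_spin = 0.02       # spacecraft rotation rate, rad/s (assumed)
b_true = 0.8            # true sensor timetag bias, s
q, r = 1e-8, 1e-4       # random-walk process noise and measurement noise variances

b_hat, p = 0.0, 1.0     # initial bias estimate and variance
for k in range(500):
    t = float(k)
    # Observed rotation angle: the mis-timed sample encodes omega_spin * b_true
    z = omega_spin * (t + b_true) + rng.normal(0.0, np.sqrt(r))
    p += q                              # random-walk propagation: bias ~ constant
    h = omega_spin                      # measurement sensitivity dz/db
    kgain = p * h / (h * p * h + r)     # scalar Kalman gain
    b_hat += kgain * (z - (omega_spin * t + h * b_hat))
    p *= 1.0 - kgain * h
```

Note that the measurement sensitivity is the rotation rate itself: if omega_spin were zero, the bias would contribute nothing to the measurement and would be unobservable, matching the paper's finding that timing errors only matter when the spacecraft rotates or the reference vectors vary.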
Meng, Ning; Zhao, Jing; Su, Le; Zhao, Baoxiang; Zhang, Yun; Zhang, Shangli; Miao, Junying
2012-02-01
Lipopolysaccharide (LPS)-induced vascular endothelial cell (VEC) dysfunction is an important contributing factor in vascular diseases. Recently, we found that LPS impaired VEC by inducing autophagy. Our previous research showed that a butyrolactone derivative, 3-benzyl-5-((2-nitrophenoxy) methyl)-dihydrofuran-2(3H)-one (3BDO), selectively protected VEC function. The objective of the present study is to investigate whether and how 3BDO inhibits LPS-induced VEC autophagic injury. Our results showed that LPS induced autophagy, increased reactive oxygen species (ROS) and decreased mitochondrial membrane potential (MMP) in human umbilical vein endothelial cells (HUVECs). Furthermore, LPS significantly increased p8 and p53 protein levels and the nuclear translocation of p53. All of these effects of LPS on HUVECs were strongly inhibited by 3BDO. Importantly, the ROS scavenger N-acetylcysteine (NAC) inhibited LPS-induced autophagy, and knockdown of p8 by RNA interference inhibited the LPS-induced autophagy, increase in p53 protein level, translocation of p53 into nuclei and increase in ROS level in HUVECs. The data suggested that 3BDO inhibited LPS-induced autophagy in HUVECs by inhibiting ROS overproduction, the increase in p8 and p53 expression and the nuclear translocation of p53. Our findings provide a potential tool for understanding the mechanism underlying LPS-induced autophagy in HUVECs and open the door to a novel therapeutic drug for LPS-induced vascular diseases. Copyright © 2011 Elsevier Ltd. All rights reserved.
Chen, QingSong; Chen, GuiPing; Xiao, Bin; Lin, HanSheng; Qu, HongYing; Zhang, DanYing; Shi, MaoGong; Lang, Li; Yang, Bei; Yan, MaoSheng
2016-01-01
Objective: The purpose of this study was to investigate the characteristics of nailfold capillaroscopy associated with hand-arm vibration syndrome (HAVS). Methods: In total, 113 male gold miners were recruited: 35 workers who were chronically exposed to vibration and developed vibration-induced white finger were defined as the HAVS group, 39 workers who were exposed to vibration but did not have HAVS were classified as the vibration-exposed controls (VEC) group, and 39 workers without vibration exposure were categorised as the non-VEC (NVEC) group. Video capillaroscopy was used to capture images of the 2nd, 3rd and 4th fingers of both hands. The following nailfold capillary characteristics were included: number of capillaries/mm, avascular areas, haemorrhages and enlarged capillaries. The experiments were carried out in the same winter. All characteristics were evaluated under blinded conditions. Results: Significant differences in all morphological characteristics existed between the groups (p<0.05). Avascular areas in the HAVS, VEC and NVEC groups appeared in 74.3%, 43.6% and 25.0% of participants, respectively. A higher percentage of participants had haemorrhages in the HAVS group (65.7%) compared with the other groups (VEC: 7.7% and NVEC: 7.5%). The number of capillaries/mm, input limb width, output limb width, apical width, and ratio of output limb to input limb all had more than 70% sensitivity or specificity at their cut-off value. Conclusions: Nailfold capillary characteristics, especially the number of capillaries/mm, avascular areas, haemorrhages, output limb width, input limb width and apical width alterations, revealed significant associations with HAVS. PMID:27888176
Polarization observables in deuteron photodisintegration below 360 MeV
Glister, J.; Ron, G.; Lee, B. W.; ...
2011-02-03
We performed high-precision measurements of induced and transferred recoil proton polarization in d(γ⃗, p⃗)n for photon energies of 277-357 MeV and θcm = 20°-120°. The measurements were motivated by a longstanding discrepancy between meson-baryon model calculations and data at higher energies. Moreover, at the low energies of this experiment, theory continues to fail to reproduce the data, indicating that either something is missing in the calculations and/or there is a problem with the accuracy of the nucleon-nucleon potential being used.
Zheng, Yi; Liu, Song-Qiao; Sun, Qin; Xie, Jian-Feng; Xu, Jing-Yuan; Li, Qing; Pan, Chun; Liu, Ling; Huang, Ying-Zi
2018-02-13
Mesenchymal stem cells (MSC) markedly alleviate structural and functional damage to pulmonary vascular endothelial cells (VEC). The therapeutic effects of MSC differ significantly between pulmonary ARDS (ARDSp) and extrapulmonary ARDS (ARDSexp). MicroRNAs (miRNAs), important mediators of the regulation of VEC by MSC, have not been compared between ARDSp and ARDSexp. We aimed to explore the difference in plasma levels of miRNAs that regulate VEC function and are associated with MSC (MSC-VEC-miRNAs) between ARDSp and ARDSexp patients. MSC-VEC-miRNAs were identified by reviewing relevant literature retrieved from the PubMed database. We enrolled 57 ARDS patients within 24 h of admission to the ICU, collected blood samples and extracted the plasma supernatant. Patients' clinical data were collected. Plasma expression of MSC-VEC-miRNAs was then measured by real-time fluorescence quantitative PCR. Simultaneously, the plasma endothelial injury markers VCAM-1 and vWF and the inflammatory factors TNF-α and IL-10 were detected by ELISA. Fourteen miRNAs were picked out after screening. A total of 57 ARDS patients were included in this study, of which 43 were in the ARDSp group and 14 in the ARDSexp group. Plasma miR-221 and miR-27b levels were significantly lower in the ARDSexp group than in the ARDSp group (miR-221, 0.22 [0.12-0.49] vs. 0.57 [0.22-1.57], P = 0.008; miR-27b, 0.34 [0.10-0.46] vs. 0.60 [0.20-1.46], P = 0.025). Plasma vWF concentration was significantly lower in the ARDSexp group than in the ARDSp group (0.77 [0.29-1.54] vs. 1.80 [0.95-3.51], P = 0.048). A significant positive correlation was found between plasma miR-221 and vWF levels (r = 0.688, P = 0.022). In ARDSp patients, plasma miR-26a and miR-27a levels were significantly lower in the non-survival group than in the survival group (miR-26a, 0.17 [0.08-0.20] vs. 0.69 [0.24-2.33], P = 0.018; miR-27a, 0.23 [0.16-0.58] vs. 1.45 [0.38-3.63], P = 0.021).
Plasma miR-221, miR-27b and vWF levels are significantly lower in the ARDSexp group than in the ARDSp group. In ARDSp patients, plasma miR-26a and miR-27a levels are significantly lower in the non-survival group than in the survival group.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of a differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors and nonorthogonality of the single magnetic sensors, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error-parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by the nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system output is expressed in the orthogonal coordinate system of the reference platform. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error-parameter estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
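The principle of using the TMI scalar as the calibration reference can be sketched on a reduced model; the code below assumes a diagonal scale-and-bias sensor model (not the paper's full 12-parameter model with Levenberg-Marquardt fitting), for which the constraint |(m - b)/s| = TMI becomes linear in transformed parameters and solvable by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 50000.0                                  # reference total magnetic intensity, nT
scale_true = np.array([1.05, 0.97, 1.02])    # per-axis scale factors (assumed model)
bias_true = np.array([120.0, -80.0, 60.0])   # per-axis biases, nT

# Synthetic readings: the true field has magnitude T in every orientation
u = rng.standard_normal((400, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
m = T * u * scale_true + bias_true           # distorted diagonal-model measurements

# |(m - b)/s|^2 = T^2 expands to  sum_i m_i^2/s_i^2 - 2 m_i b_i/s_i^2 + b_i^2/s_i^2 = T^2,
# linear in p = [1/s^2 (3), b/s^2 (3), sum(b^2/s^2) (1)]
A = np.column_stack([m**2, -2.0 * m, np.ones(len(m))])
p, *_ = np.linalg.lstsq(A, np.full(len(m), T**2), rcond=None)
scale_est = 1.0 / np.sqrt(p[:3])
bias_est = p[3:6] / p[:3]
```

This recovers the assumed scale factors and biases exactly on noiseless data; nonorthogonality and inter-sensor misalignment, which the paper also estimates, require the full nonlinear model.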
Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N
2012-01-01
Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in the motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.
Qin, Kai-Rong; Xiang, Cheng; Cao, Ling-Ling
2011-10-01
In this paper, a dynamic model is proposed to quantify the relationship between fluid flow and the Cl(-)-selective membrane current in vascular endothelial cells (VECs). It is assumed that the external shear stress first induces channel deformation in VECs. This deformation activates the Cl(-) channels on the membrane, thus allowing Cl(-) transport across the membrane. A modified Hodgkin-Huxley model is embedded into our dynamic system to describe the electrophysiological properties of the membrane, such as the Cl(-)-selective membrane current (I), voltage (V) and conductance. Three flow patterns, i.e., steady flow, oscillatory flow, and pulsatile flow, are applied in our simulation studies. When the extracellular Cl(-) concentration is constant, the I-V characteristics predicted by our dynamic model show strong consistency with the experimental observations. It is also interesting to note that the Cl(-) currents under different flow patterns show some differences, indicating that VECs distinguish among and respond differently to different types of flows. When the extracellular Cl(-) concentration remains constant or varies slowly with time (i.e., oscillates at 0.02 Hz), the convection and diffusion of Cl(-) in the extracellular space can be ignored and the Cl(-) current is well captured by the modified Hodgkin-Huxley model alone. However, when the extracellular Cl(-) varies quickly (i.e., oscillates at 0.2 Hz), the convection and diffusion effect should be considered, because the Cl(-) current dynamics differ from the case where the convection-diffusion effect is simply ignored. The proposed dynamic model along with the simulation results could not only provide more insights into the flow-regulated electrophysiological behavior of the cell membrane but also help to reveal new findings in electrophysiological experimental investigations of VECs in response to dynamic flow and biochemical stimuli.
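A Hodgkin-Huxley-style gating variable driving a Cl(-)-selective current can be sketched as below; all parameter values, the saturating steady-state gating function and the clamped membrane voltage are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative parameters only, not the paper's fitted values
g_max, E_cl = 1.0, -30.0    # maximal conductance (nS) and Cl- reversal potential (mV)
tau = 50.0                  # gating time constant, ms
V = -60.0                   # membrane potential, held fixed for simplicity (mV)

def y_inf(shear):
    """Assumed steady-state open probability: saturating in shear stress (Pa)."""
    return shear / (shear + 1.0)

dt, y = 0.1, 0.0            # Euler step (ms), channels closed at t = 0
for _ in range(5000):       # 500 ms of steady flow at 1.5 Pa
    y += dt * (y_inf(1.5) - y) / tau
I = g_max * y * (V - E_cl)  # Cl- selective membrane current, relaxes toward ~ -18
```

Under steady flow the gating variable relaxes exponentially to y_inf and the current settles to g_max * y_inf * (V - E_cl); oscillatory or pulsatile flow would make y_inf(shear) time-varying, which is how the model distinguishes the three flow patterns.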
Regulation of Plasminogen Activation on Cell Surfaces and Fibrin.
Urano, Tetsumei; Castellino, Francis J; Suzuki, Yuko
2018-05-20
The fibrinolytic system dissolves fibrin and maintains vascular patency. Recent advances in imaging analyses allowed visualization of the spatiotemporal regulatory mechanism of fibrinolysis, as well as its regulation by other plasma haemostasis cofactors. Vascular endothelial cells (VECs) retain tissue-type plasminogen activator (tPA) after secretion and maintain high plasminogen (plg) activation potential on their surfaces. As in plasma, the serpin, plasminogen activator inhibitor type 1 (PAI-1), regulates fibrinolytic potential via inhibition of the VEC surface-bound plg activator, tPA. Once fibrin is formed, plg activation by tPA is initiated and effectively amplified on the surface of fibrin, and fibrin is rapidly degraded. The specific binding of plg and tPA to lytic edges of partly degraded fibrin via newly generated C-terminal lysine residues, which amplifies fibrin digestion, is a central aspect of this pathophysiological mechanism. Thrombomodulin (TM) plays a role in the attenuation of the plg binding on fibrin and the associated fibrinolysis, which is reversed by a carboxypeptidase B inhibitor. This suggests that the plasma procarboxypeptidase B, thrombin activatable fibrinolysis inhibitor (TAFI), which is activated by thrombin bound to TM on VECs, is a critical aspect of the regulation of plg activation on VECs and subsequent fibrinolysis. Platelets also contain PAI-1, TAFI, TM and the fibrin crosslinking enzyme, Factor (F) XIIIa, and either secrete or expose these agents upon activation in order to regulate fibrinolysis. In this review, the native machinery of plg activation and fibrinolysis, as well as their spatiotemporal regulatory mechanisms, as revealed by imaging analyses, are discussed. This article is protected by copyright. All rights reserved.
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with the existing algorithms.
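The grouping step based on the normalized inner product can be sketched as follows; the threshold and toy vectors are illustrative assumptions, and the MSE-based selection procedure is not reproduced.

```python
import numpy as np

def group_select(vectors, threshold=0.9):
    """Keep an input vector only if its normalized inner product with every
    previously kept vector is below the threshold; near-parallel vectors
    (largely overlapping information) are grouped and represented once."""
    kept = []
    for x in vectors:
        if all(abs(x @ k) / (np.linalg.norm(x) * np.linalg.norm(k)) < threshold
               for k in kept):
            kept.append(x)
    return kept

X = [np.array([1.0, 0.0, 0.0]),
     np.array([0.99, 0.05, 0.0]),   # nearly parallel to the first: grouped out
     np.array([0.0, 1.0, 0.0])]
selected = group_select(X)          # two informative directions remain
```

Discarding near-parallel input vectors keeps the projection subspace well conditioned while cutting the per-update cost, which is the motivation given in the abstract.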
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for the sensorless drive of permanent-magnet synchronous motors. Analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly general and widely applicable. As an example of this generality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.
Yu, Hai-Ling; Hong, Bo; Yang, Ning; Zhao, Hong-Yan
2015-09-01
The photoinduced proton-coupled electron transfer chemistry is crucial to the development of nonlinear optical (NLO) materials with large first hyperpolarizability contrast. We have performed a systematic investigation on the geometric structures, NLO switching, and simulated absorption spectra of rhenium(I) complexes via density functional theory (DFT). The results show that the first hyperpolarizabilities (βvec) increase remarkably with further extension of the organic connectors. In addition, the solvent leads to a slight enhancement of the hyperpolarizability and the frequency-dependent hyperpolarizability. Furthermore, proton abstraction plays an important role in tuning the second-order NLO response. It is found that deprotonation not only increases the absolute value of βvec but also changes the sign of βvec from positive to negative. This sign change can be explained by the opposite dipole moments. The efficient enhancement of the first hyperpolarizability is attributed to the better delocalization of the π-electron system and the more pronounced degree of charge transfer. Therefore, these kinds of complexes might be promising candidates for design as proton-driven molecular second-order NLO switches. Copyright © 2015 Elsevier Inc. All rights reserved.
A median filter approach for correcting errors in a vector field
NASA Technical Reports Server (NTRS)
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
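A minimal sketch of this kind of median-filter cleanup of a vector field follows. The 3x3 neighborhood, the median-absolute-deviation test, and the threshold are illustrative assumptions, not the paper's exact detection and replacement algorithm.

```python
import numpy as np

def median_filter_vectors(u, v, thresh=2.0, size=1):
    """Detect and replace outlier vectors in a 2-D vector field (u, v).

    A component that deviates from its local median by more than `thresh`
    times the local median absolute deviation is replaced by that median.
    """
    u2, v2 = u.copy(), v.copy()
    H, W = u.shape
    for comp, out in ((u, u2), (v, v2)):
        for i in range(H):
            for j in range(W):
                # Clip the (2*size+1)^2 neighborhood at the field boundary
                i0, i1 = max(0, i - size), min(H, i + size + 1)
                j0, j1 = max(0, j - size), min(W, j + size + 1)
                nb = comp[i0:i1, j0:j1].ravel()
                med = np.median(nb)
                mad = np.median(np.abs(nb - med)) + 1e-9
                if abs(comp[i, j] - med) > thresh * mad:
                    out[i, j] = med  # replace the flagged outlier
    return u2, v2
```

On a smooth field with a single spurious vector, only the spike is rewritten; its neighbors pass the test because the median is robust to one outlier.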
Inherent Error in Asynchronous Digital Flight Controls.
1980-02-01
Effects of protein tyrosine phosphatase-PEST are reversed by Akt in T cells.
Arimura, Yutaka; Shimizu, Kazuhiko; Koyanagi, Madoka; Yagi, Junji
2014-12-01
T cell activation is regulated by a balance between phosphorylation and dephosphorylation that is under the control of kinases and phosphatases. Here, we examined the role of a non-receptor-type protein tyrosine phosphatase, PTP-PEST, using retrovirus-mediated gene transduction into murine T cells. Based on observations of vector markers (GFP or Thy1.1), exogenous PTP-PEST-positive CD4(+) T cells appeared within 2 days after gene transduction; the percentage of PTP-PEST-positive cells tended to decrease during a resting period in the presence of IL-2 over the next 2 days. These vector markers also showed much lower expression intensities, compared with control cells, suggesting a correlation between the percent reduction and the low marker expression intensity. A catalytically inactive PTP-PEST mutant also showed the same tendency, and stepwise deletion mutants gradually lost their ability to induce the above phenomenon. On the other hand, these PTP-PEST-transduced cells did not have an apoptotic phenotype. No difference in the total cell numbers was found in the wells of a culture plate containing VEC- and PTP-PEST-transduced T cells. Moreover, serine/threonine kinase Akt, but not the anti-apoptotic molecules Bcl-2 and Bcl-XL, reversed the phenotype induced by PTP-PEST. We discuss the novel mechanism by which Akt interferes with PTP-PEST. Copyright © 2014 Elsevier Inc. All rights reserved.
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10 mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector and image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
Application of Bred Vectors To Data Assimilation
NASA Astrophysics Data System (ADS)
Corazza, M.; Kalnay, E.; Patil, Dj
We introduced a statistic, the BV-dimension, to measure the effective local finite-time dimensionality of the atmosphere. We show that this dimension is often quite low, and suggest that this finding has important implications for data assimilation and the accuracy of weather forecasting (Patil et al., 2001). The original database for this study was the forecasts of the NCEP global ensemble forecasting system. The initial differences between the control forecast and the perturbed forecasts are called bred vectors. The control and perturbed initial conditions valid at time t = nΔt are evolved using the forecast model until time t = (n+1)Δt. The differences between the perturbed and the control forecasts are scaled down to their initial amplitude, and constitute the bred vectors valid at (n+1)Δt. Their growth rate is typically about 1.5/day. The bred vectors are similar by construction to leading Lyapunov vectors except that they have small but finite amplitude, and they are valid at finite times. The original NCEP ensemble data set has 5 independent bred vectors. We define a local bred vector at each grid point by choosing the 5 by 5 grid points centered at the grid point (a region of about 1100 km by 1100 km), and using the north-south and east-west velocity components at the 500 mb pressure level to form a 50-dimensional column vector. Since we have k=5 global bred vectors, we also have k local bred vectors at each grid point. We estimate the effective dimensionality of the subspace spanned by the local bred vectors by performing a singular value decomposition (EOF analysis). The k local bred vector columns form a 50 x k matrix M. The singular values s(i) of M measure the extent to which the k column unit vectors making up the matrix M point in the direction of v(i).
We define the bred vector dimension as BVDIM = {Sum[s(i)]}^2 / {Sum[s(i)^2]}. For example, if 4 out of the 5 vectors lie along v(1), and one lies along v(2), the BV-dimension would be BVDIM[sqrt(4), 1, 0, 0, 0] = 1.8, less than 2 because one direction is more dominant than the other in representing the original data. The results (Patil et al., 2001) show that there are large regions where the bred vectors span a subspace of substantially lower dimension than that of the full space. These low dimensionality regions are dominant in the baroclinic extratropics, typically have a lifetime of 3-7 days, have a well-defined horizontal and vertical structure that spans most of the atmosphere, and tend to move eastward. New results with a large number of ensemble members confirm these results and indicate that the low dimensionality regions are quite robust, and depend only on the verification time (i.e., the underlying flow). Corazza et al. (2001) have performed experiments with a data assimilation system based on a quasi-geostrophic model and simulated observations (Morss, 1999; Hamill et al., 2000). A 3D-variational data assimilation scheme for a quasi-geostrophic channel model is used to study the structure of the background error and its relationship to the corresponding bred vectors. The "true" evolution of the model atmosphere is defined by an integration of the model and "rawinsonde observations" are simulated by randomly perturbing the true state at fixed locations. It is found that after 3-5 days the bred vectors develop well organized structures which are very similar for the two different norms considered in this paper (potential vorticity norm and streamfunction norm). The results show that the bred vectors do indeed represent well the characteristics of the data assimilation forecast errors, and that the subspace of bred vectors contains most of the forecast error, except in areas where the forecast errors are small.
For example, the angle between the 6 hr forecast error and the subspace spanned by 10 bred vectors is less than 10° over 90% of the domain, indicating a pattern correlation of more than 98.5% between the forecast error and its projection onto the bred vector subspace. The presence of low-dimensional regions in the perturbations of the basic flow has important implications for data assimilation. At any given time, there is a difference between the true atmospheric state and the model forecast. Assuming that model errors are not the dominant source of errors, in a region of low BV-dimensionality the difference between the true state and the forecast should lie substantially in the low dimensional unstable subspace of the few bred vectors that contribute most strongly to the low BV-dimension. This information should yield a substantial improvement in the forecast: the data assimilation algorithm should correct the model state by moving it closer to the observations along the unstable subspace, since this is where the true state most likely lies. Preliminary experiments have been conducted with the quasi-geostrophic data assimilation system testing whether it is possible to add "errors of the day" based on bred vectors to the standard (constant) 3D-Var background error covariance in order to capture these important errors. The results are extremely encouraging, indicating a significant reduction (about 40%) in the analysis errors at a very low computational cost.
References: Corazza, M., E. Kalnay, D. J. Patil, R. Morss, M. Cai, I. Szunyogh, B. R. Hunt, E. Ott and J. A. Yorke, 2001: Use of the breeding technique to estimate the structure of the analysis "errors of the day". Submitted to Nonlinear Processes in Geophysics. Hamill, T. M., Snyder, C., and Morss, R. E., 2000: A Comparison of Probabilistic Forecasts from Bred, Singular-Vector and Perturbed Observation Ensembles, Mon. Wea. Rev., 128, 1835-1851. Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints of the Tenth Conference on Numerical Weather Prediction, Amer. Meteor. Soc., 1994, 212-215. Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. PhD thesis, Massachusetts Institute of Technology, 225 pp. Patil, D. J. S., B. R. Hunt, E. Kalnay, J. A. Yorke, and E. Ott, 2001: Local Low Dimensionality of Atmospheric Dynamics. Phys. Rev. Lett., 86, 5878.
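The BV-dimension defined in this abstract is straightforward to compute from the singular values of the local bred-vector matrix; a minimal sketch:

```python
import numpy as np

def bv_dimension(M):
    """BV-dimension of a local bred-vector matrix M (e.g. 50 x k):
    BVDIM = (sum_i s_i)^2 / sum_i s_i^2, with s_i the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return s.sum() ** 2 / (s ** 2).sum()
```

Singular values [sqrt(4), 1, 0, 0, 0] reproduce the abstract's example value of 1.8; k identical columns give dimension 1, and k orthonormal columns give dimension k.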
Design of thrust vectoring exhaust nozzles for real-time applications using neural networks
NASA Technical Reports Server (NTRS)
Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.
1991-01-01
Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real-time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain, in real-time, two-dimensional nozzle contours. Results show that genetic algorithm trained neural networks provide a viable, real-time alternative for designing thrust vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate and thus the robustness of genetic algorithms was well suited for minimizing global errors.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for the marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs. PMID:27999351
Plasma detachment in divertor tokamaks
Leonard, A. W.
2018-02-07
In this study, observations of divertor plasma detachment in tokamaks are reviewed. Plasma detachment is characterized in terms of transport and dissipation of power, momentum and particle flux along the open field lines from the midplane to the divertor. Asymmetries in detachment onset and other characteristics between the inboard and outboard divertor plasmas are found to be primarily driven by plasma $\vec{E} \times \vec{B}$ drifts. The effect of divertor plate geometry and magnetic configuration on divertor detachment is summarized. Control of divertor detachment has progressed with the development of a number of diagnostics to characterize the detached state in real-time. Finally, the compatibility of detached divertor operation with high performance core plasmas is examined.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of estimate errors by model-matching phase-estimation methods such as rotor-flux state-observers, back EMF state-observers, and back EMF disturbance-observers, for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.
An evaluation of the environmental and health effects of vehicle exhaust catalysts in the UK.
Hutchinson, Emma J; Pearson, Peter J G
2004-01-01
Since 1993, all new gasoline-engine automobiles in the United Kingdom have been supplied with three-way vehicle exhaust catalytic converters (VECs) containing platinum, palladium, and rhodium, to comply with European Commission Stage I limits on emissions of regulated pollutants: carbon monoxide, hydrocarbons, and oxides of nitrogen. We conducted a physical and economic evaluation of the environmental and health benefits from a reduction in emissions through this mandated environmental technology against the costs, with reference to urban areas in Great Britain. We made both an ex post assessment--based on available data to 1998--and an ex ante assessment--projected to 2005, the year when full penetration of VECs into the fleet is expected. Substantial health benefits in excess of the costs of VECs were indicated: By 1998 the estimated net societal health benefits were approximately 500 million British pounds, and by 2005 they were estimated to rise to as much as 2 billion British pounds. We also found through environmental surveys that although lead in road dust has fallen by 50% in urban areas, platinum accumulations near roads have risen significantly, up to 90-fold higher than natural background levels. This rapid accumulation of platinum suggests further monitoring is warranted, although as yet there is no evidence of adverse health effects. PMID:14754566
In vitro activity of farnesol against vaginal Lactobacillus spp.
Wang, Fengjuan; Liu, Zhaohui; Zhang, Dai; Niu, Xiaoxi
2017-05-01
Farnesol, a quorum-sensing molecule in Candida albicans, can affect the growth of certain microorganisms. The objective of this study was to evaluate the in vitro activity of farnesol against vaginal Lactobacillus spp., which play a crucial role in the maintenance of vaginal health. Growth and metabolic viability of vaginal Lactobacillus spp. incubated with different concentrations of farnesol were determined by measuring the optical density of the cultures and with the MTT assay. Morphology of the farnesol-treated cells was evaluated using a scanning electron microscope. In vitro adherence of vaginal Lactobacillus cells treated with farnesol was determined by co-incubating with vaginal epithelial cells (VECs). The minimum inhibitory concentration (MIC) of farnesol for vaginal Lactobacillus spp. was 1500 μM. No morphological changes were observed when the farnesol-treated Lactobacillus cells were compared with farnesol-free cells, and 100 μM farnesol reduced the adherence of vaginal Lactobacillus to VECs. Farnesol acted as a potential antimicrobial agent and had little impact on the growth, metabolism, and cytomorphology of vaginal Lactobacillus spp.; however, it affected their capacity to adhere to VECs. The safety of farnesol as an adjuvant for antimicrobial agents during the treatment of vaginitis needs to be studied further. Copyright © 2017 Elsevier B.V. All rights reserved.
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors of motion vectors and can improve the robustness of the stream in bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.
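The idea of cutting off MV dependencies can be illustrated by clamping a candidate motion vector so its reference block never leaves a permitted region. This is only a sketch of the constraint concept; the function name, parameters, and the rectangular-region rule are assumptions, not the paper's exact scheme.

```python
def constrain_mv(mv_x, mv_y, bx, by, bw, bh, region):
    """Clamp a motion vector so that the reference block for the bw x bh
    block at (bx, by) stays entirely inside `region` = (x0, y0, x1, y1),
    cutting off prediction dependencies on pixels outside that region."""
    x0, y0, x1, y1 = region
    mv_x = min(max(mv_x, x0 - bx), x1 - (bx + bw))
    mv_y = min(max(mv_y, y0 - by), y1 - (by + bh))
    return mv_x, mv_y
```

During motion estimation, every candidate MV would be passed through such a clamp before its rate-distortion cost is evaluated, so the encoder never selects a prediction that depends on a disallowed area.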
Targeting of AUF1 to vascular endothelial cells as a novel anti-aging therapy.
He, Jian; Jiang, Ya-Feng; Liang, Liu; Wang, Du-Jin; Wei, Wen-Xin; Ji, Pan-Pan; Huang, Yao-Chan; Song, Hui; Lu, Xiao-Ling; Zhao, Yong-Xiang
2017-08-01
Inhibition of aging of vascular endothelial cells (VECs) may delay aging and prolong life. The goal of this study was to prepare anti-CD31 monoclonal antibody conjugated PEG-modified liposomes containing the AU-rich region connecting factor 1 (AUF1) gene (CD31-PILs-AUF1) and to explore the effects of targeting CD31-PILs-AUF1 to aging VECs. The mean particle sizes of various PEGylated immunoliposomes (PILs) were measured using a Zetasizer Nano ZS. Gel retardation assay was used to confirm whether PILs had encapsulated the AUF1 plasmid successfully. Fluorescence microscopy and flow cytometry were used to quantify binding of CD31-PILs-AUF1 to target cells. Flow cytometry was also used to analyze the cell cycles of aging bEnd3 cells treated with CD31-PILs-AUF1. We also developed an aging mouse model by treating mice with D-galactose. Enzyme-linked immunosorbent assay (ELISA) was used to evaluate the levels of interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α). The malondialdehyde (MDA) and the superoxide dismutase (SOD) levels were detected by commercial kits. Hematoxylin-eosin (HE) staining was used to determine whether treatment with CD31-PILs-AUF1 was toxic to the mice. CD31-PILs-AUF1 could specifically target bEnd3 VECs and increased the percentage of cells in the S and G2/M phases of aging bEnd3 cells. ELISA showed that the IL-6 and TNF-α content decreased in the CD31-PILs-AUF1 group. The level of SOD increased, whereas MDA decreased in the CD31-PILs-AUF1 group. Additionally, CD31-PILs-AUF1 was not toxic to the mice. CD31-PILs-AUF1 targets VECs and may delay their senescence.
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
NASA Astrophysics Data System (ADS)
Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong
2014-11-01
Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree, and it is increasing by about 0.01 degree per year. This error, although small, has an effect on the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and for calculating the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code, and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.
Design of analytical failure detection using secondary observers
NASA Technical Reports Server (NTRS)
Sisar, M.
1982-01-01
The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector, which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m ≤ n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter's) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence of the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
Feedback controlled optics with wavefront compensation
NASA Technical Reports Server (NTRS)
Breckenridge, William G. (Inventor); Redding, David C. (Inventor)
1993-01-01
The sensitivity model of a complex optical system obtained by linear ray tracing is used to compute a control gain matrix by imposing the mathematical condition for minimizing the total wavefront error at the optical system's exit pupil. The most recent deformations or error states of the controlled segments or optical surfaces of the system are then assembled as an error vector, and the error vector is transformed by the control gain matrix to produce the exact control variables which will minimize the total wavefront error at the exit pupil of the optical system. These exact control variables are then applied to the actuators controlling the various optical surfaces in the system causing the immediate reduction in total wavefront error observed at the exit pupil of the optical system.
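The gain-matrix idea described above can be sketched as a least-squares problem. The sensitivity matrix S, its dimensions, and the error state w0 below are all invented for illustration; the pseudoinverse stands in for whatever minimization the actual controller uses.

```python
import numpy as np

# Toy model: wavefront error at the exit pupil is w = w0 + S @ u, where S is a
# (hypothetical) sensitivity matrix from linear ray tracing and u the actuator
# commands. The gain G = -pinv(S) yields the least-squares-minimizing commands.
rng = np.random.default_rng(0)
S = rng.standard_normal((20, 6))   # 20 wavefront samples, 6 actuators (assumed)
w0 = rng.standard_normal(20)       # current assembled error vector

G = -np.linalg.pinv(S)             # control gain matrix
u_star = G @ w0                    # control variables minimizing ||w0 + S u||
residual = w0 + S @ u_star         # remaining wavefront error at the exit pupil
```

The residual is the component of w0 that no actuator combination can cancel; it is orthogonal to the range of S, which is the defining property of the least-squares optimum.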
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
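A tiny numerical example shows why the binary word assignment matters on a noisy channel: the distortion caused by a single flipped bit depends on which codewords are binary neighbors. The 2-bit scalar codebook and the "scrambled" assignment below are made up purely for illustration; they are not the paper's algorithms.

```python
import numpy as np

# Average squared error when exactly one channel bit flips, for a toy 2-bit
# scalar quantizer under a given index assignment.
codebook = np.array([0.0, 1.0, 2.0, 3.0])

def single_flip_distortion(assign):
    # assign[i] = codebook value transmitted with binary index i
    total, n_bits = 0.0, 2
    for i in range(4):
        for b in range(n_bits):
            j = i ^ (1 << b)                 # index received after one bit flip
            total += (assign[i] - assign[j]) ** 2
    return total / (4 * n_bits)              # average over codewords and bits

natural = codebook                            # indices 00,01,10,11 -> 0,1,2,3
scrambled = codebook[[0, 3, 1, 2]]            # a deliberately poor assignment

d_nat = single_flip_distortion(natural)
d_bad = single_flip_distortion(scrambled)
```

Good assignment algorithms search for a permutation that makes binary neighbors close in signal space, exactly the property the natural ordering has here.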
Chigurupati, Srinivasulu; Mughal, Mohamed R.; Okun, Eitan; Das, Soumen; Kumar, Amit; McCaffery, Michael; Seal, Sudipta; Mattson, Mark P.
2012-01-01
Rapid and effective wound healing requires a coordinated cellular response involving fibroblasts, keratinocytes and vascular endothelial cells (VECs). Impaired wound healing can result in multiple adverse health outcomes and, although antibiotics can forestall infection, treatments that accelerate wound healing are lacking. We now report that topical application of water soluble cerium oxide nanoparticles (Nanoceria) accelerates the healing of full-thickness dermal wounds in mice by a mechanism that involves enhancement of the proliferation and migration of fibroblasts, keratinocytes and VECs. The Nanoceria penetrated into the wound tissue and reduced oxidative damage to cellular membranes and proteins, suggesting a therapeutic potential for topical treatment of wounds with antioxidant nanoparticles. PMID:23266256
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_(k-1)), where d_j, the jth effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the jth position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
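As a concrete check of the definition, for a rate-1/2 encoder with k = 1 the effective free distance vector collapses to the single ordinary free distance, which can be found by brute force over short inputs. The generator pair (the classic octal (5, 7) encoder) and the finite search horizon are assumptions of this sketch, not the paper's codes.

```python
import itertools
import numpy as np

# Brute-force free distance of a rate-1/2 convolutional code: minimum total
# Hamming weight of the two output streams over all nonzero input sequences.
g1 = [1, 0, 1]   # 1 + D^2      (octal 5)
g2 = [1, 1, 1]   # 1 + D + D^2  (octal 7)

def conv_gf2(u, g):
    # polynomial product over GF(2), returned as a 0/1 coefficient list
    return [int(x) % 2 for x in np.convolve(u, g)]

d_free = min(
    sum(conv_gf2(list(u), g1)) + sum(conv_gf2(list(u), g2))
    for L in range(1, 9)
    for u in itertools.product([0, 1], repeat=L)
    if any(u)
)
```

For a k > 1 UEP encoder the same search would be restricted to inputs with a '1' in the jth input position, yielding the component d_j.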
A new method for distortion magnetic field compensation of a geomagnetic vector measurement system
NASA Astrophysics Data System (ADS)
Liu, Zhongyan; Pan, Mengchun; Tang, Ying; Zhang, Qi; Geng, Yunling; Wan, Chengbiao; Chen, Dixiang; Tian, Wugang
2016-12-01
The geomagnetic vector measurement system consists mainly of a three-axis magnetometer and an INS (inertial navigation system), which have many ferromagnetic parts on them. The magnetometer is always distorted by ferromagnetic parts and other electrical equipment, such as the INS and the power circuit module within the system, which can lead to geomagnetic vector measurement errors of thousands of nT. Thus, the geomagnetic vector measurement system has to be compensated in order to guarantee the measurement accuracy. In this paper, a new distortion magnetic field compensation method is proposed, in which a permanent magnet with different relative positions is used to change the ambient magnetic field to construct equations of the error model parameters, and the parameters can be accurately estimated by solving linear equations. In order to verify the effectiveness of the proposed method, an experiment was conducted, and the results demonstrate that, after compensation, the component errors of the measured geomagnetic field are reduced significantly. This demonstrates that the proposed method can effectively improve the accuracy of the geomagnetic vector measurement system.
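The core numerical step above, estimating error-model parameters by solving linear equations, can be illustrated on a simpler, hypothetical error model: a constant ("hard-iron") offset b added to a field of fixed magnitude. This is a generic stand-in, not the paper's permanent-magnet procedure; from |m - b|² = |B|² one gets the linear system 2 m·b + (|B|² - |b|²) = |m|².

```python
import numpy as np

# Synthetic measurements: true field of fixed magnitude (in µT, assumed) seen
# in many orientations, plus a constant offset b_true to be recovered.
rng = np.random.default_rng(1)
B = 50.0                                     # field magnitude in µT (assumed)
b_true = np.array([0.30, -0.15, 0.08])       # synthetic hard-iron offset

dirs = rng.standard_normal((50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
m = dirs * B + b_true                        # simulated magnetometer readings

# Linear least squares for unknowns [bx, by, bz, c], c = |B|^2 - |b|^2.
A = np.hstack([2 * m, np.ones((50, 1))])
y = np.sum(m ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
b_est = sol[:3]
```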
NASA Technical Reports Server (NTRS)
Walker, H. F.
1979-01-01
In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.
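A minimal sketch of the idea: with Gaussian classes sharing an identity covariance, the optimal linear discriminant can be evaluated on only the observed components of an incomplete vector (e.g. the bands not obscured by cloud). The class means below are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical 3-band class means with identity covariance.
mu = {0: np.array([0.0, 0.0, 0.0]), 1: np.array([4.0, 4.0, 4.0])}

def classify_incomplete(x, observed):
    # observed: boolean mask of the components actually measured;
    # the discriminant uses only those components.
    obs = np.asarray(observed)
    d0 = np.sum((x[obs] - mu[0][obs]) ** 2)
    d1 = np.sum((x[obs] - mu[1][obs]) ** 2)
    return 0 if d0 < d1 else 1

x = np.array([3.8, np.nan, 4.2])             # middle band missing
label = classify_incomplete(x, [True, False, True])
```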
Implications of a light “dark Higgs” solution to the g μ - 2 discrepancy
Chen, Chien-Yi; Davoudiasl, Hooman; Marciano, William J.; ...
2016-02-05
A light scalar Φ with mass ≲ 1 GeV and muonic coupling O(10^-3) would explain the 3.5σ discrepancy between the Standard Model (SM) muon g-2 prediction and experiment. Such a scalar can be associated with a light remnant of the Higgs mechanism in the "dark" sector. We suggest Φ → l+l- bump hunting in μ → eνν̄Φ, μ-p → ν_μ n Φ (muon capture), and K± → μ±νΦ decays as direct probes of this scenario. In a general setup, a potentially observable muon electric dipole moment ≲ 10^-23 e cm and lepton-flavor-violating decays τ → μ(e)Φ or μ → eΦ can also arise. A deviation in BR(H → μ+μ-) from SM expectations, due to Higgs coupling misalignment, can result depending on certain parameters. Here, we illustrate how the requisite interactions can be mediated by weak-scale vector-like leptons that typically lie within the reach of future LHC measurements.
NASA Astrophysics Data System (ADS)
Agresti, Juri; De Pietri, Roberto; Lusanna, Luca; Martucci, Luca
2004-05-01
In the framework of the rest-frame instant form of tetrad gravity, where the Hamiltonian is the weak ADM energy Ê_ADM, we define a special completely fixed 3-orthogonal Hamiltonian gauge, corresponding to a choice of non-harmonic 4-coordinates, in which the independent degrees of freedom of the gravitational field are described by two pairs of canonically conjugate Dirac observables (DO) r_ā(τ, σ⃗), π_ā(τ, σ⃗), ā = 1, 2. We define a Hamiltonian linearization of the theory, i.e. gravitational waves, without introducing any background 4-metric, by retaining only the linear terms in the DO's in the super-hamiltonian constraint (the Lichnerowicz equation for the conformal factor of the 3-metric) and the quadratic terms in the DO's in Ê_ADM. We solve all the constraints of the linearized theory: this amounts to working in a well defined post-Minkowskian Christodoulou-Klainermann space-time. The Hamilton equations imply the wave equation for the DO's r_ā(τ, σ⃗), which replace the two polarizations of the TT harmonic gauge, and that the linearized Einstein equations are satisfied. Finally we study the geodesic equation, both for time-like and null geodesics, and the geodesic deviation equation.
Zhang, Dajin; Qu, Jia; Xiong, Ming; Qiao, Yuanyuan; Wang, Dapeng; Liu, Fengjiao; Li, Dandan; Hu, Ming; Zhang, Jiashu
2017-01-01
Trauma complicated by seawater immersion is a complex pathophysiological process with higher mortality than trauma occurring on land. This study investigated the role of vascular endothelial cells (VECs) in trauma development in a seawater environment. An open abdominal injury rat model was used. The rat core temperatures in the seawater (SW, 22°C) group and normal saline (NS, 22°C) group declined equivalently. No rats died within 12 hours in the control and NS groups. However, the median lethal time of the rats in the SW group was only 260 minutes. Among the 84 genes involved in rat VEC biology, most of the genes exhibiting high expression changes on a qPCR array (84.62%, 11/13) were associated with thrombin activity. The plasma activated partial thromboplastin time and fibrinogen and vWF levels decreased, whereas the prothrombin time and TFPI levels increased, indicating intrinsic and extrinsic coagulation pathway activation and inhibition, respectively. The plasma plasminogen, FDP, and D-dimer levels were elevated after 2 hours, and those of uPA, tPA, and PAI-1 exhibited marked changes, indicating disseminated intravascular coagulation (DIC). Additionally, multiorgan haemorrhagia was observed. These results indicate that seawater immersion during trauma may promote DIC, elevating mortality; VEC injury might play an essential role in this process. PMID:28744465
Joshi, Molishree; Keith Pittman, H; Haisch, Carl; Verbanac, Kathryn
2008-09-01
Quantitative real-time PCR (qPCR) is a sensitive technique for the detection and quantitation of specific DNA sequences. Here we describe a Taqman qPCR assay for quantification of tissue-localized, adoptively transferred enhanced green fluorescent protein (EGFP)-transgenic cells. A standard curve constructed from serial dilutions of a plasmid containing the EGFP transgene (i) was highly reproducible, (ii) detected as few as two copies, and (iii) was included in each qPCR assay. qPCR analysis of genomic DNA was used to determine transgene copy number in several mouse strains. Fluorescent microscopy of tissue sections showed that adoptively transferred vascular endothelial cells (VEC) from EGFP-transgenic mice specifically localized to tissue with metastatic tumors in syngeneic recipients. VEC microscopic enumeration of liver metastases strongly correlated with qPCR analysis of identical sections (Pearson correlation 0.81). EGFP was undetectable in tissue from control mice by qPCR. In another study using intra-tumor EGFP-VEC delivery to subcutaneous tumors, manual cell count and qPCR analysis of alternating sections also strongly correlated (Pearson correlation 0.82). Confocal microscopy of the subcutaneous tumor sections determined that visual fluorescent signals were frequently tissue artifacts. This qPCR methodology offers specific, objective, and rapid quantitation, uncomplicated by tissue autofluorescence, and should be readily transferable to other in vivo models to quantitate the biolocalization of transplanted cells.
Algorithm research for user trajectory matching across social media networks based on paragraph2vec
NASA Astrophysics Data System (ADS)
Xu, Qian; Chen, Hongchang; Zhi, Hongxin; Wang, Yanchuan
2018-04-01
Identifying users across different social media networks (SMNs) means linking accounts that belong to the same individual across SMNs. The problem is fundamental and important, and its results can benefit many applications such as cross-SMN user modeling and recommendation. With the development of GPS technology and mobile communication, more and more social networks provide location services. This provides a new opportunity for cross-SMN user identification. In this paper, we solve the cross-SMN user identification problem in an unsupervised manner by utilizing user trajectory data in SMNs. A paragraph2vec-based algorithm is proposed in which the location sequence features of user trajectories are captured in temporal and spatial dimensions. Our experimental results validate the effectiveness and efficiency of our algorithm.
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the values estimated by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulence in coupled map lattices.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
NASA Technical Reports Server (NTRS)
Mishchenko, M. I.; Lacis, A. A.; Travis, L. D.
1994-01-01
Although neglecting polarization and replacing the rigorous vector radiative transfer equation by its approximate scalar counterpart has no physical background, it is a widely used simplification when the incident light is unpolarized and only the intensity of the reflected light is to be computed. We employ accurate vector and scalar multiple-scattering calculations to perform a systematic study of the errors induced by the neglect of polarization in radiance calculations for a homogeneous, plane-parallel Rayleigh-scattering atmosphere (with and without depolarization) above a Lambertian surface. Specifically, we calculate percent errors in the reflected intensity for various directions of light incidence and reflection, optical thicknesses of the atmosphere, single-scattering albedos, depolarization factors, and surface albedos. The numerical data displayed can be used to decide whether or not the scalar approximation may be employed depending on the parameters of the problem. We show that the errors decrease with increasing depolarization factor and/or increasing surface albedo. For conservative or nearly conservative scattering and small surface albedos, the errors are maximum at optical thicknesses of about 1. The calculated errors may be too large for some practical applications, and, therefore, rigorous vector calculations should be employed whenever possible. However, if approximate scalar calculations are used, we recommend avoiding geometries involving phase angles equal or close to 0 deg and 90 deg, where the errors are especially significant. We propose a theoretical explanation of the large vector/scalar differences in the case of Rayleigh scattering. According to this explanation, the differences are caused by the particular structure of the Rayleigh scattering matrix and come from lower-order (except first-order) light scattering paths involving right scattering angles and right-angle rotations of the scattering plane.
Li, H; Huang, S; Wang, S; Zhao, J; Su, L; Zhao, B; Zhang, Y; Zhang, S; Miao, J
2013-09-19
Phosphatidylcholine-specific phospholipase C (PC-PLC) is a key factor in apoptosis and autophagy of vascular endothelial cells (VECs), and involved in atherosclerosis in apolipoprotein E⁻/⁻ (apoE⁻/⁻) mice. However, the endogenous regulators of PC-PLC are not known. We recently found a small chemical molecule (6-amino-2, 3-dihydro-3-hydroxymethyl-1, 4-benzoxazine, ABO) that could inhibit oxidized low-density lipoprotein (oxLDL)-induced apoptosis and promote autophagy in VECs, and further identified ABO as an inhibitor of annexin A7 (ANXA7) GTPase. Based on these findings, we hypothesize that ANXA7 is an endogenous regulator of PC-PLC, and targeting ANXA7 by ABO may inhibit atherosclerosis in apoE⁻/⁻ mice. In this study, we tested our hypothesis. The results showed that ABO suppressed the oxLDL-induced increase of PC-PLC level and activity and promoted the co-localization of ANXA7 and PC-PLC in VECs. The experiments of ANXA7 knockdown and overexpression demonstrated that the action of ABO was ANXA7-dependent in cultured VECs. To investigate the relation of ANXA7 with PC-PLC in atherosclerosis, apoE⁻/⁻ mice fed with a western diet were treated with 50 or 100 mg/kg/day ABO. The results showed that ABO decreased PC-PLC levels in the mouse aortic endothelium and PC-PLC activity in serum, and enhanced the protein levels of ANXA7 in the mouse aortic endothelium. Furthermore, both dosages of ABO significantly enhanced autophagy and reduced apoptosis in the mouse aortic endothelium. As a result, ABO significantly reduced atherosclerotic plaque area and effectively preserved a stable plaque phenotype, including reduced lipid deposition and pro-inflammatory macrophages, increased anti-inflammatory macrophages, collagen content and smooth muscle cells, and less cell death in the plaques. In conclusion, ANXA7 is an endogenous regulator of PC-PLC, and targeting ANXA7 by ABO inhibited atherosclerosis in apoE⁻/⁻ mice.
Test of Understanding of Vectors: A Reliable Multiple-Choice Vector Concept Test
ERIC Educational Resources Information Center
Barniol, Pablo; Zavala, Genaro
2014-01-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
Role of MUC4-NIDO domain in the MUC4-mediated metastasis of pancreatic cancer cells
Senapati, Shantibhusan; Gnanapragassam, Vinayaga Srinivasan; Moniaux, Nicolas; Momi, Navneet; Batra, Surinder K.
2011-01-01
MUC4 is a large transmembrane type I glycoprotein that is overexpressed in pancreatic cancer (PC) and has been shown to be associated with its progression and metastasis. However, the exact cellular and molecular mechanism(s) through which MUC4 promotes metastasis of PC cells has been sparsely studied. Here we showed that the NIDO domain of MUC4, which is similar to the G1-domain present in the nidogen or entactin (an extracellular matrix protein), contributes to the protein-protein interaction property of MUC4. By this interaction, MUC4 promotes breaching of basement membrane integrity, and spreading of cancer cells. These observations are corroborated with the data from our study using an engineered MUC4 protein without the NIDO domain, which was ectopically expressed in the MiaPaCa PC cells, lacking endogenous MUC4 and nidogen protein. The in vitro studies demonstrated an enhanced invasiveness of MiaPaCa cells expressing MUC4 (MiaPaCa-MUC4) compared to vector-transfected cells (MiaPaCa-Vec; p=0.003) or cells expressing MUC4 without the NIDO domain (MiaPaCa-MUC4-NIDOΔ; p=0.03). However, the absence of the NIDO domain has no significant role on cell growth and motility (p=0.93). In the in vivo studies, all the mice orthotopically implanted with MiaPaCa-MUC4 cells developed metastasis to the liver as compared to the MiaPaCa-Vec or the MiaPaCa-MUC4-NIDOΔ group, hence, supporting our in vitro observations. Additionally, a reduced binding (p=0.0004) of MiaPaCa-MUC4-NIDOΔ cells to fibulin-2 coated plates compared to MiaPaCa-MUC4 cells indicated a possible interaction between the MUC4-NIDO domain and fibulin-2, a nidogen-interacting protein. Furthermore, in PC tissue samples, MUC4 colocalized with the fibulin-2 present in the basement membrane. Altogether, our findings demonstrate that the MUC4-NIDO domain significantly contributes to the MUC4-mediated metastasis of PC cells. This may be partly due to the interaction between the MUC4-NIDO domain and fibulin-2. PMID:22105367
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
Many methods have been tried in the analysis of currency crises. However, not all methods can provide accurate indications. This paper introduces an ensemble of Support Vector Machine classifiers, which has not previously been applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performances are measured using percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristics (ROC) curve and Type II error. The performances of an ensemble of Support Vector Machine classifiers are compared with the single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis in terms of a range of standard measures for comparing the performance of classifiers.
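The bootstrap-and-vote ensemble mechanism can be sketched without any SVM machinery. The members below are simple perceptrons, deliberately substituted for SVMs to keep the sketch self-contained, and the data, features, and ensemble size are all synthetic assumptions.

```python
import numpy as np

# Toy two-class data standing in for crisis / no-crisis observations.
rng = np.random.default_rng(2)
n = 200
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def train_perceptron(X, y, epochs=50):
    # plain perceptron with bias term, labels mapped to {-1, +1}
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    t = 2 * y - 1
    for _ in range(epochs):
        for xi, ti in zip(Xb, t):
            if ti * (w @ xi) <= 0:
                w += ti * xi
    return w

# Bagging: each member trains on a bootstrap resample, then majority vote.
members = []
for _ in range(5):                              # 5-member ensemble (assumed)
    idx = rng.integers(0, n, n)
    members.append(train_perceptron(X[idx], y[idx]))

Xb = np.hstack([X, np.ones((n, 1))])
votes = np.array([(Xb @ w > 0).astype(int) for w in members])
pred = (votes.sum(axis=0) >= 3).astype(int)     # majority of 5
accuracy = (pred == y).mean()
```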
NASA Astrophysics Data System (ADS)
Li, Tianxing; Zhou, Junxiang; Deng, Xiaozhong; Li, Jubo; Xing, Chunrong; Su, Jianxin; Wang, Huiliang
2018-07-01
A manufacturing error of a cycloidal gear is the key factor affecting the transmission accuracy of a robot rotary vector (RV) reducer. A methodology is proposed to realize the digitized measurement and data processing of the cycloidal gear manufacturing error based on the gear measuring center, which can quickly and accurately measure and evaluate the manufacturing error of the cycloidal gear by using both whole tooth profile measurement and single tooth profile measurement. By analyzing the particularity of the cycloidal profile and its effect on the actual meshing characteristics of the RV transmission, the cycloid profile measurement strategy is planned, and the theoretical profile model and error measurement model of cycloid-pin gear transmission are established. Through digital processing technology, the theoretical trajectory of the probe and the normal vector of the measured point are calculated. By means of the precision measurement principle and error compensation theory, a mathematical model for the accurate calculation and data processing of the manufacturing error is constructed, and the actual manufacturing error of the cycloidal gear is obtained by an optimization iterative solution. Finally, the measurement experiment of the cycloidal gear tooth profile is carried out on the gear measuring center and the HEXAGON coordinate measuring machine, respectively. The measurement results verify the correctness and validity of the measurement theory and method. This methodology will provide the basis for the accurate evaluation and the effective control of manufacturing precision of the cycloidal gear in a robot RV reducer.
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast and whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than that for total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows whereas the palinstrophy singular vector forecast leads sometimes to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale of the "bred mode" behavior.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
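The syndrome equation these papers build on can be illustrated in a few lines of GF(2) polynomial arithmetic: for a rate-1/2 code with generators g1, g2, a received pair (r1, r2) is a codeword iff s = r1·g2 + r2·g1 vanishes. The generators and message below are illustrative choices, and this sketch only computes the syndrome; it does not implement the papers' Viterbi-like search of the error coset.

```python
# GF(2) polynomial multiplication, coefficients as 0/1 lists (low order first).
def polymul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

g1 = [1, 0, 1]                      # 1 + D^2
g2 = [1, 1, 1]                      # 1 + D + D^2
u = [1, 0, 1, 1]                    # message polynomial (illustrative)

v1 = polymul_gf2(u, g1)             # encoded output streams
v2 = polymul_gf2(u, g2)

# Error-free syndrome: v1*g2 + v2*g1 = u*g1*g2 + u*g2*g1 = 0 over GF(2).
s_clean = [a ^ b for a, b in zip(polymul_gf2(v1, g2), polymul_gf2(v2, g1))]

# A single channel error in v1 makes the syndrome nonzero: s = e1*g2.
v1_err = v1.copy()
v1_err[2] ^= 1
s_err = [a ^ b for a, b in zip(polymul_gf2(v1_err, g2), polymul_gf2(v2, g1))]
```

Any error pattern with the same syndrome differs from the true one by a codeword, which is exactly the coset structure the decoding algorithms exploit.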
NASA Astrophysics Data System (ADS)
Kadaj, Roman
2016-12-01
The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection on the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. Our analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach for the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested.
An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional model of the GNSS observations.
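The preferred functional model argued for above, writing observation equations directly for the Cartesian GNSS vector components, can be sketched with a toy adjustment: one unknown point tied to two known stations by observed baseline vectors. The coordinates and noise levels are synthetic, and the sketch solves in Cartesian coordinates rather than the paper's linearized geodetic parametrization.

```python
import numpy as np

# Known station coordinates and an unknown point P (all synthetic, in metres).
rng = np.random.default_rng(3)
stations = np.array([[0.0, 0.0, 0.0], [1000.0, 200.0, -50.0]])
P_true = np.array([400.0, 900.0, 30.0])

# Observed GNSS baseline vectors station -> P, each with small random error.
obs = [P_true - s + rng.normal(0, 0.005, 3) for s in stations]

# Observation equations per vector: P - s_i = obs_i, i.e. I * P = obs_i + s_i.
A = np.vstack([np.eye(3) for _ in stations])
y = np.concatenate([o + s for o, s in zip(obs, stations)])
P_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because the vectors enter the adjustment in their original Cartesian form, no conversion to ellipsoid elements, and hence no conversion-induced systematic error, occurs.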
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by exploiting the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived, and the authors extend it to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
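The scalar baseline can be sketched as a local-statistics (Lee-type) LMMSE filter for the multiplicative model y = x·n with E[n] = 1. A single-band numpy sketch, with illustrative window size and noise variance (this is the conventional scalar filter, not the paper's multichannel extension):

```python
import numpy as np

def lee_filter(y, noise_var, win=5):
    """Scalar LMMSE (Lee) filter for multiplicative noise y = x * n, E[n] = 1."""
    pad = win // 2
    yp = np.pad(y, pad, mode="reflect")
    # local mean and variance over a sliding window
    windows = np.lib.stride_tricks.sliding_window_view(yp, (win, win))
    mean = windows.mean(axis=(-1, -2))
    var = windows.var(axis=(-1, -2))
    # signal variance implied by the multiplicative model:
    # var_y = (var_x + mean^2) * (1 + noise_var) - mean^2
    var_x = np.maximum((var - mean ** 2 * noise_var) / (1.0 + noise_var), 0.0)
    w = np.where(var > 0, var_x / np.maximum(var, 1e-12), 0.0)  # LMMSE gain
    return mean + w * (y - mean)
```

The vector filter of the paper additionally couples the bands through their cross-covariance, which is what reduces the output MSE further.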
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important both for the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
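The fit-then-correct procedure can be illustrated with synthetic numbers (the falloff curve, noise level, and flux value below are invented; only the Chebyshev-fit-and-divide step follows the Letter):

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Synthetic stand-in for the HMI data: normalized whole-AR flux vs. radial
# distance r from disk center (in units of the solar radius, up to ~60 deg)
rng = np.random.default_rng(1)
r = rng.uniform(0.0, 0.87, 2000)
true_curve = 1.0 - 0.35 * r ** 2                 # assumed center-to-limb falloff
flux_norm = true_curve + 0.05 * rng.standard_normal(r.size)

# Fit the center-to-limb run with a Chebyshev series, as in the Letter
fit = Chebyshev.fit(r, flux_norm, deg=4)

# Correct a measured flux at radial distance r0: dividing by the fitted
# fractional value is the same as multiplying by the correction factor 1/fit(r0)
r0 = 0.6
measured_flux = 1.2e22                            # Mx, hypothetical
corrected_flux = measured_flux / fit(r0)
```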
NASA Astrophysics Data System (ADS)
Duan, Wansuo; Zhao, Peng
2017-04-01
Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; the second is characterized by a pattern almost opposite to the first. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate an SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate their large errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent the areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also consistent with the sensitive areas related to initial errors determined by previous studies. 
This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.
Experiments With Magnetic Vector Potential
ERIC Educational Resources Information Center
Skinner, J. W.
1975-01-01
Describes the experimental apparatus and method for the study of magnetic vector potential (MVP). Includes a discussion of inherent errors in the calculations involved, precision of the results, and further applications of MVP. (GS)
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.
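The syndrome construction can be sketched for the simpler (2, 1) case: with generator polynomials g1(D), g2(D) and received polynomials r1(D), r2(D), the syndrome S(D) = r1·g2 + r2·g1 over GF(2) vanishes exactly when the received pair is a codeword. A numpy sketch using a standard constraint-length-3 code (the generators are illustrative, not taken from the paper):

```python
import numpy as np

def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials (coefficient arrays, lowest order first)."""
    return np.convolve(a, b) % 2

def syndrome(r1, r2, g1, g2):
    """Syndrome S(D) = r1(D)g2(D) + r2(D)g1(D) over GF(2) for a (2,1) CC."""
    s1, s2 = gf2_polymul(r1, g2), gf2_polymul(r2, g1)
    n = max(s1.size, s2.size)
    s1 = np.pad(s1, (0, n - s1.size))
    s2 = np.pad(s2, (0, n - s2.size))
    return (s1 + s2) % 2

# Illustrative generators g1 = 1 + D^2, g2 = 1 + D + D^2 (standard K = 3 code)
g1, g2 = np.array([1, 0, 1]), np.array([1, 1, 1])
m = np.array([1, 0, 1, 1])                        # message polynomial
c1, c2 = gf2_polymul(m, g1), gf2_polymul(m, g2)   # codeword pair: syndrome is 0
```

Any nonzero syndrome then pins down a coset of the code, within which a minimum-weight error vector is sought, as the paper describes.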
NASA Astrophysics Data System (ADS)
Milione, Giovanni; Lavery, Martin P. J.; Huang, Hao; Ren, Yongxiong; Xie, Guodong; Nguyen, Thien An; Karimi, Ebrahim; Marrucci, Lorenzo; Nolan, Daniel A.; Alfano, Robert R.; Willner, Alan E.
2015-05-01
Vector modes are spatial modes that have spatially inhomogeneous states of polarization, such as, radial and azimuthal polarization. They can produce smaller spot sizes and stronger longitudinal polarization components upon focusing. As a result, they are used for many applications, including optical trapping and nanoscale imaging. In this work, vector modes are used to increase the information capacity of free space optical communication via the method of optical communication referred to as mode division multiplexing. A mode (de)multiplexer for vector modes based on a liquid crystal technology referred to as a q-plate is introduced. As a proof of principle, using the mode (de)multiplexer four vector modes each carrying a 20 Gbit/s quadrature phase shift keying signal on a single wavelength channel (~1550nm), comprising an aggregate 80 Gbit/s, were transmitted ~1m over the lab table with <-16.4 dB (<2%) mode crosstalk. Bit error rates for all vector modes were measured at the forward error correction threshold with power penalties < 3.41dB.
Evaluation of the SPAR thermal analyzer on the CYBER-203 computer
NASA Technical Reports Server (NTRS)
Robinson, J. C.; Riley, K. M.; Haftka, R. T.
1982-01-01
The use of the CYBER 203 vector computer for thermal analysis is investigated. Strengths of the CYBER 203 include the ability to perform, in vector mode using a 64 bit word, 50 million floating point operations per second (MFLOPS) for addition and subtraction, 25 MFLOPS for multiplication and 12.5 MFLOPS for division. The speed of scalar operation is comparable to that of a CDC 7600 and is some 2 to 3 times faster than Langley's CYBER 175s. The CYBER 203 has 1,048,576 64-bit words of real memory with an 80 nanosecond (nsec) access time. Memory is bit addressable and provides single error correction, double error detection (SECDED) capability. The virtual memory capability handles data in either 512 or 65,536 word pages. The machine has 256 registers with a 40 nsec access time. The weaknesses of the CYBER 203 include the amount of vector operation overhead and some data storage limitations. In vector operations there is a considerable amount of time before a single result is produced so that vector calculation speed is slower than scalar operation for short vectors.
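The short-vector penalty comes from a fixed startup cost that is amortized only over long vectors. A toy timing model makes the crossover explicit (all numbers here are hypothetical, not measured CYBER 203 values):

```python
def vec_time(n, startup=1000.0, per_result=20.0):
    """Hypothetical vector-pipeline time (ns): large startup, fast per element."""
    return startup + per_result * n

def scalar_time(n, per_result=100.0):
    """Hypothetical scalar time (ns): no startup, slower per element."""
    return per_result * n

# Vector mode only wins once the startup is amortized over enough elements
crossover = next(n for n in range(1, 1000) if vec_time(n) < scalar_time(n))
```

Below the crossover length, scalar code is faster despite its lower peak rate, which is the behavior reported for the CYBER 203.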
Query Auto-Completion Based on Word2vec Semantic Similarity
NASA Astrophysics Data System (ADS)
Shao, Taihua; Chen, Honghui; Chen, Wanyu
2018-04-01
Query auto-completion (QAC) is the first step of information retrieval, helping users formulate an entire query after typing only a short prefix. Traditional QAC models ignore the contribution of the semantic relevance between queries, yet similar queries often express extremely similar search intent. In this paper, we propose FS-QAC, a hybrid model based on query semantic similarity as well as query frequency. We use the word2vec method to measure the semantic similarity between intended queries and previously submitted queries. By combining both features, our experiments show that the FS-QAC model improves performance in predicting the user's query intention and helping formulate the right query. Our experimental results show that the optimal hybrid model yields a 7.54% improvement in MRR over a state-of-the-art baseline on the public AOL query logs.
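A hybrid frequency-plus-similarity score of this kind can be sketched as follows. The toy embeddings below stand in for word2vec vectors, and the linear weighting and normalization are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fs_qac_score(cand, prefix_vec, emb, freq, lam=0.5):
    """Hybrid score: lam * normalized frequency + (1 - lam) * semantic
    similarity. Weighting and normalization are illustrative."""
    max_f = max(freq.values())
    return lam * freq[cand] / max_f + (1 - lam) * cosine(emb[cand], prefix_vec)

# Hypothetical candidate completions with toy 2-d embeddings and log frequencies
emb = {"weather today": np.array([0.9, 0.1]),
       "weather forecast": np.array([0.8, 0.2]),
       "web browser": np.array([0.1, 0.9])}
freq = {"weather today": 120, "weather forecast": 300, "web browser": 500}
prefix_vec = np.array([0.85, 0.15])    # embedding of the user's prefix intent

ranked = sorted(emb, key=lambda q: fs_qac_score(q, prefix_vec, emb, freq),
                reverse=True)
```

A pure-frequency ranker would put "web browser" first; the hybrid score promotes the semantically closer completion.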
VEGF promotes tumorigenesis and angiogenesis of human glioblastoma stem cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oka, Naoki; Soeda, Akio; Inagaki, Akihito
2007-08-31
There is increasing evidence for the presence of cancer stem cells (CSCs) in malignant brain tumors, and these CSCs may play a pivotal role in tumor initiation, growth, and recurrence. Vascular endothelial growth factor (VEGF) promotes the proliferation of vascular endothelial cells (VECs) and the neurogenesis of neural stem cells. Using CSCs derived from human glioblastomas and a retrovirus expressing VEGF, we examined the effects of VEGF on the properties of CSCs in vitro and in vivo. Although VEGF did not affect the property of CSCs in vitro, the injection of mouse brains with VEGF-expressing CSCs led to the massive expansion of vascular-rich GBM, tumor-associated hemorrhage, and high morbidity, suggesting that VEGF promoted tumorigenesis via angiogenesis. These results revealed that VEGF induced the proliferation of VEC in the vascular-rich tumor environment, the so-called stem cell niche.
Hierarchical Rhetorical Sentence Categorization for Scientific Papers
NASA Astrophysics Data System (ADS)
Rachman, G. H.; Khodra, M. L.; Widyantoro, D. H.
2018-03-01
Important information in scientific papers is often conveyed by rhetorical sentences belonging to certain categories. To extract this information, sentence categorization must be performed. Previous work has addressed this task using word frequency, word semantic similarity, hierarchical classification, and other techniques. This paper presents rhetorical sentence categorization for scientific papers, employing TF-IDF and Word2Vec to capture word frequency and word semantic similarity, together with hierarchical classification. Every experiment is tested with two classifiers, namely Naïve Bayes and linear SVM. This paper shows that the hierarchical classifier is better than the flat classifier with either TF-IDF or Word2Vec, although the improvement is modest: from 27.82% with the flat classifier to 29.61% with the hierarchical classifier, an increase of almost 2%. It also shows that a different learning model for each child category can be built by the hierarchical classifier.
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is offset from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
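The elliptical-area correction is simple arithmetic: with the study's average axes (major axis 10.2 mm, 8.6% larger than the minor), a circular-area assumption based on the major diameter overstates the cross-section, and hence the volume flow Q = v̄·A, by exactly the axis ratio. A small sketch (variable names are ours):

```python
import math

def ellipse_area(major_mm, minor_mm):
    """Cross-sectional area of an elliptical vessel (axes are full diameters)."""
    return math.pi * (major_mm / 2.0) * (minor_mm / 2.0)

# Average axes from the study: major 10.2 mm, 8.6% larger than the minor
major = 10.2
minor = major / 1.086
a_ellipse = ellipse_area(major, minor)
a_circle = math.pi * (major / 2.0) ** 2   # circular assumption, major diameter

# Relative overestimation of area, and hence of volume flow Q = v_mean * A
overestimate = a_circle / a_ellipse - 1.0
```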
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
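The minimum-MSE property can be checked numerically: projecting centered data onto the top-k eigenvectors of the sample covariance leaves a residual mean-squared error equal to the sum of the discarded eigenvalues. A numpy sketch with synthetic data (the covariance structure below is made up):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.5])  # synthetic

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (X.shape[0] - 1)       # sample covariance operator
evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(evals)[::-1]          # sort descending
evals, evecs = evals[order], evecs[:, order]

k = 2                                    # keep the top-k orthonormal vectors
Xk = Xc @ evecs[:, :k] @ evecs[:, :k].T  # rank-k reconstruction
# Normalizing by N-1 to match the covariance, the residual MSE equals the
# sum of the discarded eigenvalues exactly
mse = np.sum((Xc - Xk) ** 2) / (X.shape[0] - 1)
```

This is the shared mechanism behind the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions discussed in the paper.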
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
The loss-of-allele assay for ES cell screening and mouse genotyping.
Frendewey, David; Chernomorsky, Rostislav; Esau, Lakeisha; Om, Jinsop; Xue, Yingzi; Murphy, Andrew J; Yancopoulos, George D; Valenzuela, David M
2010-01-01
Targeting vectors used to create directed mutations in mouse embryonic stem (ES) cells consist, in their simplest form, of a gene for drug selection flanked by mouse genomic sequences, the so-called homology arms that promote site-directed homologous recombination between the vector and the target gene. The VelociGene method for the creation of targeted mutations in ES cells employs targeting vectors, called BACVecs, which are based on bacterial artificial chromosomes. Compared with conventional short targeting vectors, BACVecs provide two major advantages: (1) their much larger homology arms promote high targeting efficiencies without the need for isogenicity or negative selection strategies; and (2) they enable deletions and insertions of up to 100 kb in a single targeting event, making possible gene-ablating definitive null alleles and other large-scale genomic modifications. Because of their large arm sizes, however, BACVecs do not permit screening by conventional assays, such as long-range PCR or Southern blotting, that link the inserted targeting vector to the targeted locus. To exploit the advantages of BACVecs for gene targeting, we inverted the conventional screening logic in developing the loss-of-allele (LOA) assay, which quantifies the number of copies of the native locus to which the mutation was directed. In a correctly targeted ES cell clone, the LOA assay detects one of the two native alleles (for genes not on the X or Y chromosome), the other allele being disrupted by the targeted modification. We apply the same principle in reverse as a gain-of-allele assay to quantify the copy number of the inserted targeting vector. The LOA assay reveals a correctly targeted clone as having lost one copy of the native target gene and gained one copy of the drug resistance gene or other inserted marker. The combination of these quantitative assays makes LOA genotyping unequivocal and amenable to automated scoring. 
We use the quantitative polymerase chain reaction (qPCR) as our method of allele quantification, but any method that can reliably distinguish the difference between one and two copies of the target gene can be used to develop an LOA assay. We have designed qPCR LOA assays for deletions, insertions, point mutations, domain swaps, conditional, and humanized alleles and have used the insert assays to quantify the copy number of random insertion BAC transgenics. Because of its quantitative precision, specificity, and compatibility with high throughput robotic operations, the LOA assay eliminates bottlenecks in ES cell screening and mouse genotyping and facilitates maximal speed and throughput for knockout mouse production. Copyright (c) 2010 Elsevier Inc. All rights reserved.
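The allele-quantification arithmetic can be illustrated with the standard 2^-ΔΔCt calculation (a generic qPCR method, not necessarily the exact normalization used by VelociGene; the Ct values below are invented). A correctly targeted clone shows roughly half the native-allele copy number of the parental line:

```python
def relative_copy_number(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Copy number of a target locus relative to a calibrator sample, by the
    2^-ddCt method (assumes ~100% amplification efficiency)."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: in the targeted clone, the native-locus assay comes
# up one cycle later than in the parental (calibrator) cells
clone_copies = relative_copy_number(ct_target=26.0, ct_ref=20.0,
                                    ct_target_cal=25.0, ct_ref_cal=20.0)
# ~0.5 relative copies: one of the two native alleles was lost to targeting
```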
Xu, Rui; Chen, Wenbin; Zhang, Zhifen; Qiu, Yang; Wang, Yong; Zhang, Bingchang; Lu, Wei
2018-05-30
Bone-Marrow Stromal Cells (BMSCs)-derived vascular endothelial cells (VECs) is regarded as an important therapeutic strategy for spinal cord injury, disc degeneration, cerebral ischemic disease and diabetes. The change in DNA methylation level is essential for stem cell differentiation. However, the DNA methylation related mechanisms underlying the endothelial differentiation of BMSCs are not well understood. In this study, DNA methyltransferase inhibitor, 5-aza-2'-deoxycytidine (5-aza-dC) significantly elevated the endothelial markers expression (CD31/PECAM1, CD105/ENG, eNOS and VE-cadherin), as well as promoted the capacity of angiogenesis on Matrigel. The result of Alexa 488-Ac-LDL uptake assay indicated that the differentiation ratio of BMSCs into VECs was 68.7% in 5-azaz-dC induced differentiation. And then we screened differentiation inducers with altered expression patterns and DNA methylation levels in four important families (VEGF, ANG, FGF and ETS). By integrating these data, five endothelial differentiation inducers (VEGFA, ANGPT2, FGF2, FGF9 and ETS1) which were directly upregulated by 5-aza-dC and five indirect factors (FGF1, FGF3, ETS2, ETV1 and ETV4) were identified. These data suggested that 5-aza-dC is an excellent chemical molecule for BMSCs differentiation into functional VECs and also provided essential clues for DNA methylation related signaling during 5-aza-dC induced endothelial differentiation of BMSCs. Copyright © 2018 Elsevier B.V. All rights reserved.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Orion Exploration Flight Test-1 Contingency Drogue Deploy Velocity Trigger
NASA Technical Reports Server (NTRS)
Gay, Robert S.; Stochowiak, Susan; Smith, Kelly
2013-01-01
As a backup to the GPS-aided Kalman filter and the barometric altimeter, an "adjusted" velocity trigger is used during entry to initiate the chain of events that leads to drogue chute deploy for the Orion Multi-Purpose Crew Vehicle (MPCV) Exploration Flight Test-1 (EFT-1). Even though this scenario is multiple failures deep, the Orion Guidance, Navigation, and Control (GN&C) software makes use of a clever technique taken from the Mars Science Laboratory (MSL) program, which recently successfully landed the Curiosity rover on Mars; MSL used this technique to jettison the heat shield at the proper time during descent. Originally, Orion used the un-adjusted navigated velocity, but the removal of the Star Tracker to save costs for EFT-1 increased attitude errors, which increased inertial propagation errors to the point where the un-adjusted velocity caused altitude dispersions at drogue deploy to be too large. Thus, to reduce dispersions, the velocity vector is projected onto a "reference" vector that represents the nominal "truth" vector at the desired point in the trajectory. Because the navigation errors are largely perpendicular to the truth vector, this projection significantly reduces dispersions in the velocity magnitude. This paper details the evolution of this trigger method for the Orion project and covers the various methods tested to determine the reference "truth" vector, as well as the point in the trajectory at which it should be computed.
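The projection trick itself is a one-liner: dot the navigated velocity with the unit nominal "truth" vector, so error components perpendicular to it drop out of the magnitude. A sketch with made-up numbers:

```python
import numpy as np

def adjusted_velocity(v_nav, v_ref):
    """Project the navigated velocity onto the nominal 'truth' direction.
    Navigation errors perpendicular to v_ref do not affect the result."""
    u = v_ref / np.linalg.norm(v_ref)
    return float(v_nav @ u)               # adjusted speed along the reference

# Illustration: true velocity plus a purely perpendicular navigation error
v_true = np.array([100.0, 0.0, 0.0])
err_perp = np.array([0.0, 8.0, 0.0])      # error orthogonal to the truth vector
v_nav = v_true + err_perp

raw_speed = np.linalg.norm(v_nav)         # inflated by the perpendicular error
adj_speed = adjusted_velocity(v_nav, v_true)
```

The raw magnitude carries the full perpendicular error, while the adjusted speed recovers the true value, which is why the trigger dispersion shrinks.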
Video Vectorization via Tetrahedral Remeshing.
Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping
2017-02-09
We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.
Calibration Errors in Interferometric Radio Polarimetry
NASA Astrophysics Data System (ADS)
Hales, Christopher A.
2017-08-01
Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report presents a two-part method for estimating the directions of arrival (DOAs) of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; the arrival directions are estimated in the second. The second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
AveBoost2: Boosting for Noisy Data
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.
2004-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. In previous work, we developed an algorithm, AveBoost, that constructed distributions orthogonal to the mistake vectors of all the previous models, and then averaged them to create the next base model's distribution. Our experiments demonstrated the superior accuracy of our approach. In this paper, we slightly revise our algorithm to allow us to obtain non-trivial theoretical results: bounds on the training error and generalization error (the difference between training and test error). Our averaging process has a regularizing effect which, as expected, leads to a worse training error bound for our algorithm than for AdaBoost but a superior generalization error bound. For this paper, we experimented with the data both as originally supplied and with added label noise, in which a small fraction of the data has its original label changed. Noisy data are notoriously difficult for AdaBoost to learn. Our algorithm's performance improvement over AdaBoost is even greater on the noisy data than on the original data.
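The orthogonality step can be sketched for plain AdaBoost (AveBoost then averages such distributions over all previous models; the numbers below are illustrative): the updated distribution gives the previous model's mistakes exactly half the total weight, so that model's errors are uncorrelated with the next distribution.

```python
import numpy as np

def adaboost_reweight(d, mistakes):
    """AdaBoost-style update: rescale so that the previous model's mistakes
    and its correct examples each carry half of the next distribution."""
    err = float(d @ mistakes)             # weighted error of the previous model
    d_next = d.copy()
    d_next[mistakes == 1] /= 2.0 * err
    d_next[mistakes == 0] /= 2.0 * (1.0 - err)
    return d_next

d = np.full(8, 1 / 8)                     # uniform initial distribution
mistakes = np.array([1, 0, 0, 1, 0, 0, 0, 0])  # previous model erred on 2 of 8
d2 = adaboost_reweight(d, mistakes)
# Under d2, the previous model's weighted error is exactly 1/2
```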
Zhang, Jiamei; Wang, Yan; Chen, Xiaoqin
2016-04-01
To evaluate and compare refractive outcomes of moderate- and high-astigmatism correction after wavefront-guided laser in situ keratomileusis (LASIK) and small-incision lenticule extraction (SMILE). This comparative study enrolled a total of 64 eyes that had undergone SMILE (42 eyes) and wavefront-guided LASIK (22 eyes). Preoperative cylindrical diopters were ≤-2.25 D in moderate- and >-2.25 D in high-astigmatism subgroups. The refractive results were analyzed based on the Alpins vector method that included target-induced astigmatism, surgically induced astigmatism, difference vector, correction index, index of success, magnitude of error, angle of error, and flattening index. All subjects completed the 3-month follow-up. No significant differences were found in the target-induced astigmatism, surgically induced astigmatism, and difference vector between SMILE and wavefront-guided LASIK. However, the average angle of error value was -1.00 ± 3.16 after wavefront-guided LASIK and 1.22 ± 3.85 after SMILE with statistical significance (P < 0.05). The absolute angle of error value was statistically correlated with difference vector and index of success after both procedures. In the moderate-astigmatism group, correction index was 1.04 ± 0.15 after wavefront-guided LASIK and 0.88 ± 0.15 after SMILE (P < 0.05). However, in the high-astigmatism group, correction index was 0.87 ± 0.13 after wavefront-guided LASIK and 0.88 ± 0.12 after SMILE (P = 0.889). Both procedures showed preferable outcomes in the correction of moderate and high astigmatism. However, high astigmatism was undercorrected after both procedures. Axial error of astigmatic correction may be one of the potential factors for the undercorrection.
Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak; ...
2017-12-20
In this addendum to Phys. Rev. D 95, 054018 (2017) we recompute the rates for the decays of the Higgs boson to a vector quarkonium plus a photon, where the vector quarkonium is J/psi, Upsilon(1S), or Upsilon(2S). We correct an error in the Abel-Padé summation formula that was used to carry out the evolution of the quarkonium light-cone distribution amplitude in Phys. Rev. D 95, 054018 (2017). We also correct an error in the scale of the quarkonium wave function at the origin in Phys. Rev. D 95, 054018 (2017) and introduce several additional refinements in the calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodwin, Geoffrey T.; Chung, Hee Sok; Ee, June-Haak
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC-propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis performed using EUVE state vectors showed that the EUVE four-day OBC-propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit, at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed to optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than 1 km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen.
This algorithm can easily be coded in software that picks the epoch, within a specified time range, that minimizes the OBC propagation error. This technique should greatly improve the accuracy of OBC propagation on board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing the complexity of ground processing.
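The epoch-selection idea described above reduces to a simple search: propagate from each candidate state vector, score the result against the definitive ground ephemeris, and keep the epoch with the smallest error. The sketch below uses hypothetical interfaces (the candidate list, the `propagate` callable, and the ephemeris dictionaries are stand-ins, not the flight software).

```python
import math

def best_epoch(candidates, propagate, definitive):
    """Pick the upload epoch whose on-board propagation best matches
    the definitive ground ephemeris.

    candidates -- list of (epoch, state_vector) taken from the ephemeris
    definitive -- {time: (x, y, z)} reference positions from the ground
    propagate  -- propagate(epoch, state) -> {time: (x, y, z)}, a stand-in
                  for the limited OBC geopotential model
    """
    def rms(pred):
        # RMS position error over the comparison span
        errs = [math.dist(pred[t], definitive[t]) for t in definitive]
        return math.sqrt(sum(e * e for e in errs) / len(errs))
    # exhaustive search over candidate epochs in the allowed window
    return min(candidates, key=lambda c: rms(propagate(*c)))
```

No additional orbit estimation is needed: the candidates and the reference trajectory both come from the definitive ephemeris already produced on the ground.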
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques that output the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median filter, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by a filter structure with two independent weight vectors for the angular and distance domains of the vector space. To adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
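As a concrete instance of the selection-filter idea, the classic vector median filter outputs the input sample that minimizes the aggregate distance to all other samples in the window. Below is a minimal unweighted sketch; the framework in the abstract generalizes this with separate weight vectors for the angular and distance domains, which this illustration omits.

```python
def vector_median(window, p=2):
    """Vector median filter: return the sample in the window whose
    aggregate L_p distance to every other sample is smallest.
    Because the output is always one of the inputs, impulsive
    outliers are rejected rather than smeared."""
    def lp(a, b):
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)
    return min(window, key=lambda v: sum(lp(v, u) for u in window))
```

Applied to a 4-sample window of RGB pixels containing one color impulse, the filter returns one of the uncorrupted pixels.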
NASA Astrophysics Data System (ADS)
Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi
2017-01-01
This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the method of the augmented error system. First, an assistant system is introduced for state shifting. Then, because the state equation of the stochastic system cannot be differentiated due to Brownian motion, an integrator is introduced to overcome this difficulty. Thus, the augmented error system, which contains the integrator vector, control input, reference signal, error vector and state of the system, is constructed. This transforms the tracking problem of optimal preview control for the linear stochastic system into an optimal output tracking problem for the augmented error system. Using dynamic programming from stochastic control theory, the optimal controller with previewable signals for the augmented error system, which is equal to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.
Nicopoullos, James D M; Abdalla, Hossam
2011-01-01
To determine optimal management with one or two mature follicles after stimulation. Retrospective analysis. Lister fertility clinic. A total of 1,350 IVF/intracytoplasmic sperm injection cycles (7.3% of total) during 1998-2009 were found to have one or two mature follicles. Group 1 (n = 807) comprised those who proceeded to vaginal egg collection (VEC) (59.8%; outcome per egg collection), group 2 (n = 248) those who converted to IUI (18.4%; outcome per insemination) and group 3 (n = 259) those who abandoned the current cycle (21.9%; outcome per abandoned cycle in first subsequent cycle). Live birth rate, clinical pregnancy rate, and biochemical pregnancy rate. Biochemical pregnancy rates of 13.1%, 4.9%, and 9.7%, clinical pregnancy rates of 8.1%, 3.6%, and 7.2%, and ongoing pregnancy rates of 6.8%, 2.0%, and 5.5% were achieved in groups 1, 2, and 3, respectively. All pregnancy outcomes were significantly higher after VEC (group 1) than for those converted to IUI (group 2), and all pregnancy outcomes were higher with borderline significance in group 3 vs. group 2. There was no significant difference in outcome between groups 1 and 3. Our data suggest that for such poor responders, proceeding to VEC may represent their best chance of a successful outcome. Conversion to IUI offers the poorest outcome, and despite the potential for improvements in cycle protocol, abandoning and a further attempt does not improve outcome (using the abandoned cycle as the denominator). Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Effect of different dialyzer membranes on cutaneous microcirculation during hemodialysis.
Sato, M; Morita, H; Ema, H; Yamaguchi, S; Amano, I
2006-12-01
Biocompatibility profiles of synthetic membranes may vary. In this prospective crossover study, we examined the effect of various membranes on cutaneous microcirculation during HD. 11 HD patients without cardiovascular complications were enrolled in this study. They were dialyzed using three types of membrane in a randomized order: ethylene-vinyl alcohol copolymer (EVAL), vitamin E-bonded cellulose (VE-C) and polysulfone (PS). The transcutaneous oxygen tension (TcPO2) was examined on the dorsum of foot to assess the cutaneous microcirculation. Serum biochemical parameters were also measured. The TcPO2 as a percentage of the predialysis level decreased from the beginning of HD, and significant differences were observed after 15 min of HD between EVAL and the other 2 membranes (98 +/- 6% (mean +/- SD) for EVAL versus 89 +/- 7% for VE-C (p < 0.01) and 88 +/- 10% for PS (p < 0.01)). Furthermore, there were significant differences at 30 and 60 min between EVAL and PS (30 min: 93 +/- 9% for EVAL versus 85 +/- 7% for PS (p < 0.05); 60 min: 92 +/- 10% for EVAL versus 79 +/- 10% for PS (p < 0.01)). The serum level of thiobarbituric acid reactants (TBARs), a marker of lipid peroxidation, increased significantly at the end of HD relative to that at the beginning of HD when using a PS membrane (from 1.9 +/- 0.5 to 2.1 +/- 0.5 nmol/ml, p < 0.05). Our results indicate that an EVAL membrane is superior to PS and VE-C membranes in terms of its smaller influence on cutaneous microcirculation. The repeated occurrence of microcirculatory disturbance during HD sessions may cause chronic endothelial dysfunction and even cardiovascular complications in HD patients.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
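The vector-quantization step described above maps each multispectral pixel vector to its nearest codebook entry and stores only the index. A minimal sketch of that encoding stage follows; codebook design, the Huffman post-coding, and the reported error metrics are outside this illustration, and the function names are ours.

```python
def vq_encode(vectors, codebook):
    """Map each multispectral pixel vector to the index of its nearest
    codebook entry under squared-Euclidean distortion. The compressed
    representation is the list of indices (plus the shared codebook)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vectors]

def vq_decode(indices, codebook):
    """Reconstruct the image vectors from indices; the difference
    between input and reconstruction is the quantization error."""
    return [codebook[k] for k in indices]
```

Enlarging the codebook lowers the reconstruction error at the cost of a lower compression ratio, which is the trade-off the study measures.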
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, which is able to provide more accurate estimates of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
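A heavily simplified sketch of the motion-vector extrapolation idea: each block of the last correctly received frame is projected forward along its own motion vector, and the block where it lands in the lost frame inherits that vector. The paper's hybrid scheme refines this with pixel-level extrapolation and overlap voting, which this illustration omits; block coordinates and collision handling here are our own simplifying assumptions.

```python
def extrapolate_mvs(prev_mvs):
    """Estimate motion vectors for a lost frame from the previous
    frame's vectors. prev_mvs maps block coordinates to (dx, dy).
    Each block is projected along its own vector; the first block to
    land on a position claims it (a crude stand-in for overlap voting)."""
    estimated = {}
    for (bx, by), (dx, dy) in prev_mvs.items():
        target = (bx + dx, by + dy)
        estimated.setdefault(target, (dx, dy))
    return estimated
```

Blocks of the lost frame that no projection reaches would still need concealment by other means (e.g. neighbor averaging), which is part of why a hybrid scheme helps.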
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for DFBP parameter estimation based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
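Error vector magnitude, the figure of merit used above for the coarse parameter search, is computed from received and ideal constellation points. A minimal sketch using one common normalization (RMS reference power; other conventions normalize by peak constellation power), with a hypothetical function name:

```python
import math

def evm_percent(received, ideal):
    """EVM in percent: RMS magnitude of the error vectors between
    received and ideal constellation points, normalized by the RMS
    power of the ideal constellation."""
    err = sum(abs(r - i) ** 2 for r, i in zip(received, ideal))
    ref = sum(abs(i) ** 2 for i in ideal)
    return 100 * math.sqrt(err / ref)
```

In a parameter sweep, each candidate DFBP setting is scored by the EVM of the equalized constellation and the setting with the lowest EVM is kept.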
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better performance in the presence of channel bit errors than methods that use variable-length codes.
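Frequency-Sensitive Competitive Learning biases winner selection by each codeword's win count, so rarely used codewords become more competitive and the codebook is used more evenly. A minimal sketch follows; the count-times-distance fairness function is one common formulation and the interface is our assumption, not necessarily the paper's exact variant.

```python
def fscl_train(data, codebook, lr=0.1, epochs=1):
    """Frequency-Sensitive Competitive Learning codebook update.
    The winner for each sample minimizes win_count * squared distance,
    and only the winner is nudged toward the sample."""
    counts = [1] * len(codebook)
    for _ in range(epochs):
        for x in data:
            def cost(k):
                d = sum((a - b) ** 2 for a, b in zip(x, codebook[k]))
                return counts[k] * d        # frequency-sensitive distortion
            w = min(range(len(codebook)), key=cost)
            counts[w] += 1
            codebook[w] = [c + lr * (a - c) for a, c in zip(x, codebook[w])]
    return codebook
```

Because the winner update is a single nearest-match followed by a local adjustment, it maps naturally onto the associative-memory hardware mentioned in the abstract.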
Role of MUC4-NIDO domain in the MUC4-mediated metastasis of pancreatic cancer cells.
Senapati, S; Gnanapragassam, V S; Moniaux, N; Momi, N; Batra, S K
2012-07-12
MUC4 is a large transmembrane type I glycoprotein that is overexpressed in pancreatic cancer (PC) and has been shown to be associated with its progression and metastasis. However, the exact cellular and molecular mechanism(s) through which MUC4 promotes metastasis of PC cells has been sparsely studied. Here we showed that the nidogen-like (NIDO) domain of MUC4, which is similar to the G1-domain present in the nidogen or entactin (an extracellular matrix protein), contributes to the protein-protein interaction property of MUC4. By this interaction, MUC4 promotes breaching of basement membrane (BM) integrity, and spreading of cancer cells. These observations are corroborated with the data from our study using an engineered MUC4 protein without the NIDO domain, which was ectopically expressed in the MiaPaCa PC cells, lacking endogenous MUC4 and nidogen protein. The in vitro studies demonstrated an enhanced invasiveness of MiaPaCa cells expressing MUC4 (MiaPaCa-MUC4) compared with vector-transfected cells (MiaPaCa-Vec; P=0.003) or cells expressing MUC4 without the NIDO domain (MiaPaCa-MUC4-NIDO(Δ); P=0.03). However, the absence of the NIDO domain has no significant role on cell growth and motility (P=0.93). In the in vivo studies, all the mice orthotopically implanted with MiaPaCa-MUC4 cells developed metastasis to the liver as compared with the MiaPaCa-Vec or the MiaPaCa-MUC4-NIDO(Δ) group, hence supporting our in vitro observations. Additionally, a reduced binding (P=0.0004) of MiaPaCa-MUC4-NIDO(Δ) cells to fibulin-2-coated plates compared with MiaPaCa-MUC4 cells indicated a possible interaction between the MUC4-NIDO domain and fibulin-2, a nidogen-interacting protein. Furthermore, in PC tissue samples, MUC4 colocalized with the fibulin-2 present in the BM. Altogether, our findings demonstrate that the MUC4-NIDO domain significantly contributes to the MUC4-mediated metastasis of PC cells.
This may be partly due to the interaction between the MUC4-NIDO domain and fibulin-2.
Lee, Young Han
2018-04-04
The purposes of this study are to evaluate the feasibility of protocol determination with a convolutional neural networks (CNN) classifier based on short-text classification and to evaluate the agreements by comparing protocols determined by CNN with those determined by musculoskeletal radiologists. Following institutional review board approval, the database of a hospital information system (HIS) was queried for lists of MRI examinations, referring department, patient age, and patient gender. These were exported to a local workstation for analyses: 5258 and 1018 consecutive musculoskeletal MRI examinations were used for the training and test datasets, respectively. The subjects for pre-processing were routine or tumor protocols and the contents were word combinations of the referring department, region, contrast media (or not), gender, and age. The CNN Embedded vector classifier was used with Word2Vec Google news vectors. The test set was tested with each classification model and results were output as routine or tumor protocols. The CNN determinations were evaluated using the receiver operating characteristic (ROC) curves. The accuracies were evaluated by a radiologist-confirmed protocol as the reference protocols. The optimal cut-off values for protocol determination between routine protocols and tumor protocols was 0.5067 with a sensitivity of 92.10%, a specificity of 95.76%, and an area under curve (AUC) of 0.977. The overall accuracy was 94.2% for the ConvNet model. All MRI protocols were correct in the pelvic bone, upper arm, wrist, and lower leg MRIs. Deep-learning-based convolutional neural networks were clinically utilized to determine musculoskeletal MRI protocols. CNN-based text learning and applications could be extended to other radiologic tasks besides image interpretations, improving the work performance of the radiologist.
47 CFR 97.527 - Reimbursement for expenses.
Code of Federal Regulations, 2010 CFR
2010-10-01
VEs and VECs may be reimbursed by examinees for out-of-pocket expenses incurred in preparing, processing...
Guo, Jian-You; Han, Chun-Chao
2010-01-01
Diabetes mellitus is accompanied by hormonal and neurochemical changes that can be associated with anxiety and depression. Diabetes and depression interact negatively, in that depression leads to poor metabolic control and hyperglycemia exacerbates depression. We hypothesize that a novel vanadium complex of vanadium-enriched Cordyceps sinensis (VECS) is beneficial in preventing depression in diabetes and influences the long-term course of glycemic control. Vanadium compounds have the ability to imitate the action of insulin, and this mimicry may have further favorable effects on the level of treatment satisfaction and mood. C. sinensis has an antidepressant-like activity and attenuates the diabetes-induced increase in blood glucose concentrations. We suggest that VECS may be a potential strategy for the contemporary treatment of depression and diabetes through the combined effect of C. sinensis and vanadium. The validity of the hypothesis can most simply be tested by examining blood glucose levels, and swimming and climbing behavior, in streptozotocin-induced hyperglycemic rats.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, A.; Avakian, H.; Burkert, V.
The target and double spin asymmetries of the exclusive pseudoscalar channel $\vec{e}\vec{p}\to ep\pi^0$ were measured for the first time in the deep-inelastic regime using a longitudinally polarized 5.9 GeV electron beam and a longitudinally polarized proton target at Jefferson Lab with the CEBAF Large Acceptance Spectrometer (CLAS). The data were collected over a large kinematic phase space and divided into 110 four-dimensional bins of $Q^2$, $x_B$, $-t$ and $\phi$. Large values of asymmetry moments clearly indicate a substantial contribution to the polarized structure functions from transverse virtual photon amplitudes. The interpretation of the experimental data in terms of generalized parton distributions (GPDs) provides the first insight into the chiral-odd GPDs $\tilde{H}_T$ and $E_T$, and complements previous measurements of unpolarized structure functions sensitive to the GPDs $H_T$ and $\bar{E}_T$. Finally, these data provide necessary constraints for chiral-odd GPD parametrizations and will strongly influence existing theoretical handbag models.
Gamow-Teller Strength in the Continuum Studied via the (p,n) Reaction
NASA Astrophysics Data System (ADS)
Wakasa, T.; Hatanaka, K.; Sakai, H.; Fujita, S.; Nonaka, T.; Ohnishi, T.; Yako, K.; Sekiguchi, K.; Okamura, H.; Otsu, H.; Ishida, S.; Sakamoto, N.; Uesaka, T.; Satou, Y.; Greenfield, M. B.
2002-09-01
The double differential cross sections for $\theta_{\rm lab}$ between 0.0° and 14.7° and the polarization transfer coefficient $D_{NN}(0°)$ for the $^{27}$Al($\vec{p}$,$\vec{n}$) reaction have been measured at a bombarding energy of 295 MeV. A multipole decomposition technique is applied to the cross section data to extract the L = 0, 1, 2, and 3 contributions. The Gamow-Teller (GT) strength B(GT) deduced from the L = 0 contribution is compared with the B(GT) values calculated in a full sd shell-model space. The sum of B(GT) values up to 20 MeV excitation is S
Proprioception Is Robust under External Forces
Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.
2013-01-01
Information from cutaneous, muscle and joint receptors is combined with efferent information to create a reliable percept of the configuration of our body (proprioception). We exposed the hand to several horizontal force fields to examine whether external forces influence this percept. In an end-point task subjects reached visually presented positions with their unseen hand. In a vector reproduction task, subjects had to judge a distance and direction visually and reproduce the corresponding vector by moving the unseen hand. We found systematic individual errors in the reproduction of the end-points and vectors, but these errors did not vary systematically with the force fields. This suggests that human proprioception accounts for external forces applied to the hand when sensing the position of the hand in the horizontal plane.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch, obtained from other navigation equipment or sensors, is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates are within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. For all vector candidates, some are further eliminated by a derived approximate inequality, which accelerates the searching process. Experimental results show that, compared to the traditional method with only a baseline length constraint, this new method can utilize a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not great.
NASA Astrophysics Data System (ADS)
Zhou, Wen; Qin, Chaoyi
2017-09-01
We demonstrate multi-frequency QPSK millimeter-wave (mm-wave) vector signal generation enabled by MZM-based optical carrier suppression (OCS) modulation and in-phase/quadrature (I/Q) modulation. We numerically simulate the generation of 40-, 80- and 120-GHz vector signals. Here, the three different signals carry the same QPSK modulation information. We also experimentally realize 11-Gbaud QPSK vector signal transmission over 20 km of fiber, and the generation of the vector signals at 40 GHz, 80 GHz and 120 GHz. The experimental results show that the bit-error rate (BER) for all three signals can reach the forward-error-correction (FEC) threshold of 3.8×10⁻³. The advantage of the proposed system is that it provides high-speed, high-bandwidth and high-capacity seamless access for TDM and wireless networks. These features indicate promising application prospects in wireless access networks for WiMAX, Wi-Fi and 5G/LTE.
Kalman Filter for Spinning Spacecraft Attitude Estimation
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Sedlak, Joseph E.
2008-01-01
This paper presents a Kalman filter using a seven-component attitude state vector comprising the angular momentum components in an inertial reference frame, the angular momentum components in the body frame, and a rotation angle. The relatively slow variation of these parameters makes this parameterization advantageous for spinning spacecraft attitude estimation. The filter accounts for the constraint that the magnitude of the angular momentum vector is the same in the inertial and body frames by employing a reduced six-component error state. Four variants of the filter, defined by different choices for the reduced error state, are tested against a quaternion-based filter using simulated data for the THEMIS mission. Three of these variants choose three of the components of the error state to be the infinitesimal attitude error angles, facilitating the computation of measurement sensitivity matrices and causing the usual 3x3 attitude covariance matrix to be a submatrix of the 6x6 covariance of the error state. These variants differ in their choice for the other three components of the error state. The variant employing the infinitesimal attitude error angles and the angular momentum components in an inertial reference frame as the error state shows the best combination of robustness and efficiency in the simulations. Attitude estimation results using THEMIS flight data are also presented.
Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang
2015-01-01
This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or determinant computations. These are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is also compared with that of the other two algorithms. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
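The projection-ratio computation described in the abstract can be sketched directly: orthogonalize each endmember against the span of the other endmembers via Gram-Schmidt, then take the ratio of the pixel's projection onto that orthogonal vector to the endmember's own projection. The illustration below follows the stated description under our own interface assumptions; it is not the authors' implementation.

```python
def ovp_abundances(pixel, endmembers):
    """Orthogonal Vector Projection sketch: per-endmember unconstrained
    abundances using only vector operations (no matrix inversion)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    abundances = []
    for j, e in enumerate(endmembers):
        # Gram-Schmidt: build an orthogonal basis of the OTHER endmembers
        basis = []
        for k, other in enumerate(endmembers):
            if k == j:
                continue
            u = list(other)
            for b in basis:
                c = dot(u, b) / dot(b, b)
                u = [x - c * y for x, y in zip(u, b)]
            basis.append(u)
        # remove from e every component lying in the span of the others
        w = list(e)
        for b in basis:
            c = dot(w, b) / dot(b, b)
            w = [x - c * y for x, y in zip(w, b)]
        # ratio of the pixel's projection to the endmember's own projection
        abundances.append(dot(pixel, w) / dot(e, w))
    return abundances
```

For a pixel that is an exact linear mixture, this recovers the mixing coefficients, matching the unconstrained least-squares solution while using only dot products and vector subtractions.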
47 CFR Appendix 2 to Part 97 - VEC Regions
Code of Federal Regulations, 2010 CFR
2010-10-01
... Hampshire, Rhode Island and Vermont. 2. New Jersey and New York. 3. Delaware, District of Columbia, Maryland and Pennsylvania. 4. Alabama, Florida, Georgia, Kentucky, North Carolina, South Carolina, Tennessee... Dakota and South Dakota. 11. Alaska. 12. Caribbean Insular areas. 13. Hawaii and Pacific Insular areas. ...
NASA Technical Reports Server (NTRS)
Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.
2009-01-01
Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the vehicle which will replace the Space Shuttle, the Ares I rocket. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time-series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions also are developed.
Combined group ECC protection and subgroup parity protection
Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin
2013-06-18
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
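A minimal sketch of the row construction described in the claim, assuming P is n × m with rows drawn from the m-bit vectors having an odd number (at least three) of ones; the patent's additional arrangement of rows so that two specific columns yield subgroup parity is not reproduced here.

```python
from itertools import combinations
import numpy as np

def build_p(n, m):
    """Build an n x m matrix P whose rows are m-bit vectors with an odd
    number (>= 3) of ones, enumerating candidate rows by weight."""
    rows = []
    for weight in range(3, m + 1, 2):          # odd weights >= 3
        for ones in combinations(range(m), weight):
            v = np.zeros(m, dtype=np.uint8)
            v[list(ones)] = 1
            rows.append(v)
            if len(rows) == n:
                return np.array(rows)
    raise ValueError("m too small for n distinct rows")

def ecc_bits(data_bits, P):
    """m redundant ECC bits: the data row vector times P over GF(2)."""
    return (np.asarray(data_bits, dtype=np.uint8) @ P) % 2
```

Multiplying the n data bits by P over GF(2) yields the m check bits, matching the "multiplying said given group of n bits with P" step of the claim.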
Nonlinear calibration for petroleum water content measurement using PSO
NASA Astrophysics Data System (ADS)
Li, Mingbao; Zhang, Jiawei
2008-10-01
A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, the error vector and its delayed values are introduced. This error vector comprises the position and velocity differences between the system estimates and the GPS outputs. After state prediction and state update, the states of the system are estimated. After off-line training, the network can approximate the state behavior of the SINS, and after on-line training, the state-estimation precision can be improved further by reducing the network output errors. The convergence of the network is then discussed. Finally, several simulations with different noise levels are presented. The results show that the neural network state estimator has lower noise sensitivity and better noise immunity than a Kalman filter.
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gases and nutrients between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high-dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all-optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision-recall curves, we find that learning with bagged boosted decision trees reduces equal-error rates for threshold relaxation by 5-21% and for strand elimination by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates.
Overall, learning is shown to more than halve the total error rate and, therefore, the human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
Vector Addition: Effect of the Context and Position of the Vectors
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2010-10-01
In this article we investigate the effects of (1) the context and (2) the position of the vectors on 2D vector-addition tasks. We administered a test to 512 students completing introductory physics courses at a private Mexican university. In the first part, we analyze students' responses to three isomorphic problems: displacements, forces, and no physical context. Students were asked to draw two vectors and the vector sum. We analyzed students' procedures, identified the difficulties they have when drawing the vector sum, and showed that the context matters, not only compared with the context-free case but also between contexts. In the second part, we analyze students' responses for three different arrangements of the sum of two vectors: tail-to-tail, head-to-tail, and separated vectors. We compared the frequencies of the errors in the three positions to deduce students' conceptions of vector addition.
Li, Wei; Liu, Jian Guo; Zhu, Ning Hua
2015-04-15
We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS)-assisted polarization pulling. The beating between adjacent higher-order optical sidebands, which are generated by the nonlinearity of the electro-optic modulator (EOM), introduces considerable error into the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared with a conventional OVNA.
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. This paper discusses the application of GPS signals in meteorology, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, such as artificial neural networks, support vector machines, and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
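The two comparison metrics are straightforward to compute. The sketch below scores a multiple-linear-regression fit on synthetic data; the predictors and data here are invented for illustration, not the paper's GPS dataset.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical illustration: fit multiple linear regression by least
# squares on two synthetic predictors, then score with both metrics.
rng = np.random.default_rng(1)
X = rng.random((100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.05 * rng.standard_normal(100)
A = np.column_stack([X, np.ones(len(X))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
```

RMSE penalizes large deviations more heavily than MAE (it is never smaller than MAE), which is why papers often report both.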
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
NASA Technical Reports Server (NTRS)
Battin, R. H.; Croopnick, S. R.; Edwards, J. A.
1977-01-01
The formulation of a recursive maximum likelihood navigation system employing reference position and velocity vectors as state variables is presented. Convenient forms of the required variational equations of motion are developed together with an explicit form of the associated state transition matrix needed to refer measurement data from the measurement time to the epoch time. Computational advantages accrue from this design in that the usual forward extrapolation of the covariance matrix of estimation errors can be avoided without incurring unacceptable system errors. Simulation data for earth orbiting satellites are provided to substantiate this assertion.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors, and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme that does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for assessing fast-growing error modes in mesoscale limited-area models. The so-called self-breeding method is a development of the breeding-of-growing-modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case-study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
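A toy version of the breeding cycle can illustrate the rescale-and-re-add idea. The sketch below uses the Lorenz-63 system as a stand-in model; the actual work uses a mesoscale NWP model, and the norm, step sizes, and cycle counts here are our arbitrary choices.

```python
import numpy as np

def lorenz_step(x, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (a toy stand-in
    for the mesoscale forecast model)."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def breed(x0, pert0, n_cycles=10, n_steps=40, norm=1e-3):
    """Breeding cycle: integrate control and perturbed states forward,
    rescale their difference to a fixed norm, and re-add it to the new
    control state; the perturbation aligns with a fast-growing mode."""
    x, p = np.asarray(x0, float), np.asarray(pert0, float)
    for _ in range(n_cycles):
        xc, xp = x.copy(), x + p
        for _ in range(n_steps):
            xc, xp = lorenz_step(xc), lorenz_step(xp)
        d = xp - xc
        p = norm * d / np.linalg.norm(d)   # rescale to the chosen norm
        x = xc                             # control trajectory moves on
    return x, p
```

The self-breeding variant described above additionally orthogonalizes an ensemble of such perturbations so they do not all collapse onto the leading growing mode.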
NASA Astrophysics Data System (ADS)
Zimina, S. V.
2015-06-01
We present the results of a statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with a quadratic constraint on the useful-signal amplification, with allowance for the weight-coefficient fluctuations. Using perturbation theory, expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix, are obtained in the first approximation. The fluctuations are shown to lead to signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.
Using Oncolytic Viruses to Treat Cancer
Cancer treatments known as oncolytic viruses are being tested in clinical trials, and one, T-VEC or Imlygic®, has been approved by the FDA. Research now suggests that these treatments work not only by infecting and killing tumor cells, but that they may also be a form of cancer immunotherapy.
ERIC Educational Resources Information Center
Cedefop - European Centre for the Development of Vocational Training, 2013
2013-01-01
This paper provides an overview of VET (vocational education and training) in Ireland. In Ireland, the main providers of VET are the national Training and Employment Authority (FAS--a non-commercial semi-State body, part of the public sector) and vocational education committees (VECs--public sector bodies at county level responsible for vocational…
47 CFR 97.519 - Coordinating examination sessions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... qualified examinees, forward electronically all required data to the FCC. All data forwarded must be... itself or under the supervision of a VEC or VEs designated by the FCC; or (3) Cancel the operator/primary... instance of such cancellation, the person will be granted an operator/primary station license consistent...
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high-precision method that has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that accounts for the star sensor installation error. The star sensor installation error is then accurately estimated using Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observable conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and its conclusions are useful in the ballistic trajectory design of near-Earth flight vehicles.
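The rank test behind such an observability analysis is easy to state in code. The sketch below checks the rank of the standard linear observability matrix; the matrices here are illustrative placeholders, not the paper's SINS/CNS linearization.

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the observability matrix O = [C; CA; ...; CA^(n-1)] for a
    linear system x' = A x, y = C x; full rank n means observable."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.linalg.matrix_rank(np.vstack(blocks))
```

For two static error states (A = I), a single measurement row leaves the system rank-deficient, while a second independent row makes it fully observable; this mirrors the paper's requirement of at least two star vectors.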
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine.
Liu, Zhiyuan; Wang, Changhui
2015-10-23
In this paper, a new method for mass air flow (MAF) sensor error compensation and online updating of the error map (or lookup table), which accounts for installation and aging effects in a diesel engine, is developed. Since the MAF sensor error depends on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs: fuel mass injection quantity and engine speed. The 2D map representing the MAF sensor error is described by a piecewise bilinear interpolation model, which can be written as a dot product between a regression vector and a parameter vector using membership functions. Combining the 2D map regression model with the diesel engine air-path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under conditions of persistent excitation and given inequalities. The observer is validated against simulation data from the engine software enDYNA provided by Tesis. The results demonstrate that the operating-point-dependent error of the MAF sensor can be approximated acceptably by the 2D map obtained with the proposed method.
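The "dot product between a regression vector and a parameter vector" form of a piecewise-bilinear 2D map can be sketched as follows. The grid layout and names are our assumptions; in the paper the map axes are fuel injection quantity and engine speed, and the parameter vector is what the adaptive observer updates online.

```python
import numpy as np

def membership(grid_x, grid_y, x, y):
    """Regression vector phi for a piecewise-bilinear 2D map: at most four
    nonzero entries (the bilinear weights of the enclosing grid cell), so
    the map value is simply phi @ theta, with theta the node values."""
    i = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    j = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    phi = np.zeros(len(grid_x) * len(grid_y))
    def idx(a, b):
        return a * len(grid_y) + b
    phi[idx(i, j)] = (1 - tx) * (1 - ty)
    phi[idx(i + 1, j)] = tx * (1 - ty)
    phi[idx(i, j + 1)] = (1 - tx) * ty
    phi[idx(i + 1, j + 1)] = tx * ty
    return phi
```

Because the map value is linear in theta, standard parameter-adaptation laws apply directly, which is what makes the joint state/parameter observer tractable.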
Yoon, Wonsuck; Park, Yoo Chang; Kim, Jinseok; Chae, Yang Seok; Byeon, Jung Hye; Min, Sang-Hyun; Park, Sungha; Yoo, Young; Park, Yong Keun; Kim, Byeong Mo
2017-01-01
Salmonella have been experimentally used as anti-cancer agents, because they show selective growth in tumours. In this study, we genetically modified attenuated Salmonella typhimurium to express and secrete interferon-gamma (IFN-γ) as a tumouricidal agent to enhance the therapeutic efficacy of Salmonella. IFN-γ was fused to the N-terminal region (residues 1-160) of SipB (SipB160) for secretion from bacterial cells. Attenuated S. typhimurium expressing recombinant IFN-γ (S. typhimurium (IFN-γ)) invaded the melanoma cells and induced cytotoxicity. Subcutaneous administration of S. typhimurium (IFN-γ) also efficiently inhibited tumour growth and prolonged the survival of C57BL/6 mice bearing B16F10 melanoma compared with administration of phosphate-buffered saline (PBS), unmodified S. typhimurium or S. typhimurium expressing empty vector (S. typhimurium [Vec]) in a natural killer (NK) cell-dependent manner. Moreover, genetically modified Salmonella, including S. typhimurium (IFN-γ), showed little toxicity to normal tissues with no observable adverse effects. However, S. typhimurium (IFN-γ)-mediated tumour suppression was attributed to direct killing of tumour cells rather than to stable anti-tumour immunity. Collectively, these results suggest that tumour-targeted therapy using S. typhimurium (IFN-γ) has potential for melanoma treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and to investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters of the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs yields significantly improved recognition results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity
NASA Astrophysics Data System (ADS)
Scherrer, Phil; HMI Team
2016-10-01
The Problem: The SDO satellite is in an inclined geosynchronous orbit which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s, with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12h and 24h variations to most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk-mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e., it is non-linear for large velocities. This causes leaks into the LOS field (which is simply the difference between the velocity measured in LCP and RCP, rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work at resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. Therefore this poster is strictly limited to better understanding the filter profiles as they vary across the field, with time of day, and with time of year, resulting in velocity errors of up to a percent and LOS field estimates with errors of up to a few percent (relative to the standard LOS magnetograph method based on measuring the differences in wavelength of the line centroids in LCP and RCP light).
We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.
Using Redundancy To Reduce Errors in Magnetometer Readings
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
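The Lagrange-multiplier construction reduces to a single linear solve. A minimal sketch, assuming the constraints have been assembled into a matrix A with A x = 0; the toy constraint used in the usage note is illustrative, not the actual Maxwell-derived one.

```python
import numpy as np

def constrained_correction(z, A):
    """Correct noisy readings z by minimising ||x - z||^2 subject to the
    linear physics constraints A x = 0, via the stationarity (KKT) system:
        [ 2I  A^T ] [ x      ]   [ 2z ]
        [ A   0   ] [ lambda ] = [ 0  ]
    Returns the corrected readings x (sol[n:] are the multipliers)."""
    n, m = len(z), A.shape[0]
    K = np.block([[2 * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * np.asarray(z, float), np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]
```

For example, with readings [1, 3] and a single constraint x0 - x1 = 0, the corrected values are [2, 2]: the closest point to the readings that exactly satisfies the constraint.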
An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression
Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay
2012-01-01
Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of a civilian land vehicle navigation system by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors, such as biases, drift, and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss Markov model and neural network methods have previously been utilized to model the errors. However, the Gauss Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drift under static conditions improved by 41% and 80% in comparison with the NN and GM approaches. PMID:23012552
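A hedged sketch of nu-SVR-based error modeling in the spirit of the paper, using scikit-learn's NuSVR on an invented drift signal; the real inputs would be MEMS gyro/accelerometer records, and the nu, C, and signal-model choices here are arbitrary.

```python
import numpy as np
from sklearn.svm import NuSVR

# Hypothetical drift record: a slow bias ramp plus a sinusoidal term and
# noise (an invented stand-in for a MEMS gyro signal, not the paper's data).
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 200)[:, None]
drift = 0.3 * np.sin(t.ravel()) + 0.05 * t.ravel()
noisy = drift + 0.02 * rng.standard_normal(t.shape[0])

# Nu-SVR regresses the deterministic drift out of the noisy record;
# the nu parameter bounds the fraction of support vectors/margin errors.
model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale").fit(t, noisy)
denoised = model.predict(t)
```

The regression gives a smooth estimate of the drift that can be subtracted from the raw sensor output, which is the role the SVM error model plays in the navigation filter.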
Research on bearing fault diagnosis of large machinery based on mathematical morphology
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
To study the automatic diagnosis of large-machinery faults based on the support vector machine, the four common faults of large machinery are classified and identified using a support vector machine. The extracted feature vectors are used as inputs, and the classifier is trained and tested with a multi-classification method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation, and the support vector machine is then compared with a BP neural network. The results show that the support vector machine trains quickly and achieves high classification accuracy, making it well suited to fault diagnosis in large machinery.
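The parameter search by trial and error plus cross-validation can be automated. Below is a sketch with scikit-learn's GridSearchCV on synthetic four-class "fault" clusters; the features, class structure, and parameter grid are invented for illustration, not the paper's bearing data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for fault features: four well-separated clusters,
# one per hypothetical fault class (the paper's real features differ).
rng = np.random.default_rng(7)
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], dtype=float)
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat(np.arange(4), 30)

# Cross-validated search over (C, gamma) replaces hand trial-and-error.
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                      cv=5).fit(X, y)
```

`search.best_params_` then holds the selected (C, gamma) pair and `search.best_score_` its cross-validated accuracy, giving a reproducible alternative to manual tuning.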
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang; Liu, Ming
2016-12-01
An integrated inertial/celestial navigation system (INS/CNS) has wide applicability in lunar rovers, as it provides accurate and autonomous navigational information. Initialization is particularly vital for an INS. This paper proposes a two-position initialization method based on a standard Kalman filter, in which the difference between the computed star vector and the measured star vector serves as the measurement. With the aid of a star sensor and the two positions, the attitude and position errors can be greatly reduced, and the biases of the three gyros and accelerometers can also be estimated. The semi-physical simulation results show that the attitude and position errors converge to within 0.07″ and 0.1 m, respectively, when the given initial position error is 1 km and the attitude error is 10°. These good results show that the proposed method can accomplish alignment, positioning and calibration functions simultaneously. Thus the proposed two-position initialization method has potential for application in lunar rover navigation.
Control method and system for hydraulic machines employing a dynamic joint motion model
Danko, George [Reno, NV
2011-11-22
A control method and system for controlling a hydraulically actuated mechanical arm to perform a task, the mechanical arm optionally being a hydraulically actuated excavator arm. The method can include determining a dynamic model of the motion of the hydraulic arm for each hydraulic arm link by relating the input signal vector for each respective link to the output signal vector for the same link. The method can also include determining an error signal for each link as the weighted sum of the differences between a measured position and a reference position and between the time derivatives of the measured position and the time derivatives of the reference position for each respective link. The weights used in the determination of the error signal can be determined from the constant coefficients of the dynamic model. The error signal can be applied in a closed negative-feedback control loop to diminish or eliminate the error signal for each respective link.
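The per-link error signal described above reduces to a weighted sum of the position error and its time derivative. A small sketch, with illustrative weights standing in for the constants derived from the dynamic model:

```python
# Per-link error signal: weighted sum of position error and velocity error.
# The weights w0, w1 are illustrative stand-ins for the constant coefficients
# that the patent derives from the dynamic model of each link.
import numpy as np

def link_error(pos_meas, pos_ref, vel_meas, vel_ref, w0=1.0, w1=0.2):
    """e = w0*(measured - reference position) + w1*(their time derivatives)."""
    return w0 * (pos_meas - pos_ref) + w1 * (vel_meas - vel_ref)

# One link: 0.02 rad position error, 0.10 rad/s velocity error.
e = link_error(np.array([0.52]), np.array([0.50]),
               np.array([0.10]), np.array([0.00]))
```

Fed back with negative sign, this signal drives both the position and velocity errors of the link toward zero, as in a conventional PD loop.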
Combined group ECC protection and subgroup parity protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gara, Alan; Cheng, Dong; Heidelberger, Philip
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m-bit-wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero, and assigning said vectors to rows of the matrix P.
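The protection-bit computation described above is a matrix product over GF(2). A sketch follows; the matrix P here is a small illustrative example (with one column wired as a subgroup parity check), not the patent's actual construction.

```python
# ECC bits as a GF(2) matrix product: multiplying n data bits by an n x m
# binary matrix P yields m redundant protection bits. One column of P is set
# up so its ECC bit doubles as the parity of a subgroup of the data bits.
# P here is illustrative, not the patent's preferred-embodiment matrix.
import numpy as np

n, m = 8, 4
rng = np.random.default_rng(1)
P = rng.integers(0, 2, size=(n, m))          # n x m binary matrix (illustrative)
# Column 0 checks only the first 4 bits -> its ECC bit is that subgroup's parity.
P[:, 0] = np.r_[np.ones(4, int), np.zeros(4, int)]

data = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # given group of n bits
ecc = (data @ P) % 2                         # m protection bits, arithmetic mod 2
```

Here `ecc[0]` equals the parity of `data[:4]`, showing how a column of P can serve double duty as a subgroup parity check.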
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high-quality reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually based error criterion to synthesize natural-sounding speech.
Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data
NASA Technical Reports Server (NTRS)
Voorhies, C. V.; Santana, J.; Sabaka, T.
1999-01-01
Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal- from external-source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rms) added noise into a 60 nT error (rms); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rms through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rms) and several thousand nT (maximum).
Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru
2017-11-27
It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
1974-08-01
The surface irregularities are large in comparison to the wavelength, so that E and its normal derivative may be approximated on S in terms of the incident field and the reflection coefficient R [equation garbled in source]. Once the sum vector (Σ) and the difference vector (Δ) at the radar have been determined, the rough boresight error ΔΦ is computed from the ratio |Δ|/|Σ| [expression garbled in source].
Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Shenk, W. E.; Skillman, W.
1974-01-01
An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.
Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements
NASA Astrophysics Data System (ADS)
Appel, Pontus
2005-01-01
For full three-axis attitude determination the magnetic field vector and the Sun vector can be used. A coarse Sun sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources, as it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. The albedo light changes depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model, which consists of data tables, is transformed into polynomial functions in order to save memory. For an absolute worst case the attitude determination error can be held below 2°. In a nominal case it is better than 1°.
Pointing error analysis of Risley-prism-based beam steering system.
Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng
2014-09-01
Based on the vector form Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems, induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that the assembly errors of the second prism will result in more remarkable pointing errors in contrast with the first one. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical works.
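The ray trace above rests on the vector form of Snell's law. A single-interface sketch follows (this is the standard vector refraction identity, not the paper's full two-prism assembly-error model):

```python
# Vector form of Snell's law: given a unit incident direction d, a unit
# surface normal n pointing against d, and index ratio eta = n1/n2, the
# refracted direction is  t = eta*d + (eta*cos_i - cos_t)*n.
import numpy as np

def refract(d, n, eta):
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Normal incidence through a glass interface: the ray passes undeviated.
t = refract(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), 1.0 / 1.5)
```

Perturbing the normal `n` by a small tilt and comparing the output direction against the nominal one is exactly how component, orientation, and assembly errors map to pointing errors in such an analysis.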
76 FR 65713 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-24
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice.... submits tariff filing per 35.17(b): Supplemental Filing to Schedule 21-VEC Revisions to be effective 4/1... effective 12/1/2011. Filed Date: 10/13/2011. Accession Number: 20111013-5038. Comment Date: 5 p.m. Eastern...
An Exploration of Female Travellers' Experiences of Guidance Counselling in Adult Education
ERIC Educational Resources Information Center
Doyle, Anne; Hearne, Lucy
2012-01-01
The proposed changes in the further education sector, including the rationalisation of the VEC into Local Education and Training Boards (LETBs) and the closures of the Senior Traveller Training Centres (STTCs), have implications for guidance counselling provision to the Traveller community. This article discusses female Travellers' experiences of…
A Collaborative Virtual Environment for Situated Language Learning Using VEC3D
ERIC Educational Resources Information Center
Shih, Ya-Chun; Yang, Mau-Tsuen
2008-01-01
A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…
Goto, Makiko; Ikeyama, Kazuyuki; Tsutsumi, Moe; Denda, Sumiko; Denda, Mitsuhiro
2010-07-01
We have previously suggested that a variety of environmental factors might be first sensed by epidermal keratinocytes, which represent the frontier of the body. To further examine this idea, in the present study, we examined the intracellular calcium responses of cultured keratinocytes to external hydraulic pressure. First, we compared the responses of undifferentiated and differentiated keratinocytes with those of fibroblasts, vascular endothelial cells (VEC), and lymphatic endothelial cells. Elevation of intracellular calcium was observed after application of pressure to keratinocytes, fibroblasts, and VEC. The calcium propagation extended over a larger area and continued for a longer period of time in differentiated keratinocytes, as compared with the other cells. The response of the keratinocytes was dramatically reduced when the cells were incubated in medium without calcium. Application of a non-selective transient receptor potential (TRP) channel blocker also attenuated the calcium response. These results suggest that differentiated keratinocytes are sensitive to external pressure and that TRP might be involved in the mechanism of their response. (c) 2010 Wiley-Liss, Inc.
Sukumaran, Sunil K; Prasadarao, Nemani V
2003-11-01
We investigated the permeability changes that occur in the human brain microvascular endothelial cell (HBMEC) monolayer, an in vitro model of the blood-brain barrier, during Escherichia coli K1 infection. An increase in permeability of HBMECs and a decrease in transendothelial electrical resistance were observed. These permeability changes occurred only when HBMECs were infected with E. coli expressing outer membrane protein A (OmpA) and preceded the traversal of bacteria across the monolayer. Activated protein kinase C (PKC)-alpha interacts with vascular-endothelial cadherins (VECs) at the tight junctions of HBMECs, resulting in the dissociation of beta-catenins from VECs and leading to the increased permeability of the HBMEC monolayer. Overexpression of a dominant negative form of PKC-alpha in HBMECs blocked the E. coli-induced increase in permeability of HBMECs. Anti-OmpA and anti-OmpA receptor antibodies exerted inhibition of E. coli-induced permeability of HBMEC monolayers. This inhibition was the result of the absence of PKC-alpha activation in HBMECs treated with the antibodies.
Kim, A.; Avakian, H.; Burkert, V.; ...
2017-02-22
The target and double spin asymmetries of the exclusive pseudoscalar channel $\vec e\vec p \to ep\pi^0$ were measured for the first time in the deep-inelastic regime using a longitudinally polarized 5.9 GeV electron beam and a longitudinally polarized proton target at Jefferson Lab with the CEBAF Large Acceptance Spectrometer (CLAS). The data were collected over a large kinematic phase space and divided into 110 four-dimensional bins of $Q^2$, $x_B$, $-t$ and $\phi$. Large values of asymmetry moments clearly indicate a substantial contribution to the polarized structure functions from transverse virtual photon amplitudes. The interpretation of experimental data in terms of generalized parton distributions (GPDs) provides the first insight on the chiral-odd GPDs $\tilde{H}_T$ and $E_T$, and complements previous measurements of unpolarized structure functions sensitive to the GPDs $H_T$ and $\bar E_T$. Finally, these data provide necessary constraints for chiral-odd GPD parametrizations and will strongly influence existing theoretical handbag models.
An alternative clinical routine for subjective refraction based on power vectors with trial frames.
María Revert, Antonia; Conversa, Maria Amparo; Albarrán Diego, César; Micó, Vicente
2017-01-01
Subjective refraction determines the final point of refractive error assessment in most clinical environments, and its foundations have remained unchanged for decades. The purpose of this paper is to compare the results obtained when monocular subjective refraction is assessed in trial frames by a new clinical procedure based on a pure power vector interpretation with those of conventional clinical refraction procedures. An alternative clinical routine is described that uses power vector interpretation with implementation in trial frames. Refractive error is determined in terms of: (i) the spherical equivalent (M component), and (ii) a pair of Jackson crossed cylinder lenses oriented at 0°/90° (J0 component) and 45°/135° (J45 component) for determination of astigmatism. This vector subjective refraction result (VR) is compared separately for right and left eyes of 25 subjects (mean age, 35 ± 4 years) against conventional sphero-cylindrical subjective refraction (RX) using a phoropter. The VR procedure was applied with both conventional tumbling E optotypes (VR1) and modified optotypes with oblique orientation (VR2). Bland-Altman plots and the intra-class correlation coefficient showed good agreement between VR and RX (with coefficient values above 0.82), and ANOVA showed no significant differences in any of the power vector components between RX and VR. VR1 and VR2 procedure results were similar (p ≥ 0.77). The proposed routine determines the three components of refractive error in power vector notation [M, J0, J45], with a refraction time similar to that of conventional subjective procedures. The proposed routine could be helpful for inexperienced clinicians, and for experienced clinicians in cases where it is difficult to obtain a valid starting point for conventional RX (irregular corneas, media opacities, etc.), and in refractive situations/places with inadequate refractive facilities/equipment.
© 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
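The [M, J0, J45] notation used by the VR routine above follows the standard power-vector conversion from sphero-cylindrical form. A sketch (these are the standard textbook formulas, applied to an illustrative prescription):

```python
# Standard power-vector conversion: sphere S, cylinder C, axis (degrees)
# -> spherical equivalent M and the two Jackson crossed cylinder components.
import numpy as np

def power_vector(S, C, axis_deg):
    a = np.deg2rad(axis_deg)
    M = S + C / 2.0                    # spherical equivalent
    J0 = -(C / 2.0) * np.cos(2 * a)    # JCC component at 0/90 degrees
    J45 = -(C / 2.0) * np.sin(2 * a)   # JCC component at 45/135 degrees
    return M, J0, J45

# Illustrative prescription: -2.00 DS / -1.00 DC x 90.
M, J0, J45 = power_vector(-2.00, -1.00, 90.0)
```

Because (M, J0, J45) are orthogonal components, differences between two refractions can be compared component-wise, which is what enables the Bland-Altman and ANOVA comparisons against conventional RX.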
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using a mean-removed error model and a Human Visual System (HVS) model. The assumed error model is a Laplacian distribution with mean lambda, computed from a sample of the input image. Laplacian-distributed random numbers with mean lambda are generated with a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
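The codebook-generation pipeline above (Laplacian draws, DCT-domain weighting, inverse DCT) can be sketched as follows; the perceptual weight vector here is a made-up low-pass placeholder, not the paper's HVS-optimal weight matrix.

```python
# MVQ-style codebook generation sketch: draw Laplacian residual vectors with
# image-derived parameter lam, then condition them by weighting their DCT
# coefficients and inverse-transforming. The weights w are a placeholder
# low-pass profile, not the paper's HVS model.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

rng = np.random.default_rng(0)
n_vec, dim, lam = 256, 16, 4.0                 # lam would come from the image
residuals = rng.laplace(scale=lam, size=(n_vec, dim))   # Laplacian error model
D = dct_matrix(dim)
w = 1.0 / (1.0 + np.arange(dim))               # placeholder perceptual weights
codebook = (w * (residuals @ D.T)) @ D         # weight DCT coeffs, inverse DCT
```

Since only the seed and `lam` are needed to regenerate `codebook`, the decoder can rebuild the same codebook from the `lam` stored in the coded file, which is the point of the MVQ scheme.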
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
Vincenti, H.; Lobet, M.; Lehe, R.; ...
2016-09-19
In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines, which are, along with the field gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
Program summary Program Title: vec_deposition Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1 Licensing provisions: BSD 3-Clause Programming language: Fortran 90 External routines/libraries: OpenMP > 4.0 Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the following two requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g. Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).
Duanmu, J; Cheng, J; Xu, J; Booth, C J; Hu, Z
2011-04-26
The purpose of this study was to test a novel, dual tumour vascular endothelial cell (VEC)- and tumour cell-targeting factor VII-targeted Sn(IV) chlorin e6 photodynamic therapy (fVII-tPDT) by targeting a receptor tissue factor (TF) as an alternative treatment for chemoresistant breast cancer using a multidrug resistant (MDR) breast cancer line MCF-7/MDR. The TF expression by the MCF-7/MDR breast cancer cells and tumour VECs in MCF-7/MDR tumours from mice was determined separately by flow cytometry and immunohistochemistry using anti-human or anti-murine TF antibodies. The efficacy of fVII-tPDT was tested in vitro and in vivo and was compared with non-targeted PDT for treatment of chemoresistant breast cancer. The in vitro efficacy was determined by a non-clonogenic assay using crystal violet staining for monolayers, and apoptosis and necrosis were assayed to elucidate the underlying mechanisms. The in vivo efficacy of fVII-tPDT was determined in a nude mouse model of subcutaneous MCF-7/MDR tumour xenograft by measuring tumour volume. To our knowledge, this is the first presentation showing that TF was expressed on tumour VECs in chemoresistant breast tumours from mice. The in vitro efficacy of fVII-tPDT was 12-fold stronger than that of ntPDT for MCF-7/MDR cancer cells, and the mechanism of action involved induction of apoptosis and necrosis. Moreover, fVII-tPDT was effective and safe for the treatment of chemoresistant breast tumours in the nude mouse model. We conclude that fVII-tPDT is effective and safe for the treatment of chemoresistant breast cancer, presumably by simultaneously targeting both the tumour neovasculature and chemoresistant cancer cells. Thus, this dual-targeting fVII-tPDT could also have therapeutic potential for the treatment of other chemoresistant cancers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, Saroj L.; Corbett, John D.
Na6Cd16Au7 has been synthesized via typical high-temperature reactions, and its structure refined by single-crystal X-ray diffraction as cubic, Fm-3m, a = 13.589(1) Å, Z = 4. The structure consists of Cd8 tetrahedral star (TS) building blocks that are face-capped by six shared gold (Au2) vertexes and further diagonally bridged via Au1 to generate an orthogonal, three-dimensional framework [Cd8(Au2)6/2(Au1)4/8], an ordered ternary derivative of Mn6Th23. Linear muffin-tin-orbital (LMTO), atomic sphere approximation (ASA) electronic structure calculations indicate that Na6Cd16Au7 is metallic and that ~76% of the total crystal orbital Hamilton populations (-ICOHP) originate from polar Cd-Au bonding, with 18% more from the fewer Cd-Cd contacts. Na6Cd16Au7 (45 valence electron count (vec)) is isotypic with the older, electron-richer Mg6Cu16Si7 (56 vec), in which the atom types are switched and the bonding characteristics among the network elements are altered considerably (Si for Au, Cu for Cd, Mg for Na). The earlier and more electronegative element Au now occupies the Si site, in accord with the larger relativistic bonding contributions from polar Cd-Au versus Cu-Si bonds with the neighboring Cd in the former Cu positions. Substantial electronic differences in partial densities-of-states (PDOS) and COHP data for all atoms emphasize these differences. Strong contributions of the nearby Au 5d10 states to bonding states without altering the formal vec are the likely origin of these effects.
Alaybeyoglu, Begum; Uluocak, Bilge Gedik; Akbulut, Berna Sariyar; Ozkirimli, Elif
2017-05-01
Co-administration of beta-lactam antibiotics and beta-lactamase inhibitors has been a favored treatment strategy against beta-lactamase-mediated bacterial antibiotic resistance, but the emergence of beta-lactamases resistant to current inhibitors necessitates the discovery of novel non-beta-lactam inhibitors. Peptides derived from the Ala46-Tyr51 region of the beta-lactamase inhibitor protein are considered as potent inhibitors of beta-lactamase; unfortunately, peptide delivery into the cell limits their potential. The properties of cell-penetrating peptides could guide the design of beta-lactamase inhibitory peptides. Here, our goal is to modify the peptide with the sequence RRGHYY that possesses beta-lactamase inhibitory activity under in vitro conditions. Inspired by the work on the cell-penetrating peptide pVEC, our approach involved the addition of the N-terminal hydrophobic residues, LLIIL, from pVEC to the inhibitor peptide to build a chimera. These residues have been reported to be critical in the uptake of pVEC. We tested the potential of RRGHYY and its chimeric derivative as a beta-lactamase inhibitory peptide on Escherichia coli cells and compared the results with the action of the antimicrobial peptide melittin, the beta-lactam antibiotic ampicillin, and the beta-lactamase inhibitor potassium clavulanate to get mechanistic details on their action. Our results show that the addition of LLIIL to the N-terminus of the beta-lactamase inhibitory peptide RRGHYY increases its membrane permeabilizing potential. Interestingly, the addition of this short stretch of hydrophobic residues also modified the inhibitory peptide such that it acquired antimicrobial property. We propose that addition of the hydrophobic LLIIL residues to the peptide N-terminus offers a promising strategy to design novel antimicrobial peptides in the battle against antibiotic resistance. Copyright © 2017 European Peptide Society and John Wiley & Sons, Ltd. 
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large-scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optic modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, and an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low-cost, highly energy-efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented, along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components, are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. The total error is also compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation.
The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.
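The error budget described above can be illustrated with a toy numerical model of an analog vector-matrix multiply; the noise magnitudes and functional forms below are illustrative assumptions, not measurements from the actual processor:

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_vecmat(v, M, source_noise=0.01, shot_noise=0.005, crosstalk=0.002):
    """Model an analog optical vector-matrix multiply y = v @ M with three
    illustrative noise sources: relative source-amplitude fluctuation,
    additive shot noise at the detector, and a uniform cross-talk floor."""
    v_noisy = v * (1 + source_noise * rng.standard_normal(v.shape))
    y = v_noisy @ M
    y += crosstalk * v_noisy.sum() * np.ones_like(y)   # cross-talk floor
    y += shot_noise * np.sqrt(np.abs(y)) * rng.standard_normal(y.shape)
    return y

v = rng.random(128)
M = rng.random((128, 128))
exact = v @ M
approx = analog_vecmat(v, M)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.3%}")
```

Varying one noise parameter at a time while zeroing the others is a simple way to rank the error sources, mirroring the comparison reported in the paper.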
An LPV Adaptive Observer for Updating a Map Applied to an MAF Sensor in a Diesel Engine
Liu, Zhiyuan; Wang, Changhui
2015-01-01
In this paper, a new method for mass air flow (MAF) sensor error compensation and an online updating error map (or lookup table) due to installation and aging in a diesel engine is developed. Since the MAF sensor error is dependent on the engine operating point, the error model is represented as a two-dimensional (2D) map with two inputs, fuel mass injection quantity and engine speed. Meanwhile, the 2D map representing the MAF sensor error is described as a piecewise bilinear interpolation model, which can be written as a dot product between the regression vector and parameter vector using a membership function. With the combination of the 2D map regression model and the diesel engine air path system, an LPV adaptive observer with low computational load is designed to estimate states and parameters jointly. The convergence of the proposed algorithm is proven under the conditions of persistent excitation and given inequalities. The observer is validated against the simulation data from engine software enDYNA provided by Tesis. The results demonstrate that the operating point-dependent error of the MAF sensor can be approximated acceptably by the 2D map from the proposed method. PMID:26512675
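The representation of the 2D error map as a dot product between a regression vector and a parameter vector can be sketched as follows; the grid axes, membership weights, and table values are illustrative, not the engine maps from the paper:

```python
import numpy as np

def membership_vector(x, y, xg, yg):
    """Regression vector phi for a 2D lookup table on grid (xg, yg): the map
    value at (x, y) is phi @ theta, where theta stacks the table entries
    row-major. Only the four corners of the enclosing cell get nonzero
    (bilinear) weights, i.e. the membership function."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    phi = np.zeros(len(xg) * len(yg))
    for di, dj, w in [(0, 0, (1 - tx) * (1 - ty)), (1, 0, tx * (1 - ty)),
                      (0, 1, (1 - tx) * ty), (1, 1, tx * ty)]:
        phi[(i + di) * len(yg) + (j + dj)] = w
    return phi

xg = np.array([0.0, 1.0, 2.0])              # e.g. fuel injection quantity grid
yg = np.array([0.0, 1.0])                   # e.g. engine speed grid
theta = np.array([0., 1., 2., 3., 4., 5.])  # table entries, row-major
val = membership_vector(0.5, 0.5, xg, yg) @ theta
print(val)
```

Because the map value is linear in theta, an adaptive observer can update theta online with standard parameter-estimation machinery, which is what makes this dot-product form convenient.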
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-01
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous and high precision method, which has been widely used to improve the hitting accuracy and quick reaction capability of near-Earth flight vehicles. The installation errors between SINS and star sensors have been one of the main factors that restrict the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on the star vector observations is derived considering the star sensor installation error. Then, the star sensor installation error is accurately estimated based on Kalman Filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained via the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be greater than or equal to 2, and the number of posture adjustments should also be greater than or equal to 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of SINS are less than 7 arc-seconds. This analysis method and conclusion are useful in the ballistic trajectory design of near-Earth flight vehicles. PMID:28275211
An investigation of reports of Controlled Flight Toward Terrain (CFTT)
NASA Technical Reports Server (NTRS)
Porter, R. F.; Loomis, J. P.
1981-01-01
Some 258 reports from more than 23,000 documents in the files of the Aviation Safety Reporting System (ASRS) were found to relate to the hazard of flight into terrain with no prior awareness by the crew of impending disaster. Examination of the reports indicates that human error was a causal factor in 64% of the incidents in which some threat of terrain conflict was experienced. Approximately two-thirds of the human errors were attributed to controllers, the most common discrepancy being a radar vector below the Minimum Vector Altitude (MVA). Errors by pilots were of a much more diverse nature and included a few instances of gross deviations from their assigned altitudes. The ground proximity warning system and the minimum safe altitude warning equipment were the initial recovery factor in some 18 serious incidents and were apparently the sole warning in six reported instances that otherwise would most probably have ended in disaster.
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vectors of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and a coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were analyzed respectively, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines
del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano
2015-01-01
Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation—based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking—to reduce the dimensions of images—and binarization—to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392
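As a rough illustration of the classification stage, here is a minimal linear SVM trained by sub-gradient descent on the hinge loss, applied to synthetic feature vectors; this is a generic stand-in, not the preprocessing pipeline or SVM solver used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM: hinge loss plus L2 regularization, fit by
    sub-gradient descent. X: (n, d) feature vectors, y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                    # points violating the margin
        gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0) if mask.any() else lam * w
        gb = -y[mask].mean() if mask.any() else 0.0
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy stand-in for parameter vectors extracted from acoustic images:
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(+1, 0.3, (50, 4))])
y = np.r_[-np.ones(50), np.ones(50)]
w, b = train_linear_svm(X, y)
error_rate = np.mean(np.sign(X @ w + b) != y)
print(f"training error: {error_rate:.1%}")
```

The sensitivity/computational-burden trade-off studied in the paper would then be explored by swapping preprocessing steps in and out ahead of this classifier.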
Design of an optimal preview controller for linear discrete-time descriptor systems with state delay
NASA Astrophysics Data System (ADS)
Cao, Mengjuan; Liao, Fucheng
2015-04-01
In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.
Early Error Detection: An Action-Research Experience Teaching Vector Calculus
ERIC Educational Resources Information Center
Añino, María Magdalena; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana
2014-01-01
This paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied,…
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically quantified and considered. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties, and thus the assessment of the accuracy of the space-geodetic techniques. 
Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
Nogales-Bueno, Julio; Ayala, Fernando; Hernández-Hierro, José Miguel; Rodríguez-Pulido, Francisco José; Echávarri, José Federico; Heredia, Francisco José
2015-05-06
Characteristic vector analysis has been applied to near-infrared spectra to extract the main spectral information from hyperspectral images. For this purpose, 3, 6, 9, and 12 characteristic vectors have been used to reconstruct the spectra, and root-mean-square errors (RMSEs) have been calculated to measure the differences between characteristic vector reconstructed spectra (CVRS) and hyperspectral imaging spectra (HIS). RMSE values obtained were 0.0049, 0.0018, 0.0012, and 0.0012 [log(1/R) units] for spectra allocated into the validation set, for 3, 6, 9, and 12 characteristic vectors, respectively. After that, calibration models have been developed and validated using the different groups of CVRS to predict skin total phenolic concentration, sugar concentration, titratable acidity, and pH by modified partial least-squares (MPLS) regression. The obtained results have been compared to those previously obtained from HIS. The models developed from the CVRS reconstructed from 12 characteristic vectors present values of coefficients of determination (RSQ) and standard errors of prediction (SEP) similar to those of the models developed from HIS. RSQ and SEP were 0.84 and 1.13 mg g(-1) of skin grape (expressed as gallic acid equivalents), 0.93 and 2.26 °Brix, 0.97 and 3.87 g L(-1) (expressed as tartaric acid equivalents), and 0.91 and 0.14 for skin total phenolic concentration, sugar concentration, titratable acidity, and pH, respectively, for the models developed from the CVRS reconstructed from 12 characteristic vectors.
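Characteristic vector analysis as described (reconstructing each spectrum from a small number of leading vectors and scoring the reconstruction by RMSE) can be sketched with principal components of synthetic spectra; the data, noise level, and component counts here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spectra": 200 samples x 100 bands, built from 3 latent
# components plus noise, standing in for hyperspectral imaging spectra.
basis = rng.standard_normal((3, 100))
spectra = rng.standard_normal((200, 3)) @ basis + 0.05 * rng.standard_normal((200, 100))

def reconstruct(X, k):
    """Reconstruct X from its first k characteristic vectors, taken here as
    the leading principal components of the mean-centered data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    scores = (X - mu) @ Vt[:k].T
    return mu + scores @ Vt[:k]

def rmse(A, B):
    return np.sqrt(np.mean((A - B) ** 2))

errors = {k: rmse(spectra, reconstruct(spectra, k)) for k in (1, 3, 6)}
print(errors)
```

As in the paper, the RMSE drops quickly until the number of vectors reaches the effective rank of the data and then levels off near the noise floor.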
Bishop, Peter J; Clemente, Christofer J; Hocknull, Scott A; Barrett, Rod S; Lloyd, David G
2017-03-01
Cancellous bone is very sensitive to its prevailing mechanical environment, and study of its architecture has previously aided interpretations of locomotor biomechanics in extinct animals or archaeological populations. However, quantification of architectural features may be compromised by poor preservation in fossil and archaeological specimens, such as post mortem cracking or fracturing. In this study, the effects of post mortem cracks on the quantification of cancellous bone fabric were investigated through the simulation of cracks in otherwise undamaged modern bone samples. The effect on both scalar (degree of fabric anisotropy, fabric elongation index) and vector (principal fabric directions) variables was assessed through comparing the results of architectural analyses of cracked vs. non-cracked samples. Error was found to decrease as the relative size of the crack decreased, and as the orientation of the crack approached the orientation of the primary fabric direction. However, even in the best-case scenario simulated, error remained substantial, with at least 18% of simulations showing a > 10% error when scalar variables were considered, and at least 6.7% of simulations showing a > 10° error when vector variables were considered. As a 10% (scalar) or 10° (vector) difference is probably too large for reliable interpretation of a fossil or archaeological specimen, these results suggest that cracks should be avoided if possible when analysing cancellous bone architecture in such specimens. © 2016 Anatomical Society.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Valuing Puget Sound’s Valued Ecosystems Components
2007-07-01
communicate the value of Puget Sound nearshore restoration to managers and the public, and are intended to speak to ecological and societal values...list of VECs is meant to represent a cross-section of organisms and physical structures that occupy and interact with the physical processes found in...7 Applying Economic Valuation Techniques to Ecological Resources
USDA-ARS?s Scientific Manuscript database
‘Ca. Liberibacter asiaticus’ is the causal agent of citrus huanglongbing, the most serious disease of citrus worldwide. We have developed and applied immunization and affinity screening methods to develop a primary library of recombinant single chain variable fragment (scFv) antibodies in an M13 vec...
Average Nuclear Potentials from Selfconsistent Semiclassical Calculations
NASA Astrophysics Data System (ADS)
Bartel, J.
1999-03-01
Using the selfconsistent semiclassical Extended Thomas-Fermi (ETF) method up to 4th order in connection with Skyrme forces, it is demonstrated that the neutron and proton average potentials obtained using the semiclassical functionals τ_ETF[ρ] and J_ETF[ρ] reproduce the corresponding Hartree-Fock fields extremely well, except for shell oscillations in the nuclear center.
NASA Astrophysics Data System (ADS)
Odinokov, S. B.; Petrov, A. V.
1995-10-01
Mathematical models of components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system (noise and errors of radiation sources and detectors, nonlinearity of an analogue-to-digital converter, and nonideal optical systems) are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of an error amounting to one bit, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of the multiplier are considered at the technical systems level.
Study on the precision of the guide control system of independent wheel
NASA Astrophysics Data System (ADS)
ji, Y.; Ren, L.; Li, R.; Sun, W.
2016-09-01
The torque ripple of a permanent magnet synchronous motor under active vector control is studied in this paper. The ripple appears because of errors in position detection and current detection, errors generated in the inverter, and the influence of the motor itself (flux-linkage harmonics, the cogging effect, and so on). Then, a simulation dynamic model of a bogie with a permanent magnet synchronous motor vector control system is established in MATLAB/Simulink. The stability of the bogie under steering control is studied, as is the relationship between the motor errors and the precision of the control system. The result shows that the existing motor does not meet the requirements of the control system.
Early error detection: an action-research experience teaching vector calculus
NASA Astrophysics Data System (ADS)
Magdalena Añino, María; Merino, Gabriela; Miyara, Alberto; Perassi, Marisol; Ravera, Emiliano; Pita, Gustavo; Waigandt, Diana
2014-04-01
This paper describes an action-research experience carried out with second year students at the School of Engineering of the National University of Entre Ríos, Argentina. Vector calculus students played an active role in their own learning process. They were required to present weekly reports, in both oral and written forms, on the topics studied, instead of merely sitting and watching as the teacher solved problems on the blackboard. The students were also asked to perform computer assignments, and their learning process was continuously monitored. Among many benefits, this methodology has allowed students and teachers to identify errors and misconceptions that might have gone unnoticed under a more passive approach.
A fingerprint key binding algorithm based on vector quantization and error correction
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. In order to accommodate the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound with the key after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
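A highly simplified sketch of the key-binding idea follows: quantize the template so that small acquisition-to-acquisition variations vanish, store only a hash, and release the key on a hash match. It omits the error-correction coding of the actual algorithm, and the codebook, features, and noise level are synthetic:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(3)

def quantize(features, codebook):
    """Vector quantization: map each feature vector to its nearest codeword
    index, absorbing small variations between fingerprint acquisitions."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

codebook = rng.standard_normal((16, 8))          # trained offline in practice
enrolled = codebook[[3, 7, 7, 1, 12]] + 0.05 * rng.standard_normal((5, 8))
key = b"secret-key-material"

# Enrollment: store only the hash of (quantized template, key); neither the
# key nor the raw template appears in storage.
stored_hash = hashlib.sha256(bytes(quantize(enrolled, codebook)) + key).hexdigest()

# Verification: a fresh, noisy reading of the same finger quantizes to the
# same indices, so the recomputed hash matches and the key can be released.
probe = codebook[[3, 7, 7, 1, 12]] + 0.05 * rng.standard_normal((5, 8))
probe_hash = hashlib.sha256(bytes(quantize(probe, codebook)) + key).hexdigest()
print(probe_hash == stored_hash)
```

In the real scheme, error-correcting codes handle residual index mismatches that plain quantization cannot absorb.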
NASA Technical Reports Server (NTRS)
Sylvester, W. B.
1984-01-01
A series of SEASAT repeat orbits over a sequence of best low center positions is simulated by using the Seatrak satellite calculator. These low centers are, upon appropriate interpolation to hourly positions, located at various times during the ±3 hour assimilation cycle. Error analysis for a sample of best cyclone center positions taken from the Atlantic and Pacific oceans reveals a minimum average error of 1.1 deg of longitude and a standard deviation of 0.9 deg of longitude. The magnitude of the average error suggests that by utilizing the ±3 hour window in the assimilation cycle, the quality of the SASS data is degraded to the level of the background. A further consequence of this assimilation scheme is the blending of two or more juxtaposed vector winds, generally possessing different properties (vector quantity and time). The outcome of this is to reduce gradients in the wind field and to deform isobaric and frontal patterns of the initial field.
Xue, Min; Pan, Shilong; Zhao, Yongjiu
2015-02-15
A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A
2014-06-15
Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. 
The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.
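The static, mean, and linear models of a per-fraction feature vector can be sketched as follows; the synthetic "morphology" series and the error metric are illustrative stand-ins, not the study's data or its cross-validation scheme:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical morphology feature vectors for one patient: one 10-dimensional
# vector per treatment fraction, drifting roughly linearly over 30 fractions.
t = np.arange(30)
truth = 1.0 - 0.01 * t                       # e.g. slowly shrinking radii
series = truth[:, None] + 0.005 * rng.standard_normal((30, 10))

def forecast(series, k, model):
    """Forecast fraction k's feature vector from fractions 0..k-1 using the
    static (carry last value), mean, or linear (least-squares trend) model."""
    past = series[:k]
    if model == "static":
        return past[-1]
    if model == "mean":
        return past.mean(axis=0)
    coef = np.polyfit(np.arange(k), past, 1)  # per-feature linear fit
    return coef[0] * k + coef[1]

errs = {m: np.mean([np.abs(forecast(series, k, m) - series[k]).mean()
                    for k in range(5, 30)]) for m in ("static", "mean", "linear")}
print(errs)
```

With a drifting series like this one, the linear model extrapolates the trend and beats both the carry-forward and mean models, echoing the paper's finding that the mean model helps only for some descriptors.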
Attitude determination using vector observations: A fast optimal matrix algorithm
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
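One standard way to minimize Wahba's loss function is via the singular value decomposition of the attitude profile matrix; the sketch below is that generic SVD solution, not necessarily the specific fast matrix algorithm of the paper, and the weights and test vectors are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

def wahba_svd(v_body, v_ref, weights):
    """Attitude matrix minimizing Wahba's loss
    L(A) = 1/2 * sum_i a_i * ||b_i - A r_i||^2
    via the SVD of the attitude profile matrix B = sum_i a_i b_i r_i^T."""
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, v_body, v_ref))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def random_rotation():
    A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return A * np.linalg.det(A)                # flip sign if det = -1

# The two-observation case analyzed in the paper (noiseless here):
A_true = random_rotation()
refs = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
body = [A_true @ r for r in refs]
A_est = wahba_svd(body, refs, [0.5, 0.5])
print(np.max(np.abs(A_est - A_true)))
```

With two noiseless, non-collinear observations the optimal estimate recovers the true attitude exactly, which is the special case the paper relates to the TRIAD/algebraic method.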
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed approximation Φβ*, where Φ is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Φβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Φβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Φβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. 
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba
2016-05-01
In this paper, Model Predictive Control and Dead-beat predictive control strategies are proposed for the control of a PMSG based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables. It allows selecting the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameters variations. The Dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero-steady state error. The optimum voltage vector is then applied through Space Vector Modulation (SVM) technique. The main advantages of the Dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of back-to-back converter in a wind turbine system based on PMSG. Simulation results (under Matlab-Simulink software environment tool) and experimental results (under developed prototyping platform) are presented in order to show the performances of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
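The cost-function minimization over a finite set of inverter voltage vectors can be sketched in one step of a toy current loop; the first-order plant model, parameter values, and quadratic cost below are illustrative assumptions, not the converter model of the paper:

```python
import numpy as np

# Finite-set MPC step: pick, from the 8 discrete two-level inverter voltage
# vectors, the one whose one-step current prediction is closest to the
# reference (minimum of a quadratic cost function).
R, L, Ts = 0.5, 1e-3, 1e-4                      # resistance, inductance, step
angles = np.arange(6) * np.pi / 3
v_set = np.vstack([np.zeros((2, 2)),            # two zero vectors
                   np.stack([np.cos(angles), np.sin(angles)], axis=1)]) * 100.0

def predict(i, v):
    """Forward-Euler prediction of the alpha-beta current one step ahead
    for a simple R-L load: di/dt = (v - R*i) / L."""
    return i + Ts / L * (v - R * i)

def best_vector(i, i_ref):
    costs = [np.sum((predict(i, v) - i_ref) ** 2) for v in v_set]
    return int(np.argmin(costs))

i_now = np.array([0.0, 0.0])
i_ref = np.array([8.0, 0.0])
k = best_vector(i_now, i_ref)
print(k, v_set[k])
```

The dead-beat alternative described in the abstract would instead invert the same prediction model for the exact voltage and synthesize it by SVM, trading the variable switching pattern of finite-set MPC for a constant switching frequency.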
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.
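The point that overstaffing and understaffing errors have different implications suggests an asymmetric error metric when comparing forecasters; a minimal sketch with invented visit counts, cost weights, and stand-in models:

```python
import numpy as np

def staffing_cost(pred, actual, c_over=1.0, c_under=4.0):
    """Asymmetric error metric: understaffing (pred < actual) is penalized
    more heavily than overstaffing. The cost weights are illustrative."""
    diff = pred - actual
    return np.where(diff >= 0, c_over * diff, -c_under * diff).sum()

actual = np.array([40, 55, 62, 48, 70, 66])        # visits per 8-hour shift
flat = np.full(6, actual.mean())                   # stratified-average stand-in
trend = actual + np.array([2, -1, 1, 2, -2, 1])    # tighter regressor stand-in
print(staffing_cost(flat, actual), staffing_cost(trend, actual))
```

Ranking models by such a cost, rather than by a symmetric RMSE, is what lets reduced prediction error be translated into the personnel monetary savings reported in the study.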
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition and recurs in more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance-Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limit its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons with Support Vector Machines are presented.
Development of a two-dimensional dual pendulum thrust stand for Hall thrusters.
Nagao, N; Yokota, S; Komurasaki, K; Arakawa, Y
2007-11-01
A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors [axial and horizontal (transverse) direction thrusts] of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between the inner and outer pendulums, by which the thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm, one set on the other at a right angle, enabling the pendulums to swing in two directions. Thrust calibration using a pulley-and-weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in the transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of +/-2.3 degrees was measured with an error of +/-0.2 degrees under the typical operating conditions for the thruster.
Spaceflight Ka-Band High-Rate Radiation-Hard Modulator
NASA Technical Reports Server (NTRS)
Jaso, Jeffery M.
2011-01-01
A document discusses the creation of a Ka-band modulator developed specifically for the NASA/GSFC Solar Dynamics Observatory (SDO). This flight design consists of a high-bandwidth, Quadriphase Shift Keying (QPSK) vector modulator with radiation-hardened, high-rate driver circuitry that receives I and Q channel data. The radiation-hard design enables SDO's Ka-band communications downlink system to transmit 130 Mbps (300 Msps after data encoding) of science instrument data to the ground system continuously throughout the mission's minimum life of five years. The low error vector magnitude (EVM) of the modulator lowers the implementation loss of the transmitter in which it is used, thereby increasing the overall communication system link margin. The modulator is a component of the SDO transmitter and meets the following specifications over a 0 to 40 °C operational temperature range: QPSK/OQPSK modulation, 300-Msps symbol rate, 26.5-GHz center frequency, error vector magnitude less than or equal to 10 percent rms, and compliance with the NTIA (National Telecommunications and Information Administration) spectral mask.
Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
NASA Astrophysics Data System (ADS)
Imig, Astrid; Stephenson, Edward
2009-10-01
The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17-mm-thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increased the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.
Force estimation from OCT volumes using 3D CNNs.
Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander
2018-07-01
Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed, but these are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicone tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods, which achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1980-01-01
A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
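A minimal sketch of the method's idea, assuming the unit harmonic oscillator x'' = -x as the test system (an assumption for illustration, not one of the paper's example problems): its energy E = (x² + v²)/2 is a known integral of motion, and rescaling the state after each deliberately crude Euler step holds the integral at its exact initial value, controlling the local error in the integral.

```python
import math

def euler_step(x, v, h):
    # Raw forward-Euler step; its energy error grows without correction.
    return x + h * v, v - h * x

def projected_step(x, v, h):
    xn, vn = euler_step(x, v, h)
    scale = 1.0 / math.hypot(xn, vn)      # enforce x^2 + v^2 = 1 (E = 1/2)
    return xn * scale, vn * scale

h, n = 0.01, 1000
raw = proj = (1.0, 0.0)
for _ in range(n):
    raw = euler_step(raw[0], raw[1], h)
    proj = projected_step(proj[0], proj[1], h)

raw_energy = 0.5 * (raw[0] ** 2 + raw[1] ** 2)     # drifts upward
proj_energy = 0.5 * (proj[0] ** 2 + proj[1] ** 2)  # held at 0.5
```

The raw Euler energy grows by a factor (1 + h²) each step, while the projected integrator keeps the integral fixed; controlling the error in the integral in this way is the mechanism by which global solution error can also be reduced.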
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
PubRunner: A light-weight framework for updating text mining results.
Anekalla, Kishore R; Courneya, J P; Fiorini, Nicolas; Lever, Jake; Muchow, Michael; Busby, Ben
2017-01-01
Biomedical text mining promises to assist biologists in quickly navigating the combined knowledge in their domain. This would allow improved understanding of the complex interactions within biological systems and faster hypothesis generation. New biomedical research articles are published daily and text mining tools are only as good as the corpus from which they work. Many text mining tools are underused because their results are static and do not reflect the constantly expanding knowledge in the field. In order for biomedical text mining to become an indispensable tool used by researchers, this problem must be addressed. To this end, we present PubRunner, a framework for regularly running text mining tools on the latest publications. PubRunner is lightweight, simple to use, and can be integrated with an existing text mining tool. The workflow involves downloading the latest abstracts from PubMed, executing a user-defined tool, pushing the resulting data to a public FTP or Zenodo dataset, and publicizing the location of these results on the public PubRunner website. We illustrate the use of this tool by re-running the commonly used word2vec tool on the latest PubMed abstracts to generate up-to-date word vector representations for the biomedical domain. This shows a proof of concept that we hope will encourage text mining developers to build tools that truly will aid biologists in exploring the latest publications.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
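The idea that a robust fit should give large-error training samples small weights and small-error samples large weights can be sketched with a simple residual-based reweighting; the Huber-style cutoff below is an illustrative stand-in, not the paper's exact weighting scheme:

```python
# Illustrative residual-based reweighting: samples whose modeling error
# exceeds a cutoff c are down-weighted in proportion to their error,
# reducing the influence of outliers on the refitted model.
def robust_weights(residuals, c=1.0):
    return [1.0 if abs(r) <= c else c / abs(r) for r in residuals]

w = robust_weights([0.2, -0.5, 3.0, -8.0])
# Small-error samples keep weight 1.0; the outlier at -8.0 is down-weighted.
```

In an iteratively reweighted scheme, such weights would multiply the squared-error terms of the LS-SVM objective before re-solving for the model parameters.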
Can different quantum state vectors correspond to the same physical state? An experimental test
NASA Astrophysics Data System (ADS)
Nigg, Daniel; Monz, Thomas; Schindler, Philipp; Martinez, Esteban A.; Hennrich, Markus; Blatt, Rainer; Pusey, Matthew F.; Rudolph, Terry; Barrett, Jonathan
2016-01-01
A century after the development of quantum theory, the interpretation of a quantum state is still discussed. If a physicist claims to have produced a system with a particular quantum state vector, does this represent directly a physical property of the system, or is the state vector merely a summary of the physicist’s information about the system? Assume that a state vector corresponds to a probability distribution over possible values of an unknown physical or ‘ontic’ state. Then, a recent no-go theorem shows that distinct state vectors with overlapping distributions lead to predictions different from quantum theory. We report an experimental test of these predictions using trapped ions. Within experimental error, the results confirm quantum theory. We analyse which kinds of models are ruled out.
ERIC Educational Resources Information Center
Patterson, Lorne; Dowd, Kathleen
2010-01-01
The recent economic downturn and surge in unemployment has focused attention on education and training as a strategic response to Ireland's socio-economic crisis. However, that attention has been concentrated on training through statutory institutions, particularly FAS and the VECs. Longford Women's Link, a Women's Community Education centre in Co…
Detecting West Nile virus in owls and raptors by an antigen-capture assay.
Gancz, Ady Y; Campbell, Douglas G; Barker, Ian K; Lindsay, Robbin; Hunter, Bruce
2004-12-01
We evaluated a rapid antigen-capture assay (VecTest) for detection of West Nile virus in oropharyngeal and cloacal swabs, collected at necropsy from owls (N = 93) and raptors (N = 27). Sensitivity was 93.5%-95.2% for northern owl species but <42.9% for all other species. Specificity was 100% for owls and 85.7% for raptors.
USDA-ARS?s Scientific Manuscript database
Dual luciferase reporter systems are valuable tools for functional genomic studies, but have not previously been developed for use in tick cell culture. We evaluated expression of available luciferase constructs in tick cell cultures derived from Rhipicephalus (Boophilus) microplus, an important vec...
Effect of Variable Emittance Coatings on the Operation of a Miniature Loop Heat Pipe
NASA Technical Reports Server (NTRS)
Douglas, Donya M.; Ku, Jentung; Ottenstein, Laura; Swanson, Theodore; Hess, Steve; Darrin, Ann
2005-01-01
As the size of spacecraft shrinks to accommodate smaller and more efficient instruments, smaller launch vehicles, and constellation missions, all subsystems must also be made smaller. Under NASA NFL4 03-OSS-02, Space Technology-8 (ST 8), NASA Goddard Space Flight Center and the Jet Propulsion Laboratory jointly conducted a Concept Definition study to develop a miniature loop heat pipe (MLHP) thermal management system design suitable for future small spacecraft. The proposed MLHP thermal management system consists of a miniature loop heat pipe (LHP) and deployable radiators that are coated with variable emittance coatings (VECs). As part of the Phase A study and proof of the design concept, variable emittance coatings were integrated with a breadboard miniature loop heat pipe. The miniature loop heat pipe was supplied by the Jet Propulsion Laboratory (JPL), while the variable emittance technology was supplied by the Johns Hopkins University Applied Physics Laboratory and Sensortex, Inc. The entire system was tested under vacuum at various temperature extremes and power loads. This paper summarizes the results of this testing and shows the effect of the VECs on the operation of a miniature loop heat pipe.
Blake, Zoë; Marks, Douglas K; Gartrell, Robyn D; Hart, Thomas; Horton, Patti; Cheng, Simon K; Taback, Bret; Horst, Basil A; Saenger, Yvonne M
2018-04-06
Immunotherapy, in particular checkpoint blockade, has changed the clinical landscape of metastatic melanoma. Nonetheless, the majority of patients will either be primary refractory or progress over follow-up. Management of patients progressing on first-line immunotherapy remains challenging. Expanded treatment options with combination immunotherapy have demonstrated efficacy in patients previously unresponsive to single-agent or alternative combination therapy. We describe the case of a patient with diffusely metastatic melanoma, including brain metastases, who, despite being treated with stereotactic radiosurgery and dual CTLA-4/PD-1 blockade (ipilimumab/nivolumab), developed systemic disease progression and innumerable brain metastases. This patient achieved a complete CNS response and partial systemic response with standard whole brain radiation therapy (WBRT) combined with talimogene laherparepvec (T-VEC) and pembrolizumab. Patients who do not respond to one immunotherapy combination may respond during treatment with an alternate combination, even in the presence of multiple brain metastases. Biomarkers are needed to assist clinicians in evidence-based clinical decision making after progression on first-line immunotherapy to determine whether response can be achieved with second-line immunotherapy.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
JPRS Report, Science & Technology, China
1991-10-22
Contents include: Shanghai Scientist Develops State-of-the-Art Liquid-Crystal Light Valve [ZHONGGUO KEXUE BAO, 30 Aug 91]. Excerpts: ...the angle of attack will gradually decrease under the action of aerodynamic moments... ...the direction of the final velocity vector of the satellite... ...errors in the impulse and the direction of the thrust vector of the retro-rocket engine... ...The recovery system is located inside the sealed reentry...
The Alignment of the Mean Wind and Stress Vectors in the Unstable Surface Layer
NASA Astrophysics Data System (ADS)
Bernardes, M.; Dias, N. L.
2010-01-01
A significant non-alignment between the mean horizontal wind vector and the stress vector was observed for turbulence measurements both above the water surface of a large lake and over a land surface (soybean crop). Possible causes for this discrepancy, such as flow distortion, averaging times, and the procedure used for extracting the turbulent fluctuations (low-pass filtering, filter widths, etc.), were dismissed after a detailed analysis. Minimum averaging times, always less than 30 min, were established by calculating ogives, and error bounds for the turbulent stresses were derived with three different approaches, based on integral time scales (first-crossing and lag-window estimates) and on a bootstrap technique. It was found that the mean absolute value of the angle between the mean wind and stress vectors is highly related to atmospheric stability, with the non-alignment increasing distinctly with increasing instability. Given a coordinate rotation that aligns the mean wind with the x direction, this behaviour can be explained by the growth of the relative error of the u-w component with instability. As a result, under more unstable conditions the u-w and v-w components become of the same order of magnitude, and the local stress vector gives the impression of being non-aligned with the mean wind vector. The relative error of the v-w component is large enough to make it indistinguishable from zero throughout the range of stabilities. Therefore, the standard assumptions of Monin-Obukhov similarity theory hold: it is fair to assume that the v-w stress component is actually zero, and that the non-alignment is a purely statistical effect. An analysis of the dimensionless budgets of the u-w and v-w components confirms this interpretation, with both shear and buoyant production of u-w decreasing with increasing instability. In the v-w budget, shear production is zero by definition, while buoyancy displays very low-intensity fluctuations around zero. As local free convection is approached, the turbulence becomes effectively axisymmetric, and a practical limit seems to exist beyond which it is not possible to measure the u-w component accurately.
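The non-alignment angle discussed above is simply the angle between the mean horizontal wind vector and the stress vector; the component values below are made-up numbers for illustration, not measurements from the study:

```python
import math

# Angle between two 2-D vectors via the dot product; applied here to a
# rotated mean wind (u, 0) and an illustrative kinematic stress vector
# whose components correspond to the u-w and v-w covariances.
def angle_between(a, b):
    dot = a[0] * b[0] + a[1] * b[1]
    return math.degrees(math.acos(dot / (math.hypot(*a) * math.hypot(*b))))

wind = (5.0, 0.0)          # mean wind aligned with x after rotation [m/s]
stress = (0.02, 0.005)     # (u-w, v-w) stress components [m^2/s^2]
angle = angle_between(wind, stress)   # non-zero whenever v-w is non-zero
```

When the v-w component is statistically indistinguishable from zero, as the abstract argues, this apparent angle carries no physical meaning.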
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.
Coherent Doppler Lidar for Boundary Layer Studies and Wind Energy
NASA Astrophysics Data System (ADS)
Choukulkar, Aditya
This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler lidar (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique yields a significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and analysis increment calculation. The modified technique makes retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and the vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval is carried out, quantifying the error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows, such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS RTM) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is also used to identify uncharacteristic and extreme flow events at a wind development site, and estimates of turbulence and shear from this technique are compared to tower measurements. Finally, a formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine wind energy content in the presence of turbulence.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
Polar codes are a new type of forward error correction (FEC) code, introduced by Arikan in [1], in which he... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described on an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases which implied that either one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
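The anomaly described above can be illustrated with a small numeric check: a line-of-sight cosine computed as a dot product can exceed 1 only if at least one of the supposed unit vectors actually has length greater than unity. The values are made up for illustration:

```python
# Dot product of two 3-D vectors; for true unit vectors the result is a
# valid cosine in [-1, 1]. An over-length "unit" vector (e.g. from a
# roll/pitch nonorthogonality error) can push the result above 1.
def los_cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

u_good = (1.0, 0.0, 0.0)
u_bad = (1.0005, 0.0, 0.0)   # slightly over-length measured end-point vector

c_ok = los_cosine(u_good, u_good)    # stays within [-1, 1]
c_bad = los_cosine(u_good, u_bad)    # exceeds 1, flagging the length error
```

Checking the norms of the measured end-point vectors is thus a direct way to isolate the source of a cosine greater than one.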
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine s health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Tuning support vector machines for minimax and Neyman-Pearson classification.
Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D
2010-10-01
This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.
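The benefit of smoothing noisy error estimates before selecting an operating point can be sketched with a simple moving average; both the smoother and the grid values below are illustrative assumptions, not the paper's actual smoother over the 2nu-SVM parameter space:

```python
# Locally average noisy cross-validation error estimates over a 1-D
# parameter grid before picking the minimizer, so that a single spuriously
# low estimate does not dictate the chosen parameter.
def smooth(errors, radius=1):
    out = []
    for i in range(len(errors)):
        lo, hi = max(0, i - radius), min(len(errors), i + radius + 1)
        window = errors[lo:hi]
        out.append(sum(window) / len(window))
    return out

cv_errors = [0.30, 0.10, 0.28, 0.12, 0.26]       # noisy raw CV estimates
raw_best = min(range(len(cv_errors)), key=cv_errors.__getitem__)
smoothed = smooth(cv_errors)
best = min(range(len(smoothed)), key=smoothed.__getitem__)
# raw_best chases the spuriously low estimate; best lands where the
# neighborhood is consistently low.
```

This captures the failure mode the abstract describes: unsmoothed cross-validation can select a parameter whose low error estimate is noise rather than signal.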
Test of understanding of vectors: A reliable multiple-choice vector concept test
NASA Astrophysics Data System (ADS)
Barniol, Pablo; Zavala, Genaro
2014-06-01
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended problems in which a total of 2067 students participated. Using this taxonomy, we then designed a 20-item multiple-choice test [Test of understanding of vectors (TUV)] and administered it in English to 423 students who were completing the required sequence of introductory physics courses at a large private Mexican university. We evaluated the test's content validity, reliability, and discriminatory power. The results indicate that the TUV is a reliable assessment tool. We also conducted a detailed analysis of the students' understanding of the vector concepts evaluated in the test. The TUV is included in the Supplemental Material as a resource for other researchers studying vector learning, as well as instructors teaching the material.
Attitude control with realization of linear error dynamics
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Bach, Ralph E.
1993-01-01
An attitude control law is derived to realize linear unforced error dynamics with the attitude error defined in terms of rotation group algebra (rather than vector algebra). Euler parameters are used in the rotational dynamics model because they are globally nonsingular, but only the minimal three Euler parameters are used in the error dynamics model because they have no nonlinear mathematical constraints to prevent the realization of linear error dynamics. The control law is singular only when the attitude error angle is exactly pi rad about any eigenaxis, and a simple intuitive modification at the singularity allows the control law to be used globally. The forced error dynamics are nonlinear but stable. Numerical simulation tests show that the control law performs robustly for both initial attitude acquisition and attitude control.
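A minimal sketch of an attitude error built from Euler parameters (unit quaternions), in the spirit of the abstract: the vector part of conj(q_ref) * q serves as a minimal three-parameter attitude error that is zero iff the attitudes coincide. The function names and test rotation are illustrative, not the paper's notation:

```python
import math

# Hamilton quaternion product of (w, x, y, z) tuples.
def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def attitude_error(q_ref, q):
    w, x, y, z = q_ref
    # Vector part of conj(q_ref) * q: a minimal 3-parameter error.
    return quat_mul((w, -x, -y, -z), q)[1:]

# 90-degree rotation about z versus the identity attitude:
q90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
err = attitude_error((1.0, 0.0, 0.0, 0.0), q90z)
```

Consistent with the abstract, this three-parameter error is singular only when the error angle reaches pi about some axis, where the scalar part of the error quaternion vanishes.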
The bee's map of the e-vector pattern in the sky.
Rossel, S; Wehner, R
1982-07-01
It has long been known that bees can use the pattern of polarized light in the sky as a compass cue even if they can see only a small part of the whole pattern. How they solve this problem has remained enigmatic. Here we show that the bees rely on a generalized celestial map that is used invariably throughout the day. We reconstruct this map by analyzing the navigation errors made by bees to which single e-vectors are displayed. In addition, we demonstrate how the bee's celestial map can be derived from the e-vector patterns in the sky.
Deep Learning from EEG Reports for Inferring Underspecified Information
Goodwin, Travis R.; Harabagiu, Sanda M.
2017-01-01
Secondary use of electronic health records (EHRs) often relies on the ability to automatically identify and extract information from EHRs. Unfortunately, EHRs are known to suffer from a variety of idiosyncrasies; most prevalently, they have been shown to often omit or underspecify information. Adapting traditional machine learning methods for inferring underspecified information relies on manually specifying features characterizing the specific information to recover (e.g., particular findings, test results, or physicians' impressions). By contrast, in this paper, we present a method for jointly (1) automatically extracting word- and report-level features and (2) inferring underspecified information from EHRs. Our approach accomplishes these two tasks jointly by combining recent advances in deep neural learning with access to textual data in electroencephalogram (EEG) reports. We evaluate the performance of our model on the problem of inferring the neurologist's overall impression (normal or abnormal) from EEG reports and report an accuracy of 91.4%, precision of 94.4%, recall of 91.2%, and F1 measure of 92.8% (a 40% improvement over the performance obtained using Doc2Vec). These promising results demonstrate the power of our approach, while error analysis reveals remaining obstacles as well as areas for future improvement. PMID:28815118
The role of model dynamics in ensemble Kalman filter performance for chaotic systems
Ng, G.-H.C.; McLaughlin, D.; Entekhabi, D.; Ahanin, A.
2011-01-01
The ensemble Kalman filter (EnKF) is susceptible to losing track of observations, or 'diverging', when applied to large chaotic systems such as atmospheric and ocean models. Past studies have demonstrated the adverse impact of sampling error during the filter's update step. We examine how system dynamics affect EnKF performance, and whether the absence of certain dynamic features in the ensemble may lead to divergence. The EnKF is applied to a simple chaotic model, and ensembles are checked against singular vectors of the tangent linear model (corresponding to short-term growth) and Lyapunov vectors (corresponding to long-term growth). Results show that the ensemble strongly aligns itself with the subspace spanned by unstable Lyapunov vectors. Furthermore, the filter avoids divergence only if the full linearized long-term unstable subspace is spanned. However, short-term dynamics also become important as non-linearity in the system increases. Non-linear movement prevents errors in the long-term stable subspace from decaying indefinitely. If these errors then undergo linear intermittent growth, a small ensemble may fail to properly represent all important modes, causing filter divergence. A combination of long- and short-term growth dynamics is thus critical to EnKF performance. These findings can help in developing practical robust filters based on model dynamics. © 2011 The Authors. Tellus A © 2011 John Wiley & Sons A/S.
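The sampling error in the EnKF update step discussed above enters through the ensemble-estimated covariances. As a minimal, hedged sketch (a generic stochastic EnKF analysis step with perturbed observations, not the specific experimental setup of this study):

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One stochastic EnKF analysis step. ensemble: (n_state, n_members)."""
    n, N = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # observed-space anomalies
    # Sample covariances: with small N these carry the sampling error
    # that past studies link to filter divergence
    P_HT = A @ HA.T / (N - 1)
    S = HA @ HA.T / (N - 1) + R
    K = P_HT @ np.linalg.inv(S)                           # Kalman gain
    # Perturbed observations, one realization per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - HX)
```

If the ensemble anomalies fail to span the unstable subspace, the gain cannot correct errors growing in the missing directions, which is the divergence mechanism the abstract describes.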
Chen, Xiyuan; Wang, Xiying; Xu, Yuan
2014-01-01
This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
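The core of the inversion above is a search over candidate source positions that minimizes the misfit between observed and predicted slowness vectors under a Gaussian error model. A hedged sketch, using a homogeneous medium of velocity `v` (a strong simplification of the layered and power-law velocity models used in the study):

```python
import numpy as np

def locate_source(array_xy, s_obs, sigma, v, grid_x, grid_y):
    """Grid search for the source position maximizing a Gaussian
    likelihood of the observed horizontal slowness vector."""
    best, best_logL = None, -np.inf
    for x in grid_x:
        for y in grid_y:
            d = array_xy - np.array([x, y])     # source -> array direction
            r = np.linalg.norm(d)
            if r == 0.0:
                continue
            s_pred = d / (r * v)                # expected slowness vector
            resid = s_obs - s_pred
            logL = -0.5 * float(resid @ resid) / sigma ** 2
            if logL > best_logL:
                best, best_logL = (x, y), logL
    return best, best_logL
```

Note that in this homogeneous sketch all sources along the same ray from the array produce identical slowness vectors, which mirrors the abstract's observation that the solution is unconstrained in the radial direction.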
PREDICTION OF SOLAR FLARE SIZE AND TIME-TO-FLARE USING SUPPORT VECTOR MACHINE REGRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucheron, Laura E.; Al-Ghraibah, Amani; McAteer, R. T. James
We study the prediction of solar flare size and time-to-flare using 38 features describing magnetic complexity of the photospheric magnetic field. This work uses support vector regression to formulate a mapping from the 38-dimensional feature space to a continuous-valued label vector representing flare size or time-to-flare. When we consider flaring regions only, we find an average error in estimating flare size of approximately half a geostationary operational environmental satellite (GOES) class. When we additionally consider non-flaring regions, we find an increased average error of approximately three-fourths a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a true positive rate of 0.69 and a true negative rate of 0.86 for flare prediction. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This is supported by our larger error rates of some 40 hr in the time-to-flare regression problem. The 38 magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem.
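The regress-then-threshold scheme described above can be sketched with scikit-learn; this uses synthetic 38-dimensional features and a toy linear label, not the paper's magnetic-complexity data:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_regions, n_feat = 400, 38                    # 38 synthetic "complexity" features
X = rng.normal(size=(n_regions, n_feat))
flare_size = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=n_regions)  # toy label

# Support vector regression from feature space to continuous flare size
model = SVR(kernel="linear", C=10.0).fit(X[:300], flare_size[:300])
pred = model.predict(X[300:])

# Threshold the regressed size to obtain a binary flare / no-flare call,
# from which true positive and true negative rates follow
flared = flare_size[300:] > 0.0
tpr = float(np.mean(pred[flared] > 0.0))
tnr = float(np.mean(pred[~flared] <= 0.0))
```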
History of the Voluntary Intermodal Sealift Agreement
2002-06-01
reflect executed Voluntary Enrollment Contracts (VEC) for VISA Stages I, II, and III to include basic activation procedures; DOD annual minimums for...provisions; and on-the-shelf basic agreements (such as VISA Intermodal Contingency Contracts (VICC) for Stages I, II, and III). The anticipated...insufficient Program incentives are revised annually, but the basic tenets remain in place. Activation, capacity required to commit and carrier risk clauses
Detecting West Nile Virus in Owls and Raptors by an Antigen-capture Assay
Campbell, Douglas G.; Barker, Ian K.; Lindsay, Robbin; Hunter, Bruce
2004-01-01
We evaluated a rapid antigen-capture assay (VecTest) for detection of West Nile virus in oropharyngeal and cloacal swabs, collected at necropsy from owls (N = 93) and raptors (N = 27). Sensitivity was 93.5%–95.2% for northern owl species but <42.9% for all other species. Specificity was 100% for owls and 85.7% for raptors. PMID:15663862
Planning for the Future of the Adult Education Service: A Challenge for VECs
ERIC Educational Resources Information Center
Muircheartaigh, Lucas O.
2004-01-01
In this paper, the author points out that there has been a significant development of the adult education service in Ireland in recent years. However, if the service is to become part of the mainstream of Irish education, the issue of structures at all levels within the system has to be addressed. Quite frankly the present system is very…
Promoter and Cofactor Requirements for SERM-ER Activity
2007-05-01
was seen in the no-digestion control or no-ligation control. We performed the same experiment using the designed against the intergenic region between...estrogen, and the fixed chromatin was digested with a specific restriction taining an SV40 promoter and transfected these vectors into hormone...Enhancer Domains and Transcriptional Activity of Enhancer Regions (A) Chromosome capture assay was performed after digesting fixed chromatin from
Yang, Fan; Hu, Duan; Bai, Xiang-jun; Zhang, Kun; Li, Ren-jie; Xue, Chen-chen
2012-07-01
To investigate the effect of vacuum sealing drainage (VSD) on the variation of oxygen partial pressure (PtO2) and on vascularization. Twelve rabbit wound models underwent VSD (vacuum group, n = 6) or conventional therapy (conventional group, n = 6). Variation of PtO2 was measured with an oxygen partial pressure measuring apparatus, expression of hypoxia-inducible factor 1α (HIF-1α) mRNA was measured by real-time fluorescent quantitative PCR, and content of vascular endothelial growth factor (VEGF) was measured by ELISA after tissue homogenization over 7 days. Vascular endothelial cells (VEC) and new blood capillaries (NBC) were counted on hematoxylin-eosin sections under a light microscope. The average PtO2 of the vacuum group was significantly lower than that of the conventional group (t = -99.780 to -5.305, P < 0.01). Expression of HIF-1α (3.11 ± 0.07, 3.68 ± 0.26, 4.16 ± 0.13 and 3.91 ± 0.26 at 30 minutes and 1, 6 and 12 hours, respectively) and content of VEGF (103.3 ± 2.4, 134.2 ± 9.0, 167.8 ± 3.8 and 232.1 ± 9.5 at the same time points) in the vacuum group increased after 30 minutes and were significantly lower than in the conventional group (t = 13.038-80.208, P < 0.01), and both decreased after 24 hours (P < 0.05). Counts of VEC (2.47 ± 0.45 to 4.70 ± 0.38) and NBC (1.33 ± 0.49 to 4.33 ± 0.68) in the vacuum group increased at the same time points and were significantly higher than in the conventional group (t = -0.670 to 16.500, P < 0.05). PtO2 of the wound surface was significantly reduced by VSD. Expression of HIF-1α and content of VEGF were increased by VSD, enhancing the differentiated state of VEC and the construction of NBC, which favored vascularization and wound healing.
Yang, Di; Xiao, Chen-Xi; Su, Zheng-Hua; Huang, Meng-Wei; Qin, Ming; Wu, Wei-Jun; Jia, Wan-Wan; Zhu, Yi-Zhun; Hu, Jin-Feng; Liu, Xin-Hua
2017-08-15
Endothelial inflammation is an increasingly prevalent condition in the pathogenesis of many cardiovascular diseases. (-)-7(S)-hydroxymatairesinol (7-HMR), a naturally occurring plant lignan, possesses both antioxidant and anti-cancer properties and is therefore a good candidate for suppressing tumor necrosis factor-α (TNF-α)-mediated inflammation in vascular endothelial cells (VECs). The objective of this study is to evaluate its anti-inflammatory effect on TNF-α-stimulated VECs and the underlying mechanisms. The effects of 7-HMR on TNF-α-induced inflammatory mediators in VECs were determined by qRT-PCR and Western blot. MAPKs and phosphorylation of Akt, HO-1 and NF-κB p65 were examined using Western blot. Nuclear localisation of NF-κB was also examined using Western blot and immunofluorescence. Here we found that 7-HMR could suppress TNF-α-induced inflammatory mediators, such as vascular cell adhesion molecule-1, interleukin-6 and inducible nitric oxide synthase, at both the mRNA and protein levels, and concentration-dependently attenuated reactive oxygen species generation. We further identified that 7-HMR remarkably induced superoxide dismutase and heme oxygenase-1 expression, associated with degradation of Kelch-like ECH-associated protein 1 (Keap1), and up-regulated nuclear factor erythroid 2-related factor 2 (Nrf2). In addition, 7-HMR time- and concentration-dependently attenuated TNF-α-induced phosphorylation of extracellular signal-regulated kinase 1/2 (ERK) and Akt, but not p38 or c-Jun N-terminal kinase 1/2. Moreover, 7-HMR significantly suppressed TNF-α-mediated nuclear factor-κB (NF-κB) activation by inhibiting phosphorylation and nuclear translocation of NF-κB p65.
Our results demonstrated that 7-HMR inhibited TNF-α-stimulated endothelial inflammation, at least in part, through inhibition of NF-κB activation and upregulation of Nrf2-antioxidant response element signaling pathway, suggesting 7-HMR might be used as a promising vascular protective drug. Copyright © 2017. Published by Elsevier GmbH.
Xi, Lei; Zhang, Chen; He, Yanling
2018-05-09
To evaluate the refractive and visual outcomes of transepithelial photorefractive keratectomy (TransPRK) in the treatment of low to moderate myopic astigmatism. This retrospective study enrolled a total of 47 eyes that had undergone TransPRK. Preoperative cylinder diopters ranged from −0.75 D to −2.25 D (mean −1.11 ± 0.40 D), and the sphere was between −1.50 D and −5.75 D. Visual outcomes and vector analysis of astigmatism, including the error ratio (ER), correction ratio (CR), error of magnitude (EM) and error of angle (EA), were evaluated. At 6 months after TransPRK, all eyes had an uncorrected distance visual acuity of 20/20 or better, no eyes lost ≥2 lines of corrected distance visual acuity (CDVA), and 93.6% had residual refractive cylinder within ±0.50 D of the intended correction. On vector analysis, the mean correction ratio for refractive cylinder was 1.03 ± 0.30. The mean error of magnitude was −0.04 ± 0.36 D. The mean error of angle was 0.44° ± 7.42°, and 80.9% of eyes had an axis shift within ±10°. The absolute astigmatic error of magnitude was statistically significantly correlated with the intended cylinder correction (r = 0.48, P < 0.01). TransPRK showed safe, effective and predictable results in the correction of low to moderate astigmatism and myopia.
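The vector-analysis indices above (CR, EM, EA) are conventionally computed in a double-angle representation of astigmatism. A hedged sketch of standard Alpins-style indices from target-induced (TIA) and surgically induced (SIA) astigmatism; these are not necessarily the exact formulas used in the study:

```python
import numpy as np

def astig_vector(cyl, axis_deg):
    """Double-angle (power-vector) representation of a cylinder."""
    a = np.radians(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def vector_analysis(tia_cyl, tia_axis, sia_cyl, sia_axis):
    """Correction ratio, error of magnitude (D), and error of angle (deg)."""
    tia = astig_vector(tia_cyl, tia_axis)
    sia = astig_vector(sia_cyl, sia_axis)
    cr = sia_cyl / tia_cyl                      # correction ratio
    em = sia_cyl - tia_cyl                      # error of magnitude
    ea = 0.5 * np.degrees(np.arctan2(sia[1], sia[0]) -
                          np.arctan2(tia[1], tia[0]))
    ea = (ea + 45.0) % 90.0 - 45.0              # wrap to (-45, 45]
    return cr, em, ea
```

A perfect correction gives CR = 1, EM = 0 and EA = 0; overcorrection with an axis rotation shows up as CR > 1 and a nonzero EA.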
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
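The EVM metric used in this study has a standard definition: the RMS magnitude of the error vector between received and ideal constellation points, normalized by the RMS reference power. A minimal sketch on a noisy QPSK constellation (illustrative channel, not the measured link):

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude as a percentage of RMS reference power."""
    err = measured - reference
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) /
                           np.mean(np.abs(reference) ** 2))

# QPSK reference constellation plus a small additive-noise channel
rng = np.random.default_rng(0)
ref = (rng.choice([-1.0, 1.0], 4000) +
       1j * rng.choice([-1.0, 1.0], 4000)) / np.sqrt(2)
rx = ref + 0.05 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))
```

With complex noise of standard deviation 0.05 per component on a unit-power constellation, the expected EVM is about 100·√(2·0.05²) ≈ 7%.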
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
NASA Technical Reports Server (NTRS)
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
An algorithm for targeting finite burn maneuvers
NASA Technical Reports Server (NTRS)
Barbieri, R. W.; Wyatt, G. H.
1972-01-01
An algorithm was developed to solve the following problem: given the characteristics of the engine to be used to make a finite burn maneuver and given the desired orbit, when must the engine be ignited and what must be the orientation of the thrust vector so as to obtain the desired orbit? The desired orbit is characterized by classical elements and functions of these elements whereas the control parameters are characterized by the time to initiate the maneuver and three direction cosines which locate the thrust vector. The algorithm was built with a Monte Carlo capability whereby samples are taken from the distribution of errors associated with the estimate of the state and from the distribution of errors associated with the engine to be used to make the maneuver.
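The Monte Carlo capability described above samples from the error distributions of the state estimate and the engine. A hedged sketch of that sampling idea for the thrust-vector control parameters (illustrative 1% magnitude and 0.5° pointing errors, not numbers from the report):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
dv_mag = 100.0                                  # nominal delta-v magnitude, m/s

# Engine error model: 1% 1-sigma magnitude error and
# 0.5 deg 1-sigma pointing error about each lateral axis
mag = dv_mag * (1.0 + 0.01 * rng.normal(size=n))
tilt = np.radians(0.5) * rng.normal(size=(n, 2))

# Realized delta-v vectors, small-angle approximation for the tilt
dv = np.column_stack([
    mag * tilt[:, 1],                           # lateral component (x)
    -mag * tilt[:, 0],                          # lateral component (y)
    mag * np.cos(np.linalg.norm(tilt, axis=1)), # along the nominal thrust axis
])
```

The sample statistics of `dv` then feed directly into the dispersion of the achieved orbit elements.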
[Gene therapy for the treatment of inborn errors of metabolism].
Pérez-López, Jordi
2014-06-16
Due to the enzymatic defect in inborn errors of metabolism, there is a blockage in the metabolic pathways and an accumulation of toxic metabolites. Currently available therapies include dietary restriction, empowering of alternative metabolic pathways, and the replacement of the deficient enzyme by cell transplantation, liver transplantation or administration of the purified enzyme. Gene therapy, using the transfer in the body of the correct copy of the altered gene by a vector, is emerging as a promising treatment. However, the difficulty of vectors currently used to cross the blood brain barrier, the immune response, the cellular toxicity and potential oncogenesis are some limitations that could greatly limit its potential clinical application in human beings. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Impact of Orbit Position Errors on Future Satellite Gravity Models
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Ditmar, P.; Klees, R.
2015-12-01
We present the results of a study of the impact of orbit positioning noise (OPN) caused by incomplete knowledge of the Earth's gravity field on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into low-low satellite-to-satellite tracking (ll-SST) data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF) as they produce a different dominant orientation of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e. no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from the PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in the case of the PF. This means that any implementation of the CF will most likely produce data with relatively low quality since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
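The LoS projection at the heart of the ll-SST observable above is a one-line operation; a minimal sketch (generic vectors, not the simulated mission geometry):

```python
import numpy as np

def los_acceleration(r1, r2, a1, a2):
    """Differential acceleration of a satellite pair projected onto the
    line-of-sight (LoS) unit vector pointing from satellite 1 to 2."""
    e_los = (r2 - r1) / np.linalg.norm(r2 - r1)
    return float(np.dot(a2 - a1, e_los))
```

Any position noise that perturbs `r1` and `r2` changes `e_los`, and the resulting data error is scaled by the gravity gradient along that direction, which is why the formation geometry matters.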
Development of a two-dimensional dual pendulum thrust stand for Hall thrusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagao, N.; Yokota, S.; Komurasaki, K.
A two-dimensional dual pendulum thrust stand was developed to measure thrust vectors (axial and horizontal (transverse) direction thrusts) of a Hall thruster. A thruster with a steering mechanism is mounted on the inner pendulum, and thrust is measured from the displacement between inner and outer pendulums, by which a thermal drift effect is canceled out. Two crossover knife-edges support each pendulum arm: one is set on the other at a right angle. They enable the pendulums to swing in two directions. Thrust calibration using a pulley and weight system showed that the measurement errors were less than 0.25 mN (1.4%) in the main thrust direction and 0.09 mN (1.4%) in its transverse direction. The thrust angle of the thrust vector was measured with the stand using the thruster. Consequently, a vector deviation from the main thrust direction of ±2.3° was measured with an error of ±0.2° under the typical operating conditions for the thruster.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
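The basis-vector splitting step described above can be sketched compactly: state variables are grouped by k-means on their snapshot rows, and a basis vector is split into children with disjoint support, one per cluster. This is a hedged one-level sketch, not the paper's full recursive tree construction:

```python
import numpy as np
from sklearn.cluster import KMeans

def split_basis_vector(v, snapshots, k=2, seed=0):
    """Split one basis vector into k children with disjoint support by
    k-means clustering of the state variables' snapshot rows.
    snapshots: (n_state, n_snapshots); v: (n_state,)."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(snapshots)
    # Each child keeps v's entries on one cluster and zeros elsewhere,
    # so the children sum back to v and strictly enlarge the span
    return [np.where(labels == c, v, 0.0) for c in range(k)]
```

Because the children sum to the parent vector, replacing the parent with its children never shrinks the reduced space; fully refined, the basis recovers the full-order model.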
Gold, Peter O.; Cowgill, Eric; Kreylos, Oliver; Gold, Ryan D.
2012-01-01
Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.
NASA Astrophysics Data System (ADS)
Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.
2010-10-01
The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to be ranging from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
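The MCC core that the algorithm above builds on is an exhaustive search for the displacement maximizing normalized cross-correlation between an image block and a later image. A hedged sketch of that integer-pixel baseline (the paper's contribution, the continuous optimization that dampens the quantization noise, is not reproduced here):

```python
import numpy as np

def mcc_displacement(block, image):
    """Locate `block` inside `image` by exhaustive maximum normalized
    cross-correlation; returns the integer (row, col) offset and the
    peak correlation value."""
    bh, bw = block.shape
    b = (block - block.mean()) / block.std()
    H, W = image.shape
    best, best_r = (0, 0), -np.inf
    for i in range(H - bh + 1):
        for j in range(W - bw + 1):
            w = image[i:i + bh, j:j + bw]
            ws = w.std()
            if ws == 0.0:
                continue                       # skip flat windows
            r = float(np.mean(b * (w - w.mean()) / ws))
            if r > best_r:
                best, best_r = (i, j), r
    return best, best_r
```

Because the returned offset is quantized to whole pixels, short time spans at 10-15 km resolution yield the coarse, noisy vectors that motivate the continuous optimization step.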
Gibbs, P E; Kilbey, B J; Banerjee, S K; Lawrence, C W
1993-05-01
We have compared the mutagenic properties of a T-T cyclobutane dimer in baker's yeast, Saccharomyces cerevisiae, with those in Escherichia coli by transforming each of these species with the same single-stranded shuttle vector carrying either the cis-syn or the trans-syn isomer of this UV photoproduct at a unique site. The mutagenic properties investigated were the frequency of replicational bypass of the photoproduct, the error rate of bypass, and the mutation spectrum. In SOS-induced E. coli, the cis-syn dimer was bypassed in approximately 16% of the vector molecules, and 7.6% of the bypass products had targeted mutations. In S. cerevisiae, however, bypass occurred in about 80% of these molecules, and the bypass was at least 19-fold more accurate (approximately 0.4% targeted mutations). Each of these yeast mutations was a single unique event, and none were like those in E. coli, suggesting that in fact the difference in error rate is much greater. Bypass of the trans-syn dimer occurred in about 17% of the vector molecules in both species, but with this isomer the error rate was higher in S. cerevisiae (21 to 36% targeted mutations) than in E. coli (13%). However, the spectra of mutations induced by the latter photoproduct were virtually identical in the two organisms. We conclude that bypass and error frequencies are determined both by the structure of the photoproduct-containing template and by the particular replication proteins concerned but that the types of mutations induced depend predominantly on the structure of the template. Unlike E. coli, bypass in S. cerevisiae did not require UV-induced functions.
Support of Mark III Optical Interferometer
1988-11-01
error, and low visibility* pedestal, and the surface of a zerodur sphere attached to the mirror errors are not entirely consistent. as shown in Fig. 7...of’ stellar usually associated with the primary mirror of a large astronomical interferometers at Mt. Wilson Observatory. The first instrument...the two siderostats is directed toward the central building by fixed mirrors . These fixed mirrors are necessary to keep the polarization - vectors
Defense Mapping Agency (DMA) Raster-to-Vector Analysis
1984-11-30
model) to pinpoint critical deficiencies and understand trade-offs between alternative solutions. This may be exemplified by the allocation of human ...process, prone to errors (i.e., human operator eye/motor control limitations), and its time consuming nature (as a function of data density). It should...achieved through the facilities of coinputer interactive graphics. Each error or anomaly is individually identified by a human operator and corrected
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if these mathematical "boundaries" are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise due to procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps are followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to successfully avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable.
We also recommend employing tidal ellipse parameter calculation methods that avoid the use of Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used conversion can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
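The Cartesian-coordinate "vector embodiment" fix described above can be sketched in a few lines. This is an illustrative example, not the authors' code: the angular quantity (an ellipse phase, or an inclination with a 180° period) is converted to cosine/sine components, those components are interpolated, and the angle is recovered with arctan2, which avoids the spurious jump at the 359°/0° boundary.

```python
import numpy as np

def interp_angle_deg(x, xp, ang_deg, period=360.0):
    """Interpolate an angular quantity safely across its wrap-around
    (e.g. 359 deg -> 1 deg) by interpolating its Cartesian embodiment
    (cos/sin components) and converting back to an angle.
    Use period=180.0 for ellipse inclinations."""
    theta = np.deg2rad(np.asarray(ang_deg, dtype=float)) * (360.0 / period)
    c = np.interp(x, xp, np.cos(theta))   # interpolate the components,
    s = np.interp(x, xp, np.sin(theta))   # never the raw angles
    return (np.rad2deg(np.arctan2(s, c)) * (period / 360.0)) % period

# Midway between 359 deg and 1 deg: naive interpolation gives 180 deg,
# while the Cartesian route gives the physically smooth ~0 deg.
naive = np.interp(0.5, [0.0, 1.0], [359.0, 1.0])
safe = interp_angle_deg(0.5, [0.0, 1.0], [359.0, 1.0])
```

The same conversion/reconversion sequence applies unchanged to gridded ellipse parameter fields; only the interpolator differs.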
CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
Masking of errors in transmission of VAPC-coded speech
NASA Technical Reports Server (NTRS)
Cox, Neil B.; Froese, Edwin L.
1990-01-01
A subjective evaluation is provided of the bit error sensitivity of the message elements of a Vector Adaptive Predictive (VAPC) speech coder, along with an indication of the amenability of these elements to a popular error masking strategy (cross frame hold over). As expected, a wide range of bit error sensitivity was observed. The most sensitive message components were the short term spectral information and the most significant bits of the pitch and gain indices. The cross frame hold over strategy was found to be useful for pitch and gain information, but it was not beneficial for the spectral information unless severe corruption had occurred.
Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution
NASA Astrophysics Data System (ADS)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-01
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims for correcting orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular component as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular base-line and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
Adaptive Identification of Fluid-Dynamic Systems
2001-06-14
Fig. 1 shows the modeling of a SISO system: an adaptive filter is driven by the same input u as the unknown system, the filter output y is subtracted from the desired output d, and the resulting error e is fed back to adapt the filter. The cost function is J = E[e²(n)] (Eq. 12), where E[·] is the expectation operator and e(n) = d(n) − y(n) is the error between the desired system output and the filter output. The input vector is U(n) = [u(n), u(n−1), …, u(n−N+1)]ᵀ, from which the output y(n) and the error e(n) are computed.
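The SISO identification setup in the excerpt, minimizing J = E[e²(n)] over the input vector U(n), is classically solved with the LMS stochastic-gradient update. The following is a textbook sketch under assumed data (the 3-tap system h is invented for illustration), not code from the report:

```python
import numpy as np

def lms_identify(u, d, N, mu=0.05):
    """LMS adaptive FIR identification: adapt weights w so the filter
    output y(n) = w^T U(n) tracks the desired output d(n), descending
    the squared error e(n) = d(n) - y(n)."""
    w = np.zeros(N)
    for n in range(N - 1, len(u)):
        U = u[n - N + 1:n + 1][::-1]   # U(n) = [u(n), ..., u(n-N+1)]^T
        e = d[n] - w @ U               # error vs. desired output
        w = w + mu * e * U             # stochastic gradient step
    return w

# Identify an unknown 3-tap system from its input/output data (noiseless).
rng = np.random.default_rng(1)
u = rng.standard_normal(5000)
h = np.array([0.7, -0.3, 0.2])         # hypothetical unknown system
d = np.convolve(u, h)[:len(u)]
w = lms_identify(u, d, N=3)            # w converges toward h
```

With noiseless data and white input, the weights converge geometrically to the unknown system's taps.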
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
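The iterative filter-and-select procedure can be illustrated over GF(2), where vectors are bit-strings and vector addition is XOR. This sketch is not the patented implementation: it enforces the distance-4 requirement that every 3 columns be linearly independent by filtering out any candidate equal to the XOR of 1 or 2 already-chosen columns; selection simply takes the smallest survivor, and the stopping rule is simplified to "no candidates left".

```python
from itertools import combinations

def populate_check_matrix(r, n, d=4):
    """Greedily populate n columns of an r-row binary check matrix so
    that every (d-1) columns are linearly independent, as required of a
    distance-d code.  Columns are integers read as r-bit GF(2) vectors."""
    candidates = set(range(1, 2 ** r))       # all nonzero r-bit vectors
    cols = []
    while len(cols) < n:
        # Filter step: remove every vector equal to the XOR (GF(2) sum)
        # of up to d-2 already-populated columns; keeping such a vector
        # would create a dependent set of <= d-1 columns.
        bad = set()
        for k in range(1, d - 1):
            for combo in combinations(cols, k):
                s = 0
                for c in combo:
                    s ^= c
                bad.add(s)
        candidates -= bad
        if not candidates:
            return None                      # cannot populate all columns
        cols.append(min(candidates))         # select step
    return cols

cols = populate_check_matrix(4, 8)           # 4 check bits, 8 columns
```

For r = 4 the greedy filter fills all 8 columns, matching the extended-Hamming SEC-DED bound; asking for a 9th column exhausts the candidate set.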
Design, decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-06-17
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H Lee; Ganti, Anand; Resnick, David R
2014-11-18
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
Robust support vector regression networks for function approximation with outliers.
Chuang, Chen-Chia; Su, Shun-Feng; Jeng, Jin-Tsong; Hsiao, Chih-Ching
2002-01-01
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation. SVR has been shown to have good robustness against noise. When the parameters used in SVR are improperly selected, however, overfitting may still occur, and the selection of the various parameters is not straightforward. Moreover, in SVR, outliers may also be taken as support vectors, and such an inclusion of outliers among the support vectors may lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In the approach, traditional robust learning approaches are employed to improve the learning performance for any selected parameters. In the simulations, RSVR always improved the performance of the learned systems, in all cases. Moreover, even when training lasted for a long period, the testing errors did not go up; in other words, the overfitting phenomenon is indeed suppressed.
Zhou, Wen; Li, Xinying; Yu, Jianjun
2017-10-30
We propose QPSK millimeter-wave (mm-wave) vector signal generation in D-band based on balanced-precoding-assisted photonic frequency-quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced pre-coded 37-GHz QPSK RF signal. The modulated optical subcarriers are sent directly into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER as low as 1.448 × 10⁻³ can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupled vector mm-wave signal generation in D-band based on only one MZM without an optical filter.
ERIC Educational Resources Information Center
Luo, Jingyi; Sorour, Shaymaa E.; Goda, Kazumasa; Mine, Tsunenori
2015-01-01
Continuously tracking students during a whole semester plays a vital role to enable a teacher to grasp their learning situation, attitude and motivation. It also helps to give correct assessment and useful feedback to them. To this end, we ask students to write their comments just after each lesson, because student comments reflect their learning…
Sc–Zr–Nb–Rh–Pd and Sc–Zr–Nb–Ta–Rh–Pd High-Entropy Alloy Superconductors on a CsCl-Type Lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolze, Karoline; Tao, Jing; von Rohr, Fabian O.
We have synthesized previously unreported high-entropy alloys (HEAs) in the pentanary (ScZrNb)1−x[RhPd]x and hexanary (ScZrNbTa)1−x[RhPd]x systems. The materials have CsCl-type structures and mixed site occupancies. Both HEAs are type-II superconductors with strongly varying critical temperatures (Tc's) depending on the valence electron count (VEC); the Tc's increase monotonically with decreasing VEC within each series, and do not follow the trends seen for either crystalline or amorphous transition metal superconductors. The (ScZrNb)0.65[RhPd]0.35 HEA with the highest Tc, ~9.3 K, also exhibits the largest µ0Hc2(0) = 10.7 T. The pentanary and hexanary HEAs have higher superconducting transition temperatures than their simple binary intermetallic relatives with the CsCl-type structure, and a surprisingly ductile mechanical behavior. The presence of niobium, even at the 20% level, has a positive impact on the Tc. Nevertheless, niobium-free (ScZr)0.50[RhPd]0.50, the parent compound of both superconducting HEAs found here, is itself superconducting, proving that superconductivity is an intrinsic feature of the bulk material.
Electronic structure and the origin of the Dzyaloshinskii-Moriya interaction in MnSi
Satpathy, S.; Shanavas, K. V.
2016-05-02
Here, the metallic helimagnet MnSi has been found to exhibit skyrmionic spin textures when subjected to magnetic fields at low temperatures. The Dzyaloshinskii-Moriya (DM) interaction plays a key role in stabilizing the skyrmion state. With the help of first-principles calculations, crystal field theory and a tight-binding model, we study the electronic structure and the origin of the DM interaction in the B20 phase of MnSi. The strength of the $\vec{D}$ parameter is determined by the magnitude of the spin-orbit interaction and the degree of orbital mixing, induced by the symmetry-breaking distortions in the B20 phase. We find that strong coupling between Mn-$d$ and Si-$p$ states leads to a mixed-valence ground state $|d^{7-x}p^{2+x}\rangle$ configuration. The experimental magnetic moment of $0.4~\mu_B$ is consistent with the Coulomb-corrected DFT+$U$ calculations, which redistribute electrons between the majority and minority spin channels. We derive the magnetic interaction parameters $J$ and $\vec{D}$ for Mn-Si-Mn superexchange paths using Moriya's theory, assuming the interaction to be mediated by $e_g$ electrons near the Fermi level. Finally, using parameters from our calculations, we get reasonable agreement with the observations.
Sc–Zr–Nb–Rh–Pd and Sc–Zr–Nb–Ta–Rh–Pd High-Entropy Alloy Superconductors on a CsCl-Type Lattice
Stolze, Karoline; Tao, Jing; von Rohr, Fabian O.; ...
2018-01-17
We have synthesized previously unreported high-entropy alloys (HEAs) in the pentanary (ScZrNb)1−x[RhPd]x and hexanary (ScZrNbTa)1−x[RhPd]x systems. The materials have CsCl-type structures and mixed site occupancies. Both HEAs are type-II superconductors with strongly varying critical temperatures (Tc's) depending on the valence electron count (VEC); the Tc's increase monotonically with decreasing VEC within each series, and do not follow the trends seen for either crystalline or amorphous transition metal superconductors. The (ScZrNb)0.65[RhPd]0.35 HEA with the highest Tc, ~9.3 K, also exhibits the largest µ0Hc2(0) = 10.7 T. The pentanary and hexanary HEAs have higher superconducting transition temperatures than their simple binary intermetallic relatives with the CsCl-type structure, and a surprisingly ductile mechanical behavior. The presence of niobium, even at the 20% level, has a positive impact on the Tc. Nevertheless, niobium-free (ScZr)0.50[RhPd]0.50, the parent compound of both superconducting HEAs found here, is itself superconducting, proving that superconductivity is an intrinsic feature of the bulk material.
NASA Astrophysics Data System (ADS)
Aruna, S. A.; Zhang, P.; Lin, F. Y.; Ding, S. Y.; Yao, X. X.
2000-04-01
Within the framework of the thermally activated motion of flux lines or flux line bundles, and by time integration of the 1D equation of motion of the circulating current density J(ρ, t), which is suitable for thin superconducting films (R ≫ d, d ≤ λ), we present numerical calculations of the current profiles, magnetization hysteresis loops and ac susceptibility χn = χ′n + iχ″n for n = 1, 3 and 5 of a thin disc immersed in an axial time-dependent external magnetic field Ba(t) = Bdc + Bac cos(2πνt). Our calculated results are compared with those of the critical state model (CSM) and found to prove the approximate validity of the CSM below the irreversibility field. The differences between our computed results and those of the CSM are also discussed.
Nucleon form factors from quenched lattice QCD with domain wall fermions
NASA Astrophysics Data System (ADS)
Sasaki, Shoichi; Yamazaki, Takeshi
2008-07-01
We present a quenched lattice calculation of the weak nucleon form factors: the vector [FV(q²)], induced tensor [FT(q²)], axial vector [FA(q²)] and induced pseudoscalar [FP(q²)] form factors. Our simulations are performed on three different lattice sizes, L³×T = 24³×32, 16³×32, and 12³×32, with a lattice cutoff of a⁻¹ ≈ 1.3 GeV and light quark masses down to about 1/4 the strange quark mass (mπ ≈ 390 MeV), using a combination of the DBW2 gauge action and domain wall fermions. The physical volume of our largest lattice is about (3.6 fm)³, where the finite volume effects on form factors become negligible and lower momentum transfers (q² ≈ 0.1 GeV²) are accessible. The q² dependences of the form factors in the low-q² region are examined. It is found that the vector, induced tensor, and axial-vector form factors are well described by the dipole form, while the induced pseudoscalar form factor is consistent with pion-pole dominance. We obtain the ratio of axial to vector coupling gA/gV = FA(0)/FV(0) = 1.219(38) and the pseudoscalar coupling gP = mμFP(0.88 mμ²) = 8.15(54), where the errors are statistical only. These values agree with experimental values from neutron β decay and muon capture on the proton. However, the root-mean-squared radii of the vector, induced tensor, and axial vector underestimate the known experimental values by about 20%. We also calculate the pseudoscalar nucleon matrix element in order to verify the axial Ward-Takahashi identity in terms of nucleon matrix elements, which may be called the generalized Goldberger-Treiman relation.
Optimal four-impulse rendezvous between coplanar elliptical orbits
NASA Astrophysics Data System (ADS)
Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun
2011-04-01
Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in arbitrary-eccentricity elliptical orbits is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization problems of arbitrary-eccentricity elliptical orbit rendezvous. Based on linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization.
The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, when initialized with the primer vector solution, the local optimization algorithm improves the rendezvous accuracy effectively and converges fast, because the optimal results obtained by primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.
Characterization of a 300-GHz Transmission System for Digital Communications
NASA Astrophysics Data System (ADS)
Hudlička, Martin; Salhi, Mohammed; Kleine-Ostmann, Thomas; Schrader, Thorsten
2017-08-01
The paper presents the characterization of a 300-GHz transmission system for modern digital communications. The quality of the modulated signal at the output of the system, the error vector magnitude (EVM), is measured using a vector signal analyzer. A method using a digital real-time oscilloscope and subsequent mathematical processing in a computer is shown for the analysis of signals with bandwidths exceeding those of state-of-the-art vector signal analyzers. The uncertainty of the EVM measured using the real-time oscilloscope is analyzed. The behaviour of the 300-GHz transmission system is studied with respect to various modulation schemes and different signal symbol rates.
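EVM, the quality metric used above, is conventionally the RMS length of the error vectors between measured and ideal constellation points, normalized to the RMS reference power (one of several normalizations in use; this sketch is illustrative, not the paper's processing chain):

```python
import numpy as np

def evm_percent(measured, reference):
    """Error vector magnitude in percent: RMS error-vector power
    normalized by RMS reference power."""
    measured = np.asarray(measured, dtype=complex)
    reference = np.asarray(reference, dtype=complex)
    err = measured - reference           # the error vectors
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2)
                           / np.mean(np.abs(reference) ** 2))

# QPSK reference constellation; a 10% radial gain error gives 10% EVM.
ref = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
```

Some standards normalize instead by the peak constellation power, so the chosen convention should always be reported alongside the number.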
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
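As an illustration of one of the five methods, minimal polynomial extrapolation (MPE) can be written in a few lines of linear algebra: form the difference vectors u_j = x_{j+1} − x_j, solve a least-squares system for the coefficients of the minimal polynomial, and take the corresponding weighted combination of the iterates. A minimal sketch, not the paper's formulation verbatim:

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation.  X holds the vector sequence
    x_0 .. x_{k+1} as columns; returns the extrapolated limit."""
    U = np.diff(X, axis=1)                    # u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)                     # c_k = 1 by convention
    gamma = c / c.sum()                       # normalized weights
    return X[:, :k + 1] @ gamma

# Fixed-point iteration x_{j+1} = A x_j + b: MPE recovers the limit
# (I - A)^{-1} b exactly once k reaches the minimal-polynomial degree.
A = np.array([[0.5, 0.2], [0.1, 0.3]])
b = np.array([1.0, 1.0])
X = np.zeros((2, 4))
for j in range(3):
    X[:, j + 1] = A @ X[:, j] + b
limit = mpe(X)
```

For nonlinear sequences, the paper's cycling strategy restarts this procedure from the extrapolated point.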
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of information reconciliation (IR) for continuous-variable quantum key distribution (QKD). The main problem is the quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective signal-to-noise ratio (SNR), exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional-bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use permutation modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in reverse reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward error correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an unequal error protection (UEP) coding paradigm.
Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing
Matochko, Wadim L.; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||ni||, where ni is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa IN, where IN is the N × N identity matrix. Any bias in sequencing changes IN to a non-identity matrix. We identified a diagonal censorship matrix (CEN), which describes the elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
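The operator framework above is easy to exercise numerically. In this toy sketch (invented numbers, N = 6; not the paper's data), the sampling operator Sa is a random diagonal matrix built by binomial thinning of each clone, and a censorship matrix CEN zeroes a read the instrument systematically drops:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy library: copy numbers n_i for N = 6 possible sequences.
n = np.array([100.0, 40.0, 0.0, 7.0, 3.0, 0.0])
N = n.size

# Stochastic sampling operator S_a: random diagonal matrix whose entries
# are the sampled fraction of each clone (binomial thinning, 50% depth).
sampled = rng.binomial(n.astype(int), 0.5)
S_a = np.diag(sampled / np.maximum(n, 1.0))

# Unbiased sequencing would be Seq = S_a @ I_N; a diagonal censorship
# matrix CEN eliminates specific reads (here, the sequence at index 3).
CEN = np.diag([1.0, 1.0, 1.0, 0.0, 1.0, 1.0])
observed = CEN @ S_a @ n     # observed read counts
```

Because both operators are diagonal, censorship is visible as exact zeros in the observed vector regardless of the clone's true abundance.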
Adams, C N; Kattawar, G W
1993-08-20
We have developed a Monte Carlo program that is capable of calculating both the scalar and the Stokes vector radiances in an atmosphere-ocean system in a single computer run. The correlated sampling technique is used to compute radiance distributions for both the scalar and the Stokes vector formulations simultaneously, thus permitting a direct comparison of the errors induced. We show the effect of the volume-scattering phase function on the errors in radiance calculations when one neglects polarization effects. The model used in this study assumes a conservative Rayleigh-scattering atmosphere above a flat ocean. Within the ocean, the volume-scattering function (the first element in the Mueller matrix) is varied according to both a Henyey-Greenstein phase function, with asymmetry factors G = 0.0, 0.5, and 0.9, and also to a Rayleigh-scattering phase function. The remainder of the reduced Mueller matrix for the ocean is taken to be that for Rayleigh scattering, which is consistent with ocean water measurement.
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.
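The four assessment criteria named above are standard and simple to compute. A minimal sketch follows; DA is given one common definition (sign agreement of successive changes), since conventions vary:

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(y - yhat)))

def mape(y, yhat):
    """Mean absolute percentage error (actual values must be nonzero)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)

def directional_accuracy(y, yhat):
    """Fraction of steps where forecast and actual move the same way."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.sign(np.diff(y)) == np.sign(np.diff(yhat))))
```

RMSE penalizes large misses more heavily than MAE, while DA ignores magnitudes entirely; reporting all four, as the paper does, guards against a model that scores well on one criterion only.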
Evaluation of the navigation performance of shipboard-VTOL-landing guidance systems
NASA Technical Reports Server (NTRS)
Mcgee, L. A.; Paulk, C. H., Jr.; Steck, S. A.; Schmidt, S. F.; Merz, A. W.
1979-01-01
The objective of this study was to explore the performance of a VTOL aircraft landing approach navigation system that receives data (1) from either a microwave scanning beam (MSB) or a radar-transponder (R-T) landing guidance system, and (2) information data-linked from an aviation facility ship. State-of-the-art low-cost-aided inertial techniques and variable gain filters were used in the assumed navigation system. Compensation for ship motion was accomplished by a landing pad deviation vector concept that is a measure of the landing pad's deviation from its calm sea location. The results show that the landing guidance concepts were successful in meeting all of the current Navy navigation error specifications, provided that vector magnitude of the allowable error, rather than the error in each axis, is a permissible interpretation of acceptable performance. The success of these concepts, however, is strongly dependent on the distance measuring equipment bias. In addition, the 'best possible' closed-loop tracking performance achievable with the assumed point-mass VTOL aircraft guidance concept is demonstrated.
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
Heavy and Light Quarks with Lattice Chiral Fermions
NASA Astrophysics Data System (ADS)
Liu, K. F.; Dong, S. J.
The feasibility of using lattice chiral fermions which are free of O(a) errors for both the heavy and light quarks is examined. The fact that the effective quark propagators in these fermions have the same form as that in the continuum with the quark mass being only an additive parameter to a chirally symmetric anti-Hermitian Dirac operator is highlighted. This implies that there is no distinction between the heavy and light quarks and no mass dependent tuning of the action or operators as long as the discretization error O(m2a2) is negligible. Using the overlap fermion, we find that the O(m2a2) (and O(ma2)) errors in the dispersion relations of the pseudoscalar and vector mesons and the renormalization of the axial-vector current and scalar density are small. This suggests that the applicable range of ma may be extended to ~0.56 with only 5% error, which is a factor of ~2.4 larger than the corresponding range of the improved Wilson action. We show that the generalized Gell-Mann-Oakes-Renner relation with unequal masses can be utilized to determine the finite ma corrections in the renormalization of the matrix elements for the heavy-light decay constants and semileptonic decay constants of the B/D meson.
Conical Probe Calibration and Wind Tunnel Data Analysis of the Channeled Centerbody Inlet Experiment
NASA Technical Reports Server (NTRS)
Truong, Samson Siu
2011-01-01
For a multi-hole test probe undergoing wind tunnel tests, the resulting data need to be analyzed for significant trends. These trends include relating the pressure distributions, the geometric orientation, and the local velocity vector to one another. However, experimental runs always involve some sort of error. As a result, a calibration procedure is required to compensate for this error. In this case, the error consists of the misalignment bias angles resulting from the distortion associated with the angularity of the test probe or the local velocity vector. Through the series of calibration steps presented here, the angular biases are determined and removed from the data sets. By removing the misalignment, smoother pressure distributions contribute to more accurate experimental results, which in turn can be compared to theoretical and actual in-flight results to identify any similarities. Error analyses are also performed to verify the accuracy of the calibration error reduction. The resulting calibrated data will be implemented in an in-flight RTF script that will output critical flight parameters during future CCIE experimental test runs. All of these tasks are associated with, and contribute to, NASA Dryden Flight Research Center's F-15B Research Testbed's Small Business Innovation Research project on the Channeled Centerbody Inlet Experiment.
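The bias-removal idea can be condensed to a minimal sketch: estimate a constant angular offset against a trusted reference and subtract it. This is a generic illustration with assumed numbers, not the actual CCIE calibration procedure, which also handles angle-dependent distortion:

```python
import statistics

def estimate_misalignment_bias(measured, reference):
    """Estimate a constant angular misalignment bias as the mean residual
    between probe readings and a trusted reference, then remove it."""
    bias = statistics.mean(m - r for m, r in zip(measured, reference))
    corrected = [m - bias for m in measured]
    return bias, corrected

reference = [0.0, 2.0, 4.0, 6.0]   # assumed true flow angles (deg)
measured  = [0.5, 2.4, 4.6, 6.5]   # readings carrying a ~0.5 deg bias
bias, corrected = estimate_misalignment_bias(measured, reference)
```

After the subtraction, the mean residual against the reference is zero by construction, mirroring the smoother distributions the abstract describes.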
Mutation-adapted U1 snRNA corrects a splicing error of the dopa decarboxylase gene.
Lee, Ni-Chung; Lee, Yu-May; Chen, Pin-Wen; Byrne, Barry J; Hwu, Wuh-Liang
2016-12-01
Aromatic l-amino acid decarboxylase (AADC) deficiency is an inborn error of monoamine neurotransmitter synthesis, which results in dopamine, serotonin, epinephrine and norepinephrine deficiencies. The DDC gene founder mutation IVS6 + 4A > T is highly prevalent in Chinese patients with AADC deficiency. In this study, we designed several U1 snRNA vectors to adapt U1 snRNA binding sequences of the mutated DDC gene. We found that only the modified U1 snRNA (IVS-AAA) that completely matched both the intronic and exonic U1 binding sequences of the mutated DDC gene could correct splicing errors of either the mutated human DDC minigene or the mouse artificial splicing construct in vitro. We further injected an adeno-associated viral (AAV) vector to express IVS-AAA in the brain of a knock-in mouse model. This treatment was well tolerated and improved both the survival and brain dopamine and serotonin levels of mice with AADC deficiency. Therefore, mutation-adapted U1 snRNA gene therapy can be a promising method to treat genetic diseases caused by splicing errors, but the efficiency of such a treatment still needs improvements.
NASA Astrophysics Data System (ADS)
Pang, Hongfeng; Zhu, XueJun; Pan, Mengchun; Zhang, Qi; Wan, Chengbiao; Luo, Shitu; Chen, Dixiang; Chen, Jinfei; Li, Ji; Lv, Yunxiao
2016-12-01
Misalignment error is a key factor limiting the measurement accuracy of a geomagnetic vector measurement system, and calibrating it is difficult because the sensors measure different physical quantities and their coordinate frames are not directly observable. A new misalignment calibration method based on rotating a parallelepiped frame is proposed. Simulation and experimental results show the effectiveness of the calibration method. The experimental system mainly consists of a DM-050 three-axis fluxgate magnetometer, an INS (inertial navigation system), an aluminium parallelepiped frame, and an aluminium plane base. The misalignment angles are calculated from the magnetometer and INS data measured while rotating the aluminium parallelepiped frame on the aluminium plane base. After calibration, the RMS errors of the geomagnetic north, vertical, and east components are reduced from 349.441 nT, 392.530 nT, and 562.316 nT to 40.130 nT, 91.586 nT, and 141.989 nT, respectively.
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
NASA Astrophysics Data System (ADS)
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and that more weight should be given to the classes with greater importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
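A minimal sketch of the C-variable idea on toy data, using the standard LSSVM linear system with a per-class diagonal term 1/C_i and an RBF kernel. The kernel choice, parameter values, and data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def train_cvlssvc(X, y, C_pos, C_neg, gamma=0.5):
    """Least-squares SVM classifier whose regularization parameter C
    differs per class: a larger C penalizes errors on the more important
    class more heavily. Solves the LSSVM KKT system
    [[0, 1^T], [1, K + diag(1/C_i)]] [b; a] = [0; y]."""
    n = len(y)
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    Cinv = np.where(y > 0, 1.0 / C_pos, 1.0 / C_neg)  # per-class penalty
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(Cinv)
    sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
    b, a = sol[0], sol[1:]
    def predict(Xnew):
        Kn = np.exp(-gamma * ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return np.sign(Kn @ a + b)
    return predict

# Toy two-class data; the positive ("important") class gets the larger C.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1, -1, 1, 1])
predict = train_cvlssvc(X, y, C_pos=100.0, C_neg=10.0)
```

Because the only change relative to plain LSSVC is the per-instance diagonal entry, the modification is as cheap as the abstract suggests.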
Direct discretization of planar div-curl problems
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.
1989-01-01
A control volume method is proposed for planar div-curl systems. The method is independent of potential and least squares formulations, and works directly with the div-curl system. The novelty of the technique lies in its use of a single local vector field component and two control volumes rather than the other way around. A discrete vector field theory comes quite naturally from this idea and is developed. Error estimates are proved for the method, and other ramifications investigated.
Bobrova, E V; Bogacheva, I N; Lyakhovetskii, V A; Fabinskaja, A A; Fomina, E V
2017-01-01
In order to test the hypothesis of hemisphere specialization for different types of information coding (the right hemisphere, for positional coding; the left one, for vector coding), we analyzed the errors of right- and left-handers during a task involving the memorization of sequences of movements by the left or the right hand, which activates vector coding by changing the order of movements in memorized sequences. The task was first performed by the right or the left hand, then by the opposite hand. It was found that both right- and left-handers use the information about the previous movements of the dominant hand, but not of the non-dominant one. After changing the hand, right-handers use the information about previous movements of the second hand, while left-handers do not. We compared our results with the data of previous experiments, in which positional coding was activated, and concluded that both right- and left-handers use vector coding for memorizing the sequences of their dominant hands and positional coding for memorizing the sequences of the non-dominant hand. No similar patterns of errors were found between right- and left-handers after changing the hand, which suggests that in right- and left-handers the skills are transferred in different ways depending on the type of coding.
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are incorporated directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed using the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) universality: the IRTSF for magnetometer-only attitude and angular velocity estimation is observable for any initial state estimation error vector.
Angular motion estimation using dynamic models in a gyro-free inertial measurement unit.
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and obtain an estimate of the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. An observability analysis is performed to determine the conditions for having an observable state-space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one has bias parameters appended in the state-space model, and the other is a reduced model without bias parameters.
Angular Motion Estimation Using Dynamic Models in a Gyro-Free Inertial Measurement Unit
Edwan, Ezzaldeen; Knedlik, Stefan; Loffeld, Otmar
2012-01-01
In this paper, we summarize the results of using dynamic models borrowed from tracking theory to describe the time evolution of the state vector and obtain an estimate of the angular motion in a gyro-free inertial measurement unit (GF-IMU). The GF-IMU is a special type of inertial measurement unit (IMU) that uses only a set of accelerometers to infer the angular motion. Using distributed accelerometers, we obtain an angular information vector (AIV) composed of angular acceleration and quadratic angular velocity terms. We use a Kalman filter approach to estimate the angular velocity vector, since it is not expressed explicitly within the AIV. The bias parameters inherent in the accelerometer measurements produce a biased AIV, and hence the AIV bias parameters are estimated within an augmented state vector. Using dynamic models, the appended bias parameters of the AIV become observable, and hence we can obtain an unbiased angular motion estimate. Moreover, a good model is required to extract the maximum amount of information from the observations. An observability analysis is performed to determine the conditions for having an observable state-space model. For higher grades of accelerometers and under relatively high sampling frequencies, the error of the accelerometer measurements is dominated by noise. Consequently, simulations are conducted on two models: one has bias parameters appended in the state-space model, and the other is a reduced model without bias parameters. PMID:22778586
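As a toy illustration of the Kalman filtering step described above, the sketch below estimates a single angular rate, modeled as a random walk, from noisy direct measurements. The real GF-IMU estimator uses richer dynamic models with appended bias states; all values here are assumptions:

```python
import random

def kalman_angular_rate(measurements, q=1e-4, r=0.04):
    """Minimal scalar Kalman filter: angular rate modeled as a random
    walk (process noise variance q), observed directly with measurement
    noise variance r."""
    x, p = 0.0, 1.0                  # initial state estimate and variance
    for z in measurements:
        p += q                       # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the innovation
        p *= (1 - k)
    return x

random.seed(0)
true_rate = 0.3                      # rad/s, constant in this toy example
zs = [true_rate + random.gauss(0.0, 0.2) for _ in range(500)]
estimate = kalman_angular_rate(zs)
```

The steady-state gain implicitly averages over recent measurements, so the estimate settles close to the true rate despite the heavy per-sample noise.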
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length.
Error analysis of 3D-PTV through unsteady interfaces
NASA Astrophysics Data System (ADS)
Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier
2018-03-01
The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). 
The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population–based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all 6 points for all scans over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins were calculated, using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacements for all anatomical points was 5.07mm, with mean systematic error of 1.1mm and mean random setup error of 2.63mm, while bootstrapped POIs grand mean displacement was 5.09mm, with mean systematic error of 1.23mm and mean random setup error of 2.61mm. Required margin for CTV-PTV expansion was 4.6mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9mm. The calculated OAR-to-PRV expansion for the observed residual set-up error was 2.7mm, and bootstrap estimated expansion of 2.9mm. We conclude that the interfractional larynx setup error is a significant source of RT set-up/delivery error in HNC both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5mm to compensate for set up error if the larynx is a target or 3mm if the larynx is an OAR when using a non-laryngeal bony isocenter. PMID:25679151
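The reported margins are numerically consistent with the standard population margin recipes (the van Herk CTV-to-PTV formula M = 2.5Σ + 0.7σ and a McKenzie-style OAR-to-PRV formula M = 1.3Σ + 0.5σ); this is our inference from the numbers, as the abstract does not name the recipe. A quick check:

```python
def ctv_to_ptv_margin(sigma_sys, sigma_rand):
    """van Herk population margin recipe: M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

def oar_to_prv_margin(sigma_sys, sigma_rand):
    """McKenzie-style OAR-to-PRV recipe: M = 1.3*Sigma + 0.5*sigma."""
    return 1.3 * sigma_sys + 0.5 * sigma_rand

# Cohort values from the abstract: Sigma = 1.1 mm, sigma = 2.63 mm
print(round(ctv_to_ptv_margin(1.1, 2.63), 1))   # 4.6 mm
print(round(oar_to_prv_margin(1.1, 2.63), 1))   # 2.7 mm
# Bootstrap values: Sigma = 1.23 mm, sigma = 2.61 mm
print(round(ctv_to_ptv_margin(1.23, 2.61), 1))  # 4.9 mm
print(round(oar_to_prv_margin(1.23, 2.61), 1))  # 2.9 mm
```

All four reported margins (4.6, 2.7, 4.9, and 2.9 mm) are reproduced exactly by these formulas.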
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S.R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R.; Kocak‐Uzel, Esengul
2014-01-01
Larynx may alternatively serve as a target or organs at risk (OAR) in head and neck cancer (HNC) image‐guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population‐based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT on‐rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior‐anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other six points were calculated postisocentric alignment. Subsequently, using the first scan as a reference, the magnitude of vector differences for all six points for all scans over the course of treatment was calculated. Residual systematic and random error and the necessary compensatory CTV‐to‐PTV and OAR‐to‐PRV margins were calculated, using both observational cohort data and a bootstrap‐resampled population estimator. The grand mean displacements for all anatomical points was 5.07 mm, with mean systematic error of 1.1 mm and mean random setup error of 2.63 mm, while bootstrapped POIs grand mean displacement was 5.09 mm, with mean systematic error of 1.23 mm and mean random setup error of 2.61 mm. Required margin for CTV‐PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR‐to‐PRV expansion for the observed residual setup error was 2.7 mm and bootstrap estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a nonlaryngeal bony isocenter. PACS numbers: 87.55.D‐, 87.55.Qr
Martella, Andrea; Matjusaitis, Mantas; Auxillos, Jamie; Pollard, Steven M; Cai, Yizhi
2017-07-21
Mammalian plasmid expression vectors are critical reagents underpinning many facets of research across biology, biomedical research, and the biotechnology industry. Traditional cloning methods often require laborious manual design and assembly of plasmids using tailored sequential cloning steps. This process can be protracted, complicated, expensive, and error-prone. New tools and strategies that facilitate the efficient design and production of bespoke vectors would help relieve a current bottleneck for researchers. To address this, we have developed an extensible mammalian modular assembly kit (EMMA). This enables rapid and efficient modular assembly of mammalian expression vectors in a one-tube, one-step golden-gate cloning reaction, using a standardized library of compatible genetic parts. The high modularity, flexibility, and extensibility of EMMA provide a simple method for the production of functionally diverse mammalian expression vectors. We demonstrate the value of this toolkit by constructing and validating a range of representative vectors, such as transient and stable expression vectors (transposon-based vectors), targeting vectors, inducible systems, polycistronic expression cassettes, fusion proteins, and fluorescent reporters. The method also supports the simple assembly of combinatorial libraries and hierarchical assembly for the production of larger multigenetic cargos. In summary, EMMA is compatible with automated production, and novel genetic parts can be easily incorporated, providing new opportunities for mammalian synthetic biology.
A Worksheet to Enhance Students’ Conceptual Understanding in Vector Components
NASA Astrophysics Data System (ADS)
Wutchana, Umporn; Emarat, Narumon
2017-09-01
With and without physical context, we explored 59 undergraduate students' conceptual and procedural understanding of vector components using both open-ended problems and multiple-choice items designed based on research instruments used in physics education research. The results showed that a number of students produced errors and revealed alternative conceptions, especially when asked to draw the graphical form of vector components. This indicated that most of them had not developed a strong foundational understanding of vector components and could not apply those concepts to problems with physical context. Based on the findings, we designed a worksheet to enhance the students' conceptual understanding of vector components. The worksheet is composed of three parts, which help students construct their own understanding of the definition, graphical form, and magnitude of vector components. To validate the worksheet, focus group discussions with 3 and 10 graduate students (in-service science teachers) were conducted. The modified worksheet was then distributed to 41 grade 9 students in a science class. The students spent approximately 50 minutes completing the worksheet. They sketched and measured vectors and their components and compared them with the trigonometric ratios to consolidate the concepts of vector components. After completing the worksheet, their conceptual models were verified: 83% of them constructed the correct model of vector components.
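The core relation the worksheet targets, resolving a vector into components and recovering its magnitude, condenses into a few lines (a generic illustration, not part of the worksheet itself):

```python
import math

def vector_components(magnitude, angle_deg):
    """Resolve a vector into x- and y-components, the relation the
    worksheet has students build graphically and with trigonometry."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

vx, vy = vector_components(5.0, 36.87)   # classic 3-4-5 triangle
# the magnitude is recovered from the components via sqrt(vx**2 + vy**2)
```

Comparing the measured components against these trigonometric values is exactly the consolidation step the worksheet asks students to perform.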
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Term Cancellations in Computing Floating-Point Gröbner Bases
NASA Astrophysics Data System (ADS)
Sasaki, Tateaki; Kako, Fujio
We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method reveals the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.
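The matrix-reduction idea can be sketched generically: run Gaussian elimination with partial pivoting on a matrix whose rows are coefficient vectors, and flag rows whose entries collapse below a tolerance as evidence of (approximate) linear dependence, i.e. large term cancellation. This is a plain illustration of the idea, not the authors' stabilized algorithm:

```python
import numpy as np

def reduce_coefficient_matrix(M, tol=1e-10):
    """Gaussian elimination with partial pivoting on a matrix of
    coefficient vectors; returns the reduced matrix and the indices of
    rows that cancelled to (near) zero."""
    A = np.array(M, dtype=float)
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        piv = r + np.argmax(np.abs(A[r:, c]))   # partial pivoting
        if abs(A[piv, c]) < tol:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r + 1:] -= np.outer(A[r + 1:, c] / A[r, c], A[r])
        r += 1
    dependent = [i for i in range(rows) if np.all(np.abs(A[i]) < tol)]
    return A, dependent

# The third row is the sum of the first two: it cancels away entirely.
M = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0],
     [1.0, 3.0, 3.0]]
_, dependent = reduce_coefficient_matrix(M)
```

In exact arithmetic such a row reduces to zero; in floating point it reduces to accumulated round-off, which is why detecting these rows matters for stability.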
A SVM framework for fault detection of the braking system in a high speed train
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Yan-Fu; Zio, Enrico
2017-03-01
In April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is evidently critical for trains running at speeds around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least squares SVM, in which a higher cost is assigned to the error of classifying faulty conditions than to the error of classifying normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. It is then applied to the fault detection of braking systems in HSTs: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
Ebtehaj, Isa; Bonakdari, Hossein
2016-01-01
Sediment transport without deposition is an essential consideration in the optimum design of sewer pipes. In this study, a novel method based on a combination of support vector regression (SVR) and the firefly algorithm (FFA) is proposed to predict the minimum velocity required to avoid sediment settling in pipe channels, which is expressed as the densimetric Froude number (Fr). The efficiency of support vector machine (SVM) models depends on the suitable selection of SVM parameters. In this particular study, FFA is used to determine these SVM parameters. The parameters that actually affect the Fr calculation are identified by dimensional analysis. The different dimensionless variables, along with the models, are introduced. The best performance is attributed to the model that employs the sediment volumetric concentration (C(V)), the ratio of the relative median particle diameter to the hydraulic radius (d/R), the dimensionless particle number (D(gr)) and the overall sediment friction factor (λ(s)) to estimate Fr. The performance of the SVR-FFA model is compared with genetic programming, an artificial neural network and existing regression-based equations. The results indicate the superior performance of SVR-FFA (mean absolute percentage error = 2.123%; root mean square error = 0.116) compared with the other methods.
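The two error measures reported above are standard and can be computed as follows (the toy numbers are illustrative, not the study's data):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical observed vs predicted densimetric Froude numbers.
actual    = [2.0, 4.0, 5.0]
predicted = [2.1, 3.8, 5.0]
# mape(actual, predicted) and rmse(actual, predicted) quantify the fit
```

MAPE expresses the error relative to the observed magnitudes, while RMSE retains the units of Fr; reporting both, as the study does, guards against either metric's blind spots.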
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function, RBF, kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
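The hybrid kernel construction can be sketched in a few lines; a convex combination of two valid kernels is itself a valid kernel. The mixing weight and kernel parameters below are illustrative, not the values tuned by the chaotic ions motion algorithm:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Local kernel: radial basis function."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly_kernel(x, y, degree=2, c0=1.0):
    """Global kernel: polynomial."""
    return (sum(a * b for a, b in zip(x, y)) + c0) ** degree

def hybrid_kernel(x, y, w=0.7, gamma=1.0, degree=2, c0=1.0):
    """Convex combination of the local (RBF) and global (polynomial)
    kernels; w trades locality against global generalization."""
    return w * rbf_kernel(x, y, gamma) + (1 - w) * poly_kernel(x, y, degree, c0)

k = hybrid_kernel((1.0, 0.0), (0.5, 0.5))
```

The local RBF term captures fine structure near each calibration point, while the polynomial term supplies the smoother global trend; this is the learning/generalization trade-off the abstract motivates.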
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
Density-based penalty parameter optimization on C-SVM.
Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An
2014-01-01
The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface as the result of a system outlier, C-SVM was implemented to decrease the influence of outliers. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, given the different class proportions and data distributions, positive and negative instances should be assigned different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose a density-based penalty parameter optimization of C-SVM. The experimental results indicate that our proposed algorithm has outstanding performance with respect to both precision and recall.
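The class-dependent penalty idea can be illustrated with a toy weighting function, a simplified proxy for the paper's density-based optimization (which also accounts for the data distribution, not just class counts):

```python
def class_penalties(y, C=1.0):
    """Per-class penalty parameters inversely proportional to class frequency,
    so margin violations in the minority class are penalized more heavily.
    y is a sequence of +1/-1 labels; C is the base penalty."""
    n = len(y)
    n_pos = sum(1 for v in y if v > 0)
    n_neg = n - n_pos
    return {+1: C * n / (2 * n_pos), -1: C * n / (2 * n_neg)}
```

In scikit-learn terms, this particular weighting corresponds to `class_weight='balanced'`; a density-based scheme would further adjust the weights per region of feature space.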
Supplier Short Term Load Forecasting Using Support Vector Regression and Exogenous Input
NASA Astrophysics Data System (ADS)
Matijaš, Marin; Vukićević, Milan; Krajcar, Slavko
2011-09-01
In power systems, load forecasting is important for keeping the equilibrium between production and consumption. With the liberalization of electricity markets, the task of load forecasting has changed because each market participant has to forecast its own load. The consumption of end-consumers is stochastic in nature. Due to competition, suppliers are not in a position to transfer their costs to end-consumers; therefore it is essential to keep the forecasting error as low as possible. Numerous papers investigate load forecasting from the perspective of the grid or of production planning. We research forecasting models from the perspective of a supplier. In this paper, we investigate different combinations of exogenous inputs on simulated supplier loads and show that using points of delivery as a feature for Support Vector Regression leads to lower forecasting error, while adding the customer number in different datasets does the opposite.
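A minimal sketch of how such an exogenous feature can be stacked next to lagged load values before fitting any regressor; the series name `pods` (points-of-delivery counts) and the lag depth are illustrative assumptions:

```python
import numpy as np

def build_features(load, pods, n_lags=24):
    """Build a design matrix of n_lags past load values plus one exogenous
    column (points-of-delivery count at the forecast time), with the load
    at the forecast time as the target."""
    X, y = [], []
    for t in range(n_lags, len(load)):
        X.append(list(load[t - n_lags:t]) + [pods[t]])  # lags + exogenous
        y.append(load[t])
    return np.array(X), np.array(y)
```

The resulting matrix can be fed to any regressor, e.g. an epsilon-SVR.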
Postlaunch calibration of spacecraft attitude instruments
NASA Technical Reports Server (NTRS)
Davis, W.; Hashmall, J.; Garrick, J.; Harman, R.
1993-01-01
The accuracy of both onboard and ground attitude determination can be significantly enhanced by calibrating spacecraft attitude instruments (sensors) after launch. Although attitude sensors are accurately calibrated before launch, the stresses of launch and the space environment inevitably cause changes in sensor parameters. During the mission, these parameters may continue to drift requiring repeated on-orbit calibrations. The goal of attitude sensor calibration is to reduce the systematic errors in the measurement models. There are two stages at which systematic errors may enter. The first occurs in the conversion of sensor output into an observation vector in the sensor frame. The second occurs in the transformation of the vector from the sensor frame to the spacecraft attitude reference frame. This paper presents postlaunch alignment and transfer function calibration of the attitude sensors for the Compton Gamma Ray Observatory (GRO), the Upper Atmosphere Research Satellite (UARS), and the Extreme Ultraviolet Explorer (EUVE).
Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume
2013-01-01
Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
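The FAST idea of sampling an ensemble from a moving window along a single trajectory can be sketched as follows (a toy version; an operational system would add localization and other refinements not shown):

```python
import numpy as np

def fast_covariance(trajectory, window=10):
    """Estimate a background-error covariance by treating the model states
    inside a moving time window as surrogate ensemble members."""
    states = np.asarray(trajectory[-window:])   # members = recent states
    mean = states.mean(axis=0)
    anom = states - mean                        # ensemble anomalies
    return anom.T @ anom / (window - 1)         # sample covariance
```

No extra model integrations are needed: the "ensemble" is read off the one trajectory already being computed.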
Word-level recognition of multifont Arabic text using a feature vector matching approach
NASA Astrophysics Data System (ADS)
Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III
1996-03-01
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
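The vector-matching step can be sketched as a nearest-neighbor search over a database that may hold several feature vectors per lexicon word. Euclidean distance is an assumption here; the abstract does not specify the system's actual match score.

```python
import numpy as np

def match_word(query_vec, database, top_k=3):
    """Rank lexicon words by the best similarity between the query word
    image's feature vector and any of the word's stored vectors
    (several vectors per word allow multiple fonts / noise models)."""
    scores = []
    for word, vecs in database.items():
        best = max(-np.linalg.norm(np.asarray(v) - query_vec) for v in vecs)
        scores.append((best, word))
    scores.sort(reverse=True)                  # highest similarity first
    return [w for _, w in scores[:top_k]]
```

Database pruning, as mentioned above, would simply restrict the dictionary iterated over.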
Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng
2018-05-03
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference from Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which accounts for the special characteristics of the reset operation necessary for the attitude update. This is the main difference from the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance over a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
Fagone, Paolo; Wright, J Fraser; Nathwani, Amit C; Nienhuis, Arthur W; Davidoff, Andrew M; Gray, John T
2012-02-01
Self-complementary AAV (scAAV) vector genomes contain a covalently closed hairpin derived from a mutated inverted terminal repeat that connects the two monomer single-stranded genomes into a head-to-head or tail-to-tail dimer. We found that during quantitative PCR (qPCR) this structure inhibits the amplification of proximal amplicons and causes the systemic underreporting of copy number by as much as 10-fold. We show that cleavage of scAAV vector genomes with restriction endonuclease to liberate amplicons from the covalently closed terminal hairpin restores quantitative amplification, and we implement this procedure in a simple, modified qPCR titration method for scAAV vectors. In addition, we developed and present an AAV genome titration procedure based on gel electrophoresis that requires minimal sample processing and has low interassay variability, and as such is well suited for the rigorous quality control demands of clinical vector production facilities.
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference from Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which accounts for the special characteristics of the reset operation necessary for the attitude update. This is the main difference from the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance over a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
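A linear-measurement sketch of sequential observation processing is given below. Note the hedge: this is the *traditional* sequential form the SMEKF is compared against (state and covariance both updated per observation); the SMEKF itself works on multiplicative attitude-error states and defers the covariance update until all vector observations are absorbed.

```python
import numpy as np

def sequential_update(x, P, observations, R):
    """Process measurements one at a time so each update (and, in the
    nonlinear case, each linearization) uses the freshest state estimate.
    observations: list of (H_row, z) pairs; R: per-observation noise cov."""
    for H, z in observations:
        H = np.atleast_2d(H)
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z - H @ x)                  # state update
        P = (np.eye(len(x)) - K @ H) @ P         # covariance update
    return x, P
```

Deferring the `P` update to after the loop (while still updating `x` inside it) would mimic the SMEKF's handling of the attitude reset.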
NASA Astrophysics Data System (ADS)
Ritter, Kenneth August, III
Industry has a continuing need to train its workforce on recent engineering developments, but many engineering products and processes are hard to explain because of limitations of size, visibility, time scale, cost, and safety. The product or process might be difficult to see because it is either very large or very small, because it is enclosed within an opaque container, or because it happens very fast or very slowly. Some engineering products and processes are also costly or unsafe to use for training purposes, and sometimes the domain expert is not physically available at the training location. All these limitations can potentially be addressed using advanced visualization techniques such as virtual reality. This dissertation describes the development of an immersive virtual reality application using the Six Sigma DMADV process to explain the main equipment and processes used in a concentrating solar power plant. The virtual solar energy center (VEC) application was initially developed and tested in a Cave Automatic Virtual Environment (CAVE) during 2013 and 2014. The software programs used for development were SolidWorks, 3ds Max Design, and Unity 3D. Current hardware and software technologies that could complement this research were analyzed. The NVIDIA GRID Visual Computing Appliance (VCA) was chosen as the rendering solution for animating complex CAD models in this application. The MiddleVR software toolkit was selected as the toolkit for VR interactions and CAVE display. A non-immersive 3D version of the VEC application was tested and shown to be an effective training tool in late 2015. An immersive networked version of the VEC allows the user to receive live instruction from a trainer projected via depth camera imagery from a remote location. Four comparative analysis studies were performed. These studies used the average normalized gain from pre-test scores to determine the effectiveness of the various training methods.
With the DMADV approach, solutions were identified and verified during each iteration of the development, which saved valuable time and led to improvements in each revision of the application; the final version received 88% positive responses and was as effective as the other training methods assessed.
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings a considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
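Under the stated Gauss-Markov assumption, the LMMSE estimator has a simple closed form. The sketch below uses 1D station positions and illustrative values for the correlation distance and the noise and signal variances; none of these numbers come from the paper.

```python
import numpy as np

def lmmse_dc(y, positions, sigma_d=1.0, d_corr=50.0, sigma_n=0.5):
    """LMMSE estimate of true differential corrections from noisy ones:
    d_hat = C_d (C_d + C_n)^-1 y, with an exponential (Gauss-Markov)
    spatial correlation model for the true DCs."""
    pos = np.asarray(positions, dtype=float)
    dist = np.abs(pos[:, None] - pos[None, :])
    C_d = sigma_d ** 2 * np.exp(-dist / d_corr)   # prior covariance of DCs
    C_n = sigma_n ** 2 * np.eye(len(pos))         # measurement noise cov
    return C_d @ np.linalg.solve(C_d + C_n, np.asarray(y, dtype=float))
```

When `d_corr` is large relative to the station spacing, neighboring stations share information and the noise is averaged down; a mismatched `d_corr` is exactly the modeling error whose effect the paper assesses.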
Gordon, H R; Wang, M
1992-07-20
The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, Lr(r) is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error depends significantly on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be of the same order of magnitude as that caused by uncertainties of +/- 15 mb in the surface atmospheric pressure or of +/- 50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
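The spectral steps can be sketched with single-record periodograms. This is a deliberate simplification: the methodology above uses proper spectral estimation (averaged periodograms) over a stochastic input, but the ratios P_uy/P_uu and P_ue/P_uu take the same form.

```python
import numpy as np

def etfe(u, y):
    """Empirical transfer-function estimate h = P_uy / P_uu from
    single-record periodograms of input u and output y."""
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    return (np.conj(U) * Y) / (np.conj(U) * U)

def additive_uncertainty(u, y, y_model):
    """Estimate delta = P_ue / P_uu from the output error e = y - y_model,
    where y_model is the parametric model's response to the same input u."""
    return etfe(u, np.asarray(y) - np.asarray(y_model))
```

For a noise-free static gain, the estimate recovers the gain at every frequency bin, as the test below checks.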
Corruption of genomic databases with anomalous sequence.
Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L
1992-06-11
We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without using repetitive calculation of a cost function. To adjust output currents with the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding zero voltage vectors which produce the CMVs in the VSI within ±Vdc/2. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the efforts of repeatedly calculating the cost function. And the two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as close as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
Computerized tongue image segmentation via the double geo-vector flow
2014-01-01
Background: Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Methods: Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. Results: The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. Conclusions: By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation. PMID:24507094
Computerized tongue image segmentation via the double geo-vector flow.
Shi, Miao-Jing; Li, Guo-Zheng; Li, Fu-Feng; Xu, Chao
2014-02-08
Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation. Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part. The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively. By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.
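The Hausdorff boundary-error metric reported above can be computed for small point sets with a direct pure-Python sketch (quadratic in the number of boundary points; practical implementations use spatial indexing):

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets A and B:
    the largest distance from any point of one set to the nearest
    point of the other."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(X, Y):
        return max(min(d(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))
```

The mean distance metric quoted alongside it simply averages the nearest-point distances instead of taking the maximum.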
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
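The basic checksum encoding for matrix-vector multiplication can be sketched as follows; the paper's actual contribution (decoupled, adaptively scheduled online checksum updates for iterative methods) goes beyond this classic check.

```python
import numpy as np

def checksum_matvec(A, x):
    """Checksum-encoded matrix-vector product: append the column-sum row
    e^T A to A before multiplying, so the extra entry of the result must
    equal sum(y). A mismatch beyond round-off signals a soft error in
    the computation or in memory."""
    Ac = np.vstack([A, A.sum(axis=0)])   # encoded matrix [A; e^T A]
    yc = Ac @ x
    y, check = yc[:-1], yc[-1]
    ok = np.isclose(y.sum(), check)      # invariant: e^T y == (e^T A) x
    return y, ok
```

Because the invariant holds for every iterate, the check can run inside an iterative solver at a small O(n) cost per step.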
Robust Vision-Based Pose Estimation Algorithm for an UAV with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In that case a solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally effective and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as by errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
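A stripped-down Monte Carlo version of the boundary-displacement simulation is sketched below, assuming independent per-vertex noise; the model above additionally imposes spatial dependence of the errors along boundary segments, which this toy omits.

```python
import random

def shoelace(poly):
    """Signed polygon area by the shoelace formula; poly is a list of (x, y)."""
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                     - poly[(i + 1) % n][0] * poly[i][1]
                     for i in range(n))

def area_error_sim(poly, sigma=0.1, trials=1000, seed=1):
    """Propagate coordinate errors to polygon area by Monte Carlo:
    perturb each vertex with independent Gaussian noise of std `sigma`
    and report the mean absolute area error over `trials` draws."""
    rng = random.Random(seed)
    a0 = abs(shoelace(poly))
    errs = []
    for _ in range(trials):
        p = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
             for x, y in poly]
        errs.append(abs(abs(shoelace(p)) - a0))
    return sum(errs) / trials
```

Replacing the independent draws with correlated displacements along each boundary segment would recover the spatial-dependence feature that distinguishes the model.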
Su, Le; Han, Lei; Ge, Fei; Zhang, Shang Li; Zhang, Yun; Zhao, Bao Xiang; Zhao, Jing; Miao, Jun Ying
2012-10-15
Manufactured nanoparticles are currently used in many fields. However, their potential toxicity is a growing concern for human health. In our previous study, we prepared novel magnetic nanoparticles (MNPs) that could effectively remove heavy metal ions and cationic dyes from aqueous solution. To understand their biocompatibility, we investigated the effect of the nanoparticles on the function of vascular endothelial cells. The results showed that the nanoparticles were taken up by human umbilical vein endothelial cells (HUVECs) and could inhibit cell proliferation at 400 μg/ml. An increase in nitric oxide (NO) production and endothelial nitric oxide synthase (eNOS) activity was induced, accompanied by a decrease in the caveolin-1 level. The endothelium in the aortic root was damaged and the NO level in serum was elevated after treating mice with 20 mg/kg nanoparticles for 3 days, but it remained intact after treatment with 5 mg/kg nanoparticles. Meanwhile, an increase in eNOS activity and a decrease in the caveolin-1 level were induced in the endothelium. The data suggest that the low concentration of nanoparticles did not affect the function and viability of VECs, whereas the high concentration inhibited VEC proliferation through elevation of eNOS activity and NO production, and thus presents toxicity. Copyright © 2012 Elsevier B.V. All rights reserved.
Design of large vacuum chamber for VEC superconducting cyclotron beam line switching magnet
NASA Astrophysics Data System (ADS)
Bhattacharya, Sumantra; Nandi, Chinmoy; Gayen, Subhasis; Roy, Suvadeep; Mishra, Santosh Kumar; Ramrao Bajirao, Sanjay; Pal, Gautam; Mallik, C.
2012-11-01
The VEC K500 superconducting cyclotron will be used to accelerate heavy ions. The accelerated beam will be transported to different beam halls using large switching magnets. The vacuum chamber for the switching magnet is around 1000 mm long, with a height of 85 mm and a width varying from 100 mm to 360 mm. The chamber material has been chosen as SS304. Design of the vessel was done as per ASME Boiler and Pressure Vessel Code, Section VIII, Division 1, and it was observed that primary stress values exceed the allowable limit. Since the magnet was already designed with a fixed pole gap, increasing the vacuum chamber plate thickness would restrict the space for beam transport. The design was therefore optimized using the stress analysis software ANSYS. Analysis was started with a plate thickness of 4 mm; the stress was found to be higher than the allowable level. The analysis was repeated with the plate thickness increased to 6 mm, which reduced the stress below the allowable level. To reduce the stress concentration due to the sharp bend, the corner where the stress level was higher was chamfered, and the plate thickness at the corner was increased from 6 mm to 10 mm. These measures reduced the localized stress.
Zecchin, Annalisa; Wong, Brian W; Tembuyser, Bieke; Souffreau, Joris; Van Nuffelen, An; Wyns, Sabine; Vinckier, Stefan; Carmeliet, Peter; Dewerchin, Mieke
2018-06-18
During embryonic development, lymphatic endothelial cells (LECs) differentiate from venous endothelial cells (VECs), a process that is tightly regulated by several genetic signals. While the aquatic zebrafish model is regularly used for studying lymphangiogenesis and offers the unique advantage of time-lapse video-imaging of lymphatic development, some aspects of lymphatic development in this model differ from those in the mouse. It therefore remained to be determined whether fatty acid β-oxidation (FAO), which we showed to regulate lymphatic formation in the mouse, also co-determines lymphatic development in this aquatic model. Here, we took advantage of the power of the zebrafish embryo model to visualize the earliest steps of lymphatic development through time-lapse video-imaging. By targeting zebrafish isoforms of carnitine palmitoyltransferase 1a (cpt1a), a rate controlling enzyme of FAO, with multiple morpholinos, we demonstrate that reducing CPT1A levels and FAO flux during zebrafish development impairs lymphangiogenic secondary sprouting, the initiation of lymphatic development in the zebrafish trunk, and the formation of the first lymphatic structures. These findings not only show evolutionary conservation of the importance of FAO for lymphatic development, but also suggest a role for FAO in co-regulating the process of VEC-to-LEC differentiation in zebrafish in vivo. Copyright © 2018 Elsevier Inc. All rights reserved.
Shoji, Mamoru; Sun, Aiming; Kisiel, Walter; Lu, Yang J; Shim, Hyunsuk; McCarey, Bernard E; Nichols, Christopher; Parker, Ernest T; Pohl, Jan; Mosley, Cara A; Alizadeh, Aaron R; Liotta, Dennis C; Snyder, James P
2008-04-01
Tissue factor (TF) is aberrantly expressed on tumor vascular endothelial cells (VECs) and on cancer cells in many malignant tumors, but not on normal VECs, making it a promising target for cancer therapy. As a transmembrane receptor for coagulation factor VIIa (fVIIa), TF forms a high-affinity complex with its cognate ligand, which is subsequently internalized through receptor-mediated endocytosis. Accordingly, we developed a method for selectively delivering EF24, a potent synthetic curcumin analog, to TF-expressing tumor vasculature and tumors using fVIIa as a drug carrier. EF24 was chemically conjugated to fVIIa through a tripeptide-chloromethyl ketone. After binding to TF-expressing targets by fVIIa, EF24 is endocytosed along with the drug carrier and exerts its cytotoxicity. Our results showed that the conjugate inhibits vascular endothelial growth factor-induced angiogenesis in a rabbit cornea model and in a Matrigel model in athymic nude mice. The conjugate induced apoptosis in tumor cells and significantly reduced tumor size in human breast cancer xenografts in athymic nude mice as compared with unconjugated EF24. By conjugating potent drugs to fVIIa, this targeted drug delivery system has the potential to enhance therapeutic efficacy, while reducing toxic side effects. It may also prove to be useful for treating drug-resistant tumors and micro-metastases in addition to primary tumors.
Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.
2014-01-01
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
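The three model forms can be sketched for a single scalar feature; the study's feature vectors are high-dimensional, and the time-indexing scheme here is an assumption for illustration:

```python
def forecast(observations, t, model="static", adjustment_points=(0,)):
    """Forecast a scalar morphology feature at treatment time index t,
    using only the observations available at the adjustment points.
    Sketches the static / linear / mean model forms described above."""
    known = [i for i in adjustment_points if i <= t]
    last = max(known)
    if model == "static":
        return observations[last]            # hold the last adjusted value
    if model == "mean":
        return sum(observations[i] for i in known) / len(known)
    if model == "linear":                    # extrapolate from the last two
        if len(known) < 2:
            return observations[last]
        prev = sorted(known)[-2]
        slope = (observations[last] - observations[prev]) / (last - prev)
        return observations[last] + slope * (t - last)
    raise ValueError(model)

obs = [30 - i for i in range(34)]            # e.g. a shrinking GTV dimension (mm)
```

For this steadily shrinking example, the linear model tracks the trend exactly, while the static model holds the value at the last adjustment point, mirroring the paper's finding that model form matters less than the number of adjustment points.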
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind
2014-08-15
Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.
Quadrature mixture LO suppression via DSW DAC noise dither
Dubbert, Dale F [Cedar Crest, NM; Dudley, Peter A [Albuquerque, NM
2007-08-21
A Quadrature Error Corrected Digital Waveform Synthesizer (QECDWS) employs frequency dependent phase error corrections to, in effect, pre-distort the phase characteristic of the chirp to compensate for the frequency dependent phase nonlinearity of the RF and microwave subsystem. In addition, the QECDWS can employ frequency dependent correction vectors to the quadrature amplitude and phase of the synthesized output. The quadrature corrections cancel the radar's quadrature upconverter (mixer) errors to null the unwanted spectral image. A result is the direct generation of an RF waveform, which has a theoretical chirp bandwidth equal to the QECDWS clock frequency (1 to 1.2 GHz) with the high Spurious Free Dynamic Range (SFDR) necessary for high dynamic range radar systems such as SAR. To correct for the problematic upconverter local oscillator (LO) leakage, precision DC offsets can be applied over the chirped pulse using a pseudo-random noise dither. The present dither technique can effectively produce a quadrature DC bias which has the precision required to adequately suppress the LO leakage. A calibration technique can be employed to calculate both the quadrature correction vectors and the LO-nulling DC offsets using the radar built-in test capability.
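The quadrature-correction idea can be illustrated with the standard complex-baseband imbalance model; the coefficients and correction formula below are a textbook sketch under that model, not the QECDWS implementation:

```python
import cmath

def imbalance_coeffs(gain_err, phase_err):
    """Complex-baseband model of an IQ upconverter with fractional gain
    error `gain_err` and phase error `phase_err` (radians):
    y = a*x + b*conj(x). The b*conj(x) term is the unwanted image."""
    g = 1.0 + gain_err
    a = 0.5 * (1.0 + g * cmath.exp(-1j * phase_err))
    b = 0.5 * (1.0 - g * cmath.exp(1j * phase_err))
    return a, b

def predistort(x, a, b):
    """Pre-distort sample x so the imperfect upconverter emits x exactly,
    nulling the image (analogous to the correction vectors above)."""
    d = abs(a) ** 2 - abs(b) ** 2
    return (a.conjugate() * x - b * x.conjugate()) / d

a, b = imbalance_coeffs(gain_err=0.05, phase_err=0.02)
x = cmath.exp(1j * 0.7)                 # ideal unit-amplitude sample
x_pre = predistort(x, a, b)
y = a * x_pre + b * x_pre.conjugate()   # what the hardware would emit
```

Without pre-distortion the emitted sample differs from the ideal one by the image term; with it, the hardware output matches the ideal sample to machine precision.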
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
NASA Astrophysics Data System (ADS)
Kotchenova, Svetlana Y.; Vermote, Eric F.
2007-07-01
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
Polarization-analyzing circuit on InP for integrated Stokes vector receiver.
Ghosh, Samir; Kawabata, Yuto; Tanemura, Takuo; Nakano, Yoshiaki
2017-05-29
Stokes vector modulation and direct detection (SVM/DD) has immense potential to reduce the cost burden of next-generation short-reach optical communication networks. In this paper, we propose and demonstrate an InGaAsP/InP waveguide-based polarization-analyzing circuit for an integrated Stokes vector (SV) receiver. By transforming the input state of polarization (SOP) and projecting its SV onto three different vectors on the Poincare sphere, we show that the actual SOP can be retrieved by simple calculation. We also show that this projection matrix has flexibility and that its deviation due to device imperfections can be calibrated to a certain degree, so that the proposed device is fundamentally robust against fabrication errors. A proof-of-concept photonic integrated circuit (PIC) is fabricated on InP using half-ridge waveguides to successfully demonstrate detection of different SOPs scattered over the Poincare sphere.
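The retrieval step reduces to inverting a 3x3 projection. A sketch, with an assumed illustrative projection matrix; the real device's matrix would be obtained by calibration:

```python
def solve3(M, p):
    """Solve the 3x3 system M s = p by Gauss-Jordan elimination with
    partial pivoting, recovering the Stokes vector s from the three
    projection measurements p."""
    A = [row[:] + [pi] for row, pi in zip(M, p)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

# Illustrative (assumed) projection matrix: rows are the three analyzer
# vectors on the Poincare sphere.
M = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.5, 0.5, 0.7071]]
s_true = [0.3, -0.4, 0.866]
p = [sum(m * s for m, s in zip(row, s_true)) for row in M]
s_est = solve3(M, p)
```

As long as the three analyzer vectors are linearly independent, the matrix is invertible, which is why a calibrated (even imperfect) projection matrix still allows exact SOP recovery.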
Boosting with Averaged Weight Vectors
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2002-01-01
AdaBoost is a well-known ensemble learning algorithm that constructs its constituent or base models in sequence. A key step in AdaBoost is constructing a distribution over the training examples to create each base model. This distribution, represented as a vector, is constructed to be orthogonal to the vector of mistakes made by the previous base model in the sequence. The idea is to make the next base model's errors uncorrelated with those of the previous model. Some researchers have pointed out the intuition that it is probably better to construct a distribution that is orthogonal to the mistake vectors of all the previous base models, but that this is not always possible. We present an algorithm that attempts to come as close as possible to this goal in an efficient manner. We present experimental results demonstrating significant improvement over AdaBoost and the Totally Corrective boosting algorithm, which also attempts to satisfy this goal.
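One way to approximate the stated goal is to project the current distribution onto the hyperplane orthogonal to the averaged mistake vector and renormalize. This is a loose sketch of the idea, not the exact algorithm evaluated in the paper:

```python
def reweight(dist, mistake_vectors):
    """Build a distribution approximately orthogonal to the average of the
    previous models' mistake vectors (u_i = +1 if example i was
    misclassified, -1 otherwise): project onto the orthogonal hyperplane,
    clip negatives, renormalize."""
    n = len(dist)
    avg = [sum(v[i] for v in mistake_vectors) / len(mistake_vectors)
           for i in range(n)]
    dot = sum(d * a for d, a in zip(dist, avg))
    norm2 = sum(a * a for a in avg) or 1.0
    d = [max(x - dot * a / norm2, 0.0) for x, a in zip(dist, avg)]
    total = sum(d)
    return [x / total for x in d]
```

Starting from uniform weights, with one previous model that misclassified only example 0, the new distribution up-weights that example while remaining orthogonal to the mistake vector, matching the boosting intuition above.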
None, None
2016-11-21
Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
New Research on MEMS Acoustic Vector Sensors Used in Pipeline Ground Markers
Song, Xiaopeng; Jian, Zeming; Zhang, Guojun; Liu, Mengran; Guo, Nan; Zhang, Wendong
2015-01-01
Driven by the demands of current pipeline detection systems, above-ground marker (AGM) systems based on the sound detection principle have become a major development trend in pipeline technology. A novel MEMS acoustic vector sensor for AGM systems, which has the advantages of high sensitivity, high signal-to-noise ratio (SNR), and good low-frequency performance, is put forward. First, it is shown that the frequency of the detected sound signal is concentrated in a lower frequency range, and that sound attenuation is relatively low in soil. Second, the MEMS acoustic vector sensor structure and basic principles are introduced. Finally, experimental tests are conducted, and the results show that in the range of 0°∼90°, when r = 5 m, the proposed MEMS acoustic vector sensor can effectively detect sound signals in soil. The measurement errors of all angles are less than 5°. PMID:25609046
Tie, Junbo; Cao, Juliang; Chang, Lubing; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-03-16
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method.
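For a constant bias under the simple model g_meas = g_ref + b + noise, the per-axis least-squares solution is the mean residual. A simplified sketch with synthetic data; the paper additionally builds a model of the measurement noise in the gravity vector measurement, which this omits:

```python
import random

def estimate_bias(measured, reference):
    """Per-axis least-squares estimate of a constant accelerometer bias
    from gravity vector measurements, under the model
    g_meas = g_ref + bias + noise (the estimate is the mean residual)."""
    n = len(measured)
    return [sum(m[a] - r[a] for m, r in zip(measured, reference)) / n
            for a in range(3)]

# Synthetic data: known bias plus white noise on a level, stationary sensor.
rng = random.Random(1)
g_ref = [(0.0, 0.0, 9.81)] * 500
bias = (0.001, -0.002, 0.0005)
g_meas = [tuple(r[a] + bias[a] + rng.gauss(0, 1e-4) for a in range(3))
          for r in g_ref]
b_hat = estimate_bias(g_meas, g_ref)
```

Averaging over many epochs suppresses the white-noise component, so the recovered bias converges on the injected one; colored noise, as modeled in the paper, requires the full least-squares separation.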
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in fully searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as short as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
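The full-search versus tree-search trade-off can be sketched as follows; the tiny codebook, tree layout, and weight vector are illustrative assumptions:

```python
def wsq_error(x, c, w):
    """Weighted squared error between input vector x and codevector c."""
    return sum(wi * (xi - ci) ** 2 for xi, ci, wi in zip(x, c, w))

def full_search(x, codebook, w):
    """O(N) exhaustive search: index of the minimum-distortion codevector."""
    return min(range(len(codebook)), key=lambda i: wsq_error(x, codebook[i], w))

def tree_search(x, node, w):
    """O(log N) search of a binary tree-structured codebook: at each
    internal node, descend toward the child whose centroid is closer.
    Faster, but not guaranteed to match the full-search result."""
    while "index" not in node:
        l, r = node["left"], node["right"]
        node = l if wsq_error(x, l["vec"], w) <= wsq_error(x, r["vec"], w) else r
    return node["index"]

# Tiny illustrative codebook (N = 4, K = 2) and its search tree.
codebook = [(0, 0), (0, 1), (1, 0), (1, 1)]
w = (1.0, 1.0)
tree = {"left":  {"vec": (0, 0.5),
                  "left":  {"vec": (0, 0), "index": 0},
                  "right": {"vec": (0, 1), "index": 1}},
        "right": {"vec": (1, 0.5),
                  "left":  {"vec": (1, 0), "index": 2},
                  "right": {"vec": (1, 1), "index": 3}}}
```

The tree search makes log N node comparisons instead of N codevector comparisons, which is what lets the SIMD engine keep up with video rates on large codebooks.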
Cao, Juliang; Cai, Shaokun; Wu, Meiping; Lian, Junxiang
2018-01-01
Compensation of gravity disturbance can improve the precision of inertial navigation, but the effect of compensation will decrease due to the accelerometer bias, and estimation of the accelerometer bias is a crucial issue in gravity disturbance compensation. This paper first investigates the effect of accelerometer bias on gravity disturbance compensation, and the situation in which the accelerometer bias should be estimated is established. The accelerometer bias is estimated from the gravity vector measurement, and a model of measurement noise in gravity vector measurement is built. Based on this model, accelerometer bias is separated from the gravity vector measurement error by the method of least squares. Horizontal gravity disturbances are calculated through EGM2008 spherical harmonic model to build the simulation scene, and the simulation results indicate that precise estimations of the accelerometer bias can be obtained with the proposed method. PMID:29547552
Coherent vector meson photoproduction from deuterium at intermediate energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, T.C.; Strikman, M.I.; Sargsian, M.M.
2006-04-15
We analyze the cross section for vector meson photoproduction off a deuteron for the intermediate range of photon energies starting at a few giga-electron-volts above the threshold and higher. We reproduce the steps in the derivation of the conventional nonrelativistic Glauber expression based on an effective diagrammatic method while making corrections for Fermi motion and intermediate-energy kinematic effects. We show that, for intermediate-energy vector meson production, the usual Glauber factorization breaks down, and we derive corrections to the usual Glauber method to linear order in longitudinal nucleon momentum. The purpose of our analysis is to establish methods for probing interesting physics in the production mechanism for φ mesons and heavier vector mesons. We demonstrate how neglecting the breakdown of Glauber factorization can lead to errors in measurements of basic cross sections extracted from nuclear data.
Nonperturbative interpretation of the Bloch vector's path beyond the rotating-wave approximation
NASA Astrophysics Data System (ADS)
Benenti, Giuliano; Siccardi, Stefano; Strini, Giuliano
2013-09-01
The Bloch vector's path of a two-level system exposed to a monochromatic field exhibits, in the regime of strong coupling, complex corkscrew trajectories. By considering the infinitesimal evolution of the two-level system when the field is treated as a classical object, we show that the Bloch vector's rotation speed oscillates between zero and twice the rotation speed predicted by the rotating-wave approximation. Cusps appear when the rotation speed vanishes. We prove analytically that in correspondence to cusps the curvature of the Bloch vector's path diverges. On the other hand, numerical data show that the curvature is very large even for a quantum field in the deep quantum regime with mean number of photons n̄ ≲ 1. We finally compute numerically the typical error size in a quantum gate when the terms beyond the rotating-wave approximation are neglected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatia, Harsh
This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residing numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. 
This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty visualization of unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more-general reference frames and more-sophisticated domain discretizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Jun; Sebastian, Evelyn; Mangona, Victor
2013-02-15
Purpose: In order to increase the accuracy and speed of catheter reconstruction in a high-dose-rate (HDR) prostate implant procedure, an automatic tracking system has been developed using an electromagnetic (EM) device (trakSTAR, Ascension Technology, VT). The performance of the system, including the accuracy and noise level with various tracking parameters and conditions, was investigated. Methods: A direct current (dc) EM transmitter (midrange model) and a sensor with a diameter of 1.3 mm (Model 130) were used in the trakSTAR system for tracking catheter position during HDR prostate brachytherapy. Localization accuracy was assessed under both static and dynamic conditions. For the static analysis, a calibration phantom was used to investigate error dependency on operating room (OR) table height (bottom vs midposition vs top), sensor position (distal tip of catheter vs connector end of catheter), direction [left-right (LR) vs anterior-posterior (AP) vs superior-inferior (SI)], sampling frequency (40 vs 80 vs 160 Hz), and interference from OR equipment (present vs absent). The mean and standard deviation of the localization offset in each direction and the corresponding error vectors were calculated. For the dynamic analysis, the paths of five straight catheters were tracked to study the effects of direction, sampling frequency, and interference of the EM field. Statistical analysis was conducted to compare the results in different configurations. Results: When interference was present in the static analysis, the error vectors were significantly higher at the top table position (3.3 ± 1.3 vs 1.8 ± 0.9 mm at bottom and 1.7 ± 1.0 mm at middle, p < 0.001), at the catheter end position (3.1 ± 1.1 vs 1.4 ± 0.7 mm at the tip position, p < 0.001), and at 40 Hz sampling frequency (2.6 ± 1.1 vs 2.4 ± 1.5 mm at 80 Hz and 1.8 ± 1.1 mm at 160 Hz, p < 0.001). 
The mean offset errors were likewise higher in the LR direction (-1.7 ± 1.4 vs 0.4 ± 0.5 mm in the AP and 0.8 ± 0.8 mm in the SI directions, p < 0.001). The error vectors were significantly higher with surrounding interference (2.2 ± 1.3 mm) than without interference (1.0 ± 0.7 mm, p < 0.001). An accuracy of 1.6 ± 0.2 mm can be reached when using the optimum configuration (160 Hz at the middle table position). When interference was present in the dynamic tracking, the mean tracking error in the LR direction (1.4 ± 0.5 mm) was significantly higher than that in the AP direction (0.3 ± 0.2 mm, p < 0.001), as were the mean vector errors at 40 Hz (2.1 ± 0.2 mm vs 1.3 ± 0.2 mm at 80 Hz and 0.9 ± 0.2 mm at 160 Hz, p < 0.05). However, when interference was absent, the errors were comparable in both directions and at all sampling frequencies. An accuracy of 0.9 ± 0.2 mm was obtained for the dynamic tracking when using the optimum configuration. Conclusions: The performance of an EM tracking system depends highly on the system configuration and surrounding environment. The accuracy of EM tracking for catheter reconstruction in a prostate HDR brachytherapy procedure can be improved by reducing interference from surrounding equipment, decreasing the distance from the transmitter to the tracking area, and choosing an appropriate sampling frequency. A calibration scheme is needed to further reduce the tracking error when the interference is high.
Stochastic estimates of gradient from laser measurements for an autonomous Martian roving vehicle
NASA Technical Reports Server (NTRS)
Burger, P. A.
1973-01-01
The general problem of estimating the state vector x from the state equation h = Ax, where h, A, and x are all stochastic, is presented. Specifically, the problem is for an autonomous Martian roving vehicle to utilize laser measurements in estimating the gradient of the terrain. Error exists due to two factors: surface roughness and instrumental measurement. The errors in slope depend on the standard deviations of these noise factors. Numerically, the error in gradient is expressed as a function of instrumental inaccuracies. Certain guidelines for the accuracy of the permissible gradient must be set. It is found that present technology can meet these guidelines.
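The slope-estimation step can be illustrated by an ordinary least-squares fit to noisy height samples; a sketch with synthetic data, whereas the formulation above also treats the matrix A itself as stochastic:

```python
import random

def slope_least_squares(xs, hs):
    """Ordinary least-squares slope (gradient) of terrain heights hs
    sampled at ranges xs; both carry noise, so the gradient estimate
    is itself a stochastic quantity."""
    n = len(xs)
    mx = sum(xs) / n
    mh = sum(hs) / n
    num = sum((x - mx) * (h - mh) for x, h in zip(xs, hs))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic laser profile: true gradient 0.2, height noise sigma = 5 cm.
rng = random.Random(3)
xs = list(range(20))
hs = [0.2 * x + rng.gauss(0, 0.05) for x in xs]
slope = slope_least_squares(xs, hs)
```

The standard deviation of the slope estimate scales with the height-noise standard deviation, mirroring the paper's point that gradient error is a function of instrumental inaccuracy.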
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
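A minimal sketch of the SADH descriptor for one pixel displacement; handling of negative offsets and normalization to probabilities is omitted for brevity:

```python
def sadh(image, dx, dy):
    """Sum and Difference Histograms of a 2-D gray-level image for the
    pixel displacement (dx, dy) with dx, dy >= 0: histograms of
    g(i,j) + g(i+dy, j+dx) and g(i,j) - g(i+dy, j+dx). Together these
    two 1-D histograms retain most of the texture information of the
    2-D gray-level co-occurrence matrix at a fraction of the storage."""
    h, w = len(image), len(image[0])
    sums, diffs = {}, {}
    for i in range(h - dy):
        for j in range(w - dx):
            a, b = image[i][j], image[i + dy][j + dx]
            sums[a + b] = sums.get(a + b, 0) + 1
            diffs[a - b] = diffs.get(a - b, 0) + 1
    return sums, diffs
```

For G gray levels the pair of histograms needs O(G) storage versus O(G²) for the co-occurrence matrix, which is the source of the storage savings quoted above.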
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed to extract medical error factors while reducing the extraction difficulty. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted, after which they were related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
NASA Astrophysics Data System (ADS)
Samson, Thomas
We propose a method for obtaining an expression for the Hall conductivity of two-dimensional electronic structures and examine it in the zero-temperature limit in order to verify the quantum Hall effect. We are essentially interested in the integer quantum Hall effect and in fractional effects with filling factor below one. The system considered is an electron gas in weak interaction with the impurities of the sample. The electron-gas model consists of a two-dimensional gas of spinless electrons exposed perpendicularly to a uniform magnetic field. The field is described by the vector potential A defined in the Dingle, or symmetric, gauge. Following the second-quantization formalism, the Hamiltonian of this gas is represented in the basis of the one-body Dingle states |n,m> and thus expressed in terms of the corresponding creation and annihilation operators a†_{n,m} and a_{n,m}. We further assume that the electrons of the Dingle ground level interact with one another via the Coulomb potential. The method relies on an N-body master equation, quantum and statistical in nature, that satisfies the second law of thermodynamics. From it we obtain a system of differential equations called the quantum hierarchy of equations, whose solution allows us to determine a one-body equation, the quantum Boltzmann equation, which dictates the evolution of the statistical average of the off-diagonal operator a†_{n,m} a_{n',m'} under the action of the applied electric field E(t). Its solution, Tr(ρ(t) a†_{n,m} a_{n',m'}), defines the convolution relation between the Hall current density J_H(t) and the electric field E(t), and the Laplace-Fourier transform of the kernel provides the desired expression for the Hall conductivity.
For an occupation factor (number of electrons divided by the degeneracy of the Dingle states) greater than one, i.e., in the absence of electron-electron interaction, it is easy to evaluate this conductivity in the zero-temperature limit and to show that it tends toward one of the quantized values qe²/h, in accordance with the integer quantum Hall effect. However, for an occupation factor smaller than one, i.e., in the presence of electron-electron interaction, we cannot evaluate this limit and obtain the expected results, because one of the terms involved cannot be determined. Nevertheless, since this term is statistical in nature, it can easily be expressed in terms of the propagator of the electron gas, for which an expression must now be determined in the fractional quantum Hall regime. After showing that perturbation theory, based on Wick's theorem and Feynman diagram techniques, fails to accomplish this task correctly, we propose a second method. It relies on the functional-integral formalism and on a generalized Hubbard-Stratonovich transformation that substitutes an effective one-body interaction for the two-body interaction. The final expression obtained, though not completely resolved, should be estimable by a good analytical approximation or, at worst, numerically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY
Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single iso-center technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotational errors in the three-dimensional (3D) patient coordinate system were assumed to follow the 3D independent normal distribution with zero mean and standard deviations σx, σy, σz for the setup error and a uniform σR for the rotational error. Both random vectors were summed, normalized, and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision, like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx=σy=σz=0.7 mm), a σR=0.32 mm will decrease the PTV coverage from 95% to 90% of the time, or an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR>0.3 mm will lead to an additional PTV margin that cannot be ignored, and the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the iso-center and the target is large.
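One plausible reading of this margin model can be checked numerically. The sketch below reproduces the roughly 2 mm margin for σx=σy=σz=0.7 mm and the roughly 0.2 mm extra margin for σR=0.32 mm by Monte Carlo sampling; the per-axis quadrature combination of setup and rotational errors is our assumption, not a detail stated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def ptv_margin(sigma_setup, sigma_rot=0.0, coverage=0.95, n=200_000):
    # Assumed model: independent 3-D normal setup and rotational errors,
    # so each axis has combined std sqrt(sigma_setup^2 + sigma_rot^2);
    # the radial distance rho is then sigma_total times a chi(3) variable.
    sigma_total = np.sqrt(sigma_setup**2 + sigma_rot**2)
    rho = np.linalg.norm(rng.normal(0.0, sigma_total, size=(n, 3)), axis=1)
    return np.quantile(rho, coverage)

base = ptv_margin(0.7)                        # close to the 2 mm margin above
extra = ptv_margin(0.7, sigma_rot=0.32) - base
print(f"base margin: {base:.2f} mm, extra for sigma_R=0.32 mm: {extra:.2f} mm")
```

The 95th percentile of a chi(3) variable is about 2.80, so the closed-form margin is 2.80·σ_total, which the sampling estimate approaches for large n.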
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
Quadrature demultiplexing using a degenerate vector parametric amplifier.
Lorences-Riesgo, Abel; Liu, Lan; Olsson, Samuel L I; Malik, Rohit; Kumpera, Aleš; Lundström, Carl; Radic, Stojan; Karlsson, Magnus; Andrekson, Peter A
2014-12-01
We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at a bit-error rate (BER) of 10^-9. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition.
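A purely digital analogue of the quadrature decomposition (not the optical parametric scheme itself) illustrates how one QPSK stream splits into two BPSK tributaries; the mapping and noise level here are illustrative:

```python
import numpy as np

# Each QPSK symbol carries two bits; projecting onto the in-phase (I) and
# quadrature (Q) axes yields two independent BPSK streams.
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=(1000, 2))               # two bits per symbol
qpsk = (2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)       # map bits to corners
qpsk /= np.sqrt(2)                                      # unit symbol energy

# Additive complex Gaussian channel noise.
noisy = qpsk + 0.1*(rng.normal(size=1000) + 1j*rng.normal(size=1000))

bpsk_i = np.real(noisy)                                 # first BPSK tributary
bpsk_q = np.imag(noisy)                                 # second BPSK tributary
decoded = np.column_stack([bpsk_i > 0, bpsk_q > 0]).astype(int)
ber = np.mean(decoded != bits)
print(f"bit error rate: {ber:.4f}")
```

At this signal-to-noise ratio the two tributaries decode with essentially no errors, the digital counterpart of "negligible penalty" after demultiplexing.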
How Alterations in the Cdt1 Expression Lead to Gene Amplification in Breast Cancer
2011-07-01
absence of extrinsic DNA damage. We measured the TLS activity by measuring the mutation frequency in a supF gene (in a shuttle vector) subjected to UV...induced DNA damage before its introduction into the cells. Error-prone TLS activity will mutate the supF gene, which is scored by a blue-white colony...Figure 4A). Sequencing of the mutant supF genes revealed a mutation spectrum consistent with error-prone TLS (Supplemental Table 1). Significantly
Fellner, Klemens; Kovtunenko, Victor A
2016-01-01
A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.
Modulated error diffusion CGHs for neural nets
NASA Astrophysics Data System (ADS)
Vermeulen, Pieter J. E.; Casasent, David P.
1990-05-01
New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
Generalized Analysis Tools for Multi-Spacecraft Missions
NASA Astrophysics Data System (ADS)
Chanteur, G. M.
2011-12-01
Analysis tools for multi-spacecraft missions like CLUSTER or MMS have been designed since the end of the 1990s to estimate gradients of fields or to characterize discontinuities crossed by a cluster of spacecraft. Different approaches have been presented and discussed in the book "Analysis Methods for Multi-Spacecraft Data", published as Scientific Report 001 of the International Space Science Institute in Bern, Switzerland (G. Paschmann and P. Daly, Eds., 1998). On one hand, the least-squares approach has the advantage of applying to any number of spacecraft [1] but is not convenient for analytical computation, especially when considering the error analysis. On the other hand, the barycentric approach is powerful, as it provides simple analytical formulas involving the reciprocal vectors of the tetrahedron [2], but appears limited to clusters of four spacecraft. Moreover, the barycentric approach allows one to derive theoretical formulas for the errors affecting the estimators built from the reciprocal vectors [2,3,4]. Following a first generalization of reciprocal vectors proposed by Vogt et al. [4], and despite the present lack of projects with more than four spacecraft, we present generalized reciprocal vectors for a cluster made of any number of spacecraft: each spacecraft is given a positive or null weight. The non-coplanarity of at least four spacecraft with strictly positive weights is a necessary and sufficient condition for this analysis to be enabled. The weights given to spacecraft make it possible to minimize the influence of a spacecraft whose location or data quality is not appropriate, or simply to extract subsets of spacecraft from the cluster. The estimators presented in [2] are generalized within this new frame, except for the error analysis, which is still under investigation. References [1] Harvey, C. C.: Spatial Gradients and the Volumetric Tensor, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 
307-322, ISSI SR-001, 1998. [2] Chanteur, G.: Spatial Interpolation for Four Spacecraft: Theory, in: Analysis Methods for Multi-Spacecraft Data, G. Paschmann and P. Daly (eds.), pp. 371-393, ISSI SR-001, 1998. [3] Chanteur, G.: Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. [4] Vogt, J., Paschmann, G., and Chanteur, G.: Reciprocal Vectors, pp. 33-46, ISSI SR-008, 2008.
Reinforced Concrete Wall Form Design Program
1992-08-01
criteria is an absolute limit. You have the choice of 1/8 or 1/16 of an inch total deflection in a span. Once these limits are set here, then they are...Calls GET-INFO-TEXT - Calls ZERO-PLY - If the response to GET-INFO-TEXT is "Values retrieved by computer", then the following procedures are executed...like to enter their own values. ZERO-PLY - Re-initializes all PLY-VEC values to "?". GET-PLY-CLASS - Retrieves from the user the grade of plyform to be
Face recognition using total margin-based adaptive fuzzy support vector machines.
Liu, Yi-Hung; Chen, Yen-Ting
2007-01-01
This paper presents a new classifier called the total margin-based adaptive fuzzy support vector machine (TAF-SVM), which deals with several problems that may occur when support vector machines (SVMs) are applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem resulting from outliers by fuzzifying the penalty, but also corrects the skew of the optimal separating hyperplane caused by very imbalanced data sets by using a different cost algorithm. In addition, by introducing the total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embodied in the traditional SVM, and the TAF-SVM is formulated for both linear and nonlinear cases. Using two databases, the Chung Yuan Christian University (CYCU) multiview and the facial recognition technology (FERET) face databases, and using the kernel Fisher's discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM achieves smaller error variances than SVM over a number of tests, so that better recognition stability can be obtained.
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. It can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration, and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) that could be used to predict dew point temperature motivated this modeling study. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and that the best performance, in terms of the different evaluation criteria, was obtained by considering all potential input variables.
Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie
2007-11-01
Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 µg/ml, with detection limits of 0.6, 0.5 and 0.7 µg/ml for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the calibration ranges of the vitamins. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal with PLS and LS-SVM were 0.6926, 0.3755, 0.4322 and 0.0421, 0.0318, 0.0457, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
Simulations of linear and Hamming codes using SageMath
NASA Astrophysics Data System (ADS)
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work, where the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, where the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
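The encode/transmit/decode pipeline can be sketched without SageMath. Below is a plain-numpy version using the standard systematic (7,4) Hamming code, with generator-matrix encoding and syndrome decoding of a single flipped bit:

```python
import numpy as np

# Systematic (7,4) Hamming code over GF(2): G = [I4 | P], H = [P^T | I3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

message = np.array([1, 0, 1, 1])       # the k-bit message vector
codeword = message @ G % 2             # encoding: length-n codeword

received = codeword.copy()
received[5] ^= 1                       # noisy channel flips one bit

syndrome = H @ received % 2            # syndrome decoding
if syndrome.any():
    # A nonzero syndrome equals the column of H at the error position.
    err_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
    received[err_pos] ^= 1             # correct the single-bit error

print("decoded message:", received[:4])
```

Because the code is systematic, the first k positions of the corrected codeword are the original message; one bit error is always within the correcting radius of this code.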
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Effects of vibration on inertial wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Buehrle, Ralph D.; Balakrishna, S.; Kilgore, W. Allen
1994-01-01
Results of an experimental study of a wind tunnel model inertial angle-of-attack sensor response to a simulated dynamic environment are presented. The inertial device cannot distinguish between the gravity vector and the centrifugal accelerations associated with wind tunnel model vibration; this results in a model attitude measurement bias error. Significant bias error in model attitude measurement was found for the model system tested. The model attitude bias error was found to be vibration mode and amplitude dependent. A first-order correction model was developed and used for estimating attitude measurement bias error due to dynamic motion. A method for correcting the output of the model attitude inertial sensor in the presence of model dynamics during on-line wind tunnel operation is proposed.
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
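The information-weighted fusion underlying the forward-backward smoother can be sketched directly. Assuming equal, independent forward and backward estimates (an idealization), fusing by inverse covariance halves the covariance, consistent with the "roughly halves" statement above:

```python
import numpy as np

# Fuse independent forward- and backward-filter estimates by inverse
# covariance (information) weighting, as in a two-filter smoother.
def fuse(x_f, P_f, x_b, P_b):
    P = np.linalg.inv(np.linalg.inv(P_f) + np.linalg.inv(P_b))
    x = P @ (np.linalg.inv(P_f) @ x_f + np.linalg.inv(P_b) @ x_b)
    return x, P

# Illustrative two-state attitude estimates with equal covariances.
x_f = np.array([1.02, -0.48]); P_f = 0.04 * np.eye(2)   # forward pass
x_b = np.array([0.98, -0.52]); P_b = 0.04 * np.eye(2)   # backward pass
x_s, P_s = fuse(x_f, P_f, x_b, P_b)

print("smoothed state:", x_s)          # average of the two estimates
print("smoothed variance:", P_s[0, 0]) # half of 0.04
```

With equal covariances the smoothed state is the simple average and each variance drops from 0.04 to 0.02; with unequal covariances the fusion automatically weights the better-determined pass more heavily.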
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
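The quantity being penalized, the discrete divergence of the velocity field, can be illustrated with finite differences. This is a 2D stand-in for the 3D flow fields discussed, with synthetic fields of known divergence:

```python
import numpy as np

# Sample two velocity fields on a uniform grid: a rigid-body rotation
# (divergence-free, like incompressible flow) and a radial "source" field.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64),
                   indexing="ij")
h = x[1, 0] - x[0, 0]                      # grid spacing

def divergence(vx, vy, h):
    # Central finite differences: d(vx)/dx + d(vy)/dy.
    return np.gradient(vx, h, axis=0) + np.gradient(vy, h, axis=1)

rot_div = divergence(-y, x, h)             # rotation: divergence 0
src_div = divergence(x, y, h)              # source: divergence 2 everywhere

print("max |div| of rotation field:", np.abs(rot_div).max())
print("mean div of source field:", src_div.mean())
```

A reconstruction that penalizes this quantity (via the ℓ1-norm, as in the FD method, or divergence-free wavelets) drives the estimated velocity field toward the physically plausible rotation-like behavior.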
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
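The forward model behind Euler-vector estimation is a single cross product: a site's velocity is the Euler vector crossed with the site position. The sketch below uses a hypothetical Euler vector; the rotation rate and site are illustrative, not values from the paper:

```python
import numpy as np

R = 6371e3  # mean Earth radius, m

def site_velocity(euler_vec, lat_deg, lon_deg):
    # euler_vec: plate Euler vector in rad/yr, Earth-centered Cartesian frame.
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R * np.array([np.cos(lat) * np.cos(lon),
                      np.cos(lat) * np.sin(lon),
                      np.sin(lat)])
    return np.cross(euler_vec, r)          # velocity in m/yr, Cartesian

# Hypothetical plate rotating ~0.25 deg/Myr about the z-axis (the pole).
omega = np.array([0.0, 0.0, np.radians(0.25) / 1e6])
v = site_velocity(omega, 0.0, 0.0)         # site on the equator
print(f"site speed: {np.linalg.norm(v) * 1000:.1f} mm/yr")
```

Inverting this linear relation from a handful of regional site velocities is exactly where the ill-conditioning discussed above appears: sites clustered in one region constrain some components of `omega` far better than others.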
A multidomain spectral collocation method for the Stokes problem
NASA Technical Reports Server (NTRS)
Landriani, G. Sacchi; Vandeven, H.
1989-01-01
A multidomain spectral collocation scheme is proposed for the approximation of the two-dimensional Stokes problem. It is shown that the discrete velocity vector field is exactly divergence-free and we prove error estimates both for the velocity and the pressure.
Student difficulties regarding symbolic and graphical representations of vector fields
NASA Astrophysics Data System (ADS)
Bollen, Laurens; van Kampen, Paul; Baily, Charles; Kelly, Mossy; De Cock, Mieke
2017-12-01
The ability to switch between various representations is an invaluable problem-solving skill in physics. In addition, research has shown that using multiple representations can greatly enhance a person's understanding of mathematical and physical concepts. This paper describes a study of student difficulties regarding interpreting, constructing, and switching between representations of vector fields, using both qualitative and quantitative methods. We first identified to what extent students are fluent with the use of field vector plots, field line diagrams, and symbolic expressions of vector fields by conducting individual student interviews and analyzing in-class student activities. Based on those findings, we designed the Vector Field Representations test, a free response assessment tool that has been given to 196 second- and third-year physics, mathematics, and engineering students from four different universities. From the obtained results we gained a comprehensive overview of typical errors that students make when switching between vector field representations. In addition, the study allowed us to determine the relative prevalence of the observed difficulties. Although the results varied greatly between institutions, a general trend revealed that many students struggle with vector addition, fail to recognize the field line density as an indication of the magnitude of the field, confuse characteristics of field lines and equipotential lines, and do not choose the appropriate coordinate system when writing out mathematical expressions of vector fields.
Thrust vector control of upper stage with a gimbaled thruster during orbit transfer
NASA Astrophysics Data System (ADS)
Wang, Zhaohui; Jia, Yinghong; Jin, Lei; Duan, Jiajia
2016-10-01
In launching Multi-Satellite with One-Vehicle, the main thruster provided by the upper stage is mounted on a two-axis gimbal. During orbit transfer, the thrust vector of this gimbaled thruster (GT) should theoretically pass through the mass center of the upper stage and align with the command direction to provide orbit transfer impetus. However, this is hard to implement in an engineering mission. Deviations of the thrust vector from the command direction would result in large velocity errors, and deviations of the thrust vector from the upper stage mass center would produce large disturbance torques. This paper discusses the thrust vector control (TVC) of the upper stage during its orbit transfer. Firstly, the accurate nonlinear coupled kinematic and dynamic equations of the upper stage body, the two-axis gimbal and the GT are derived by taking the upper stage as a multi-body system. Then, a thrust vector control system consisting of a special attitude control of the upper stage and the gimbal rotation of the gimbaled thruster is proposed. The special attitude control is defined by the desired attitude that aligns the thrust vector with the command direction, while the gimbal control makes the thrust vector pass through the upper stage mass center. Finally, the validity of the proposed method is verified through numerical simulations.
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated.
It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2, and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, whether by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw: when the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than the user desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.
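The basic vector quantization step that underlies the scan predictive VQ is a nearest-codeword search; a minimal sketch of that basic step only (the prediction layer of the report's quantizer is not reproduced here):

```python
import numpy as np

# Plain vector quantization: map each input vector to the index of its
# nearest codebook entry under squared Euclidean distance.
def quantize(vectors, codebook):
    # squared distances between every vector and every codeword
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
data = np.array([[0.2, -0.1], [3.8, 4.2], [0.9, 1.1]])
print(quantize(data, codebook))  # → [0 2 1]
```

The encoder transmits only the indices; the decoder looks them up in the shared codebook.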
Zhang, Lijun; Sy, Mary Ellen; Mai, Harry; Yu, Fei; Hamilton, D Rex
2015-01-01
To compare the prediction error after toric intraocular lens (IOL) (Acrysof IQ) implantation using corneal astigmatism measurements obtained with an IOLMaster automated keratometer and a Galilei dual rotating camera Scheimpflug-Placido tomographer. Jules Stein Eye Institute, University of California Los Angeles, Los Angeles, California, USA. Retrospective case series. The predicted residual astigmatism after toric IOL implantation was calculated using preoperative astigmatism values from an automated keratometer and the total corneal power (TCP) determined by ray tracing through the measured anterior and posterior corneal surfaces using dual Scheimpflug-Placido tomography. The prediction error was calculated as the difference between the predicted astigmatism and the manifest astigmatism at least 1 month postoperatively. The calculations included vector analysis. The study evaluated 35 eyes (35 patients). The preoperative corneal posterior astigmatism mean magnitude was 0.33 diopter (D) ± 0.16 (SD) (vector mean 0.23 × 176). Twenty-six eyes (74.3%) had with-the-rule (WTR) posterior astigmatism. The postoperative manifest refractive astigmatism mean magnitude was 0.38 ± 0.18 D (vector mean 0.26 × 171). There was no statistically significant difference in the mean magnitude prediction error between the automated keratometer and TCP techniques. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect against-the-rule (ATR) astigmatism. The TCP technique lacked these biases. The automated keratometer and TCP methods for estimating the magnitude of corneal astigmatism gave similar results. However, the automated keratometer method tended to overcorrect WTR astigmatism and undercorrect ATR astigmatism. Dr. Hamilton has received honoraria for educational lectures from Ziemer Ophthalmic Systems. No other author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. 
Published by Elsevier Inc. All rights reserved.
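The vector analysis used for astigmatism comparisons of this kind rests on the standard double-angle representation: a cylinder of magnitude M at axis a maps to the vector (M·cos 2a, M·sin 2a), and the prediction error is the vector difference. A minimal sketch with illustrative values (not the study's data):

```python
import numpy as np

# Double-angle representation of astigmatism: (magnitude M, axis a in
# degrees) maps to (M*cos(2a), M*sin(2a)), so that e.g. 90 and 270
# degrees describe the same meridian.
def to_vector(mag, axis_deg):
    a = np.deg2rad(2 * axis_deg)
    return np.array([mag * np.cos(a), mag * np.sin(a)])

predicted = to_vector(0.30, 175)   # illustrative, not study values
measured  = to_vector(0.38, 171)

# Prediction error as the vector difference between measured and
# predicted astigmatism.
error = measured - predicted
error_mag = np.linalg.norm(error)
```

Averaging such vectors (rather than raw magnitudes) is what yields "vector mean" figures like 0.26 × 171 in the abstract.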
An integrated use of topography with RSI in gully mapping, Shandong Peninsula, China.
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley, Qixia city, Shandong province, as the study data, we proposed a new method that uses an image fusing topography with remote sensing imagery (RSI) to achieve high-precision interpretation of gully edge lines. The technique first transformed the remote sensing imagery from RGB to HSV color space. Slope threshold values for the gully edge line and the gully thalweg were then obtained through field survey, and the slope data were segmented by thresholding with each value. Based on the fused image, the gully thalweg thresholding vectors were amended, and the gully edge line could then be interpreted from the amended gully thalweg vectors, the fused image, the gully edge line thresholding vectors, and the slope data. A testing region was selected in the study area to assess accuracy. The gully information interpreted from the remote sensing imagery alone and from the fused image was then assessed using the deviation, the kappa coefficient, and the overall accuracy of the error matrix. Compared with interpreting the remote sensing imagery alone, the overall accuracy and kappa coefficient increased by 24.080% and 264.364%, respectively, and the average deviations of the gully head and gully edge line were reduced by 60.448% and 67.406%, respectively. The test results show that both the thematic and the positional accuracy of gullies interpreted by the new method are significantly higher. Finally, the error sources affecting the interpretation accuracy of the two methods were analyzed.
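The accuracy measures used here, overall accuracy and the kappa coefficient, are both derived from the error (confusion) matrix; a minimal sketch with illustrative counts:

```python
import numpy as np

# Overall accuracy and Cohen's kappa from a classification error matrix
# (rows: interpreted class, columns: reference class; counts are
# illustrative, not the study's data).
cm = np.array([[80, 10],
               [ 5, 55]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n                                  # agreement observed
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (overall - expected) / (1 - expected)
```

Kappa discounts the agreement expected by chance, which is why it can improve far more sharply than overall accuracy when a classifier stops confusing the dominant class.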
Ali, Mohamed A; Kobashi, Hidenaga; Kamiya, Kazutaka; Igarashi, Akihito; Miyake, Toshiyuki; Elewa, Mohamed Ehab M; Komatsu, Mari; Shimizu, Kimiya
2014-12-01
To compare postoperative astigmatic correction between femtosecond lenticule extraction (FLEx) and wavefront-guided LASIK in eyes with myopic astigmatism. Fifty-eight eyes of 41 patients undergoing FLEx and 49 eyes of 29 patients undergoing wavefront-guided LASIK to correct myopic astigmatism were examined. Visual acuity, cylindrical refraction, predictability of the astigmatic correction, and astigmatic vector components were compared between groups 6 months after surgery. There was no statistically significant difference in manifest cylindrical refraction (P = .08) or in the percentage of eyes within ±0.50 diopter (D) of the intended refraction (P = .11) between the surgical procedures. The index of success in FLEx was statistically significantly better than that of wavefront-guided LASIK (P = .02), although there was no significant difference between the groups in other indices (eg, surgically induced astigmatism, target-induced astigmatism, astigmatic correction index, angle of error, difference vector, and flattening index). Subgroup analysis showed that FLEx had a better index of success (P = .02) and difference vector (P = .04) than wavefront-guided LASIK in the low cylinder subgroup; the angle of error in FLEx was significantly smaller than that of wavefront-guided LASIK in the moderate cylinder subgroup (P = .03). Both FLEx and wavefront-guided LASIK worked well for the correction of myopic astigmatism by the 6-month follow-up visit. Although FLEx had a better index of success than wavefront-guided LASIK when using vector analysis, it appears equivalent to wavefront-guided LASIK in terms of visual acuity and the correction of astigmatism. Copyright 2014, SLACK Incorporated.
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
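The bit-flip model is easy to reproduce directly on the IEEE 754 bit pattern; a minimal sketch showing the bimodal error behavior the paper describes, for a normalized input:

```python
import struct

# Flip one bit of an IEEE 754 double and observe the size of the error.
def flip_bit(x, bit):
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    (y,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))
    return y

x = 0.75  # a normalized value in [0.5, 1)
small = abs(flip_bit(x, 0) - x)    # lowest mantissa bit: tiny error
large = abs(flip_bit(x, 62) - x)   # high exponent bit: enormous error
print(small < 1.0, large > 1.0)    # → True True
```

Mantissa flips on a normalized operand perturb the value by less than one, while high exponent flips are so large that they are easy to detect, which is exactly the dichotomy that normalization exploits.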
Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto
2006-01-01
We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because orbital dynamics cause the baseline length and baseline orientation to evolve spatially and temporally, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations for height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.
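The leading term of such a flow-down is the textbook sensitivity of height to interferometric phase error; a sketch under stated assumptions (the numbers below are illustrative, not the paper's, and the factor p depends on the acquisition mode):

```python
import math

# Height error from phase error: sigma_h = lam * R * sin(theta) /
# (2 * pi * p * B_perp) * sigma_phi, with p = 1 for a bistatic pair
# (one-way differential path) and p = 2 for monostatic repeat-pass.
# This is the standard textbook form, not the paper's full model.
lam = 0.031                      # X-band wavelength, m
R = 600e3                        # slant range, m (illustrative)
theta = math.radians(35.0)       # look angle
B_perp = 300.0                   # perpendicular baseline, m
p = 1                            # bistatic single-pass
sigma_phi = math.radians(10.0)   # 10 degree phase error

sigma_h = lam * R * math.sin(theta) / (2 * math.pi * p * B_perp) * sigma_phi
```

With these values the height error is of order one meter, showing why meter-level DTED accuracy demands tight control of phase, baseline, and metrology errors.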
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e., the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. Since normally distributed stochastic errors are necessary for inferences that are not misleading, robust tests of normality are both necessary and important. The aim of this contribution is therefore to discuss normality testing of error terms in regression models. We introduce the general RT class of robust normality tests, and present and discuss the trade-off between power and robustness for selected classical and robust normality tests of error terms in regression models.
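A classical moment-based check of residual normality can be sketched in a few lines; this is the standard Jarque-Bera statistic, shown only as an example of the kind of test under discussion (the paper's RT class of robust tests is not reproduced here):

```python
import numpy as np

# Jarque-Bera normality statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
# where S is sample skewness and K sample kurtosis. Under normality
# JB is approximately chi-square with 2 degrees of freedom.
def jarque_bera(e):
    e = e - e.mean()
    n = len(e)
    s2 = (e**2).mean()
    skew = (e**3).mean() / s2**1.5
    kurt = (e**4).mean() / s2**2
    return n / 6 * (skew**2 + (kurt - 3)**2 / 4)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
beta = np.array([2.0, -1.0])
y = X @ beta + rng.normal(size=200)          # normally distributed errors

resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
jb = jarque_bera(resid)   # small values are consistent with normality
```

Applying the same statistic to residuals from a heavy-tailed error distribution would produce a much larger value, which is the behavior a power study of normality tests quantifies.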
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
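The ABFT scheme for matrix multiplication mentioned above is classically done with checksum augmentation; a minimal sketch of that standard technique (illustrative matrices, not the mission's data):

```python
import numpy as np

# Algorithm-based fault tolerance (ABFT) for matrix multiplication:
# append a column-checksum row to A and a row-checksum column to B;
# the product then carries checksums that verify the result.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

Ac = np.vstack([A, A.sum(axis=0)])                  # checksum row
Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # checksum column
C = Ac @ Br

# Consistency: the last row/column must equal the sums of the others.
ok_before = (np.allclose(C[-1, :], C[:-1, :].sum(axis=0)) and
             np.allclose(C[:, -1], C[:, :-1].sum(axis=1)))

C[0, 0] += 1.0  # inject a single-bit-flip-style fault
detected = not np.allclose(C[-1, :], C[:-1, :].sum(axis=0))
print(ok_before, detected)  # → True True
```

A single corrupted entry breaks exactly one row checksum and one column checksum, so the fault can be both detected and located, which is what makes the method attractive for partially hardened memory.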
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating for each voxel of a slice, from which position in the previous slice at a fixed position in the third dimension it has moved to this position. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i. e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with discrete cosine as well as with Karhunen-Lo`eve transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
A new optical head tracing reflected light for nanoprofiler
NASA Astrophysics Data System (ADS)
Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.
2014-09-01
High-accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray free electron laser) sources. Fabricating such high-accuracy optical elements requires measuring aspherical mirrors with correspondingly high accuracy, but no measurement method has yet satisfied these demands simultaneously. We therefore developed a nanoprofiler that can directly measure arbitrary surface figures with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of each measurement point using a laser and a QPD (quadrant photodiode) as the detector; the three-dimensional figure is then calculated from the normal vectors and their coordinates. During measurement, the nanoprofiler numerically controls its five motion axes, based on the sample's design formula, so that the reflected light enters the center of the QPD. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD output, and compared the results with those obtained using a Fizeau interferometer. The profiles were consistent within the range of system error. However, the deflection method cannot neglect the error caused by spatial irregularity in the QPD's sensitivity. To improve on this, we devised a zero method that moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.
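The step from measured normal vectors to a figure can be sketched in one dimension: each normal gives a local slope, and integrating the slopes recovers the profile. A minimal illustration for the 400 mm spherical mirror geometry (the instrument does this in 3-D; this sketch is ours, not the paper's algorithm):

```python
import numpy as np

# 1-D figure reconstruction from slopes derived from surface normals:
# for a sphere of radius R, z(x) = R - sqrt(R^2 - x^2) and the slope
# is dz/dx = x / sqrt(R^2 - x^2). Trapezoidal integration of the
# slopes recovers the profile up to a constant offset.
x = np.linspace(-0.05, 0.05, 101)   # 100 mm aperture, m
R = 0.4                             # 400 mm radius of curvature, m
z_true = R - np.sqrt(R**2 - x**2)

slope = x / np.sqrt(R**2 - x**2)    # slope implied by each normal vector
z_rec = np.concatenate(
    [[0.0], np.cumsum((slope[1:] + slope[:-1]) / 2) * np.diff(x)]
)

# The reconstruction matches the true profile up to the integration offset.
err = np.max(np.abs(z_rec - (z_true - z_true[0])))
```

In the real instrument, errors in the measured normals (e.g. from QPD sensitivity irregularity) propagate through this integration, which is why the zero method's null detection matters.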
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed; here d is the memory parameter of the stationary error sequence. The performance of the Lasso in this setup is also analyzed with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile-based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions.
We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds on the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study investigating the finite sample accuracy of the proposed estimator is also included in this chapter.
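The Lasso objective studied in chapter two can be minimized with a simple proximal gradient (ISTA) iteration; a generic sketch of that solver on synthetic sparse data (illustrative only, and not the corrected quantile estimator of chapter three):

```python
import numpy as np

# ISTA for the Lasso: minimize ||y - X b||^2 / (2n) + lam * ||b||_1
# by gradient steps followed by soft-thresholding.
def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso(X, y, lam, steps=2000):
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n
        b = soft(b - grad / L, lam / L)
    return b

rng = np.random.default_rng(2)
n, p = 100, 20
X = rng.normal(size=(n, p))
b_true = np.zeros(p)
b_true[:3] = [3.0, -2.0, 1.5]           # sparse truth
y = X @ b_true + rng.normal(scale=0.5, size=n)

b_hat = lasso(X, y, lam=0.1)            # recovers the sparsity pattern
```

Under long memory errors the theory changes (the rates pick up the memory parameter d), but the optimization itself is unchanged, which is what makes the Lasso attractive in that setting.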