Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important for energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied to the development of any type of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
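The baseline this abstract generalizes is plain Boltzmann weighting, which can be sketched in a few lines; the paper's partition function-based refinement is not reproduced here, so the snippet below is only a minimal illustration of exponential energy weighting (the kT value and the example energies are assumptions):

```python
import numpy as np

def boltzmann_weights(energies, kT=0.593):
    """Exponential (Boltzmann-style) weights for ab initio target energies:
    low-energy configurations dominate the fit. kT is in the same units as
    the energies (0.593 kcal/mol corresponds to room temperature)."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-(e - e.min()) / kT)   # shift so the global minimum gets weight 1
    return w / w.sum()                # normalize by the partition-function-like sum Z

# Five hypothetical PES points (kcal/mol) probed around a minimum
print(boltzmann_weights([0.0, 0.5, 1.0, 3.0, 8.0]))
```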
Enhancing Community Detection By Affinity-based Edge Weighting Scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Andy; Sanders, Geoffrey; Henson, Van
Community detection refers to an important graph analytics problem of finding a set of densely connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs, which offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and it is therefore ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
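A minimal sketch of the idea follows, with Jaccard neighborhood overlap standing in for the paper's vertex affinity (the actual affinity measure is defined in the cited work, so the stand-in and the choice of greedy modularity as the downstream detector are both assumptions):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def affinity_weighted(graph):
    """Attach an affinity-style weight to every edge of an unweighted graph.
    Jaccard overlap of the endpoint neighborhoods is used here as a proxy
    for vertex affinity (clustering strength between the two endpoints)."""
    g = graph.copy()
    for u, v in g.edges():
        nu, nv = set(g[u]), set(g[v])
        g[u][v]["weight"] = len(nu & nv) / len(nu | nv)
    return g

g = affinity_weighted(nx.karate_club_graph())
communities = greedy_modularity_communities(g, weight="weight")
print([sorted(c) for c in communities])
```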
A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
Income-based equity weights in healthcare planning and policy.
Herlitz, Anders
2017-08-01
Recent research indicates that there is a gap in life expectancy between the rich and the poor. This raises the question: should we on egalitarian grounds use income-based equity weights when we assess benefits of alternative benevolent interventions, so that health benefits to the poor count for more? This article provides three egalitarian arguments for using income-based equity weights under certain circumstances. If income inequality correlates with inequality in health, we have reason to use income-based equity weights on the ground that health inequality is bad. If income inequality correlates with inequality in opportunity for health, we have reason to use such weights on the ground that inequality in opportunity for health is bad. If income inequality correlates with inequality in well-being, income-based equity weights should be used to mitigate inequality in well-being. Three different ways in which to construe income-based equity weights are introduced and discussed. They can be based on relative income inequality, on income rankings and on capped absolute income. The article does not defend any of these types of weighting schemes, but argues that in order to settle which of these types of weighting scheme to choose, more empirical research is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
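The article names three candidate constructions without fixing their functional forms, so the sketch below is only one hedged reading of each (all specific formulas, the cap value, and the normalization are assumptions):

```python
import numpy as np

def equity_weights(incomes, scheme="rank", cap=50_000):
    """Illustrative constructions of income-based equity weights.
    'relative': inversely proportional to income share;
    'rank'    : derived from the income ranking (poorest weighted highest);
    'capped'  : inverse of absolute income, capped at `cap`.
    All functional forms here are assumptions; the article leaves them open."""
    inc = np.asarray(incomes, dtype=float)
    if scheme == "relative":
        w = inc.mean() / inc
    elif scheme == "rank":
        ranks = inc.argsort().argsort() + 1          # rank 1 = poorest
        w = (len(inc) - ranks + 1) / len(inc)
    else:                                            # "capped"
        w = cap / np.minimum(inc, cap)
    return w / w.mean()                              # normalize to mean weight 1

print(equity_weights([15_000, 30_000, 60_000, 120_000], scheme="rank"))
```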
Receiver-Coupling Schemes Based On Optimal-Estimation Theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, the optimal mutual-coupling weights are computed according to Kalman-filter theory, but the schemes differ in the manner in which the receiver outputs are transmitted and combined.
Parametric Study of Decay of Homogeneous Isotropic Turbulence Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Rumsey, Christopher L.; Rubinstein, Robert; Balakumar, Ponnampalam; Zang, Thomas A.
2012-01-01
Numerical simulations of decaying homogeneous isotropic turbulence are performed with both low-order and high-order spatial discretization schemes. The turbulent Mach and Reynolds numbers for the simulations are 0.2 and 250, respectively. For the low-order schemes we use either second-order central or third-order upwind biased differencing. For higher order approximations we apply weighted essentially non-oscillatory (WENO) schemes, both with linear and nonlinear weights. There are two objectives in this preliminary effort to investigate possible schemes for large eddy simulation (LES). One is to explore the capability of a widely used low-order computational fluid dynamics (CFD) code to perform LES computations. The other is to determine the effect of higher order accuracy (fifth, seventh, and ninth order) achieved with high-order upwind biased WENO-based schemes. Turbulence statistics, such as kinetic energy, dissipation, and skewness, along with the energy spectra from simulations of the decaying turbulence problem are used to assess and compare the various numerical schemes. In addition, results from the best performing schemes are compared with those from a spectral scheme. The effects of grid density, ranging from 32 cubed to 192 cubed, on the computations are also examined. The fifth-order WENO-based scheme is found to be too dissipative, especially on the coarser grids. However, with the seventh-order and ninth-order WENO-based schemes we observe a significant improvement in accuracy relative to the lower order LES schemes, as revealed by the computed peak in the energy dissipation and by the energy spectrum.
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weight of each user based on the optimality criterion of signal-to-interference-plus-noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always achieves better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray
2004-01-01
One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from the biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TF-IDF). The best combination of background comparison set, stop list and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TF-IDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TF-IDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
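For reference, the TF-IDF weighting compared here is the standard construction; a minimal sketch, with toy keyword lists standing in for the per-gene MEDLINE extractions:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weights for per-gene keyword lists:
    weight(t, d) = tf(t, d) * log(N / df(t))."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

# Toy keyword lists standing in for per-gene MEDLINE keyword extractions
genes = [["kinase", "apoptosis", "kinase"],
         ["apoptosis", "channel"],
         ["channel", "transport"]]
for vec in tfidf_vectors(genes):
    print({t: round(w, 2) for t, w in vec.items()})
```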
NASA Astrophysics Data System (ADS)
Zhang, Hui; Wang, Deqing; Wu, Wenjun; Hu, Hongping
2012-11-01
In today's business environment, enterprises are increasingly under pressure to process the vast amounts of data produced every day. One approach is to focus on business intelligence (BI) applications and to increase commercial added value through such business analytics activities. The term weighting scheme, which is used to represent documents as vectors in the term space, is a vital task in enterprise information retrieval (IR), text categorisation, text analytics, etc. When determining term weights in a document, the traditional TF-IDF scheme sets the weight of a term considering only its occurrence frequency within the document and in the entire document set, which prevents some meaningful terms from receiving appropriate weight. In this article, we propose a new term weighting scheme called Term Frequency - Function of Document Frequency (TF-FDF) to address this issue. Instead of using a monotonically decreasing function such as inverse document frequency, FDF uses a convex function that dynamically adjusts the weights according to the significance of the words in a document set. This function can be manually tuned based on the distribution of the most meaningful words that semantically represent the document set. Our experiments show that TF-FDF achieves higher Normalised Discounted Cumulative Gain in IR than TF-IDF and its variants, and improves the accuracy of relevance ranking of the IR results.
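The abstract does not give FDF's functional form, only that it departs from monotone IDF and is manually tunable; the sketch below contrasts standard IDF with one assumed bump-shaped alternative, purely for illustration:

```python
import numpy as np

def idf(df, n):
    """Standard IDF: monotonically decreasing in document frequency."""
    return np.log(n / df)

def fdf(df, n, peak=0.15):
    """One assumed FDF shape: weight is highest for terms whose relative
    document frequency sits near `peak` (where the most meaningful words of
    this hypothetical collection live) and falls off on both sides. The
    paper tunes the true function manually; this form is illustrative only."""
    x = df / n
    return np.maximum(1.0 - ((x - peak) / peak) ** 2, 0.0)

n = 1000
for df in (10, 150, 600):   # a rare term, a "meaningful" term, a ubiquitous term
    print(df, round(float(idf(df, n)), 2), round(float(fdf(df, n)), 2))
```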
Simulation study on combination of GRACE monthly gravity field solutions
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2016-04-01
The GRACE monthly gravity fields from different processing centers are combined in the frame of the EGSIEM project. The combination is first done on the solution level to define weights, which are then used for a combination on the normal equation level. The applied weights are based on the deviation of the individual gravity fields from the arithmetic mean of all involved gravity fields. This kind of weighting scheme relies on the assumption that the true gravity field is close to the arithmetic mean of the involved individual gravity fields. However, the arithmetic mean can be affected by systematic errors in individual gravity fields, which consequently results in inappropriate weights. For the future operational scientific combination service of GRACE monthly gravity fields, it is necessary to examine the validity of the weighting scheme also in possible extreme cases. To investigate this, we perform a simulation study on the combination of gravity fields. First, we show how a deviating gravity field can affect the combined solution in terms of signal and noise in the spatial domain. We also show the impact of systematic errors in individual gravity fields on the resulting combined solution. Then, we investigate whether the weighting scheme still works in the presence of outliers. The results of this simulation study will be useful for understanding and validating the weighting scheme applied to the combination of the monthly gravity fields.
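The deviation-based weighting described above can be sketched compactly; the normalization below is an assumption (the operational EGSIEM scheme may differ in detail):

```python
import numpy as np

def deviation_weights(solutions):
    """Per-center weights inversely proportional to each solution's mean
    squared deviation from the arithmetic mean of all solutions.
    `solutions` has shape (n_centers, n_coefficients)."""
    sols = np.asarray(solutions, dtype=float)
    dev = ((sols - sols.mean(axis=0)) ** 2).mean(axis=1)
    w = 1.0 / dev
    return w / w.sum()

rng = np.random.default_rng(0)
sols = rng.normal(size=(4, 100))    # four centers, same (zero) signal plus noise
sols[3] += 0.5                      # one center with a systematic offset
print(np.round(deviation_weights(sols), 3))   # the offset center is down-weighted
```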
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various scheme parameter configurations, which enable significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation capability and anti-jamming capability compared with legacy navigation signals. Meanwhile, the typical scheme characteristics can provide auxiliary information for the design of signal synchronization algorithms. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the LS algorithm is employed to jointly estimate each sub-carrier frequency shift via the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. Moreover, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both the theoretical analysis and the simulation results illustrate that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
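The WLS estimator itself is standard and can be sketched directly; the toy linear model below (shifts linear in sub-carrier frequency plus a bias) is an assumption for illustration, not the paper's exact NSCC signal model:

```python
import numpy as np

def wls(A, y, w):
    """Weighted least squares: x = (A^T W A)^(-1) A^T W y with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Toy joint estimate: each sub-carrier's frequency shift is modeled as
# shift_k = slope * f_k + bias, with weights taken from sub-carrier power
f = np.array([1.00, 1.02, 1.04, 1.06])        # known sub-carrier frequencies (normalized)
A = np.column_stack([f, np.ones_like(f)])
y = np.array([0.110, 0.114, 0.120, 0.119])    # measured shifts, noisy
power = np.array([1.0, 0.8, 0.5, 0.2])        # adaptive weights from sub-carrier power
print(wls(A, y, power))                       # joint (slope, bias) estimate
```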
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Baeder, James D.
2014-01-21
A new class of compact-reconstruction weighted essentially non-oscillatory (CRWENO) schemes was introduced (Ghosh and Baeder in SIAM J Sci Comput 34(3): A1678–A1706, 2012) with high spectral resolution and essentially non-oscillatory behavior across discontinuities. The CRWENO schemes use solution-dependent weights to combine lower-order compact interpolation schemes, yielding a high-order compact scheme for smooth solutions and a non-oscillatory compact scheme near discontinuities. The new schemes result in lower absolute errors and improved resolution of discontinuities and smaller length scales, compared to the weighted essentially non-oscillatory (WENO) scheme of the same order of convergence. Several improvements to the smoothness-dependent weights, proposed in the literature in the context of the WENO schemes, address the drawbacks of the original formulation. This paper explores these improvements in the context of the CRWENO schemes and compares the different formulations of the non-linear weights for flow problems with small length scales as well as discontinuities. Simplified one- and two-dimensional inviscid flow problems are solved to demonstrate the numerical properties of the CRWENO schemes and their different formulations. Canonical turbulent flow problems—the decay of isotropic turbulence and the shock-turbulence interaction—are solved to assess the performance of the schemes for the direct numerical simulation of compressible, turbulent flows.
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper, we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
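The hard-combination stage at the fusion center reduces to a weighted vote; a minimal sketch (the reliability weights and the 0.5 threshold are assumptions on the paper's exact rule):

```python
import numpy as np

def weighted_fusion(decisions, weights, threshold=0.5):
    """Weighted decision-fusion hard combination at the fusion center:
    declare the primary user present when the normalized weighted vote of
    the cluster decisions exceeds the threshold."""
    d = np.asarray(decisions, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w @ d) / w.sum() > threshold

# Four cluster heads report binary decisions with differing reliabilities
print(weighted_fusion([1, 0, 1, 1], weights=[0.9, 0.3, 0.6, 0.7]))   # True
```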
Esdar, Moritz; Hübner, Ursula; Liebe, Jan-David; Hüsers, Jens; Thye, Johannes
2017-01-01
Clinical information logistics is a construct that aims to describe and explain various phenomena of information provision that drive clinical processes. It can be measured by the workflow composite score, an aggregated indicator of the degree of IT support in clinical processes. This study primarily aimed to investigate the as yet unknown empirical patterns constituting this construct. The second goal was to derive a data-driven weighting scheme for the constituents of the workflow composite score and to contrast this scheme with a literature-based, top-down procedure. This approach should finally test the validity and robustness of the workflow composite score. Based on secondary data from 183 German hospitals, a tiered factor analytic approach (confirmatory and subsequent exploratory factor analysis) was pursued. A weighting scheme based on the factor loadings obtained in the analyses was put into practice. We were able to identify five statistically significant factors of clinical information logistics that accounted for 63% of the overall variance. These factors were "flow of data and information", "mobility", "clinical decision support and patient safety", "electronic patient record" and "integration and distribution". The system of weights derived from the factor loadings resulted in values for the workflow composite score that differed only slightly from the score values that had been previously published based on a top-down approach. Our findings give insight into the internal composition of clinical information logistics, both in terms of factors and weights. They also allowed us to propose a coherent model of clinical information logistics from a technical perspective that joins empirical findings with theoretical knowledge. Despite the new scheme of weights applied to the calculation of the workflow composite score, the score behaved robustly, which is yet another hint of its validity and therefore its usefulness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Consensus-based distributed cooperative learning from closed-loop neural control systems.
Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang
2015-02-01
In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme called the distributed cooperative learning (DCL) control scheme, which establishes a communication topology among the adaptive laws of the NN weights so that they share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of the NNs converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary, it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme, where, however, the estimated weights of the NNs converge to small neighborhoods of the optimal values only along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the proposed control schemes.
Gait Characteristic Analysis and Identification Based on the iPhone's Accelerometer and Gyrometer
Sun, Bing; Wang, Yang; Banda, Jacob
2014-01-01
Gait identification is a valuable approach to identify humans at a distance. In this paper, gait characteristics are analyzed based on an iPhone's accelerometer and gyrometer, and a new approach is proposed for gait identification. Specifically, gait datasets are collected by the triaxial accelerometer and gyrometer embedded in an iPhone. Then, the datasets are processed to extract gait characteristic parameters which include gait frequency, symmetry coefficient, dynamic range and similarity coefficient of characteristic curves. Finally, a weighted voting scheme dependent upon the gait characteristic parameters is proposed for gait identification. Four experiments are implemented to validate the proposed scheme. The attitude and acceleration solutions are verified by simulation. Then the gait characteristics are analyzed by comparing two sets of actual data, and the performance of the weighted voting identification scheme is verified by 40 datasets of 10 subjects. PMID:25222034
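The weighted voting step over the four characteristic parameters can be sketched directly; both the scores and the weights below are illustrative assumptions, not the paper's measured values:

```python
import numpy as np

def weighted_vote(scores, weights):
    """Weighted voting over gait characteristic parameters: scores[p][s] is
    the match score of enrolled subject s under parameter p, and weights[p]
    reflects how discriminative parameter p is."""
    totals = np.asarray(weights, float) @ np.asarray(scores, float)
    return int(np.argmax(totals))        # identity with the largest weighted vote

scores = [[0.9, 0.2, 0.4],   # gait frequency
          [0.6, 0.7, 0.3],   # symmetry coefficient
          [0.8, 0.1, 0.5],   # dynamic range
          [0.7, 0.3, 0.2]]   # similarity of characteristic curves
print(weighted_vote(scores, [0.4, 0.2, 0.2, 0.2]))   # -> subject 0
```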
NASA Astrophysics Data System (ADS)
Wang, Yiguang; Chi, Nan
2016-10-01
Light emitting diode (LED)-based visible light communication (VLC) has been considered a promising technology for indoor high-speed wireless access, due to its unique advantages such as low cost, license-free operation and high security. To achieve high-speed VLC transmission, carrierless amplitude and phase (CAP) modulation has been utilized for its low complexity and high spectral efficiency. Moreover, to compensate for linear and nonlinear distortions such as frequency attenuation, sampling time offset and LED nonlinearity, a series of pre- and post-equalization schemes should be employed in high-speed VLC systems. In this paper, we investigate several advanced pre- and post-equalization schemes for high-order CAP modulation based VLC systems. We propose a weighted pre-equalization technique to compensate for the LED frequency attenuation. In post-equalization, a hybrid post-equalizer is proposed, which consists of a linear equalizer, a Volterra series based nonlinear equalizer, and a decision-directed least mean square (DD-LMS) equalizer. A modified cascaded multi-modulus algorithm (M-CMMA) is employed to update the weights of the linear and nonlinear equalizers, while DD-LMS can further improve the performance after the pre-convergence. Based on high-order CAP modulation and these equalization schemes, we have experimentally demonstrated 1.35-Gb/s, 4.5-Gb/s and 8-Gb/s high-speed indoor VLC transmission systems. The results show the benefit and feasibility of the proposed equalization schemes for high-speed VLC systems.
NASA Astrophysics Data System (ADS)
Lin, Guofen; Hong, Hanshu; Xia, Yunhao; Sun, Zhixin
2017-10-01
Attribute-based encryption (ABE) is an interesting cryptographic technique for flexible access control in cloud data sharing. However, several open challenges hinder its practical application. In previous schemes, all attributes are treated as having the same status, which is not the case in most practical scenarios. Meanwhile, the size of the access policy increases dramatically with the growth of its expressiveness complexity. In addition, current research hardly notices that mobile front-end devices, such as smartphones, have poor computational performance, while ABE requires a large amount of bilinear pairing computation. In this paper, we propose a key-policy weighted attribute-based encryption scheme without bilinear pairing computation (KP-WABE-WB) for secure access control in cloud data sharing. A simple weighted mechanism is presented to describe the different importance of each attribute. We introduce a novel construction of ABE that executes no bilinear pairing computation. Compared to previous schemes, our scheme performs better in terms of the expressiveness of the access policy and computational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
NASA Technical Reports Server (NTRS)
1975-01-01
A shuttle EVLSS Thermal Control System (TCS) is defined. Thirteen heat rejection subsystems, thirteen water management subsystems, nine humidity control subsystems, three pressure control schemes and five temperature control schemes are evaluated. Sixteen integrated TCS systems are studied, and an optimum system is selected based on quantitative weighting of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, a bubble expansion tank for water management, and a slurper and rotary separator for humidity control. The design of the selected subsystem's prototype hardware is presented.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
Sensors with centroid-based common sensing scheme and their multiplexing
NASA Astrophysics Data System (ADS)
Berkcan, Ertugrul; Tiemann, Jerome J.; Brooksby, Glen W.
1993-03-01
The ability to multiplex sensors with different measurands but a common sensing scheme is of importance in aircraft and aircraft engine applications; this unification of the sensors into a common interface has major implications for weight, cost, and reliability. A new class of sensors based on a common sensing scheme and their electro-optic (E/O) interface has been developed. The approach detects the location of the centroid of a beam of light; the set of fiber optic sensors with this sensing scheme includes linear and rotary position, temperature, pressure, as well as duct Mach number. The sensing scheme provides immunity to intensity variations of the source or those due to environmental effects on the fiber. A detector spatially multiplexed common electro-optic interface for the sensors has been demonstrated with a position sensor and a temperature sensor.
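The centroid computation behind the intensity immunity is a one-liner; a minimal sketch with an invented detector profile:

```python
import numpy as np

def beam_centroid(intensity, positions=None):
    """Centroid of a light beam on a detector array:
    x_c = sum(x_i * I_i) / sum(I_i). A uniform intensity scaling cancels,
    which is the source of the scheme's immunity to source and fiber
    attenuation variations."""
    I = np.asarray(intensity, dtype=float)
    x = np.arange(I.size) if positions is None else np.asarray(positions, float)
    return float((x * I).sum() / I.sum())

profile = np.array([0.1, 0.4, 1.0, 0.5, 0.1])
print(beam_centroid(profile))          # ~2.05 (detector-element units)
print(beam_centroid(0.3 * profile))    # identical despite 70% attenuation
```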
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2000-01-01
This project concerns the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. On the analysis side, we have studied the efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulation, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One and two dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc. Numerical examples are shown to demonstrate the accuracy and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition of the form ε = ε₁ + ε₂, where ε₁ is associated with a simpler pressure law (a γ-law in this paper) and the nonlinear deviation ε₂ is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the ε₁ γ-law. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
A Technique of Treating Negative Weights in WENO Schemes
NASA Technical Reports Server (NTRS)
Shi, Jing; Hu, Changqing; Shu, Chi-Wang
2000-01-01
High order accurate weighted essentially non-oscillatory (WENO) schemes have recently been developed for finite difference and finite volume methods on both structured and unstructured meshes. A key idea in WENO schemes is a linear combination of lower order fluxes or reconstructions to obtain a high order approximation. The combination coefficients, also called linear weights, are determined by the local geometry of the mesh and the order of accuracy, and may become negative. WENO procedures cannot be applied directly to obtain a stable scheme if negative linear weights are present. Previous strategies for handling this difficulty either regroup stencils or reduce the order of accuracy to get rid of the negative linear weights. In this paper we present a simple and effective technique for handling negative linear weights without the need to get rid of them.
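The splitting at the heart of this technique can be sketched in a few lines; θ = 3 follows the value used in the paper's examples (taken here as an assumption), and the example weights are invented:

```python
import numpy as np

def split_weights(gamma, theta=3.0):
    """Split a set of linear WENO weights containing negative entries into
    two strictly positive, normalized groups (Shi, Hu & Shu):
    gamma_i^+ = (gamma_i + theta*|gamma_i|)/2,  gamma_i^- = gamma_i^+ - gamma_i.
    The flux is then recombined as sigma^+ f^+ - sigma^- f^-, where f^+/f^-
    are WENO fluxes built with the two positive weight sets."""
    g = np.asarray(gamma, dtype=float)
    gp = 0.5 * (g + theta * np.abs(g))
    gm = gp - g
    sp, sm = gp.sum(), gm.sum()
    return gp / sp, gm / sm, sp, sm

gp, gm, sp, sm = split_weights([1.2, -0.4, 0.2])   # weights sum to 1, one negative
print(np.round(gp, 3), np.round(gm, 3))            # two positive, normalized sets
print(np.round(sp * gp - sm * gm, 3))              # recovers the original weights
```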
Minică, Camelia C.; Genovese, Giulio; Hultman, Christina M.; Pool, René; Vink, Jacqueline M.; Neale, Michael C.; Dolan, Conor V.; Neale, Benjamin M.
2017-01-01
Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds in sequence data association analyses. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. We found that the two tests have equal power if the set of weights resembled the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia – the PRRC2A (P=1.020E-06) and the VARS2 (P=2.383E-06) – in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful. However, LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches. PMID:28238293
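A concrete way to generate "a set of theoretically defensible weighting schemes" is to draw variant weights from Beta densities of the minor allele frequency, the convention popularized with SKAT; the particular shape parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import beta

def candidate_weight_sets(maf, shapes=((1, 25), (1, 1), (0.5, 0.5))):
    """A family of defensible variant-weighting schemes: Beta densities of
    the minor allele frequency. (1, 25) up-weights rare variants, (1, 1)
    is flat, (0.5, 0.5) up-weights both tails."""
    maf = np.asarray(maf, dtype=float)
    return {ab: beta.pdf(maf, *ab) for ab in shapes}

maf = np.array([0.001, 0.01, 0.05, 0.20])
for shape, w in candidate_weight_sets(maf).items():
    print(shape, np.round(w, 2))
# The data-driven procedure runs the association test (score test or LRT)
# once per weight set and keeps the scheme yielding the largest statistic.
```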
Optimal Weight Assignment for a Chinese Signature File.
ERIC Educational Resources Information Center
Liang, Tyne; And Others
1996-01-01
Investigates the performance of a character-based Chinese text retrieval scheme in which monogram keys and bigram keys are encoded into document signatures. Tests and verifies the theoretical predictions of the optimal weight assignments and the minimal false hit rate in experiments using a real Chinese corpus for disyllabic queries of different…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlen, Lisa, E-mail: lisa.dahlen@ltu.s; Lagerkvist, Anders
2010-01-15
Householders' response to weight-based billing for the collection of household waste was investigated with the aim of providing decision support for waste management policies. Three questions were addressed: How much and what kind of information on weight-based billing is discernible in generic Swedish waste collection statistics? Why do local authorities implement weight-based billing, and how do they perceive the results? and, Which strengths and weaknesses of weight-based billing have been observed on the local level? The study showed that municipalities with pay-by-weight schemes collected 20% less household waste per capita than other municipalities. Surprisingly, no part of this difference could be explained by higher recycling rates. Nevertheless, the majority of waste management professionals were convinced that recycling had increased as a result of the billing system. A number of contradicting strengths and weaknesses of weight-based billing were revealed.
Estepp, Jeremie H; Melloni, Chiara; Thornburg, Courtney D; Wiczling, Paweł; Rogers, Zora; Rothman, Jennifer A; Green, Nancy S; Liem, Robert; Brandow, Amanda M; Crary, Shelley E; Howard, Thomas H; Morris, Maurine H; Lewandowski, Andrew; Garg, Uttam; Jusko, William J; Neville, Kathleen A
2016-03-01
Hydroxyurea (HU) is a crucial therapy for children with sickle cell anemia, but its off-label use is a barrier to widespread acceptance. We found HU exposure is not significantly altered by liquid vs capsule formulation, and weight-based dosing schemes provide consistent exposure. HU is recommended for all children with sickle cell anemia (SCA; HbSS and HbSβ0 thalassemia) starting as young as 9 months of age; however, a paucity of pediatric data exists regarding the pharmacokinetics (PK) or the exposure-response relationship of HU. This trial aimed to characterize the PK of HU in children and to evaluate and compare the bioavailability of a liquid vs capsule formulation. This multicenter, prospective, open-label trial enrolled 39 children with SCA who provided 682 plasma samples for PK analysis following administration of HU. Noncompartmental and population PK models are described. We report that liquid and capsule formulations of HU are bioequivalent, weight-based dosing schemes provide consistent drug exposure, and age-based dosing schemes are unnecessary. These data support the use of liquid HU in children unable to swallow capsules and in those whose weight precludes the use of fixed capsule formulations. Taken with existing safety and efficacy literature, these findings should encourage the use of HU across the spectrum of age and weight in children with SCA, and they should facilitate the expanded use of HU as recommended in the National Heart, Lung, and Blood Institute guidelines for individuals with SCA. © 2015, The American College of Clinical Pharmacology.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is solved simultaneously with the main one. By correlating the two processes, the statistical errors can be reduced dramatically, especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
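The correlated-process idea is the stochastic-process analogue of a classic control variate; a minimal sketch of that analogy (the auxiliary quantity here is simply the uniform variate itself, an assumption chosen for illustration, not the paper's auxiliary process):

```python
import numpy as np

def control_variate_mean(f, g, g_mean, x):
    """Estimate E[f(X)] with an auxiliary function g whose expectation is
    known exactly: E[f] ~ mean(f(x) - c*(g(x) - g_mean)). The estimator is
    unbiased for any c; the optimal c = cov(f, g)/var(g) shrinks the
    variance in proportion to the squared correlation of f and g."""
    fx, gx = f(x), g(x)
    c = np.cov(fx, gx)[0, 1] / np.var(gx)
    return np.mean(fx - c * (gx - g_mean))

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)
print(np.mean(np.exp(x)))                                 # naive estimate of e - 1
print(control_variate_mean(np.exp, lambda u: u, 0.5, x))  # much lower variance
```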
NASA Astrophysics Data System (ADS)
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used: an iterative scheme based on the dual theory, and the majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying a trial-and-error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as in the increase of the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
Paul C. Van Deusen; Linda S. Heath
2010-01-01
Weighted estimation methods for analysis of mapped plot forest inventory data are discussed. The appropriate weighting scheme can vary depending on the type of analysis and graphical display. Both statistical issues and user expectations need to be considered in these methods. A weighting scheme is proposed that balances statistical considerations and the logical...
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
NASA Astrophysics Data System (ADS)
Skare, Stefan; Hedehus, Maj; Moseley, Michael E.; Li, Tie-Qiang
2000-12-01
Diffusion tensor mapping with MRI can noninvasively track neural connectivity and has great potential for neural scientific research and clinical applications. For each diffusion tensor imaging (DTI) data acquisition scheme, the diffusion tensor is related to the measured apparent diffusion coefficients (ADC) by a transformation matrix. With theoretical analysis we demonstrate that the noise performance of a DTI scheme is dependent on the condition number of the transformation matrix. To test the theoretical framework, we compared the noise performances of different DTI schemes using Monte-Carlo computer simulations and experimental DTI measurements. Both the simulation and the experimental results confirmed that the noise performances of different DTI schemes are significantly correlated with the condition number of the associated transformation matrices. We therefore applied numerical algorithms to optimize a DTI scheme by minimizing the condition number, hence improving the robustness to experimental noise. In the determination of anisotropic diffusion tensors with different orientations, MRI data acquisitions using a single optimum b value based on the mean diffusivity can produce ADC maps with regional differences in noise level. This will give rise to rotational variances of eigenvalues and anisotropy when diffusion tensor mapping is performed using a DTI scheme with a limited number of diffusion-weighting gradient directions. To reduce this type of artifact, a DTI scheme with not only a small condition number but also a large number of evenly distributed diffusion-weighting gradients in 3D is preferable.
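The condition number criterion is easy to evaluate for any gradient scheme; a minimal sketch building the ADC transformation matrix for six encoding directions (the directions shown are a common dual-gradient set, used here only as an example):

```python
import numpy as np

def dti_design_matrix(directions):
    """Transformation matrix relating the six unique diffusion tensor
    elements to the ADC measured along each unit gradient direction g:
    ADC = gx^2 Dxx + gy^2 Dyy + gz^2 Dzz + 2 gx gy Dxy + 2 gx gz Dxz + 2 gy gz Dyz."""
    g = np.asarray(directions, dtype=float)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    x, y, z = g.T
    return np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z])

# A common six-direction (dual-gradient) scheme, used here only as an example
dirs = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]
print(np.linalg.cond(dti_design_matrix(dirs)))   # lower = more robust to noise
```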
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from spaceborne active and passive microwave instruments based on weighted averaging, taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
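The entry is truncated, but the standard way to obtain such error variance estimates for a weighted average of three independent products is triple collocation; a minimal sketch under the usual assumptions (independent, zero-mean errors and a shared scale across products):

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Triple collocation: with independent, zero-mean errors and a common
    scale, err_var(x) = <(x - y)(x - z)> after mean removal (and cyclically
    for y and z)."""
    x, y, z = (np.asarray(a, float) - np.mean(a) for a in (x, y, z))
    return (np.mean((x - y) * (x - z)),
            np.mean((y - x) * (y - z)),
            np.mean((z - x) * (z - y)))

def merge(x, y, z):
    """Weighted average with weights inverse to the TC error variances."""
    w = 1.0 / np.maximum(tc_error_variances(x, y, z), 1e-12)
    return (w[0] * x + w[1] * y + w[2] * z) / w.sum()

rng = np.random.default_rng(1)
truth = rng.normal(size=5000)
x, y, z = (truth + s * rng.normal(size=truth.size) for s in (0.1, 0.3, 0.5))
print(np.round(tc_error_variances(x, y, z), 3))           # ~ (0.01, 0.09, 0.25)
print(np.std(merge(x, y, z) - truth), np.std(x - truth))  # merged <= best product
```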
ERIC Educational Resources Information Center
Greenberg, Kathleen Puglisi
2012-01-01
The scoring instrument described in this article is based on a deconstruction of the seven sections of an American Psychological Association (APA)-style empirical research report into a set of learning outcomes divided into content-, expression-, and format-related categories. A double-weighting scheme used to score the report yields a final grade…
Combination of GRACE monthly gravity field solutions from different processing strategies
NASA Astrophysics Data System (ADS)
Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian
2018-02-01
We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content, but diverse noise characteristics. We test five different weighting schemes: equal weights, non-iterative coefficient-wise, order-wise, or field-wise weights, and iterative field-wise weights applying variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they in general show lower noise. In case the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy, compared to the arithmetic mean: The non-seasonal variability over the oceans is reduced by up to 7.7% and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes in general show better performance, compared to the order- or coefficient-wise weighting schemes. The combination of the full set of considered time series results in lower noise levels, compared to the combination of a subset consisting of the official GRACE Science Data System gravity fields only: The RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the frame of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined, based on the weights derived by VCE as described in this article.
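A simplified field-wise VCE iteration can be sketched as follows; this deliberately strips the full normal-equation machinery down to a per-field variance estimate about the weighted mean, so it is an idealization of the scheme described above:

```python
import numpy as np

def vce_weights(solutions, iters=20):
    """Simplified field-wise variance component estimation: each center's
    weight is the inverse of its variance about the current weighted mean;
    mean and variances are re-estimated iteratively until stable."""
    sols = np.asarray(solutions, dtype=float)      # (n_centers, n_coefficients)
    w = np.ones(len(sols))
    for _ in range(iters):
        mean = (w[:, None] * sols).sum(axis=0) / w.sum()
        w = 1.0 / ((sols - mean) ** 2).mean(axis=1)
    return w / w.sum(), mean

sols = np.random.default_rng(2).normal(size=(5, 200))   # five centers, pure noise
sols[0] *= 3.0                                          # one noisier center
weights, combined = vce_weights(sols)
print(np.round(weights, 3))                             # noisy center down-weighted
```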
Literature-based concept profiles for gene annotation: the issue of weighting.
Jelier, Rob; Schuemie, Martijn J; Roes, Peter-Jan; van Mulligen, Erik M; Kors, Jan A
2008-05-01
Text-mining has been used to link biomedical concepts, such as genes or biological processes, to each other for annotation purposes or the generation of new hypotheses. To relate two concepts to each other several authors have used the vector space model, as vectors can be compared efficiently and transparently. Using this model, a concept is characterized by a list of associated concepts, together with weights that indicate the strength of the association. The associated concepts in the vectors and their weights are derived from a set of documents linked to the concept of interest. An important issue with this approach is the determination of the weights of the associated concepts. Various schemes have been proposed to determine these weights, but no comparative studies of the different approaches are available. Here we compare several weighting approaches in a large scale classification experiment. Three different techniques were evaluated: (1) weighting based on averaging, an empirical approach; (2) the log likelihood ratio, a test-based measure; (3) the uncertainty coefficient, an information-theory based measure. The weighting schemes were applied in a system that annotates genes with Gene Ontology codes. As the gold standard for our study we used the annotations provided by the Gene Ontology Annotation project. Classification performance was evaluated by means of the receiver operating characteristics (ROC) curve using the area under the curve (AUC) as the measure of performance. All methods performed well with median AUC scores greater than 0.84, and scored considerably higher than a binary approach without any weighting. Especially for the more specific Gene Ontology codes excellent performance was observed. The differences between the methods were small when considering the whole experiment. However, the number of documents that were linked to a concept proved to be an important variable. When larger amounts of texts were available for the generation of the concepts' vectors, the performance of the methods diverged considerably, with the uncertainty coefficient then outperforming the two other methods.
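Of the three weighting measures compared, the uncertainty coefficient is the most compact to state: U(C|K) = I(C;K) / H(C). A minimal sketch computing it from a 2x2 concept-keyword contingency table (the example counts are invented):

```python
import numpy as np

def uncertainty_coefficient(table):
    """Uncertainty coefficient U(C|K) = I(C;K) / H(C) from a 2x2 contingency
    table of concept occurrence (rows) vs. keyword occurrence (columns): the
    fraction of the concept's entropy explained by the keyword."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    pc, pk = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    mi = (p[nz] * np.log(p[nz] / np.outer(pc, pk)[nz])).sum()
    h = -(pc[pc > 0] * np.log(pc[pc > 0])).sum()
    return mi / h

# Invented counts: documents with/without the concept x with/without the keyword
print(round(uncertainty_coefficient([[40, 10], [5, 945]]), 3))
```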
Statistical process control based chart for information systems security
NASA Astrophysics Data System (ADS)
Khan, Mansoor S.; Cui, Lirong
2015-07-01
Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions into information systems. We apply the concept of statistical process control (SPC) to intrusion detection in computer networks and information systems. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We compare the proposed scheme with existing EWMA schemes and the p chart, and finally provide some recommendations for future work.
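A minimal sketch of a textbook EWMA monitoring statistic with time-varying control limits, of the kind such a scheme builds on; the in-control mean and standard deviation are estimated from the data purely for illustration, and the article's specific one-parameter variant is not reproduced.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistic with standard time-varying control limits.

    x: 1-D series of observations; lam: the smoothing parameter (the
    single design parameter of an EWMA scheme); L: limit width in sigmas."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)  # in practice: in-control estimates
    z = np.empty_like(x)
    prev = mu                            # z_0 is the in-control mean
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, np.abs(z - mu) > half      # statistic and alarm flags
```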
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collision detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results of flows over a cylinder at high Reynolds number and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Chuanghong
2018-02-01
As a sustainable form of ecological construction, green building is increasingly advocated and receives widespread attention in society. In the survey and design phase of a construction project, evaluating and selecting the green building design scheme against a scientific and reasonable evaluation index system can effectively improve the ecological benefits of green building projects. Based on the new Green Building Evaluation Standard, which came into effect on January 1, 2015, an evaluation index system for green building design schemes is constructed, taking into account the evaluation contents related to the green building design scheme. Experts experienced in construction scheme optimization scored the indices, and the weight of each evaluation index was determined through the AHP method. The correlation degree between each candidate scheme and the ideal scheme was calculated using a multilevel grey relational analysis model, and the optimal scheme was then determined. The feasibility and practicability of the evaluation method are verified with examples.
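A compact sketch of the weighted multilevel grey relational ranking step, assuming the index matrix is already normalized and the AHP weights are given; the distinguishing coefficient rho = 0.5 is the conventional default, not a value from the paper.

```python
import numpy as np

def grey_relational_degree(schemes, ideal, weights, rho=0.5):
    """Weighted grey relational degree of each candidate scheme to the ideal.

    schemes: (n_schemes, n_indices) normalized index matrix;
    ideal: (n_indices,) ideal reference sequence; weights: AHP index
    weights; rho: the conventional distinguishing coefficient."""
    diff = np.abs(schemes - ideal)
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)  # relational coefficients
    return coeff @ weights    # weighted relational degree per scheme

# Illustrative use: three candidate designs scored on four indices.
schemes = np.array([[0.8, 0.6, 0.9, 0.7],
                    [0.9, 0.7, 0.6, 0.8],
                    [0.7, 0.9, 0.8, 0.6]])
ideal = schemes.max(axis=0)               # best value of each index
weights = np.array([0.4, 0.3, 0.2, 0.1])  # e.g., from AHP pairwise comparison
print(grey_relational_degree(schemes, ideal, weights))  # pick the argmax
```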
Liu, Ying; Navathe, Shamkant B; Pivoshenko, Alex; Dasigi, Venu G; Dingledine, Ray; Ciliax, Brian J
2006-01-01
One of the key challenges of microarray studies is to derive biological insights from the gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the functional links among genes. However, the quality of the keyword lists significantly affects the clustering results. We compared two keyword weighting schemes: normalised z-score and term frequency-inverse document frequency (TFIDF). Two gene sets were tested to evaluate the effectiveness of the weighting schemes for keyword extraction for gene clustering. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords outperformed those produced from normalised z-score weighted keywords. The optimised algorithms should be useful for partitioning genes from microarray lists into functionally discrete clusters.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between the function properties and the topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
IRRA at TREC 2009: Index Term Weighting based on Divergence From Independence Model
2009-11-01
The TF-IDF weighting scheme (Salton and Buckley, 1988) combines TF, the term frequency, with IDF, the inverse document frequency. IDF is a collection-dependent factor, which identifies the terms that concentrate in a few documents of the collection.
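For reference, a minimal sketch of the baseline TF-IDF weighting that the report contrasts with its divergence-from-independence model; the logarithmic IDF is one common variant and not necessarily the exact formula used.

```python
import numpy as np

def tfidf(tf, df, n_docs):
    """Classic TF-IDF term weights (Salton and Buckley, 1988 family).

    tf: (n_docs, n_terms) raw term frequencies; df: (n_terms,) document
    frequencies; n_docs: collection size."""
    idf = np.log(n_docs / df)   # collection-dependent factor
    return tf * idf             # one weight per (document, term) pair
```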
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
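A one-dimensional sketch of the arc/weight equidistribution idea: no iteration or differential equations are needed, only a cumulative integral of a weight function and an inverse interpolation. The gradient-based weight function and the omission of the paper's smoothing step are simplifying assumptions.

```python
import numpy as np

def adapt_grid(x, f, alpha=1.0):
    """Redistribute a 1-D grid by weight (arc) equidistribution.

    x: current monotone grid; f: flow variable sampled on x;
    alpha: scales clustering strength toward high-gradient regions."""
    dfdx = np.gradient(f, x)
    w = 1.0 + alpha * np.abs(dfdx)          # weight (monitor) function
    # Cumulative integral of w by the trapezoidal rule, normalized to [0, 1].
    s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s /= s[-1]
    targets = np.linspace(0.0, 1.0, len(x)) # equal increments of weight
    return np.interp(targets, s, x)         # new grid equidistributes w
```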
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone with a few exceptions. The present study compared the correction efficiency of FSL tools, “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips and non–diffusion-weighted images with the same parameter, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior as measured by increased FA values to the 30 directions encoding scheme, despite comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions encoding scheme. PMID:25405472
New Term Weighting Formulas for the Vector Space Method in Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chisholm, E.; Kolda, T.G.
The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
Thurman, E.M.; Malcolm, R.L.
1979-01-01
A scheme is presented which uses adsorption chromatography with pH-gradient elution and size-exclusion chromatography to concentrate and separate hydrophobic organic acids from water. A review of the chromatographic processes involved in the flow scheme is also presented. Organic analytes which appear in each aqueous fraction are quantified by dissolved organic carbon analysis. Hydrophobic organic acids in a water sample are concentrated on a porous acrylic resin. These acids usually constitute approximately 30-50 percent of the dissolved organic carbon in an unpolluted water sample and are eluted with an aqueous eluent (dilute base). The concentrate is then passed through a column of polyacryloylmorpholine gel, which separates the acids into high- and low-molecular-weight fractions. The high- and low-molecular-weight eluates are reconcentrated by adsorption chromatography, then eluted with a pH gradient into strong acids (predominantly carboxylic acids) and weak acids (predominantly phenolic compounds). For standard compounds and samples of unpolluted waters, the scheme fractionates humic substances into strong and weak acid fractions that are separated from the low-molecular-weight acids. A new method utilizing conductivity is also presented to estimate the acidic components in the methanol fraction.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
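A sketch of the combined-innovation Kalman update at the heart of probabilistic data association, assuming the association probabilities (the image-likelihood weights) are already computed; the covariance update is simplified relative to the full PDA form, as noted in the comments.

```python
import numpy as np

def pda_update(x_pred, P_pred, H, R, measurements, beta):
    """Kalman update with a combined innovation (probabilistic data assoc.).

    x_pred: (dim_x,) predicted state; P_pred: its covariance;
    measurements: (m, dim_z) candidate detections near the track;
    beta: (m,) association probabilities summing to at most 1 (any
    remainder is the 'no valid measurement' probability)."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    nus = measurements - (H @ x_pred)        # per-measurement innovations
    nu = (beta[:, None] * nus).sum(axis=0)   # combined innovation
    x_upd = x_pred + K @ nu
    # Simplified covariance update; the full PDA form adds a
    # spread-of-innovations term that is omitted in this sketch.
    P_upd = P_pred - beta.sum() * K @ S @ K.T
    return x_upd, P_upd
```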
Zong, Guo; Wang, Ahong; Wang, Lu; Liang, Guohua; Gu, Minghong; Sang, Tao; Han, Bin
2012-07-20
1000-Grain weight and spikelet number per panicle are two important components of rice grain yield. In our previous study, eight quantitative trait loci (QTLs) conferring spikelet number per panicle and 1000-grain weight were mapped through sequencing-based genotyping of 150 rice recombinant inbred lines (RILs). In this study, we validated the effects of four QTLs from Nipponbare using chromosome segment substitution lines (CSSLs), and pyramided eight grain-yield-related QTLs. The new lines containing the eight QTLs with positive effects showed increased panicle and spikelet size compared with the parent variety 93-11. We further propose a novel pyramid breeding scheme based on marker-assisted and phenotype selection (MAPS). This scheme allows pyramiding of as many as 24 QTLs in a single hybridization without massive cross work. This study provides insights into the molecular basis of rice grain yield and a direct resource for high-yielding rice breeding. Copyright © 2012. Published by Elsevier Ltd.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
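The combination of a recursive B-spline prefilter with sub-voxel evaluation can be illustrated with scipy, which applies such a prefilter internally for spline orders above 1; the paper's least-squares-optimized filter and Gaussian spectral weighting are not reproduced here.

```python
import numpy as np
from scipy import ndimage

# Sub-voxel evaluation of a volume with cubic B-spline interpolation.
# map_coordinates runs the recursive B-spline prefilter internally; this
# prefilter is the ingredient the paper optimizes.
volume = np.random.rand(32, 32, 32)
# Sample the whole grid shifted by a fractional offset of 0.3 voxel
# along each axis, as happens during sub-voxel matching.
grid = np.mgrid[0:32, 0:32, 0:32].astype(float) + 0.3
shifted = ndimage.map_coordinates(volume, grid.reshape(3, -1),
                                  order=3, mode="nearest").reshape(32, 32, 32)
```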
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-12-07
Pathological brain detection has made notable strides in the past years; as a consequence many pathological brain detection systems (PBDSs) have been proposed. However, the accuracy of these systems still needs significant improvement in order to meet the necessity of real-world diagnostic situations. In this paper, an efficient PBDS based on MR images is proposed that markedly improves recent results. The proposed system makes use of contrast-limited adaptive histogram equalization (CLAHE) to enhance the quality of the input MR images. Thereafter, a two-dimensional PCA (2DPCA) strategy is employed to extract the features and, subsequently, a PCA+LDA approach is used to generate a compact and discriminative feature set. Finally, a new learning algorithm called MDE-ELM is suggested that combines modified differential evolution (MDE) and the extreme learning machine (ELM) for segregation of MR images as pathological or healthy. The MDE is utilized to optimize the input weights and hidden biases of single-hidden-layer feed-forward neural networks (SLFN), whereas an analytical method is used for determining the output weights. The proposed algorithm performs optimization based on both the root mean squared error (RMSE) and the norm of the output weights of SLFNs. The suggested scheme is benchmarked on three standard datasets and the results are compared against other competent schemes. The experimental outcomes show that the proposed scheme offers superior results compared to its counterparts. Further, it has been noticed that the proposed MDE-ELM classifier obtains better accuracy with a compact network architecture than conventional algorithms.
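A sketch of the plain ELM core that MDE-ELM builds on: random input weights and hidden biases, with output weights obtained analytically via the pseudoinverse; the differential-evolution tuning of the random parameters is omitted.

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    """Basic extreme learning machine (single-hidden-layer SLFN).

    X: (n_samples, n_features) inputs; Y: (n_samples, n_outputs) targets.
    Input weights and hidden biases are random; only the output weights
    are learned, in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ Y                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```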
Local classifier weighting by quadratic programming.
Cevikalp, Hakan; Polikar, Robi
2008-10-01
It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures -- many of which are heuristic in nature -- have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
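A sketch of the weighting step as a small convex quadratic program, here solved with a general-purpose SLSQP solver; the objective (squared error of the weighted combination over the query's nearest neighbors) is an assumed stand-in for the paper's exact local-accuracy objective.

```python
import numpy as np
from scipy.optimize import minimize

def local_classifier_weights(local_outputs, local_labels):
    """Nonnegative classifier weights from local accuracy, via a convex QP.

    local_outputs: (k_neighbors, n_classifiers) classifier scores on the
    query's nearest neighbors; local_labels: (k_neighbors,) true labels
    (e.g., +/-1). Minimizes the squared error of the weighted combination
    subject to w >= 0 and sum(w) = 1."""
    n = local_outputs.shape[1]
    obj = lambda w: np.sum((local_outputs @ w - local_labels) ** 2)
    res = minimize(obj, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    return res.x  # locally most accurate classifiers get larger weights
```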
Information Security Scheme Based on Computational Temporal Ghost Imaging.
Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing
2017-08-09
An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key to multiply with the 1D data stream. The cipher text is obtained by summing the weighted encryption key patterns. The decryption process is realized by correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is greatly guaranteed. The feasibility of this method and its robustness against both occlusion and additive noise attacks are discussed with simulations.
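A minimal simulation of the scheme as described: encryption sums the random binary key patterns weighted by the data bits, and decryption correlates the ciphertext with each mean-removed key pattern; pattern size and thresholding are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, H, W = 64, 128, 128
data = rng.integers(0, 2, N).astype(float)         # 1-D plaintext bit stream
key = rng.integers(0, 2, (N, H, W)).astype(float)  # 2-D random binary key

# Encryption: ciphertext = sum of key patterns weighted by the data bits.
cipher = np.tensordot(data, key, axes=1)           # one (H, W) image

# Decryption: correlate the ciphertext with each mean-removed key pattern.
fluct = key - key.mean(axis=(1, 2), keepdims=True)
recovered = np.tensordot(fluct, cipher, axes=([1, 2], [0, 1])) / (H * W)
bits = (recovered > recovered.mean()).astype(int)  # threshold back to bits
assert np.array_equal(bits, data.astype(int))
```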
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Z.; Ching, W.Y.
Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy-dependent and k-dependent GW correction scheme to the orthogonalized linear combination of atomic orbital-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently risen as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
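A sketch of weighted bi-prediction and of a brute-force weight selection over a small candidate set; the candidate pairs and the SSE stand-in for the rate-distortion cost are assumptions, and HEVC's integer arithmetic is replaced by floating point for clarity.

```python
import numpy as np

def bi_predict(block0, block1, w0=0.5, w1=0.5):
    """Weighted bi-prediction of a block from two predictor blocks.

    With w0 = w1 = 0.5 this reduces to the plain averaging used by
    self-similarity bi-prediction."""
    return w0 * block0 + w1 * block1

def best_weights(target, block0, block1,
                 candidates=((0.5, 0.5), (0.25, 0.75), (0.75, 0.25))):
    """Pick the candidate weight pair minimizing prediction SSE
    (a stand-in for the full rate-distortion cost)."""
    errs = [np.sum((target - bi_predict(block0, block1, w0, w1)) ** 2)
            for w0, w1 in candidates]
    return candidates[int(np.argmin(errs))]
```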
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational costs. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction and six cell-centered schemes: two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted LSQ gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options: the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
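A sketch of (weighted) least-squares gradient reconstruction from a cloud of neighbors, the building block the compared schemes differ on; inverse-distance weighting is the usual choice behind the "weighted LSQ" label and is assumed here.

```python
import numpy as np

def lsq_gradient(xc, uc, xn, un, weighted=True):
    """Least-squares gradient at a cell/node from neighbor values.

    xc: (d,) location of the cell/node, uc: value there;
    xn: (m, d) neighbor locations, un: (m,) neighbor values."""
    dx = xn - xc                          # displacement to each neighbor
    du = un - uc                          # value differences
    if weighted:
        w = 1.0 / np.linalg.norm(dx, axis=1)   # inverse-distance weights
        dx, du = dx * w[:, None], du * w
    grad, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return grad                           # best-fit gradient vector
```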
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
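A sketch of the inverse-variance weighted straight-line fit for the slope method, using the closed-form weighted least-squares estimates; S = ln(r^2 P(r)) is the standard log range-corrected signal.

```python
import numpy as np

def slope_method_fit(r, S, var):
    """Inverse-variance weighted line fit for the lidar slope method.

    r: ranges; S: log range-corrected signal, S = ln(r^2 P(r));
    var: noise variance of each S sample. The slope estimates
    -2*alpha (extinction); the intercept is the zero-range value."""
    w = 1.0 / var
    W = np.sum(w)
    rbar, Sbar = np.sum(w * r) / W, np.sum(w * S) / W
    slope = np.sum(w * (r - rbar) * (S - Sbar)) / np.sum(w * (r - rbar) ** 2)
    intercept = Sbar - slope * rbar
    return -slope / 2.0, intercept   # extinction coefficient, intercept
```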
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Levin, Deborah A.
2011-03-01
In the present work, we have simulated the homogeneous condensation of carbon dioxide and ethanol using the Bhatnagar-Gross-Krook based approach. In an earlier work of Gallagher-Rogers et al. [J. Thermophys. Heat Transfer 22, 695 (2008)], it was found that it was not possible to simulate condensation experiments of Wegener et al. [Phys. Fluids 15, 1869 (1972)] using the direct simulation Monte Carlo method. Therefore, in this work, we have used the statistical Bhatnagar-Gross-Krook approach, which was found to be numerically more efficient than direct simulation Monte Carlo method in our previous studies [Kumar et al., AIAA J. 48, 1531 (2010)], to model homogeneous condensation of two small polyatomic systems, carbon dioxide and ethanol. A new weighting scheme is developed in the Bhatnagar-Gross-Krook framework to reduce the computational load associated with the study of homogeneous condensation flows. The solutions obtained by the use of the new scheme are compared with those obtained by the baseline Bhatnagar-Gross-Krook condensation model (without the species weighting scheme) for the condensing flow of carbon dioxide in the stagnation pressure range of 1-5 bars. Use of the new weighting scheme in the present work makes the simulation of homogeneous condensation of ethanol possible. We obtain good agreement between our simulated predictions for homogeneous condensation of ethanol and experiments in terms of the point of condensation onset and the distribution of mass fraction of ethanol condensed along the nozzle centerline.
High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
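For context, a sketch of the classic Jiang-Shu WENO5 reconstruction whose weight functions the ESWENO construction modifies; epsilon and the linear weights are the standard textbook values, not the new ESWENO weight functions.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Classic fifth-order WENO (Jiang-Shu) value at the cell face i+1/2
    from five cell averages v = (v[i-2], ..., v[i+2])."""
    # Smoothness indicators of the three candidate stencils.
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 1/4*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 1/4*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 1/4*(3*v[2]-4*v[3]+v[4])**2
    d = np.array([0.1, 0.6, 0.3])                   # linear (optimal) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2   # nonlinear weights
    w = alpha / alpha.sum()
    q = np.array([( 2*v[0] - 7*v[1] + 11*v[2]) / 6,  # candidate values
                  (-1*v[1] + 5*v[2] +  2*v[3]) / 6,
                  ( 2*v[2] + 5*v[3] -  1*v[4]) / 6])
    return w @ q
```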
Integrating Iris and Signature Traits for Personal Authentication Using User-Specific Weighting
Viriri, Serestina; Tapamo, Jules R.
2012-01-01
Biometric systems based on uni-modal traits are characterized by noisy sensor data, restricted degrees of freedom, non-universality and are susceptible to spoof attacks. Multi-modal biometric systems seek to alleviate some of these drawbacks by providing multiple evidences of the same identity. In this paper, a user-score-based weighting technique for integrating the iris and signature traits is presented. This user-specific weighting technique has proved to be an efficient and effective fusion scheme which increases the authentication accuracy rate of multi-modal biometric systems. The weights are used to indicate the importance of matching scores output by each biometrics trait. The experimental results show that our biometric system based on the integration of iris and signature traits achieve a false rejection rate (FRR) of 0.08% and a false acceptance rate (FAR) of 0.01%. PMID:22666032
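A minimal sketch of user-specific weighted-sum score fusion; the score normalization to [0, 1], the example weights, and the decision threshold are all illustrative assumptions.

```python
import numpy as np

def fuse_scores(iris_score, signature_score, w_iris, w_signature):
    """User-specific weighted-sum fusion of two matching scores.

    Scores are assumed normalized to [0, 1]; the per-user weights sum
    to 1 and express how reliable each trait is for that enrolled user."""
    return w_iris * iris_score + w_signature * signature_score

# Illustrative decision for a user whose iris channel is more reliable.
fused = fuse_scores(0.82, 0.55, w_iris=0.7, w_signature=0.3)
accept = fused >= 0.6    # threshold chosen on a validation set
```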
NASA Astrophysics Data System (ADS)
Jin, G.
2012-12-01
Multiphase flow modeling is an important numerical tool for better understanding transport processes in fields including, but not limited to, petroleum reservoir engineering, remediation of groundwater contamination, and risk evaluation of greenhouse gases such as CO2 injected into deep saline reservoirs. However, accurate numerical modeling of multiphase flow poses many challenges that arise from the inherent tight coupling and strongly nonlinear nature of the governing equations and from highly heterogeneous media. Counter-current flow, caused by adverse relative mobility contrast and by gravitational and capillary forces, introduces additional numerical instability. Recently, multipoint flux approximation (MPFA) has become a subject of extensive research and has demonstrated great success in reducing grid orientation effects compared with the conventional single-point upstream (SPU) weighting scheme, especially in higher dimensions. However, the presently available MPFA schemes are mathematically targeted at certain types of grids in two dimensions; a more general form of MPFA scheme is needed for both 2-D and 3-D problems. In this work a new upstream weighting scheme based on multipoint directional incoming fluxes is proposed, which incorporates the full permeability tensor to account for the heterogeneity of the porous media. First, the multiphase governing equations are decoupled into an elliptic pressure equation and a hyperbolic or parabolic saturation equation, depending on whether gravitational and capillary pressures are present. Next, a dual secondary grid (called the finite volume grid) is formulated from a primary grid (called the finite element grid) to create interaction regions for each grid cell over the entire simulation domain. Such a discretization must ensure the conservation of mass and maintain the continuity of the Darcy velocity across the boundaries between neighboring interaction regions. The pressure field is then implicitly calculated from the pressure equation, which in turn yields the velocity field for directional flux calculation at each grid node. Directional flux at the center of each interaction surface is also calculated by interpolation from the element nodal fluxes using shape functions. The MPFA scheme is performed by a specific linear combination of all incoming fluxes into the upstream cell, represented by either nodal fluxes or interpolated surface boundary fluxes, to produce an upwind flux-weighted relative mobility at the center of the interaction region boundary. This upwind-weighted relative mobility is then used for calculating the saturations of each fluid phase explicitly. The proposed upwind weighting scheme has been implemented into a mixed finite element-finite volume (FE-FV) method, which allows for handling complex reservoir geometry with second-order accuracy in approximating primary variables. The numerical solver has been tested on several benchmark problems. The application of the proposed scheme to migration path analysis of CO2 injected into deep saline reservoirs in 3-D has demonstrated its ability and robustness in handling multiphase flow with adverse mobility contrast in highly heterogeneous porous media.
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
NASA Technical Reports Server (NTRS)
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
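A sketch of the retrospective weight update for ensemble members under a Gaussian observation model; the log-domain arithmetic is for numerical stability, and the specific likelihood is an assumption since the smoother is stated generically.

```python
import numpy as np

def retrospective_weights(log_w, ensemble_obs, y, obs_var):
    """Update ensemble-member weights with a new observation.

    log_w: current log-weights of the members; ensemble_obs: (n,) predicted
    observations from each member; y: the actual observation; obs_var: its
    error variance. Past states of each member inherit the same updated
    weight, which is what makes the smoother retrospective."""
    log_w = log_w - 0.5 * (y - ensemble_obs) ** 2 / obs_var  # Gaussian log-lik
    log_w -= np.max(log_w)                                   # stabilize
    w = np.exp(log_w)
    return np.log(w / w.sum())   # renormalized log-weights
```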
Reliability analysis of the epidural spinal cord compression scale.
Bilsky, Mark H; Laufer, Ilya; Fourney, Daryl R; Groff, Michael; Schmidt, Meic H; Varga, Peter Paul; Vrionis, Frank D; Yamada, Yoshiya; Gerszten, Peter C; Kuklo, Timothy R
2010-09-01
The evolution of imaging techniques, along with highly effective radiation options has changed the way metastatic epidural tumors are treated. While high-grade epidural spinal cord compression (ESCC) frequently serves as an indication for surgical decompression, no consensus exists in the literature about the precise definition of this term. The advancement of treatment paradigms in patients with metastatic tumors of the spine requires a clear grading scheme of ESCC. The degree of ESCC often serves as a major determinant in the decision to operate or irradiate. The purpose of this study was to determine the reliability and validity of a 6-point, MR imaging-based grading system for ESCC. To determine the reliability of the grading scale, a survey was distributed to 7 spine surgeons who participate in the Spine Oncology Study Group. The MR images of 25 cervical or thoracic spinal tumors were distributed consisting of 1 sagittal image and 3 axial images at the identical level including T1-weighted, T2-weighted, and Gd-enhanced T1-weighted images. The survey was administered 3 times at 2-week intervals. The inter- and intrarater reliability was assessed. The inter- and intrarater reliability ranged from good to excellent when surgeons were asked to rate the degree of spinal cord compression using T2-weighted axial images. The T2-weighted images were superior indicators of ESCC compared with T1-weighted images with and without Gd. The ESCC scale provides a valid and reliable instrument that may be used to describe the degree of ESCC based on T2-weighted MR images. This scale accounts for recent advances in the treatment of spinal metastases and may be used to provide an ESCC classification scheme for multicenter clinical trial and outcome studies.
A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2008-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third order finite volume weighted essentially non-oscillatory scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of this new WENO scheme are its simplicity and compactness, with the application of only six unequal-size spatial stencils for reconstructing unequal-degree polynomials in the WENO-type spatial procedures, and the easy choice of positive linear weights without considering the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain third-order numerical approximations at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions and keep the essentially non-oscillatory property simultaneously. By performing these new procedures in the spatial reconstruction and adopting a third-order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computing efficiency is increased. It is therefore suitable for large-scale engineering computations on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.
Influence diagnostics in meta-regression model.
Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua
2017-09-01
This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analysis based on the case-weights perturbation scheme, responses perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme is explored. We introduce a method that simultaneously perturbs responses, covariates, and within-variance to obtain the local influence measure, which has the advantage of being able to compare the influence magnitude of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
Bi-orthogonal Symbol Mapping and Detection in Optical CDMA Communication System
NASA Astrophysics Data System (ADS)
Liu, Maw-Yang
2017-12-01
In this paper, a bi-orthogonal symbol mapping and detection scheme is investigated for a time-spreading wavelength-hopping optical CDMA communication system. The carrier-hopping prime code, whose out-of-phase autocorrelation is zero, is exploited as the signature sequence. Based on the orthogonality of the carrier-hopping prime code, an equal-weight orthogonal signaling scheme can be constructed, and the proposed scheme using bi-orthogonal symbol mapping and detection can be developed. The transmitted binary data bits are mapped into corresponding bi-orthogonal symbols, where the orthogonal matrix code and its complement are utilized. In the receiver, the received bi-orthogonal data symbol is fed into the maximum likelihood decoder for detection. Under such symbol mapping and detection, the proposed scheme greatly enlarges the Euclidean distance; hence, the system performance can be drastically improved.
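A sketch of bi-orthogonal signaling with maximum-likelihood detection, using Hadamard rows and their complements as an assumed stand-in for the carrier-hopping prime-code construction.

```python
import numpy as np
from scipy.linalg import hadamard

def biorthogonal_codebook(n=8):
    """Bi-orthogonal symbol set: rows of a Hadamard matrix plus their
    complements, mapped to unipolar (0/1) sequences."""
    Hm = hadamard(n)             # n mutually orthogonal +/-1 rows
    book = np.vstack([Hm, -Hm])  # 2n bi-orthogonal symbols
    return (book + 1) // 2       # unipolar 0/1 representation

def ml_detect(received, codebook):
    """Maximum-likelihood detection: the codeword closest in Euclidean
    distance to the (possibly noisy) received sequence."""
    d = np.sum((codebook - received) ** 2, axis=1)
    return int(np.argmin(d))

book = biorthogonal_codebook(8)  # 16 symbols -> 4 data bits per symbol
rx = book[5] + 0.1 * np.random.default_rng(0).standard_normal(8)
assert ml_detect(rx, book) == 5
```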
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-02-19
Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain.
High-performance packaging for monolithic microwave and millimeter-wave integrated circuits
NASA Technical Reports Server (NTRS)
Shalkhauser, K. A.; Li, K.; Shih, Y. C.
1992-01-01
Packaging schemes are developed that provide low-loss, hermetic enclosure for enhanced monolithic microwave and millimeter-wave integrated circuits. These package schemes are based on a fused quartz substrate material offering improved RF performance through 44 GHz. The small size and weight of the packages make them useful for a number of applications, including phased array antenna systems. As part of the packaging effort, a test fixture was developed to interface the single chip packages to conventional laboratory instrumentation for characterization of the packaged devices.
DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).
Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer
2018-05-12
Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet upward advancement, but also takes into account the number of suppressed nodes and number of PFNs at the first and second hops. Consequently, reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hops count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select the node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at water surface, but one sink is embedded inside the water and is physically connected with the surface sink through high bandwidth connection. Simulation results show that the proposed scheme has high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased the network lifetime.
Corrections to the General (2,4) and (4,4) FDTD Schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min
The sampling weights associated with two general higher order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher order schemes provided in the Smith, et al. 2012 article instead use the sampling weights and stability conditions listed in this document.
Adaptive neural network motion control of manipulators with experimental evaluations.
Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V
2014-01-01
A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two-degrees-of-freedom robot and a vertical one-degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.
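A sketch of one discrete control step in the spirit of the scheme: PD action plus a neural compensation term with a gradient-type weight update law; the filtered-error gain, feature choice, and signs follow the standard adaptive-NN-control form, not necessarily the paper's exact laws.

```python
import numpy as np

def nn_pd_step(q, qd, q_des, qd_des, W, Kp, Kd, Gamma, dt):
    """One step of PD control plus adaptive NN torque compensation.

    q, qd: joint positions and velocities; W: (n_basis, n_joints)
    output-layer weights of the compensating network; Kp, Kd: PD gain
    matrices; Gamma: adaptation gain; dt: time step."""
    e, edot = q_des - q, qd_des - qd
    r = edot + 5.0 * e                       # filtered tracking error
    phi = np.tanh(np.concatenate([q, qd]))   # fixed first-layer features
    tau = Kp @ e + Kd @ edot + W.T @ phi     # control torque
    W_new = W + dt * Gamma * np.outer(phi, r)  # weight adaptation law
    return tau, W_new
```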
Adaptive Neural Network Motion Control of Manipulators with Experimental Evaluations
Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.
2014-01-01
A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
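
A small Python sketch of the case-weighting idea, assuming a Huber-type weight that downweights cases by Mahalanobis distance; the specific weight function and tuning constant are assumptions, not the exact Yuan-Bentler formulation.

    import numpy as np

    def irls_mean_cov(X, c=2.5, n_iter=20):
        """Robust mean/covariance: downweight cases far from the current fit."""
        mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
        for _ in range(n_iter):
            diff = X - mu
            d = np.sqrt(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff))
            w = np.minimum(1.0, c / np.maximum(d, 1e-12))  # Huber-type weights
            mu = np.average(X, axis=0, weights=w)
            Xc = (X - mu) * w[:, None]
            S = Xc.T @ Xc / w.sum()
        return mu, S

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    X[:5] += 10.0                     # a few gross outliers
    mu, S = irls_mean_cov(X)
    print(np.round(mu, 2))            # close to zero despite the outliers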
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
Frankel, Arthur D.; Petersen, Mark D.
2008-01-01
The geometry and recurrence times of large earthquakes associated with the Cascadia Subduction Zone (CSZ) were discussed and debated at a March 28-29, 2006 Pacific Northwest workshop for the USGS National Seismic Hazard Maps. The CSZ is modeled from Cape Mendocino in California to Vancouver Island in British Columbia. We include the same geometry and weighting scheme as was used in the 2002 model (Frankel and others, 2002) based on thermal constraints (Fig. 1; Fluck and others, 1997, and a reexamination by Wang et al., 2003, Fig. 11, eastern edge of intermediate shading). This scheme includes four possibilities for the lower (eastern) limit of seismic rupture: the base of the elastic zone (weight 0.1), the base of the transition zone (weight 0.2), the midpoint of the transition zone (weight 0.2), and a model with a long north-south segment at 123.8° W in the southern and central portions of the CSZ, with a dogleg to the northwest in the northern portion of the zone (weight 0.5). The latter model was derived from the approximate average longitude of the contour of the 30 km depth of the CSZ as modeled by Fluck et al. (1997). A global study of the maximum depth of thrust earthquakes on subduction zones by Tichelaar and Ruff (1993) indicated maximum depths of about 40 km for most of the subduction zones studied, although the Mexican subduction zone had a maximum depth of about 25 km (R. LaForge, pers. comm., 2006). The recent inversion of GPS data by McCaffrey et al. (2007) shows a significant amount of coupling (a coupling factor of 0.2-0.3) as far east as 123.8° West in some portions of the CSZ. Both of these lines of evidence lend support to the model with a north-south segment at 123.8° W.
Bento and Buffet: Two Approaches to Flexible Summative Assessment
ERIC Educational Resources Information Center
Didicher, Nicky
2016-01-01
This practice-sharing piece outlines two main approaches to flexible summative assessment schemes, including for each approach one example from my practice and another from a published study. The bento approach offers the same assessments to all students but a variety of grade weighting schemes, allowing students to change weighting during the…
Wang, Ning; Sun, Jing-Chao; Han, Min; Zheng, Zhongjiu; Er, Meng Joo
2017-09-06
In this paper, for a general class of uncertain nonlinear (cascade) systems, including unknown dynamics, which are not feedback linearizable and cannot be solved by existing approaches, an innovative adaptive approximation-based regulation control (AARC) scheme is developed. Within the framework of adding a power integrator (API), by deriving adaptive laws for output weights and prediction error compensation pertaining to a single-hidden-layer feedforward network (SLFN) from the Lyapunov synthesis, a series of SLFN-based approximators are explicitly constructed to exactly dominate the completely unknown dynamics. By virtue of significant advancements in the API technique, an adaptive API methodology is eventually established in combination with SLFN-based adaptive approximators, and it contributes to a recursive mechanism for the AARC scheme. As a consequence, the output regulation error can asymptotically converge to the origin, and all other signals of the closed-loop system are uniformly ultimately bounded. Simulation studies and comprehensive comparisons with backstepping- and API-based approaches demonstrate that the proposed AARC scheme achieves remarkable performance and superiority in dealing with unknown dynamics.
Optimal placement of fast cut back units based on the theory of cellular automata and agent
NASA Astrophysics Data System (ADS)
Yan, Jun; Yan, Feng
2017-06-01
The thermal power generation units with the fast cut back (FCB) function can supply power to the auxiliary system and maintain island operation after a major blackout, so they are an excellent substitute for traditional black-start power sources. Different placement schemes for FCB units have different influences on the subsequent restoration process. Considering the locality of the emergency dispatching rules, the unpredictability of specific dispatching instructions, and unexpected situations such as failure of transmission line energization, a novel deduction model for network reconfiguration based on the theory of cellular automata and agents is established. Several indexes are then defined for evaluating the placement schemes for FCB units. An attribute weights determination method based on subjective and objective integration and grey relational analysis are used in combination to determine the optimal placement scheme for FCB units. The effectiveness of the proposed method is validated by the test results on the New England 10-unit 39-bus power system.
Goodin, Douglas S.; Jones, Jason; Li, David; Traboulsee, Anthony; Reder, Anthony T.; Beckmann, Karola; Konieczny, Andreas; Knappertz, Volker
2011-01-01
Context Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and, thus, are often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome-measures in randomized trials, its impact on long-term function remains uncertain. Objective Data from the 16-year Long-Term Follow-up study of interferon-beta-1b is used to assess the relationship between drug-exposure and long-term disability in MS patients. Design/Setting To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug-exposure was measured as the medication-possession-ratio, adjusted up or down according to multiple different weighting-schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (using any weighting scheme) affected long-term outcome. The optimal cut-point that was used to define “high” or “low” exposure-groups was chosen by the algorithm. Subsequent to verification of an exposure-impact that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis in order to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity-analyses were undertaken using different definitions of long-term outcome and different assumptions about the data. Main Outcome Measure Long-Term Disability. Results In these analyses, the same weighting-scheme was consistently selected by the recursive-partitioning algorithm. This scheme reduced (down-weighted) the effectiveness of drug exposure as either disease duration or disability at treatment-onset increased. Applying this scheme and using propensity-stratification to further mitigate bias, high-exposure had a consistently better clinical outcome compared to low-exposure (Cox proportional hazard ratio = 0.30–0.42; p<0.0001). Conclusions Early initiation and sustained use of interferon-beta-1b has a beneficial impact on long-term outcome in MS. Our analysis strategy provides a methodological framework for bias-mitigation in the analysis of non-randomized clinical data. Trial Registration Clinicaltrials.gov NCT00206635 PMID:22140424
Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes
NASA Astrophysics Data System (ADS)
Don, Wai-Sun; Borges, Rafael
2013-10-01
In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters in common: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid a division by zero in the denominator of αk. However, ε also plays a role affecting the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx^2) (Ω(Δx^m) means that ε ⩾ CΔx^m for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it is unknown if the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that in fact the optimal order of the WENO-Z scheme can be guaranteed with a much weaker condition ε = Ω(Δx^m), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with an arbitrary order of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, for a smaller ε allows a better essentially non-oscillatory shock capturing as it does not dominate over the size of βk. We also show that numerical oscillations can be further attenuated by increasing the power parameter 2 ⩽ p ⩽ r-1, at the cost of increased numerical dissipation. Compact formulas of βk for WENO schemes are also presented.
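
The two weight formulations contrasted above can be written compactly for the fifth-order (r = 3) case; the Python sketch below uses the published WENO-JS and WENO-Z definitions, with ε and p left as the free parameters discussed in the abstract.

    import numpy as np

    def weno5_weights(f, eps=1e-40, p=2, use_z=True):
        """f = (f_{i-2}, ..., f_{i+2}); returns the nonlinear weights omega_k."""
        b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 1/4*(f[0]-4*f[1]+3*f[2])**2
        b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 1/4*(f[1]-f[3])**2
        b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 1/4*(3*f[2]-4*f[3]+f[4])**2
        beta = np.array([b0, b1, b2])
        d = np.array([0.1, 0.6, 0.3])            # optimal linear weights
        if use_z:                                # WENO-Z (Borges et al.)
            tau5 = abs(b0 - b2)
            alpha = d * (1.0 + (tau5 / (beta + eps))**p)
        else:                                    # WENO-JS (Jiang and Shu)
            alpha = d / (beta + eps)**p
        return alpha / alpha.sum()

    print(weno5_weights(np.array([0.0, 0.25, 0.5, 0.75, 1.0])))  # smooth: ~d
    print(weno5_weights(np.array([0.0, 0.0, 0.0, 1.0, 1.0])))    # shock: ~(1,0,0)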
Yang, L M; Shu, C; Wang, Y
2016-03-01
In this work, a discrete gas-kinetic scheme (DGKS) is presented for simulation of two-dimensional viscous incompressible and compressible flows. This scheme is developed from the circular function-based GKS, which was recently proposed by Shu and his co-workers [L. M. Yang, C. Shu, and J. Wu, J. Comput. Phys. 274, 611 (2014)]. For the circular function-based GKS, the integrals for conservation forms of moments in the infinite domain for the Maxwellian function-based GKS are simplified to integrals along the circle. As a result, explicit formulations of the conservative variables and fluxes are derived. However, these explicit formulations of the circular function-based GKS for viscous flows are still complicated, and may not be easy for new users to apply. By using certain discrete points to represent the circle in the phase velocity space, the complicated formulations can be replaced by a simple solution process. The basic requirement is that the conservation forms of moments for the circular function-based GKS can be accurately satisfied by weighted summation of distribution functions at discrete points. In this work, it is shown that integral quadrature by four discrete points on the circle, which forms the D2Q4 discrete velocity model, can exactly match the integrals. Numerical results show that the present scheme provides accurate results for incompressible and compressible viscous flows with roughly the same computational cost as that needed by the Roe scheme.
Assessment strategies for municipal selective waste collection schemes.
Ferreira, Fátima; Avelino, Catarina; Bentes, Isabel; Matos, Cristina; Teixeira, Carlos Afonso
2017-01-01
An important strategy to promote strong sustainable growth relies on efficient municipal waste management, and phasing out waste landfilling through waste prevention and recycling emerges as a major target. For this purpose, effective collection schemes are required, in particular those regarding selective waste collection, pursuing more efficient and higher quality recycling of reusable materials. This paper addresses the assessment and benchmarking of selective collection schemes, relevant to guide future operational improvements. In particular, the assessment is based on the monitoring and statistical analysis of a core set of performance indicators that highlights collection trends, complemented with a performance index that aggregates a weighted linear combination of these indicators. This combined analysis provides a potential tool to support decision makers involved in the process of selecting the collection scheme with the best overall performance. The presented approach was applied to a case study conducted in Oporto Municipality, with data gathered from two distinct selective collection schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids
NASA Astrophysics Data System (ADS)
Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.
2017-12-01
Stabilized gradient-based methods have been proven to be efficient for inverse problems. Based on these methods, setting the gradient close to zero can effectively minimize the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1 (a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient can improve the depth resolution more efficiently; it is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. Besides, the fuzzy c-means clustering method and a smooth operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1 (b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang
2016-01-01
Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain. PMID:26907285
Dimitriadis, Stavros I.; Salis, Christos; Tarnanas, Ioannis; Linden, David E.
2017-01-01
The human brain is a large-scale system of functionally connected brain regions. This system can be modeled as a network, or graph, by dividing the brain into a set of regions, or “nodes,” and quantifying the strength of the connections between nodes, or “edges,” as the temporal correlation in their patterns of activity. Network analysis, a part of graph theory, provides a set of summary statistics that can be used to describe complex brain networks in a meaningful way. The large-scale organization of the brain has features of complex networks that can be quantified using network measures from graph theory. The adaptation of both bivariate (mutual information) and multivariate (Granger causality) connectivity estimators to quantify the synchronization between multichannel recordings yields a fully connected, weighted, (a)symmetric functional connectivity graph (FCG), representing the associations among all brain areas. The aforementioned procedure leads to an extremely dense network of tens up to a few hundreds of weights. Therefore, this FCG must be filtered so that the “true” connectivity pattern can emerge. Here, we compared a large number of well-known topological thresholding techniques with the novel proposed data-driven scheme based on orthogonal minimal spanning trees (OMSTs). OMSTs filter brain connectivity networks based on the optimization between the global efficiency of the network and the cost of preserving its wiring. We demonstrated the proposed method in a large EEG database (N = 101 subjects) with eyes-open (EO) and eyes-closed (EC) tasks by adopting a time-varying approach, with the main goal of extracting features that can totally distinguish each subject from the rest of the set. Additionally, the reliability of the proposed scheme was estimated in a second case study of fMRI resting-state activity with multiple scans. Our results demonstrated clearly that the proposed thresholding scheme outperformed a large list of thresholding schemes based on the recognition accuracy of each subject compared to the rest of the cohort (EEG). Additionally, the reliability of the network metrics based on the fMRI static networks was improved with the proposed topological filtering scheme. Overall, the proposed algorithm could be used across neuroimaging and multimodal studies as a common, computationally efficient, standardized tool for a great number of neuroscientists and physicists working on numerous projects. PMID:28491032
Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin
2017-01-01
This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
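
A hedged sketch of the time-weighted update SpikeTemp is described as using: the abstract only states that earlier spikes produce larger weight changes, so the exponential decay and its constants below are illustrative assumptions.

    import math

    def delta_w(t_spike, t_first, eta=0.1, tau=5.0):
        """Weight change decays with spike arrival time (times in ms):
        spikes near t_first give large changes, late spikes small ones."""
        return eta * math.exp(-(t_spike - t_first) / tau)

    for t in (1.0, 3.0, 10.0):        # arrival times, earliest first
        print(t, round(delta_w(t, t_first=1.0), 4))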
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics such as non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required that takes the sensor node constraints into consideration. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to the sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V_DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The results of the high-order schemes match well with the Buckley-Leverett (BL) analytical solution without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative study with numerical examples of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visual artifact banding phenomenon, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation coupled with the numerical simulator developed will aid in developing efficient mobility control schemes to improve poor volumetric sweep efficiency in porous media.
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the most preferable AP among nearby WLANs. The obtained handover decision, which is based on the quality cost calculated using the fuzzy inference system, also relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, which are collected from available WLANs, are obtained adaptively and applied as statistical information to adjust or adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
The multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of the Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
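
A minimal sketch of the BMA combination step: member weights derived from likelihood measures and a weight-averaged consensus prediction. Gaussian member densities with a common, fixed variance are a simplifying assumption here; the study estimates weights and variances more carefully (e.g., on Box-Cox transformed flows).

    import numpy as np

    def bma_weights(preds, obs, sigma=1.0):
        """preds: (n_models, n_times); obs: (n_times,). Likelihood-based weights."""
        ll = -0.5 * ((preds - obs)**2 / sigma**2).sum(axis=1)   # Gaussian log-lik
        w = np.exp(ll - ll.max())                               # stabilize
        return w / w.sum()

    rng = np.random.default_rng(1)
    obs = rng.normal(10.0, 1.0, size=50)
    preds = obs + rng.normal(0.0, [[0.5], [1.0], [2.0]], size=(3, 50))
    w = bma_weights(preds, obs)
    print(np.round(w, 3))             # best-performing member gets most weight
    print((w @ preds)[:5])            # BMA consensus prediction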
Research on comprehensive decision-making of PV power station connecting system
NASA Astrophysics Data System (ADS)
Zhou, Erxiong; Xin, Chaoshan; Ma, Botao; Cheng, Kai
2018-04-01
To address the incomplete index system and the lack of a decision method accounting for both subjectivity and objectivity in selecting the connecting system of a PV power station, a comprehensive approach based on the combination of an improved Analytic Hierarchy Process (AHP), Criteria Importance Through Intercriteria Correlation (CRITIC) and grey correlation degree analysis (GCDA) is proposed to select an appropriate system connecting scheme. Firstly, the indexes of the PV power station connecting system are organized into a recursive hierarchy and their subjective weights are calculated by the improved AHP. Then, CRITIC is adopted to determine the objective weight of each index through the contrast intensity and conflict between indexes. Finally, the improved GCDA is applied to screen the optimal scheme, so that the connecting system is selected from both subjective and objective angles. A comprehensive decision analysis of a Xinjiang PV power station is conducted and reasonable results are obtained. The research results may provide a scientific basis for investment decisions.
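
The combined weighting and grey relational step can be illustrated in a few lines of Python; the blend factor, the weight vectors and the index matrix below are made-up numbers, and the grey relational coefficient uses the common resolution parameter ρ = 0.5.

    import numpy as np

    def grey_relational_grade(X, weights, rho=0.5):
        """X: (schemes, indexes), normalized so larger is better."""
        ref = X.max(axis=0)                          # ideal reference scheme
        delta = np.abs(X - ref)
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return xi @ weights                          # weighted grade per scheme

    w_ahp = np.array([0.5, 0.3, 0.2])                # subjective (improved AHP)
    w_critic = np.array([0.3, 0.4, 0.3])             # objective (CRITIC)
    w = 0.5 * w_ahp + 0.5 * w_critic                 # combined weights
    X = np.array([[0.9, 0.6, 0.8],
                  [0.7, 0.9, 0.6],
                  [0.8, 0.7, 0.9]])                  # candidate connecting schemes
    print("best scheme:", int(np.argmax(grey_relational_grade(X, w))))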
Capturing planar shapes by approximating their outlines
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Riyazuddin, M.; Baig, M. H.
2006-05-01
A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform Rational B-splines (NURBS) are utilized as the underlying curve approximation scheme, and the Simulated Annealing heuristic is used as the evolutionary methodology. In addition to independent studies of the optimization of the weight and knot parameters of the NURBS, a separate scheme has also been developed for the simultaneous optimization of weights and knots. The optimized NURBS models have been fitted over the contour data of the planar shapes for the ultimate, automatic output. The output results are visually pleasing with respect to the threshold provided by the user. A web-based system has also been developed for effective, worldwide use. The objective of this system is to make the output visible to the whole world through the internet, giving the user the freedom to set the various desired input parameters of the algorithm.
Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2) , su(3) , and g(2)
NASA Astrophysics Data System (ADS)
Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.
2016-08-01
We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a “landscape” of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic “girdle” method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.
Latifoğlu, Fatma; Polat, Kemal; Kara, Sadik; Güneş, Salih
2008-02-01
In this study, we propose a new medical diagnosis system based on principal component analysis (PCA), k-NN based weighting pre-processing, and the Artificial Immune Recognition System (AIRS) for the diagnosis of atherosclerosis from carotid artery Doppler signals. The suggested system consists of four stages. First, in the feature extraction stage, we obtain the features related to atherosclerosis disease using Fast Fourier Transform (FFT) modeling and by calculating the maximum frequency envelope of the sonograms. Second, in the dimensionality reduction stage, the 61 features of atherosclerosis disease are reduced to 4 features using PCA. Third, in the pre-processing stage, we weight these 4 features using different values of k in a new weighting scheme based on k-NN based weighting pre-processing. Finally, in the classification stage, the AIRS classifier is used to classify subjects as healthy or having atherosclerosis. A classification accuracy of 100% was obtained by the proposed system using 10-fold cross validation. This success shows that the proposed system is a robust and effective system for the diagnosis of atherosclerosis disease.
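
One plausible reading of the k-NN based weighting pre-process, sketched under the assumption that each sample's features are pulled toward the mean of its k nearest neighbors before classification; the published rule may differ in detail.

    import numpy as np

    def knn_weight(X, k=5):
        """Replace each sample by the mean of its k nearest neighbors."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        np.fill_diagonal(D, np.inf)                  # exclude the sample itself
        idx = np.argsort(D, axis=1)[:, :k]
        return X[idx].mean(axis=1)

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 4))                    # e.g. the 4 PCA features
    print(knn_weight(X).shape)                       # (100, 4)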
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio
1995-01-01
The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed with both weighted and unweighted classes. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
Parks, Sean A; McKelvey, Kevin S; Schwartz, Michael K
2013-02-01
The importance of movement corridors for maintaining connectivity within metapopulations of wild animals is a cornerstone of conservation. One common approach for determining corridor locations is least-cost corridor (LCC) modeling, which uses algorithms within a geographic information system to search for routes with the lowest cumulative resistance between target locations on a landscape. However, the presentation of multiple LCCs that connect multiple locations generally assumes all corridors contribute equally to connectivity, regardless of the likelihood that animals will use them. Thus, LCCs may overemphasize seldom-used longer routes and underemphasize more frequently used shorter routes. We hypothesize that, depending on conservation objectives and available biological information, weighting individual corridors on the basis of species-specific movement, dispersal, or gene flow data may better identify effective corridors. We tested whether locations of key connectivity areas, defined as the highest 75th and 90th percentile cumulative weighted value of approximately 155,000 corridors, shift under different weighting scenarios. In addition, we quantified the amount and location of private land that intersect key connectivity areas under each weighting scheme. Some areas that appeared well connected when analyzed with unweighted corridors exhibited much less connectivity compared with weighting schemes that discount corridors with large effective distances. Furthermore, the amount and location of key connectivity areas that intersected private land varied among weighting schemes. We believe biological assumptions and conservation objectives should be explicitly incorporated to weight corridors when assessing landscape connectivity. These results are highly relevant to conservation planning because on the basis of recent interest by government agencies and nongovernmental organizations in maintaining and enhancing wildlife corridors, connectivity will likely be an important criterion for prioritization of land purchases and swaps. ©2012 Society for Conservation Biology.
Analysis of fault-tolerant neurocontrol architectures
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1992-01-01
The fault-tolerance of analog parallel distributed implementations of a multivariable aircraft neurocontroller is analyzed by simulating weight and neuron failures in a simplified scheme of analog processing based on the functional architecture of the ETANN chip (Electrically Trainable Artificial Neural Network). The neural information processing is found to be only partially distributed throughout the set of weights of the neurocontroller synthesized with the backpropagation algorithm. Although the degree of distribution of the neural processing, and consequently the fault-tolerance of the neurocontroller, could be enhanced using Locally Distributed Weight and Neuron Approaches, a satisfactory level of fault-tolerance could only be obtained by retraining the degraded VLSI neurocontroller. The possibility of maintaining neurocontrol performance and stability in the presence of single weight or neuron failures was demonstrated through an automated retraining procedure of the neurocontroller based on a pre-programmed choice and sequence of the training parameters.
Integrated neuron circuit for implementing neuromorphic system with synaptic device
NASA Astrophysics Data System (ADS)
Lee, Jeong-Jun; Park, Jungjin; Kwon, Min-Woo; Hwang, Sungmin; Kim, Hyungjin; Park, Byung-Gook
2018-02-01
In this paper, we propose and fabricate an Integrate & Fire neuron circuit for implementing a neuromorphic system. The overall operation of the circuit is verified by measuring discrete devices and the output characteristics of the circuit. Since the neuron circuit shows an asymmetric output characteristic that can drive a synaptic device with Spike-Timing-Dependent-Plasticity (STDP) characteristics, the autonomous weight update process is also verified by connecting the synaptic device and the neuron circuit. The timing difference between the pre-neuron and the post-neuron induces an autonomous weight change of the synaptic device. Unlike 2-terminal devices, which are frequently used to implement neuromorphic systems, the proposed scheme enables autonomous weight updates and a simple configuration by using a 4-terminal synapse device and an appropriate neuron circuit. The weight update process in the multi-layer neuron-synapse connection supports the implementation of hardware-based artificial intelligence based on Spiking Neural Networks (SNNs).
Simulation of violent free surface flow by AMR method
NASA Astrophysics Data System (ADS)
Hu, Changhong; Liu, Cheng
2018-05-01
A novel CFD approach based on the adaptive mesh refinement (AMR) technique is being developed for numerical simulation of violent free surface flows. The CIP method is applied to the flow solver, and the tangent of hyperbola for interface capturing with slope weighting (THINC/SW) scheme is implemented as the free surface capturing scheme. The PETSc library is adopted to solve the linear system; the linear solver is redesigned and modified to satisfy the requirements of the AMR mesh topology. In this paper, our CFD method is outlined and newly obtained results on numerical simulation of violent free surface flows are presented.
An online outlier identification and removal scheme for improving fault detection performance.
Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej
2014-05-01
Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers makes the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, requiring a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is introduced. Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and the scheme is then applied to an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
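
A simplified sketch of the moving-window outlier test described above; comparing each residual against the window median with a standard-deviation threshold is an illustrative variant, and the threshold factor k is an assumption.

    import numpy as np

    def flag_outliers(residual, window=25, k=3.0):
        """Flag points that deviate from the moving-window median by more
        than k window standard deviations."""
        flags = np.zeros(len(residual), dtype=bool)
        for t in range(window, len(residual)):
            w = residual[t - window:t]
            flags[t] = abs(residual[t] - np.median(w)) > k * w.std()
        return flags

    rng = np.random.default_rng(3)
    r = rng.normal(0.0, 0.1, 500)
    r[[100, 300]] += 2.0                             # injected outliers
    print(np.flatnonzero(flag_outliers(r)))          # ~[100 300]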
NASA Astrophysics Data System (ADS)
Cordier, G.; Choi, J.; Raguin, L. G.
2008-11-01
Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
NASA Astrophysics Data System (ADS)
Chao, I.-Fen; Zhang, Tsung-Min
2015-06-01
Long-reach passive optical networks (LR-PONs) have been considered promising solutions for future access networks. In this paper, we propose a distributed medium access control (MAC) scheme over an advantageous LR-PON network architecture that reroutes the control information from and back to all ONUs through an (N + 1) × (N + 1) star coupler (SC) deployed near the ONUs, thereby overcoming the extremely long propagation delay problem in LR-PONs. In the network, the control slot is designed to contain all bandwidth requirements of all ONUs and is in-band time-division-multiplexed with a number of data slots within a cycle. In the proposed MAC scheme, a novel profit-weight-based dynamic bandwidth allocation (P-DBA) scheme is presented. The algorithm is designed to efficiently and fairly distribute the amount of excess bandwidth based on a profit value derived from the excess bandwidth usage of each ONU, which resolves the problems of previously reported DBA schemes that are either unfair or inefficient. The simulation results show that the proposed decentralized algorithms exhibit a nearly three-order-of-magnitude improvement in delay performance compared to the centralized algorithms over LR-PONs. Moreover, the newly proposed P-DBA scheme guarantees low delay and fairness even under attack by a malevolent ONU, irrespective of traffic load and burstiness.
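
A toy sketch of profit-weighted excess-bandwidth sharing in the spirit of P-DBA: ONUs with higher profit values receive a larger share of the excess. The profit values and the proportional rule are illustrative assumptions, not the published algorithm.

    def distribute_excess(excess_total, demands, profits):
        """Share excess bandwidth in proportion to each ONU's profit value,
        capped by what each ONU actually requested."""
        total = sum(profits)
        share = [excess_total * p / total for p in profits]
        return [min(d, s) for d, s in zip(demands, share)]

    demands = [40, 10, 25]            # extra bandwidth requested per ONU (Mb/s)
    profits = [1.0, 0.4, 0.8]         # higher = historically fairer excess usage
    print(distribute_excess(30, demands, profits))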
NASA Technical Reports Server (NTRS)
Wang, Shugong; Liang, Xu
2013-01-01
A new approach is presented in this paper to effectively obtain parameter estimations for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potential in deriving better data products based on data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme) to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not simply built on strict statistical assumptions related to prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions in searching for the best parameter estimates for MKS through optimization. In the MO scheme, objective functions are defined to facilitate consistency among the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS fused precipitation performs better with the MO scheme than with the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation associated with fine spatial resolutions. This is mainly due to having more criteria and constraints involved in the MO scheme than in the ML scheme. The weakness of the original ML scheme, which blindly puts more weight on the data associated with finer resolutions, is overcome in our new approach.
Jagannathan, Sarangapani; He, Pingan
2008-12-01
In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with tracking error as the feedback variable whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third critic NN approximates certain strategic utility function and its output is employed for tuning action NN weights in order to attain the near-optimal control action. Both the NN control methods present a well-defined controller design and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase and the NN weights can be initialized at zero or random. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting the closed-loop stability.
Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks
Tham, Mau-Luen
2016-01-01
This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to playback delay and resource constraints, by exploiting multiuser diversity and video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error-rate (BER), the importance level of each packet, and a resource consumption efficiency factor. The lower level, in turn, renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets that have been scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme significantly outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak-signal-to-noise-ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature. PMID:26906398
Weighted Global Artificial Bee Colony Algorithm Makes Gas Sensor Deployment Efficient
Jiang, Ye; He, Ziqing; Li, Yanhai; Xu, Zhengyi; Wei, Jianming
2016-01-01
This paper proposes an improved artificial bee colony algorithm named the Weighted Global ABC (WGABC) algorithm, which is designed to improve the convergence speed in the search stage of the solution search equation. The new method not only considers the effect of global factors on the convergence speed in the search phase, but also provides the expression for the global factor weights. Experiments on benchmark functions proved that the algorithm can greatly improve the convergence speed. We derive the gas diffusion concentration based on CFD theory and then simulate the gas diffusion model with the influence of buildings based on the algorithm. Simulation verified the effectiveness of the WGABC algorithm in improving the convergence speed in the optimal deployment scheme for gas sensors. Finally, it is verified that the optimal deployment method based on WGABC can greatly improve the monitoring efficiency of sensors compared with conventional deployment methods. PMID:27322262
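
A sketch of a weighted-global candidate update in the ABC family: the usual neighbor term plus a global-best term scaled by a weight w. The exact WGABC weight expression is not reproduced here; w is a placeholder.

    import numpy as np

    def wgabc_candidate(x, neighbor, gbest, w=1.5):
        """Generate a candidate solution from the current one, a random
        neighbor, and the global best, with the global term weighted by w."""
        rng = np.random.default_rng()
        phi = rng.uniform(-1.0, 1.0, size=x.shape)   # neighbor attraction
        psi = rng.uniform(0.0, 1.0, size=x.shape)    # global-best attraction
        return x + phi * (x - neighbor) + w * psi * (gbest - x)

    x = np.zeros(2)
    print(wgabc_candidate(x, np.array([1.0, -1.0]), np.array([2.0, 2.0])))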
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previous existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrate the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
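
A much-simplified 1D Python sketch of the hybrid upwinding idea: the viscous part of the water flux is upwinded by the sign of the total velocity, while the buoyancy part picks its upwind directions from the phase density difference. Capillarity and the entry-pressure treatment are omitted, and the quadratic mobility functions are illustrative.

    import numpy as np

    def ihu_water_flux(sL, sR, u_total, k_vert, drho, g=9.81):
        mob_w = lambda s: s**2                       # illustrative mobilities
        mob_o = lambda s: (1.0 - s)**2
        # viscous term: both mobilities upwinded with the total velocity
        s_up = sL if u_total >= 0.0 else sR
        lw, lo = mob_w(s_up), mob_o(s_up)
        f_visc = lw / (lw + lo) * u_total
        # buoyancy term: with drho = rho_w - rho_o > 0 water moves downward,
        # so water mobility comes from the upper cell and oil from the lower
        lw_b = mob_w(sL if drho >= 0.0 else sR)
        lo_b = mob_o(sR if drho >= 0.0 else sL)
        f_grav = lw_b * lo_b / (lw_b + lo_b) * k_vert * drho * g
        return f_visc + f_grav

    print(ihu_water_flux(sL=0.7, sR=0.3, u_total=1.0, k_vert=1e-12, drho=200.0))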
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1997-01-01
In these lecture notes we describe the construction, analysis, and application of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes for hyperbolic conservation laws and related Hamilton-Jacobi equations. ENO and WENO schemes are high order accurate finite difference schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation procedure as much as possible. ENO and WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures, such as compressible turbulence simulations and aeroacoustics. These lecture notes are basically self-contained. It is our hope that with these notes and with the help of the quoted references, the reader can understand the algorithms and code them up for applications.
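The fifth-order WENO reconstruction described in these notes can be written compactly; the sketch below is the standard Jiang-Shu form of the left-biased interface value v_{i+1/2} from five cell averages, with the usual ideal weights (1/10, 6/10, 3/10) and smoothness indicators. It is textbook material rather than code from the lecture notes themselves.

```python
def weno5_reconstruct(v_mm, v_m, v_0, v_p, v_pp, eps=1e-6):
    """Fifth-order WENO reconstruction (Jiang & Shu weights) of the
    left-biased interface value v_{i+1/2} from the five cell averages
    v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2}."""
    # candidate third-order stencil reconstructions
    p0 = (2*v_mm - 7*v_m + 11*v_0) / 6.0
    p1 = (-v_m + 5*v_0 + 2*v_p) / 6.0
    p2 = (2*v_0 + 5*v_p - v_pp) / 6.0

    # smoothness indicators of the three stencils
    b0 = 13/12*(v_mm - 2*v_m + v_0)**2 + 0.25*(v_mm - 4*v_m + 3*v_0)**2
    b1 = 13/12*(v_m - 2*v_0 + v_p)**2 + 0.25*(v_m - v_p)**2
    b2 = 13/12*(v_0 - 2*v_p + v_pp)**2 + 0.25*(3*v_0 - 4*v_p + v_pp)**2

    # nonlinear weights biased away from non-smooth stencils
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```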
The fundamentals of adaptive grid movement
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.
1990-01-01
Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is also discussed.
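A worked example of equidistribution in one dimension may help: given a positive weight function sampled on the current grid, the adapted grid is obtained by inverting the cumulative integral of the weight at equally spaced levels. The sketch below is a minimal version of this construction; the weight choice mentioned in the comment is one common option, not the only one treated.

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    """One-dimensional equidistribution: place grid points so each cell
    carries an equal share of the integral of the weight function w(x),
    e.g. w = 1 + alpha*|u_x| for solution-adaptive clustering. x and w
    are sampled on the current grid (w > 0); returns the adapted grid."""
    n_new = len(x) if n_new is None else n_new
    # cumulative "mass" of the weight along the grid (trapezoidal rule)
    cell = 0.5 * (w[1:] + w[:-1]) * np.diff(x)
    W = np.concatenate(([0.0], np.cumsum(cell)))
    # invert the monotone map W(x) at equally spaced mass levels
    levels = np.linspace(0.0, W[-1], n_new)
    return np.interp(levels, W, x)
```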
H. Li; X. Deng; Andy Dolloff; E. P. Smith
2015-01-01
A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...
Truncation-based energy weighting string method for efficiently resolving small energy barriers
NASA Astrophysics Data System (ADS)
Carilli, Michael F.; Delaney, Kris T.; Fredrickson, Glenn H.
2015-08-01
The string method is a useful numerical technique for resolving minimum energy paths in rare-event barrier-crossing problems. However, when applied to systems with relatively small energy barriers, the string method becomes inconvenient since many images trace out physically uninteresting regions where the barrier has already been crossed and recrossing is unlikely. Energy weighting alleviates this difficulty to an extent, but typical implementations still require the string's endpoints to evolve to stable states that may be far from the barrier, and deciding upon a suitable energy weighting scheme can be an iterative process dependent on both the application and the number of images used. A second difficulty arises when treating nucleation problems: for later images along the string, the nucleus grows to fill the computational domain. These later images are unphysical due to confinement effects and must be discarded. In both cases, computational resources associated with unphysical or uninteresting images are wasted. We present a new energy weighting scheme that eliminates all of the above difficulties by actively truncating the string as it evolves and forcing all images, including the endpoints, to remain within and cover uniformly a desired barrier region. The calculation can proceed in one step without iterating on strategy, requiring only an estimate of an energy value below which images become uninteresting.
A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging
Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.
2014-01-01
Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions is employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
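For readers who want the gist of a subtraction-based correction, the sketch below applies the standard magnitude-MR bias relation E[M^2] = S^2 + 2*sigma^2, estimating sigma^2 from the difference of two independent acquisitions. It is a simplified stand-in (a single global noise estimate rather than the paper's voxel-wise scheme over multiple acquisitions), so details may differ from the published method.

```python
import numpy as np

def noise_corrected_signal(m1, m2):
    """Subtraction-based noise correction from two independent magnitude
    acquisitions m1, m2 (arrays over voxels). Uses E[M^2] = S^2 + 2*sigma^2;
    a global sigma^2 is used here for brevity, unlike the voxel-wise
    published scheme."""
    # Var(m1 - m2) = 2*sigma^2 for independent repeats
    sigma2 = 0.5 * np.var(m1 - m2)
    # remove the noise floor from the averaged squared magnitude
    s2 = 0.5 * (m1**2 + m2**2) - 2.0 * sigma2
    return np.sqrt(np.clip(s2, 0.0, None))  # clip to avoid negative power
```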
NASA Astrophysics Data System (ADS)
Do, Seongju; Li, Haojun; Kang, Myungjoo
2017-06-01
In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for the hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method combines the finite difference weighted essentially non-oscillatory (FD-WENO) method in space with the third order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as detectors for singularities but also as interpolators; in particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes to impose the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth order WENO interpolation requires a large stencil to reconstruct a high order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed stencil approximation without computing the nonlinear WENO weights is used in smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that the adaptive method resolves solutions that agree well with those computed on the corresponding fine grid.
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional δf schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
High speed, high performance, portable, dual-channel, optical fiber Bragg grating (FBG) demodulator
NASA Astrophysics Data System (ADS)
Zhang, Hongtao; Wei, Zhanxiong; Fan, Lingling; Wang, Pengfei; Zhao, Xilin; Wang, Zhenhua; Yang, Shangming; Cui, Hong-Liang
2009-10-01
A high speed, high performance, portable, dual-channel, optical fiber Bragg grating demodulator based on a fiber Fabry-Pérot tunable filter (FFP-FT) is reported in this paper. High speed demodulation is achieved to detect the dynamic loads of vehicles moving at 15 mph. However, drifts of the piezoelectric transducer (PZT) in the cavity of the FFP-FT dramatically degrade the stability of the system. Two schemes are implemented to improve the stability of the system. First, a temperature control system is installed to effectively remove the thermal drifts of the PZT. Second, a scheme of changing the bias voltage of the FFP-FT to restrain non-thermal drifts has been realized in the laboratory and will be further developed into an automatic control system based on a microcontroller. Although this demodulator was originally developed for a Weigh-In-Motion (WIM) sensing system, it can be extended to other applications, and the schemes presented in this paper will be useful in many of them.
Neural network-based systems for handprint OCR applications.
Ganis, M D; Wilson, C L; Blue, J L
1998-01-01
Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrences of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and the NIST form-based handprint recognition system are also summarized.
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging an extrapolation radar-based scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected ones. Moreover, optimally merging the two forecasts using the hyperbolic tangent weight scheme further improved the forecast accuracy and stability.
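The hyperbolic tangent merging step can be sketched in a few lines: a lead-time-dependent weight hands the forecast over from radar extrapolation to the (bias-corrected) NWP output. The crossover time `t0` and steepness `scale` below are illustrative placeholders, not the values tuned in the study.

```python
import numpy as np

def merge_nowcast_nwp(extrap, nwp, lead_min, t0=30.0, scale=10.0):
    """Blend a radar-extrapolation nowcast with an NWP forecast using a
    hyperbolic-tangent weight that grows from 0 to 1 with lead time,
    so extrapolation dominates early and NWP dominates late."""
    w_nwp = 0.5 * (1.0 + np.tanh((lead_min - t0) / scale))
    return (1.0 - w_nwp) * extrap + w_nwp * nwp
```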
Wörz, Stefan; Rohr, Karl
2006-01-01
We introduce an elastic registration approach which is based on a physical deformation model and uses Gaussian elastic body splines (GEBS). We formulate an extended energy functional related to the Navier equation under Gaussian forces which also includes landmark localization uncertainties. These uncertainties are characterized by weight matrices representing anisotropic errors. Since the approach is based on a physical deformation model, cross-effects in elastic deformations can be taken into account. Moreover, we have a free parameter to control the locality of the transformation for improved registration of local geometric image differences. We demonstrate the applicability of our scheme based on 3D CT images from the Truth Cube experiment, 2D MR images of the brain, as well as 2D gel electrophoresis images. It turns out that the new scheme achieves more accurate results compared to previous approaches.
On-Line Method and Apparatus for Coordinated Mobility and Manipulation of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1996-01-01
A simple and computationally efficient approach is disclosed for on-line coordinated control of mobile robots consisting of a manipulator arm mounted on a mobile base. The effect of base mobility on the end-effector manipulability index is discussed. The base mobility and arm manipulation degrees-of-freedom are treated equally as the joints of a kinematically redundant composite robot. The redundancy introduced by the mobile base is exploited to satisfy a set of user-defined additional tasks during the end-effector motion. A simple on-line control scheme is proposed which allows the user to assign weighting factors to individual degrees-of-mobility and degrees-of-manipulation, as well as to each task specification. The computational efficiency of the control algorithm makes it particularly suitable for real-time implementations. Four case studies are discussed in detail to demonstrate the application of the coordinated control scheme to various mobile robots.
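One standard way to realize such weighting factors is the weighted least-norm rate resolution sketched below, where larger per-DOF penalties suppress motion of the corresponding base or arm joints. This is a textbook construction used for illustration; the patented on-line scheme also folds in additional user-defined task specifications that the sketch omits.

```python
import numpy as np

def weighted_redundancy_resolution(J, xdot, dof_weights):
    """Weighted least-norm rate resolution for a kinematically redundant
    composite robot: minimizes qdot^T W qdot subject to J qdot = xdot.

    J: (m x n) composite Jacobian (base + arm DOFs)
    xdot: desired end-effector rates (m,)
    dof_weights: positive penalties, one per DOF (n,); a large weight on a
        base DOF makes the solution rely more on the arm, and vice versa."""
    W_inv = np.diag(1.0 / np.asarray(dof_weights, dtype=float))
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)
```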
A Resource Management Tool for Implementing Strategic Direction in an Academic Department
ERIC Educational Resources Information Center
Ringwood, John V.; Devitt, Frank; Doherty, Sean; Farrell, Ronan; Lawlor, Bob; McLoone, Sean C.; McLoone, Seamus F.; Rogers, Alan; Villing, Rudi; Ward, Tomas
2005-01-01
This paper reports on a load balancing system for an academic department, which can be used as an implementation mechanism for strategic planning. In essence, it consists of weighting each activity within the department and performing workload allocation based on this transparent scheme. The experience to date has been very positive, in terms of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo; Hui, Fei
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly through sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
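The linear weighted fusion itself reduces to an inverse-variance combination of the candidate clock deviations, as sketched below. The paper defines the weights from the covariance of the sync errors; the scalar-variance version here is a simplified illustration.

```python
import numpy as np

def fuse_clock_offsets(offsets, error_var):
    """Linear weighted fusion of several clock-deviation estimates with
    weights inversely proportional to their sync-error variances.
    offsets: candidate offset estimates; error_var: matching variances."""
    w = 1.0 / np.asarray(error_var, dtype=float)
    w /= w.sum()                       # normalize so the weights sum to 1
    return float(np.dot(w, offsets))   # fused clock offset
```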
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
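The forward model underlying such a scheme is simple statics: the total weight is the sum of the vertical sensor reactions, and the CG is their force-weighted average position. The sketch below illustrates this for an assumed set of vertical load sensors; the actual RIEE sensor layout and load paths are more involved.

```python
import numpy as np

def weight_and_cg(forces, positions):
    """Estimate rack weight and center of gravity from load-sensor readings.
    forces: vertical load per sensor; positions: (n, 2) sensor x,y coords."""
    forces = np.asarray(forces, dtype=float)
    positions = np.asarray(positions, dtype=float)
    W = forces.sum()                                  # total weight
    cg = (forces[:, None] * positions).sum(axis=0) / W  # force-weighted mean
    return W, cg

# e.g. weight_and_cg([120, 80, 100], [(0, 0), (1, 0), (0.5, 1)])
```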
2011-01-01
Background: Inferring regulatory interactions between genes from transcriptomics time-resolved data, yielding reverse engineered gene regulatory networks, is of paramount importance to systems biology and bioinformatics studies. Accurate methods to address this problem can ultimately provide a deeper insight into the complexity, behavior, and functions of the underlying biological systems. However, the large number of interacting genes coupled with short and often noisy time-resolved read-outs of the system renders the reverse engineering a challenging task. Therefore, the development and assessment of methods which are computationally efficient, robust against noise, applicable to short time series data, and preferably capable of reconstructing the directionality of the regulatory interactions remains a pressing research problem with valuable applications. Results: Here we perform the largest systematic analysis of a set of similarity measures and scoring schemes within the scope of the relevance network approach which are commonly used for gene regulatory network reconstruction from time series data. In addition, we define and analyze several novel measures and schemes which are particularly suitable for short transcriptomics time series. We also compare the considered 21 measures and 6 scoring schemes according to their ability to correctly reconstruct such networks from short time series data by calculating summary statistics based on the corresponding specificity and sensitivity. Our results demonstrate that rank- and symbol-based measures have the highest performance in inferring regulatory interactions. In addition, the proposed scoring scheme by asymmetric weighting has been shown to be valuable in reducing the number of false positive interactions. On the other hand, Granger causality as well as information-theoretic measures, frequently used in inference of regulatory networks, show low performance on the short time series analyzed in this study. Conclusions: Our study is intended to serve as a guide for choosing a particular combination of similarity measures and scoring schemes suitable for reconstruction of gene regulatory networks from short time series data. We show that further improvement of algorithms for reverse engineering can be obtained if one considers measures that are rooted in the study of symbolic dynamics or ranks, in contrast to the application of common similarity measures which do not consider the temporal character of the employed data. Moreover, we establish that the asymmetric weighting scoring scheme together with symbol-based measures (for low noise levels) and rank-based measures (for high noise levels) are the most suitable choices. PMID:21771321
Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems
NASA Astrophysics Data System (ADS)
Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun
2018-01-01
In indoor visible light communication (VLC) systems, the channels of the photo detectors (PDs) at one user are highly correlated, which motivates the choice of a spatial diversity model for individual users. In a spatial diversity model, the signals received by the PDs belonging to one user carry the same information and can be combined directly. Based on the above, we propose an equivalent zero-forcing (ZF) precoding scheme for multi-user multiple-input multiple-output (MU-MIMO) VLC systems by transforming an indoor MU-MIMO VLC system into an indoor multi-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of the light emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme not only reduces the computational complexity but also preserves the system performance. Furthermore, the proposed scheme does not require noise information when calculating the precoding weights and places no restrictions on the numbers of APs and PDs.
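For orientation, the core ZF step on the equivalent MU-MISO channel can be sketched as the channel pseudo-inverse with a power normalization, as below. Practical VLC designs must additionally respect per-LED DC-bias and non-negativity constraints, which this illustration omits.

```python
import numpy as np

def zf_precoder(H, p_total):
    """Zero-forcing precoding for a multi-user downlink: with the stacked
    (users x antennas) channel matrix H (full row rank), the pseudo-inverse
    W = H^H (H H^H)^{-1} nulls inter-user interference; the columns are
    then scaled to meet a total power budget p_total."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)          # pseudo-inverse
    W *= np.sqrt(p_total / np.trace(W @ W.conj().T).real)   # power scaling
    return W
```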
Cycle of a closed gas-turbine plant with a gas-dynamic energy-separation device
NASA Astrophysics Data System (ADS)
Leontiev, A. I.; Burtsev, S. A.
2017-09-01
The efficiency of closed gas-turbine space-based plants is analyzed. The weight and size characteristics of closed gas-turbine plants are shown to be determined in many respects by the refrigerator-radiator parameters. A scheme for closed gas-turbine plants with a gas-dynamic temperature-stratification device is proposed, and a calculation model is developed. This model shows that the cycle efficiency decreases by 2% in comparison with closed gas-turbine plants operating by the traditional scheme, while the temperature at the refrigerator-radiator outlet increases by 28 K and its area decreases by 13.7%.
Data on industrial new orders for the euro area.
de Bondt, Gabe J; Dieden, Heinz C; Muzikarova, Sona; Pavlova, Iskra
2016-12-01
This data article provides time series on euro area industrial new orders and is related to the research article entitled "Modelling industrial new orders" (G.J. de Bondt, H.C. Dieden, S. Muzikarova, I. Vincze, 2014b) [3]. The data are in index format with a fixed base year (currently 2010) for total new orders as well as a number of breakdowns. The euro area data are based on the official national data for countries that still collect data and on European Central Bank (ECB) model estimates for countries that discontinued the data collection. The weighting scheme to calculate euro area aggregates makes use of country weights derived from industrial turnover statistics as published by Eurostat.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of simultaneously optimizing against theoretical calculations and experimental data. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that would be totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on the diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
Advanced control design for hybrid turboelectric vehicle
NASA Technical Reports Server (NTRS)
Abban, Joseph; Norvell, Johnesta; Momoh, James A.
1995-01-01
The new environmental standards are a challenge and an opportunity for the industry and government bodies that manufacture and operate urban mass transit vehicles. A research investigation to provide a control scheme for efficient power management of the vehicle is in progress. Design requirements have been established using functional analysis and trade studies of alternative power sources and controls. The design issues include portability, weight, and the emission/fuel efficiency of the induction motor, permanent magnets, and battery. A strategic design scheme to manage power requirements using advanced control systems is presented. It exploits fuzzy logic technology and a rule-based decision support scheme. The results of our study will enhance the economic and technical feasibility of low-emission, fuel-efficient urban mass transit buses. The design team includes undergraduate researchers in our department. Sample results using the NASA HTEV simulation tool are presented.
A weighted ℓ1-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
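As a rough illustration of how a priori weights enter, the sketch below solves an unconstrained weighted ℓ1 problem by iterative soft thresholding (ISTA), with per-coefficient thresholds scaled by the weights. The paper works with constrained ℓ1 recovery over sparse PC bases; this proximal form is a simplified stand-in.

```python
import numpy as np

def weighted_l1_ista(Psi, u, weights, lam=1e-3, n_iter=500):
    """Weighted l1-minimization by ISTA on
        min_c 0.5*||Psi c - u||^2 + lam * sum_k weights[k] * |c_k|,
    where weights encode a priori decay of the coefficients (larger weight
    = stronger shrinkage of that coefficient)."""
    L = np.linalg.norm(Psi, 2) ** 2          # Lipschitz constant of gradient
    c = np.zeros(Psi.shape[1])
    thr = lam * np.asarray(weights, dtype=float) / L
    for _ in range(n_iter):
        g = Psi.T @ (Psi @ c - u)            # gradient of the data-fit term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # weighted shrink
    return c
```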
NASA Astrophysics Data System (ADS)
Zargari Khuzani, Abolfazl; Danala, Gopichandh; Heidari, Morteza; Du, Yue; Mashhadi, Najmeh; Qiu, Yuchen; Zheng, Bin
2018-02-01
Higher recall rates are a major challenge in mammography screening. Thus, developing a computer-aided diagnosis (CAD) scheme to classify between malignant and benign breast lesions can play an important role in improving the efficacy of mammography screening. The objective of this study is to develop and test a unique image feature fusion framework to improve performance in classifying suspicious mass-like breast lesions depicted on mammograms. The image dataset consists of 302 suspicious masses detected on both craniocaudal and mediolateral-oblique view images; 151 were malignant and 151 were benign. The study consists of the following three image processing and feature analysis steps. First, an adaptive region growing segmentation algorithm was used to automatically segment mass regions. Second, a set of 70 image features related to spatial and frequency characteristics of the mass regions was initially computed. Third, a generalized linear regression model (GLM) based machine learning classifier, combined with a bat optimization algorithm, was used to optimally fuse the selected image features based on a predefined performance index. The area under the ROC curve (AUC) was used as the performance assessment index. Applying the CAD scheme to the testing dataset yielded an AUC of 0.75+/-0.04, which was significantly higher than using the single best feature (AUC=0.69+/-0.05) or the classifier with equally weighted features (AUC=0.73+/-0.05). This study demonstrated that, compared with the conventional equally weighted approach, an unequally weighted feature fusion approach has the potential to significantly improve accuracy in classifying between malignant and benign breast masses.
Multi-Shell Hybrid Diffusion Imaging (HYDI) at 7 Tesla in TgF344-AD Transgenic Alzheimer Rats
Daianu, Madelaine; Jacobs, Russell E.; Weitz, Tara M.; Town, Terrence C.; Thompson, Paul M.
2015-01-01
Diffusion weighted imaging (DWI) is widely used to study microstructural characteristics of the brain. Diffusion tensor imaging (DTI) and high-angular resolution imaging (HARDI) are frequently used in radiology and neuroscience research but can be limited in describing the signal behavior in composite nerve fiber structures. Here, we developed and assessed the benefit of a comprehensive diffusion encoding scheme, known as hybrid diffusion imaging (HYDI), composed of 300 DWI volumes acquired at 7-Tesla with diffusion weightings at b = 1000, 3000, 4000, 8000 and 12000 s/mm2 and applied it in transgenic Alzheimer rats (line TgF344-AD) that model the full clinico-pathological spectrum of the human disease. We studied and visualized the effects of the multiple concentric “shells” when computing three distinct anisotropy maps–fractional anisotropy (FA), generalized fractional anisotropy (GFA) and normalized quantitative anisotropy (NQA). We tested the added value of the multi-shell q-space sampling scheme, when reconstructing neural pathways using mathematical frameworks from DTI and q-ball imaging (QBI). We show a range of properties of HYDI, including lower apparent anisotropy when using high b-value shells in DTI-based reconstructions, and increases in apparent anisotropy in QBI-based reconstructions. Regardless of the reconstruction scheme, HYDI improves FA-, GFA- and NQA-aided tractography. HYDI may be valuable in human connectome projects and clinical research, as well as magnetic resonance research in experimental animals. PMID:26683657
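Of the three anisotropy maps compared, FA has the simplest closed form; the sketch below computes it from the diffusion-tensor eigenvalues. GFA and NQA come from q-ball/ODF reconstructions and are not shown.

```python
import numpy as np

def fractional_anisotropy(evals):
    """Standard fractional anisotropy from the three diffusion-tensor
    eigenvalues; FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    num = (l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2
    den = l1**2 + l2**2 + l3**2
    return np.sqrt(1.5 * num / den) if den > 0 else 0.0
```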
Speed Sensorless Induction Motor Drives for Electrical Actuators: Schemes, Trends and Tradeoffs
NASA Technical Reports Server (NTRS)
Elbuluk, Malik E.; Kankam, M. David
1997-01-01
For a decade, induction motor drive-based electrical actuators have been under investigation as potential replacements for the conventional hydraulic and pneumatic actuators in aircraft. Advantages of electric actuators include lower weight and size, reduced maintenance and operating costs, improved safety due to the elimination of hazardous fluids and high-pressure hydraulic and pneumatic actuators, and increased efficiency. Recently, the emphasis of research on induction motor drives has been on sensorless vector control, which eliminates flux and speed sensors mounted on the motor. Also, the development of effective speed and flux estimators has allowed good rotor flux-oriented (RFO) performance at all speeds except those close to zero. Sensorless control has improved the motor performance compared to the Volts/Hertz (or constant flux) controls. This report evaluates documented schemes for speed sensorless drives and discusses the trends and tradeoffs involved in selecting a particular scheme. These schemes combine the attributes of direct and indirect field-oriented control (FOC) or use model reference adaptive systems (MRAS) with a speed-dependent current model for flux estimation which tracks the voltage model-based flux estimator. Many factors are important in comparing the effectiveness of a speed sensorless scheme, among them wide speed range capability, motor parameter insensitivity, and noise reduction. Although a number of schemes have been proposed for speed estimation, zero-speed FOC with robustness against parameter variations still remains an open area of research for speed sensorless control.
NASA Astrophysics Data System (ADS)
Kou, Yanbin; Liu, Siming; Zhang, Weiheng; Shen, Guansheng; Tian, Huiping
2017-03-01
We present a dynamic capacity allocation mechanism based on Quality of Service (QoS) for different mobile users (MUs) in 60 GHz radio-over-fiber (RoF) local access networks. The proposed mechanism is capable of collecting the request information of the MUs to build a full list of MU capacity demands and service types at the Central Office (CO). A hybrid algorithm is introduced to implement the capacity allocation, which can satisfy the requirements of different MUs at different network traffic loads. Compared with the weight dynamic frames assignment (WDFA) scheme, the Hybrid scheme keeps high-priority MUs at low delay while maintaining a packet loss rate below 1%. At the same time, low-priority MUs achieve relatively good performance.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of the numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element method but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.
A fuzzy call admission control scheme in wireless networks
NASA Astrophysics Data System (ADS)
Ma, Yufeng; Gong, Shenguang; Hu, Xiulin; Zhang, Yunyu
2007-11-01
Scarcity of the spectrum resource and the mobility of users make quality of service (QoS) provision a critical issue in wireless networks. This paper presents a fuzzy call admission control scheme to meet the QoS requirements. A performance measure is formed as a weighted linear function of the new call and handoff call blocking probabilities. Simulation compares the proposed fuzzy scheme with an adaptive channel reservation scheme. Simulation results show that the fuzzy scheme is more robust in terms of the average blocking criterion.
Narayanan, Vignesh; Jagannathan, Sarangapani
2017-09-07
In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid learning scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. System states and the NN weight estimation errors are regulated, and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems in which a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize the weighted sum harvested energy (WSHE) for the SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are obtained by the proposed PSO-SDR, in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that the SDR always has a rank-1 solution and is indeed tight. For imperfect CSI with bounded channel vector errors, an upper bound on the WSHE is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR converges quickly and performs better than the other baseline schemes.
NASA Astrophysics Data System (ADS)
Zhang, Jing; Chen, Xuemei; Deng, Mingliang; Zeng, Dengke; Yang, Heming; Qiu, Kun
2015-08-01
We propose a novel ICI cancellation scheme using opposite weighting on symmetric subcarrier pairs to combat the linear phase noise of the laser source and the nonlinear phase noise resulting from fiber nonlinearity. We compare the proposed ICI cancellation scheme with conventional OFDM and with ICI self-cancellation at the same raw bit rate of 35.6 Gb/s. In simulations, the proposed ICI cancellation scheme shows better phase noise tolerance than conventional OFDM and similar phase noise tolerance to ICI self-cancellation. The laser linewidth yielding a BER of 2 × 10⁻³ is about 13 MHz with the ICI cancellation scheme, while it is 5 MHz for conventional OFDM. We also study the nonlinearity tolerance and find that the proposed ICI cancellation scheme outperforms the other two schemes owing to its first-order nonlinearity mitigation. The launch power is 7 dBm for the proposed ICI cancellation scheme, and its SNR improves by 4 dB or 3 dB compared with ICI self-cancellation or conventional OFDM, respectively, at a BER of 1.1 × 10⁻³.
Fair ranking of researchers and research teams.
Vavryčuk, Václav
2018-01-01
The main drawback of ranking researchers by the number of papers, citations, or the Hirsch index is that it ignores the problem of distributing authorship credit among the authors of multi-author publications. So far, single-author and multi-author publications contribute to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which distribute the authorship credit more properly and suppress the tendency toward unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional and authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).
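The difference between full and fractional counting is easy to make concrete, as in the toy example below: a researcher's credit per paper is either 1 (full counting) or 1/n_authors (fractional counting), and the two diverge sharply for collaboration-heavy profiles.

```python
def author_credit(author_counts, counting="fractional"):
    """Total publication credit for one researcher. author_counts lists the
    number of authors on each of the researcher's papers; full counting
    credits 1 per paper, fractional counting credits 1/n_authors."""
    if counting == "full":
        return float(len(author_counts))
    return sum(1.0 / n for n in author_counts)

# Five papers with 1, 2, 3, 10 and 50 co-authors:
# full counting gives 5.0, fractional gives 1 + 0.5 + 0.333 + 0.1 + 0.02 ≈ 1.95
```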
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Wang, Y.; Sun, Y.
2016-08-01
The sphere function-based gas kinetic scheme (GKS), which was presented by Shu and his coworkers [23] for simulation of inviscid compressible flows, is extended to simulate 3D viscous incompressible and compressible flows in this work. Firstly, we use certain discrete points to represent the spherical surface in the phase velocity space. Then, integrals along the spherical surface for conservation forms of moments, which are needed to recover 3D Navier-Stokes equations, are approximated by integral quadrature. The basic requirement is that these conservation forms of moments can be exactly satisfied by weighted summation of distribution functions at discrete points. It was found that the integral quadrature by eight discrete points on the spherical surface, which forms the D3Q8 discrete velocity model, can exactly match the integral. In this way, the conservative variables and numerical fluxes can be computed by weighted summation of distribution functions at eight discrete points. That is, the application of complicated formulations resultant from integrals can be replaced by a simple solution process. Several numerical examples including laminar flat plate boundary layer, 3D lid-driven cavity flow, steady flow through a 90° bending square duct, transonic flow around DPW-W1 wing and supersonic flow around NACA0012 airfoil are chosen to validate the proposed scheme. Numerical results demonstrate that the present scheme can provide reasonable numerical results for 3D viscous flows.
How Molecular Size Impacts RMSD Applications in Molecular Dynamics Simulations.
Sargsyan, Karen; Grauffel, Cédric; Lim, Carmay
2017-04-11
The root-mean-square deviation (RMSD) is a similarity measure widely used in analysis of macromolecular structures and dynamics. As increasingly larger macromolecular systems are being studied, dimensionality effects such as the "curse of dimensionality" (a diminishing ability to discriminate pairwise differences between conformations with increasing system size) may exist and significantly impact RMSD-based analyses. For such large biomolecular systems, whether the RMSD or other alternative similarity measures might suffer from this "curse" and lose the ability to discriminate different macromolecular structures had not been explicitly addressed. Here, we show such dimensionality effects for both weighted and nonweighted RMSD schemes. We also provide a mechanism for the emergence of the "curse of dimensionality" for RMSD from the law of large numbers by showing that the conformational distributions from which RMSDs are calculated become increasingly similar as the system size increases. Our findings suggest the use of weighted RMSD schemes for small proteins (less than 200 residues) and nonweighted RMSD for larger proteins when analyzing molecular dynamics trajectories.
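For reference, the weighted RMSD the authors discuss has the standard form sketched below (per-atom weights normalizing the sum of squared deviations); the uniform-weight case recovers the ordinary RMSD.

```python
import numpy as np

def weighted_rmsd(X, Y, w=None):
    """Weighted RMSD between two conformations X, Y of shape (n_atoms, 3),
    with per-atom weights w (uniform if omitted). Structures are assumed
    to be pre-aligned; superposition is a separate step."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = np.ones(len(X)) if w is None else np.asarray(w, float)
    sq = np.sum((X - Y) ** 2, axis=1)          # per-atom squared deviation
    return np.sqrt(np.sum(w * sq) / np.sum(w))
```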
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levakhina, Y. M.; Mueller, J.; Buzug, T. M.
Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections into the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure, evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without suggested weighting. Out-of-focus artifacts are described using line profiles and measured using standard deviation (STD) in the plane and below the plane which contains artifact-causing features. Artifacts distribution in axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate the reduction of out-of-focus artifacts, lower STD (meaning reduction of artifacts), and narrower ASF compared to nonweighted SART reconstruction. It is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically and no segmentation or tissue-classification steps are required. The algorithm can be included into various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to the computed tomography case with a complete set of angular data.
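To show where such weighting enters SART, the sketch below performs one view-by-view SART pass in which each backprojected update is scaled by a per-(view, voxel) weight; setting all weights to one recovers plain SART. The weights are taken as given here, whereas the paper derives them automatically from a dissimilarity measure in a four-dimensional backprojected space.

```python
import numpy as np

def weighted_sart_pass(x, A, b, view_rows, view_weights, relax=0.5):
    """One weighted SART pass over all angular views.
    A: dense system matrix (rays x voxels); b: measured projections;
    view_rows: list of ray-index arrays, one per angular view;
    view_weights: list of per-voxel weight vectors, one per view
    (all ones gives standard SART)."""
    for v, rows in enumerate(view_rows):
        Av, bv = A[rows], b[rows]
        resid = bv - Av @ x                        # forward-project, compare
        row_sum = np.maximum(Av.sum(axis=1), 1e-12)
        col_sum = np.maximum(Av.sum(axis=0), 1e-12)
        update = (Av.T @ (resid / row_sum)) / col_sum
        x = x + relax * view_weights[v] * update   # weighted backprojection
    return x
```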
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages with high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a high level of accuracy cannot be achieved using them. To address this, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of the ids of keywords in the keyword list. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
Chinese Version of the EQ-5D Preference Weights: Applicability in a Chinese General Population
Wu, Chunmei; Gong, Yanhong; Wu, Jiang; Zhang, Shengchao; Yin, Xiaoxv; Dong, Xiaoxin; Li, Wenzhen; Cao, Shiyi; Mkandawire, Naomie; Lu, Zuxun
2016-01-01
Objectives: This study aimed to test the reliability, validity and sensitivity of the Chinese version of the EQ-5D preference weights in the Chinese general population, examine the differences between the China value set and the UK, Japan and Korea value sets, and provide methods for evaluating and comparing the EQ-5D value sets of different countries. Methods: A random sample of 2984 community residents (15 years or older) were interviewed using a questionnaire including the EQ-5D scale. Level of agreement, convergent validity, known-groups validity and sensitivity of the EQ-5D China, United Kingdom (UK), Japan and Korea value sets were determined. Results: The mean EQ-5D index scores were significantly (P < 0.05) different among the UK (0.964), Japan (0.981), Korea (0.987), and China (0.985) weights. High levels of agreement (intraclass correlation coefficients > 0.75) and convergent validity (Pearson's correlation coefficients > 0.95) were found between each pair of schemes. The EQ-5D index scores discriminated equally well for the four versions between levels of 10 known groups (P < 0.05). The effect size and relative efficiency statistics showed that the China weights had better sensitivity. Conclusions: The China EQ-5D preference weights show psychometric properties equivalent to those of the UK, Japan and Korea weights while being slightly more sensitive to known-group differences than the Japan and Korea weights. Considering both psychometric and sociocultural issues, the China scheme should be a priority as an EQ-5D based measure of health-related quality of life in the Chinese general population. PMID:27711169
Katiyar, Prateek; Divine, Mathew R; Kohlhofer, Ursula; Quintanilla-Martinez, Leticia; Schölkopf, Bernhard; Pichler, Bernd J; Disselhorst, Jonathan A
2017-04-01
In this study, we described and validated an unsupervised segmentation algorithm for the assessment of tumor heterogeneity using dynamic 18F-FDG PET. The aim of our study was to objectively evaluate the proposed method and make comparisons with compartmental modeling parametric maps and SUV segmentations using simulations of clinically relevant tumor tissue types. Methods: An irreversible 2-tissue-compartmental model was implemented to simulate clinical and preclinical 18F-FDG PET time-activity curves using population-based arterial input functions (80 clinical and 12 preclinical) and the kinetic parameter values of 3 tumor tissue types. The simulated time-activity curves were corrupted with different levels of noise and used to calculate the tissue-type misclassification errors of spectral clustering (SC), parametric maps, and SUV segmentation. The utility of the inverse noise variance- and Laplacian score-derived frame weighting schemes before SC was also investigated. Finally, the SC scheme with the best results was tested on a dynamic 18F-FDG measurement of a mouse bearing subcutaneous colon cancer and validated using histology. Results: In the preclinical setup, the inverse noise variance-weighted SC exhibited the lowest misclassification errors (8.09%-28.53%) at all noise levels in contrast to the Laplacian score-weighted SC (16.12%-31.23%), unweighted SC (25.73%-40.03%), parametric maps (28.02%-61.45%), and SUV (45.49%-45.63%) segmentation. The classification efficacy of both weighted SC schemes in the clinical case was comparable to the unweighted SC. When applied to the dynamic 18F-FDG measurement of colon cancer, the proposed algorithm accurately identified densely vascularized regions from the rest of the tumor. In addition, the segmented regions and clusterwise average time-activity curves showed excellent correlation with the tumor histology. Conclusion: The promising results of SC mark its position as a robust tool for quantification of tumor heterogeneity using dynamic PET studies. Because SC tumor segmentation is based on the intrinsic structure of the underlying data, it can be easily applied to other cancer types as well. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
A Part-Of-Speech term weighting scheme for biomedical information retrieval.
Wang, Yanshan; Wu, Stephen; Li, Dingcheng; Mehrabi, Saeed; Liu, Hongfang
2016-10-01
In the era of digitalization, information retrieval (IR), which retrieves and ranks documents from large collections according to users' search queries, has been widely applied in the biomedical domain. Building patient cohorts using electronic health records (EHRs) and searching literature for topics of interest are some IR use cases. Meanwhile, natural language processing (NLP), such as tokenization or Part-Of-Speech (POS) tagging, has been developed for processing clinical documents or biomedical literature. We hypothesize that NLP can be incorporated into IR to strengthen the conventional IR models. In this study, we propose two NLP-empowered IR models, POS-BoW and POS-MRF, which incorporate automatic POS-based term weighting schemes into bag-of-word (BoW) and Markov Random Field (MRF) IR models, respectively. In the proposed models, the POS-based term weights are iteratively calculated by utilizing a cyclic coordinate method where a golden-section line search algorithm is applied along each coordinate to optimize the objective function defined by mean average precision (MAP). In the empirical experiments, we used the data sets from the Medical Records track in Text REtrieval Conference (TREC) 2011 and 2012 and the Genomics track in TREC 2004. The evaluation on TREC 2011 and 2012 Medical Records tracks shows that, for the POS-BoW models, the mean improvement rates for IR evaluation metrics, MAP, bpref, and P@10, are 10.88%, 4.54%, and 3.82%, compared to the BoW models; and for the POS-MRF models, these rates are 13.59%, 8.20%, and 8.78%, compared to the MRF models. Additionally, we experimentally verify that the proposed weighting approach is superior to the simple heuristic and frequency based weighting approaches, and validate our POS category selection. Using the optimal weights calculated in this experiment, we tested the proposed models on the TREC 2004 Genomics track and obtained average improvement rates of 8.63% and 10.04% for POS-BoW and POS-MRF, respectively. These significant improvements verify the effectiveness of leveraging POS tagging for biomedical IR tasks. Copyright © 2016 Elsevier Inc. All rights reserved.
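The cyclic coordinate optimization with a golden-section line search can be sketched as follows (Python; the toy quadratic objective stands in for MAP evaluated on a retrieval run, and the [0, 1] search range per weight is an assumption):

```python
import math

def golden_section_max(f, lo, hi, tol=1e-4):
    """Golden-section line search for a unimodal objective on [lo, hi]."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while abs(b - a) > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) > f(d):          # maximum lies in [a, d]
            b = d
        else:                    # maximum lies in [c, b]
            a = c
    return (a + b) / 2

def cyclic_coordinate_ascent(objective, weights, rounds=5):
    """Optimize one POS-category weight at a time, holding the rest fixed."""
    for _ in range(rounds):
        for i in range(len(weights)):
            def f_i(w, i=i):
                trial = weights.copy()
                trial[i] = w
                return objective(trial)
            weights[i] = golden_section_max(f_i, 0.0, 1.0)
    return weights

# toy usage: recover a known optimum of a concave objective
target = [0.2, 0.5, 0.8]
obj = lambda ws: -sum((w - t) ** 2 for w, t in zip(ws, target))
print(cyclic_coordinate_ascent(obj, [0.5, 0.5, 0.5]))  # approaches target
```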
Rest requirements and rest management of personnel in shift work
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammell, B.D.; Scheuerle, A.
1995-12-31
A difficulty-weighted shift assignment scheme is proposed for use in prolonged and strenuous field operations such as emergency response, site testing, and short-term hazardous waste remediation projects. The purpose of the work rotation plan is to increase the productivity, safety, and morale of workers. Job weighting is accomplished by assigning adjustments for the mental and physical intensity of the task, the protective equipment worn, and the climatic conditions. The plan is based on medical studies of sleep deprivation, the effects of rest adjustments, and programs to reduce sleep deprivation and normalize shift schedules.
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
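A highly simplified sketch of the base/detail decomposition and fusion flow is given below (Python with numpy and scipy; plain Gaussian smoothing stands in for the rolling guidance filter, an average stands in for the VSM base-layer rule, and a max-absolute rule stands in for the WLS detail optimization):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail_split(img, sigmas=(2, 4, 8)):
    """Split an image into one base layer and per-scale detail layers."""
    details, current = [], img.astype(float)
    for s in sigmas:
        smooth = gaussian_filter(current, s)
        details.append(current - smooth)   # detail retained at this scale
        current = smooth
    return current, details                # base layer, detail layers

def fuse(ir, vis, sigmas=(2, 4, 8)):
    """Fuse IR and visible images layer by layer (stand-in rules)."""
    b_ir, d_ir = base_detail_split(ir, sigmas)
    b_vis, d_vis = base_detail_split(vis, sigmas)
    base = 0.5 * (b_ir + b_vis)            # stand-in for the VSM rule
    details = [np.where(np.abs(a) > np.abs(b), a, b)
               for a, b in zip(d_ir, d_vis)]
    return base + sum(details)

# toy usage on random "images"
rng = np.random.default_rng(0)
fused = fuse(rng.random((64, 64)), rng.random((64, 64)))
```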
Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*
Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.
2011-01-01
This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one based on the assumption that the locally weighted combination varies w.r.t. both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed-form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves on the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guozhu, E-mail: gzhang6@ncsu.edu
Zebrafish have become a key alternative model for studying health effects of environmental stressors, partly due to their genetic similarity to humans, fast generation time, and the efficiency of generating high-dimensional systematic data. Studies aiming to characterize adverse health effects in zebrafish typically include several phenotypic measurements (endpoints). While there is a solid biomedical basis for capturing a comprehensive set of endpoints, making summary judgments regarding health effects requires thoughtful integration across endpoints. Here, we introduce a Bayesian method to quantify the informativeness of 17 distinct zebrafish endpoints as a data-driven weighting scheme for a multi-endpoint summary measure, called weighted Aggregate Entropy (wAggE). We implement wAggE using high-throughput screening (HTS) data from zebrafish exposed to five concentrations of all 1060 ToxCast chemicals. Our results show that our empirical weighting scheme provides better performance in terms of the Receiver Operating Characteristic (ROC) curve for identifying significant morphological effects and improves robustness over traditional curve-fitting approaches. From a biological perspective, our results suggest that developmental cascade effects triggered by chemical exposure can be recapitulated by analyzing the relationships among endpoints. Thus, wAggE offers a powerful approach for analysis of multivariate phenotypes that can reveal underlying etiological processes. - Highlights: • Introduced a data-driven weighting scheme for multiple phenotypic endpoints. • Weighted Aggregate Entropy (wAggE) implies differential importance of endpoints. • Endpoint relationships reveal developmental cascade effects triggered by exposure. • wAggE is generalizable to multi-endpoint data of different shapes and scales.
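The paper's Bayesian informativeness weighting cannot be reconstructed from the abstract alone; as a simpler illustration of data-driven endpoint weighting, the sketch below applies the classical entropy-weight method, in which endpoints whose responses vary more across chemicals receive larger weights (Python; the toy matrix is invented, and this is explicitly not the wAggE method itself):

```python
import numpy as np

def entropy_weights(X):
    """Classical entropy-weight method: endpoints (columns) whose
    responses vary more across chemicals (rows) receive larger weights."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                      # column-wise response shares
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(len(X))    # normalized entropy in [0, 1]
    w = 1.0 - e                                # low entropy -> high weight
    return w / w.sum()

# toy usage: 5 chemicals x 3 endpoints; only endpoint 1 varies
X = np.array([[1, 5, 2], [1, 4, 2], [1, 1, 2], [1, 0.5, 2], [1, 3, 2]])
print(entropy_weights(X))                      # approximately [0, 1, 0]
```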
Real Gas Computation Using an Energy Relaxation Method and High-Order WENO Schemes
NASA Technical Reports Server (NTRS)
Montarnal, Philippe; Shu, Chi-Wang
1998-01-01
In this paper, we use a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition into two parts: one part is associated with a simpler pressure law and the other part (the nonlinear deviation) is convected with the flow. A relaxation process is performed for each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the first part. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.
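For readers unfamiliar with the weighting machinery behind such schemes, here is a minimal sketch of the classical Jiang-Shu WENO5 nonlinear weights and face reconstruction (Python; this shows only the generic WENO weighting step, not the paper's energy-relaxation method or characteristic decomposition):

```python
import numpy as np

def weno5_weights(f):
    """WENO5 nonlinear weights and the reconstructed value at the right
    cell face i+1/2, given the 5-point stencil
    f = (f[i-2], f[i-1], f[i], f[i+1], f[i+2])."""
    eps = 1e-6
    # smoothness indicators of the three 3-point sub-stencils
    b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 0.25*(f[0] - 4*f[1] + 3*f[2])**2
    b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 0.25*(f[1] - f[3])**2
    b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 0.25*(3*f[2] - 4*f[3] + f[4])**2
    d = np.array([0.1, 0.6, 0.3])              # ideal (linear) weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()                            # nonlinear weights
    # candidate reconstructions from the three sub-stencils
    q0 = (2*f[0] - 7*f[1] + 11*f[2]) / 6
    q1 = ( -f[1] + 5*f[2] +  2*f[3]) / 6
    q2 = (2*f[2] + 5*f[3] -    f[4]) / 6
    return w, w[0]*q0 + w[1]*q1 + w[2]*q2

# smooth data: the weights stay near the ideal (0.1, 0.6, 0.3)
print(weno5_weights(np.array([1.0, 1.1, 1.2, 1.3, 1.4])))
```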
A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing
NASA Astrophysics Data System (ADS)
Chernogorova, Tatiana; Vulkov, Lubin
2009-11-01
This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved using difference schemes with weights, including standard schemes such as the monotone Samarskii scheme, the FTCS method and the Crank-Nicolson method. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles required of the financial and diffusive solution. Numerical results are compared with analytical solutions.
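A minimal sketch of a weighted (theta) difference scheme of the kind compared here is given below (Python; periodic boundaries are assumed for brevity, so the paper's dynamical boundary conditions are not reproduced; theta = 0 is explicit, 0.5 gives Crank-Nicolson, 1 is fully implicit):

```python
import numpy as np

def theta_step(u, dt, dx, a, nu, theta=0.5):
    """One step of a weighted (theta) scheme for u_t + a u_x = nu u_xx
    with periodic boundaries, upwind convection (a > 0), central diffusion."""
    n = len(u)

    def L(v):
        conv = -a * (v - np.roll(v, 1)) / dx
        diff = nu * (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx ** 2
        return conv + diff

    A = np.zeros((n, n))
    for i in range(n):                    # assemble the operator column by column
        e = np.zeros(n)
        e[i] = 1.0
        A[:, i] = L(e)
    rhs = u + dt * (1.0 - theta) * (A @ u)
    return np.linalg.solve(np.eye(n) - dt * theta * A, rhs)

# toy usage: advect-diffuse a Gaussian bump for one step
x = np.linspace(0.0, 1.0, 50, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
u = theta_step(u, dt=1e-3, dx=x[1] - x[0], a=1.0, nu=0.01, theta=0.5)
```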
Teaching Weight-Gravity and Gravitation in Middle School. Testing a New Instructional Approach
NASA Astrophysics Data System (ADS)
Galili, Igal; Bar, Varda; Brosh, Yaffa
2016-12-01
This study deals with the school instruction of the concept of weight. The historical review reveals the major steps in the changing definition of weight, reflecting the epistemological changes in physics. The latest change, drawing on the operation of weighing, has not been widely adopted in physics education. We compared the older instruction based on the gravitational definition of weight with the newer one based on the operational definition. The experimental teaching was applied in two versions, simpler and extended. The study examined the impact of this instruction on middle school students in a regular teaching environment. The experiment involved three groups (N = 486) of 14-year-old students (ninth grade). The assessment drew on a written questionnaire and personal interviews. The elicited schemes of conceptual knowledge allowed us to evaluate the impact on students' pertinent knowledge. The advantage of the new teaching manifested itself in a significant decrease of well-known misconceptions such as "space causes weightlessness," "weight is an unchanged property of the body considered," and "heavier objects fall faster". The twofold advantage of the operational definition of weight, epistemological and conceptual, supports the corresponding curricular change of adopting it.
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.; Yamaleev, Nail K.; Frankel, Steven H.
2009-01-01
A general strategy exists for constructing Energy Stable Weighted Essentially Non Oscillatory (ESWENO) finite difference schemes up to eighth-order on periodic domains. These ESWENO schemes satisfy an energy norm stability proof for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, boundary closures are developed for the fourth-order ESWENO scheme that maintain wherever possible the WENO stencil biasing properties, while satisfying the summation-by-parts (SBP) operator convention, thereby ensuring stability in an L2 norm. Second-order, and third-order boundary closures are developed that achieve stability in diagonal and block norms, respectively. The global accuracy for the second-order closures is three, and for the third-order closures is four. A novel set of non-uniform flux interpolation points is necessary near the boundaries to simultaneously achieve 1) accuracy, 2) the SBP convention, and 3) WENO stencil biasing mechanics.
NASA Astrophysics Data System (ADS)
Zhang, Hongtao; Wang, Pengfei
2012-06-01
The current schemes for detecting the status of passengers in airplanes cannot satisfy the stricter regulations recently released by the United States Transportation Security Administration. Based on an investigation of current seat occupancy sensors for vehicles, in this paper we present a novel seat occupancy sensor scheme based on Fiber Bragg Grating technology to improve the in-flight security of airplanes. This seat occupancy sensor system can be used to detect the status of passengers and to control the inflation of the air bags that have been installed in the airplanes of some major airlines under the new law. The scheme utilizes our previous research results on a Weight-In-Motion sensor system based on optical fiber Bragg gratings. In contrast to current seat occupancy sensors for vehicles, this new sensor has many merits that make it well suited to the aerospace industry and high-speed railway systems. Moreover, combined with existing Fiber Bragg Grating strain or temperature sensor systems built into airplanes, the proposed method can form a complete airline passenger management system.
Numerical scoring for the Classic BILAG index.
Cresswell, Lynne; Yee, Chee-Seng; Farewell, Vernon; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Toescu, Veronica; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Isenberg, David A; Gordon, Caroline
2009-12-01
To develop an additive numerical scoring scheme for the Classic BILAG index. SLE patients were recruited into this multi-centre cross-sectional study. At every assessment, data were collected on disease activity and therapy. Logistic regression was used to model an increase in therapy, as an indicator of active disease, by the Classic BILAG score in eight systems. As both indicate inactivity, scores of D and E were set to 0 and used as the baseline in the fitted model. The coefficients from the fitted model were used to determine the numerical values for Grades A, B and C. Different scoring schemes were then compared using receiver operating characteristic (ROC) curves. Validation analysis was performed using assessments from a single centre. There were 1510 assessments from 369 SLE patients. The currently used coding scheme (A = 9, B = 3, C = 1 and D/E = 0) did not fit the data well. The regression model suggested three possible numerical scoring schemes: (i) A = 11, B = 6, C = 1 and D/E = 0; (ii) A = 12, B = 6, C = 1 and D/E = 0; and (iii) A = 11, B = 7, C = 1 and D/E = 0. These schemes produced comparable ROC curves. Based on this, A = 12, B = 6, C = 1 and D/E = 0 seemed a reasonable and practical choice. The validation analysis suggested that although the A = 12, B = 6, C = 1 and D/E = 0 coding is still reasonable, a scheme with slightly less weighting for B, such as A = 12, B = 5, C = 1 and D/E = 0, may be more appropriate. A reasonable additive numerical scoring scheme based on treatment decision for the Classic BILAG index is A = 12, B = 5, C = 1, D = 0 and E = 0.
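The recommended additive scheme is simple enough to state directly as code (a minimal sketch; the input is the eight per-system Classic BILAG grades):

```python
# Numeric weights from the validated scheme recommended above.
WEIGHTS = {"A": 12, "B": 5, "C": 1, "D": 0, "E": 0}

def bilag_numeric_score(system_grades):
    """Sum the numeric weights over the eight system grades."""
    return sum(WEIGHTS[g] for g in system_grades)

print(bilag_numeric_score(["A", "B", "C", "E", "D", "C", "B", "E"]))  # 24
```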
H∞ control problem of linear periodic piecewise time-delay systems
NASA Astrophysics Data System (ADS)
Xie, Xiaochen; Lam, James; Li, Panshuo
2018-04-01
This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
Reliability Constrained Priority Load Shedding for Aerospace Power System Automation
NASA Technical Reports Server (NTRS)
Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)
2000-01-01
The need for improving load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be considered. These constraints include the congestion margin determined by weighted probability contingency, the component/system reliability index, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated in the optimization method. It assists in selecting which feeder load to shed and the location, value and priority of that load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads and a network.
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF), in conjunction with Elman neural network (ENN) learning techniques. The global map relationship between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is arrived at using a robust KF to improve the ENN learning result so as to achieve precise robotic convergence to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair vector (obtained from the KF cycle) to ensure globally stable robot manipulation. Thus, our method, without requiring either camera or model parameters, avoids the degraded performance caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with eye-in-hand configurations.
Dynamo-based scheme for forecasting the magnitude of solar activity cycles
NASA Technical Reports Server (NTRS)
Layden, A. C.; Fox, P. A.; Howard, J. M.; Sarajedini, A.; Schatten, K. H.
1991-01-01
This paper presents a general framework for forecasting the smoothed maximum level of solar activity in a given cycle, based on a simple understanding of the solar dynamo. This type of forecasting requires knowledge of the sun's polar magnetic field strength at the preceding activity minimum. Because direct measurements of this quantity are difficult to obtain, the quality of a number of proxy indicators already used by other authors, which are physically related to the sun's polar field, is evaluated. These indicators are subjected to a rigorous statistical analysis, and the analysis technique for each indicator is specified in detail in order to simplify and systematize reanalysis for future use. It is found that several of these proxies are in fact poorly correlated or uncorrelated with solar activity, and thus are of little value for predicting activity maxima. Also presented is a scheme in which the predictions of the individual proxies are combined via an appropriately weighted mean to produce a compound prediction. The scheme is then applied to the current cycle 22, and a maximum smoothed international sunspot number of 171 ± 26 is estimated.
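The abstract does not state the weights; as one plausible reading, the sketch below combines the proxy predictions by an inverse-variance weighted mean (Python; the proxy values and uncertainties are invented for illustration):

```python
import numpy as np

def compound_prediction(preds, sigmas):
    """Inverse-variance weighted mean of individual proxy predictions,
    with the standard error of the combined estimate."""
    preds = np.asarray(preds, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2
    mean = np.sum(w * preds) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mean, se

# toy usage: three hypothetical proxy forecasts of the cycle maximum
print(compound_prediction([160.0, 180.0, 175.0], [30.0, 25.0, 40.0]))
```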
Fair ranking of researchers and research teams
2018-01-01
The main drawback of ranking researchers by the number of papers, citations or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications have contributed to the publication record of a researcher equally. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress a tendency toward unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly-cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading ranking, the fractional or authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier). PMID:29621316
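The contrast between the two counting schemes reduces to a one-line change (Python sketch; each paper is represented only by its number of co-authors):

```python
def full_count(coauthor_counts):
    """Full counting: every co-author receives credit 1 per paper."""
    return float(len(coauthor_counts))

def fractional_count(coauthor_counts):
    """Fractional counting: each paper contributes 1 / n_authors."""
    return sum(1.0 / n for n in coauthor_counts)

papers = [1, 3, 7, 2]                       # co-author counts per paper
print(full_count(papers))                   # 4.0
print(fractional_count(papers))             # about 1.98
```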
Talebi, H A; Khorasani, K; Tafazoli, S
2009-01-01
This paper presents a robust fault detection and isolation (FDI) scheme for a general class of nonlinear systems using a neural-network-based observer strategy. Both actuator and sensor faults are considered. The nonlinear system considered is subject to both state and sensor uncertainties and disturbances. Two recurrent neural networks are employed to identify general unknown actuator and sensor faults, respectively. The neural network weights are updated according to a modified backpropagation scheme. Unlike many previous methods developed in the literature, our proposed FDI scheme does not rely on availability of full state measurements. The stability of the overall FDI scheme in presence of unknown sensor and actuator faults as well as plant and sensor noise and uncertainties is shown by using the Lyapunov's direct method. The stability analysis developed requires no restrictive assumptions on the system and/or the FDI algorithm. Magnetorquer-type actuators and magnetometer-type sensors that are commonly employed in the attitude control subsystem (ACS) of low-Earth orbit (LEO) satellites for attitude determination and control are considered in our case studies. The effectiveness and capabilities of our proposed fault diagnosis strategy are demonstrated and validated through extensive simulation studies.
Comparison of bird community indices for riparian restoration planning and monitoring
Young, Jock S.; Ammon, Elisabeth M.; Weisburg, Peter J.; Dilts, Thomas E.; Newton, Wesley E.; Wong-Kone, Diane C.; Heki, Lisa G.
2013-01-01
The use of a bird community index that characterizes ecosystem integrity is very attractive to conservation planners and habitat managers, particularly in the absence of any single focal species. In riparian areas of the western USA, several attempts at arriving at a community index signifying a functioning riparian bird community have been made previously, mostly resorting to expert opinions or national conservation rankings for species weights. Because extensive local and regional bird monitoring data were available for Nevada, we were able to develop three different indices that were derived empirically, rather than from expert opinion. We formally examined the use of three species weighting schemes in comparison with simple species richness, using different definitions of riparian species assemblage size, for the purpose of predicting community response to changes in vegetation structure from riparian restoration. For the three indices, species were weighted according to the following criteria: (1) the degree of riparian habitat specialization based on regional data, (2) the relative conservation ranking of landbird species, and (3) the degree to which a species is under-represented compared to the regional species pool for riparian areas. To evaluate the usefulness of these indices for habitat restoration planning and monitoring, we modeled them using habitat variables that are expected to respond to riparian restoration efforts, using data from 64 sampling sites in the Walker River Basin in Nevada and California. We found that none of the species-weighting schemes performed any better as an index for evaluating overall habitat condition than using species richness alone as a community index. Based on our findings, the use of a fairly complete list of 30–35 riparian specialists appears to be the best indicator group for predicting the response of bird communities to the restoration of riparian vegetation.
Liu, Yan-Jun; Tong, Shaocheng
2016-11-01
In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., there are both state and input couplings, and nonaffine functions are included in every equation of each subsystem. The design objective is to provide a control scheme which not only guarantees the stability of the systems, but also achieves optimal control performance. The main contribution of this paper is that it achieves, for the first time, optimal performance for such a class of systems. Owing to the interactions among subsystems, designing an optimal control signal is a difficult task. The design ideas are that: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function can be approximated by using an action network and a critic network, respectively; and 3) an optimal control signal is constructed with the weight update rules designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
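A minimal sketch of the two-step estimation idea follows (Python with scikit-learn; the wet-day threshold, the log transform of amounts, the plain linear amount model, and the synthetic predictors are all assumptions standing in for the paper's regression variants):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def two_step_fit(X, precip, wet_threshold=0.1):
    """Fit the two-step model: occurrence by logistic regression, then
    amount (log-transformed) by regression on wet days only."""
    wet = precip > wet_threshold
    occ = LogisticRegression().fit(X, wet)
    amt = LinearRegression().fit(X[wet], np.log1p(precip[wet]))
    return occ, amt

def two_step_predict(occ, amt, X):
    """Predict zero on dry days and a back-transformed amount on wet days."""
    p_wet = occ.predict_proba(X)[:, 1]
    amount = np.expm1(amt.predict(X))
    return np.where(p_wet > 0.5, amount, 0.0)

# toy usage with synthetic predictors (e.g. elevation, longitude, latitude)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
precip = np.where(X[:, 0] > 0, np.exp(X[:, 1]), 0.0)
occ, amt = two_step_fit(X, precip)
print(two_step_predict(occ, amt, X[:5]))
```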
LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network
NASA Astrophysics Data System (ADS)
Cha, Daehyun; Hwang, Chansik
Previous vehicle surveillance research on distributed sensor networks focused on overcoming power limitations and communication bandwidth constraints in sensor nodes. In spite of these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for light-weight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.
Additive schemes for certain operator-differential equations
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2010-12-01
Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
Predictors of 2,4-dichlorophenoxyacetic acid exposure among herbicide applicators
BHATTI, PARVEEN; BLAIR, AARON; BELL, ERIN M.; ROTHMAN, NATHANIEL; LAN, QING; BARR, DANA B.; NEEDHAM, LARRY L.; PORTENGEN, LUTZEN; FIGGS, LARRY W.; VERMEULEN, ROEL
2009-01-01
To determine the major factors affecting the urinary levels of 2,4-dichlorophenoxyacetic acid (2,4-D) among county noxious weed applicators in Kansas, we used a regression technique that accounted for multiple days of exposure. We collected 136 12-h urine samples from 31 applicators during the course of two spraying seasons (April to August of 1994 and 1995). Using mixed-effects models, we constructed exposure models that related urinary 2,4-D measurements to weighted self-reported work activities from daily diaries collected over 5 to 7 days before the collection of the urine sample. Our primary weights were based on an earlier pharmacokinetic analysis of turf applicators; however, we examined a series of alternative weighting schemes to assess the impact of the specific weights and the number of days before urine sample collection that were considered. The derived models, which related multiple days of exposure to a single urine measurement, seemed robust with regard to the exact weights but less so with regard to the number of days considered, although the determinants from the primary model could be fitted, with marginal losses of fit, to the data from the other weighting schemes that considered a different number of days. In the primary model, the total time of all activities (spraying, mixing, other activities), spraying method, month of observation, application concentration, and wet gloves were significant determinants of urinary 2,4-D concentration and explained 16% of the between-worker variance and 23% of the within-worker variance of urinary 2,4-D levels. As a large proportion of the variance remained unexplained, further studies should be conducted to try to systematically assess other exposure determinants. PMID:19319162
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked are solved recursively by a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters that influence tracking behavior, such as the weighting factor and the correlation matrix of the process noise, and data conditions such as the sampling frequency, are explored. The merits brought by the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed off-line. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
NASA Astrophysics Data System (ADS)
ul Amin, Rooh; Aijun, Li; Khan, Muhammad Umer; Shamshirband, Shahaboddin; Kamsin, Amirrudin
2017-01-01
In this paper, an adaptive trajectory tracking controller based on an extended normalized radial basis function network (ENRBFN) is proposed for a 3-degree-of-freedom four-rotor hover vehicle subjected to external disturbance, i.e. wind turbulence. A mathematical model of the four-rotor hover system is developed using the equations of motion, and a new computational-intelligence-based technique, the ENRBFN, is introduced to approximate the unmodeled dynamics of the hover vehicle. An adaptive controller based on the Lyapunov stability approach is designed to achieve tracking of the desired attitude angles of the four-rotor hover vehicle in the presence of wind turbulence. An adaptive weight update based on the Levenberg-Marquardt algorithm is used to avoid weight drift in case the system is exposed to external disturbances. Closed-loop system stability is also analyzed using Lyapunov stability theory. Simulation and experimental results are included to validate the effectiveness of the proposed control scheme.
Scheper, Carsten; Wensch-Dorendorf, Monika; Yin, Tong; Dressel, Holger; Swalve, Herrmann; König, Sven
2016-06-29
Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The status quo of the current polled breeding pool of genetically-closely related artificial insemination sires with lower breeding values for performance traits raises questions regarding the effects of intensified selection based on this founder pool. We developed a stochastic simulation framework that combines the stochastic simulation software QMSim and a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative trait genotypes initiated realistic initial breeding situations regarding allele frequencies, true breeding values for the quantitative trait and genetic relatedness. Intensified selection for polled cattle was achieved using an approach that weights estimated breeding values in the animal best linear unbiased prediction model for the quantitative trait depending on genotypes or phenotypes for the polled trait with a user-defined weighting factor. Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role for a fast dissemination of polled alleles compared to female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that does not take polledness into account, intensive selection for polled substantially reduced genetic gain for this quantitative trait after 25 generations. Reducing selection intensity for polled males while maintaining strong selection intensity among females, simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. A fast transition to a completely polled population through intensified selection for polled was in contradiction to the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and selection on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, the utilization of two near-infrared (NIR) bands to estimate the aerosol optical properties has been adopted for the estimation of the effects of aerosols. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it directly applies the weight value at all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is also non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for selected aerosol models. Then it spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm regarding the errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, they were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to the GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
He, Pingan; Jagannathan, S
2007-04-01
A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as the saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the "strategic" utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and with novel weight updates, the uniformly ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate one nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or random. Simulation results justify the theoretical analysis.
Robust Fault Detection for Switched Fuzzy Systems With Unknown Input.
Han, Jian; Zhang, Huaguang; Wang, Yingchun; Sun, Xun
2017-10-03
This paper investigates the fault detection problem for a class of switched nonlinear systems in the T-S fuzzy framework. An unknown input is considered in the systems. A novel fault detection unknown input observer design method is proposed. Based on the proposed observer, the unknown input can be removed from the fault detection residual. The weighted H∞ performance level is considered to ensure robustness. In addition, the weighted H₋ performance level is introduced, which can increase the sensitivity of the proposed detection method. To verify the proposed scheme, a numerical simulation example and an electromechanical system simulation example are provided at the end of this paper.
A similarity retrieval approach for weighted track and ambient field of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun
2018-03-01
Retrieving historical tropical cyclones (TCs) that have position and hazard intensity similar to a target TC is an important tool in TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is put forward based on historical TC track data and ambient field data, including ERA-Interim reanalysis and GFS and EC-fine forecasts. It takes account of both TC track similarity and ambient field similarity, and the optimal weight combination is explored subsequently. Results show that both the distance and direction errors of TC track forecasts at the 24-hour timescale follow an approximately U-shaped distribution. They tend to be large when the weight assigned to track similarity is close to 0 or 1.0, and relatively small when the track similarity weight is from 0.2 to 0.7 for the distance error and from 0.3 to 0.6 for the direction error.
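A toy sketch of the weighted retrieval and the weight search (Python; the similarity vectors, error measure, and grid are hypothetical stand-ins for the paper's evaluation against 24-hour track errors):

```python
import numpy as np

def combined_similarity(track_sim, field_sim, w):
    """Weighted combination; w is the track-similarity weight in [0, 1]."""
    return w * track_sim + (1.0 - w) * field_sim

def best_weight(track_sim, field_sim, forecast_error,
                grid=np.linspace(0.0, 1.0, 21)):
    """Pick the weight whose top-ranked historical analogue yields the
    smallest forecast error over the candidate grid."""
    errors = [forecast_error[np.argmax(combined_similarity(track_sim,
                                                           field_sim, w))]
              for w in grid]
    return grid[int(np.argmin(errors))]

# toy usage: 50 historical TCs with invented similarities and errors
rng = np.random.default_rng(2)
track_sim, field_sim = rng.random(50), rng.random(50)
forecast_error = rng.random(50) * 100.0   # e.g. 24-hour distance error, km
print(best_weight(track_sim, field_sim, forecast_error))
```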
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, in which fundamental knowledge about coding, block coding and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, the interleaver and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail. Different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
Role of atmosphere-ocean interactions in supermodeling the tropical Pacific climate
NASA Astrophysics Data System (ADS)
Shen, Mao-Lin; Keenlyside, Noel; Bhatt, Bhuwan C.; Duane, Gregory S.
2017-12-01
The supermodel strategy interactively combines several models to outperform the individual models comprising it. A key advantage of the approach is that nonlinear improvements can be achieved, in contrast to the linear weighted combination of individual unconnected models. This property is found in a climate supermodel constructed by coupling two versions of an atmospheric model differing only in their convection scheme to a single ocean model. The ocean model receives a weighted combination of the momentum and heat fluxes. Optimal weights can produce a supermodel with a basic state similar to observations: a single Intertropical Convergence zone (ITCZ), with a western Pacific warm pool and an equatorial cold tongue. This is in stark contrast to the erroneous double ITCZ pattern simulated by both of the two stand-alone coupled models. By varying weights, we develop a conceptual scheme to explain how combining the momentum fluxes of the two different atmospheric models affects equatorial upwelling and surface wind feedback so as to give a realistic basic state in the tropical Pacific. In particular, we propose a mechanism based on the competing influences of equatorial zonal wind and off-equatorial wind stress curl in driving equatorial upwelling in the coupled models. Our results show how nonlinear ocean-atmosphere interaction is essential in combining these two effects to build different sea surface temperature structures, some of which are realistic. They also provide some insight into observed and modelled tropical Pacific climate.
Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan
2012-01-01
Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single-modality images. In this paper, we propose a dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies: PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
Tian, Jiajun; Zhang, Qi; Han, Ming
2013-03-11
Active ultrasonic testing is widely used for medical diagnosis, material characterization and structural health monitoring. The ultrasonic transducer is a key component in active ultrasonic testing. Due to their many advantages, such as small size, light weight, and immunity to electromagnetic interference, fiber-optic ultrasonic transducers are particularly attractive for permanent, embedded applications in active ultrasonic testing for structural health monitoring. However, current fiber-optic transducers only allow effective ultrasound generation at a single location, the fiber end. Here we demonstrate a fiber-optic device that can effectively generate ultrasound at multiple, selected locations along a fiber in a controllable manner based on a smart light-tapping scheme that taps out only the light of a particular wavelength for laser-ultrasound generation and allows light of longer wavelengths to pass without loss. Such a scheme may also find applications in remote fiber-optic device tuning and quasi-distributed biochemical fiber-optic sensing.
Analytical minimization of synchronicity errors in stochastic identification
NASA Astrophysics Data System (ADS)
Bernal, D.
2018-01-01
An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time domain signals so the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral density estimated eigenvectors, for inputs with arbitrary spectral density, decrease with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is shown to be the product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.
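A crude stand-in for the correction, using a known fundamental frequency instead of the estimated first-mode eigenvector phases, can be sketched as follows (Python; f0, fs, and the synthetic two-channel signal are assumptions, and the paper's spectral-density estimation and threshold test are not reproduced):

```python
import numpy as np

def phase_align(signals, f0, fs):
    """Shift each channel so its phase at frequency f0 matches channel 0,
    removing relative timing (synchronicity) faults at that frequency."""
    n = signals.shape[1]
    t = np.arange(n) / fs
    probe = np.exp(-2j * np.pi * f0 * t)               # single-bin DFT at f0
    phases = np.angle(signals @ probe)
    delays = (phases[0] - phases) / (2 * np.pi * f0)   # seconds vs channel 0
    # circular shifts; wraparound at the record edges is ignored here
    return np.vstack([np.roll(s, -int(round(d * fs)))
                      for s, d in zip(signals, delays)])

# toy usage: channel 1 lags channel 0 by 50 ms
fs, f0 = 100.0, 2.0
t = np.arange(400) / fs
sig = np.vstack([np.sin(2 * np.pi * f0 * t),
                 np.sin(2 * np.pi * f0 * (t - 0.05))])
aligned = phase_align(sig, f0, fs)
```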
NASA Astrophysics Data System (ADS)
Song, Qingguana; Wang, Cheng; Han, Yong; Gao, Dayuan; Duan, Yingliang
2017-06-01
Since detonation often initiates and propagates in non-homogeneous mixtures, investigating its behavior in non-uniform mixtures is significant not only for industrial explosions in leaked combustible gas, but also for experimental investigations with a vertical concentration gradient caused by differences in the molecular weights of the gas mixture. The objective of this work is to show the detonation behavior in mixtures with different concentration gradients using a detailed chemical reaction mechanism. A globally planar detonation in the H2-O2 system is simulated by a high-resolution code based on the fifth-order weighted essentially non-oscillatory (WENO) scheme for spatial discretization and third-order Additive Runge-Kutta schemes for time discretization. Different shocked combustion modes appear in the fuel-rich and fuel-lean layers due to the concentration gradient effect. Globally, for the cases with lower gradients detonation can be sustained through an alternation of the multi-head and single-head modes, whereas for the cases with higher gradients detonation propagates in a single-head mode.
Approximated affine projection algorithm for feedback cancellation in hearing aids.
Lee, Sangmin; Kim, In-Young; Park, Young-Cheol
2007-09-01
We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
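A minimal sketch of an affine-projection update in which the P x P normal equations are solved approximately by Gauss-Seidel sweeps is shown below (Python; the residue-based error vector and the prediction-driven step-size control of the proposed algorithm are not reproduced, and the toy system-identification loop is invented for illustration):

```python
import numpy as np

def ap_gs_update(w, X, d, mu=0.5, delta=1e-3, gs_sweeps=1):
    """One affine-projection (AP) filter update.

    w : (L,) adaptive filter weights
    X : (L, P) matrix whose columns are the last P input regressors
    d : (P,) desired samples paired with the columns of X
    """
    e = d - X.T @ w                           # a-priori errors
    R = X.T @ X + delta * np.eye(X.shape[1])  # regularized normal matrix
    g = np.zeros_like(e)
    for _ in range(gs_sweeps):                # Gauss-Seidel sweeps on R g = e
        for i in range(len(e)):
            g[i] = (e[i] - R[i, :i] @ g[:i]
                    - R[i, i + 1:] @ g[i + 1:]) / R[i, i]
    return w + mu * X @ g

# toy usage: identify a short FIR system
rng = np.random.default_rng(3)
L, P = 8, 4
h = rng.normal(size=L)                        # unknown "true" system
w = np.zeros(L)
x = rng.normal(size=2000)
for k in range(L + P, len(x)):
    X = np.column_stack([x[k - p - L:k - p][::-1] for p in range(P)])
    d = np.array([h @ X[:, p] for p in range(P)])
    w = ap_gs_update(w, X, d)                 # w converges toward h
```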
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three-dimensional mixed-element unstructured meshes comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous test cases of the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.
An Indoor Positioning Method for Smartphones Using Landmarks and PDR.
Wang, Xi; Jiang, Mingxing; Guo, Zhongwen; Hu, Naijun; Sun, Zhongwei; Liu, Jing
2016-12-15
Recently, location-based services (LBS) have become increasingly popular in indoor environments. Among the indoor positioning techniques providing LBS, fusion approaches combining WiFi-based and pedestrian dead reckoning (PDR) techniques are drawing more and more attention from researchers. Although this fusion method performs well in some cases, it still has limitations, such as heavy computation and inconvenience for real-time use. In this work, we study the map information of a given indoor environment, analyze variations of WiFi received signal strength (RSS), define several kinds of indoor landmarks, and then utilize these landmarks to correct the accumulated errors of PDR. This fusion scheme, called Landmark-aided PDR (LaP), proves to be lightweight and suitable for real-time implementation, as demonstrated by an Android application designed for the experiment. We compared LaP with other PDR-based fusion approaches. Experimental results show that the proposed scheme achieves a significant improvement, with an average accuracy of 2.17 m.
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), or P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. In this case, however, EEG data are typically divided into groups and analyzed by separate processing procedures, so the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and the support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI. PMID:26880873
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy when compared to a multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves segmentation performance superior to typical benchmark schemes.
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.
Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin
2018-03-02
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development capability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and with the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on the grey correlation.
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when used independently. To improve the performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning with nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
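The numerical scheme above builds on iterative shrinkage thresholding. The following is a minimal sketch of generic ISTA for the standard l1-regularized least-squares problem, not the paper's full objective (which couples supervised weight learning with nonlocal coding); the dictionary D, signal y, and parameter names are placeholders.

```python
import numpy as np

def ista(D, y, lam, n_iter=200):
    """Iterative shrinkage thresholding for min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)             # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

Each iteration is a gradient step on the data-fit term followed by a soft threshold, which is the step such schemes embed inside their alternating minimization.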
Adaptive Tracking Control for Robots With an Interneural Computing Scheme.
Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang
2018-04-01
Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.
A climate model projection weighting scheme accounting for performance and interdependence
NASA Astrophysics Data System (ADS)
Knutti, Reto; Sedláček, Jan; Sanderson, Benjamin M.; Lorenz, Ruth; Fischer, Erich M.; Eyring, Veronika
2017-02-01
Uncertainties of climate projections are routinely assessed by considering simulations from different models. Observations are used to evaluate models, yet there is a debate about whether and how to explicitly weight model projections by agreement with observations. Here we present a straightforward weighting scheme that accounts both for the large differences in model performance and for model interdependencies, and we test reliability in a perfect model setup. We provide weighted multimodel projections of Arctic sea ice and temperature as a case study to demonstrate that, for some questions at least, it is meaningless to treat all models equally. The constrained ensemble shows reduced spread and a more rapid sea ice decline than the unweighted ensemble. We argue that the growing number of models with different characteristics and considerable interdependence finally justifies abandoning strict model democracy, and we provide guidance on when and how this can be achieved robustly.
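A weight of this performance-plus-independence type is often written as w_i proportional to exp(-D_i^2/sigma_D^2) / (1 + sum_{j!=i} exp(-S_ij^2/sigma_S^2)), where D_i is a model-observation distance, S_ij a model-model distance, and sigma_D, sigma_S tuning scales. The sketch below assumes that form; the specific diagnostics and shape parameters used in the paper are not reproduced here.

```python
import numpy as np

def performance_independence_weights(D, S, sigma_d, sigma_s):
    """Skill-and-independence model weights (assumed Gaussian form).

    D : (n,) distances of each model from observations (performance)
    S : (n, n) pairwise inter-model distances (independence)
    """
    n = len(D)
    perf = np.exp(-(D / sigma_d) ** 2)
    # similarity to every *other* model shrinks a model's weight
    S_off = S + np.diag(np.full(n, np.inf))      # ignore self-distance
    indep = 1.0 + np.sum(np.exp(-(S_off / sigma_s) ** 2), axis=1)
    w = perf / indep
    return w / w.sum()
```

With sigma_d large the scheme recovers model democracy; with many near-duplicate models the denominator down-weights each member of the cluster, which is the interdependence correction the abstract describes.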
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
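The weighting at the core of the fifth-order WENO schemes studied here can be sketched compactly. This is the standard textbook WENO-JS construction (smoothness indicators, ideal weights 1/10, 6/10, 3/10, epsilon = 1e-6) for one left-biased face reconstruction; it is illustrative, not the paper's implementation.

```python
import numpy as np

def weno5_weights(v):
    """Nonlinear weights of the fifth-order WENO-JS reconstruction for one
    cell face, from five cell averages v = (v[i-2], ..., v[i+2])."""
    eps = 1e-6
    # smoothness indicators of the three candidate stencils
    b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 1/4*(v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 1/4*(v[1] - v[3])**2
    b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 1/4*(3*v[2] - 4*v[3] + v[4])**2
    d = np.array([0.1, 0.6, 0.3])                # ideal (linear) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    return alpha / alpha.sum()

def weno5_reconstruct(v):
    """Left-biased face value v_{i+1/2} from the weighted candidate stencils."""
    w = weno5_weights(v)
    p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6
    p1 = ( -v[1] + 5*v[2] +  2*v[3]) / 6
    p2 = (2*v[2] + 5*v[3] -   v[4]) / 6
    return w @ np.array([p0, p1, p2])
```

On smooth data the nonlinear weights approach the ideal weights and fifth-order accuracy is recovered; near a discontinuity the large smoothness indicator suppresses the offending stencil, which is the source of the scheme's built-in numerical dissipation discussed above.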
Neural model of gene regulatory network: a survey on supportive meta-heuristics.
Biswas, Surama; Acharyya, Sriyankar
2016-06-01
A gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug discovery, GRNs have been modelled through various mathematical and computational schemes reported in survey articles. Neural and neuro-fuzzy models have been a focus of attention in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proven highly effective. Considering these facts, this paper surveys neural modelling schemes for GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e., learning connection weights) within such models. The survey covers two structure-related approaches to inferring GRNs, the global-structure approach and the substructure approach, and describes two neural modelling schemes: artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are also reviewed.
Site selection model for new metro stations based on land use
NASA Astrophysics Data System (ADS)
Zhang, Nan; Chen, Xuewu
2015-12-01
Since the construction of a metro system generally lags behind the development of urban land use, the sites of metro stations should adapt to their surroundings, an issue rarely discussed in previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to obtain a ranking of the candidate schemes. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
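The entropy weight method used above admits a compact sketch: indicator weights follow from the information entropy of the normalized decision matrix. This minimal version assumes benefit-type (larger-is-better) indicators already scaled to positive values; the combination with AHP's subjective weights is left out.

```python
import numpy as np

def entropy_weights(X):
    """Objective attribute weights from the entropy method.

    X : (m, n) decision matrix, m alternatives (candidate sites) by
        n benefit-type indicators (larger is better, positive values).
    """
    P = X / X.sum(axis=0)                        # column-normalized proportions
    m = X.shape[0]
    # entropy of each indicator; 0*log(0) is treated as 0
    with np.errstate(divide='ignore', invalid='ignore'):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()
```

Indicators that barely discriminate between sites have entropy near 1 and thus receive little weight, which is why the method complements AHP's judgment-based weights in the paper's ranking.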
Using new aggregation operators in rule-based intelligent control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.
1990-01-01
A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
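The defining property of the OWA operator referenced above is that weights attach to the rank positions of the sorted arguments rather than to particular arguments, which is what lets it interpolate between And (min) and Or (max). A minimal sketch, including Yager's standard quantifier-to-weights construction; the "Most"-like quantifier shown is only an illustrative choice.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the sorted inputs."""
    v = np.sort(values)[::-1]                    # descending order
    return float(np.dot(weights, v))

def quantifier_weights(Q, n):
    """OWA weights induced by a regular non-decreasing quantifier Q:[0,1]->[0,1],
    w_k = Q(k/n) - Q((k-1)/n) (Yager's construction)."""
    return [Q(k / n) - Q((k - 1) / n) for k in range(1, n + 1)]

# weights (1,0,...,0) give Or (max); (0,...,0,1) give And (min).
# A "Most"-like quantifier yields an aggregation in between:
most = lambda x: min(1.0, max(0.0, (x - 0.3) / 0.5))
w = quantifier_weights(most, 3)
print(owa([0.2, 0.9, 0.5], w))
```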
Stubbs, R James; Pallister, Carolyn; Whybrow, Stephen; Avery, Amanda; Lavin, Jacquie
2011-01-01
This project audited the rate and extent of weight loss in a primary care/commercial weight management organisation partnership scheme. 34,271 patients were referred to Slimming World for 12 weekly sessions. Data were analysed using individual weekly weight records. Average (SD) BMI change was -1.5 kg/m² (1.3), weight change -4.0 kg (3.7), percent weight change -4.0% (3.6), rate of weight change -0.3 kg/week, and number of sessions attended 8.9 (3.6) of 12. For patients attending at least 10 of 12 sessions (n = 19,907 or 58.1%), average (SD) BMI change was -2.0 kg/m² (1.3), weight change -5.5 kg (3.8), percent weight change -5.5% (3.5), rate of weight change -0.4 kg/week, and average number of sessions attended was 11.5 (0.7) (p < 0.001, compared to all patients). Weight loss was greater in men (n = 3,651) than in women (n = 30,620) (p < 0.001). At least 5% weight loss was achieved by 35.8% of all patients enrolled and by 54.7% of patients attending 10 or more sessions. Weight gain was prevented in 92.1% of all patients referred. Attendance explained 29.6% and percent weight lost in week 1 explained 18.4% of the variance in weight loss. Referral to a commercial organisation is a practical option for National Health Service (NHS) weight management strategies, which achieves clinically safe and effective weight loss. Copyright © 2011 S. Karger AG, Basel.
White-nose syndrome pathology grading in Nearctic and Palearctic bats
Pikula, Jiri; Amelon, Sybill K.; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G.; Zukal, Jan; Martínková, Natália
2017-01-01
While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings for an individual results in a weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increasing Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and the extent of UV fluorescence. In a case report, we demonstrated that UV fluorescence disappears within two weeks of euthermy. Change in fluorescence was coupled with a reduction in weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than the Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis. The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity. PMID:28767673
Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-11-01
In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in the smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
Quantitative DLA-based compressed sensing for T1-weighted acquisitions
NASA Astrophysics Data System (ADS)
Svehla, Pavel; Nguyen, Khieu-Van; Li, Jing-Rebecca; Ciobanu, Luisa
2017-08-01
High resolution Manganese Enhanced Magnetic Resonance Imaging (MEMRI), which uses manganese as a T1 contrast agent, has great potential for functional imaging of live neuronal tissue at single-neuron scale. However, reaching high resolutions often requires long acquisition times, which can lead to reduced image quality due to sample deterioration and hardware instability. Compressed Sensing (CS) techniques offer the opportunity to significantly reduce the imaging time. The purpose of this work is to test the feasibility of CS acquisitions based on Diffusion Limited Aggregation (DLA) sampling patterns for high resolution quantitative T1-weighted imaging. Fully encoded and DLA-CS T1-weighted images of Aplysia californica neural tissue were acquired on a 17.2T MRI system. The MR signal corresponding to single, identified neurons was quantified for both versions of the T1-weighted images. For 50% undersampling, DLA-CS can accurately quantify signal intensities in T1-weighted acquisitions, leading to differences of only 1.37% compared with the fully encoded data, with minimal impact on image spatial resolution. In addition, we compared the conventional polynomial undersampling scheme with the DLA and showed that, for the data at hand, the latter performs better. Depending on the image signal-to-noise ratio, higher undersampling ratios can be used to further reduce the acquisition time in MEMRI-based functional studies of living tissues.
A hybrid linear/nonlinear training algorithm for feedforward neural networks.
McLoone, S; Brown, M D; Irwin, G; Lightbody, A
1998-01-01
This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
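The linear half of such a hybrid scheme reduces to a least-squares solve for the output-layer weights given the hidden-layer activations, typically via a truncated SVD for numerical robustness. A minimal sketch follows; the matrix names and truncation rule are illustrative, not the paper's exact routine.

```python
import numpy as np

def linear_weights_svd(H, T, rcond=1e-10):
    """Least-squares output weights W solving H @ W ~ T through the SVD.

    H : (n_samples, n_hidden) hidden-layer activations
    T : (n_samples,) or (n_samples, n_outputs) targets
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)   # drop tiny singular values
    return Vt.T @ (np.diag(s_inv) @ (U.T @ T))
```

In an integrated routine of this kind, each gradient update of the nonlinear (hidden-layer) weights would be followed by recomputing H and re-solving for W, so the linear parameters are always optimal given the current nonlinear ones.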
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lose image details, or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method is proposed in this paper. The proposed method can generate high-quality fused images using a weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of the pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. Extensive experiments on multimodal medical images show that, compared with numerous state-of-the-art MIF methods, the proposed method preserves image details very well and avoids the introduction of artifacts, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
Content relatedness in the social web based on social explicit semantic analysis
NASA Astrophysics Data System (ADS)
Ntalianis, Klimis; Otterbacher, Jahna; Mastorakis, Nikolaos
2017-06-01
In this paper, a novel content-relatedness algorithm for social media content is proposed, based on the Explicit Semantic Analysis (ESA) technique. The proposed scheme takes social interactions into consideration. In particular, starting from the vector-space representation model, similarity is expressed as a summation of term-weight products. Here, term weights are estimated by a social computing method in which the strength of each term is calculated from the attention the term receives. To this end, each post is split into two parts, the title and the comments area, and attention is defined by the number of social interactions such as likes and shares. The overall approach is named Social Explicit Semantic Analysis (S-ESA). Experimental results on real data show the advantages and limitations of the proposed approach, and an initial comparison between ESA and S-ESA is very promising.
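The abstract specifies similarity as a summation of term-weight products with weights set by social attention, but not the exact attention formula; the sketch below therefore assumes a simple frequency-times-interactions weighting purely for illustration.

```python
def social_weight(term_count, likes, shares):
    """Attention-based term strength (assumed form: raw frequency scaled
    by the post's social interactions; not the paper's exact formula)."""
    return term_count * (1 + likes + shares)

def s_esa_similarity(doc_a, doc_b):
    """Similarity as a summation of term-weight products over shared terms.

    doc_a, doc_b : dicts mapping term -> social weight
    """
    shared = set(doc_a) & set(doc_b)
    return sum(doc_a[t] * doc_b[t] for t in shared)

# Example: two posts sharing the term "election" with different attention
a = {'election': social_weight(3, likes=10, shares=2), 'poll': 1.0}
b = {'election': social_weight(1, likes=50, shares=8), 'vote': 2.0}
print(s_esa_similarity(a, b))
```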
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC established climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than others. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables (e.g., outgoing longwave radiation and surface temperature). Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly between the equal-weighted ensemble average and an ensemble weighted using a process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers collective behavior, similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and a spatial grid analysis for a kinetic-ion, adiabatic-electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as for understanding differences between δf and full-f approaches to plasma simulation.
Gianini, Loren; Roberto, Christina A; Attia, Evelyn; Walsh, B Timothy; Thomas, Jennifer J; Eddy, Kamryn T; Grilo, Carlos M; Weigel, Thomas; Sysko, Robyn
2017-08-01
This study evaluated the DSM-5 severity specifiers for treatment-seeking groups of participants with anorexia nervosa (AN), the purging form of bulimia nervosa (BN), and binge-eating disorder (BED). One hundred and sixty-two participants with AN, 93 with BN, and 343 with BED were diagnosed using semi-structured interviews, sub-categorized using the DSM-5 severity specifiers, and compared on demographic and cross-sectional clinical measures. In AN, the number of previous hospitalizations and the duration of illness increased with severity, but there was no difference across severity groups on measures of eating pathology, depression, or measures of self-reported physical or emotional functioning. In BN, the level of eating concerns increased across the severity groups, but the groups did not differ on measures of depression, self-esteem, and most eating pathology variables. In BN, support was also found for an alternative severity classification scheme based upon the number of purging methods. In BED, levels of several measures of eating pathology and self-reported physical and emotional functioning increased across the severity groups. For BED, however, support was also found for an alternative severity classification scheme based upon overvaluation of shape and weight. Preliminary evidence was also found for a transdiagnostic severity index based upon overvaluation of shape and weight. Overall, these data show limited support for the DSM-5 severity specifiers for BN and modest support for the DSM-5 severity specifiers for AN and BED. © 2017 Wiley Periodicals, Inc.
Adaptive Neural Network Control for the Trajectory Tracking of the Furuta Pendulum.
Moreno-Valenzuela, Javier; Aguilar-Avelar, Carlos; Puga-Guzman, Sergio A; Santibanez, Victor
2016-12-01
The purpose of this paper is to introduce a novel adaptive neural network-based control scheme for the Furuta pendulum, which is a two degree-of-freedom underactuated system. Adaptation laws for the input and output weights are also provided. The proposed controller is able to guarantee tracking of a reference signal for the arm while the pendulum remains in the upright position. The key aspect of the derivation of the controller is the definition of an output function that depends on the position and velocity errors. The internal and external dynamics are rigorously analyzed, thereby proving the uniform ultimate boundedness of the error trajectories. By using real-time experiments, the new scheme is compared with other control methodologies, therein demonstrating the improved performance of the proposed adaptive algorithm.
Transmit Designs for the MIMO Broadcast Channel With Statistical CSI
NASA Astrophysics Data System (ADS)
Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan
2014-09-01
We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound of the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low-complexity transmission scheme is provided. Numerical examples illustrate the strong performance of the proposed low-complexity scheme.
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piecewise constant signal. First, the iteration scheme is demonstrated on a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.
Thermal control extravehicular life support system
NASA Technical Reports Server (NTRS)
1975-01-01
The results of a comprehensive study which defined an Extravehicular Life Support System Thermal Control System (TCS) are presented. The design of the prototype hardware and a detail summary of the prototype TCS fabrication and test effort are given. Several heat rejection subsystems, water management subsystems, humidity control subsystems, pressure control schemes and temperature control schemes were evaluated. Alternative integrated TCS systems were studied, and an optimum system was selected based on quantitative weighing of weight, volume, cost, complexity and other factors. The selected subsystem contains a sublimator for heat rejection, bubble expansion tank for water management, a slurper and rotary separator for humidity control, and a pump, a temperature control valve, a gas separator and a vehicle umbilical connector for water transport. The prototype hardware complied with program objectives.
Efficient weighting strategy for enhancing synchronizability of complex networks
NASA Astrophysics Data System (ADS)
Wang, Youquan; Yu, Feng; Huang, Shucheng; Tu, Juanjuan; Chen, Yan
2018-04-01
Networks with a high propensity for synchronization are desired in many applications ranging from biology to engineering. In general, there are two ways to enhance the synchronizability of a network: link rewiring and/or link weighting. In this paper, we propose a new link weighting strategy based on the concept of the neighborhood subgroup. The neighborhood subgroup of a node i through node j, Gi→j, is the set of nodes u that are first-order neighbors of j (excluding i). Our proposed weighting scheme uses local and global structural properties of the network, such as node degree, betweenness centrality and closeness centrality. We applied the method to scale-free and Watts-Strogatz networks of different structural properties and show the good performance of the proposed weighting scheme. Furthermore, as model networks cannot capture all essential features of real-world complex networks, we also considered a number of undirected and unweighted real-world networks. To the best of our knowledge, the proposed weighting strategy outperforms previously published weighting methods by enhancing the synchronizability of these real-world networks.
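The abstract defines the neighborhood subgroup precisely but not the final weighting formula combining degree, betweenness and closeness; the networkx sketch below computes the subgroup and then applies one plausible, purely illustrative combination of those centralities.

```python
import networkx as nx

def neighborhood_subgroup(G, i, j):
    """G_{i->j}: first-order neighbors of j, excluding i (per the abstract)."""
    return set(G.neighbors(j)) - {i}

def weight_edges(G):
    """Hypothetical edge weighting: the paper combines degree, betweenness
    and closeness centralities over the subgroups; the exact formula is not
    given in the abstract, so this degree-times-betweenness sum over the
    two subgroups is only an illustration."""
    bc = nx.betweenness_centrality(G)
    for i, j in G.edges():
        sub = neighborhood_subgroup(G, i, j) | neighborhood_subgroup(G, j, i)
        G[i][j]['weight'] = sum(G.degree(u) * bc[u] for u in sub) or 1.0
    return G

G = weight_edges(nx.watts_strogatz_graph(100, 4, 0.1, seed=1))
```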
A simple algorithm to improve the performance of the WENO scheme on non-uniform grids
NASA Astrophysics Data System (ADS)
Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong
2018-02-01
This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS (WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1995) scheme designed on uniform grids in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. In the meantime, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for the non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
Feasibility study of a soil-based rubberized CLSM.
Wu, Jason Y; Tsai, Mufan
2009-02-01
The development of beneficial uses of recycled scrap tires is always in great demand around the world. The disposal of on-site surplus excavated soil and the production of standard engineering aggregates have also been facing increasing environmental and ecological challenges in congested islands, such as Taiwan. This paper presents an experimental study using recycled crumb rubber and native silty sand to produce a lightweight, soil-based, rubberized controlled low strength material (CLSM) for a bridge approach repair. To assess the technical feasibility of this material, the effects of weight ratios of cement-to-water (C/W) and water-to-solid (W/S), and of rubber content on the engineering properties for different mixtures were investigated. The presented test results include flowability, unit weight, strength, settlement potential, and bearing capacity. Based on the findings, we conclude that a soil-based rubberized CLSM with 40% sand by weight and an optimal design ratio of 0.7 for C/W and 0.35 for W/S can be used for the proposed bridge approach repair. Such a mixture has demonstrated acceptable flowability, strength, and bearing capacity. Its lower unit weight, negligible compressibility, and hydrocollapse potential also help ensure that detrimental settlement is unlikely to occur. The results illustrate a novel scheme of CLSM production, and suggest a beneficial alternative for the reduction of scrap tires as well as conservation of resources and environment.
Modified weighted fair queuing for packet scheduling in mobile WiMAX networks
NASA Astrophysics Data System (ADS)
Satrya, Gandeva B.; Brotoharsono, Tri
2013-03-01
The increase in user mobility and the need for data access anytime also increase the interest in broadband wireless access (BWA). The best available quality of experience for mobile data services is assured for IEEE 802.16e-based users. The main problem in assuring a high QoS level is how to allocate available resources among users in order to meet QoS requirements for criteria such as delay, throughput, packet loss and fairness. No specific scheduling mechanism is stated by the IEEE standards, which leaves it open for implementer differentiation. Five QoS service classes are defined by IEEE 802.16: Unsolicited Grant Service (UGS), Extended Real-Time Polling Service (ertPS), Real-Time Polling Service (rtPS), Non-Real-Time Polling Service (nrtPS) and Best Effort (BE). Each class has different QoS parameter requirements for throughput and delay/jitter constraints. This paper proposes a Modified Weighted Fair Queuing (MWFQ) scheduling scheme based on Weighted Round Robin (WRR) and Weighted Fair Queuing (WFQ). The performance of MWFQ was assessed using the QoS criteria above. The simulation shows that using the concept of total packet size calculation improves the network's performance.
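The WFQ core underlying MWFQ can be sketched with virtual finish times: each flow's packets accrue finish times advanced by size/weight, and packets are transmitted in finish-time order. This simplified sketch assumes continuously backlogged queues and omits the system virtual clock and the paper's total-packet-size modification.

```python
def wfq_order(packets, weights):
    """Simplified WFQ: serve packets in order of virtual finish time.

    packets : dict flow_id -> list of packet sizes (bytes), arrival order
    weights : dict flow_id -> positive weight
    """
    finish = {f: 0.0 for f in packets}
    order = []
    for f, sizes in packets.items():
        for size in sizes:
            finish[f] += size / weights[f]       # per-flow virtual finish time
            order.append((finish[f], f, size))
    return [(f, size) for _, f, size in sorted(order)]

# Example: a weight-2 rtPS flow is served ahead of a weight-1 BE flow.
print(wfq_order({'rtPS': [500, 500], 'BE': [1500]},
                {'rtPS': 2.0, 'BE': 1.0}))
# [('rtPS', 500), ('rtPS', 500), ('BE', 1500)]
```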
NASA Astrophysics Data System (ADS)
Borgelt, Christian
In clustering we often face the situation that only a subset of the available attributes is relevant for forming clusters, even though this may not be known beforehand. In such cases it is desirable to have a clustering algorithm that automatically weights attributes or even selects a proper subset. In this paper I study such an approach for fuzzy clustering, based on the idea of transferring an alternative to the fuzzifier (Klawonn and Höppner, What is fuzzy about fuzzy clustering? Understanding and improving the concept of the fuzzifier, In: Proc. 5th Int. Symp. on Intelligent Data Analysis, 254-264, Springer, Berlin, 2003) to attribute-weighting fuzzy clustering (Keller and Klawonn, Int J Uncertain Fuzziness Knowl Based Syst 8:735-746, 2000). In addition, by reformulating Gustafson-Kessel fuzzy clustering, a scheme for weighting and selecting principal axes can be obtained. While in Borgelt (Feature weighting and feature selection in fuzzy clustering, In: Proc. 17th IEEE Int. Conf. on Fuzzy Systems, IEEE Press, Piscataway, NJ, 2008) I already presented such an approach for a global selection of attributes and principal axes, this paper extends it to a cluster-specific selection, thus arriving at a fuzzy subspace clustering algorithm (Parsons, Haque, and Liu, 2004).
Analog hardware implementation of neocognitron networks
NASA Astrophysics Data System (ADS)
Inigo, Rafael M.; Bonde, Allen, Jr.; Holcombe, Bradford
1990-08-01
This paper deals with the analog implementation of neocognitron-based neural networks. All of Fukushima's and related work on the neocognitron is based on digital computer simulations; to fully exploit the power of this network paradigm, an analog electronic approach is proposed. We first implemented a 6-by-6 sensor network with discrete analog components and fixed weights. The network was given weight values to recognize the characters U, L and F, which are recognized regardless of their location on the sensor and under various levels of distortion and noise. The network performance has also shown excellent correlation with software simulation results. Next we implemented a variable-weight network which can be trained to recognize simple patterns by means of self-organization. The adaptable weights were implemented with FETs configured as voltage-controlled resistors. To implement a variable weight there must be some type of "memory" to store the weight value and hold it while the value is reinforced or incremented. Two methods were evaluated: an analog sample-hold circuit and a digital storage scheme using binary counters. The latter is preferable for VLSI implementation because it uses standard components and does not require capacitors. The analog design and implementation of these small-scale networks demonstrates the feasibility of implementing more complicated ANNs in electronic hardware. The circuits developed can also be designed for VLSI implementation.
Boosting specificity of MEG artifact removal by weighted support vector machine.
Duan, Fang; Phothisonothai, Montri; Kikuchi, Mitsuru; Yoshimura, Yuko; Minabe, Yoshio; Watanabe, Kastumi; Aihara, Kazuyuki
2013-01-01
An automatic artifact-removal method for magnetoencephalography (MEG) is presented in this paper. The proposed method is based on independent component analysis (ICA) and the support vector machine (SVM). Unlike previous studies, we consider two factors that influence performance. First, the class imbalance among the independent components (ICs) of MEG is handled by a weighted SVM. Second, instead of simply setting a fixed weight for each class, a re-weighting scheme is used to preserve useful MEG ICs. Experimental results on a manually marked MEG dataset show that the proposed method correctly distinguishes artifacts from MEG ICs while preserving 99.72% ± 0.67% of the MEG ICs. The classification accuracy was 97.91% ± 1.39%. In addition, the method was found not to be sensitive to individual differences: cross-validation (leave-one-subject-out) results showed an average accuracy of 97.41% ± 2.14%.
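Handling class imbalance through per-class misclassification costs is directly available in common SVM implementations. The scikit-learn sketch below illustrates the idea on synthetic stand-in features; the class weights shown are arbitrary placeholders, whereas the paper tunes them with a re-weighting scheme rather than fixing them.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))              # stand-in IC features
y = (rng.random(200) < 0.1).astype(int)     # ~10% artifact ICs: imbalanced labels

# Weighted SVM: misclassifying a genuine MEG IC (class 0) costs more than
# missing an artifact, which is how specificity (IC preservation) is boosted.
clf = SVC(kernel='rbf', class_weight={0: 5.0, 1: 1.0})
clf.fit(X, y)
print((clf.predict(X) == y).mean())         # training accuracy of the sketch
```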
Computerized Liver Volumetry on MRI by Using 3D Geodesic Active Contour Segmentation
Huynh, Hieu Trung; Karademir, Ibrahim; Oto, Aytekin; Suzuki, Kenji
2014-01-01
OBJECTIVE Our purpose was to develop an accurate automated 3D liver segmentation scheme for measuring liver volumes on MRI. SUBJECTS AND METHODS Our scheme for MRI liver volumetry consisted of three main stages. First, the preprocessing stage was applied to T1-weighted MRI of the liver in the portal venous phase to reduce noise and produce the boundary-enhanced image. This boundary-enhanced image was used as a speed function for a 3D fast-marching algorithm to generate an initial surface that roughly approximated the shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the initial surface to precisely determine the liver boundaries. The liver volumes determined by our scheme were compared with those manually traced by a radiologist, used as the reference standard. RESULTS The two volumetric methods reached excellent agreement (intraclass correlation coefficient, 0.98) without statistical significance (p = 0.42). The average (± SD) accuracy was 99.4% ± 0.14%, and the average Dice overlap coefficient was 93.6% ± 1.7%. The mean processing time for our automated scheme was 1.03 ± 0.13 minutes, whereas that for manual volumetry was 24.0 ± 4.4 minutes (p < 0.001). CONCLUSION The MRI liver volumetry based on our automated scheme agreed excellently with reference-standard volumetry, and it required substantially less completion time. PMID:24370139
Orbit Estimation of Non-Cooperative Maneuvering Spacecraft
2015-06-01
… only take on values that generate real sigma points; therefore, λ > −n. The additional weighting scheme is outlined in the following equations: κ = α² … orbit shapes resulted in a similar model weighting. Additional cases of this orbit type also resulted in heavily weighting smaller η-value models. It is … determined using both the symmetric and additional-parameters UTs. The best values for the weighting parameters are then compared for each test case.
The Cladophora complex (Chlorophyta): new views based on 18S rRNA gene sequences.
Bakker, F T; Olsen, J L; Stam, W T; van den Hoek, C
1994-12-01
Evolutionary relationships among species traditionally ascribed to the Siphonocladales/Cladophorales have remained unclear due to a lack of phylogenetically informative characters and extensive morphological plasticity resulting in morphological convergence. This study explores some of the diversity within the generic complex Cladophora and its siphonocladalean allies. Twelve species of Cladophora, representing 6 of the 11 morphological sections recognized by van den Hoek, were analyzed along with 8 siphonocladalean species using 18S rRNA gene sequences. The final alignment consisted of 1460 positions containing 92 phylogenetically informative substitutions. Weighting schemes (EOR weighting, combinatorial weighting) were applied in maximum parsimony analysis to correct for substitution bias. Stem characters were weighted 0.66 relative to single-stranded characters to correct for secondary structural constraints. Both weighting approaches resulted in greater phylogenetic resolution. The results confirm that there is no basis for the independent recognition of the Cladophorales and Siphonocladales: the Siphonocladales is polyphyletic, and Cladophora is paraphyletic. All analyses support two principal lineages, one containing predominantly tropical members including almost all siphonocladalean taxa, the other consisting of mostly warm- to cold-temperate species of Cladophora.
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the atmospheric characteristics of January 2010 (austral summer) was performed using the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS). The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. In this scheme, the precipitation forecasts from different closures can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights that yields the best combination of the hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecast precipitation, the objective function was defined as the quadratic difference between the five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted to generate a new set of mass fluxes. The results indicated better model skill with the new methodology than with the old ensemble-mean calculation.
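A sketch of the weight-fitting step: the study minimizes the quadratic misfit with the firefly algorithm, whereas this illustration substitutes SciPy's SLSQP under convex-combination constraints; array shapes and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def fit_closure_weights(precip_fields, obs):
    """Weights for combining the mass-flux closures of the convective scheme.

    precip_fields: (n_closures, ny, nx) simulated precipitation, one per closure
    obs:           (ny, nx) observed field (e.g., TRMM estimates)
    """
    k = precip_fields.shape[0]
    A = precip_fields.reshape(k, -1).T             # each column: one closure, flattened
    y = obs.ravel()
    objective = lambda w: np.sum((A @ w - y) ** 2)  # quadratic misfit
    res = minimize(objective, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x
```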
NASA Astrophysics Data System (ADS)
Zhang, Ziyu; Jiang, Wen; Dolbow, John E.; Spencer, Benjamin W.
2018-01-01
We present a strategy for the numerical integration of partial elements with the eXtended finite element method (X-FEM). The new strategy is specifically designed for problems with propagating cracks through a bulk material that exhibits inelasticity. Following a standard approach with the X-FEM, as the crack propagates new partial elements are created. We examine quadrature rules that have sufficient accuracy to calculate stiffness matrices regardless of the orientation of the crack with respect to the element. This permits the number of integration points within elements to remain constant as a crack propagates, and for state data to be easily transferred between successive discretizations. In order to maintain weights that are strictly positive, we propose an approach that blends moment-fitted weights with volume-fraction based weights. To demonstrate the efficacy of this simple approach, we present results from numerical tests and examples with both elastic and plastic material response.
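One plausible reading of the blending step, as a sketch rather than the paper's exact rule: take the largest convex combination of moment-fitted and volume-fraction weights that keeps every quadrature weight strictly positive.

```python
import numpy as np

def blend_quadrature_weights(w_moment, w_volfrac, eps=1e-12):
    """Blend moment-fitted weights with strictly positive volume-fraction weights."""
    w_m = np.asarray(w_moment, dtype=float)
    w_v = np.asarray(w_volfrac, dtype=float)   # assumed >= eps everywhere
    theta = 1.0                                # fraction of moment-fitted weights kept
    neg = w_m < eps
    if neg.any():
        # largest theta with theta*w_m + (1 - theta)*w_v >= eps at every point
        theta = np.min((w_v[neg] - eps) / (w_v[neg] - w_m[neg]))
    return theta * w_m + (1.0 - theta) * w_v
```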
Seismic hazard in the Nation's breadbasket
Boyd, Oliver; Haller, Kathleen; Luco, Nicolas; Moschetti, Morgan P.; Mueller, Charles; Petersen, Mark D.; Rezaeian, Sanaz; Rubinstein, Justin L.
2015-01-01
The USGS National Seismic Hazard Maps were updated in 2014 and included several important changes for the central United States (CUS). Background seismicity sources were improved using a new moment-magnitude-based catalog; a new adaptive, nearest-neighbor smoothing kernel was implemented; and maximum magnitudes for background sources were updated. Areal source zones developed by the Central and Eastern United States Seismic Source Characterization for Nuclear Facilities project were simplified and adopted. The weighting scheme for ground motion models was updated, giving more weight to models with a faster attenuation with distance compared to the previous maps. Overall, hazard changes (2% probability of exceedance in 50 years, across a range of ground-motion frequencies) were smaller than 10% in most of the CUS relative to the 2008 USGS maps despite new ground motion models and their assigned logic tree weights that reduced the probabilistic ground motions by 5–20%.
Quality of Recovery Evaluation of the Protection Schemes for Fiber-Wireless Access Networks
NASA Astrophysics Data System (ADS)
Fu, Minglei; Chai, Zhicheng; Le, Zichun
2016-03-01
With the rapid development of fiber-wireless (FiWi) access networks, protection schemes have received more and more attention due to the risk of huge data loss when failures occur. However, there are few studies that evaluate the performance of FiWi protection schemes under a unified evaluation criterion. In this paper, the quality of recovery (QoR) method was adopted to evaluate the performance of three typical protection schemes (the MPMC, OBOF, and RPMF schemes) against segment-level failures in FiWi access networks. The QoR models of the three schemes were derived in terms of availability, quality of backup path, recovery time, and redundancy. To compare the performance of the three protection schemes comprehensively, five different classes of network services, such as emergency service, prioritized elastic service, and conversational service, were utilized by assigning different QoR weights. Simulation results showed that, for most service cases, the RPMF scheme proved to be the best solution for enhancing survivability when planning a FiWi access network.
Li, Qi; Chen, Li-ding; Qi, Xin; Zhang, Xin-yu; Ma, Yan; Fu, Bo-jie
2007-01-01
Guanting Reservoir, one of the drinking water supply sources of Beijing, suffers from eutrophication. It is mainly supplied by the Guishui River. Thus, investigating the causes of phosphorus (P) loss and improving P management strategies in the Guishui River watershed are important for the safety of drinking water in this region. In this study, a Revised Field P Ranking Scheme (PRS) was developed from the original Field PRS to reflect field-scale vulnerability to P loss. The new scheme includes six factors, each assigned a relative weight and a determination method. The factors were classified into transport factors and source factors, and China's environmental quality standards for surface water and its soil erosion classification and grading standards were used in the scheme. With the new scheme, thirty-four fields in the Guishui River watershed were categorized as having "low", "medium", or "high" potential for P loss into runoff. The results showed that the P loss risks of orchard and vegetable fields were higher than those of corn and soybean fields, and that the source factors were the main factors affecting P loss in the study area. Controlling P input and improving P usage efficiency are therefore critical to decreasing P loss. Based on the results, it was suggested that more attention be paid to vegetable and orchard fields, since they have extremely high P application rates and high soil-test P. Compared with P surplus computed from field measurements, the Revised Field PRS was more suitable for reflecting field characteristics and had a higher capacity to identify critical source areas of P loss than the original PRS.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
NASA Astrophysics Data System (ADS)
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
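A sketch of how such a weight might be estimated from the archived data pairs, assuming observation error is neglected and the posterior mean is linear in the sample variance (the weighted-average form stated in the abstract); names are illustrative.

```python
import numpy as np

def hybrid_weight(innovations, ens_variances):
    """Weight w in  E[true error variance | s] = w*s + (1 - w)*clim_var."""
    d2 = innovations ** 2        # noisy samples of the true error variance
    clim_var = d2.mean()         # climatological forecast error variance
    # under the linear model, the least-squares slope of d2 on s is exactly w
    w = np.cov(ens_variances, d2)[0, 1] / np.var(ens_variances, ddof=1)
    return float(np.clip(w, 0.0, 1.0)), clim_var
```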
Wang, Long; Liu, Yong; Yin, Zengshan
2018-05-12
To achieve launch-on-demand for Operationally Responsive Space (ORS) missions, an intra-satellite wireless network (ISWN) is presented in this article. It provides a wireless, modularized scheme for intra-spacecraft sensing and data buses. By removing the wired data bus, the commercial off-the-shelf (COTS)-based wireless modular architecture reduces both the volume and weight of the satellite platform, thus enabling rapid design and cost savings in development and launch. Based on an analysis of on-orbit data demands, a hybrid time division multiple access/carrier sense multiple access (TDMA/CSMA) protocol is proposed. It includes an improved clear channel assessment (CCA) mechanism and a traffic-adaptive slot allocation method. To analyze the access process, a Markov model is constructed, and a detailed calculation covering the unsaturated cases is given. Simulations show that the proposed protocol satisfies the demands and performs better than existing schemes. It helps to build a fully wireless satellite instead of the current wired ones and will contribute to providing dynamic space capabilities for ORS missions.
Fuzzy adaptive integration scheme for low-cost SINS/GPS navigation system
NASA Astrophysics Data System (ADS)
Nourmohammadi, Hossein; Keighobadi, Jafar
2018-01-01
Due to weak stand-alone accuracy as well as poor run-to-run stability of micro-electro-mechanical system (MEMS)-based inertial sensors, special approaches are required to integrate a low-cost strap-down inertial navigation system (SINS) with the global positioning system (GPS), particularly in long-term applications. This paper aims to enhance the long-term performance of conventional SINS/GPS navigation systems using a fuzzy adaptive integration scheme. The main concept behind the proposed adaptive integration is the good performance of an attitude-heading reference system (AHRS) in low-acceleration motions and its degradation in maneuvering or accelerated motions. Depending on vehicle maneuvers, gravity-based attitude angles can be intelligently utilized to improve orientation estimation in the SINS. A knowledge-based fuzzy inference system is developed for decision-making between the AHRS and the SINS according to vehicle maneuvering conditions. Inertial measurements are the main input data of the fuzzy system to determine the maneuvering level during vehicle motion. Accordingly, appropriate weighting coefficients are produced to combine the SINS/GPS and the AHRS efficiently. The proposed integrated navigation system is assessed using real data from airborne tests.
Luo, Shaohua; Wu, Songli; Gao, Ruizhen
2015-07-01
This paper investigates chaos control for the brushless DC motor (BLDCM) system by adaptive dynamic surface approach based on neural network with the minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.
Efficient sparse matrix multiplication scheme for the CYBER 203
NASA Technical Reports Server (NTRS)
Lambiotte, J. J., Jr.
1984-01-01
This work was directed toward the development of an efficient algorithm for performing sparse matrix multiplication on the CYBER-203. The desire to provide software that lets the user choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements led to a diagonal-based algorithm in which one of three storage types is selected for each diagonal. For each storage type, an initialization subroutine estimates the CPU and storage requirements based upon results from previously performed numerical experimentation. These requirements are adjusted by user-provided weights that reflect the relative importance the user places on each resource. The three storage types were chosen to be efficient on the CYBER-203 for diagonals that are sparse, moderately sparse, or dense; however, for many densities, no storage type is most efficient with respect to both resource requirements, and the user-supplied weights dictate the choice.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of the Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate network weights. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. The results obtained are compared with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong
2016-02-01
Sudden water source pollution caused by hazardous materials has gradually become a major threat to the safety of urban water supplies. Over the past years, various treatment techniques have been proposed for removing the pollutants to minimize the threat of such pollution. Given the diversity of available techniques, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed, based on a circulation-correction-improved Group-G1 method, to determine the optimal emergency treatment technology scheme, covering contaminant elimination both in drinking water sources and in water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and subdivided into four levels. A technique evaluation index system containing four sets of criteria weights was then constructed in stage 2 to obtain the optimal treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested on an actual cadmium contamination accident that occurred in 2012. The results show that this system can facilitate scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.
NASA Astrophysics Data System (ADS)
Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo
2016-10-01
Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction is analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, from which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Different forms of analysis are then used to compare the SA-based and LS-based schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes and indicates that they can effectively widen the accurately handled wavenumber range compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
Transport on Riemannian manifold for functional connectivity-based classification.
Ng, Bernard; Dressler, Martin; Varoquaux, Gaël; Poline, Jean Baptiste; Greicius, Michael; Thirion, Bertrand
2014-01-01
We present a Riemannian approach for classifying fMRI connectivity patterns before and after intervention in longitudinal studies. A fundamental difficulty with using connectivity as features is that covariance matrices live on the positive semi-definite cone, which renders their elements inter-related. The implicit independent feature assumption in most classifier learning algorithms is thus violated. In this paper, we propose a matrix whitening transport for projecting the covariance estimates onto a common tangent space to reduce the statistical dependencies between their elements. We show on real data that our approach provides significantly higher classification accuracy than directly using Pearson's correlation. We further propose a non-parametric scheme for identifying significantly discriminative connections from classifier weights. Using this scheme, a number of neuroanatomically meaningful connections are found, whereas no significant connections are detected with pure permutation testing.
Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.
Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq
2016-01-01
This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models by exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a subfield of soft computing, is exploited for modelling the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the designed schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
NASA Astrophysics Data System (ADS)
Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay
2004-10-01
A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference blocks in the frame buffer, is proposed and analyzed for error resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
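The prediction step itself is a weighted superposition of hypotheses; a minimal sketch with hypothetical block shapes:

```python
import numpy as np

def mhmcp_predict(ref_blocks, coeffs):
    """Multi-hypothesis prediction: weighted superposition of reference blocks."""
    ref = np.asarray(ref_blocks, dtype=float)   # (H, 16, 16) candidate blocks
    w = np.asarray(coeffs, dtype=float)         # hypothesis coefficients, summing to ~1
    return np.tensordot(w, ref, axes=1)         # (16, 16) predicted block
```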
Bio-inspired adaptive feedback error learning architecture for motor control.
Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo
2012-10-01
This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, namely a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (the LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this representation to generate corrective terms in the framework of a control task. Furthermore, we illustrate how a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (suitable for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. We then show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller, and we evaluate how the scheme scales for simulated plants of high degrees of freedom (7-DOF).
Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.
Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G
2012-10-01
The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extraction methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity), and 95.04% (overall clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was 43% lower than in previous ECG clustering schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
TUW at the First Total Recall Track
2015-11-20
Appendix A were ignored. 2.1. Term weighting. The bmi used the basic tf.idf weighting scheme, as given by: (1) weight_T(t, d) = (1 + log(tf_{t,d})) × log(N/df_t); where t is a term, d a document, tf_{t,d} the term frequency, df_t the document frequency, and N is the number of documents in the collection. ... save us some training effort. The used weight, marked by "B" in the run names, is given by: (2) weight_B(t, d) = tf_{t,d} ...
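Formula (1) transcribes directly (formula (2) is truncated in the snippet and is not reconstructed here):

```python
import math

def weight_T(tf_td, df_t, n_docs):
    """bmi's basic tf.idf weight: (1 + log tf_{t,d}) * log(N / df_t)."""
    if tf_td == 0 or df_t == 0:
        return 0.0
    return (1.0 + math.log(tf_td)) * math.log(n_docs / df_t)
```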
Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.
Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier
2015-05-01
Optimization of a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, of the diffusion gradient scheme (simultaneous or sequential), and of weighted averaging across b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated on 12 patients with chronic liver diseases. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s·mm⁻²) were found necessary for suitable quantification of the IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas perfusion fraction and ADC variations were not found to be significant. Results showed that IVIM could be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme, and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results showed that parameters such as DSlow and DFast based on the optimized IVIM protocol can be relevant biomarkers to distinguish between nonadvanced and advanced fibrosis. © 2014 Wiley Periodicals, Inc.
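For reference, DSlow, DFast, and f enter through the usual bi-exponential IVIM signal model; the sketch below fits it over the four optimized b-values using synthetic, noise-free data with illustrative parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_fast, d_slow):
    """Bi-exponential IVIM model: S(b)/S0 = f*exp(-b*DFast) + (1-f)*exp(-b*DSlow)."""
    return f * np.exp(-b * d_fast) + (1.0 - f) * np.exp(-b * d_slow)

b_values = np.array([0.0, 10.0, 80.0, 800.0])       # s/mm^2, per the abstract
signal = ivim_signal(b_values, 0.25, 0.05, 1.0e-3)  # synthetic, noise-free data
params, _ = curve_fit(ivim_signal, b_values, signal,
                      p0=[0.2, 0.03, 1.5e-3],
                      bounds=([0, 0, 0], [1, 1, 0.1]))
f_fit, d_fast_fit, d_slow_fit = params
```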
Mars, Rogier B.; Jbabdi, Saad; Sallet, Jérôme; O’Reilly, Jill X.; Croxson, Paula L.; Olivier, Etienne; Noonan, MaryAnn P.; Bergmann, Caroline; Mitchell, Anna S.; Baxter, Mark G.; Behrens, Timothy E.J.; Johansen-Berg, Heidi; Tomassini, Valentina; Miller, Karla L.; Rushworth, Matthew F.S.
2011-01-01
Despite the prominence of parietal activity in human neuroimaging investigations of sensorimotor and cognitive processes, there remains uncertainty about basic aspects of parietal cortical anatomical organization. Descriptions of human parietal cortex draw heavily on anatomical schemes developed in other primate species, but the validity of such comparisons has been questioned by claims that there are fundamental differences between the parietal cortex in humans and other primates. A scheme is presented for parcellation of human lateral parietal cortex into component regions on the basis of anatomical connectivity and the functional interactions of the resulting clusters with other brain regions. Anatomical connectivity was estimated using diffusion-weighted magnetic resonance imaging (MRI)-based tractography, and functional interactions were assessed by correlations in activity measured with functional MRI (fMRI) at rest. Resting state functional connectivity was also assessed directly in the rhesus macaque lateral parietal cortex in an additional experiment, and the patterns found reflected known neuroanatomical connections. Cross-correlation in the tractography-based connectivity patterns of parietal voxels reliably parcellated human lateral parietal cortex into ten component clusters. The resting state functional connectivity of human superior parietal and intraparietal clusters with frontal and extrastriate cortex suggested correspondences with areas in macaque superior and intraparietal sulcus. Functional connectivity patterns with parahippocampal cortex and premotor cortex again suggested fundamental correspondences between inferior parietal cortex in humans and macaques. In contrast, the human parietal cortex differs in the strength of its interactions between the central inferior parietal lobule region and the anterior prefrontal cortex. PMID:21411650
On Asymptotically Good Ramp Secret Sharing Schemes
NASA Astrophysics Data System (ADS)
Geil, Olav; Martin, Stefano; Martínez-Peñas, Umberto; Matsumoto, Ryutaroh; Ruano, Diego
Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
ERIC Educational Resources Information Center
Soh, Kaycheng
2015-01-01
In the various world university ranking schemes, the "Overall" is a sum of the weighted indicator scores. As the indicators are of a different nature from each other, "Overall" conceals important differences. Factor analysis of the data from three prominent ranking schemes reveals that there are two factors in each of the…
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
MRI-based quantification of Duchenne muscular dystrophy in a canine model
NASA Astrophysics Data System (ADS)
Wang, Jiahui; Fan, Zheng; Kornegay, Joe N.; Styner, Martin A.
2011-03-01
Duchenne muscular dystrophy (DMD) is a progressive and fatal X-linked disease caused by mutations in the DMD gene. Magnetic resonance imaging (MRI) has shown potential to provide non-invasive and objective biomarkers for monitoring disease progression and therapeutic effect in DMD. In this paper, we propose a semi-automated scheme to quantify MRI features of golden retriever muscular dystrophy (GRMD), a canine model of DMD. Our method was applied to a natural history data set and a hydrodynamic limb perfusion data set. The scheme is composed of three modules: pre-processing, muscle segmentation, and feature analysis. The pre-processing module includes: calculation of T2 maps, spatial registration of T2 weighted (T2WI) images, T2 weighted fat suppressed (T2FS) images, and T2 maps, and intensity calibration of T2WI and T2FS images. We then manually segment six pelvic limb muscles. For each of the segmented muscles, we finally automatically measure volume and intensity statistics of the T2FS images and T2 maps. For the natural history study, our results showed that four of six muscles in affected dogs had smaller volumes and all had higher mean intensities in T2 maps as compared to normal dogs. For the perfusion study, the muscle volumes and mean intensities in T2FS were increased in the post-perfusion MRI scans as compared to pre-perfusion MRI scans, as predicted. We conclude that our scheme successfully performs quantitative analysis of muscle MRI features of GRMD.
Re-formulation and Validation of Cloud Microphysics Schemes
NASA Astrophysics Data System (ADS)
Wang, J.; Georgakakos, K. P.
2007-12-01
The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in the cloud microphysics schemes embedded in models such as WRF and MM5 and in cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating the four graupel production terms involved in the accretion between rain, snow, and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their appropriate mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of the terms and thus become one of the major sources of graupel overproduction and the associated extreme radar reflectivity in simulations. These results are corroborated by several reports; for example, the analytic solution overestimates graupel production by collisions between raindrops and snow by up to 230%. The dichotomy between "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and is thus another significant cause of graupel overproduction in hydrometeor simulations. In addition, assigning the same density to graupel generated by the freezing of supercooled water and by the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and a dynamic fallspeed-diameter relationship and density-diameter relationship for rimed snow is assigned to graupel based on the diagnosed riming degree. To test whether these new treatments can improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies. A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and microphysics simulation significantly.
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with certain sets of learning weights. A necessary condition on the learning weights for the convergence of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. An important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that SNRGs have threshold values such that any fundamental memory is correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given, and theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are accordingly obtained. In principle, when all SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate, in a certain known stochastic sense, for AMNNs, and the WOPL model can thus achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
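A minimal sketch of the WOPL rule as described (uniform learning weights recover the Hopfield outer-product rule); names are illustrative.

```python
import numpy as np

def wopl_matrix(memories, learning_weights):
    """Weighted outer-product learning: W = sum_k v_k * x_k x_k^T, zero diagonal."""
    X = np.asarray(memories, dtype=float)         # (K, N) bipolar (+/-1) patterns
    v = np.asarray(learning_weights, dtype=float)  # (K,) one learning weight per memory
    W = (X.T * v) @ X                              # weighted sum of outer products
    np.fill_diagonal(W, 0.0)                       # no self-connections
    return W
```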
Research on personalized recommendation algorithm based on spark
NASA Astrophysics Data System (ADS)
Li, Zeng; Liu, Yu
2018-04-01
With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet people's needs. How to better recommend products to interested users has therefore become both an opportunity and a challenge in the era of big data. At present, each platform enterprise has its own recommendation algorithm, but pushing information efficiently and accurately is still an urgent problem for personalized recommendation systems. In this paper, a hybrid algorithm combining user-based collaborative filtering and content-based recommendation is proposed on Spark to improve the efficiency and accuracy of recommendation through weighted processing. The experiments show that recommendation under this scheme is more efficient and accurate.
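A toy sketch of the weighted blending step, outside Spark for brevity; the blend weight alpha and the scoring interfaces are hypothetical.

```python
def hybrid_score(cf_score, content_score, alpha=0.7):
    """Weighted blend of collaborative-filtering and content-based scores."""
    return alpha * cf_score + (1.0 - alpha) * content_score

def recommend(user, items, cf, content, top_n=10, alpha=0.7):
    """Rank items by blended score; cf/content map (user, item) -> score."""
    scored = ((item, hybrid_score(cf(user, item), content(user, item), alpha))
              for item in items)
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
```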
Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline
Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin
2014-01-01
Automated segmentation and labeling of individual brain anatomical regions in MRI are challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, the inherent natural variability as well as disease-related changes in MR appearance make a single atlas image often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual "atlases" that encompasses the expected variability in the studied population. In our study, we propose a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first pair and co-register all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification-based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg was evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans, achieving a mean Dice coefficient of 81.73% for the subcortical structures. PMID:24567717
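The fusion step, weighted majority voting, is straightforward; a sketch assuming integer label maps and one similarity-derived weight per atlas:

```python
import numpy as np

def weighted_majority_vote(atlas_labels, atlas_weights):
    """Per voxel, pick the label with the largest total atlas weight."""
    L = np.asarray(atlas_labels)                  # (A, ...) propagated label maps
    w = np.asarray(atlas_weights, dtype=float)    # (A,) similarity-based weights
    n_labels = int(L.max()) + 1
    votes = np.zeros((n_labels,) + L.shape[1:])
    for lab in range(n_labels):
        votes[lab] = np.tensordot(w, (L == lab).astype(float), axes=1)
    return votes.argmax(axis=0)
```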
Rapid evaluation of high-performance systems
NASA Astrophysics Data System (ADS)
Forbes, G. W.; Ruoff, J.
2017-11-01
System assessment for design often involves averages, such as rms wavefront error, that are estimated by ray tracing through a sample of points within the pupil. Novel general-purpose sampling and weighting schemes are presented and it is also shown that optical design can benefit from tailored versions of these schemes. It turns out that the type of Gaussian quadrature that has long been recognized for efficiency in this domain requires about 40-50% more ray tracing to attain comparable accuracy to generic versions of the new schemes. Even greater efficiency gains can be won, however, by tailoring such sampling schemes to the optical context where azimuthal variation in the wavefront is generally weaker than the radial variation. These new schemes are special cases of what is known in the mathematical world as cubature. Our initial results also led to the consideration of simpler sampling configurations that approximate the newfound cubature schemes. We report on the practical application of a selection of such schemes and make observations that aid in the discovery of novel cubature schemes relevant to optical design of systems with circular pupils.
NASA Astrophysics Data System (ADS)
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several high-order (HO) and high-resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that with the DC method, the scheme with the lowest CPU time is, in general, SOU. In contrast, relative to the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are between 3.8 and 23.1% and between 12.6 and 56.1% lower, respectively. However, the other schemes are more time-consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% longer, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) is proposed to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with an optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (the abcd model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across the models, whereas MM-O always assigns higher weights to the candidate model that performed best during the calibration period. Applying the multimodel algorithms to streamflow prediction at four different sites revealed that MM-1 performs better than all single models and than the optimal model combination scheme, MM-O, in predicting monthly flows as well as flows during wetter months.
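A sketch of the state-contingent weighting idea behind MM-1, under simplifying assumptions (quantile bins of the predictor state, inverse-MSE weights within each bin); the paper's actual skill-conditioning procedure may differ.

```python
import numpy as np

def mm1_style_weights(predictor_state, model_errors, n_bins=5):
    """Per-bin inverse-MSE weights so that model weights vary with the regime.

    predictor_state: (T,) predictor value at each time step
    model_errors:    (T, K) forecast errors of the K candidate models
    """
    edges = np.quantile(predictor_state, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, predictor_state) - 1, 0, n_bins - 1)
    weights = np.empty((n_bins, model_errors.shape[1]))
    for b in range(n_bins):
        inv_mse = 1.0 / (model_errors[bins == b] ** 2).mean(axis=0)
        weights[b] = inv_mse / inv_mse.sum()   # normalize within the bin
    return edges, weights
```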
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or while even increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey was applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD that uses the digital gradient of the gridded ECa data as a weighting function; and the third (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable, utilizing the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support in the sampling design process. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of the interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunham, Mark Edward; Baker, Zachary K; Stettler, Matthew W
2009-01-01
Los Alamos has recently completed the latest in a series of Reconfigurable Software Radios, which incorporates several key innovations in both hardware design and algorithms. Due to our focus on satellite applications, each design must extract the best size, weight, and power performance possible from the ensemble of Commodity Off-the-Shelf (COTS) parts available at the time of design. In this case we have achieved 1 TeraOps/second signal processing on a 1920 Megabit/second datastream, while using only 53 Watts mains power, 5.5 kg, and 3 liters. This processing capability enables very advanced algorithms such as our wideband RF compression scheme to operate remotely, allowing network bandwidth constrained applications to deliver previously unattainable performance.
NASA Astrophysics Data System (ADS)
Wang, W.; Wang, D.; Peng, Z. H.
2017-09-01
Without assuming that the communication topologies among the neural network (NN) weights are undirected or that the states of each agent are measurable, the cooperative learning NN output feedback control problem is addressed for uncertain nonlinear multi-agent systems with identical structures in strict-feedback form. By establishing directed communication topologies among NN weights to share their learned knowledge, NNs with cooperative learning laws are employed to identify the uncertainties. By designing NN-based κ-filter observers to estimate the unmeasurable states, a new cooperative learning output feedback control scheme is proposed to guarantee that the system outputs can track nonidentical reference signals with bounded tracking errors. A simulation example is given to demonstrate the effectiveness of the theoretical results.
NASA Astrophysics Data System (ADS)
Jones, J. D.; Ma, Xia; Clements, B. E.; Gibson, L. L.; Gustavsen, R. L.
2017-06-01
Gas-gun-driven plate-impact techniques were used to study the shock-to-detonation transition in LX-14 (95.5 weight % HMX, 4.5 weight % Estane binder). The transition was recorded using embedded electromagnetic particle velocity gauges. Initial shock pressures, P, ranged from 2.5 to 8 GPa, and the resulting distances to detonation, xD, were in the range 1.9 to 14 mm. Numerical simulations using the SURF reactive burn scheme, coupled with a linear US-up / Mie-Grüneisen equation of state for the reactant and a JWL equation of state for the products, match the experimental data well. Comparison of simulation with experiment, as well as the "best fit" parameter set for the simulations, is presented.
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-11-12
The increasing volume and complexity of data caused by uncertain environments is today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering algorithm to analyze water monitoring data. The varying-weights scheme uses the best indicator weighting, selected by a modified indicator-weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin fails to be calculated. With the fast clustering analysis, the quality of water samples can be identified. The algorithm is applied to water quality analysis of Haihe River (China) data obtained by the monitoring network over a period of eight years (2006-2013), with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
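The varying-weights idea can be emulated by rescaling features before ordinary K-means, since scaling feature j by sqrt(w_j) makes squared Euclidean distance equal to the weighted distance sum_j w_j (x_j − c_j)²; the MIWAS weight self-adjustment itself is not reproduced in this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def weighted_kmeans(X, indicator_weights, n_clusters=4, seed=0):
    """K-means under per-indicator weights via feature rescaling (sketch)."""
    w = np.asarray(indicator_weights, dtype=float)     # assumed strictly positive
    Xw = np.asarray(X, dtype=float) * np.sqrt(w)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Xw)
    centers = km.cluster_centers_ / np.sqrt(w)         # map centers back to original units
    return km.labels_, centers
```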
Weight Optimization of Active Thermal Management Using a Novel Heat Pump
NASA Technical Reports Server (NTRS)
Lear, William E.; Sherif, S. A.
2004-01-01
Efficient lightweight power generation and thermal management are two important aspects of space applications. Weight is added to space platforms by the inherent weight of the onboard power generation equipment and the additional weight of the required thermal management systems. Thermal management of spacecraft relies on the rejection of heat via radiation, a process that can result in large radiator mass, depending upon the heat rejection temperature. For some missions, it is advantageous to incorporate an active thermal management system, allowing the heat rejection temperature to be greater than the load temperature. This allows a reduction of radiator mass at the expense of additional system complexity. A particular type of active thermal management system is based on a thermodynamic cycle, developed by the authors, called the Solar Integrated Thermal Management and Power (SITMAP) cycle. This system has been a focus of the authors' research program in the recent past (see Fig. 1). One implementation of the system requires no moving parts, which decreases the vibration level and enhances reliability. Compression of the refrigerant working fluid is accomplished in this scheme via an ejector.
Multigroup cross section library for GFR2400
NASA Astrophysics Data System (ADS)
Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír
2017-09-01
In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme merging Monte Carlo and deterministic codes was developed. Several fine and coarse group structures and two weighting-flux options were analysed through 18 benchmark experiments selected from the ICSBEP handbook on the basis of performed similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 CE ENDF/B-VII.1 and the Korean KAFAX-E70 library. The comparison was based on integral parameters of calculations performed on full-core homogeneous models.
Science-based Region-of-Interest Image Compression
NASA Technical Reports Server (NTRS)
Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.
2004-01-01
As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.
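The priority-driven coding idea can be sketched independently of the specific wavelet coder: regions with higher scientific priority receive a finer quantization step, hence higher fidelity at a given bit budget. The priority-to-step mapping below is an illustrative assumption, not the paper's coder:

```python
import numpy as np

def priority_quantize(coeffs, priority, base_step=8.0):
    """Quantize transform coefficients with a step size that shrinks as the
    (science-derived) priority of the underlying region grows, so
    high-priority regions are transmitted at higher fidelity. The paper
    instead embeds region priorities inside a wavelet coder; this mapping
    is only an illustration of the same allocation principle."""
    step = base_step / (1.0 + priority)   # finer steps where priority is high
    return np.round(coeffs / step) * step
```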
The genetic code as a periodic table: algebraic aspects.
Bashford, J D; Jarvis, P D
2000-01-01
The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.
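The labelling scheme above is concrete enough to sketch: each codon maps to six coordinates, and a property index can then be fitted with low-order monomials by least squares. The property values below are random placeholders, not the physico-chemical indices analysed in the paper:

```python
import itertools
import numpy as np

# Base labels from the text: each nucleotide is a point in Z^2.
BASE = {'A': (-1, 0), 'C': (0, -1), 'G': (0, 1), 'U': (1, 0)}

# Each codon becomes a 6-coordinate vector (two coordinates per position).
codons = [''.join(c) for c in itertools.product('ACGU', repeat=3)]
coords = np.array([sum((BASE[b] for b in c), ()) for c in codons])  # (64, 6)

# Fit a property as a low-order polynomial of the 6 coordinates:
# constant + 6 linear monomials + all pairwise products.
y = np.random.default_rng(0).normal(size=64)          # placeholder data
quad = np.array([coords[:, i] * coords[:, j]
                 for i in range(6) for j in range(i, 6)]).T
A = np.hstack([np.ones((64, 1)), coords, quad])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # monomial coefficients
```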
An Inverter Packaging Scheme for an Integrated Segmented Traction Drive System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Gui-Jia; Tang, Lixin; Ayers, Curtis William
The standard voltage source inverter (VSI), widely used in electric vehicle/hybrid electric vehicle (EV/HEV) traction drives, requires a bulky dc bus capacitor to absorb the large switching ripple currents and prevent them from shortening the battery's life. The dc bus capacitor presents a significant barrier to meeting inverter cost, volume, and weight requirements for mass production of affordable EVs/HEVs. The large ripple currents become even more problematic for film capacitors (the capacitor technology of choice for EVs/HEVs) in high temperature environments, as their ripple current handling capability decreases rapidly with rising temperatures. It was shown in previous work that segmenting the VSI-based traction drive system can significantly decrease the ripple currents and thus the size of the dc bus capacitor. This paper presents an integrated packaging scheme to reduce the system cost of a segmented traction drive.
Adaptive critic neural network-based object grasping control using a three-finger gripper.
Jagannathan, S; Galan, Gustavo
2004-03-01
Grasping of objects has been a challenging task for robots. The complex grasping task can be divided into object contact control and manipulation subtasks. In this paper, the object contact control subtask is defined as the ability of the fingers of a gripper to follow a trajectory accurately. The object manipulation subtask is defined in terms of maintaining a predefined applied force by the fingers on the object. A sophisticated controller is necessary since the process of grasping an object without a priori knowledge of the object's size, texture, softness, gripper, and contact dynamics is rather difficult. Moreover, the object has to be secured accurately and quickly without damaging it. Since the gripper, contact dynamics, and the object properties are not typically known beforehand, an adaptive critic neural network (NN)-based hybrid position/force control scheme is introduced. The feedforward action generating NN in the adaptive critic NN controller compensates for the nonlinear gripper and contact dynamics. The learning of the action generating NN is performed on-line based on a critic NN output signal. The controller ensures that a three-finger gripper tracks a desired trajectory while applying desired forces on the object for manipulation. Novel NN weight tuning updates are derived for the action generating and critic NNs so that Lyapunov-based stability analysis can be shown. Simulation results demonstrate that, compared with conventional schemes, the proposed scheme successfully allows the fingers of a gripper to secure objects without knowledge of the underlying gripper and object contact dynamics.
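The paper's specific Lyapunov-derived tuning laws are not reproduced here; the following is only a generic, textbook-style adaptive-critic weight update (learning rate, damping term, and basis functions are assumptions) to illustrate the structure of critic-driven tuning:

```python
import numpy as np

def action_nn_update(W, phi_x, r_hat, x, lr=0.05, kappa=0.01):
    """One generic discrete tuning step for the action-generating NN's
    output-layer weights W (n_basis x n_out): move along the basis
    activations phi_x scaled by the critic signal r_hat, with an e-mod
    style damping term for boundedness. This is a common textbook form,
    not the specific law derived in the paper."""
    return W + lr * np.outer(phi_x, r_hat) - lr * kappa * np.linalg.norm(x) * W
```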
Danner, Marion; Vennedey, Vera; Hiligsmann, Mickaël; Fauser, Sascha; Gross, Christian; Stock, Stephanie
2017-09-01
In this study, we conducted an analytic hierarchy process (AHP) and a discrete choice experiment (DCE) to elicit the preferences of patients with age-related macular degeneration using identical attributes and levels. To compare preference-based weights for age-related macular degeneration treatment attributes and levels generated by two elicitation methods. The properties of both methods were assessed, including ease of instrument use. A DCE and an AHP experiment were designed on the basis of five attributes. Preference-based weights were generated using the matrix multiplication method for attributes and levels in AHP and a mixed multinomial logit model for levels in the DCE. Attribute importance was further compared using coefficient (DCE) and weight (AHP) level ranges. The questionnaire difficulty was rated on a qualitative scale. Patients were asked to think aloud while providing their judgments. AHP and DCE generated similar results regarding levels, stressing a preference for visual improvement, frequent monitoring, on-demand and less frequent injection schemes, approved drugs, and mild side effects. Attribute weights derived on the basis of level ranges led to a ranking that was opposite to the AHP directly calculated attribute weights. For example, visual function ranked first in the AHP and last on the basis of level ranges. The results across the methods were similar, with one exception: the directly measured AHP attribute weights were different from the level-based interpretation of attribute importance in both DCE and AHP. The dependence/independence of attribute importance on level ranges in DCE and AHP, respectively, should be taken into account when choosing a method to support decision making. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho
2015-12-01
A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where the eight vertices of each voxel are projected onto the detector plane. The range of the rectangular-shaped AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted by the length of intersection between the voxel and the ray. This procedure makes it possible to realize voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280 × 280 × 176 in 77 seconds from 680 projections of resolution 1024 × 768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively. Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times higher contrast-to-noise ratio, 1.04 times higher universal quality index, and 1.39 times higher normalized mutual information. © The Author(s) 2014.
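A minimal sketch of the ray-culling step, assuming a user-supplied cone-beam projection function mapping 3-D points to detector coordinates (the GPU kernel structure and shared-memory optimization are omitted):

```python
import numpy as np

def voxel_aabb_pixels(vertices_xyz, project, det_shape):
    """Given the 8 corners of one voxel and a projection function mapping
    3-D points to (u, v) detector coordinates, return the pixel-index
    ranges of the axis-aligned bounding box of the projected corners.
    Only rays through these pixels need a ray-voxel intersection test;
    `project` is an assumed, geometry-specific helper."""
    uv = np.array([project(p) for p in vertices_xyz])       # (8, 2)
    umin, vmin = np.floor(uv.min(axis=0)).astype(int)
    umax, vmax = np.ceil(uv.max(axis=0)).astype(int)
    umin, vmin = max(umin, 0), max(vmin, 0)                 # clip to detector
    umax = min(umax, det_shape[0] - 1)
    vmax = min(vmax, det_shape[1] - 1)
    return (umin, umax), (vmin, vmax)
```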
Intelligent call admission control for multi-class services in mobile cellular networks
NASA Astrophysics Data System (ADS)
Ma, Yufeng; Hu, Xiulin; Zhang, Yunyu
2005-11-01
Scarcity of the spectrum resource and the mobility of users make quality of service (QoS) provision a critical issue in mobile cellular networks. This paper presents a fuzzy call admission control scheme to meet the QoS requirements. A performance measure is formed as a weighted linear function of the new call and handoff call blocking probabilities of each service class. Simulations compare the proposed fuzzy scheme with complete sharing and guard channel policies. The results show that the fuzzy scheme delivers more robust performance in terms of the average blocking criterion.
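A minimal sketch of the weighted performance measure, with per-class weights assumed (handoff drops are typically penalized more heavily than new-call blocking):

```python
# Weighted linear blocking cost over service classes: the weights below are
# illustrative assumptions, not values from the paper.
def blocking_cost(p_new, p_handoff, w_new, w_handoff):
    """p_new/p_handoff: per-class new-call and handoff blocking
    probabilities; w_new/w_handoff: the corresponding class weights."""
    return sum(wn * pn + wh * ph
               for pn, ph, wn, wh in zip(p_new, p_handoff, w_new, w_handoff))

cost = blocking_cost([0.02, 0.05], [0.01, 0.02], [1.0, 1.0], [5.0, 5.0])
```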
NASA Astrophysics Data System (ADS)
Huang, Juntao; Shu, Chi-Wang
2018-05-01
In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.
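The bound-preserving step can be illustrated with the Zhang-Shu scaling limiter that the construction extends: point values of the in-cell polynomial are shrunk toward the cell average so they stay within prescribed bounds. A sketch (the exponential RK coupling and stiff source handling are not shown):

```python
import numpy as np

def scaling_limiter(u_points, u_avg, m, M, eps=1e-13):
    """Zhang-Shu-type scaling limiter for one cell: shrink the polynomial's
    quadrature-point values toward the cell average so they lie in [m, M].
    The cell average itself must already lie in [m, M]; the limiter then
    preserves it while enforcing the bound at the points."""
    umin, umax = u_points.min(), u_points.max()
    theta = min(1.0,
                abs((M - u_avg) / (umax - u_avg + eps)),
                abs((m - u_avg) / (umin - u_avg + eps)))
    return u_avg + theta * (u_points - u_avg)
```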
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
NASA Astrophysics Data System (ADS)
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to build a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh whose smallest length scales equal 2-3 flamelet lengths. The code development linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.
NASA Astrophysics Data System (ADS)
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena, such as the transformation of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, a finite difference technique with a θ-weighted scheme is employed, and in the space domain the MLS approximation is used, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving concentration transport phenomena.
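The θ-weighted time discretization is standard and easy to sketch in isolation; the following applies it to a simple 1-D diffusion operator on a uniform grid, whereas the paper combines it with MLS approximations on 2-D scattered nodes:

```python
import numpy as np

def theta_step(u, L, dt, theta=0.5):
    """One theta-weighted time step for du/dt = L u, with L a discrete
    spatial operator (here a matrix): theta=0 is explicit Euler, theta=1
    implicit Euler, theta=0.5 Crank-Nicolson."""
    n = len(u)
    I = np.eye(n)
    A = I - dt * theta * L
    b = (I + dt * (1 - theta) * L) @ u
    return np.linalg.solve(A, b)

# Example: 1-D diffusion operator on a uniform grid (Dirichlet interior).
n, dx = 50, 1.0 / 51
L = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
u = np.exp(-((np.linspace(0, 1, n) - 0.5) / 0.1)**2)
u = theta_step(u, L, dt=1e-4, theta=0.5)
```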
Hyun, Eugin; Jin, Young-Seok; Lee, Jong-Hun
2016-01-01
For an automotive pedestrian detection radar system, fast-ramp based 2D range-Doppler Frequency Modulated Continuous Wave (FMCW) radar is effective for distinguishing between moving targets and unwanted clutter. However, when a weak moving target such as a pedestrian exists together with strong clutter, the pedestrian may be masked by the side-lobe of the clutter even though they are notably separated in the Doppler dimension. To prevent this problem, one popular solution is the use of a windowing scheme with a weighting function. However, this method leads to a spread spectrum, so a pedestrian with weak signal power and slow Doppler may also be masked by the main-lobe of the clutter. With a fast-ramp based FMCW radar, if the target is moving, the complex spectrum of the range Fast Fourier Transform (FFT) changes with a constant phase difference over ramps. In contrast, the clutter exhibits constant phase irrespective of the ramps. Based on this fact, in this paper we propose a pedestrian detection method for highly cluttered environments using a coherent phase difference method. By detecting the coherent phase difference from the complex spectrum of the range-FFT, we first extract the range profile of the moving pedestrians. Then, through the Doppler FFT, we obtain the 2D range-Doppler map for only the pedestrian. To test the proposed detection scheme, we developed a real-time data logging system with a 24 GHz FMCW transceiver. In laboratory tests, we verified that the signal processing results from the proposed method were much better than those expected from the conventional 2D FFT-based detection method.
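A minimal numpy sketch of the coherent-phase-difference idea (the threshold and normalization are illustrative choices, not the paper's calibrated values):

```python
import numpy as np

def pedestrian_range_doppler(ramps, thresh=0.1):
    """ramps: (n_ramps, n_samples) beat signals of one fast-ramp FMCW frame.
    Take the range FFT per ramp; range bins whose spectrum shows a coherent
    ramp-to-ramp phase difference belong to moving targets, while static
    clutter keeps a (nearly) constant phase and is masked out before the
    Doppler FFT."""
    R = np.fft.fft(ramps, axis=1)                 # range FFT over samples
    dphi = np.angle(R[1:] * np.conj(R[:-1]))      # ramp-to-ramp phase step
    moving = np.abs(dphi.mean(axis=0)) > thresh   # coherent-phase bins only
    rd_map = np.fft.fftshift(np.fft.fft(R * moving, axis=0), axes=0)
    return moving, rd_map                         # mask and 2D range-Doppler map
```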
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
The two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme achieves better throughput than the APC scheme, but suffers from a higher packet error rate. The wireless channel state changes all the time, and because of this random, time-varying nature, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired throughput. Better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme is proposed: it adapts to the channel condition, carrying out transmission with the PC, APC, or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes alone.
Analysis on influencing factors of EV charging station planning based on AHP
NASA Astrophysics Data System (ADS)
Yan, F.; Ma, X. F.
2016-08-01
As a new means of transport, the electric vehicle (EV) is of great significance for alleviating the energy crisis, and EV charging station planning has far-reaching significance for the development of the EV industry. This paper analyzes the factors influencing EV charging station planning, applies the analytic hierarchy process (AHP) to analyze these factors further, and finally obtains the weight of each factor, providing a basis for evaluating charging station planning schemes.
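A minimal sketch of AHP weight extraction from a pairwise comparison matrix, with a hypothetical factor set and judgments; the principal eigenvector gives the weights, and the consistency ratio checks the judgments:

```python
import numpy as np

# Hypothetical pairwise judgments (Saaty 1-9 scale) for four illustrative
# factors, e.g. traffic density, grid capacity, land cost, environment.
A = np.array([[1.0, 3.0, 5.0, 3.0],
              [1/3, 1.0, 3.0, 1.0],
              [1/5, 1/3, 1.0, 1/2],
              [1/3, 1.0, 2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                              # factor weights (principal eigenvector)

n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)         # consistency index
CR = CI / 0.90                            # random index RI = 0.90 for n = 4
print("weights:", w.round(3), "CR:", round(CR, 3))   # CR < 0.1 is acceptable
```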
Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots
NASA Astrophysics Data System (ADS)
Chellal, R.; Cuvillon, L.; Laroche, E.
2017-04-01
This paper presents methodologies for the identification and control of 6-degrees-of-freedom (6-DoF) cable-driven parallel robots (CDPRs). First, a two-step identification methodology is proposed to accurately estimate the kinematic parameters independently and prior to the dynamic parameters of a physics-based model of CDPRs. Second, an original control scheme is developed, including a vision-based position controller tuned with the H∞ methodology and a cable tension distribution algorithm. The position is controlled in the operational space, making use of the end-effector pose measured by a motion-tracking system. A four-block H∞ design scheme with adjusted weighting filters ensures good trajectory tracking and disturbance rejection properties for the CDPR system, which is a nonlinear, coupled MIMO system with constrained states. The tension management algorithm generates control signals that maintain the cables under feasible tensions. The paper provides an extensive review of the available methods and presents an extension of one of them. The presented methodologies are evaluated in simulation and experimentally on a redundant 6-DoF INCA 6D CDPR with eight cables, equipped with a motion-tracking system.
Wastewater quality monitoring system using sensor fusion and machine learning techniques.
Qin, Xusong; Gao, Furong; Chen, Guohua
2012-03-15
A multi-sensor water quality monitoring system incorporating a UV/Vis spectrometer and a turbidimeter was used to monitor the Chemical Oxygen Demand (COD), Total Suspended Solids (TSS) and Oil & Grease (O&G) concentrations of the effluents from the Chinese restaurant on campus and an electrocoagulation-electroflotation (EC-EF) pilot plant. In order to handle the noise and information imbalance in the fused UV/Vis spectra and turbidity measurements during calibration model building, an improved boosting method, Boosting-Iterative Predictor Weighting-Partial Least Squares (Boosting-IPW-PLS), was developed in the present study. The Boosting-IPW-PLS method incorporates IPW into the boosting scheme to suppress the quality-irrelevant variables by assigning them small weights, and builds up models for the wastewater quality predictions based on the weighted variables. The monitoring system was tested in the field with satisfactory results, underlining the potential of this technique for the online monitoring of water quality. Copyright © 2011 Elsevier Ltd. All rights reserved.
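A simplified sketch of the iterative-predictor-weighting core (without the boosting wrapper), using scikit-learn's PLS; the importance-to-weight rule below is an assumption standing in for the paper's IPW step:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ipw_pls(X, y, n_components=3, n_iter=10):
    """Simplified iterative-predictor-weighting PLS: fit a PLS model,
    derive a per-variable importance from the magnitude of its regression
    coefficient, rescale the columns, and refit. Quality-irrelevant
    variables are driven toward zero weight. The full Boosting-IPW-PLS of
    the paper additionally wraps this loop in a boosting scheme."""
    w = np.ones(X.shape[1])
    for _ in range(n_iter):
        pls = PLSRegression(n_components=n_components).fit(X * w, y)
        imp = np.abs(pls.coef_).ravel()
        w = imp / imp.sum() * len(imp)    # normalized importance as weights
    return pls, w
```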
A Novel Algorithm for Detecting Protein Complexes with the Breadth First Search
Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi
2014-01-01
Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on the weighted density, ClusterBFS detects protein complexes in the weighted network by the breadth-first search algorithm, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes.
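A minimal sketch of weighted-density BFS cluster growth of this kind; the density definition and threshold below are common choices, not necessarily those of ClusterBFS:

```python
from collections import deque

def cluster_bfs(adj, seed, min_density=0.2):
    """Grow a cluster from `seed` in breadth-first order over a weighted
    PPI network, admitting a vertex only while the cluster's weighted
    density stays above a threshold. adj: symmetric dict-of-dicts
    {u: {v: weight}}. Density = sum of internal edge weights over the
    number of possible pairs (one common definition)."""
    def density(nodes):
        nodes = list(nodes)
        n = len(nodes)
        if n < 2:
            return 1.0
        w = sum(adj[u].get(v, 0.0) for i, u in enumerate(nodes)
                for v in nodes[i + 1:])
        return 2.0 * w / (n * (n - 1))

    cluster, queue, seen = {seed}, deque([seed]), {seed}
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if density(cluster | {v}) >= min_density:
                cluster.add(v)
                queue.append(v)
    return cluster
```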
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The INVQR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.
NASA Technical Reports Server (NTRS)
Milner, G. Martin; Black, Mike; Hovenga, Mike; Mcclure, Paul; Miller, Patrice
1988-01-01
The application of vibration monitoring to the rotating machinery typical of ECLSS components in advanced NASA spacecraft was studied. It is found that the weighted summation of the accelerometer power spectrum is the most successful detection scheme for a majority of problem types. Other detection schemes studied included high-frequency demodulation, cepstrum, clustering, and amplitude processing.
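A minimal sketch of a weighted power-spectrum detection statistic of the kind found most successful above; the weighting profile (here a Gaussian band around an assumed defect frequency) is an illustrative assumption:

```python
import numpy as np
from scipy import signal

def weighted_psd_statistic(accel, fs, weights_fn):
    """Detection statistic: weighted summation of the accelerometer power
    spectrum. weights_fn maps the frequency vector to weights emphasizing
    fault-sensitive bands."""
    f, Pxx = signal.welch(accel, fs=fs, nperseg=1024)
    return np.sum(weights_fn(f) * Pxx)

# Example: emphasize a band around an assumed 180 Hz defect frequency.
rng = np.random.default_rng(0)
x = rng.normal(size=16384)                       # stand-in accelerometer record
stat = weighted_psd_statistic(x, fs=10_240,
                              weights_fn=lambda f: np.exp(-((f - 180.0) / 20.0)**2))
```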
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of the associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate, based on the principle of Stein's unbiased risk estimate (SURE), of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers, including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction.
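The Monte-Carlo step estimates the divergence of a black-box reconstruction with a single random probe; a sketch for complex-valued data (step size and probe distribution are typical choices, not prescriptions from the paper):

```python
import numpy as np

def mc_divergence(recon, y, eps=1e-3, seed=0):
    """Monte-Carlo estimate of the divergence term needed by SURE for a
    black-box reconstruction f: perturb the complex k-space data y with a
    random probe b and form a finite-difference directional derivative,
    div f(y) ~ Re{ b^H (f(y + eps*b) - f(y)) } / eps."""
    rng = np.random.default_rng(seed)
    b = (rng.standard_normal(y.shape)
         + 1j * rng.standard_normal(y.shape)) / np.sqrt(2)
    return np.real(np.vdot(b, recon(y + eps * b) - recon(y))) / eps
```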
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
High-performance object tracking and fixation with an online neural estimator.
Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian
2007-02-01
Vision-based target tracking and fixation, to keep objects that move in three dimensions in view, is important for many tasks in several fields, including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. This paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take the physical (Lagrangian dynamics) properties of the vision system into account in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system's dynamics into account, yet needs no knowledge of dynamic nonlinearities or joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate boundedness of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation in the presence of severe target motion changes.
Numerical study of dam-break induced tsunami-like bore with a hump of different slopes
NASA Astrophysics Data System (ADS)
Cheng, Du; Zhao, Xi-zeng; Zhang, Da-ke; Chen, Yong
2017-12-01
Numerical simulation of a dam-break wave, as an imitation of a tsunami hydraulic bore, with a hump of different slopes is performed in this paper using an in-house code, a Constrained Interpolation Profile (CIP)-based model. The model is built on a Cartesian grid system with the Navier-Stokes equations, using the CIP method for the flow solver, and employs an immersed boundary method (IBM) for the treatment of the solid body boundary. A more accurate interface capturing scheme, the Tangent of hyperbola for interface capturing/Slope weighting (THINC/SW) scheme, is adopted as the interface capturing method. The CIP-based model is then applied to simulate the dam-break flow problem in a bumpy channel. Considerable attention is paid to the spilling-type reflected bore, the subsequent spilling-type wave breaking, free surface profiles, and water level variations over time. Computations are compared with available experimental data and other numerical results, quantitatively and qualitatively. Further investigation is conducted to analyze the influence of variable slopes on the flow features of the tsunami-like bore.
Fuzzy Classification of Ocean Color Satellite Data for Bio-optical Algorithm Constituent Retrievals
NASA Technical Reports Server (NTRS)
Campbell, Janet W.
1998-01-01
The ocean has traditionally been viewed as a two-class system. Morel and Prieur (1977) classified ocean water according to the dominant absorbent particle suspended in the water column. Case 1 waters have a high concentration of phytoplankton (and detritus) relative to other particles. Conversely, case 2 waters have inorganic particles, such as suspended sediments, in high concentrations. Little work has gone into the problem of mixing bio-optical models for these different water types. An approach is put forth here to blend bio-optical algorithms based on a fuzzy classification scheme. This scheme involves two procedures. First, a clustering procedure identifies classes and builds class statistics from in-situ optical measurements. Next, a classification procedure assigns satellite pixels partial memberships to these classes based on their ocean color reflectance signatures. These membership assignments can be used as the basis for weighting the retrievals from class-specific bio-optical algorithms. This technique is demonstrated with in-situ optical measurements and an image from the SeaWiFS ocean color satellite.
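The blending step reduces to a membership-weighted average of class-specific retrievals; a minimal sketch with hypothetical memberships and retrieval values:

```python
import numpy as np

def blended_retrieval(memberships, retrievals):
    """Blend class-specific bio-optical retrievals for one pixel using its
    fuzzy class memberships: a membership-weighted average.
    memberships: (n_classes,) partial memberships (need not sum to 1);
    retrievals:  (n_classes,) the constituent estimate from each
    class-specific algorithm."""
    m = np.asarray(memberships, dtype=float)
    return np.sum(m * np.asarray(retrievals)) / m.sum()

# e.g. chlorophyll blended from a case-1 and a case-2 algorithm
# (membership and retrieval values here are made up for illustration):
chl = blended_retrieval([0.8, 0.2], [0.31, 0.55])
```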
A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.
2011-01-01
Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…
TFM classification and staging of oral submucous fibrosis: A new proposal.
Arakeri, Gururaj; Thomas, Deepak; Aljabab, Abdulsalam S; Hunasgi, Santosh; Rai, Kirthi Kumar; Hale, Beverley; Fonseca, Felipe Paiva; Gomez, Ricardo Santiago; Rahimi, Siavash; Merkx, Matthias A W; Brennan, Peter A
2018-04-01
We have evaluated the rationale of existing grading and staging schemes for oral submucous fibrosis (OSMF) based on how they are categorized, and a novel classification and staging scheme is proposed. A total of 300 OSMF patients were evaluated for agreement between functional, clinical, and histopathological staging. Bilateral biopsies were assessed in 25 patients to evaluate any differences in histopathological staging of OSMF within the same mouth. The extent of clinician agreement for categorized staging data was evaluated using Cohen's weighted kappa analysis. Cross-tabulation was performed on categorical grading data to understand the intercorrelation, and unweighted kappa analysis was used to assess bilateral grade agreement. Probabilities of less than 0.05 were considered significant. Data were analyzed using SPSS Statistics (version 25.0, IBM, USA). Low agreement was found between all the stages, reflecting the independent nature of the trismus, clinical, and histopathological components (K = 0.312, 0.167, 0.152) in OSMF. Following this analysis, a three-component classification scheme (TFM classification) was developed that describes the severity of each component independently, grouping them with a novel three-tier staging scheme as a guide to the treatment plan. The proposed classification and staging could be useful for effective communication, categorization, recording of data and prognosis, and guiding treatment plans. Furthermore, the classification considers OSMF malignant transformation in detail. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Feasibility of reusing time-matched controls in an overlapping cohort.
Delcoigne, Bénédicte; Hagenbuch, Niels; Schelin, Maria Ec; Salim, Agus; Lindström, Linda S; Bergh, Jonas; Czene, Kamila; Reilly, Marie
2018-06-01
The methods developed for secondary analysis of nested case-control data have been illustrated only in simplified settings in a common cohort and have not found their way into biostatistical practice. This paper demonstrates the feasibility of reusing prior nested case-control data in a realistic setting where a new outcome is available in an overlapping cohort where no new controls were gathered and where all data have been anonymised. Using basic information about the background cohort and sampling criteria, the new cases and prior data are "aligned" to identify the common underlying study base. With this study base, a Kaplan-Meier table of the prior outcome extracts the risk sets required to calculate the weights to assign to the controls to remove the sampling bias. A weighted Cox regression, implemented in standard statistical software, provides unbiased hazard ratios. Using the method to compare cases of contralateral breast cancer to available controls from a prior study of metastases, we identified a multifocal tumor as a risk factor that has not been reported previously. We examine the sensitivity of the method to an imperfect weighting scheme and discuss its merits and pitfalls to provide guidance for its use in medical research studies.
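A sketch of the final estimation step, assuming the control weights have already been computed from the reconstructed risk sets; the lifelines library supports weighted Cox regression directly, and the data frame below is purely illustrative:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative analysis set: time, event and exposure columns plus a `w`
# column of inverse inclusion-probability weights derived from the
# Kaplan-Meier table of the prior outcome, as described above.
df = pd.DataFrame({
    "time":  [2.3, 4.1, 1.7, 5.0, 3.2],
    "event": [1, 0, 1, 0, 1],
    "expo":  [1, 0, 0, 1, 1],
    "w":     [1.0, 2.5, 1.8, 2.5, 1.0],   # made-up weights
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="w",
        robust=True)                       # robust SEs advisable with weights
print(cph.summary)                         # unbiased hazard ratios
```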
NASA Astrophysics Data System (ADS)
Hayatbini, N.; Faridzad, M.; Yang, T.; Akbari Asanjan, A.; Gao, X.; Sorooshian, S.
2016-12-01
Artificial Neural Networks (ANNs) are useful in many fields, including water resources engineering and management. However, due to the non-linear and chaotic characteristics associated with natural processes and human decision making, the use of ANNs in real-world applications is still limited, and their performance needs to be further improved for broader practical use. The commonly used Back-Propagation (BP) scheme and gradient-based optimization for training ANNs have already been found to be problematic in some cases. The BP scheme and gradient-based optimization methods carry the risk of premature convergence and of becoming stuck in local optima, and the search is highly dependent on initial conditions. Therefore, as an alternative to BP and gradient-based search schemes, we propose an effective and efficient global search method, termed the Shuffled Complex Evolutionary Global optimization algorithm with Principal Component Analysis (SP-UCI), to train the ANN connectivity weights. A large number of real-world datasets are tested with the SP-UCI-based ANN, as well as with various popular Evolutionary Algorithm (EA)-enhanced ANNs, i.e., Particle Swarm Optimization (PSO)-, Genetic Algorithm (GA)-, Simulated Annealing (SA)-, and Differential Evolution (DE)-enhanced ANNs. Results show that the SP-UCI-enhanced ANN is generally superior to the other EA-enhanced ANNs with regard to convergence and computational performance. In addition, we carried out a case study of hydropower scheduling at Trinity Lake in the western U.S., in which multiple climate indices are used as predictors for the SP-UCI-enhanced ANN. The reservoir inflows and hydropower releases are predicted up to sub-seasonal to seasonal scales. Results show that the SP-UCI-enhanced ANN achieves better statistics than the other EA-based ANNs, which demonstrates the usefulness and power of the proposed SP-UCI-enhanced ANN for reservoir operation and water resources engineering and management. The SP-UCI-enhanced ANN is universally applicable to many other regression and prediction problems, and it has good potential to be an alternative to the classical BP scheme and gradient-based optimization methods.
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance subsequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will subsequently be compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
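For reference, the fixed-neighbourhood normalized median test that the adaptive scheme builds on can be sketched as follows (one velocity component, structured grid; the paper's local-coherency adaptation of the neighbourhood size is not implemented here):

```python
import numpy as np

def normalized_median_test(U, eps=0.1, thresh=2.0):
    """Universal outlier detection (Westerweel & Scarano 2005) on one
    velocity component sampled on a structured grid: each vector is
    compared with the median of its 3x3 neighbours, normalized by the
    median residual of those neighbours plus a noise level eps."""
    ny, nx = U.shape
    flags = np.zeros_like(U, dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = np.delete(U[j-1:j+2, i-1:i+2].ravel(), 4)  # 8 neighbours
            med = np.median(nb)
            rm = np.median(np.abs(nb - med))                # residual scale
            flags[j, i] = np.abs(U[j, i] - med) / (rm + eps) > thresh
    return flags
```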
Control of parallel manipulators using force feedback
NASA Technical Reports Server (NTRS)
Nanua, Prabjot
1994-01-01
Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate-based scheme', uses only position and rate information for feedback. The second scheme, the 'force-based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant-gain control scheme better suited to parallel mechanisms, and it can be easily modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force-based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate-based scheme directly affects the other two directions.
NASA Astrophysics Data System (ADS)
Georgiou, Andreas; Skarlatos, Dimitrios
2016-07-01
Among the renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient since the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in years to come, and considerable investments are to be expected. Despite the size of solar plants, the siting procedure is a crucial factor for their efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial, to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework to the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, simplifying the problem and promoting the use of multiple criteria. A set of environmental, economic, social, and technical constraints, based on recent Cypriot legislation, European Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method in the context of the analytic hierarchy process (AHP) is applied to estimate the criteria weights in order to establish their relative importance in site evaluation. In addition, four different methods to combine information layers and check their sensitivity were used. The first considered all the criteria as being equally important and assigned them equal weight, whereas the others grouped the criteria and graded them according to their perceived importance. The overall suitability of the study region for siting solar parks is appraised through the summation rule. Strict application of the framework shows 3.0 % of the study region scoring a best-suitability index for solar resource exploitation, hence minimizing the risk of a potential investment. However, using different weighting schemes for the criteria, suitable areas may reach up to 83 % of the study region. The suggested methodological framework can be easily utilized by potential investors and renewable energy developers through a front-end web-based application with a proper GUI for personalized weighting schemes.
Weighted statistical parameters for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
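One simple weighting of this family assigns each sample the time span it covers under linear interpolation, so clumped points share weight and isolated points gain it; a sketch (the paper's full scheme also adapts to the noise level, which is omitted here):

```python
import numpy as np

def interp_weights(t):
    """Per-point weights proportional to the time interval each sample
    'covers' under linear interpolation (trapezoidal rule), normalized
    to sum to one."""
    t = np.asarray(t, dtype=float)
    dt = np.diff(t)
    w = np.empty_like(t)
    w[0], w[-1] = dt[0] / 2, dt[-1] / 2
    w[1:-1] = (dt[:-1] + dt[1:]) / 2
    return w / w.sum()

def weighted_mean_var(x, w):
    """Weighted mean and (reliability-weight) unbiased variance."""
    m = np.sum(w * x)
    return m, np.sum(w * (x - m)**2) / (1 - np.sum(w**2))

t = np.array([0.0, 0.1, 0.15, 0.2, 3.0, 6.5, 7.0])   # clumped then sparse
x = np.sin(t)
mean, var = weighted_mean_var(x, interp_weights(t))
```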
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partially weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally can yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
Teede, Helena J; Joham, Anju E; Paul, Eldho; Moran, Lisa J; Loxton, Deborah; Jolley, Damien; Lombard, Catherine
2013-08-01
Polycystic ovary syndrome (PCOS) affects 6-18% of women. The natural history of weight gain in women with PCOS has not been well described. Here we aimed to examine longitudinal weight gain in women with and without PCOS and to assess the association between obesity and PCOS prevalence. The observational study was set in the general community. Participants were women randomly selected from the national health insurance scheme (Medicare) database. Mailed survey data were collected by the Australian Longitudinal Study on Women's Health. Data from respondents to survey 4, aged 28-33 years (2006, n = 9,145), were analyzed. The main outcome measures were PCOS prevalence and body mass index (BMI). Self-reported PCOS prevalence was 5.8% (95% CI: 5.3%-6.4%). Women reporting PCOS had higher weight, mean BMI [2.5 kg/m² (95% CI: 1.9-3.1)], and greater 10-year weight gain [2.6 kg (95% CI: 1.2-4.0)]. BMI was the strongest correlate of PCOS status, with every BMI increment increasing the risk of reporting PCOS by 9.2% (95% CI: 6%-12%). This community-based observational study with longitudinal reporting of weight shows that weight, BMI, and 10-year weight gain were higher in PCOS. We report the novel finding that obesity and greater weight gain are significantly associated with PCOS status. Considering the prevalence, the major health and economic burden of PCOS, the increasing weight gain in young women, and the established benefits of weight loss, these results have major public health implications. Copyright © 2012 The Obesity Society.
The role of lignin and lignin-like materials during wood hydrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaher, F.A.
1981-01-01
The nature of the material precipitating from the acid prehydrolysates and hydrolysates of wood upon storage has been investigated. This material was analyzed for its sugar content, ultraviolet spectra, elemental composition, molecular weight distribution, and thermogravimetric behavior. All the results indicate that this material has the same properties as lignin. The results suggest also that this material is neither a resinification product from sugar decomposition nor extraneous material of wood (resins, tannins, etc.). It is suggested, too, that the extraction of this material along with sugar during hydrolysis and prehydrolysis causes a considerable error in the results of wood analysis using standard methods based on weight loss. The actual percentages of lignin in the wood samples tested appear to vary from two to four times their values measured by standard methods. Consequently, the actual cellulose content of these materials may be far lower than has been reported. This has serious implications for schemes based on biomass conversion.
An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router
NASA Astrophysics Data System (ADS)
Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua
2016-10-01
Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses in virtual routers rapidly increases, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called the weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. The average and worst-case time complexities of WBT's lookup and update operations are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison with separation schemes. WBT also achieves the lowest average search depth compared with other homogeneous algorithms.
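The WBT itself orders prefixes by numeric value inside a weight-balanced tree; as a stand-in, the sketch below shows only the longest-prefix-match semantics being accelerated, using one hash table per prefix length (this is explicitly not the paper's data structure):

```python
import ipaddress

class SimpleLPM:
    """Toy IPv6 longest-prefix match: one hash table per prefix length,
    probed from longest to shortest. Unlike the paper's WBT (a single
    value-ordered, weight-balanced tree merging several virtual routers'
    FIBs), this only illustrates the matching problem being solved."""
    def __init__(self, routes):
        self.tables = {}        # prefixlen -> {network_int: next_hop}
        for prefix, nh in routes.items():
            net = ipaddress.ip_network(prefix)
            self.tables.setdefault(net.prefixlen, {})[int(net.network_address)] = nh

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        for plen in sorted(self.tables, reverse=True):
            key = (a >> (128 - plen)) << (128 - plen)   # mask to plen bits
            if key in self.tables[plen]:
                return self.tables[plen][key]
        return None

table = SimpleLPM({'2001:db8::/32': 'A', '::/0': 'default'})
assert table.lookup('2001:db8::1') == 'A'
```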
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiency greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
Hyperviscosity for unstructured ALE meshes
NASA Astrophysics Data System (ADS)
Cook, Andrew W.; Ulitsky, Mark S.; Miller, Douglas S.
2013-01-01
An artificial viscosity, originally designed for Eulerian schemes, is adapted for use in arbitrary Lagrangian-Eulerian simulations. Changes to the Eulerian model (dubbed 'hyperviscosity') are discussed, which enable it to work within a Lagrangian framework. New features include a velocity-weighted grid scale and a generalised filtering procedure, applicable to either structured or unstructured grids. The model employs an artificial shear viscosity for treating small-scale vorticity and an artificial bulk viscosity for shock capturing. The model is based on the Navier-Stokes form of the viscous stress tensor, including the diagonal rate-of-expansion tensor. A second-order version of the model is presented, in which Laplacian operators act on the velocity divergence and the grid-weighted strain-rate magnitude to ensure that the velocity field remains smooth at the grid scale. Unlike sound-speed-based artificial viscosities, the hyperviscosity model is compatible with the low Mach number limit. The new model outperforms a commonly used Lagrangian artificial viscosity on a variety of test problems.
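The second-order version described above (a Laplacian acting on the velocity divergence) can be sketched in one dimension; the constant, grid-length power, and omitted filtering step are assumptions for illustration, not the model's calibrated form:

```python
import numpy as np

def artificial_bulk_viscosity(div_u, rho, dx, C=1.0):
    """Hyperviscosity-style artificial bulk viscosity on a periodic 1-D
    grid: proportional to the magnitude of the Laplacian of the velocity
    divergence, scaled by density and a power of the grid spacing, so it
    activates near shocks and vanishes where the field is smooth at the
    grid scale. The full model also filters this field, which is omitted."""
    lap = (np.roll(div_u, -1) - 2 * div_u + np.roll(div_u, 1)) / dx**2
    return C * rho * dx**4 * np.abs(lap)
```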
Riehle, Natascha; Götz, Tobias; Kandelbauer, Andreas; Tovar, Günter E M; Lorenz, Günter
2018-06-01
This article contains data on the synthesis and mechanical characterization of polysiloxane-based urea elastomers (PSUs) and is related to the research article entitled "Influence of PDMS molecular weight on transparency and mechanical properties of soft polysiloxane-urea-elastomers for intraocular lens application" (Riehle et al., 2018) [1]. These elastomers were prepared by a two-step polyaddition using the aliphatic diisocyanate 4,4'-methylenebis(cyclohexyl isocyanate) (H12MDI), the siloxane-based chain extender 1,3-bis(3-aminopropyl)-1,1,3,3-tetramethyldisiloxane (APTMDS), and amino-terminated polydimethylsiloxanes (PDMS) or polydimethyl-methyl-phenyl-siloxane copolymers (PDMS-Me,Ph), respectively. (More details about the synthesis procedure and the reaction scheme can be found in the related research article (Riehle et al., 2018) [1].) Amino-terminated polydimethylsiloxanes with varying molecular weights and PDMS-Me,Ph copolymers were prepared beforehand by a base-catalyzed ring-chain equilibration of a cyclic siloxane and the endblocker APTMDS. This DiB article contains a procedure for the synthesis of the base catalyst tetramethylammonium-3-aminopropyl-dimethylsilanolate and a generic synthesis procedure for the preparation of a PDMS having a targeted number-average molecular weight M̄n of 3000 g mol⁻¹. Molecular weights and the amount of methyl-phenyl-siloxane within the polysiloxane copolymers were determined by ¹H NMR and ²⁹Si NMR spectroscopy. The corresponding NMR spectra and data are described in this article. Additionally, this DiB article contains processed data on in-line and off-line FTIR-ATR spectroscopy, which was used to follow the reaction progress of the polyaddition by showing the conversion of the diisocyanate. All relevant IR band assignments of a polydimethylsiloxane-urea spectrum are described in this article. Finally, data on the tensile properties and the mechanical hysteresis behaviour at 100% elongation of PDMS-based polyurea elastomers are presented as a function of the PDMS molecular weight.
Relationships of pediatric anthropometrics for CT protocol selection.
Phillips, Grace S; Stanescu, Arta-Luana; Alessio, Adam M
2014-07-01
Determining the optimal CT technique to minimize patient radiation exposure while maintaining diagnostic utility requires patient-specific protocols that are based on patient characteristics. This work develops relationships between different anthropometrics and CT image noise to determine appropriate protocol classification schemes. We measured the image noise in 387 CT examinations of pediatric patients (222 boys, 165 girls) of the chest, abdomen, and pelvis and generated mathematical relationships between image noise and patient lateral and anteroposterior dimensions, age, and weight. At the chest level, lateral distance (ld) across the body is strongly correlated with weight (ld = 0.23 × weight + 16.77; R² = 0.93) and is less well correlated with age (ld = 1.10 × age + 17.13; R² = 0.84). Similar trends were found for anteroposterior dimensions and at the abdomen level. Across all studies, when acquisition-specific parameters are factored out of the noise, the log of image noise was highly correlated with lateral distance (R² = 0.72) and weight (R² = 0.72) and was less correlated with age (R² = 0.62). Following first-order relationships of image noise and scanner technique, plots were formed to show techniques that could achieve matched noise across the pediatric population. Patient lateral distance and weight are essentially equally effective metrics on which to base maximum technique settings for pediatric patient-specific protocols. These metrics can also be used to help categorize appropriate reference levels for CT technique and size-specific dose estimates across the pediatric population.
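A sketch of the reported regression forms on hypothetical data (the coefficients echo the text; the patient measurements themselves are not reproduced):

```python
import numpy as np

# Hypothetical (weight [kg], lateral distance [cm]) pairs; the fit below
# recovers the form reported above, it is not the study's patient data.
wt = np.array([10.0, 20.0, 30.0, 40.0, 60.0, 80.0])
ld = 0.23 * wt + 16.77 + np.random.default_rng(0).normal(0, 0.5, wt.size)

slope, intercept = np.polyfit(wt, ld, 1)   # ld ~ 0.23 * weight + 16.77

# Noise model implied by the text: log(noise) is roughly linear in lateral
# distance, so a technique chart holding noise constant across patient
# sizes must scale exposure (mAs) roughly exponentially with ld:
#   log_noise = c0 + c1 * ld  =>  required mAs ∝ exp(2 * (c0 + c1 * ld))
```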
Spray algorithm without interface construction
NASA Astrophysics Data System (ADS)
Al-Kadhem Majhool, Ahmed Abed; Watkins, A. P.
2012-05-01
This research aims to create a new and robust family of convective schemes to capture the interface between the dispersed and carrier phases in a spray without the need to construct the interface boundary. The Weighted Average Flux (WAF) scheme was selected because it is designed to handle random flux schemes and is second-order accurate in space and time. The convective flux at each cell face uses the WAF scheme blended, via a switching strategy, with the Switching Technique for Advection and Capturing of Surfaces (STACS) scheme for high-resolution flux limiting; the blend provides sharpness and boundedness of the interface. In this work, the Eulerian-Eulerian framework for non-reactive turbulent spray is formulated in terms of the proposed theoretical methodology, namely moments of the drop size distribution, presented by Beck and Watkins [1]. The computational spray model avoids the need to segregate the local droplet number distribution into parcels of identical droplets. The proposed scheme is tested on capturing the spray edges in modelling hollow-cone sprays without reconstructing the two-phase interface. A simple comparison is made between the TVD and WAF schemes using the same flux limiter on a convective-flow hollow-cone spray; results show the WAF scheme gives a better prediction than the TVD scheme. The only way to check the accuracy of the presented models is by evaluating the spray sheet thickness.
Secure and Privacy-Preserving Body Sensor Data Collection and Query Scheme.
Zhu, Hui; Gao, Lijuan; Li, Hui
2016-02-01
With the development of body sensor networks and the pervasiveness of smart phones, different types of personal data can be collected in real time by body sensors, and the potential value of massive personal data has attracted considerable interest recently. However, the privacy issues of sensitive personal data are still challenging today. Aiming at these challenges, in this paper, we focus on the threats from telemetry interface and present a secure and privacy-preserving body sensor data collection and query scheme, named SPCQ, for outsourced computing. In the proposed SPCQ scheme, users' personal information is collected by body sensors in different types and converted into multi-dimension data, and each dimension is converted into the form of a number and uploaded to the cloud server, which provides a secure, efficient and accurate data query service, while the privacy of sensitive personal information and users' query data is guaranteed. Specifically, based on an improved homomorphic encryption technology over composite order group, we propose a special weighted Euclidean distance contrast algorithm (WEDC) for multi-dimension vectors over encrypted data. With the SPCQ scheme, the confidentiality of sensitive personal data, the privacy of data users' queries and accurate query service can be achieved in the cloud server. Detailed analysis shows that SPCQ can resist various security threats from telemetry interface. In addition, we also implement SPCQ on an embedded device, smart phone and laptop with a real medical database, and extensive simulation results demonstrate that our proposed SPCQ scheme is highly efficient in terms of computation and communication costs.
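Setting the cryptography aside, the comparison at the heart of WEDC is a weighted Euclidean distance between a query vector and a stored multi-dimension record. A plaintext sketch follows; the homomorphic evaluation over a composite-order group, which is the paper's actual contribution, is not reproduced here:

```python
import numpy as np

def weighted_euclidean(q, x, w):
    """Weighted Euclidean distance between query q and record x,
    with per-dimension weights w."""
    return np.sqrt(np.sum(w * (q - x) ** 2))

def query_matches(q, records, w, threshold):
    """Indices of records whose weighted distance to the query is within a threshold."""
    d = np.sqrt(np.sum(w * (records - q) ** 2, axis=1))
    return np.nonzero(d <= threshold)[0]
```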
Optical Frequency Standards Based on Neutral Atoms and Molecules
NASA Astrophysics Data System (ADS)
Riehle, Fritz; Helmcke, Juergen
The current status and prospects of optical frequency standards based on neutral atomic and molecular absorbers are reviewed. Special attention is given to an optical frequency standard based on cold Ca atoms, which are interrogated with a pulsed excitation scheme leading to resolved line structures with a quality factor Q > 10^12. The optical frequency was measured by comparison with PTB's primary clock to be νCa = 455 986 240 494.13 kHz with a total relative uncertainty of 2.5 × 10^-13. Following a recent recommendation of the International Committee for Weights and Measures (CIPM), this frequency standard now represents one of the most accurate realizations of the length unit.
Concept of combinatorial de novo design of drug-like molecules by particle swarm optimization.
Hartenfeller, Markus; Proschak, Ewgenij; Schüller, Andreas; Schneider, Gisbert
2008-07-01
We present a fast stochastic optimization algorithm for fragment-based molecular de novo design (COLIBREE, Combinatorial Library Breeding). The search strategy is based on a discrete version of particle swarm optimization. Molecules are represented by a scaffold, which remains constant during optimization, and variable linkers and side chains. Different linkers represent virtual chemical reactions. Side-chain building blocks were obtained from pseudo-retrosynthetic dissection of large compound databases. Here, ligand-based design was performed using chemically advanced template search (CATS) topological pharmacophore similarity to reference ligands as the fitness function. A weighting scheme was included for particle swarm optimization-based molecular design, which permits the use of many reference ligands and allows positive and negative design to be performed simultaneously. In a case study, the approach was applied to the de novo design of potential peroxisome proliferator-activated receptor subtype-selective agonists. The results demonstrate the ability of the technique to cope with large combinatorial chemistry spaces and its applicability to focused library design. The technique was able to exploit a known scheme and at the same time perform an exploratory search for novel ligands within the framework of a given molecular core structure. It thereby represents a practical solution for compound screening in the early hit and lead finding phases of a drug discovery project.
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of deep learning architectures, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: apply k-means-based adaptive quantization on the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The convolutional neural network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
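A minimal sketch of the server-side quantization step, assuming plain 1-D k-means over a layer's flattened weights; the supervised iterative variant alternates this with retraining on the quantized weights, which is omitted here:

```python
import numpy as np

def kmeans_1d(w, k, n_iter=50, seed=0):
    """Plain 1-D k-means over flattened weights; returns codebook and assignments."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(np.unique(w), size=k, replace=False)
    for _ in range(n_iter):
        idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centers[j] = w[idx == j].mean()
    idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    return centers, idx

def quantize_layer(W, bits=4):
    """Snap each weight to one of 2**bits adaptive cluster centers."""
    centers, idx = kmeans_1d(W.ravel(), 2 ** bits)
    return centers[idx].reshape(W.shape)
```

In the full scheme the retraining step would recover accuracy lost to the snap; the uniform quantization of inputs and feature maps on the client is a separate, cheaper step.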
Kuusipalo, Heli; Maleta, Kenneth; Briend, André; Manary, Mark; Ashorn, Per
2006-10-01
Fortified spreads (FSs) have proven effective in the rehabilitation of severely malnourished children. We examined acceptability, growth and change in blood haemoglobin (Hb) concentration among moderately underweight ambulatory infants given FS. This was a randomised, controlled, parallel-group, investigator-blind clinical trial in rural Malawi. Six- to 17-month-old underweight infants (weight-for-age z score < -2), whose weight was greater than 5.5 kg and weight-for-height z score greater than -3, received 1 of 8 food supplementation schemes at home for 12 weeks: nothing; 5, 25, 50, or 75 g/day milk-based FS; or 25, 50, or 75 g/day soy-based FS. Outcome measures included change in weight, length and blood Hb concentration. A total of 126 infants started and 125 completed the intervention. All infants accepted the spread well, and no intolerance was recorded. Average weight and length gains were higher among infants receiving 25 to 75 g FS daily than among those receiving only 0 to 5 g FS. Mean Hb concentration remained unchanged among unsupplemented controls but increased by 10 to 17 g/L among infants receiving any FS. All average gains were largest among infants receiving 50 g of FS daily: mean difference (95% confidence interval) in the 12-week gain between infants in the 50 g milk-based FS group and the unsupplemented group was 290 g (range, -130 to 700 g), 0.9 cm (range, -0.3 to 2.2 cm), and 17 g/L (range, 0 to 34 g/L) for weight, length and blood Hb concentration, respectively. In soy- vs milk-based FS groups, average outcomes were comparable. Supplementation with 25 to 75 g/day of highly fortified spread is feasible and may promote growth and alleviate anaemia among moderately malnourished infants. Further trials should test this hypothesis.
Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi
2014-01-01
We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic with characteristics that allow separation of low-frequency signal from high frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The level of improvement in signal-to-noise ratio and in the accuracy of local reconstruction of fiber tracks was significantly improved using our method.
Localization with a mobile beacon in underwater acoustic sensor networks.
Lee, Sangho; Kim, Kiseon
2012-01-01
Localization is one of the most important issues associated with underwater acoustic sensor networks, especially when sensor nodes are randomly deployed. Given that it is difficult to deploy beacon nodes at predetermined locations, localization schemes with a mobile beacon on the sea surface or along the planned path are inherently convenient, accurate, and energy-efficient. In this paper, we propose a new range-free Localization with a Mobile Beacon (LoMoB). The mobile beacon periodically broadcasts a beacon message containing its location. Sensor nodes are individually localized by passively receiving the beacon messages without inter-node communications. For location estimation, a set of potential locations are obtained as candidates for a node's location and then the node's location is determined through the weighted mean of all the potential locations with the weights computed based on residuals.
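The estimation step reduces to a weighted centroid. A sketch follows, where inverse-residual weighting is an assumption; the abstract says only that the weights are computed from residuals:

```python
import numpy as np

def lomob_estimate(candidates, residuals, eps=1e-9):
    """candidates: (m, 3) potential node locations; residuals: (m,) fit residuals.
    Smaller residual -> larger weight (assumed inverse-residual weighting)."""
    w = 1.0 / (residuals + eps)
    w /= w.sum()
    return w @ candidates   # weighted mean of the potential locations
```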
Potential applications of the white rot fungus Pleurotus in bioregenerative life support systems
NASA Astrophysics Data System (ADS)
Manukovsky, N. S.; Kovalev, V. S.; Yu, Ch.; Gurevich, Yu. L.; Liu, H.
Earlier we demonstrated the possibility of using soil-like substrate (SLS) for plant cultivation in bioregenerative life support systems (BLSS). We suggest dividing the process of SLS bioregeneration under BLSS conditions into two stages. At the first stage, plant residues are used for growing the white rot fungus Pleurotus (Pleurotus ostreatus, Pleurotus florida, etc.); the fruit bodies can be used as food. At the second stage, spent mushroom compost is carried into the SLS and treated by microorganisms and worms. The possibility of extending the human food ration is only one reason for adopting the suggested two-stage SLS regeneration scheme: a person's daily consumption of mushrooms is limited to 200-250 g of wet weight, or 20-25 g of dry weight. Multiple tests showed that, more importantly, inclusion of mushrooms into the system cycle contributes through various mechanisms to more stable functioning of the vegetative cenosis in general. Taking the experimental data into account, we determined the material balance scheme of the mushroom module. The technological peculiarities of mushroom cultivation under BLSS conditions are discussed.
Conformal Electromagnetic Particle in Cell: A Review
Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...
2015-10-26
We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. Using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
Jeffrey H. Gove
2003-01-01
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...
THERAPY WITH P-32 IN POLYCYTHEMIA (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waschulewski, H.; Dorffel, E.W.
1958-01-01
Therapy with P-32 is being used more and more in polycythemia vera rubra. There is no generally valid dosage scheme; body weight, blood picture, and general condition furnish certain clues. At present, we administer about 0.08 mCi/kg body weight as an initial dose, adding further quantities later, if necessary, under careful control of the blood picture. (auth)
Invited commentary: the incremental value of customization in defining abnormal fetal growth status.
Zhang, Jun; Sun, Kun
2013-10-15
Reference tools based on birth weight percentiles at a given gestational week have long been used to define fetuses or infants that are small or large for their gestational ages. However, important deficiencies of the birth weight reference are being increasingly recognized. Overwhelming evidence indicates that an ultrasonography-based fetal weight reference should be used to classify fetal and newborn sizes during pregnancy and at birth, respectively. Questions have been raised as to whether further adjustments for race/ethnicity, parity, sex, and maternal height and weight are helpful to improve the accuracy of the classification. In this issue of the Journal, Carberry et al. (Am J Epidemiol. 2013;178(8):1301-1308) show that adjustment for race/ethnicity is useful, but that additional fine tuning for other factors (i.e., full customization) in the classification may not further improve the ability to predict infant morbidity, mortality, and other fetal growth indicators. Thus, the theoretical advantage of full customization may have limited incremental value for pediatric outcomes, particularly in term births. Literature on the prediction of short-term maternal outcomes and very long-term outcomes (adult diseases) is too scarce to draw any conclusions. Given that each additional variable being incorporated in the classification scheme increases complexity and costs in practice, the clinical utility of full customization in obstetric practice requires further testing.
Impacts of weighting climate models for hydro-meteorological climate change studies
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe; Caya, Daniel
2017-06-01
Weighting climate models is controversial in climate change impact studies using an ensemble of climate simulations from different climate models. In climate science, there is a general consensus that all climate models should be considered as having equal performance or, in other words, that all projections are equiprobable. On the other hand, in the impacts and adaptation community, many believe that climate models should be weighted based on their ability to better represent various metrics over a reference period. The debate appears to be partly philosophical in nature, as few studies have investigated the impact of using weights in projecting future climate changes. The present study focuses on the impact of assigning weights to climate models for hydrological climate change studies. Five methods are used to determine weights on an ensemble of 28 global climate models (GCMs) adapted from the Coupled Model Intercomparison Project Phase 5 (CMIP5) database. Using a hydrological model, streamflows are computed over reference (1961-1990) and future (2061-2090) periods, with and without post-processing climate model outputs. The impacts of using different weighting schemes for GCM simulations are then analyzed in terms of ensemble mean and uncertainty. The results show that weighting GCMs has a limited impact on both the projected future climate, in terms of precipitation and temperature changes, and hydrology, in terms of nine different streamflow criteria. These results apply to both raw and post-processed GCM outputs, thus supporting the view that climate models should be considered equiprobable.
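The mechanics of applying or withholding model weights are straightforward; a small sketch, where weights=None recovers the equiprobable case that the study's results end up supporting:

```python
import numpy as np

def weighted_projection(changes, weights=None):
    """changes: (n_models,) projected change per GCM (e.g., delta T or delta P).
    Returns the weighted ensemble mean and spread."""
    changes = np.asarray(changes, float)
    n = len(changes)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mean = w @ changes
    spread = np.sqrt(w @ (changes - mean) ** 2)
    return mean, spread
```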
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2017-12-01
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
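For concreteness, the simplest member of the WENO family shows how smoothness indicators turn linear weights into solution-adaptive ones. Below is a standard third-order (WENO3) face reconstruction, a textbook formulation rather than code from the review:

```python
import numpy as np

def weno3_reconstruct(u, eps=1e-6):
    """Left-biased WENO3 reconstruction of cell-average data u at faces i+1/2.
    Returns face values for interior cells i = 1 .. len(u)-2."""
    um, u0, up = u[:-2], u[1:-1], u[2:]        # u_{i-1}, u_i, u_{i+1}
    p0 = 0.5 * u0 + 0.5 * up                   # candidate from {u_i, u_{i+1}}
    p1 = -0.5 * um + 1.5 * u0                  # candidate from {u_{i-1}, u_i}
    b0 = (up - u0) ** 2                        # smoothness indicators
    b1 = (u0 - um) ** 2
    a0 = (2.0 / 3.0) / (eps + b0) ** 2         # nonlinear weights built from
    a1 = (1.0 / 3.0) / (eps + b1) ** 2         # the linear weights 2/3 and 1/3
    w0 = a0 / (a0 + a1)
    return w0 * p0 + (1.0 - w0) * p1
```

Near a discontinuity the smoothness indicator of the offending stencil blows up and its weight collapses, which is precisely the non-oscillatory mechanism the review discusses.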
Optimization of locations of diffusion spots in indoor optical wireless local area networks
NASA Astrophysics Data System (ADS)
Eltokhey, Mahmoud W.; Mahmoud, K. R.; Ghassemlooy, Zabih; Obayya, Salah S. A.
2018-03-01
In this paper, we present a novel optimization of the locations of the diffusion spots in indoor optical wireless local area networks, based on the central force optimization (CFO) scheme. The users' performance uniformity is addressed by using the CFO algorithm and adopting different objective-function configurations, considering maximization of the signal-to-noise ratio and minimization of the delay spread, respectively. We also investigate the effect of varying the objective-function weights on the system and the users' performance as part of the adaptation process. The results show that the proposed objective-function-configuration-based optimization procedure offers an improvement of 65% in the standard deviation of individual receivers' performance.
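The weighted objective can be scalarized in the usual way; a tiny sketch with purely illustrative weight values (the paper studies the effect of varying them):

```python
def spot_objective(snr_db, delay_spread_ns, w_snr=0.7, w_ds=0.3):
    """Hypothetical scalarization: reward SNR, penalize delay spread."""
    return w_snr * snr_db - w_ds * delay_spread_ns
```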
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.
Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.
Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni
2018-06-15
Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
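Treating weight selection as a separate estimation task can be sketched for a two-method ensemble; the fold structure and log-score criterion below are deliberate simplifications of the paper's procedure:

```python
import numpy as np

def fit_weight(p1, p2, grid=np.linspace(0.0, 1.0, 101)):
    """p1, p2: probabilities two methods assigned to the events that occurred.
    Pick the mixing weight maximizing the in-sample log score."""
    scores = [np.log(np.clip(w * p1 + (1 - w) * p2, 1e-12, None)).sum()
              for w in grid]
    return grid[int(np.argmax(scores))]

def cv_log_score(p1, p2, folds=5):
    """Cross-validated log score of the adaptively weighted ensemble:
    fit the weight on training folds, score the held-out fold."""
    idx = np.array_split(np.arange(len(p1)), folds)
    total = 0.0
    for k, te in enumerate(idx):
        tr = np.concatenate([idx[j] for j in range(folds) if j != k])
        w = fit_weight(p1[tr], p2[tr])
        total += np.log(np.clip(w * p1[te] + (1 - w) * p2[te], 1e-12, None)).sum()
    return total / len(p1)
```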
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
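A sketch of the elitism-based immigrants step under simple bit-string assumptions (population layout, immigrant ratio, and mutation rate are illustrative):

```python
import numpy as np

def elitism_immigrants(pop, fitness, ratio=0.2, pm=0.01, rng=None):
    """Replace the worst individuals with mutated copies of the current elite.
    pop: (n, L) binary genomes; fitness: (n,), higher is better."""
    rng = rng if rng is not None else np.random.default_rng()
    n, L = pop.shape
    k = max(1, int(ratio * n))
    elite = pop[np.argmax(fitness)]
    immigrants = np.tile(elite, (k, 1))
    flip = rng.random((k, L)) < pm              # bitwise mutation of the elite
    immigrants = np.where(flip, 1 - immigrants, immigrants)
    pop[np.argsort(fitness)[:k]] = immigrants   # overwrite the worst
    return pop
```

The memory-based variant is identical except that the base individual is retrieved from an explicit memory rather than from the previous generation's elite.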
An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China
Zou, Hui; Zou, Zhihong; Wang, Xiaojing
2015-01-01
The increasing volume and complexity of data caused by uncertain environments are today's reality. To identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means cluster algorithm to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin cannot be calculated. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of Haihe River (China) data obtained by the monitoring network over a period of eight years (2006-2013) with four indicators at seven different sites (2078 samples). Both theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality. PMID: 26569283
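The varying-weights idea amounts to a feature-weighted distance inside the standard K-means loop; a minimal sketch (the MIWAS weight self-adjustment itself is not reproduced):

```python
import numpy as np

def weighted_kmeans(X, k, w, n_iter=100, seed=0):
    """K-means with per-indicator weights w: d(x, c) = sum_j w_j * (x_j - c_j)^2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```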
Microenvironment-Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis
2015-10-01
with a biopolymer (i.e., starch) to improve biocompatibility, and tagged with prostate cancer-targeting ligands. A significant challenge to translation... starch coating of 50 nm and 100 nm SPIONs was crosslinked and coated with amine groups, and then functionalized with NHS-polyethylene glycol (PEG) of... varying molecular weight (i.e., 2k, 5k or 20k Da) as shown in Scheme 1. Scheme 1. Surface modification of starch-coated SPIONs into aminated and
Microenvironment Sensitive Multimodal Contrast Agent for Prostate Cancer Diagnosis
2016-10-01
coated with a biopolymer (i.e., starch) to improve biocompatibility, and tagged with prostate cancer-targeting ligands. A significant challenge to... The starch coating of 50 nm and 100 nm SPIONs was crosslinked and coated with amine groups, and then functionalized with NHS-polyethylene glycol (PEG)... of varying molecular weight (i.e., 2k, 5k or 20k Da) as shown in Scheme 1. Scheme 1. Surface modification of starch-coated SPIONs into aminated
NASA Astrophysics Data System (ADS)
dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.
2013-09-01
In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
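A compact sketch of the estimation problem: each firefly encodes a candidate weight vector on the simplex and moves toward brighter (lower-misfit) companions. Parameter values are illustrative, not those used with BRAMS:

```python
import numpy as np

def firefly_weights(F, obs, n_fireflies=20, n_iter=100,
                    alpha=0.05, beta0=1.0, gamma=1.0, seed=0):
    """F: (m, t) member forecasts over t days; obs: (t,) observed precipitation.
    Returns nonnegative weights summing to one that minimize the quadratic misfit."""
    rng = np.random.default_rng(seed)
    m = F.shape[0]

    def cost(w):
        return np.sum((w @ F - obs) ** 2)       # quadratic objective, as in the paper

    def project(w):                             # keep weights on the simplex
        w = np.clip(w, 0.0, None)
        s = w.sum()
        return w / s if s > 0 else np.full(m, 1.0 / m)

    X = np.array([project(x) for x in rng.random((n_fireflies, m))])
    for _ in range(n_iter):
        c = np.array([cost(x) for x in X])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if c[j] < c[i]:                 # firefly i is attracted to brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] = project(X[i] + beta * (X[j] - X[i])
                                   + alpha * (rng.random(m) - 0.5))
    c = np.array([cost(x) for x in X])
    return X[np.argmin(c)]
```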
Allocation and simulation study of carbon emission quotas among China's provinces in 2020.
Zhou, Xing; Guan, Xueling; Zhang, Ming; Zhou, Yao; Zhou, Meihua
2017-03-01
China will form its carbon market in 2017 to focus on the allocation of regional carbon emission quota in order to cope with global warming. The rationality of the regional allocation has become an important consideration for the government in ensuring stable growth in different regions that are experiencing disparity in resource endowment and economic status. Based on constructing the quota allocation indicator system for carbon emission, the emission quota for each province in different scenarios and schemes in 2020 is simulated by the multifactor hybrid weighted Shannon entropy allocation model. The following conclusions are drawn: (1) The top 5 secondary-level indicators that influence provincial quota allocation in weight are as follows: per capita energy consumption, openness, per capita carbon emission, per capita disposable income, and energy intensity. (2) The ratio of carbon emission in 2020 is different from that in 2013 in many scenarios, and the variation is scenario 2 > scenario 1 > scenario 3, with Hubei and Guangdong the provinces with the largest increase and decrease ratios, respectively. (3) In the same scenario, the quota allocation varies in different reduction criteria emphases; if the government emphasizes reduction efficiency, scheme 1 will show obvious adjustment, that is, Hunan, Hubei, Guizhou, and Yunnan will have the largest decrease. The amounts are 4.28, 8.31, 4.04, and 5.97 million tons, respectively.
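The entropy-based part of such an indicator system can be sketched as follows; the proportional allocation at the end is a simplified stand-in for the paper's multifactor hybrid weighted model, and it assumes all indicators have been preprocessed to be benefit-oriented:

```python
import numpy as np

def entropy_weights(X):
    """X: (n_regions, n_indicators), nonnegative. Shannon-entropy indicator weights:
    indicators with more cross-region variation carry more weight."""
    P = X / X.sum(axis=0, keepdims=True)
    P = np.where(P > 0, P, 1e-12)                         # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))     # entropy per indicator
    d = 1.0 - e                                           # degree of diversification
    return d / d.sum()

def allocate_quota(total, X, w):
    """Hypothetical proportional allocation from a composite weighted score."""
    score = (X / X.sum(axis=0, keepdims=True)) @ w
    return total * score / score.sum()
```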
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa
2016-10-01
The compositions of food wastes and their co-gasification producer gas were compared with the existing data of sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification based on residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. It was found that the gasification-based schemes are financially superior to the incineration-based scheme based on the data of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes. Copyright © 2016 Elsevier Ltd. All rights reserved.
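A minimal Monte Carlo NPV sketch of the kind used in such cost-benefit comparisons; every cash-flow distribution below is an invented placeholder, not a figure from the Singapore study, and BCR and IRR would be computed analogously per draw:

```python
import numpy as np

def present_value(flows, rate):
    """flows[0] occurs now; flows[t] is discounted by (1 + rate)**t."""
    t = np.arange(len(flows))
    return np.sum(flows / (1.0 + rate) ** t)

def monte_carlo_npv(n=10000, rate=0.05, horizon=20, seed=0):
    rng = np.random.default_rng(seed)
    npvs = np.empty(n)
    for i in range(n):
        capex = rng.normal(100.0, 10.0)           # assumed year-0 capital cost
        net = rng.normal(12.0, 3.0, horizon)      # assumed annual net benefits
        npvs[i] = present_value(np.concatenate(([-capex], net)), rate)
    return npvs.mean(), (npvs > 0).mean()         # mean NPV and P(NPV > 0)
```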
NASA Astrophysics Data System (ADS)
Xu, Jianhui; Zhang, Feifei; Zhao, Yi; Shu, Hong; Zhong, Kaiwen
2016-07-01
For the large-area snow depth (SD) data sets with high spatial resolution in the Altay region of Northern Xinjiang, China, we present a deterministic ensemble Kalman filter (DEnKF)-albedo assimilation scheme that considers the common land model (CoLM) subgrid heterogeneity. In the albedo assimilation of DEnKF-albedo, the assimilated albedos over each subgrid tile are estimated with the MCD43C1 bidirectional reflectance distribution function (BRDF) parameters product and the CoLM-calculated solar zenith angle. The BRDF parameters are hypothesized to be consistent over all subgrid tiles within a specified grid. In the snow cover fraction (SCF) assimilation of DEnKF-albedo, a DEnKF combined with a snow density-based observation operator considers the effects of the CoLM subgrid heterogeneity and is employed to assimilate MODIS SCF to update SD states over all subgrid tiles. The MODIS SCF over a grid is compared with the area-weighted sum of model-predicted SCF over all the subgrid tiles within the grid. The results are validated with in situ SD measurements and the AMSR-E product. Compared with the simulations, the DEnKF-albedo scheme can reduce errors of SD simulations and accurately simulate the seasonal variability of SD. Furthermore, it can improve simulations of the SD spatiotemporal distribution in the Altay region, which is more accurate and shows more detail than the AMSR-E product.
Wang, Mingming; Sweetapple, Chris; Fu, Guangtao; Farmani, Raziyeh; Butler, David
2017-10-01
This paper presents a new framework for decision making in sustainable drainage system (SuDS) scheme design. It integrates resilience, hydraulic performance, pollution control, rainwater usage, energy analysis, greenhouse gas (GHG) emissions and costs, and has 12 indicators. The multi-criteria analysis methods of entropy weight and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) were selected to support SuDS scheme selection. The effectiveness of the framework is demonstrated with a SuDS case in China. Indicators used include flood volume, flood duration, a hydraulic performance indicator, cost and resilience. Resilience is an important design consideration, and it supports scheme selection in the case study. The proposed framework will help a decision maker to choose an appropriate design scheme for implementation without subjectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
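A compact sketch of the TOPSIS ranking step used in the framework; the indicator orientation (benefit vs. cost) must be supplied, and entropy-based weights can be plugged in as w:

```python
import numpy as np

def topsis(X, w, benefit):
    """X: (n_schemes, n_indicators); w: weights summing to 1;
    benefit: boolean mask, True where larger values are better.
    Returns closeness to the ideal solution (higher is better)."""
    R = X / np.linalg.norm(X, axis=0)               # vector normalization
    V = R * w                                       # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)
```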
2010-01-01
Background: The reconstruction of protein complexes from the physical interactome of organisms serves as a building block towards understanding the higher-level organization of the cell. Over the past few years, several independent high-throughput experiments have helped to catalogue an enormous amount of physical protein interaction data from organisms such as yeast. However, these individual datasets show lack of correlation with each other and also contain a substantial number of false positives (noise). Over these years, several affinity scoring schemes have also been devised to improve the qualities of these datasets. Therefore, the challenge now is to detect meaningful as well as novel complexes from protein interaction (PPI) networks derived by combining datasets from multiple sources and by making use of these affinity scoring schemes. In the attempt towards tackling this challenge, the Markov Clustering algorithm (MCL) has proved to be a popular and reasonably successful method, mainly due to its scalability, robustness, and ability to work on scored (weighted) networks. However, MCL produces many noisy clusters, which either do not match known complexes or have additional proteins that reduce the accuracies of correctly predicted complexes.

Results: Inspired by recent experimental observations by Gavin and colleagues on the modularity structure in yeast complexes and the distinctive properties of "core" and "attachment" proteins, we develop a core-attachment based refinement method coupled to MCL for reconstruction of yeast complexes from scored (weighted) PPI networks. We combine physical interactions from two recent "pull-down" experiments to generate an unscored PPI network. We then score this network using available affinity scoring schemes to generate multiple scored PPI networks. The evaluation of our method (called MCL-CAw) on these networks shows that: (i) MCL-CAw derives a larger number of yeast complexes and with better accuracies than MCL, particularly in the presence of natural noise; (ii) affinity scoring can effectively reduce the impact of noise on MCL-CAw and thereby improve the quality (precision and recall) of its predicted complexes; (iii) MCL-CAw responds well to most available scoring schemes. We discuss several instances where MCL-CAw was successful in deriving meaningful complexes, and where it missed a few proteins or whole complexes due to affinity scoring of the networks. We compare MCL-CAw with several recent complex detection algorithms on unscored and scored networks, and assess the relative performance of the algorithms on these networks. Further, we study the impact of augmenting physical datasets with computationally inferred interactions for complex detection. Finally, we analyse the essentiality of proteins within predicted complexes to understand a possible correlation between protein essentiality and their ability to form complexes.

Conclusions: We demonstrate that core-attachment based refinement in MCL-CAw improves the predictions of MCL on yeast PPI networks. We show that affinity scoring improves the performance of MCL-CAw. PMID: 20939868
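For orientation, plain MCL on a scored (weighted) network looks as follows; MCL-CAw's core-attachment refinement sits on top of this and is not reproduced:

```python
import numpy as np

def mcl(A, expansion=2, inflation=2.0, n_iter=100, tol=1e-6):
    """A: (n, n) nonnegative affinity matrix, assumed to include self-loops.
    Alternates expansion (random-walk flow) and inflation (flow sharpening)."""
    M = A / A.sum(axis=0, keepdims=True)            # column-stochastic
    for _ in range(n_iter):
        M_prev = M
        M = np.linalg.matrix_power(M, expansion)    # expansion
        M = M ** inflation                          # inflation
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - M_prev).max() < tol:
            break
    return M   # clusters are read off from the rows of the attractors
```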
Casalegno, Stefano; Bennie, Jonathan J; Inger, Richard; Gaston, Kevin J
2014-01-01
Although the importance of addressing ecosystem service benefits in regional land use planning and decision-making is evident, substantial practical challenges remain. In particular, methods to identify priority areas for the provision of key ecosystem services and other environmental services (benefits from the environment not directly linked to the function of ecosystems) need to be developed. Priority areas are locations which provide disproportionally high benefits from one or more service. Here we map a set of ecosystem and environmental services and delineate priority areas according to different scenarios. Each scenario is produced by a set of weightings allocated to different services and corresponds to different landscape management strategies which decision makers could undertake. Using the county of Cornwall, U.K., as a case study, we processed gridded maps of key ecosystem services and environmental services, including renewable energy production and urban development. We explored their spatial distribution patterns and their spatial covariance and spatial stationarity within the region. Finally we applied a complementarity-based priority ranking algorithm (zonation) using different weighting schemes. Our conclusions are that (i) there are two main patterns of service distribution in this region, clustered services (including agriculture, carbon stocks, urban development and plant production) and dispersed services (including cultural services, energy production and floods mitigation); (ii) more than half of the services are spatially correlated and there is high non-stationarity in the spatial covariance between services; and (iii) it is important to consider both ecosystem services and other environmental services in identifying priority areas. Different weighting schemes provoke drastic changes in the delineation of priority areas and therefore decision making processes need to carefully consider the relative values attributed to different services.
NASA Astrophysics Data System (ADS)
Peckerar, Martin C.; Marrian, Christie R.
1995-05-01
Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
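The idea can be sketched as gradient descent on a regularized least-squares cost in which negative doses are penalized; the quadratic hinge penalty and step size below are one plausible choice, not necessarily the authors' regularizer:

```python
import numpy as np

def corrected_dose(A, target, lam=10.0, n_iter=2000):
    """Minimize ||A d - target||^2 + lam * ||min(d, 0)||^2 by gradient descent.
    A: proximity (point-spread) matrix mapping dose d to deposited energy."""
    eta = 0.9 / (np.linalg.norm(A, 2) ** 2 + lam)   # conservative step size
    d = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * (A.T @ (A @ d - target) + lam * np.minimum(d, 0.0))
        d -= eta * grad
    return d
```

Driving lam large pushes the solution toward nonnegative doses in a mathematically sound way, which is the "negative dose" fix the abstract describes; simply ignoring negative doses corresponds to lam = 0 followed by clipping.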
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Mikhail; Ackerman, Andrew; Avramov, Alex
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP) and potential cloud dissipation, in agreement with earlier studies. By comparing simulations with the same microphysics coupled to different dynamical cores as well as the same dynamics coupled to different microphysics schemes, it is found that the ice water path (IWP) is mainly controlled by ice microphysics, while the inter-model differences in LWP are largely driven by physics and numerics of the dynamical cores. In contrast to previous intercomparisons, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSD) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case.
Ginzburg, Irina
2017-01-01
Impact of the unphysical tangential advective-diffusion constraint of the bounce-back (BB) reflection on the impermeable solid surface is examined for the first four moments of concentration. Despite the number of recent improvements for the Neumann condition in the lattice Boltzmann method-advection-diffusion equation, the BB rule remains the only known local mass-conserving no-flux condition suitable for staircase porous geometry. We examine the closure relation of the BB rule in straight channel and cylindrical capillary analytically, and show that it excites the Knudsen-type boundary layers in the nonequilibrium solution for full-weight equilibrium stencil. Although the d2Q5 and d3Q7 coordinate schemes are sufficient for the modeling of isotropic diffusion, the full-weight stencils are appealing for their advanced stability, isotropy, anisotropy and anti-numerical-diffusion ability. The boundary layers are not covered by the Chapman-Enskog expansion around the expected equilibrium, but they accommodate the Chapman-Enskog expansion in the bulk with the closure relation of the bounce-back rule. We show that the induced boundary layers introduce first-order errors in two primary transport properties, namely, mean velocity (first moment) and molecular diffusion coefficient (second moment). As a side effect, the Taylor-dispersion coefficient (second moment), skewness (third moment), and kurtosis (fourth moment) deviate from their physical values and predictions of the fourth-order Chapman-Enskog analysis, even though the kurtosis error in pure diffusion does not depend on grid resolution. In two- and three-dimensional grid-aligned channels and open-tubular conduits, the errors of velocity and diffusion are proportional to the diagonal weight values of the corresponding equilibrium terms. The d2Q5 and d3Q7 schemes do not suffer from this deficiency in grid-aligned geometries but they cannot avoid it if the boundaries are not parallel to the coordinate lines. In order to eliminate or attenuate the disparity of the modeled transport coefficients with the equilibrium weights without any modification of the BB rule, we propose to use the two-relaxation-times collision operator with a free-tunable product of two eigenfunctions Λ. Two different values Λ_{v} and Λ_{b} are assigned for bulk and boundary nodes, respectively. The rationale behind this is that Λ_{v} is adjustable for stability, accuracy, or other purposes, while the corresponding Λ_{b}(Λ_{v}) controls the primary accommodation effects. Two distinct but similar functional relations Λ_{b}(Λ_{v}) are constructed analytically: they preserve advection velocity in parabolic profile, exactly in the two-dimensional channel and very accurately in a three-dimensional cylindrical capillary. For any velocity-weight stencil, the (local) double-Λ BB scheme produces quasi-identical solutions with the (nonlocal) specular-forward reflection for the first four moments in a channel. In a capillary, this strategy allows for the accurate modeling of the Taylor-dispersion and non-Gaussian effects. As an illustrative example, it is shown that in the flow around a circular obstacle, the double-Λ scheme may also eliminate the dependency of mean velocity on the velocity weight; the required value for Λ_{b}(Λ_{v}) can be identified in a few bisection iterations in a given geometry.
A positive solution for Λ_{b}(Λ_{v}) may not exist in pure diffusion, but a sufficiently small value of Λ_{b} significantly reduces the disparity in diffusion coefficient with the mass weight in ducts and in the presence of rectangular obstacles. Although Λ_{b} also controls the effective position of straight or curved boundaries, the double-Λ scheme deals with the lower-order effects. Its idea and construction may help understanding and amelioration of the anomalous, zero- and first-order behavior of the macroscopic solution in the presence of the bulk and boundary or interface discontinuities, commonly found in multiphase flow and heterogeneous transport.
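The bisection identification of Λ_b mentioned above is generic root-finding; a sketch, where mean_velocity_error is a user-supplied function that runs the scheme in the given geometry and returns the signed error, assumed monotone over the bracket:

```python
def find_lambda_b(mean_velocity_error, lo, hi, n_iter=20):
    """Bisection for the Lambda_b at which the mean-velocity error changes sign.
    Requires mean_velocity_error(lo) and mean_velocity_error(hi) of opposite sign."""
    f_lo = mean_velocity_error(lo)
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        f_mid = mean_velocity_error(mid)
        if f_lo * f_mid <= 0.0:     # root lies in [lo, mid]
            hi = mid
        else:                       # root lies in [mid, hi]
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```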
NASA Astrophysics Data System (ADS)
Ginzburg, Irina
2017-01-01
Impact of the unphysical tangential advective-diffusion constraint of the bounce-back (BB) reflection on the impermeable solid surface is examined for the first four moments of concentration. Despite the number of recent improvements for the Neumann condition in the lattice Boltzmann method-advection-diffusion equation, the BB rule remains the only known local mass-conserving no-flux condition suitable for staircase porous geometry. We examine the closure relation of the BB rule in straight channel and cylindrical capillary analytically, and show that it excites the Knudsen-type boundary layers in the nonequilibrium solution for full-weight equilibrium stencil. Although the d2Q5 and d3Q7 coordinate schemes are sufficient for the modeling of isotropic diffusion, the full-weight stencils are appealing for their advanced stability, isotropy, anisotropy and anti-numerical-diffusion ability. The boundary layers are not covered by the Chapman-Enskog expansion around the expected equilibrium, but they accommodate the Chapman-Enskog expansion in the bulk with the closure relation of the bounce-back rule. We show that the induced boundary layers introduce first-order errors in two primary transport properties, namely, mean velocity (first moment) and molecular diffusion coefficient (second moment). As a side effect, the Taylor-dispersion coefficient (second moment), skewness (third moment), and kurtosis (fourth moment) deviate from their physical values and predictions of the fourth-order Chapman-Enskog analysis, even though the kurtosis error in pure diffusion does not depend on grid resolution. In two- and three-dimensional grid-aligned channels and open-tubular conduits, the errors of velocity and diffusion are proportional to the diagonal weight values of the corresponding equilibrium terms. The d2Q5 and d3Q7 schemes do not suffer from this deficiency in grid-aligned geometries but they cannot avoid it if the boundaries are not parallel to the coordinate lines. In order to vanish or attenuate the disparity of the modeled transport coefficients with the equilibrium weights without any modification of the BB rule, we propose to use the two-relaxation-times collision operator with free-tunable product of two eigenfunctions Λ . Two different values Λv and Λb are assigned for bulk and boundary nodes, respectively. The rationale behind this is that Λv is adjustable for stability, accuracy, or other purposes, while the corresponding Λb(Λv) controls the primary accommodation effects. Two distinguished but similar functional relations Λb(Λv) are constructed analytically: they preserve advection velocity in parabolic profile, exactly in the two-dimensional channel and very accurately in a three-dimensional cylindrical capillary. For any velocity-weight stencil, the (local) double-Λ BB scheme produces quasi-identical solutions with the (nonlocal) specular-forward reflection for first four moments in a channel. In a capillary, this strategy allows for the accurate modeling of the Taylor-dispersion and non-Gaussian effects. As illustrative example, it is shown that in the flow around a circular obstacle, the double-Λ scheme may also vanish the dependency of mean velocity on the velocity weight; the required value for Λb(Λv) can be identified in a few bisection iterations in given geometry. 
A positive solution for Λb(Λv) may not exist in pure diffusion, but a sufficiently small value of Λb significantly reduces the disparity in the diffusion coefficient with the mass weight in ducts and in the presence of rectangular obstacles. Although Λb also controls the effective position of straight or curved boundaries, the double-Λ scheme deals with the lower-order effects. Its idea and construction may help in understanding and ameliorating the anomalous zero- and first-order behavior of the macroscopic solution in the presence of bulk and boundary or interface discontinuities, commonly found in multiphase flow and heterogeneous transport.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide twice the spectral efficiency. The proposed schemes perform comparably and are able to compensate for as much as 1200 ps of differential group delay with negligible penalty.
A stable 1D multigroup high-order low-order method
Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot; ...
2016-07-13
The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. (2015) [Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method; Nashville, TN, April 19-23, 2015. American Nuclear Society.] revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This results in a modified HOLO method (HOLO-R) that is significantly more stable.
Tensor methodology and computational geometry in direct computational experiments in fluid mechanics
NASA Astrophysics Data System (ADS)
Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Julia
2017-07-01
The paper considers a generalized functional and algorithmic construction of direct computational experiments in fluid dynamics. The notation of tensor mathematics is naturally embedded in the finite-element operations used to construct the numerical schemes. A large fluid particle, which has a finite size, its own weight, and internal displacement and deformation, is considered as the elementary computational object. The tensor representation of computational objects yields a straightforward, linear, and unique approximation of elementary volumes and of the fluid particles inside them. The proposed approach allows the use of explicit numerical schemes, an important condition for increasing the efficiency of the developed algorithms through numerical procedures with natural parallelism. The advantages of the proposed approach are achieved, among other things, by representing the motion of large particles of a continuous medium in dual coordinate systems and by carrying out computing operations in the projections of these two coordinate systems with direct and inverse transformations. A new method for the mathematical representation and synthesis of computational experiments based on the large-particle method is thus proposed.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has come to play a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our fusion algorithm in comparison with common and basic fusion algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glover, W. J., E-mail: williamjglover@gmail.com
2014-11-07
State averaged complete active space self-consistent field (SA-CASSCF) is a workhorse for determining the excited-state electronic structure of molecules, particularly for states with multireference character; however, the method suffers from known issues that have prevented its wider adoption. One issue is the presence of discontinuities in potential energy surfaces when a state that is not included in the state averaging crosses with one that is. In this communication I introduce a new dynamical weight with spline (DWS) scheme that mimics SA-CASSCF while removing energy discontinuities due to unweighted state crossings. In addition, analytical gradients for DWS-CASSCF (and other dynamically weighted schemes) are derived for the first time, enabling energy-conserving excited-state ab initio molecular dynamics in instances where SA-CASSCF fails.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
Security patterns and a weighting scheme for mobile agents
NASA Astrophysics Data System (ADS)
Walker, Jessie J.
The notion of mobility has always been a prime factor in human endeavor and achievement. This need to migrate has been distilled into software entities, which act as human representatives in distant environments. Software agents are developed to act on behalf of a user. Mobile agents were born from the understanding that it is often much more useful to move the code (program) to where the resources are located, instead of connecting remotely. Within the mobile agent research community, security has traditionally been the most defining issue facing the field and the one preventing the paradigm from gaining wide acceptance. There are still numerous difficult problems with very few practical solutions, such as the malicious host and malicious agent problems; these are among the most active areas of research within the mobile agent community. The major principles, facets, fundamental concepts, techniques, and architectures of the field are well understood within the community. This is evident from the many mobile agent systems developed in the last decade that share common core components such as agent management, communication facilities, and mobility services. In other words, new mobile agent systems and frameworks provide few new insights into agent system architecture, mobility services, agent coordination, or communication that could be useful to the agent research community, although these new systems do in many instances validate, refine, and demonstrate the reuse of many previously proposed mobile agent research elements. Since mobile agent research for the last decade has been defined by security and related issues, our research into security patterns sits within this narrow arena. The research presented in this thesis examines the issue of mobile agent security from the standpoint of security patterns documented from the universe of mobile agent systems. In addition, we explore how these documented security patterns can be quantitatively compared based on a unique weighting scheme. The scheme is formalized into a theory that can be used to improve the development of secure mobile agents and agent-based systems.
Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme
NASA Astrophysics Data System (ADS)
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-08-01
The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography, which harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of helping reduce false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of the four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, the two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under the receiver operating characteristic curve (AUC). An AUC of 0.793 ± 0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information to distinguish between malignant and benign cases among recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
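The adaptive scoring fusion step lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical validation-fold score arrays s_cc and s_mlo and labels y_val: it scans a convex combination weight and keeps the one maximizing AUC. The paper's method determines the weights automatically; this grid scan only illustrates the idea.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_scores(s_cc, s_mlo, y_val):
    """Pick the convex weight on the CC-view score that maximizes
    validation AUC of the fused score."""
    best_w, best_auc = 0.5, -1.0
    for w in np.linspace(0.0, 1.0, 101):
        auc = roc_auc_score(y_val, w * s_cc + (1 - w) * s_mlo)
        if auc > best_auc:
            best_w, best_auc = w, auc
    return best_w, best_auc
```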
Synthesis of MCMC and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but which in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows the BP error to be expressed as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pairwise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pairwise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the nontrivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
Report on Pairing-based Cryptography.
Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily
2015-01-01
This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.
A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-03-24
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
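As a rough illustration of the fingerprinting step that feeds such a filter, here is a generic inverse-distance-weighted KNN position estimate (not the authors' topology-constrained variant; fingerprints, coords, and rssi are assumed arrays of reference RSSI vectors, their map coordinates, and the live scan):

```python
import numpy as np

def knn_position(rssi, fingerprints, coords, k=4, eps=1e-6):
    """Estimate position as the inverse-distance-weighted mean of the
    k nearest reference points in RSSI (signal-strength) space."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)  # distance in signal space
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)                     # closer fingerprints weigh more
    return (w[:, None] * coords[nearest]).sum(axis=0) / w.sum()
```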
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces moment competition and weakly supervised information into the energy functional construction. Different from region-based level set methods which use force competition, moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
MRI diffusion tensor reconstruction with PROPELLER data acquisition.
Cheryauka, Arvidas B; Lee, James N; Samsonov, Alexei A; Defrise, Michel; Gullberg, Grant T
2004-02-01
MRI diffusion imaging is effective in measuring the diffusion tensor in brain, cardiac, liver, and spinal tissue. The diffusion tensor tomography MRI (DTT MRI) method is based on reconstructing the diffusion tensor field from measurements of projections of the tensor field. Projections are obtained by appropriate application of rotated diffusion gradients. In the present paper, the potential of a novel data acquisition scheme, PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction), is examined in combination with DTT MRI for its capability and sufficiency for diffusion imaging. An iterative reconstruction algorithm is used to reconstruct the diffusion tensor field from diffusion-weighted blades acquired with appropriately rotated diffusion gradients. DTT MRI with PROPELLER data acquisition shows significant potential to reduce the number of weighted measurements, avoid ambiguity in reconstructing diffusion tensor parameters, increase the signal-to-noise ratio, and decrease the influence of signal distortion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, S. N.; Wilson, Brian G.
2011-07-06
A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is more than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
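The tensor-product Gauss-Legendre building block behind such isoparametric integration can be sketched as follows (a minimal sketch over a box rather than a true Voronoi polyhedron; the actual scheme maps polyhedral subcells through an isoparametric transformation):

```python
import numpy as np

def gauss_legendre_3d(f, bounds, n=8):
    """Integrate f(x, y, z) over a box via tensor-product Gauss-Legendre
    quadrature with n points per axis."""
    x, wx = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]

    def mapped(a, b):
        # affine map of nodes and weights from [-1, 1] to [a, b]
        return 0.5 * (b - a) * x + 0.5 * (b + a), 0.5 * (b - a) * wx

    (ax, bx), (ay, by), (az, bz) = bounds
    xs, wsx = mapped(ax, bx); ys, wsy = mapped(ay, by); zs, wsz = mapped(az, bz)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    W = np.einsum("i,j,k->ijk", wsx, wsy, wsz)
    return float((W * f(X, Y, Z)).sum())

# e.g. integrate a Gaussian charge blob over a unit cell
val = gauss_legendre_3d(lambda x, y, z: np.exp(-(x**2 + y**2 + z**2)),
                        [(-1, 1), (-1, 1), (-1, 1)])
```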
Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition
NASA Astrophysics Data System (ADS)
Yin, Xi; Liu, Xiaoming
2018-02-01
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
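A generic dynamic task-weighting mechanism of this kind can be sketched in a few lines (a softmax-over-logits stand-in, not the paper's exact rule; the class and variable names are illustrative):

```python
import torch

class DynamicWeights(torch.nn.Module):
    """Learnable loss weights for side tasks: a softmax over logits keeps
    the weights positive and summing to one, so the optimizer can shift
    emphasis between side tasks during training."""
    def __init__(self, n_side_tasks):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_side_tasks))

    def forward(self, main_loss, side_losses):
        w = torch.softmax(self.logits, dim=0)
        return main_loss + (w * torch.stack(side_losses)).sum()
```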
Autonomous learning by simple dynamical systems with delayed feedback.
Kaluza, Pablo; Mikhailov, Alexander S
2014-09-01
A general scheme for the construction of dynamical systems able to learn generation of the desired kinds of dynamics through adjustment of their internal structure is proposed. The scheme involves intrinsic time-delayed feedback to steer the dynamics towards the target performance. As an example, a system of coupled phase oscillators, which can, by changing the weights of connections between its elements, evolve to a dynamical state with the prescribed (low or high) synchronization level, is considered and investigated.
Network community-detection enhancement by proper weighting
NASA Astrophysics Data System (ADS)
Khadivi, Alireza; Ajdari Rad, Ali; Hasler, Martin
2011-04-01
In this paper, we show how proper assignment of weights to the edges of a complex network can enhance the detection of communities and how it can circumvent the resolution limit and the extreme degeneracy problems associated with modularity. Our general weighting scheme takes advantage of graph-theoretic measures, and we introduce two heuristics for tuning its parameters. We use this weighting as a preprocessing step for the greedy modularity optimization algorithm of Newman to improve its performance. The results of experiments with our approach on computer-generated and real-world networks confirm that the proposed approach not only mitigates the problems of modularity but also improves the modularity optimization.
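The flavor of such a preprocessing step can be sketched with a simple common-neighbor heuristic standing in for the paper's weighting (the affinity proxy below is illustrative, not the authors' measure):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def weight_edges(G):
    """Weight each edge by a common-neighbor similarity before modularity
    optimization, so intra-community edges gain weight."""
    for u, v in G.edges():
        cn = len(list(nx.common_neighbors(G, u, v)))
        G[u][v]["weight"] = 1.0 + cn      # heuristic affinity proxy
    return G

G = weight_edges(nx.karate_club_graph())
communities = greedy_modularity_communities(G, weight="weight")
```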
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL), and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
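For reference, a one-sided Poisson EWMA chart of the kind being combined here can be sketched as follows (the smoothing constant lam and limit width L are typical textbook choices, not the paper's values):

```python
import numpy as np

def ewma_signals(counts, mu0, lam=0.2, L=3.0):
    """Return the sample indices at which the EWMA statistic exceeds its
    upper control limit, using the asymptotic variance for Poisson data."""
    sigma_z = np.sqrt(mu0 * lam / (2 - lam))   # asymptotic EWMA std. dev.
    ucl = mu0 + L * sigma_z                    # upper control limit
    z, signals = mu0, []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z            # exponentially weighted average
        if z > ucl:
            signals.append(t)                  # chart signals at sample t
    return signals
```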
An efficient sparse matrix multiplication scheme for the CYBER 205 computer
NASA Technical Reports Server (NTRS)
Lambiotte, Jules J., Jr.
1988-01-01
This paper describes the development of an efficient algorithm for computing the product of a matrix and vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no single storage type is most efficient with respect to both resources, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
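The weighted selection rule can be paraphrased in compact Python (the cost models below are crude placeholders for the empirically calibrated estimates described above):

```python
FORMATS = ("dense", "moderately_sparse", "very_sparse_short", "very_sparse_long")

def est_cpu(nnz, length, fmt):          # placeholder CPU-time model
    return length if fmt == "dense" else 3 * nnz

def est_mem(nnz, length, fmt):          # placeholder storage model
    return length if fmt == "dense" else 2 * nnz

def choose_storage(diagonals, w_cpu=1.0, w_mem=1.0):
    """diagonals: list of (nnz, length) pairs, one per stored diagonal.
    Pick, per diagonal, the format minimizing the user-weighted cost."""
    return [min(FORMATS,
                key=lambda f: w_cpu * est_cpu(nnz, n, f) + w_mem * est_mem(nnz, n, f))
            for nnz, n in diagonals]
```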
Assurance of energy efficiency and data security for ECG transmission in BASNs.
Ma, Tao; Shrestha, Pradhumna Lal; Hempel, Michael; Peng, Dongming; Sharif, Hamid; Chen, Hsiao-Hwa
2012-04-01
With the technological advancement in body area sensor networks (BASNs), low-cost, high-quality electrocardiographic (ECG) diagnosis systems have become important equipment for healthcare service providers. However, energy consumption and data security with ECG systems in BASNs are still two major challenges to tackle. In this study, we investigate the properties of compressed ECG data for energy saving in an effort to devise a selective encryption mechanism and a two-rate unequal error protection (UEP) scheme. The proposed selective encryption mechanism provides a simple yet effective security solution for an ECG sensor-based communication platform, where only one percent of the data is encrypted without compromising ECG data security. This part of the encrypted data is essential to ECG data quality due to its unequally important contribution to distortion reduction. The two-rate UEP scheme achieves a significant additional energy saving due to its unequal investment of communication energy in the outcomes of the selective encryption, and thus it maintains high ECG data transmission quality. Our results show improvements in communication energy saving of about 40%, and demonstrate higher transmission quality and security measured in terms of the wavelet-based weighted percent root-mean-squared difference.
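The selective-encryption idea, encrypting only the small fraction of coefficients that dominates distortion, can be sketched as follows (XOR with a supplied keystream stands in for a real cipher; the 1% fraction follows the abstract):

```python
import numpy as np

def selective_encrypt(coeffs, keystream, frac=0.01):
    """Encrypt only the top `frac` largest-magnitude quantized coefficients;
    XOR with a keystream is a placeholder for a real cipher."""
    q = coeffs.astype(np.int32).copy()
    k = max(1, int(frac * q.size))
    idx = np.argpartition(np.abs(q), -k)[-k:]    # indices of the critical few
    q[idx] ^= keystream[:k].astype(np.int32)     # placeholder encryption step
    return q, idx
```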
Du, Jialu; Hu, Xin; Liu, Hongbo; Chen, C L Philip
2015-11-01
This paper develops an adaptive robust output feedback control scheme for dynamically positioned ships with unavailable velocities and unknown dynamic parameters in an unknown time-variant disturbance environment. The controller is designed by incorporating the high-gain observer and radial basis function (RBF) neural networks in vectorial backstepping method. The high-gain observer provides the estimations of the ship position and heading as well as velocities. The RBF neural networks are employed to compensate for the uncertainties of ship dynamics. The adaptive laws incorporating a leakage term are designed to estimate the weights of RBF neural networks and the bounds of unknown time-variant environmental disturbances. In contrast to the existing results of dynamic positioning (DP) controllers, the proposed control scheme relies only on the ship position and heading measurements and does not require a priori knowledge of the ship dynamics and external disturbances. By means of Lyapunov functions, it is theoretically proved that our output feedback controller can control a ship's position and heading to the arbitrarily small neighborhood of the desired target values while guaranteeing that all signals in the closed-loop DP control system are uniformly ultimately bounded. Finally, simulations involving two ships are carried out, and simulation results demonstrate the effectiveness of the proposed control scheme.
Parameter diagnostics of phases and phase transition learning by neural networks
NASA Astrophysics Data System (ADS)
Suchsland, Philippe; Wessel, Stefan
2018-05-01
We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
NASA Astrophysics Data System (ADS)
Tsou, Haiping; Yan, Tsun-Yee
1999-04-01
This paper describes an extended-source spatial acquisition and tracking scheme for planetary optical communications. This scheme uses the Sun-lit Earth image as the beacon signal, which can be computed according to the current Sun-Earth-Probe angle from a pre-stored Earth image or a received snapshot taken by another Earth-orbiting satellite. Onboard the spacecraft, the reference image is correlated in the transform domain with the received image obtained from a detector array, which is assumed to have each of its pixels corrupted by independent additive white Gaussian noise. The coordinate of the ground station is acquired and tracked, respectively, by an open-loop acquisition algorithm and a closed-loop tracking algorithm derived from the maximum likelihood criterion. As shown in the paper, the optimal spatial acquisition requires solving two nonlinear equations, or iteratively solving their linearized variants, to estimate the coordinate when translation in the relative positions of onboard and ground transceivers is considered. A similar linearization assumption leads to the closed-loop spatial tracking algorithm, in which the loop feedback signals can be derived from the weighted transform-domain correlation. Numerical results using a sample Sun-lit Earth image demonstrate that sub-pixel resolutions can be achieved by this scheme in a high-disturbance environment.
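The open-loop acquisition step, correlating the received frame with the reference beacon image in the transform domain, has this familiar FFT form (a plain circular cross-correlation sketch; sub-pixel refinement and the maximum-likelihood weighting are omitted):

```python
import numpy as np

def correlate_acquire(received, reference):
    """Locate the reference beacon image within a received frame via
    transform-domain (circular) cross-correlation; the correlation peak
    gives a pixel-level coordinate estimate."""
    F = np.fft.fft2(received)
    G = np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(F * G).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```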
Self-determination and motivation for bariatric surgery: a qualitative study.
Park, Juyeon
2016-10-01
This study examined how obese individuals acquire their motivation to undergo weight loss surgery and characterized the motivations within the framework of the self-determination theory (SDT). Participants expecting to have bariatric surgery were recruited and participated in semi-structured interviews. Interview accounts characterized different types of motivation for individuals seeking surgical weight loss treatments on the SDT continuum of relative autonomy. This study demonstrated that the more one's motivation was internally regulated, related to one's personal life and supported for competency, the more personal and hopeful were the anecdotes participants mentioned in accounts, thus the more positive the surgical outcomes were anticipated. Study limitations and future research were discussed as was the need for a systematic scheme to categorize types of motivation within the SDT, a longitudinal approach to measure actual weight loss outcomes based on the patient's pre-surgical motivation, and a further investigation with a larger sample size and balanced gender ratio. Practical implications of the study findings were also discussed as a novel strategy to internalize bariatric patients' motivation, further helping to improve their long-term quality of life post-surgery.
Robust mislabel logistic regression without modeling mislabel probabilities.
Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun
2018-03-01
Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities; (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
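A schematic fixed-point iteration conveys the weighting idea (an illustrative gradient scheme with likelihood-power weights, not the paper's exact estimator; the step size and iteration count are arbitrary):

```python
import numpy as np

def gamma_logistic(X, y, gamma=0.5, iters=200, lr=0.1):
    """Points the current fit considers improbable get down-weighted,
    giving robustness to mislabels."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        lik = np.where(y == 1, p, 1 - p)   # fitted probability of the label
        w = lik ** gamma                   # gamma-divergence-style weights
        b += lr * (X.T @ (w * (y - p))) / len(y)   # weighted score step
    return b
```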
Lee, Tian-Fu; Liu, Chuan-Ming
2013-06-01
A smart-card-based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors, and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to resolve a weakness in the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has faults: the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weakness in the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the growth of network security requirements, biometrics-based authentication schemes applied in multi-server environments have become more crucial and more widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, which is based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks.
ID-based encryption scheme with revocation
NASA Astrophysics Data System (ADS)
Othman, Hafizul Azrie; Ismail, Eddie Shahril
2017-04-01
In 2015, Meshram proposed an efficient ID-based cryptographic encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme was pairing-free and claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.
A secure biometrics-based authentication scheme for telecare medicine information systems.
Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng
2013-10-01
The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it could withstand various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to the Denial-of-Service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weakness in Tan's scheme but also has better performance.
A two‐point scheme for optimal breast IMRT treatment planning
2013-01-01
We propose an approach to determining optimal beam weights in breast/chest wall IMRT treatment plans. The goal is to decrease breathing effect and to maximize skin dose if the skin is included in the target or, otherwise, to minimize the skin dose. Two points in the target are utilized to calculate the optimal weights. The optimal plan (i.e., the plan with optimal beam weights) consists of high energy unblocked beams, low energy unblocked beams, and IMRT beams. Six breast and five chest wall cases were retrospectively planned with this scheme in Eclipse, including one breast case where CTV was contoured by the physician. Compared with 3D CRT plans composed of unblocked and field‐in‐field beams, the optimal plans demonstrated comparable or better dose uniformity, homogeneity, and conformity to the target, especially at beam junction when supraclavicular nodes are involved. Compared with nonoptimal plans (i.e., plans with nonoptimized weights), the optimal plans had better dose distributions at shallow depths close to the skin, especially in cases where breathing effect was taken into account. This was verified with experiments using a MapCHECK device attached to a motion simulation table (to mimic motion caused by breathing).
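Numerically, a two-point weight determination of this kind reduces to a small linear solve. A toy instance (all dose values invented for illustration, not taken from the paper):

```python
import numpy as np

# Per-unit-weight dose of two beam groups at the two target points:
D = np.array([[1.10, 0.80],      # dose/weight at point 1 (high-E, low-E beams)
              [0.90, 1.05]])     # dose/weight at point 2
prescription = np.array([50.0, 50.0])   # desired dose (Gy) at both points
weights = np.linalg.solve(D, prescription)   # beam weights hitting both targets
```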
Weighted divergence correction scheme and its fast implementation
NASA Astrophysics Data System (ADS)
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has been proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
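The unweighted projection at the core of DCS can be sketched for a periodic field via FFT (a minimal sketch, not the paper's fast algorithm; the weighted variant would rescale the components before projecting):

```python
import numpy as np

def project_div_free(u, v, w, dx=1.0):
    """Project a periodic 3D velocity field onto its divergence-free part
    in Fourier space: u_hat <- u_hat - k (k . u_hat) / |k|^2."""
    k = [np.fft.fftfreq(n, d=dx) * 2 * np.pi for n in u.shape]
    KX, KY, KZ = np.meshgrid(*k, indexing="ij")
    K2 = KX**2 + KY**2 + KZ**2
    K2[0, 0, 0] = 1.0                          # avoid division by zero (mean mode)
    U, V, W = (np.fft.fftn(f) for f in (u, v, w))
    div = KX * U + KY * V + KZ * W             # factor of i omitted: it cancels
    U, V, W = U - KX * div / K2, V - KY * div / K2, W - KZ * div / K2
    return tuple(np.fft.ifftn(F).real for F in (U, V, W))
```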
Yan, Fei; Christmas, William; Kittler, Josef
2008-10-01
In this paper, we propose a multilayered data association scheme with graph-theoretic formulation for tracking multiple objects that undergo switching dynamics in clutter. The proposed scheme takes as input object candidates detected in each frame. At the object candidate level, "tracklets" are "grown" from sets of candidates that have high probabilities of containing only true positives. At the tracklet level, a directed and weighted graph is constructed, where each node is a tracklet, and the edge weight between two nodes is defined according to the "compatibility" of the two tracklets. The association problem is then formulated as an all-pairs shortest path (APSP) problem in this graph. Finally, at the path level, by analyzing the APSPs, all object trajectories are identified, and track initiation and track termination are automatically dealt with. By exploiting a special topological property of the graph, we have also developed a more efficient APSP algorithm than the general-purpose ones. The proposed data association scheme is applied to tennis sequences to track tennis balls. Experiments show that it works well on sequences where other data association methods perform poorly or fail completely.
On the use of transition matrix methods with extended ensembles.
Escobedo, Fernando A; Abreu, Charlles R A
2006-03-14
Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate λ were formulated so that they employ (i) "variable" sampling window schemes (which include the "successive umbrella sampling" method) to comprehensively explore the λ domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy landscape η (or "importance" weights) associated with λ. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating η(λ). The validity and performance of the different NBS schemes were then assessed using as the λ coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of η is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader sampling window in the variable window methods. Finally, it is shown how an "elastic" sampling window can be used to effectively enact (nonuniform) preferential sampling over the λ domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an η surface over a two-dimensional domain.
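The transition matrix route to the weights can be sketched as follows (a standard detailed-balance estimate between adjacent λ bins; the sign convention and data collection details vary, and C is an assumed matrix of accumulated acceptance-ratio statistics):

```python
import numpy as np

def eta_from_transitions(C):
    """C[i, j]: accumulated acceptance-ratio statistics for attempted moves
    from lambda-bin i to j (adjacent bins only). Detailed balance gives the
    log-probability profile; the importance weights are its negative under
    the usual flat-sampling convention."""
    n = C.shape[0]
    ln_pi = np.zeros(n)
    for i in range(n - 1):
        p_up = C[i, i + 1] / C[i].sum()        # estimated P(i -> i+1)
        p_dn = C[i + 1, i] / C[i + 1].sum()    # estimated P(i+1 -> i)
        ln_pi[i + 1] = ln_pi[i] + np.log(p_up / p_dn)
    return -ln_pi   # weights that flatten sampling over the lambda domain
```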
Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L
2010-08-01
To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology. We performed a review of performance-based health outcomes reimbursement schemes over the past 10 years (7/98–10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage with evidence development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K) is an important parameter affecting the flow of water through sediments under streams, and it can vary by orders of magnitude within a stream reach. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward-flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error. Increasing the number of sampling points reduced the error of the arithmetic mean with diminishing returns, but did not substantially reduce the error associated with the geometric and harmonic means.
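Comparing the three candidate averages against the equivalent conductivity is a one-liner per mean (all values here are illustrative; K_eq is an assumed reference from the corresponding flux simulation):

```python
import numpy as np
from scipy.stats import gmean, hmean

rng = np.random.default_rng(0)
samples = rng.choice([0.1, 0.01], size=20, p=[2/3, 1/3])   # sampled K, cm/s
K_eq = 0.05                      # assumed equivalent K from the model run
means = {"arithmetic": samples.mean(),
         "geometric": gmean(samples),
         "harmonic": hmean(samples)}
errors = {name: abs(m - K_eq) for name, m in means.items()}
```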
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. Besides, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Optimal Discrete Spatial Compression for Beamspace Massive MIMO Signals
NASA Astrophysics Data System (ADS)
Jiang, Zhiyuan; Zhou, Sheng; Niu, Zhisheng
2018-05-01
Deploying a massive number of antennas at the base station side can boost the cellular system performance dramatically. Meanwhile, however, it involves significant additional radio-frequency (RF) front-end complexity, hardware cost, and power consumption. To address this issue, the beamspace-multiple-input-multiple-output (beamspace-MIMO) based approach is considered a promising solution. In this paper, we first show that the traditional beamspace-MIMO suffers from spatial power leakage and imperfect channel statistics estimation. A beam combination module is hence proposed, which consists of a small number (compared with the number of antenna elements) of low-resolution (possibly one-bit) digital (discrete) phase shifters after the beamspace transformation to further compress the beamspace signal dimensionality, such that the number of RF chains can be reduced beyond beamspace transformation and beam selection. The optimum discrete beam combination weights for the uplink are obtained based on the branch-and-bound (BB) approach. The key to the BB-based solution is to solve the embodied subproblem, whose solution is derived in closed form. Based on this solution, a sequential greedy beam combination scheme with linear complexity (w.r.t. the number of beams in the beamspace) is proposed. Link-level simulation results based on realistic channel models and long-term-evolution (LTE) parameters show that the proposed schemes can reduce the number of RF chains by up to 25% with a one-bit digital phase-shifter network.
Ehling, G; Hecht, M; Heusener, A; Huesler, J; Gamer, A O; van Loveren, H; Maurer, Th; Riecke, K; Ullmann, L; Ulrich, P; Vandebriel, R; Vohr, H-W
2005-08-15
The original local lymph node assay (LLNA) is based on the use of radioactive labelling to measure cell proliferation. Other endpoints for the assessment of proliferation are also authorized by OECD Guideline 429, provided there is appropriate scientific support, including full citations and a description of the methodology (OECD, 2002. OECD Guideline for the Testing of Chemicals; Skin Sensitization: Local Lymph Node Assay, Guideline 429. Paris, adopted 24th April 2002). Here, we describe the outcome of the second round of an inter-laboratory validation of alternative endpoints in the LLNA conducted in nine laboratories in Europe. The validation study was managed and supervised by the Swiss Agency for Therapeutic Products (Swissmedic) in Bern. Ear-draining lymph node (LN) weight and cell counts were used to assess LN cell proliferation instead of [3H]TdR incorporation. In addition, the acute inflammatory skin reaction was measured by ear weight determination of circular biopsies of the ears to identify skin irritation properties of the test items. The statistical analysis was performed in the Department of Statistics at the University of Bern. Similar to the EC3 values defined for the radioactive method, threshold values were calculated for the endpoints measured in this modification of the LLNA. It was concluded that all parameters measured have to be taken into consideration for the categorisation of compounds according to their sensitising potencies. Therefore, an assessment scheme has been developed, which turned out to be of great importance for consistently assessing sensitisation versus irritancy based on the data of the different parameters. In contrast to the radioactive method, irritants were picked up by all the laboratories applying this assessment scheme.
On geodynamo integrations conserving momentum flux
NASA Astrophysics Data System (ADS)
Wu, C.; Roberts, P. H.
2012-12-01
The equations governing the geodynamo are most often integrated by representing the magnetic field and fluid velocity by toroidal and poloidal scalars (for example, the MAG code [1]). This procedure does not automatically conserve the momentum flux. The results can, particularly for flows with large shear, introduce significant errors, unless the viscosity is artificially increased. We describe a method that evades this difficulty by solving the momentum equation directly while properly conserving momentum. It finds the pressure by FFT and cyclic reduction, and integrates the governing equations on overlapping grids, so avoiding the pole problem. The number of operations per time step is proportional to N^3, where N is proportional to the number of grid points in each direction. This contrasts with the order-N^4 operations of standard spectral transform methods. The method is easily parallelized. It can also be easily adapted to schemes such as the Weighted Essentially Non-Oscillatory (WENO) method [2], a flux-based procedure based on upwinding that is numerically stable even for zero explicit viscosity. The method has been successfully used to investigate the generation of magnetic fields by flows confined to spheroidal containers and driven by precessional and librational forcing [3, 4]. For spherical systems it satisfies dynamo benchmarks [5]. [1] MAG, http://www.geodynamics.org/cig/software/mag [2] Liu, X.-D., Osher, S., and Chan, T., Weighted Essentially Nonoscillatory Schemes, J. Computational Physics, 115, 200-212, 1994. [3] Wu, C. C. and Roberts, P. H., On a dynamo driven by topographic precession, Geophysical & Astrophysical Fluid Dynamics, 103, 467-501 (DOI: 10.1080/03091920903311788), 2009. [4] Wu, C. C. and Roberts, P. H., On a dynamo driven topographically by longitudinal libration, Geophysical & Astrophysical Fluid Dynamics, DOI: 10.1080/03091929.2012.682990, 2012. [5] Christensen, U., et al., A numerical dynamo benchmark, Phys. Earth Planet. Int., 128, 25-34, 2001.
Doulamis, A D; Doulamis, N D; Kollias, S D
2003-01-01
Multimedia services, and especially digital video, are expected to be the major traffic component transmitted over communication networks [such as internet protocol (IP)-based networks]. For this reason, traffic characterization and modeling of such services are required for efficient network operation. The generated models can be used as traffic rate predictors during the network operation phase (online traffic modeling), or as video generators for estimating network resources during the network design phase (offline traffic modeling). In this paper, an adaptable neural-network architecture covering both cases is proposed. The scheme is based on an efficient recursive weight estimation algorithm, which adapts the network response to current conditions. In particular, the algorithm updates the network weights so that 1) the network output, after the adaptation, is approximately equal to current bit rates (current traffic statistics) and 2) a minimal degradation of the obtained network knowledge is incurred. It can be shown that the proposed adaptable neural-network architecture simulates a recursive nonlinear autoregressive (RNAR) model, by analogy with the notation used in the linear case. The algorithm presents low computational complexity and high efficiency in tracking traffic rates, in contrast to conventional retraining schemes. Furthermore, for the problem of offline traffic modeling, a novel correlation mechanism is proposed for capturing the burstiness of actual MPEG video traffic. The performance of the model is evaluated using several real-life MPEG-coded video sources of long duration and compared with other linear/nonlinear techniques used in both cases. The results indicate that the proposed adaptable neural-network architecture performs better than the other examined techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied for a model problem with periodic boundary conditions at the ends of the conduit, and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
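The SBP property invoked in the stability proof can be checked numerically. The sketch below constructs the standard second-order SBP first-derivative operator and verifies the discrete integration-by-parts identity; the staggered-grid operators with zero endpoint quadrature weights used in the paper are more elaborate, so this illustrates the principle only.

```python
import numpy as np

n, h = 8, 1.0
# Standard 2nd-order SBP first-derivative operator D = H^{-1} Q
H = h * np.eye(n); H[0, 0] = H[-1, -1] = h / 2          # diagonal norm (quadrature)
Q = 0.5 * (np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1), which mimics
# integration by parts and is what the energy-method stability proof uses.
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)
```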
Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees
Ren, Hao; Li, Hongwei; Liang, Xiaohui; He, Shibo; Dai, Yuanshun; Zhao, Lian
2016-01-01
With the rapid growth of health data, the limited storage and computation resources of wireless body area sensor networks (WBANs) are becoming a barrier to their development. Therefore, outsourcing encrypted health data to the cloud has been an appealing strategy. However, data aggregation then becomes difficult. Some recently proposed schemes try to address this problem, but several functions and privacy issues remain undiscussed. In this paper, we propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. Specifically, we achieve a new aggregation function, weighted average (WAAS), and design a privacy-enhanced aggregation scheme (PAAS) to protect the aggregated data from cloud servers. Besides, a histogram aggregation scheme with high accuracy is proposed. PMHA-DP supports fault tolerance while preserving data privacy. The performance evaluation shows that the proposal incurs less communication overhead than the existing one. PMID:27626417
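A minimal sketch of the kind of differentially private weighted average the abstract describes, using the generic Laplace mechanism; the function name, bounds, and weights below are illustrative assumptions and do not reproduce the paper's PMHA-DP/PAAS protocols, which additionally protect the data from the cloud servers.

```python
import numpy as np

def dp_weighted_average(values, weights, epsilon, value_bound):
    """Differentially private weighted average via the Laplace mechanism.
    Assumes each reading lies in [0, value_bound] and the weights are
    public and normalized, so changing one user's reading shifts the
    average by at most max(weights) * value_bound (the L1 sensitivity)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    avg = float(np.dot(w, values))
    sensitivity = w.max() * value_bound
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return avg + noise

# e.g. hypothetical heart-rate readings weighted by sensor reliability
print(dp_weighted_average([72, 75, 71], [0.5, 0.3, 0.2], epsilon=0.5, value_bound=200))
```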
Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme with a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German Aerospace Center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. Also, the mixing efficiency and total pressure recovery factor were provided in order to inspect the performance of the combustor.
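The time integrator named in the abstract is the classical fourth-order Runge-Kutta method; a minimal sketch follows (the preconditioning step used in the in-house code is omitted).

```python
import numpy as np

def rk4_step(f, t, u, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt/2, u + dt/2 * k1)
    k3 = f(t + dt/2, u + dt/2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# e.g. exponential decay du/dt = -u, exact solution exp(-t)
u, t, dt = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    u, t = rk4_step(lambda t, u: -u, t, u, dt), t + dt
print(u, np.exp(-t))  # agree to ~1e-8
```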
Privacy-Enhanced and Multifunctional Health Data Aggregation under Differential Privacy Guarantees.
Ren, Hao; Li, Hongwei; Liang, Xiaohui; He, Shibo; Dai, Yuanshun; Zhao, Lian
2016-09-10
With the rapid growth of health data, the limited storage and computation resources of wireless body area sensor networks (WBANs) are becoming a barrier to their development. Therefore, outsourcing encrypted health data to the cloud has been an appealing strategy. However, data aggregation then becomes difficult. Some recently proposed schemes try to address this problem, but several functions and privacy issues remain undiscussed. In this paper, we propose a privacy-enhanced and multifunctional health data aggregation scheme (PMHA-DP) under differential privacy. Specifically, we achieve a new aggregation function, weighted average (WAAS), and design a privacy-enhanced aggregation scheme (PAAS) to protect the aggregated data from cloud servers. Besides, a histogram aggregation scheme with high accuracy is proposed. PMHA-DP supports fault tolerance while preserving data privacy. The performance evaluation shows that the proposal incurs less communication overhead than the existing one.
Prochnow, Bo; O'Reilly, Ossian; Dunham, Eric M.; ...
2017-03-16
In this paper, we develop a high-order finite difference scheme for axisymmetric wave propagation in a cylindrical conduit filled with a viscous fluid. The scheme is provably stable and overcomes the difficulty of the polar coordinate singularity in the radial component of the diffusion operator. The finite difference approximation satisfies the principle of summation-by-parts (SBP), which is used to establish stability using the energy method. To treat the coordinate singularity without losing the SBP property of the scheme, a staggered grid is introduced and quadrature rules with weights set to zero at the endpoints are considered. Finally, the accuracy of the scheme is studied for a model problem with periodic boundary conditions at the ends of the conduit, and its practical utility is demonstrated by modeling acoustic-gravity waves in a magmatic conduit.
Mishra, Dheerendra
2015-03-01
Smart-card-based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have high computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public key cryptography based schemes, and they are important as they present an efficient solution for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to denial of service attacks. To understand the security failures of these cryptographic schemes, which is key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian
2015-06-01
Telecare Medicine Information Systems (TMISs) provide an efficient communication platform supporting patients in accessing health-care delivery services via the internet or mobile networks. Authentication becomes essential when a remote patient logs into the telecare server. Recently, many extended chaotic map based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has some weaknesses, such as violation of session key security, vulnerability to user impersonation attack and lack of local verification. To overcome these flaws, we propose a chaotic maps and smart cards based password authentication scheme applying biometrics and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with previous authentication schemes, the proposed scheme is more secure and efficient, and hence more practical for telemedical environments.
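The key agreement in such chaotic-map schemes rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_{rs}(x). The sketch below illustrates the resulting Diffie-Hellman-style exchange numerically; practical schemes use extended chaotic maps over finite fields rather than floating-point arithmetic, so this is a didactic illustration only, with made-up key values.

```python
import numpy as np

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return np.cos(n * np.arccos(x))

# Semigroup property T_r(T_s(x)) = T_{rs}(x) gives a Diffie-Hellman analogue:
x = 0.53                                     # public seed
r, s = 7, 11                                 # private keys of the two parties
Xr, Xs = chebyshev(r, x), chebyshev(s, x)    # exchanged public values
k_left = chebyshev(r, Xs)                    # one party computes T_r(T_s(x))
k_right = chebyshev(s, Xr)                   # the other computes T_s(T_r(x))
assert np.isclose(k_left, k_right)           # both equal T_{rs}(x)
```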
Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.
Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung
2017-01-01
A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses of previous schemes.
Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps
Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han
2017-01-01
A smartcard-based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses of previous schemes. PMID:28759615
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis of textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders, to reduce the number of features extracted from large images. First, we randomly collect image patches from an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between the weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain, and feed these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments over eight emotional categories and performs better than conventional methods. Feature selection reduces the computational cost of global feature extraction by about 50% while improving classification performance.
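A minimal sketch of correlation-based selection over learned weight vectors, assuming one weight vector per hidden unit; the threshold and the greedy keep-or-drop rule are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def select_uncorrelated_weights(W, threshold=0.9):
    """Greedy correlation-based selection over autoencoder weight vectors.
    W has one row per hidden unit; a row is kept only if its absolute
    Pearson correlation with every already-kept row is below `threshold`."""
    corr = np.abs(np.corrcoef(W))
    kept = []
    for i in range(W.shape[0]):
        if all(corr[i, j] < threshold for j in kept):
            kept.append(i)
    return kept

W = np.random.randn(100, 64)                 # e.g. 100 hidden units, 8x8 patches
W[1] = W[0] + 0.01 * np.random.randn(64)     # a near-duplicate filter
print(len(select_uncorrelated_weights(W)))   # the duplicate is pruned
```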
Gizaw, Solomon; Goshme, Shenkute; Getachew, Tesfaye; Haile, Aynalem; Rischkowsky, Barbara; van Arendonk, Johan; Valle-Zárate, Anne; Dessie, Tadelle; Mwai, Ally Okeyo
2014-06-01
Pedigree recording and genetic selection in village flocks of smallholder farmers have been deemed infeasible by researchers and development workers. This is mainly due to the difficulty of sire identification under uncontrolled village breeding practices. A cooperative village sheep-breeding scheme was designed to achieve controlled breeding and implemented for Menz sheep of Ethiopia in 2009. In this paper, we evaluated the reliability of pedigree recording in village flocks by comparing genetic parameters estimated from data sets collected in the cooperative village and in a nucleus flock maintained under controlled breeding. Effectiveness of selection in the cooperative village was evaluated based on trends in breeding values over generations. Heritability estimates for 6-month weight recorded in the village and the nucleus flock were very similar. There was an increasing trend over generations in average estimated breeding values for 6-month weight in the village flocks. These results have a number of implications: the pedigree recorded in the village flocks was reliable; genetic parameters, which have so far been estimated based on nucleus data sets, can be estimated based on village recording; and appreciable genetic improvement could be achieved in village sheep selection programs under low-input smallholder farming systems.
PATL: A RFID Tag Localization based on Phased Array Antenna.
Qiu, Lanxin; Liang, Xiaoxuan; Huang, Zhangqin
2017-03-15
In RFID systems, how to detect a tag's position precisely is an important and challenging research topic. In this paper, we propose a range-free 2D tag localization method based on a phased array antenna, called PATL. This method takes advantage of the adjustable radiation angle of the phased array antenna to scan the surveillance region in turn. Using the statistics of the tag counts in different antenna beam directions, a weighting algorithm calculates the position of the tag. This method can be applied to real-time localization of multiple targets without any reference tags or additional readers. Additionally, we present an optimized weighting method based on RSSI to increase the locating accuracy. We use a Commercial Off-the-Shelf (COTS) UHF RFID reader integrated with a phased array antenna to evaluate our method. The experimental results from an indoor office environment demonstrate that the average distance error of PATL is about 21 cm, and the optimized approach achieves an accuracy of 13 cm. This novel 2D localization scheme is a simple, yet promising, solution that is especially applicable to smart-shelf visualized management in storage or retail areas.
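The RSSI-based weighting step can be illustrated with a generic weighted centroid, shown below with hypothetical beam positions and readings; PATL itself combines this with the beam-direction tag statistics described above.

```python
import numpy as np

def rssi_weighted_centroid(beam_positions, rssi_dbm):
    """Estimate a 2D tag position as the centroid of beam pointing
    positions, weighted by RSSI converted from dBm to linear power.
    Stronger readings pull the estimate toward their beam direction."""
    pos = np.asarray(beam_positions, dtype=float)   # shape (n, 2)
    w = 10 ** (np.asarray(rssi_dbm) / 10.0)         # dBm -> mW
    return (w[:, None] * pos).sum(axis=0) / w.sum()

# hypothetical scan: four beam directions intersecting the shelf plane
beams = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
rssi = [-58, -51, -54, -63]                         # strongest near x = 0.5
print(rssi_weighted_centroid(beams, rssi))
```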
PATL: A RFID Tag Localization based on Phased Array Antenna
Qiu, Lanxin; Liang, Xiaoxuan; Huang, Zhangqin
2017-01-01
In RFID systems, how to detect a tag's position precisely is an important and challenging research topic. In this paper, we propose a range-free 2D tag localization method based on a phased array antenna, called PATL. This method takes advantage of the adjustable radiation angle of the phased array antenna to scan the surveillance region in turn. Using the statistics of the tag counts in different antenna beam directions, a weighting algorithm calculates the position of the tag. This method can be applied to real-time localization of multiple targets without any reference tags or additional readers. Additionally, we present an optimized weighting method based on RSSI to increase the locating accuracy. We use a Commercial Off-the-Shelf (COTS) UHF RFID reader integrated with a phased array antenna to evaluate our method. The experimental results from an indoor office environment demonstrate that the average distance error of PATL is about 21 cm, and the optimized approach achieves an accuracy of 13 cm. This novel 2D localization scheme is a simple, yet promising, solution that is especially applicable to smart-shelf visualized management in storage or retail areas. PMID:28295014
Interference graph-based dynamic frequency reuse in optical attocell networks
NASA Astrophysics Data System (ADS)
Liu, Huanlin; Xia, Peijie; Chen, Yong; Wu, Lan
2017-11-01
An indoor optical attocell network may achieve higher capacity than radio frequency (RF) or infrared (IR)-based wireless systems. It has been proposed as a special type of visible light communication (VLC) system using light emitting diodes (LEDs). However, the system spectral efficiency may be severely degraded by inter-cell interference (ICI), particularly in dense deployment scenarios. To address these issues, we construct the spectral interference graph for an indoor optical attocell network, and propose the Dynamic Frequency Reuse (DFR) and Weighted Dynamic Frequency Reuse (W-DFR) algorithms to decrease ICI and improve spectral efficiency. The interference graph lets LEDs transmit data without interference and select the minimum number of sub-bands needed for frequency reuse. The DFR algorithm then reuses the system frequency equally across service-providing cells to mitigate spectral interference, while the W-DFR algorithm reuses the system frequency using a bandwidth weight (BW) defined from the number of served users. Numerical results show that both proposed schemes can effectively improve the average spectral efficiency (ASE) of the system. Improvement of the user data rate is also demonstrated by analyzing its cumulative distribution function (CDF).
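The interference-graph step can be illustrated with a greedy coloring that assigns each LED the smallest sub-band unused by its interfering neighbors; this is a sketch of the underlying idea with a hypothetical ceiling layout, not the paper's exact DFR/W-DFR algorithms.

```python
def assign_subbands(interference):
    """Greedy coloring of the interference graph: each LED gets the
    smallest sub-band index not used by any interfering neighbor, so
    adjacent attocells never reuse the same frequency sub-band."""
    subband = {}
    for led in sorted(interference, key=lambda v: -len(interference[v])):
        taken = {subband[nbr] for nbr in interference[led] if nbr in subband}
        subband[led] = next(b for b in range(len(interference)) if b not in taken)
    return subband

# hypothetical 4-LED ceiling layout; edges mark cells that interfere
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(assign_subbands(graph))   # e.g. {2: 0, 0: 1, 1: 2, 3: 1}
```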
Weighted SAW reflector gratings for orthogonal frequency coded SAW tags and sensors
NASA Technical Reports Server (NTRS)
Puccio, Derek (Inventor); Malocha, Donald (Inventor)
2011-01-01
Weighted surface acoustic wave reflector gratings for coding identification tags and sensors to enable unique sensor operation and identification in a multi-sensor environment. In one embodiment the weighted reflectors are variable, while in another the reflector gratings are apodized. The weighting technique allows the designer to decrease reflectivity and allows more chips to be implemented in a device and, consequently, more coding diversity. As a result, more tags and sensors can be implemented in a given bandwidth when compared with uniform reflectors. Use of weighted reflector gratings with OFC makes various phase-shifting schemes possible, such as in-phase and quadrature implementations of coded waveforms, resulting in reduced device size and increased coding diversity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohatt, D; Malhotra, H
Purpose: Conventional treatment plans for lung radiotherapy are created using either the free breathing (FB) scheme, which represents the tumor at an arbitrary breathing phase of the patient's respiratory cycle, or the average computed tomography (ACT) intensity projection over 10 binned phases. Neither method is entirely accurate because the time dependence of tumor movement is absent. In the present "Hybrid" method, the HU of the tumor in 3D space is determined by relative weighting of the HU of the tumor and lung in proportion to the time they spend at each location during the entire breathing cycle. Methods: A Quasar respiratory motion phantom was employed to simulate lung tumor movement. Utilizing 4DCT image scans, volumetric modulated arc therapy (VMAT) plans were generated for three treatment planning scenarios: the conventional FB and ACT schemes, along with the third, alternative Hybrid approach. Our internal target volume (ITV) hybrid structure was created using Boolean operations in the Eclipse (ver. 11) treatment planning system, where independent sub-regions created by the gross tumor volume (GTV) overlap from the 10 motion phases were each assigned a time-weighted CT value. The dose-volume histograms (DVH) for each scheme were compared and analyzed. Results: Using our hybrid technique, we have demonstrated a reduction of 1.9%-3.4% in total monitor units with respect to conventional treatment planning strategies, along with a six-fold improvement in high-dose spillage over the FB plan. The higher-density ACT and Hybrid schemes also produced a slight enhancement in target conformity and a reduction in low-dose spillage. Conclusion: All treatment plans created in this study exceeded RTOG protocol criteria. Our results indicate that the free breathing approach yields an inaccurate account of the target treatment density. A significant decrease in unnecessary lung irradiation can be achieved by implementing the Hybrid HU method, with the ACT method second best.
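The core of the Hybrid scheme is a residence-time-weighted HU value; a one-function sketch with hypothetical tumor and lung HU numbers follows.

```python
import numpy as np

def hybrid_hu(hu_values, dwell_fractions):
    """Time-weighted Hounsfield unit for a voxel in the ITV: the HU of
    tumor and lung are mixed in proportion to the fraction of the
    breathing cycle the tumor spends at that location."""
    t = np.asarray(dwell_fractions, dtype=float)
    return float(np.dot(hu_values, t / t.sum()))

# hypothetical voxel: tumor (HU ~ 40) present 30% of the cycle,
# lung (HU ~ -700) the remaining 70%
print(hybrid_hu([40.0, -700.0], [0.3, 0.7]))   # -478.0
```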
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the security requirements of networks, biometrics-based authentication schemes applied in multi-server environments are becoming more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme built on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling
2016-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption focuses on quickly finding the files a user is interested in within cloud storage. Designing a searchable attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703
Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling
2016-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption focuses on quickly finding the files a user is interested in within cloud storage. Designing a searchable attribute-based encryption scheme is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search functionality is achieved in our proposed scheme. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.
A provably-secure ECC-based authentication scheme for wireless sensor networks.
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-11-06
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
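The elliptic curve building block of such schemes is an ECDH exchange. The sketch below uses the pyca/cryptography library (assumed available) to show that primitive together with a key-derivation step; it is not the paper's full SUA-WSN protocol, and the role names and info string are illustrative.

```python
# ECDH key exchange with the pyca/cryptography library -- the elliptic
# curve primitive such SUA-WSN schemes build on (full protocol omitted).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

user_key = ec.generate_private_key(ec.SECP256R1())   # on the smart card
gw_key = ec.generate_private_key(ec.SECP256R1())     # at the gateway node

shared_u = user_key.exchange(ec.ECDH(), gw_key.public_key())
shared_g = gw_key.exchange(ec.ECDH(), user_key.public_key())
assert shared_u == shared_g                          # same shared secret

# Derive a session key from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"sua-wsn session").derive(shared_u)
```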
A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-01-01
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009
Optimal Sensor Allocation for Fault Detection and Isolation
NASA Technical Reports Server (NTRS)
Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann
2004-01-01
Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration) to measure the system parameters. The efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes fault diagnosibility, subject to specified weight, volume, power, and cost constraints, is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
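The multidimensional knapsack view mentioned in the keywords can be sketched with a simple greedy ratio heuristic; the gains, costs, and budgets below are hypothetical, and the paper's actual approach uses Lagrangian relaxation and approximate belief revision rather than this stand-in.

```python
import numpy as np

def greedy_sensor_allocation(diag_gain, costs, budgets):
    """Greedy heuristic for the multidimensional knapsack view of sensor
    allocation: pick sensors by diagnostic gain per unit aggregate cost
    while respecting every resource budget (weight, power, cost, ...)."""
    costs = np.asarray(costs, dtype=float)    # shape (n_sensors, n_resources)
    gain = np.asarray(diag_gain, dtype=float)
    remaining = np.asarray(budgets, dtype=float).copy()
    order = np.argsort(-gain / costs.sum(axis=1))
    chosen = []
    for i in order:
        if np.all(costs[i] <= remaining):
            chosen.append(i)
            remaining -= costs[i]
    return chosen

# hypothetical: 4 candidate sensors, budgets = (weight, power, cost)
gain = [0.9, 0.7, 0.5, 0.4]
costs = [[2, 3, 5], [1, 1, 2], [1, 2, 1], [3, 1, 1]]
print(greedy_sensor_allocation(gain, costs, budgets=[4, 4, 6]))  # [1, 2]
```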
Gyroaveraging operations using adaptive matrix operators
NASA Astrophysics Data System (ADS)
Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock
2018-05-01
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices make it possible to improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from a simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems
Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok
2018-01-01
Passive tracking aims to track targets that carry no assisting devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology, and geometric theorems are then applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To make the IT-based CP estimation available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze the properties of three GT-based tracking algorithms, and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of the experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme provides generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to other schemes. PMID:29562621
NASA Astrophysics Data System (ADS)
Vilar, François; Shu, Chi-Wang; Maire, Pierre-Henri
2016-05-01
One of the main issues in the field of numerical schemes is to ally robustness with accuracy. Considering gas dynamics, numerical approximations may generate negative density or pressure, which may lead to nonlinear instability and crash of the code. This phenomenon is even more critical using a Lagrangian formalism, the grid moving and being deformed during the calculation. Furthermore, most of the problems studied in this framework contain very intense rarefaction and shock waves. In this paper, the admissibility of numerical solutions obtained by high-order finite-volume-scheme-based methods, such as the discontinuous Galerkin (DG) method, the essentially non-oscillatory (ENO) and the weighted ENO (WENO) finite volume schemes, is addressed in the one-dimensional Lagrangian gas dynamics framework. After briefly recalling how to derive Lagrangian forms of the 1D gas dynamics system of equations, a discussion on positivity-preserving approximate Riemann solvers, ensuring first-order finite volume schemes to be positive, is then given. This study is conducted for both ideal gas and non-ideal gas equations of state (EOS), such as the Jones-Wilkins-Lee (JWL) EOS or the Mie-Grüneisen (MG) EOS, and relies on two different techniques: either a particular definition of the local approximation of the acoustic impedances arising from the approximate Riemann solver, or an additional time step constraint relative to the cell volume variation. Then, making use of the work presented in [89,90,22], this positivity study is extended to high-orders of accuracy, where new time step constraints are obtained, and proper limitation is required. Through this new procedure, scheme robustness is highly improved and hence new problems can be tackled. Numerical results are provided to demonstrate the effectiveness of these methods. This paper is the first part of a series of two. The whole analysis presented here is extended to the two-dimensional case in [85], and proves to fit a wide range of numerical schemes in the literature, such as those presented in [19,64,15,82,84].
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected.
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
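The regression core of an LS-SVR machine reduces to one linear system in the dual variables. A minimal sketch with an RBF kernel follows; the Bayesian inference levels, incremental weighting, and GMM layers described in the thesis are omitted, and the hyperparameter values are illustrative.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Least Squares SVM regression: solve the dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel.
    gamma is the regularization strength, sigma the kernel width."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # (alpha, b)

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

X = np.linspace(0, 6, 40)[:, None]
y = np.sin(X).ravel() + 0.05 * np.random.randn(40)
alpha, b = lssvm_fit(X, y)
print(np.abs(lssvm_predict(X, alpha, b, X) - np.sin(X).ravel()).max())
```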
NASA Astrophysics Data System (ADS)
Zamri, Nurnadiah; Abdullah, Lazim
2014-06-01
A flood control project is a complex issue which takes economic, social, environmental and technical attributes into account. Selection of the best flood control project requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are under uncertainty, it is relatively difficult for them to provide exact numerical values. The interval type-2 fuzzy set (IT2FS) is a strong tool which can deal with the uncertainty of subjective, incomplete, and vague information. Besides, it helps in situations where the information about criteria weights for alternatives is completely unknown. Therefore, this paper adopts the interval type-2 entropy concept in the weighting process of interval type-2 fuzzy TOPSIS. This entropy weight is believed to effectively balance the influence of uncertainty factors in evaluating attributes. Then, a modified ranking value is proposed in line with the interval type-2 entropy weight. Quantitative and qualitative factors normally linked with flood control projects are considered for ranking. Data in the form of interval type-2 linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The study covers the whole of Malaysia. The analysis shows that the diversion scheme yielded the highest closeness coefficient, at 0.4807. A ranking can be drawn using the magnitude of the closeness coefficient, which indicated that the diversion scheme ranked first among the five alternatives.
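The entropy-weighting idea can be sketched on a crisp decision matrix with classical Shannon entropy weights, as below; the paper's method applies the analogous computation to interval type-2 fuzzy data, which this sketch does not reproduce, and the scores shown are hypothetical.

```python
import numpy as np

def entropy_weights(decision_matrix):
    """Shannon entropy weighting for an (m alternatives x n criteria)
    crisp decision matrix: criteria whose scores vary more across the
    alternatives (lower entropy) receive higher weight."""
    D = np.asarray(decision_matrix, dtype=float)
    P = D / D.sum(axis=0)                       # column-normalize
    m = D.shape[0]
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy per criterion, in [0, 1]
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()

# hypothetical scores of 3 flood-control alternatives on 4 criteria
D = [[7, 5, 9, 4], [6, 5, 3, 4], [8, 5, 6, 4]]
print(entropy_weights(D))   # constant criteria (columns 2 and 4) get ~zero weight
```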
An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity
Kumari, Saru
2013-01-01
The authors review the biometrics-based user authentication scheme proposed by An in 2012 and show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows a user to freely change his password and also provides user anonymity with untraceability. PMID:24350272
An improved biometrics-based remote user authentication scheme with user anonymity.
Khan, Muhammad Khurram; Kumari, Saru
2013-01-01
The authors review the biometrics-based user authentication scheme proposed by An in 2012 and show that there exist loopholes in the scheme which are detrimental to its security. Therefore the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows a user to freely change his password and also provides user anonymity with untraceability.
Provably secure identity-based identification and signature schemes from code assumptions
Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to be secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, with increasingly deep research on coding theory, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.
Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav
2014-10-01
Advancement in network technology provides new ways to utilize telecare medicine information systems (TMIS) for patient care, although TMIS usually faces various attacks since its services are provided over public networks. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement using chaos theory, and it enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. Our aim is then to propose a new chaos map-based anonymous user authentication scheme for TMIS that overcomes the weaknesses of Jiang et al.'s scheme while retaining its original merits. We also show that our scheme is secure against various known attacks, including the attacks found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle
NASA Astrophysics Data System (ADS)
Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai
2018-03-01
Deck assembly is a critical quality control point in the final satellite assembly process, and cable extrusion and structural collision problems during assembly directly affect the development quality and schedule of the satellite. To address the problems in the deck assembly process, an assembly scheme for satellite decks based on the robot flexibility control principle is proposed in this paper. The scheme is introduced first; key technologies for end-force perception and flexible docking control in the scheme are then studied; the implementation process of the assembly scheme is described in detail; and finally an actual application case of the assembly scheme is given. Results show that, compared with the traditional assembly scheme, the proposed scheme has obvious advantages in work efficiency, reliability and universality.
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive which is very suitable for cloud data storage due to its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key component related to the attribute must be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attack in the generic bilinear group model, and also semantically secure against chosen keyword attack under the bilinear Diffie-Hellman (BDH) assumption.
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage
Wang, Shangping; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive which is very suitable for cloud data storage due to its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates: when a user's attribute needs to be updated, only that user's secret key component related to the attribute must be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attack in the generic bilinear group model, and also semantically secure against chosen keyword attack under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in such networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism for achieving E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach to message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550
Qureshi, Adnan I
2007-10-01
Imaging of head and neck vasculature continues to improve with the application of new technology. To judge the value of new technologies reported in the literature, it is imperative to develop objective standards optimized against bias and favoring statistical power and clinical relevance. A review of the existing literature identified the following items as lending scientific value to a report on imaging technology: prospective design, comparison with an accepted modality, unbiased patient selection, standardized image acquisition, blinded interpretation, and measurement of reliability. These were incorporated into a new grading scheme. Two physicians tested the new scheme and an established scheme to grade reports published in the medical literature. Inter-observer reliability for both methods was calculated using the kappa coefficient. A total of 22 reports evaluating imaging modalities for cervical internal carotid artery stenosis were identified from a literature search and graded by both schemes. Agreement between the two physicians in grading the level of scientific evidence using the new scheme was excellent (kappa coefficient: 0.93, p<0.0001). Agreement using the established scheme was less rigorous (kappa coefficient: 0.39, p<0.0001). The weighted kappa coefficients were 0.95 and 0.38 for the new and established schemes, respectively. Overall agreement was higher for the newer scheme (95% versus 64%). The new grading scheme can be used reliably to categorize the strength of scientific knowledge provided by individual studies of vascular imaging. The new method could assist clinicians and researchers in determining appropriate clinical applications of newly reported technical advances.
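For reference, the inter-observer agreement statistic used above is computed from a rater-versus-rater confusion matrix as follows; the matrix shown is hypothetical, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a rater-vs-rater confusion matrix: observed
    agreement corrected for the agreement expected by chance."""
    C = np.asarray(confusion, dtype=float)
    n = C.sum()
    p_obs = np.trace(C) / n
    p_exp = (C.sum(axis=1) @ C.sum(axis=0)) / n**2
    return (p_obs - p_exp) / (1.0 - p_exp)

# hypothetical 2-physician grading of 22 reports into 3 evidence levels
C = [[8, 0, 0], [1, 7, 0], [0, 0, 6]]
print(round(cohens_kappa(C), 2))   # ~0.93, near-perfect agreement
```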
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Unlike existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method in terms of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness with respect to histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only one that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
Rius-Vilarrasa, E; Bünger, L; Maltin, C; Matthews, K R; Roehe, R
2009-05-01
The Meat and Livestock Commission's (MLC) EUROP classification-based scheme and a Video Image Analysis (VIA) system were compared in their ability to predict the weights of primal carcass joints. A total of 443 commercial lamb carcasses, of mixed gender and under 12 months of age, were selected by their cold carcass weight (CCW), conformation and fat scores. Lamb carcasses were classified for conformation and fatness, scanned by the VIA system and dissected into the primal joints of leg, chump, loin, breast and shoulder. After adjustment for CCW, the estimation of primal joints using MLC EUROP scores showed high coefficients of determination (R(2)) in the range of 0.82-0.99. The use of VIA always resulted in equal or higher R(2). The precision, measured as root mean square error (RMSE), was 27% (leg), 13% (chump), 1% (loin), 11% (breast), 5% (shoulder) and 13% (total primals) better using VIA than MLC carcass information. Adjustment for slaughter day and gender effects indicated that estimations of primal joints using MLC EUROP scores were more sensitive to these factors than those using VIA. This was consistent with an increase in the stability of the prediction model of 28%, 11%, 2%, 12%, 6% and 14% for leg, chump, loin, breast, shoulder and total primals, respectively, using VIA compared to MLC EUROP scores. Consequently, VIA was capable of improving the prediction of primal meat yields compared to the current MLC EUROP carcass classification scheme used in UK abattoirs.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
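As a rough illustration of the kind of adaptive event-trigger condition described above (not the authors' exact condition), one might compare the gap between the current state and the last transmitted state against a state-dependent threshold plus a dead-zone; sigma and deadzone are hypothetical tuning parameters.

```python
import numpy as np

def event_triggered(x, x_last_sent, sigma=0.1, deadzone=1e-3):
    """Fire an event when the state gap exceeds an adaptive, state-dependent
    threshold plus a dead-zone; only at such instants are feedback signals
    transmitted and the NN weights tuned."""
    x = np.asarray(x, float)
    err = np.linalg.norm(x - np.asarray(x_last_sent, float))
    return err > sigma * np.linalg.norm(x) + deadzone
```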
NASA Astrophysics Data System (ADS)
Kim, Hojin; Li, Ruijiang; Lee, Rena; Xing, Lei
2015-03-01
Conventional VMAT optimizes aperture shapes and weights at uniformly sampled stations, which is a generalization of the concept of a control point. Recently, rotational station parameter optimized radiation therapy (SPORT) has been proposed to improve plan quality by inserting beams in regions that demand additional intensity modulation, thus forming a non-uniform beam sampling. This work presents a new rotational SPORT planning strategy based on reweighted total-variation (TV) minimization, using beam's-eye-view dosimetrics (BEVD) guided beam selection. The convex-programming-based reweighted TV minimization yields a simplified fluence map, which facilitates single-aperture selection at each station for single-arc delivery. For rotational arc treatment planning with non-uniform beam angle settings, the mathematical model is modified by an additional penalty term describing fluence-map similarity and by determination of appropriate angular weighting factors. The proposed algorithm with this additional penalty term achieves more efficient and deliverable plans than the conventional VMAT and SPORT planning schemes, reducing the dose delivery time by about 5 to 10 s in three clinical cases (one prostate and two head-and-neck (HN) cases with single and multiple targets). The BEVD-guided beam selection provides an effective and easily computed methodology for selecting angles for denser, non-uniform angular sampling in SPORT planning. Our BEVD-guided SPORT treatment schemes improve dose sparing to the femoral heads in the prostate case and to the brainstem, parotid glands and oral cavity in the two HN cases, where the mean dose reduction for those organs ranges from 0.5 to 2.5 Gy. They also increase the conformation number assessing dose conformity to the target from 0.84, 0.75 and 0.74 to 0.86, 0.79 and 0.80 in the prostate and two HN cases, while preserving delivery efficiency relative to conventional single-arc VMAT plans.
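The reweighted TV idea can be illustrated in one dimension: repeatedly solve a weighted TV-regularized least-squares problem, then refresh the weights so that small gradients are penalized more, driving the solution toward a piecewise-constant (simplified) profile. This is a minimal numpy sketch of the generic algorithm, not the authors' treatment-planning code; lam, eps and the loop counts are illustrative.

```python
import numpy as np

def reweighted_tv_denoise(y, lam=2.0, eps=1e-3, outer=4, inner=10):
    """Approximately minimize ||x - y||^2 + lam * sum_i w_i |x_{i+1} - x_i|
    by iteratively reweighted least squares (IRLS); the outer loop sets
    w_i = 1/(|grad_i| + eps), which sparsifies the gradient."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # forward-difference operator
    x = y.copy()
    w = np.ones(n - 1)
    for _ in range(outer):
        for _ in range(inner):
            g = np.abs(D @ x) + eps             # IRLS: |d| ~ d^2 / (|d_prev| + eps)
            x = np.linalg.solve(np.eye(n) + lam * D.T @ np.diag(w / g) @ D, y)
        w = 1.0 / (np.abs(D @ x) + eps)         # reweighting step
    return x

# Recover a piecewise-constant profile from noisy samples (toy data).
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
clean = reweighted_tv_denoise(noisy)
```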
Competitive region orientation code for palmprint verification and identification
NASA Astrophysics Data System (ADS)
Tang, Wenliang
2015-11-01
Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation features of the palmprint. In real operation, however, the orientations of the filters are usually not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method can precisely and robustly describe the orientation features of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases show that the proposed method achieves promising performance in comparison with conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.
Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie
2015-01-01
We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted by a generalized Gaussian density (GGD), and the similarity of two subbands is computed as the Jensen-Shannon divergence between their fitted GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
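Since the Jensen-Shannon divergence between two generalized Gaussian densities has no closed form, it is typically evaluated numerically. The sketch below uses scipy's gennorm distribution (shape beta, location, scale); the parameter values are illustrative, not fits from the paper.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.integrate import quad

def js_divergence_ggd(beta1, loc1, scale1, beta2, loc2, scale2):
    """Jensen-Shannon divergence between two generalized Gaussian
    densities, computed by numerical quadrature."""
    p = gennorm(beta1, loc=loc1, scale=scale1).pdf
    q = gennorm(beta2, loc=loc2, scale=scale2).pdf

    def kl_to_mixture(f):
        # KL(f || m) with m the equal-weight mixture of p and q
        def integrand(x):
            fx, mx = f(x), 0.5 * (p(x) + q(x))
            return fx * np.log(fx / mx) if fx > 0 else 0.0
        return quad(integrand, -np.inf, np.inf, limit=200)[0]

    return 0.5 * kl_to_mixture(p) + 0.5 * kl_to_mixture(q)

# Similarity of two subbands summarized by their fitted GGD parameters.
print(js_divergence_ggd(1.5, 0.0, 1.0, 1.5, 0.0, 1.0))  # ~0 for identical fits
print(js_divergence_ggd(0.8, 0.0, 0.5, 2.0, 0.0, 1.5))  # larger for distinct fits
```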
Computer-oriented synthesis of wide-band non-uniform negative resistance amplifiers
NASA Technical Reports Server (NTRS)
Branner, G. R.; Chan, S.-P.
1975-01-01
This paper presents a synthesis procedure which provides design values for broad-band amplifiers using non-uniform negative resistance devices. Employing a weighted least squares optimization scheme, the technique, based on an extension of procedures for uniform negative resistance devices, is capable of providing designs for a variety of matching network topologies. It also provides, for the first time, quantitative results for predicting the effects of parameter element variations on overall amplifier performance. The technique is also unique in that it employs exact partial derivatives for optimization and sensitivity computation. In comparison with conventional procedures, significantly improved broad-band designs are shown to result.
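The weighted least-squares core of such a synthesis procedure can be sketched in a few lines: each frequency sample gets a weight, and the normal equations of the weighted residual are solved for the design parameters. The flat-gain target and band-edge weights below are hypothetical, not taken from the paper.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x sum_i w_i * ((A x - b)_i)^2 via the normal equations."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Fit a quadratic gain model to a flat 10 dB target, weighting the band
# edges more heavily so the optimizer concentrates error reduction there.
freq = np.linspace(0.1, 1.0, 20)
target = np.full_like(freq, 10.0)
A = np.column_stack([np.ones_like(freq), freq, freq ** 2])
w = np.where((freq < 0.2) | (freq > 0.9), 5.0, 1.0)
coeffs = weighted_least_squares(A, target, w)
```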
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound quantifies the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
NASA Technical Reports Server (NTRS)
Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei;
2014-01-01
Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of the ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming an exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed, leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate that LWP and IWP are much closer to the bin model simulations when a modified shape factor, similar to that predicted by the bin simulations, is used in the bulk schemes. These results demonstrate the importance of the representation of the ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
Significant parent-of-origin effects in cucumber
USDA-ARS?s Scientific Manuscript database
Cucumber is a useful plant to study organellar effects because chloroplasts are maternally and mitochondria paternally transmitted. We produced doubled haploids (DH) from divergent cucumber populations, generated reciprocal crosses in a diallel mating scheme, measured weights of plants approximately...
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to map them quickly against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable to or better than other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
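A generic run-length-limited encoder for quality-score strings, given as a simplified stand-in for LW-FQZip's actual codec, might look like the following; max_run caps run lengths so each (character, count) pair fits a fixed-width field.

```python
def rle_encode(s, max_run=255):
    """Run-length encode a quality string as (char, run_length) pairs,
    capping runs at max_run (a 'run-length-limited' constraint)."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i] and j - i < max_run:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Invert rle_encode."""
    return "".join(ch * n for ch, n in pairs)

quals = "IIIIIHHHH#####IIII"
encoded = rle_encode(quals)
assert rle_decode(encoded) == quals
```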
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
NASA Technical Reports Server (NTRS)
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
Fast rerouting schemes for protected mobile IP over MPLS networks
NASA Astrophysics Data System (ADS)
Wen, Chih-Chao; Chang, Sheng-Yi; Chen, Huan; Chen, Kim-Joan
2005-10-01
Fast rerouting is a critical traffic engineering operation in MPLS networks. To implement the Mobile IP service over an MPLS network, one can leverage the fast rerouting operation to enhance availability and survivability. MPLS can protect the critical LSP tunnel between the Home Agent (HA) and Foreign Agent (FA) using the fast rerouting scheme. In this paper, we propose a simple but efficient algorithm to address the triangle routing problem for Mobile IP over MPLS networks. We consider this routing issue as a link weighting and capacity assignment (LW-CA) problem. The derived solution is used to plan the fast restoration mechanism to protect against link or node failure. We first model the LW-CA problem as a mixed integer optimization problem. Our goal is to minimize the call blocking probability on the most congested working trunk for the mobile IP connections. Many existing network topologies are used to evaluate the performance of our scheme. Results show that our proposed scheme obtains the best performance in terms of the smallest blocking probability compared to other schemes.
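The blocking probability being minimized on the most congested working trunk is classically modeled by the Erlang B formula, which can be evaluated with a stable recurrence. The sketch below is a textbook illustration, not the authors' optimization model; the trunk size and offered load are hypothetical.

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability via the stable recurrence
    B(0) = 1, B(k) = A * B(k-1) / (k + A * B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Blocking on a trunk with 50 circuits carrying 40 erlangs of offered load.
print(round(erlang_b(50, 40.0), 4))
```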
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.
Schäfer, Dirk; Grass, Michael; van de Haar, Peter
2011-07-01
Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared with the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings and by investigating the noise-resolution trade-off with a wire phantom in the full detector and off-center scenarios. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance, with the best resolution for the FDK based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For a wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.
Galerkin finite element scheme for magnetostrictive structures and composites
NASA Astrophysics Data System (ADS)
Kannan, Kidambi Srinivasan
The ever-increasing role of magnetostrictives in actuation and sensing applications is an indication of their importance in the emerging field of smart structures technology. As newer, and more complex, applications are developed, there is a growing need for a reliable computational tool that can effectively address the magneto-mechanical interactions and other nonlinearities in these materials and in structures incorporating them. This thesis presents a continuum level quasi-static, three-dimensional finite element computational scheme for modeling the nonlinear behavior of bulk magnetostrictive materials and particulate magnetostrictive composites. Models for magnetostriction must deal with two sources of nonlinearity: nonlinear body forces/moments in the equilibrium equations governing magneto-mechanical interactions in deformable and magnetized bodies, and nonlinear coupled magneto-mechanical constitutive models for the material of interest. In the present work, classical differential formulations for nonlinear magneto-mechanical interactions are recast in integral form using the weighted-residual method. A discretized finite element form is obtained by applying the Galerkin technique. The finite element formulation is based upon three dimensional eight-noded (isoparametric) brick element interpolation functions and magnetostatic infinite elements at the boundary. Two alternative possibilities are explored for establishing the nonlinear incremental constitutive model: characterization in terms of magnetic field or in terms of magnetization. The former methodology is the one most commonly used in the literature. In this work, a detailed comparative study of both methodologies is carried out. The computational scheme is validated, qualitatively and quantitatively, against experimental measurements published in the literature on structures incorporating the magnetostrictive material Terfenol-D. The influence of nonlinear body forces and body moments of magnetic origin, on the response of magnetostrictive structures to complex mechanical and magnetic loading conditions, is carefully examined. While monolithic magnetostrictive materials have been commercially available since the late eighties, attention in the smart structures research community has recently focused on building and using magnetostrictive particulate composite structures for conventional actuation applications and novel sensing methodologies in structural health monitoring. A particulate magnetostrictive composite element has been developed in the present work to model such structures. This composite element incorporates interactions between magnetostrictive particles by combining a numerical micromechanical analysis based on magneto-mechanical Green's functions, with a homogenization scheme based upon the Mori-Tanaka approach. This element has been applied to the simulation of particulate actuators and sensors reported in the literature. Simulation results are compared to experimental data for validation purposes. The computational schemes developed, for bulk materials and for composites, are expected to be of great value to researchers and designers of novel applications based on magnetostrictives.
An improved biometrics-based authentication scheme for telecare medical information systems.
Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping
2015-03-01
Telecare medical information system (TMIS) offers healthcare delivery services, and patients can acquire their desired medical services conveniently through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and show that it is insecure against the known session key attack and the impersonation attack. We therefore present a modified biometrics-based authentication scheme for TMIS to eliminate these faults. Besides, we demonstrate the completeness of the proposed scheme through BAN-logic. Compared to the related schemes, our protocol can provide stronger security and it is more practical.
Jo, Wan-Kuen; Sivakumar Natarajan, Thillai
2015-08-12
Novel redox-mediator-free direct Z-scheme CaIn2S4 marigold-flower-like/TiO2 (CIS/TNP) photocatalysts with different CaIn2S4 weight percentages were synthesized using a facile wet-impregnation method. Uniform hierarchical marigold-flower-like CaIn2S4 (CIS) microspheres were synthesized using a hydrothermal method. Field-emission scanning electron microscopy and transmission electron microscopy analyses suggested that the formation and aggregation of nanoparticles, followed by the growth of petals or sheets and their subsequent self-assembly, led to the formation of the uniform hierarchical marigold-flower-like CIS structures. The photocatalytic degradation efficiency of the direct Z-scheme CIS/TNP photocatalysts was evaluated through the degradation of the pharmaceutical compounds isoniazid (ISN) and metronidazole (MTZ). The direct Z-scheme CaIn2S4 marigold-flower-like/TiO2 (1%-CIS/TNP) photocatalyst showed enhanced performance in the ISN (71.9%) and MTZ (86.5%) photocatalytic degradations as compared to composites with different CaIn2S4 contents or the individual TiO2 and CaIn2S4. A possible enhancement mechanism based on the Z-scheme formed between the CIS and TNP for the improved photocatalytic efficiency was also proposed. The recombination rate of the photoinduced charge carriers was significantly suppressed for the direct Z-scheme CIS/TNP photocatalyst, which was confirmed by photoluminescence analysis. Radical-trapping studies revealed that photogenerated holes (h+), •OH, and O2•- are the primary active species, and suggested that the enhanced photocatalytic efficiency of the 1%-CIS/TNP follows the Z-scheme mechanism for transferring the charge carriers. Hydroxyl (•OH) radical determination via fluorescence techniques further confirmed this, revealing that higher concentrations of •OH radicals were formed over 1%-CIS/TNP than over bare CIS and TNP. The separation of the charge carriers was further confirmed using photocurrent and electron spin resonance measurements. Kinetic and chemical oxygen demand analyses were performed to confirm the ISN and MTZ degradation. The results demonstrated that the direct Z-scheme CIS/TNP photocatalyst shows superior decomposition efficiency for the degradation of these pharmaceuticals under the given reaction conditions.
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup, and the signal-to-noise ratio (SNR) was measured. A comparison is made of the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
NASA Astrophysics Data System (ADS)
Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei
2018-04-01
Due to the uncertainty of a stratospheric airship's shape and the safety problems this uncertainty causes, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. This subdivision scheme was then compared with the original √3-subdivision scheme. The results show that our scheme reduces the shrinkage of the surface and the number of narrow triangles while keeping sharp features. Surface reconstruction and surface deformation monitoring of the airship can therefore be conducted precisely with our subdivision scheme.
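Shepard interpolation, the ingredient added to the √3-subdivision scheme, is inverse-distance weighting over scattered samples. A minimal sketch follows; the power parameter and the triangle data are illustrative.

```python
import numpy as np

def shepard_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Shepard (inverse-distance-weighted) interpolation: the value at a
    query point is the distance-weighted average of scattered samples."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(points - np.asarray(query, float), axis=1)
    if d.min() < eps:                       # query coincides with a sample
        return values[d.argmin()]
    w = 1.0 / d ** power
    return (w @ values) / w.sum()

# Value a new vertex inserted by a subdivision step from its neighbors.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
vals = [0.1, 0.3, 0.2]
print(shepard_interpolate(pts, vals, (0.5, 0.33)))
```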
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
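The two recommended weighting schemes can be written down compactly. Below is a minimal sketch of (i) effective-sample-size weighting of Z-scores and (ii) inverse-variance weighting of effect sizes on the log-odds scale; the cohort numbers are illustrative.

```python
import numpy as np

def meta_z_sample_size(z, n_eff):
    """Scheme (i): effective-sample-size weighted Z-score meta-analysis,
    Z_meta = sum(w_i * Z_i) / sqrt(sum(w_i^2)) with w_i = sqrt(N_eff,i)."""
    w = np.sqrt(np.asarray(n_eff, float))
    return (w * z).sum() / np.sqrt((w ** 2).sum())

def meta_inverse_variance(beta, se):
    """Scheme (ii): inverse-variance weighting of allelic effects."""
    w = 1.0 / np.asarray(se, float) ** 2
    beta_meta = (w * beta).sum() / w.sum()
    se_meta = np.sqrt(1.0 / w.sum())
    return beta_meta, se_meta, beta_meta / se_meta

# Three cohorts; for case-control data a common choice is
# N_eff = 4 / (1/n_cases + 1/n_controls).
z = np.array([2.1, 1.4, 2.8])
n_eff = np.array([1200.0, 800.0, 2500.0])
print(meta_z_sample_size(z, n_eff))
print(meta_inverse_variance(np.array([0.12, 0.08, 0.15]),
                            np.array([0.05, 0.06, 0.05])))
```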
Recent Research on the Automated Mass Measuring System
NASA Astrophysics Data System (ADS)
Yao, Hong; Ren, Xiao-Ping; Wang, Jian; Zhong, Rui-Lin; Ding, Jing-An
The research and development of robotic mass measurement systems, together with representative automatic systems, are reviewed in this paper, and a sub-multiple calibration scheme adopted on a fully automatic CCR10 system is then discussed. The automatic robot system can perform the dissemination of the mass scale without any manual intervention, as well as fast calibration of weight samples against a reference weight. Finally, an evaluation of the expanded uncertainty is given.
Advances in Inertial Navigation Systems and Components
1981-04-01
directions: a. improvement of the classical fringe-shift readout through differential two-detector schemes and application of very low loss components... 29. Aronowitz, F., "Loss Lock-In in Ring Laser," J. Appl. Physics 41, 130 (1970). 30. Malota, F., "Ring Laser and Ring Interferometer," Laser and... TABLE I. COPPERHEAD RRS CHARACTERISTICS SUMMARY: Weight of jet: ~3.8 oz; Weight of total package: ~12.0 oz; Volume of total package: ~10.5 in³.
An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.
Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang
2018-05-22
With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives in areas such as e-payment and e-voting systems. However, in these two systems, state-of-the-art authentication protocols based on traditional number theory cannot defeat a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on a number theorem research unit (NTRU) lattice; this scheme uses a rejection sampling theorem instead of constructing a trapdoor. Meanwhile, the scheme does not depend on a complex public key infrastructure and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove our scheme is secure in the random oracle model and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms the other traditional identity-based blind signature schemes in signing speed and verification speed, and outperforms the other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.
Ni, Guiyan; Cavero, David; Fangmann, Anna; Erbe, Malena; Simianer, Henner
2017-01-16
With the availability of next-generation sequencing technologies, genomic prediction based on whole-genome sequencing (WGS) data is now feasible in animal breeding schemes and was expected to lead to higher predictive ability, since such data may contain all genomic variants including causal mutations. Our objective was to compare prediction ability with high-density (HD) array data and WGS data in a commercial brown layer line with genomic best linear unbiased prediction (GBLUP) models using various approaches to weight single nucleotide polymorphisms (SNPs). A total of 892 chickens from a commercial brown layer line were genotyped with 336 K segregating SNPs (array data) that included 157 K genic SNPs (i.e. SNPs in or around a gene). For these individuals, genome-wide sequence information was imputed based on data from re-sequencing runs of 25 individuals, leading to 5.2 million (M) imputed SNPs (WGS data), including 2.6 M genic SNPs. De-regressed proofs (DRP) for eggshell strength, feed intake and laying rate were used as quasi-phenotypic data in genomic prediction analyses. Four weighting factors for building a trait-specific genomic relationship matrix were investigated: identical weights, -(log10 P) from genome-wide association study results, squares of SNP effects from random regression BLUP, and variable selection based weights (known as BLUP|GA). Predictive ability was measured as the correlation between DRP and direct genomic breeding values in five replications of a fivefold cross-validation. Averaged over the three traits, the highest predictive ability (0.366 ± 0.075) was obtained when only genic SNPs from WGS data were used. Predictive abilities with genic SNPs and all SNPs from HD array data were 0.361 ± 0.072 and 0.353 ± 0.074, respectively. Prediction with -(log10 P) or squares of SNP effects as weighting factors for building a genomic relationship matrix or BLUP|GA did not increase accuracy, compared to that with identical weights, regardless of the SNP set used. Our results show that little or no benefit was gained when using all imputed WGS data to perform genomic prediction compared to using HD array data regardless of the weighting factors tested. However, using only genic SNPs from WGS data had a positive effect on prediction ability.
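Building a trait-specific genomic relationship matrix from SNP weights, as compared in this study, can be sketched as G = Z D Z' / k with D a diagonal weight matrix; identical weights recover a standard VanRaden-type G. The genotype matrix and gamma-distributed weights below are simulated placeholders, not the study's data.

```python
import numpy as np

def weighted_grm(genotypes, snp_weights):
    """Trait-specific genomic relationship matrix G = Z D Z' / k, where Z
    holds column-centered genotypes, D = diag(snp_weights) normalized to
    mean 1, and k = 2 * sum(p * (1 - p)) is the usual scaling."""
    X = np.asarray(genotypes, float)        # individuals x SNPs, coded 0/1/2
    p = X.mean(axis=0) / 2.0                # allele frequencies
    Z = X - 2.0 * p                         # column-centered genotypes
    d = np.asarray(snp_weights, float)
    d = d * len(d) / d.sum()                # normalize weights to mean 1
    k = 2.0 * (p * (1.0 - p)).sum()
    return (Z * d) @ Z.T / k

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(100, 5000))
G_identical = weighted_grm(X, np.ones(5000))
G_weighted = weighted_grm(X, rng.gamma(1.0, 1.0, 5000))  # e.g. GWAS-derived weights
```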
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way to minimize the Div(B) numerical error. The filter scheme consists of spatially sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high-order linear dissipation is used to remove spurious high-frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type base schemes. The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme with a sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives, together with a fourth-order Runge-Kutta method.
Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
2013-01-01
Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
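The floor-map weighting of particles described above ("cross-wall" elimination) reduces to a segment-intersection test: a particle whose proposed displacement crosses a wall gets zero weight before renormalization. A minimal 2D sketch follows; the wall and move lists are hypothetical.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2
    (non-degenerate 2D orientation test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def reweight_particles(moves, walls, weights):
    """Zero the weight of any particle whose proposed move crosses a wall,
    then renormalize: the floor-map constraint in a particle filter."""
    total = 0.0
    for i, (start, end) in enumerate(moves):
        if any(segments_intersect(start, end, a, b) for a, b in walls):
            weights[i] = 0.0
        total += weights[i]
    return [w / total for w in weights] if total > 0 else weights

walls = [((0.0, 0.0), (0.0, 5.0))]                       # one wall segment
moves = [((-1.0, 1.0), (1.0, 1.0)), ((1.0, 1.0), (2.0, 2.0))]
print(reweight_particles(moves, walls, [0.5, 0.5]))      # -> [0.0, 1.0]
```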
A unifying Bayesian account of contextual effects in value-based choice
Friston, Karl J.; Dolan, Raymond J.
2017-01-01
Empirical evidence suggests the incentive value of an option is affected by other options available during choice and by options presented in the past. These contextual effects are hard to reconcile with classical theories and have inspired accounts where contextual influences play a crucial role. However, each account only addresses one or the other of the empirical findings and a unifying perspective has been elusive. Here, we offer a unifying theory of context effects on incentive value attribution and choice based on normative Bayesian principles. This formulation assumes that incentive value corresponds to a precision-weighted prediction error, where predictions are based upon expectations about reward. We show that this scheme explains a wide range of contextual effects, such as those elicited by other options available during choice (or within-choice context effects). These include both conditions in which choice requires an integration of multiple attributes and conditions where a multi-attribute integration is not necessary. Moreover, the same scheme explains context effects elicited by options presented in the past or between-choice context effects. Our formulation encompasses a wide range of contextual influences (comprising both within- and between-choice effects) by calling on Bayesian principles, without invoking ad-hoc assumptions. This helps clarify the contextual nature of incentive value and choice behaviour and may offer insights into psychopathologies characterized by dysfunctional decision-making, such as addiction and pathological gambling. PMID:28981514
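On one minimal reading of this formulation, incentive value corresponds to a precision-weighted prediction error, which reduces to a one-liner; the numbers below simply illustrate that the same outcome is valued more under more precise expectations.

```python
def incentive_value(outcome, expectation, precision):
    """Incentive value as a precision-weighted prediction error: the
    reward prediction error scaled by the (inverse-variance) confidence
    in the prediction."""
    return precision * (outcome - expectation)

print(incentive_value(1.0, 0.5, precision=4.0))   # 2.0: precise expectation
print(incentive_value(1.0, 0.5, precision=0.5))   # 0.25: imprecise expectation
```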
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (linear or quadratic) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
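A minimal sketch of the two decision steps described above, under the assumption that heteroscedasticity is judged by a two-sided F-test on LLOQ/ULOQ replicate variances and the 1/x versus 1/x² choice by the spread of weighted, normalized variances. Function and variable names are ours, not the paper's script.

```python
import numpy as np
from scipy import stats

def needs_weighting(lloq_reps, uloq_reps, alpha=0.05):
    """Two-sided F-test on replicate variances at the LLOQ and ULOQ;
    significantly unequal variances (heteroscedasticity) call for a
    weighted calibration fit."""
    s_lo = np.var(lloq_reps, ddof=1)
    s_hi = np.var(uloq_reps, ddof=1)
    f = s_hi / s_lo
    df_hi, df_lo = len(uloq_reps) - 1, len(lloq_reps) - 1
    p = 2 * min(stats.f.cdf(f, df_hi, df_lo), stats.f.sf(f, df_hi, df_lo))
    return p < alpha

def pick_weight(levels, rep_variances):
    """Choose 1/x vs 1/x^2 by the smaller spread of weighted, normalized
    replicate variances across the calibration levels."""
    x = np.asarray(levels, float)
    v = np.asarray(rep_variances, float)
    spreads = {}
    for name, w in (("1/x", 1.0 / x), ("1/x^2", 1.0 / x ** 2)):
        wv = w * v
        wv = wv / wv.sum()                  # normalize
        spreads[name] = wv.max() - wv.min()
    return min(spreads, key=spreads.get)
```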
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish
2017-04-01
Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.
Effects of empty bins on image upscaling in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2017-07-01
This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme was developed to distinguish necessary from unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects for the developed method than for the other contrast enhancement methods mentioned.
An Interval Type-2 Neural Fuzzy System for Online System Identification and Feature Elimination.
Lin, Chin-Teng; Pal, Nikhil R; Wu, Shang-Lin; Liu, Yu-Ting; Lin, Yang-Yin
2015-07-01
We propose an integrated mechanism for discarding derogatory features and extraction of fuzzy rules based on an interval type-2 neural fuzzy system (NFS)-in fact, it is a more general scheme that can discard bad features, irrelevant antecedent clauses, and even irrelevant rules. High-dimensional input variable and a large number of rules not only enhance the computational complexity of NFSs but also reduce their interpretability. Therefore, a mechanism for simultaneous extraction of fuzzy rules and reducing the impact of (or eliminating) the inferior features is necessary. The proposed approach, namely an interval type-2 Neural Fuzzy System for online System Identification and Feature Elimination (IT2NFS-SIFE), uses type-2 fuzzy sets to model uncertainties associated with information and data in designing the knowledge base. The consequent part of the IT2NFS-SIFE is of Takagi-Sugeno-Kang type with interval weights. The IT2NFS-SIFE possesses a self-evolving property that can automatically generate fuzzy rules. The poor features can be discarded through the concept of a membership modulator. The antecedent and modulator weights are learned using a gradient descent algorithm. The consequent part weights are tuned via the rule-ordered Kalman filter algorithm to enhance learning effectiveness. Simulation results show that IT2NFS-SIFE not only simplifies the system architecture by eliminating derogatory/irrelevant antecedent clauses, rules, and features but also maintains excellent performance.
Somogyi, O; Meskó, A; Csorba, L; Szabó, P; Zelkó, R
2017-08-30
The division of tablets and adequate methods of splitting them are a complex problem in all sectors of health care. Although tablet-splitting is often required, this procedure can be difficult for patients. Four tablets were investigated with different external features (shape, score-line, film-coat and size). The influence of these features and of the splitting methods was investigated with respect to the precision and "weight loss" of the splitting techniques. All four types of tablets were halved by four methods: by hand, with a kitchen knife, with an original manufactured splitting device and with a modified tablet splitter based on a self-developed mechanical model. The mechanical parameters (hardness and friability) of the products were measured during the study. The "weight loss" and precision of the splitting methods were determined and compared by statistical analysis. On the basis of the results, the external features (geometry), the mechanical parameters of tablets and the mechanical structure of splitting devices can influence the "weight loss" and precision of tablet-splitting. Accordingly, a new decision-making scheme was developed for the selection of splitting methods. In addition, the skills of patients and the particulars of the therapy should be considered so that pharmaceutical counselling can be more effective regarding tablet-splitting. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Suleiman, R. M.; Chance, K.; Liu, X.; Kurosu, T. P.; Gonzalez Abad, G.
2014-12-01
We present and discuss a detailed description of the retrieval algorithms for the OMI BrO product. The BrO algorithm is based on direct fitting of radiances from 319.0-347.5 nm. Radiances are modeled from the solar irradiance, attenuated and adjusted by contributions from the target gas and interfering gases, rotational Raman scattering, undersampling, additive and multiplicative closure polynomials, and a common mode spectrum. The version of the algorithm used for BrO includes relevant changes with respect to the operational code, including the fit of the O2-O2 collisional complex, updates to the high-resolution solar reference spectrum, updates in spectroscopy, an updated air mass factor (AMF) calculation scheme, and the inclusion of scattering weights and vertical profiles in the level 2 products. Updates to the algorithm include accurate scattering weight and air mass factor calculations, scattering weights and profiles in the outputs, and available cross sections. We include retrieval parameter and window optimization to reduce the interference from O3, HCHO, O2-O2 and SO2, improve fitting accuracy and uncertainty, reduce striping, and improve long-term stability. We validate OMI BrO with ground-based measurements from Harestua and with chemical transport model simulations. We analyze the global distribution and seasonal variation of BrO and investigate BrO emissions from volcanoes and salt lakes.
Step to improve neural cryptography against flipping attacks.
Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold
2004-12-01
Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing a key exchange protocol over a public channel. However, the neural cryptography schemes presented so far are not fully secure under the regular flipping attack (RFA) and are completely insecure under the majority flipping attack (MFA). We propose a scheme that splits the mutual information and the training process to improve the security of neural cryptosystems against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of a brute force attack (BFA), and that the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial in L. Moreover, we analyze the security under an advanced flipping attack.
Design of an anti-Rician-fading modem for mobile satellite communication systems
NASA Technical Reports Server (NTRS)
Kojima, Toshiharu; Ishizu, Fumio; Miyake, Makoto; Murakami, Keishi; Fujino, Tadashi
1995-01-01
To design a demodulator applicable to mobile satellite communication systems using differential phase shift keying modulation, we have developed key technologies including an anti-Rician-fading demodulation scheme, an initial acquisition scheme, automatic gain control (AGC), automatic frequency control (AFC), and bit timing recovery (BTR). Using these technologies, we have developed a one-chip digital signal processor (DSP) modem for mobile terminals that is compact, lightweight, and low in power consumption. Results of performance tests show that the developed DSP modem achieves good performance in terms of bit error ratio in the mobile satellite communication environment, i.e., a Rician fading channel. It is also shown that the initial acquisition scheme acquires the received signal rapidly even if the carrier-to-noise power ratio (CNR) of the received signal is considerably low.
Gyroaveraging operations using adaptive matrix operators
Dominski, Julien; Ku, Seung -Hoe; Chang, Choong -Seock
2018-05-17
A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme in different geometries, from simple 2D slab geometry to realistic 3D toroidal equilibrium, has been studied. As a result, a successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
Error function attack of chaos synchronization based encryption schemes.
Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu
2004-03-01
Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.
An, Younghwa
2012-01-01
Recently, many biometrics-based user authentication schemes using smart cards have been proposed to improve the security weaknesses in user authentication system. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme, and we have shown that Das's authentication scheme is still insecure against the various attacks. Also, we proposed the enhanced scheme to remove these security problems of Das's authentication scheme, even if the secret information stored in the smart card is revealed to an attacker. As a result of security analysis, we can see that the enhanced scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack and provides mutual authentication between the user and the server.
Lee, Tian-Fu
2013-12-01
A smartcard-based authentication and key agreement scheme for telecare medicine information systems enables patients, doctors, nurses and health visitors to use smartcards for secure login to medical information systems. Authorized users can then efficiently access remote services provided by the medicine information systems through public networks. Guo and Chang recently improved the efficiency of a smartcard authentication and key agreement scheme by using chaotic maps. Later, Hao et al. reported that the scheme developed by Guo and Chang had two weaknesses: an inability to provide anonymity and inefficient double secrets. Therefore, Hao et al. proposed an authentication scheme for telecare medicine information systems that solved these weaknesses and improved performance. However, a limitation of both schemes is their violation of the contributory property of key agreement. This investigation discusses these weaknesses and proposes a new smartcard-based authentication and key agreement scheme that uses chaotic maps for telecare medicine information systems. Compared with conventional schemes, the proposed scheme has fewer weaknesses, provides better security, and is more efficient.
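The chaotic maps used in such schemes are typically Chebyshev polynomials, whose semigroup property T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)) enables a Diffie-Hellman-style exchange. A floating-point toy of the algebra only (deployed schemes use enhanced Chebyshev maps over finite fields; the parameter values here are arbitrary):

```python
import math

def T(n, x):
    # Chebyshev polynomial T_n(x) = cos(n * arccos(x)) for |x| <= 1.
    return math.cos(n * math.acos(x))

x = 0.53                 # public parameter
a, b = 171, 293          # private keys of the two parties
A, B = T(a, x), T(b, x)  # exchanged in the clear
shared_1 = T(a, B)       # computed by party 1
shared_2 = T(b, A)       # computed by party 2
print(abs(shared_1 - shared_2) < 1e-9)   # both derive the same session key
```

The contributory property mentioned in the abstract means exactly that both a and b influence the resulting key, so neither party can force a predetermined session key on the other.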
NASA Astrophysics Data System (ADS)
Li, Haifeng; Cui, Guixiang; Zhang, Zhaoshun
2018-04-01
A coupling scheme is proposed for the simulation of microscale flow and dispersion in which both the mesoscale field and small-scale turbulence are specified at the boundary of a microscale model. The small-scale turbulence is obtained separately in the inner and outer layers by transformation of pre-computed databases, and the two contributions are then combined in a weighted sum. Validation against a flow over a cluster of model buildings shows that the inner- to outer-layer transition height should be located in the roughness sublayer. Both the new scheme and the previous scheme are applied to the simulation of the flow over the central business district of Oklahoma City (with a point-source release during intensive observation period 3 of the Joint Urban 2003 experimental campaign), with results showing that the wind speed is well predicted in the canopy layer. Compared with the previous scheme, the new scheme improves the prediction of the wind direction and turbulent kinetic energy (TKE) in the canopy layer. The flow field influences the scalar plume in two ways: the mean flow field determines the advective flux, while the TKE field determines the turbulent flux. Thus, the mean, root-mean-square, and maximum concentrations agree better with the observations under the new scheme. These results indicate that the new scheme is an effective means of simulating complex flow and dispersion in urban canopies.
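A minimal sketch of the weighted-sum step, assuming a linear transition ramp (the paper only constrains the transition height to lie in the roughness sublayer; the ramp shape and names are illustrative):

```python
import numpy as np

def blend_fluctuations(u_inner, u_outer, z, z1, z2):
    # Weighted sum of pre-computed inner- and outer-layer velocity
    # fluctuations at heights z, with a linear transition over [z1, z2]:
    # w = 0 (pure inner layer) near the wall, w = 1 (pure outer layer) aloft.
    w = np.clip((z - z1) / (z2 - z1), 0.0, 1.0)
    return (1.0 - w) * u_inner + w * u_outer

# Example: blend two synthetic fluctuation profiles over a 10-40 m ramp.
z = np.linspace(0.0, 100.0, 11)
print(blend_fluctuations(np.full_like(z, 0.5), np.full_like(z, 2.0),
                         z, z1=10.0, z2=40.0))
```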
NASA Astrophysics Data System (ADS)
Lange, Heiner; Craig, George
2014-05-01
This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale data assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect-model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in the position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for the analysis weights, a larger localization radius and a larger observation error, which together broaden the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) initialized from the detailed analyses of the fine scheme are found to be superior to those of the coarse scheme during the first 1-2 hours with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed down to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrence, analyses from a coarser scheme might suffice.
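As an illustration of one ingredient of the coarse scheme, here is a sketch of superobservation averaging onto a coarse grid (the plain box-average binning is an assumption for illustration, not the paper's exact operator):

```python
import numpy as np

def superob(values, x, y, box):
    # Average raw radar observations falling in the same coarse cell of
    # size `box` into a single "superobservation", reducing the number
    # and density of observations presented to the LETKF.
    cells = {}
    for xi, yi, v in zip(x, y, values):
        cells.setdefault((int(xi // box), int(yi // box)), []).append(v)
    return {cell: float(np.mean(v)) for cell, v in cells.items()}

# Example: four raw reflectivity obs collapse to two superobs on a 2 km grid.
print(superob([30.0, 32.0, 10.0, 12.0],
              x=[0.5e3, 1.5e3, 5.0e3, 5.5e3],
              y=[0.2e3, 0.8e3, 0.3e3, 0.9e3], box=2e3))
```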
NASA Astrophysics Data System (ADS)
Qu, Yegao; Shi, Ruchao; Batra, Romesh C.
2018-02-01
We present a robust sharp-interface immersed boundary method for numerically studying high-speed flows of compressible, viscous fluids interacting with arbitrarily shaped stationary or moving rigid solids. The Navier-Stokes equations are discretized on a rectangular Cartesian grid using a low-diffusion flux splitting method for the inviscid fluxes and conservative high-order central-difference schemes for the viscous components. Discontinuities such as those introduced by shock waves and contact surfaces are captured by a high-resolution weighted essentially non-oscillatory (WENO) scheme. Ghost cells in the vicinity of the fluid-solid interface are introduced to satisfy the boundary conditions on the interface. Values of variables in the ghost cells are found by a constrained moving least squares (CMLS) method that eliminates the numerical instabilities encountered in the conventional MLS formulation. The solutions of the fluid flow and solid motion equations are advanced in time by the third-order Runge-Kutta and implicit Newmark integration schemes, respectively. The performance of the proposed method has been assessed by computing results for the following four problems and comparing them with those available in the literature: shock/boundary-layer interaction, supersonic viscous flow past a rigid cylinder, a moving piston in a shock tube, and the shock-induced lift-off of circular, rectangular and elliptic cylinders from a flat surface.
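For reference, a minimal sketch of the standard fifth-order Jiang-Shu WENO reconstruction at a cell face, of the kind such shock-capturing solvers rely on (illustrative only; the paper's solver couples this with flux splitting and the immersed boundary treatment):

```python
import numpy as np

def weno5_face(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # Three third-order candidate reconstructions at the right face of cell i.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators for each stencil.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights: near-optimal in smooth regions, and vanishing
    # on stencils that cross a discontinuity, which suppresses oscillations.
    alpha = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Smooth (linear) data reproduces the exact face value 3.5.
print(weno5_face(1.0, 2.0, 3.0, 4.0, 5.0))
```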
Separations and characterizations of fractions from Mayan, Heavy Arabian, and Hondo crude oils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kircher, C.C.
1989-04-01
The results from hydrotreating the atmospheric residua of Hondo, Heavy Arabian, and Mayan crude oils have been reported recently. Over the same fixed-bed catalyst, the hydrodesulfurization activities varied by a factor of two and the hydrodemetallation activities varied almost four-fold. Correlations among the relative activities and the elemental compositions of the feed oils showed a direct relationship between the hydrodemetallation activity and the metals content of the petroleum resin fractions, hereafter called polars. Thus, to discover chemical differences in feed oils and polars that may affect a catalyst's activity, the authors developed separation schemes to separate the oils into their component fractions and used various analytical techniques to characterize the fractions. The separation scheme developed is a modification and extension of the ASTM D2007 procedure. The sample is separated into saturates, aromatics, polars, and asphaltenes by precipitation/filtration and chromatography with Attapulgus clay and silica gel; the polars are then separated into various acids, bases, and neutral polars with macroporous ion-exchange resins. This separation scheme has been applied to the 650 °F+ cut from Hondo (offshore California) crude. The fractions were characterized by carbon and hydrogen elemental analysis, XRF spectrometry for nickel, vanadium, and sulfur, chemiluminescence spectrometry for nitrogen, GC simulated distillation (saturates only), vapor-pressure osmometry (number-average molecular weight) in toluene, flame emission spectrometry, and ¹³C-NMR spectroscopy.