Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out on a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, the algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, thereby improving system accuracy. To implement the new strapdown algorithm on a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in high dynamic environments, relative to an existing implementation on a DSP platform. PMID:22164058
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support vector (NSV) set; at the same time the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will move between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but does decrease the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages, and when to run an incremental step, greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
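The triggering idea can be sketched as follows. This is a simplified illustration rather than the authors' implementation: the class name, the use of scikit-learn's SVC, and the choice to keep only the current support vectors as the memory of past data are assumptions.

```python
# Minimal sketch (not the authors' exact algorithm): refit an SVM only when a newly
# arriving sample is wrongly predicted, using the current support vectors plus the
# triggering sample as the reduced training set.
import numpy as np
from sklearn.svm import SVC

class ErrorTriggeredIncrementalSVM:
    def __init__(self, **svc_kwargs):
        self.model = SVC(kernel="rbf", **svc_kwargs)
        self.X, self.y = None, None

    def fit_initial(self, X, y):
        self.model.fit(X, y)
        # Keep only the support vectors as the memory of past data.
        self.X = X[self.model.support_]
        self.y = y[self.model.support_]

    def update(self, x_new, y_new):
        x_new = np.asarray(x_new).reshape(1, -1)
        if self.model.predict(x_new)[0] == y_new:
            return False                      # correctly predicted: skip the update
        self.X = np.vstack([self.X, x_new])   # wrongly predicted: treat as candidate SV
        self.y = np.append(self.y, y_new)
        self.model.fit(self.X, self.y)        # refit on old SVs + triggering sample
        self.X = self.X[self.model.support_]
        self.y = self.y[self.model.support_]
        return True
```

In this sketch, correctly predicted samples are discarded, so only samples that carry new information about the decision boundary ever trigger a refit.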
An incremental approach to genetic-algorithms-based classification.
Guan, Sheng-Uei; Zhu, Fangming
2005-04-01
Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.
Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrighi, Bill
2016-03-03
libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
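A minimal sketch of one incremental SVD step of the kind such a library performs when a new sampled state vector arrives (a standard rank-one update of a truncated basis); libROM's two parallel algorithms and its adaptive sampling mechanism are not reproduced here.

```python
# Minimal sketch of an incremental (truncated) SVD update for reduced order model
# basis generation; the parallel algorithms in libROM are more involved.
import numpy as np

def incremental_svd(U, s, x, tol=1e-10):
    """Fold a new state vector x (n,) into the current left singular vectors
    U (n x k) and singular values s (k,)."""
    p = U.T @ x                       # coefficients of x in the current basis
    r = x - U @ p                     # part of x not captured by the basis
    rho = np.linalg.norm(r)
    k = len(s)
    # Small (k+1) x (k+1) core matrix whose SVD rotates and rescales the basis.
    K = np.zeros((k + 1, k + 1))
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = rho
    Uk, sk, _ = np.linalg.svd(K)
    if rho > tol:                     # new direction: basis grows by one vector
        U_ext = np.column_stack([U, r / rho])
        return U_ext @ Uk, sk
    else:                             # x already lies in the span: basis size unchanged
        return U @ Uk[:k, :k], sk[:k]
```

In practice the returned factors are truncated to a fixed rank or an energy threshold so the basis stays small as more snapshots are sampled.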
Phase retrieval via incremental truncated amplitude flow algorithm
NASA Astrophysics Data System (ADS)
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering an unknown signal from quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the gradient stage by applying the incremental method of ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verify the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length), and it usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
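A hedged sketch of the kind of incremental amplitude-flow gradient update used in such a loop stage (real-valued case). The truncation rules and the orthogonality-promoting initialization that characterize ITAF are omitted; the step size, normalization, and epoch count are illustrative assumptions.

```python
# Hedged sketch of an incremental (one-measurement-at-a-time) amplitude-flow update.
import numpy as np

def incremental_amplitude_flow(A, y, z0, mu=0.2, epochs=20):
    """A: m x n sensing matrix, y: amplitude measurements |A x|, z0: initial guess."""
    z = z0.astype(float).copy()
    m = A.shape[0]
    for _ in range(epochs):
        for i in np.random.permutation(m):       # process one measurement at a time
            ai = A[i]
            inner = ai @ z
            grad = (inner - y[i] * np.sign(inner)) * ai
            z -= mu * grad / (ai @ ai)            # normalized incremental step
    return z
```

Since the magnitude measurements determine the signal only up to a global sign, success is usually measured as min(||z - x||, ||z + x||).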
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) engine power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so the power performance models built from the incremental LS-SVM can be updated whenever new training data arrive. With these updates, model accuracy can be increased continuously. The predicted results from the models estimated with the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm significantly shortens the model construction time when new training data arrive.
An Incremental Weighted Least Squares Approach to Surface Lights Fields
NASA Astrophysics Data System (ADS)
Coombe, Greg; Lastra, Anselmo
An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.
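A minimal sketch of one WLS node under stated assumptions: a quadratic polynomial basis in a 2D parameterization of the viewing direction, a Gaussian weight around the node center, and normal equations accumulated incrementally as image samples arrive. The basis, weighting, and class structure are illustrative, not the paper's exact formulation (and the GPU reconstruction is not shown).

```python
# Minimal sketch of one Weighted Least Squares (WLS) node: a low-degree polynomial of
# the viewing direction fitted to radiance samples, accumulated incrementally.
import numpy as np

def basis(d):
    """Quadratic polynomial basis in the 2D projected view direction d = (u, v)."""
    u, v = d
    return np.array([1.0, u, v, u * u, u * v, v * v])

class WLSNode:
    def __init__(self, center, bandwidth=0.5, dim=6):
        self.center = np.asarray(center, dtype=float)  # node center in direction space
        self.h = bandwidth
        self.AtA = np.zeros((dim, dim))                # accumulated normal equations
        self.Atb = np.zeros(dim)

    def add_sample(self, d, radiance):
        w = np.exp(-np.sum((np.asarray(d) - self.center) ** 2) / self.h ** 2)
        phi = basis(d)
        self.AtA += w * np.outer(phi, phi)             # incremental accumulation
        self.Atb += w * phi * radiance

    def evaluate(self, d):
        coeffs = np.linalg.solve(self.AtA + 1e-8 * np.eye(len(self.Atb)), self.Atb)
        return basis(d) @ coeffs
```

Because only the accumulated normal equations are stored, each new image sample updates the node in constant time, which is what makes the human-in-the-loop preview practical.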
Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.
Zhao, Dongfang; Yang, Li
2009-01-01
Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high-dimensional space. These methods often share a first step that defines the neighbors of every data point by building a connected neighborhood graph, so that all data points can be embedded into a single coordinate system. In many applications these methods are required to work incrementally for dimensionality reduction. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding with these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low-dimensional configurations of high-dimensional data under various data distributions.
A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.
Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi
2015-12-01
Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method has the problem that the discrimination ratio decreases for new users. There are two causes: the generalization of the NN is poor, and the clusters produced by the k-means algorithm do not have a high within-class correlation coefficient. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from those learned on the existing data. The effectiveness of the proposed methods is verified by estimation results on EEG data recorded while subjects receive massage.
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in the experiments. The experimental and simulation results indicate that the proposed I-ARX model achieves high accuracy and is well suited to parameter identification without using the open circuit voltage.
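For context, a minimal sketch of the standard recursive least squares recursion that underlies such online parameter identification; the bias-correction step that distinguishes CRLS, and the construction of the ARX regressor from incremental voltage/current data, are omitted here.

```python
# Minimal sketch of recursive least squares (RLS) for identifying ARX-type model
# parameters online; the paper's bias-correction (CRLS) refinement is not included.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = p0 * np.eye(n_params)        # inverse correlation matrix
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector (e.g. past voltage/current samples), y: measured output."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)    # gain vector
        err = y - phi @ self.theta            # prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

Each call costs the same regardless of how much data has been seen, which is what allows the parameters to track temperature and SoC changes in real time.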
Incremental social learning in particle swarms.
de Oca, Marco A Montes; Stutzle, Thomas; Van den Enden, Ken; Dorigo, Marco
2011-04-01
Incremental social learning (ISL) was proposed as a way to improve the scalability of systems composed of multiple learning agents. In this paper, we show that ISL can be very useful to improve the performance of population-based optimization algorithms. Our study focuses on two particle swarm optimization (PSO) algorithms: a) the incremental particle swarm optimizer (IPSO), which is a PSO algorithm with a growing population size in which the initial position of new particles is biased toward the best-so-far solution, and b) the incremental particle swarm optimizer with local search (IPSOLS), in which solutions are further improved through a local search procedure. We first derive analytically the probability density function induced by the proposed initialization rule applied to new particles. Then, we compare the performance of IPSO and IPSOLS on a set of benchmark functions with that of other PSO algorithms (with and without local search) and a random restart local search algorithm. Finally, we measure the benefits of using incremental social learning on PSO algorithms by running IPSO and IPSOLS on problems with different fitness distance correlations.
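A minimal sketch of the new-particle initialization rule described above (the position of a particle added to the growing swarm is biased toward the best-so-far solution); the specific per-dimension attraction factor used here is an assumption for illustration, not the paper's exact rule.

```python
# Hedged sketch: initialize a new particle at a random position pulled toward the
# best-so-far solution, as in incremental social learning for PSO.
import numpy as np

def init_new_particle(lower, upper, best_so_far, rng=np.random.default_rng()):
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    x_rand = rng.uniform(lower, upper)              # random position in the search space
    pull = rng.uniform(0.0, 1.0, size=len(lower))   # per-dimension attraction strength
    return x_rand + pull * (best_so_far - x_rand)   # biased toward the best-so-far solution
```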
Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving
Du, Mingbo; Mei, Tao; Liang, Huawei; Chen, Jiajia; Huang, Rulin; Zhao, Pan
2016-01-01
This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve the robotic motion planning problems. However, RRT is often unreliable in a number of practical applications such as autonomous vehicles used for on-road driving because of the unnatural trajectory, useless sampling, and slow exploration. To address these problems, we present an interesting RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on road and a continuous-curvature smooth method based on B-spline. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of the experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative test and statistical analyses illustrate that its excellent performance is superior to other previous algorithms. PMID:26784203
NASA Astrophysics Data System (ADS)
Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.
1998-11-01
The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.
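A hedged sketch of the resampling idea: integrate a characterized scan-velocity profile to recover OPD as a function of time, then interpolate the clock-sampled interferogram onto a uniform OPD grid. The profile representation, grid spacing, and function names are illustrative, and a monotonically increasing OPD (positive scan velocity) is assumed.

```python
# Hedged sketch: resample an evenly-time-sampled interferogram onto even increments
# of optical path difference (OPD) using a known scan-velocity profile.
import numpy as np

def resample_to_uniform_opd(signal, t, velocity_profile, d_opd):
    """signal: detector samples taken at times t; velocity_profile(t): OPD rate at t."""
    v = velocity_profile(t)
    # Integrate velocity over time to get OPD at each clock sample (left Riemann sum).
    opd = np.concatenate([[0.0], np.cumsum(v[:-1] * np.diff(t))])
    opd_uniform = np.arange(opd[0], opd[-1], d_opd)   # even increments of OPD
    return opd_uniform, np.interp(opd_uniform, opd, signal)
```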
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based visual servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
Loukriz, Abdelhamid; Haddadi, Mourad; Messalti, Sabir
2016-05-01
Improving the efficiency of photovoltaic systems through new maximum power point tracking (MPPT) algorithms is a promising solution due to its low cost and easy implementation without equipment updates. Many MPPT methods with a fixed step size have been developed; however, when atmospheric conditions change rapidly, the performance of conventional algorithms is reduced. In this paper, a new variable step size Incremental Conductance (IC) MPPT algorithm is proposed. Modeling and simulation of the conventional IC method and the proposed method under different operating conditions are presented. The proposed method was developed and tested successfully on a photovoltaic system based on a flyback converter and a control circuit using a dsPIC30F4011. Both simulation and experimental designs are provided in several aspects. A comparative study between the proposed variable step size and the fixed step size IC MPPT method under similar operating conditions is presented. The obtained results demonstrate the efficiency of the proposed MPPT algorithm in terms of speed of MPP tracking and accuracy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
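A hedged sketch of a variable-step-size incremental conductance update of the kind described: the voltage-reference step is scaled by |dP/dV|, a common variable-step heuristic. The scaling constant and the handling of the dV = 0 case are assumptions; the flyback converter and dsPIC hardware are not modeled.

```python
# Hedged sketch of one variable-step-size incremental conductance (IC) MPPT update.
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, n=0.05):
    dv, di = v - v_prev, i - i_prev
    dp = v * i - v_prev * i_prev
    step = n * abs(dp / dv) if dv != 0 else n * abs(di)   # variable step size
    if dv == 0:
        if di > 0:
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        g_inc = di / dv                 # incremental conductance dI/dV
        if g_inc > -i / v:              # left of the MPP: increase voltage
            v_ref += step
        elif g_inc < -i / v:            # right of the MPP: decrease voltage
            v_ref -= step
    return v_ref                        # new operating-voltage reference
```

At the maximum power point dI/dV equals -I/V, so the comparison above decides on which side of the MPP the operating point lies, while the |dP/dV|-scaled step shrinks near the MPP and grows during rapid irradiance changes.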
Three-dimensional unstructured grid generation via incremental insertion and local optimization
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Wiltberger, N. Lyn; Gandhi, Amar S.
1992-01-01
Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.
NASA Astrophysics Data System (ADS)
Guo, Zhan; Yan, Xuefeng
2018-04-01
Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. Meanwhile, this trend is accompanied by increases in model complexity and the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms using iterative evolution show better convergence speed and optimum-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with high likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
Constructing Temporally Extended Actions through Incremental Community Detection
Li, Ge
2018-01-01
Hierarchical reinforcement learning works on temporally extended actions or skills to facilitate learning. How to automatically form such abstraction is challenging, and many efforts tackle this issue in the options framework. While various approaches exist to construct options from different perspectives, few of them concentrate on options' adaptability during learning. This paper presents an algorithm to create options and enhance their quality online. Both aspects operate on detected communities of the learning environment's state transition graph. We first construct options from initial samples as the basis of online learning. Then a rule-based community revision algorithm is proposed to update graph partitions, based on which existing options can be continuously tuned. Experimental results in two problems indicate that options from initial samples may perform poorly in more complex environments, and our presented strategy can effectively improve options and get better results compared with flat reinforcement learning. PMID:29849543
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the scale and time shifts of time series data, in this paper we introduce two incremental fuzzy clustering algorithms based on the Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks that are processed sequentially. Besides, our algorithms use DTW to measure the distance between pairs of time series, which encourages higher clustering accuracy because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to several prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking
Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua
2014-01-01
To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are also retrained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252
NASA Astrophysics Data System (ADS)
Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung
2018-05-01
A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to develop a three-dimensional attitude determination algorithm based on relative velocity increment measurements. For this, velocity increment vectors, computed respectively from the INS and GPS at different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states and simplify the propagation processes. Furthermore, assuming a small angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be solved incrementally by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further provide a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods, including the batch LRR, and significantly outperforms state-of-the-art online methods.
Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means
NASA Astrophysics Data System (ADS)
Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.
2018-04-01
This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap, a promising technique for analysing this type of data, namely Genetic Algorithms (GA), became the main subject of this research. For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that can be calculated easily and quickly on binary numbers. The use of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared with existing clustering algorithms and with IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car microprocessor uses the KL25ZV128VLK4 in the Freescale series of single-chip microcomputers. The image sampling sensor uses the CMOS digital camera OV7725. The obtained track data is processed by the corresponding algorithm to obtain track sideline information. At the same time, the pulse width modulation control (PWM) is used to control the motor and servo movements, and based on the digital incremental PID algorithm, the motor speed control and servo steering control are realized. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, motor drive and servo control module, and then complete the design of the intelligent car control system.
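A minimal sketch of the digital incremental (velocity-form) PID law mentioned above, as typically used for motor speed control on such microcontrollers; the gains and the mapping to the PWM duty cycle are illustrative assumptions, not values from this design.

```python
# Minimal sketch of the incremental (velocity-form) PID law: only the change in the
# control output is computed, so no integral accumulator or windup handling is needed.
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # error at step k-1
        self.e2 = 0.0   # error at step k-2

    def delta_u(self, error):
        """Return the increment to add to the current control output (e.g. PWM duty)."""
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

# Illustrative usage: duty += pid.delta_u(target_speed - measured_speed)
```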
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of electric power system with wind power, which takes the operation cost increment of unit and curtailed wind power of power grid as double modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for solution. The proposed rolling scheduling model has further improved the operation cost of system in the intra-day generation process, enhanced the system’s accommodation capacity of wind power, and modified the key transmission section power flow in a rolling manner to satisfy the security constraint of power grid. The improved NNIA algorithm has defined an antibody preference relation model based on equal incremental rate, regulation deviation constraints and maximum & minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, and significantly speed up the process of algorithm convergence to final solution, and enhance the local search capability.
PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Gorti, Sarma B; Peter, William H
A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS™. Since powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms compute the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a smaller variation than the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices of the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time, so it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. For a graph of 16 million vertices, we observe relative throughputs reaching a million times those of the non-incremental algorithms.
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm targeted specifically at this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite-dimensional random feature mapping that approximates a kernel function. As a result, the computational cost of each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensure convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior to that of the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
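A hedged sketch of the core idea: approximate the kernel with random Fourier features, then keep running sufficient statistics so each update costs the same regardless of how much data has been seen. The rank-one Cholesky update used by the actual algorithm is replaced here by a direct solve for clarity, and the feature count, lengthscale, and noise level are illustrative.

```python
# Hedged sketch of sparse-spectrum (random Fourier feature) regression with
# constant-cost incremental updates.
import numpy as np

class IncrementalSparseSpectrumRegressor:
    def __init__(self, dim, n_features=100, lengthscale=1.0, noise=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.A = noise ** 2 * np.eye(n_features)   # running regularized Gram matrix
        self.r = np.zeros(n_features)              # running feature/target correlation
        self.scale = np.sqrt(2.0 / n_features)

    def features(self, x):
        return self.scale * np.cos(self.W @ x + self.b)

    def update(self, x, y):
        phi = self.features(x)
        self.A += np.outer(phi, phi)               # constant-cost incremental update
        self.r += phi * y

    def predict(self, x):
        w = np.linalg.solve(self.A, self.r)
        return self.features(x) @ w
```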
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm used in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to the minimum rule set search. Grammar synthesis by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than with the minimum set search, and the system may find no appropriate grammar for some CFLs with the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.
Fast Fourier Transform Spectral Analysis Program
NASA Technical Reports Server (NTRS)
Daniel, J. A., Jr.; Graves, M. L.; Hovey, N. M.
1969-01-01
Fast Fourier Transform Spectral Analysis Program is used in frequency spectrum analysis of postflight, space vehicle telemetered trajectory data. This computer program with a digital algorithm can calculate power spectrum rms amplitudes and cross spectrum of sampled parameters at even time increments.
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter
Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang
2017-01-01
In airborne MEMS SINS transfer alignment, the error of MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large error and bad convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the “Velocity and Attitude” matching method. Then the detailed algorithm progress of AIKF and its recurrence formulas are presented. The performance and calculation amount of AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and the rapidity of the AIKF algorithm by comparing it with KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the bias of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirement of transfer alignment. PMID:28098829
Spectral Unmixing Based Construction of Lunar Mineral Abundance Maps
NASA Astrophysics Data System (ADS)
Bernhardt, V.; Grumpe, A.; Wöhler, C.
2017-07-01
In this study we apply a nonlinear spectral unmixing algorithm to a nearly global lunar spectral reflectance mosaic derived from hyper-spectral image data acquired by the Moon Mineralogy Mapper (M3) instrument. Corrections for topographic effects and for thermal emission were performed. A set of 19 laboratory-based reflectance spectra of lunar samples published by the Lunar Soil Characterization Consortium (LSCC) were used as a catalog of potential endmember spectra. For a given spectrum, the multi-population population-based incremental learning (MPBIL) algorithm was used to determine the subset of endmembers actually contained in it. However, as the MPBIL algorithm is computationally expensive, it cannot be applied to all pixels of the reflectance mosaic. Hence, the reflectance mosaic was clustered into a set of 64 prototype spectra, and the MPBIL algorithm was applied to each prototype spectrum. Each pixel of the mosaic was assigned to the most similar prototype, and the set of endmembers previously determined for that prototype was used for pixel-wise nonlinear spectral unmixing using the Hapke model, implemented as linear unmixing of the single-scattering albedo spectrum. This procedure yields maps of the fractional abundances of the 19 endmembers. Based on the known modal abundances of a variety of mineral species in the LSCC samples, a conversion from endmember abundances to mineral abundances was performed. We present maps of the fractional abundances of plagioclase, pyroxene and olivine and compare our results with previously published lunar mineral abundance maps.
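A hedged sketch of the pixel-wise linear unmixing step after the Hapke-model conversion to single-scattering albedo; the endmember matrix, the simple sum-to-one normalization, and the function names are illustrative, and the reflectance-to-albedo conversion, topographic/thermal corrections, and MPBIL endmember selection are not shown.

```python
# Hedged sketch: once reflectance is converted to single-scattering albedo, the mixed
# spectrum is modeled as a nonnegative combination of endmember albedo spectra.
import numpy as np
from scipy.optimize import nnls

def unmix_albedo(E, w):
    """E: (n_bands x n_endmembers) endmember single-scattering albedo spectra,
    w: (n_bands,) single-scattering albedo spectrum of one pixel.
    Returns approximate fractional abundances."""
    f, _ = nnls(E, w)                 # nonnegative least squares abundances
    s = f.sum()
    return f / s if s > 0 else f      # crude sum-to-one normalization
```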
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei
2012-12-01
Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-Euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-Euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-Euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-Euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-Euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
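A minimal sketch of the log-Euclidean mapping such appearance models rely on: a symmetric positive definite covariance matrix of image features is mapped to a Euclidean vector via the matrix logarithm, after which ordinary (incremental) subspace learning can be applied. The half-vectorization convention is a common choice and an assumption here; the block-division model and the incremental subspace update themselves are not shown.

```python
# Minimal sketch of the log-Euclidean embedding of a covariance feature matrix.
import numpy as np
from scipy.linalg import logm

def log_euclidean_vector(cov):
    """Map an SPD covariance matrix to the vectorized upper triangle of its matrix log."""
    L = logm(cov)                                           # matrix logarithm (symmetric)
    iu = np.triu_indices(L.shape[0])
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))     # preserve the Frobenius norm
    return np.real(L[iu]) * scale
```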
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. The reasons why researchers have studied it differ: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction carries an inherent computational cost. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping the computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning; this constant-time learning considerably affects the learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on one nonterminal in each step, and since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.
Incremental learning of concept drift in nonstationary environments.
Elwell, Ryan; Polikar, Robi
2011-10-01
We introduce an ensemble-of-classifiers approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from environments that experience constant or variable rates of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, like other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives and combines these classifiers using dynamically weighted majority voting. The novelty of the approach lies in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This allows the algorithm to recognize, and react accordingly to, changes in the underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as on a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release the data used in this paper. © 2011 IEEE
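A simplified, hedged sketch of this batch-incremental ensemble idea follows: one new classifier per batch, combined by a dynamically weighted majority vote whose weights come from time-discounted errors. The exact sigmoid-based weighting and normalisation of the published Learn++.NSE algorithm are not reproduced here; class, parameter and variable names are illustrative assumptions.

```python
# Sketch of a Learn++.NSE-style ensemble: one classifier per data batch,
# combined by log-odds weights computed from time-discounted batch errors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class SimpleNSE:
    def __init__(self, decay=0.5):
        self.models, self.error_history = [], []
        self.decay = decay                     # discount applied to past errors

    def partial_fit(self, X, y):
        clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
        self.models.append(clf)
        self.error_history.append([])
        # Evaluate every ensemble member on the newest batch.
        for k, m in enumerate(self.models):
            err = np.clip(np.mean(m.predict(X) != y), 1e-6, 1 - 1e-6)
            self.error_history[k].append(err)

    def _weights(self):
        w = []
        for hist in self.error_history:
            # Time-discounted average error: recent batches count more.
            d = np.array([self.decay ** i for i in range(len(hist))][::-1])
            err = float(np.average(hist, weights=d))
            w.append(np.log((1 - err) / err))  # log-odds voting weight
        return np.maximum(w, 0.0)

    def predict(self, X):
        w = self._weights()
        classes = np.unique(np.concatenate([m.classes_ for m in self.models]))
        votes = np.zeros((X.shape[0], classes.size))
        for wk, m in zip(w, self.models):
            pred = m.predict(X)
            for ci, c in enumerate(classes):
                votes[:, ci] += wk * (pred == c)
        return classes[votes.argmax(axis=1)]
```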
Incremental update of electrostatic interactions in adaptively restrained particle simulations.
Edorh, Semeho Prince A; Redon, Stéphane
2018-04-06
The computation of long-range potentials is one of the demanding tasks in Molecular Dynamics. During the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package. We evaluated the method on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and is able to outperform the well-established Particle Particle Particle Mesh (P3M) for adaptively restrained simulations. © 2018 Wiley Periodicals, Inc.
Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation
Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi
2015-01-01
Most popular clustering methods typically make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, using a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but also can identify clusters which are adjacent, overlapping, or under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavioral information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which leaves the algorithm with low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
NASA Astrophysics Data System (ADS)
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan for scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete one, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. In order to improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, both enhanced Max-Min and Min-Min algorithms are properly introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts, sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
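A minimal sketch of the core PBIL update under an integer encoding of this kind follows; the simple makespan model, parameter values, and function names are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: PBIL for task-to-machine assignment. Position j of an individual is
# the machine assigned to task j; each task keeps its own categorical
# distribution over machines, independent of the other tasks.
import numpy as np

def pbil_schedule(proc_time, n_machines, pop_size=50, iters=200, lr=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    n_tasks = len(proc_time)
    # One independent categorical distribution per task, initially uniform.
    prob = np.full((n_tasks, n_machines), 1.0 / n_machines)

    def makespan(assign):
        load = np.zeros(n_machines)
        for t, m in enumerate(assign):
            load[m] += proc_time[t]
        return load.max()

    best, best_span = None, np.inf
    for _ in range(iters):
        # Sample a population from the probabilistic model.
        pop = np.array([[rng.choice(n_machines, p=prob[t]) for t in range(n_tasks)]
                        for _ in range(pop_size)])
        spans = np.array([makespan(ind) for ind in pop])
        elite = pop[spans.argmin()]
        if spans.min() < best_span:
            best, best_span = elite.copy(), spans.min()
        # Shift each task's distribution toward the elite individual's choice.
        for t in range(n_tasks):
            target = np.zeros(n_machines)
            target[elite[t]] = 1.0
            prob[t] = (1 - lr) * prob[t] + lr * target
    return best, best_span
```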
Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification
NASA Astrophysics Data System (ADS)
Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona
2016-08-01
The key issue in collecting epidemic disease data for our analysis is that it is a labour-intensive, time-consuming and expensive process, resulting in sparse sample data from which we develop prediction models. To address this sparse data issue, we present novel Incremental Transductive methods that circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models, for labelling data. The results show that, using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
A randomised approach for NARX model identification based on a multivariate Bernoulli distribution
NASA Astrophysics Data System (ADS)
Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.
2017-04-01
The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated to the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
NASA Astrophysics Data System (ADS)
Peña, M.
2016-10-01
Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, is raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt & pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
Incremental online learning in high dimensions.
Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan
2005-12-01
Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of (possibly redundant) inputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
NASA Astrophysics Data System (ADS)
Liu, Peng; Yang, Yong-qing; Li, Zhi-guo; Han, Jun-feng; Wei, Yu; Jing, Feng
2018-02-01
To address the shortcoming that an incremental encoder, whose count can only be changed by a simple process, offers limited repeatability and disturbance rejection, and in connection with its application in a large national project, an electromechanical switch for generating the zero and zero-crossing signals was designed. A coordinate transformation model between the mechanical zero and the electrical zero is given, and an adaptive fast zero-seeking algorithm is proposed to meet the requirements of path optimality, single-pass operation, speed and accuracy; the proposed algorithm effectively resolves the contradiction between accuracy and zero-seeking time. A test platform was built to verify the effectiveness and robustness of the proposed algorithm. The experimental data show that the accuracy of the algorithm is not affected by the zero-seeking speed, with an error of only 0.0013; repeated experiments show that the algorithm meets the system's accuracy requirements even at high speed and is highly robust.
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates Distributed Generation (DG) capacity optimization at locations selected on the basis of an incremental voltage sensitivity criterion for a sub-transmission network. The Modified Shuffled Frog Leaping optimization Algorithm (MSFLA) has been used to optimize the DG capacity. An induction generator model of DG (wind-based generating units) has been considered for the study. The standard IEEE 30-bus test system has been considered for this study. The obtained results are also validated against the shuffled frog leaping algorithm and a modified version of bare-bones particle swarm optimization (BBExp). MSFLA has been found to be more efficient than the other two algorithms for the real power loss minimization problem.
Single-pass incremental force updates for adaptively restrained molecular dynamics.
Singh, Krishna Kant; Redon, Stephane
2018-03-30
Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms through speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulation of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
FRED: a program development tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shilling, J.
1985-09-01
The structured, screen-based editor FRED is introduced. FRED provides incremental parsing and semantic analysis. The parsing is based on an LL(1) top-down algorithm which has been modified to provide follow-the-cursor parsing and soft templates. The languages accepted by the editor are LL(1) languages with the addition of the Unknown and preferred production non-terminal classes. The semantic analysis is based on the incremental update of attribute grammar equations. We briefly describe the interface between FRED and an automated reference librarian system that is under development.
Enabling Incremental Query Re-Optimization
Liu, Mengmeng; Ives, Zachary G.; Loo, Boon Thau
2017-01-01
As declarative query processing techniques expand to the Web, data streams, network routers, and cloud platforms, there is an increasing need to re-plan execution in the presence of unanticipated performance changes. New runtime information may affect which query plan we prefer to run. Adaptive techniques require innovation both in terms of the algorithms used to estimate costs, and in terms of the search algorithm that finds the best plan. We investigate how to build a cost-based optimizer that recomputes the optimal plan incrementally given new cost information, much as a stream engine constantly updates its outputs given new data. Our implementation especially shows benefits for stream processing workloads. It lays the foundations upon which a variety of novel adaptive optimization algorithms can be built. We start by leveraging the recently proposed approach of formulating query plan enumeration as a set of recursive datalog queries; we develop a variety of novel optimization approaches to ensure effective pruning in both static and incremental cases. We further show that the lessons learned in the declarative implementation can be equally applied to more traditional optimizer implementations. PMID:28659658
Generation of Referring Expressions: Assessing the Incremental Algorithm
ERIC Educational Resources Information Center
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
Online Performance-Improvement Algorithms
1994-08-01
fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data compression scheme. ... Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. ... Deferred Data Structuring: recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules since they are so often order sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows the knowledge base to be developed incrementally and then automatically ordered for efficiency. Although data-flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules so as to minimize inference cycles. The algorithm optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
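As a rough illustration of the producer/consumer idea (not the SHINE implementation itself), the sketch below orders a tiny hypothetical rule set so that every rule that produces a fact precedes the rules that consume it; rule names and the dictionary format are assumptions for the example.

```python
# Sketch: a rule "produces" the facts in its consequents and "consumes" the
# facts in its antecedents; ordering producers before consumers reduces the
# number of inference cycles needed for facts to propagate.
from graphlib import TopologicalSorter, CycleError

def order_rules(rules):
    """rules: dict name -> (antecedent_facts, consequent_facts)."""
    producers = {}
    for name, (_, produces) in rules.items():
        for fact in produces:
            producers.setdefault(fact, set()).add(name)

    ts = TopologicalSorter()
    for name, (consumes, _) in rules.items():
        deps = set()
        for fact in consumes:
            deps |= producers.get(fact, set())
        deps.discard(name)
        ts.add(name, *deps)
    try:
        return list(ts.static_order())
    except CycleError:
        return list(rules)          # fall back to original order on cycles

rules = {
    "r1": ({"temp_high"}, {"alarm"}),
    "r2": ({"sensor_raw"}, {"temp_high"}),
    "r3": ({"alarm"}, {"notify_ops"}),
}
print(order_rules(rules))           # e.g. ['r2', 'r1', 'r3']
```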
Tracking and recognition face in videos with incremental local sparse representation model
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang
2013-10-01
This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First a robust face tracking algorithm is proposed via employing local sparse appearance and covariance pooling method. In the following face recognition stage, with the employment of a novel template update strategy, which combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to a robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate at 95% of all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.
Cobweb/3: A portable implementation
NASA Technical Reports Server (NTRS)
Mckusick, Kathleen; Thompson, Kevin
1990-01-01
An algorithm is examined for data clustering and incremental concept formation. An overview is given of the Cobweb/3 system and the algorithm on which it is based, as well as the practical details of obtaining and running the system code. The implementation features a flexible user interface which includes a graphical display of the concept hierarchies that the system constructs.
Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents
NASA Astrophysics Data System (ADS)
Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa
SHIRI is an ontology-based system for integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for their querying. It relies on an automatic, unsupervised and ontology-driven approach for extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experiment with it on an HTML corpus related to calls for papers in computer science, and the results we obtain are very promising. These results show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.
TEAM: efficient two-locus epistasis tests in human genome-wide association study.
Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei
2010-06-15
As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable to single-locus study since many diseases are known to be complex traits. A brute-force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotype may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate controlling. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order of magnitude speed up over the brute-force approach.
Theory and generation of conditional, scalable sub-Gaussian random fields
NASA Astrophysics Data System (ADS)
Panzeri, M.; Riva, M.; Guadagnini, A.; Neuman, S. P.
2016-03-01
Many earth and environmental (as well as a host of other) variables, Y, and their spatial (or temporal) increments, ΔY, exhibit non-Gaussian statistical scaling. Previously we were able to capture key aspects of such non-Gaussian scaling by treating Y and/or ΔY as sub-Gaussian random fields (or processes). This however left unaddressed the empirical finding that whereas sample frequency distributions of Y tend to display relatively mild non-Gaussian peaks and tails, those of ΔY often reveal peaks that grow sharper and tails that become heavier with decreasing separation distance or lag. Recently we proposed a generalized sub-Gaussian model (GSG) which resolves this apparent inconsistency between the statistical scaling behaviors of observed variables and their increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. Most importantly, we demonstrated the feasibility of estimating all parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments, ΔY. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random fields, introduce two approximate versions of this algorithm to reduce CPU time, and explore them on one and two-dimensional synthetic test cases.
Is It that Difficult to Find a Good Preference Order for the Incremental Algorithm?
ERIC Educational Resources Information Center
Krahmer, Emiel; Koolen, Ruud; Theune, Mariet
2012-01-01
In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order.…
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
... the graph-based clustering algorithms we proposed improve the time efficiency significantly for large-scale datasets. In the last chapter, we also propose an incremental reseeding ... plume detection in hyper-spectral video data.
An Incremental High-Utility Mining Algorithm with Transaction Insertion
Gan, Wensheng; Zhang, Binbin
2015-01-01
Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the items purchased by a customer may carry various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms for high-utility mining are designed to handle a static database. Fewer studies handle dynamic high-utility mining with transaction insertion, which otherwise requires rescanning the database and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computation without candidate generation, based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
A design of LED adaptive dimming lighting system based on incremental PID controller
NASA Astrophysics Data System (ADS)
He, Xiangyan; Xiao, Zexin; He, Shaojia
2010-11-01
As a new-generation energy-saving lighting source, the LED is applied widely in various technology and industry fields, and the requirements on its adaptive lighting technology are increasingly rigorous, especially in automatic on-line detecting systems. In this paper, a closed-loop feedback LED adaptive dimming lighting system based on an incremental PID controller is designed, which consists of a MEGA16 chip as the Micro-controller Unit (MCU), the ambient light sensor BH1750 chip with an Inter-Integrated Circuit (I2C) interface, and a constant-current driving circuit. A given value of the light intensity required for the on-line detecting environment needs to be saved to a register of the MCU. The optical intensity, detected by the BH1750 chip in real time, is converted to a digital signal by the AD converter of the BH1750 chip, and then transmitted to the MEGA16 chip through the I2C serial bus. Since the variation law of light intensity in the on-line detecting environment is usually not easy to establish, the incremental Proportional-Integral-Differential (PID) algorithm is applied in this system. The control variable obtained by the incremental PID determines the duty cycle of the Pulse-Width Modulation (PWM). Consequently, the LED's forward current is adjusted by PWM, and the luminous intensity of the detection environment is stabilized by self-adaptation. The coefficients of the incremental PID were obtained from experiments. Compared with the traditional LED dimming system, it has the advantages of interference resistance, simple construction, fast response, and high stability owing to the use of the incremental PID algorithm and the BH1750 chip with the I2C serial bus. Therefore, it is suitable for adaptive on-line detecting applications.
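A minimal sketch of the incremental PID law referred to above is shown below; the gains, duty-cycle limits and setpoint are illustrative placeholders rather than values from the paper.

```python
# Sketch: incremental PID. Each step computes a control increment from the
# last three errors, which maps naturally onto adjusting a PWM duty cycle.
class IncrementalPID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0           # e[k-1], e[k-2]
        self.out = 0.0                    # current duty cycle
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        # delta_u = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
        delta = (self.kp * (e - self.e1)
                 + self.ki * e
                 + self.kd * (e - 2 * self.e1 + self.e2))
        self.out = min(max(self.out + delta, self.out_min), self.out_max)
        self.e2, self.e1 = self.e1, e
        return self.out

pid = IncrementalPID(kp=0.02, ki=0.005, kd=0.001)
duty = pid.update(setpoint=500.0, measurement=430.0)  # lux target vs. sensor reading
```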
Mixed-initiative control of intelligent systems
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1987-01-01
Mixed-initiative user interfaces provide a means by which a human operator and an intelligent system may collectively share the task of deciding what to do next. Such interfaces are important to the effective utilization of real-time expert systems as assistants in the execution of critical tasks. Presented here is the Incremental Inference algorithm, a symbolic reasoning mechanism based on propositional logic and suited to the construction of mixed-initiative interfaces. The algorithm is similar in some respects to the Truth Maintenance System, but replaces the notion of 'justifications' with a notion of recency, allowing newer values to override older values yet permitting various interested parties to refresh these values as they become older and thus more vulnerable to change. A simple example is given of the use of the Incremental Inference algorithm plus an overview of the integration of this mechanism within the SPECTRUM expert system for geological interpretation of imaging spectrometer data.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
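For reference, the sketch below shows the classical one-dimensional incremental (LZ78-style) parsing that the multidimensional scheme generalizes: the source is split into phrases, each phrase being the longest dictionary match extended by one new symbol, and the dictionary grows by one entry per phrase.

```python
# Sketch: 1-D incremental (LZ78-style) parsing of a symbol sequence.
def lz78_parse(sequence):
    dictionary = {"": 0}                 # phrase -> index
    phrases, current = [], ""
    for symbol in sequence:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate          # keep extending the match
        else:
            phrases.append((dictionary[current], symbol))
            dictionary[candidate] = len(dictionary)
            current = ""
    if current:
        phrases.append((dictionary[current], None))  # flush the tail
    return phrases

print(lz78_parse("abababa"))  # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a')]
```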
A scalable and practical one-pass clustering algorithm for recommender system
NASA Astrophysics Data System (ADS)
Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali
2015-12-01
KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates as new data arrive, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
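A minimal sketch of a single-pass clustering step of the kind advocated here is given below: each incoming user vector is either absorbed by the nearest existing cluster (with an incremental centroid update) or starts a new cluster. The distance threshold and class names are assumed parameters, not details from the paper.

```python
# Sketch: one-pass clustering with incremental centroid updates.
import numpy as np

class OnePassClusterer:
    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids, self.counts = [], []

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(dists))
            if dists[k] <= self.threshold:
                # Running-mean update keeps the pass strictly incremental.
                self.counts[k] += 1
                self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                return k
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

clus = OnePassClusterer(threshold=1.5)
for rating_vector in ([5, 1, 0], [4, 1, 1], [0, 5, 4]):
    print(clus.add(rating_vector))       # cluster id assigned on the fly
```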
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
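The sketch below combines, under simplifying assumptions, the two ingredients described above: a snapshot is kept only when its projection error onto the current POD basis exceeds a tolerance (the adaptive selection), and the basis is then updated with a rank-1 incremental SVD so that no snapshot matrix is ever stored. Tolerances, sizes and the Burgers-free example are illustrative; this is not the paper's error estimator.

```python
# Sketch: adaptive snapshot selection + single-pass incremental SVD for POD.
import numpy as np

def incremental_pod(snapshots, tol=1e-3, max_rank=20):
    U = S = None
    for x in snapshots:                       # x: state vector at one time step
        x = np.asarray(x, dtype=float)
        if U is None:
            nrm = np.linalg.norm(x)
            if nrm > 0:
                U, S = (x / nrm)[:, None], np.array([nrm])
            continue
        coeffs = U.T @ x
        residual = x - U @ coeffs
        rnorm = np.linalg.norm(residual)
        if rnorm / max(np.linalg.norm(x), 1e-30) < tol:
            continue                          # snapshot adds little: skip it
        # Rank-1 update of the thin SVD (Brand-style), then truncate.
        q = residual / rnorm
        K = np.block([[np.diag(S), coeffs[:, None]],
                      [np.zeros((1, S.size)), np.array([[rnorm]])]])
        Uk, Sk, _ = np.linalg.svd(K)
        U = np.hstack([U, q[:, None]]) @ Uk
        S = Sk
        if S.size > max_rank:
            U, S = U[:, :max_rank], S[:max_rank]
    return U, S

# Example: build a POD basis for travelling-wave snapshots.
xgrid = np.linspace(0, 1, 200)
snaps = [np.sin(2 * np.pi * (xgrid - 0.01 * t)) for t in range(100)]
U, S = incremental_pod(snaps, tol=1e-4, max_rank=5)
```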
Servo Platform Circuit Design of Pendulous Gyroscope Based on DSP
NASA Astrophysics Data System (ADS)
Tan, Lilong; Wang, Pengcheng; Zhong, Qiyuan; Zhang, Cui; Liu, Yunfei
2018-03-01
To solve the problem that, when the initial installation deviation of a certain type of pendulous gyroscope exceeds 40 degrees, the servo platform cannot keep up with the speed of the gyroscope during the coarse north-seeking phase, this paper takes the digital signal processor TMS320F28027 as the core and uses an incremental digital PID algorithm to carry out the circuit design of the servo platform. Firstly, the hardware circuit is divided into three parts: the DSP minimum system, the motor driving circuit and the signal processing circuit. Then the mathematical model of the incremental digital PID algorithm is established and, based on this model, the PID control program is written in CCS3.3. Finally, the servo motor tracking control experiment is carried out; it shows that the design can significantly improve the tracking ability of the servo platform and that the design has good engineering practicality.
NASA Astrophysics Data System (ADS)
de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.
In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to the learning of series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements the automatic search for crisp trading rules, taking as objectives of the training both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained by applying the usual methodologies, and even clearly superior to Buy&Hold. This work shows that the proposed methodology is valid for different assets and in a different market from the previous work.
The SIST-M: Predictive validity of a brief structured Clinical Dementia Rating interview
Okereke, Olivia I.; Pantoja-Galicia, Norberto; Copeland, Maura; Hyman, Bradley T.; Wanggaard, Taylor; Albert, Marilyn S.; Betensky, Rebecca A.; Blacker, Deborah
2011-01-01
Background We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool–Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI). Objective To test predictive validity of the SIST-M. Methods Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) ratings were determined by either: 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age+gender+education-adjusted Cox proportional hazards models using CDR-sum-of-boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves. Results AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI)=3.1 (2.5,3.9) and 2.8 (2.2,3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6,3.2) and 2.1 (1.5,3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB. Conclusion CDR scores based on items from this shortened interview had high predictive ability for AD – comparable to that using a lengthy clinical interview. PMID:21986342
Self-supervised online metric learning with low rank constraint for scene categorization.
Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo
2013-08-01
Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small amount of training samples are available in the beginning and many more would come sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model not only provides competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on graph-based label propagation, while the model can also self-update using the more confident new samples. With the ability of online learning, our methodology can handle large-scale streaming video data well, with incremental self-updating. We apply our model to online scene categorization, and experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
Nguyen, Hai Van; Finkelstein, Eric Andrew; Mital, Shweta; Gardner, Daphne Su-Lyn
2017-11-01
Offering genetic testing for Maturity Onset Diabetes of the Young (MODY) to all young patients with type 2 diabetes has been shown to be not cost-effective. This study tests whether a novel algorithm-driven genetic testing strategy for MODY is incrementally cost-effective relative to the setting of no testing. A decision tree was constructed to estimate the costs and effectiveness of the algorithm-driven MODY testing strategy and a strategy of no genetic testing over a 30-year time horizon from a payer's perspective. The algorithm uses glutamic acid decarboxylase (GAD) antibody testing (negative antibodies), age of onset of diabetes (<45 years) and body mass index (<25 kg/m 2 if diagnosed >30 years) to stratify the population of patients with diabetes into three subgroups, and testing for MODY only among the subgroup most likely to have the mutation. Singapore-specific costs and prevalence of MODY obtained from local studies and utility values sourced from the literature are used to populate the model. The algorithm-driven MODY testing strategy has an incremental cost-effectiveness ratio of US$93 663 per quality-adjusted life year relative to the no testing strategy. If the price of genetic testing falls from US$1050 to US$530 (a 50% decrease), it will become cost-effective. Our proposed algorithm-driven testing strategy for MODY is not yet cost-effective based on established benchmarks. However, as genetic testing prices continue to fall, this strategy is likely to become cost-effective in the near future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
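The mechanics of the incremental cost-effectiveness comparison can be illustrated in a few lines; all numbers below are hypothetical placeholders, not values from the study, and the willingness-to-pay threshold is an assumed benchmark.

```python
# Sketch: incremental cost-effectiveness ratio (ICER) between two strategies.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical expected values per patient over the model's time horizon.
no_testing = {"cost": 52_000.0, "qaly": 14.20}
algorithm_testing = {"cost": 54_500.0, "qaly": 14.23}   # testing cost + treatment changes

ratio = icer(algorithm_testing["cost"], algorithm_testing["qaly"],
             no_testing["cost"], no_testing["qaly"])
print(f"ICER = US${ratio:,.0f} per QALY gained")
threshold = 50_000.0          # hypothetical willingness-to-pay benchmark
print("cost-effective" if ratio <= threshold else "not cost-effective")
```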
Global Sensor Management: Military Asset Allocation
2009-10-06
solution (referred to as moves). A similar approach has been suggested by Zweben et al. (1993), who use a local-search-based metaheuristic, specifically ... trapped in a local optimum. Hansen and Mladenovic (1998) describe the concept of variable neighborhood local search algorithms, and describe an ... Mataric and G.S. Sukhatme (2002). "An incremental deployment algorithm for mobile robot teams," Proceedings of the 2002 IEEE/RSJ Intl. Conference on ...
Assessing the Incremental Algorithm: A Response to Krahmer et al.
ERIC Educational Resources Information Center
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
This response discusses the experiment reported in Krahmer et al.'s Letter to the Editor of "Cognitive Science". We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from "normal" (i.e.,…
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with the state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method in leaving less residual noise while also saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
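The sketch below is a much-simplified illustration of online subspace tracking (fully observed vectors, a plain gradient step followed by re-orthonormalisation); the paper's algorithm performs the update on the Grassmannian manifold and operates on a low-rank mapping of the seismic volume, both of which are omitted here, and all names are assumptions.

```python
# Sketch: each incoming data vector nudges an orthonormal basis U toward
# itself by a gradient step on the reconstruction error, then U is
# re-orthonormalised; denoising projects data onto the tracked subspace.
import numpy as np

def track_subspace(vectors, rank, step=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(vectors[0])
    U, _ = np.linalg.qr(rng.standard_normal((n, rank)))   # random initial basis
    for v in vectors:
        v = np.asarray(v, dtype=float)
        w = U.T @ v                       # coefficients of v in the current basis
        residual = v - U @ w              # part of v the basis cannot explain
        U = U + step * np.outer(residual, w)   # gradient step on ||v - U w||^2
        U, _ = np.linalg.qr(U)            # restore orthonormal columns
    return U

def denoise(vectors, U):
    return [U @ (U.T @ np.asarray(v, dtype=float)) for v in vectors]
```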
Planning paths through a spatial hierarchy - Eliminating stair-stepping effects
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1989-01-01
Stair-stepping effects are a result of the loss of spatial continuity resulting from the decomposition of space into a grid. This paper presents a path planning algorithm which eliminates stair-stepping effects induced by the grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally, thereby accounting for unexpected events in the operating space.
Chen, C L Philip; Liu, Zhulin
2018-01-01
A Broad Learning System (BLS), which aims to offer an alternative way of learning in deep structure, is proposed in this paper. Deep structures and deep learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process if the network is deemed to need expansion. Two incremental learning algorithms are given, for the increment of the feature nodes (or filters in a deep structure) and for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs. Specifically, the system can be remodeled in an incremental way without entire retraining from the beginning. A satisfactory result for model reduction using singular value decomposition is obtained to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition benchmark dataset demonstrate the effectiveness of the proposed BLS.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft over a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
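The incremental-computation principle described above rests on the linearity of the transforms involved: processing bit-planes of the input from most to least significant yields partial results that improve monotonically. The sketch below illustrates that principle for a generic linear transform; the function name, the unsigned 8-bit input assumption, and the plain matrix form of the transform are illustrative choices, not the authors' packing framework.

```python
import numpy as np

def incremental_transform(A, x, n_bits=8):
    """Refine y = A @ x by processing the bit-planes of x from MSB to LSB.
    Each partial sum is a usable approximation that improves monotonically.
    A is any linear transform matrix; x holds unsigned n_bits-bit samples."""
    x = np.asarray(x, dtype=np.uint64)
    y = np.zeros(A.shape[0])
    approximations = []
    for b in range(n_bits - 1, -1, -1):
        plane = ((x >> b) & 1).astype(float)   # b-th bit-plane of the input
        y = y + (2 ** b) * (A @ plane)         # add its scaled contribution
        approximations.append(y.copy())        # refinement of A @ x so far
    return approximations
```

Terminating the loop early simply returns the result at the precision computed so far, which is the graceful-degradation behaviour the abstract describes.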
Erratum: Erratum: Denoising Phase Unwrapping Algorithm for Precise Phase Shifting Interferometry
NASA Astrophysics Data System (ADS)
Phuc, Phan Huy; Rhee, Hyug-Gyo; Ghim, Young-Sik
2018-06-01
This is a revision of the reference list reported in the original article. In order to clarify the contribution of the previous work on the incremental breadth-first search (IBFS) method applied to the PUMA algorithm, we add one more reference to the existing reference list, as in this erratum. Page 83: In this paper, we propose an algorithm that modifies the Boykov-Kolmogorov (BK) algorithm using the incremental breadth-first search (IBFS) method [27, 28] to find paths from the source to the sink of a graph. [28] S. Ali, H. Khan, I. Shaik and F. Ali, Int. J. Eng. and Technol. 7, 254 (2015).
Yakhelef, N; Audibert, M; Varaine, F; Chakaya, J; Sitienei, J; Huerga, H; Bonnet, M
2014-05-01
In 2007, the World Health Organization recommended introducing rapid Mycobacterium tuberculosis culture into the diagnostic algorithm of smear-negative pulmonary tuberculosis (TB). To assess the cost-effectiveness of introducing a rapid non-commercial culture method (thin-layer agar), together with Löwenstein-Jensen culture to diagnose smear-negative TB at a district hospital in Kenya. Outcomes (number of true TB cases treated) were obtained from a prospective study evaluating the effectiveness of a clinical and radiological algorithm (conventional) against the alternative algorithm (conventional plus M. tuberculosis culture) in 380 smear-negative TB suspects. The costs of implementing each algorithm were calculated using a 'micro-costing' or 'ingredient-based' method. We then compared the cost and effectiveness of conventional vs. culture-based algorithms and estimated the incremental cost-effectiveness ratio. The costs of conventional and culture-based algorithms per smear-negative TB suspect were respectively €39.5 and €144. The costs per confirmed and treated TB case were respectively €452 and €913. The culture-based algorithm led to diagnosis and treatment of 27 more cases for an additional cost of €1477 per case. Despite the increase in patients started on treatment thanks to culture, the relatively high cost of a culture-based algorithm will make it difficult for resource-limited countries to afford.
Computing aggregate properties of preimages for 2D cellular automata.
Beer, Randall D
2017-11-01
Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm, incremental aggregation, that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A
2004-01-01
We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.
A MPPT Algorithm Based PV System Connected to Single Phase Voltage Controlled Grid
NASA Astrophysics Data System (ADS)
Sreekanth, G.; Narender Reddy, N.; Durga Prasad, A.; Nagendrababu, V.
2012-10-01
Future ancillary services provided by photovoltaic (PV) systems could facilitate their penetration in power systems. In addition, low-power PV systems can be designed to improve the power quality. This paper presents a single-phase PV system that provides grid voltage support and compensation of harmonic distortion at the point of common coupling thanks to a repetitive controller. The power provided by the PV panels is controlled by a Maximum Power Point Tracking algorithm based on the incremental conductance method, specifically modified to control the phase of the PV inverter voltage. Simulation and experimental results validate the presented solution.
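The incremental conductance rule mentioned above compares dI/dV with -I/V to decide which way to move the operating point, since dP/dV = 0 at the maximum power point. A minimal sketch of one standard INC update step is given below; the variable names, the step size, and the voltage-reference formulation are assumptions, and the phase-control modification described in the paper is not included.

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One incremental-conductance MPPT update (sketch).
    At the MPP dP/dV = 0, i.e. dI/dV = -I/V; the reference voltage v_ref
    is nudged toward that condition."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step          # operating point moved by irradiance change
        elif di < 0:
            v_ref -= step
    else:
        g = di / dv                # incremental conductance dI/dV
        if g > -i / v:
            v_ref += step          # left of the MPP: raise the voltage
        elif g < -i / v:
            v_ref -= step          # right of the MPP: lower the voltage
    return v_ref
```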
Back to the Future: Consistency-Based Trajectory Tracking
NASA Technical Reports Server (NTRS)
Kurien, James; Nayak, P. Pandurand; Norvig, Peter (Technical Monitor)
2000-01-01
Given a model of a physical process and a sequence of commands and observations received over time, the task of an autonomous controller is to determine the likely states of the process and the actions required to move the process to a desired configuration. We introduce a representation and algorithms for incrementally generating approximate belief states for a restricted but relevant class of partially observable Markov decision processes with very large state spaces. The algorithm presented incrementally generates, rather than revises, an approximate belief state at any point by abstracting and summarizing segments of the likely trajectories of the process. This enables applications to efficiently maintain a partial belief state when it remains consistent with observations and revisit past assumptions about the process' evolution when the belief state is ruled out. The system presented has been implemented and results on examples from the domain of spacecraft control are presented.
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
PMID:29438421
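A minimal sketch of the temporal-saliency step described above, frame differencing followed by Sauvola local adaptive thresholding, is given below using scikit-image; the window size and k parameter are assumed values, not those of the paper.

```python
import numpy as np
from skimage.filters import threshold_sauvola

def temporal_saliency(frame_prev, frame_curr, window_size=25, k=0.2):
    """Temporal saliency via frame differencing followed by Sauvola local
    adaptive thresholding (sketch; window_size and k are assumed values)."""
    diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
    thresh = threshold_sauvola(diff, window_size=window_size, k=k)
    return diff > thresh   # binary mask of candidate moving-target regions
```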
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Statistical characteristics of surrogate data based on geophysical measurements
NASA Astrophysics Data System (ADS)
Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.
2006-09-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
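For reference, a compact sketch of the IAAFT procedure mentioned above is given below: the surrogate alternately has the target power spectrum and the target amplitude distribution imposed on it. The iteration count and random seed are arbitrary choices.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    target_amp = np.abs(np.fft.rfft(x))   # amplitude spectrum to preserve
    sorted_x = np.sort(x)                 # amplitude distribution to preserve
    s = rng.permutation(x)                # random initial shuffle
    for _ in range(n_iter):
        # impose the target power spectrum while keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # impose the target amplitude distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s
```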
Research of digital controlled DC/DC converter based on STC12C5410AD
NASA Astrophysics Data System (ADS)
Chen, Dan-Jiang; Jin, Xin; Xiao, Zhi-Hong
2010-02-01
In order to study the application of digital control technology to DC/DC converters, the principle of the incremental PID control algorithm is analyzed in this paper. Then, a single-chip microcomputer, the STC12C5410AD, is introduced along with its internal resources and characteristics; the PID control algorithm can be implemented easily on it. The PID output is used to change the value of a variable equal to 255 times the duty cycle, which reduces the calculation error. The validity of the presented algorithm was verified by an experiment on a buck DC/DC converter. The experimental results indicate that the output voltage of the buck converter is stable with low ripple.
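A minimal sketch of the incremental (velocity-form) PID update described above is shown below; only the control increment is computed at each step, so the new output is the previous output plus that increment. Gains and variable names are illustrative, and the 255-times-duty-cycle scaling used on the microcontroller is not included.

```python
class IncrementalPID:
    """Velocity-form PID: u[k] = u[k-1] + Kp*(e[k]-e[k-1]) + Ki*e[k]
    + Kd*(e[k]-2*e[k-1]+e[k-2]), with the sample time absorbed into the gains."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e[k-1]
        self.e2 = 0.0  # e[k-2]

    def step(self, setpoint, measurement, u_prev):
        e = setpoint - measurement
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return u_prev + du
```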
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource levels is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams-Hayes, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective of developing and flight-testing control systems that use neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that the addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.
Pornographic image recognition and filtering using incremental learning in compressed domain
NASA Astrophysics Data System (ADS)
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning achieves a higher recognition rate while requiring less recognition time in the compressed domain.
Vujanovic, Anka A; Bonn-Miller, Marcel O; Bernstein, Amit; McKee, Laura G; Zvolensky, Michael J
2010-01-01
The present investigation examined the incremental predictive validity of mindfulness skills, as measured by the Kentucky Inventory of Mindfulness Skills (KIMS), in relation to multiple facets of emotional dysregulation, as indexed by the Difficulties in Emotion Regulation Scale (DERS), above and beyond variance explained by negative affectivity, anxiety sensitivity, and distress tolerance. Participants were a nonclinical community sample of 193 young adults (106 women, 87 men; M(age) = 23.91 years). The KIMS Accepting without Judgment subscale was incrementally negatively predictive of all facets of emotional dysregulation, as measured by the DERS. Furthermore, KIMS Acting with Awareness was incrementally negatively related to difficulties engaging in goal-directed behavior. Additionally, both observing and describing mindfulness skills were incrementally negatively related to lack of emotional awareness, and describing skills also were incrementally negatively related to lack of emotional clarity. Findings are discussed in relation to advancing scientific understanding of emotional dysregulation from a mindfulness skills-based framework.
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value of our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
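For orientation, the sketch below shows a single frequency-domain Wiener deconvolution step, the kind of building block that an incremental scheme such as the one above would iterate while also revising the PSF; the noise-to-signal ratio nsr and the assumption that the PSF is already registered to the image grid are illustrative, and this is not the paper's Incremental Wiener filter itself.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """One frequency-domain Wiener deconvolution step (sketch).
    psf is assumed registered to the image grid; nsr is an assumed
    noise-to-signal ratio used as regularization."""
    H = np.fft.fft2(psf, s=image.shape)          # OTF from the measured PSF
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```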
Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers
2006-09-26
Tracking the global maximum power point of PV arrays under partial shading conditions
NASA Astrophysics Data System (ADS)
Fennich, Meryem
This thesis presents the theoretical and simulation studies of the global maximum power point tracking (MPPT) for photovoltaic systems under partial shading. The main goal is to track the maximum power point of the photovoltaic module so that the maximum possible power can be extracted from the photovoltaic panels. When several panels are connected in series with some of them shaded partially either due to clouds or shadows from neighboring buildings, several local maxima appear in the power vs. voltage curve. A power increment based MPPT algorithm is effective in identifying the global maximum from the several local maxima. Several existing MPPT algorithms are explored and the state-of-the-art power increment method is simulated and tested for various partial shading conditions. The current-voltage and power-voltage characteristics of the PV model are studied under different partial shading conditions, along with five different cases demonstrating how the MPPT algorithm performs when shading switches from one state to another. Each case is supplemented with simulation results. The method of tracking the Global MPP is based on controlling the DC-DC converter connected to the output of the PV array. A complete system simulation including the PV array, the direct current to direct current (DC-DC) converter and the MPPT is presented and tested using MATLAB software. The simulation results show that the MPPT algorithm works very well with the buck converter, while the boost converter needs further changes and implementation.
A hybrid neural network model for noisy data regression.
Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M
2004-04-01
A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets have been conducted to assess and compare the effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.
Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks
2015-03-16
A simulated annealing (SA) algorithm is employed to search moderately-sized networks; k is incremented by 1 and the search repeated until W < zδ, at which point the algorithm stops.
Collaborative Localization and Location Verification in WSNs
Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang
2015-01-01
Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. By the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need of a careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computation cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
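Since SFP generalizes the Gibbs sampler to unknown conditionals, a plain Gibbs sampler for a case where the conditionals are known exactly (a zero-mean bivariate normal) may help fix ideas; it is an illustration of the sampling process SFP builds on, not the SFP procedure itself.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal with
    correlation rho, drawn by alternating the exact conditionals x|y and y|x."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    out = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho ** 2)       # conditional standard deviation
    for t in range(n_samples):
        x = rng.normal(rho * y, sd)    # sample x | y
        y = rng.normal(rho * x, sd)    # sample y | x
        out[t] = (x, y)
    return out
```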
Progressive simplification and transmission of building polygons based on triangle meshes
NASA Astrophysics Data System (ADS)
Li, Hongsheng; Wang, Yingjie; Guo, Qingsheng; Han, Jiafu
2010-11-01
Digital earth is a virtual representation of our planet and a data integration platform which aims at harnessing multisource, multi-resolution, multi-format spatial data. This paper introduces a research framework integrating progressive cartographic generalization and transmission of vector data. Progressive cartographic generalization provides multiple-resolution data from coarse to fine, as key scales and the increments between them, which is not available in the traditional generalization framework. Based on the progressive simplification algorithm, the building polygons are triangulated into meshes and encoded according to the simplification sequence of two basic operations, edge collapse and vertex split. The map data at key scales and the encoded increments between them are stored in a multi-resolution file. As the client submits requests to the server, the coarsest map is transmitted first and then the increments. After data decoding and mesh refinement, the building polygons with more details will be visualized. Progressive generalization and transmission of building polygons is demonstrated in the paper.
NASA Astrophysics Data System (ADS)
Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.
2015-12-01
Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters, increment size and range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through development of an algorithm which dynamically selects the height increment and ceiling. A series of pit-free CHMs were generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights up to 36% compared to original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. (A) Aerial image of Harvard National Forest, (B) standard CHM containing pits, appearing as black speckles, (C) a pit-free CHM created with the static algorithm implementation, and (D) a pit-free CHM created through varying the height threshold ceiling up to 82 m and the increment to 1 m.
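A sketch of the combining step described above is given below, assuming the partial CHMs have already been rasterized at each height threshold and that they are merged by a per-pixel maximum (an assumption about the combination rule); the static 5 m increment / 15 m ceiling configuration is reproduced only for illustration.

```python
import numpy as np

def pit_free_chm(partial_chms):
    """Combine CHMs rasterized at successive height thresholds into a
    pit-free CHM by taking the per-pixel maximum (sketch of the combining
    step only; rasterization of the thresholded point clouds is assumed)."""
    return np.nanmax(np.stack(partial_chms), axis=0)

# thresholds mirroring the static configuration described above (assumed)
increment, ceiling = 5.0, 15.0
thresholds = np.arange(0.0, ceiling + increment, increment)   # 0, 5, 10, 15 m
```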
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
Formal Semantics and Implementation of BPMN 2.0 Inclusive Gateways
NASA Astrophysics Data System (ADS)
Christiansen, David Raymond; Carbone, Marco; Hildebrandt, Thomas
We present the first direct formalization of the semantics of inclusive gateways as described in the Business Process Modeling Notation (BPMN) 2.0 Beta 1 specification. The formal semantics is given for a minimal subset of BPMN 2.0 containing just the inclusive and exclusive gateways and the start and stop events. By focusing on this subset we achieve a simple graph model that highlights the particular non-local features of the inclusive gateway semantics. We sketch two ways of implementing the semantics using algorithms based on incrementally updated data structures and also discuss distributed communication-based implementations of the two algorithms.
Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao
2014-01-01
Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.
The incremental costs of recommended therapy versus real world therapy in type 2 diabetes patients
Crivera, C.; Suh, D. C.; Huang, E. S.; Cagliero, E.; Grant, R. W.; Vo, L.; Shin, H. C.; Meigs, J. B.
2008-01-01
Background The goals of diabetes management have evolved over the past decade to become the attainment of near-normal glucose and cardiovascular risk factor levels. Improved metabolic control is achieved through optimized medication regimens, but costs specifically associated with such optimization have not been examined. Objective To estimate the incremental medication cost of providing optimal therapy to reach recommended goals versus actual therapy in patients with type 2 diabetes. Methods We randomly selected the charts of 601 type 2 diabetes patients receiving care from the outpatient clinics of Massachusetts General Hospital March 1, 1996–August 31, 1997 and abstracted clinical and medication data. We applied treatment algorithms based on 2004 clinical practice guidelines for hyperglycemia, hyperlipidemia, and hypertension to patients’ current medication therapy to determine how current medication regimens could be improved to attain recommended treatment goals. Four clinicians and three pharmacists independently applied the algorithms and reached consensus on recommended therapies. Mean incremental medication costs, the cost differences between current and recommended therapies, per patient (expressed in 2004 dollars) were calculated with 95% bootstrap confidence intervals (CIs). Results Mean patient age was 65 years old, mean duration of diabetes was 7.7 years, 32% had ideal glucose control, 25% had ideal systolic blood pressure, and 24% had ideal low-density lipoprotein cholesterol. Care for these diabetes patients was similar to that observed in recent national studies. If treatment algorithm recommendations were applied, the average annual medication cost/patient would increase from $1525 to $2164. Annual incremental costs/patient increased by $168 (95% CI $133–$206) for antihyperglycemic medications, $75 ($57–$93) for antihypertensive medications, $392 ($354–$434) for antihyperlipidemic medications, and $3 ($3–$4) for aspirin prophylaxis. Yearly incremental cost of recommended laboratory testing ranged from $77–$189/patient. Limitations Although baseline data come from the clinics of a single academic institution, collected in 1997, the care of these diabetes patients was remarkably similar to care recently observed nationally. In addition, the data are dependent on the medical record and may not accurately reflect patients’ actual experiences. Conclusion Average yearly incremental cost of optimizing drug regimens to achieve recommended treatment goals for type 2 diabetes was approximately $600/patient. These results provide valuable input for assessing the cost-effectiveness of improving comprehensive diabetes care. PMID:17076990
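The incremental-cost calculation with a bootstrap confidence interval described above can be sketched as follows; the function name, the number of bootstrap resamples, and the random seed are illustrative choices.

```python
import numpy as np

def incremental_cost_ci(current, recommended, n_boot=2000, seed=0):
    """Mean per-patient incremental cost with a 95% bootstrap CI (sketch).
    current, recommended: arrays of annual per-patient medication costs."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(recommended, float) - np.asarray(current, float)
    boot_means = np.array([
        rng.choice(diff, size=diff.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return diff.mean(), np.percentile(boot_means, [2.5, 97.5])
```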
Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy
NASA Astrophysics Data System (ADS)
Yang, Yu; Dong, Bin; Wen, Zaiwen
2017-02-01
In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical application for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC) and its directional-convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the shapes of the aperture and the beam intensities alternately. Specifically, the shapes of the aperture are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.
Redundancy checking algorithms based on parallel novel extension rule
NASA Astrophysics Data System (ADS)
Liu, Lei; Yang, Yang; Li, Guangli; Wang, Qi; Lü, Shuai
2017-05-01
Redundancy checking (RC) is a key knowledge reduction technology. Extension rule (ER) is a reasoning method first presented in 2003 and well received by experts; novel extension rule (NER) is an improved ER-based reasoning method presented in 2009. In this paper, we first analyse the characteristics of the extension rule, and then present a simple algorithm for redundancy checking based on the extension rule (RCER). In addition, we introduce MIMF, a type of heuristic strategy. Using the aforementioned rule and strategy, we design and implement the RCHER algorithm, which relies on MIMF. Next we design and implement an RCNER (redundancy checking based on NER) algorithm based on NER. Parallel computing greatly accelerates the NER algorithm, which has weak dependence among tasks when executed. Considering this, we present PNER (parallel NER) and apply it to redundancy checking and necessity checking. Furthermore, we design and implement the RCPNER (redundancy checking based on PNER) and NCPPNER (necessary clause partition based on PNER) algorithms as well. The experimental results show that MIMF significantly accelerates the RCER algorithm on large-scale, highly redundant formulae. Comparing PNER with NER and RCPNER with RCNER, the average speedup can reach the number of task decompositions. Comparing NCPPNER with the RCNER-based algorithm for separating redundant formulae, the speedup increases steadily as the scale of the formulae increases. Finally, we describe the challenges that the extension rule will face and suggest possible solutions.
3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server
NASA Astrophysics Data System (ADS)
Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.
2017-11-01
The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.
Menon, J; Mishra, P
2018-04-01
We determined incremental health care resource utilization, incremental health care expenditures, incremental absenteeism, and incremental absenteeism costs associated with osteoarthritis. The Medical Expenditure Panel Survey (MEPS) for 2011 was used as the data source. Individuals 18 years or older and employed during 2011 were eligible for inclusion in the sample for analyses. Individuals with osteoarthritis were identified based on ICD-9-CM codes. Incremental health care resource utilization included annual hospitalizations, hospital days, emergency room visits and outpatient visits. Incremental health expenditures included annual inpatient, outpatient, emergency room, medication, miscellaneous and annual total expenditures. Of the total sample, 1354 individuals were diagnosed with osteoarthritis and compared to non-osteoarthritis individuals. Incremental resource utilization, expenditures, absenteeism and absenteeism costs were estimated using regression models, adjusting for age, gender, sex, region, marital status, insurance coverage, comorbidities, anxiety, asthma, hypertension and hyperlipidemia. Regression models revealed incremental mean annual resource use associated with osteoarthritis of 0.07 hospitalizations, equal to 7 additional hospitalizations per 100 osteoarthritic patients annually, and 3.63 outpatient visits, equal to 363 additional visits per 100 osteoarthritic patients annually. Mean annual incremental total expenditures associated with osteoarthritis were $2046. Annually, mean incremental expenditures were largest for inpatient expenditures at $826, followed by mean incremental outpatient expenditures of $659, and mean incremental medication expenditures of $325. Mean annual incremental absenteeism was 2.2 days and mean annual incremental absenteeism costs were $715.74. Total direct expenditures were estimated at $41.7 billion. Osteoarthritis was associated with significant incremental health care resource utilization, expenditures, absenteeism and absenteeism costs. Copyright © 2017 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Control strategy of grid-connected photovoltaic generation system based on GMPPT method
NASA Astrophysics Data System (ADS)
Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen
2018-02-01
There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily become trapped at a local maximum power point (MPP) and fail to find the global maximum power point (GMPP). To solve this problem, an improved global maximum power point tracking (GMPPT) method is proposed, combining a traditional MPPT method with the particle swarm optimization (PSO) algorithm. Under different operating conditions of the PV cells, different tracking algorithms are used: when the environment changes, the improved PSO algorithm is adopted to perform the global search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT around the best local operating point. Based on a simulation model of the grid-connected PV system built in Matlab/Simulink, comparative analysis of the MPPT performance of the proposed control algorithm and the traditional MPPT method under uniform irradiance and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
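A minimal sketch of the PSO-based global search over the converter duty cycle is shown below; measure_power is an assumed interface to the power measurement, and the swarm size, inertia and acceleration coefficients are illustrative rather than the paper's settings. The returned duty cycle would then be refined locally by the variable-step INC method.

```python
import numpy as np

def pso_gmppt(measure_power, n_particles=6, n_iter=20, seed=0):
    """Search the duty-cycle range [0, 1] for the global MPP (sketch).
    measure_power(d) is assumed to return the PV power at duty cycle d."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, 1.0, n_particles)        # particle positions
    v = np.zeros(n_particles)                     # particle velocities
    pbest = d.copy()
    pbest_val = np.array([measure_power(x) for x in d])
    gbest = pbest[np.argmax(pbest_val)]
    w, c1, c2 = 0.4, 1.5, 1.5                     # inertia and acceleration
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - d) + c2 * r2 * (gbest - d)
        d = np.clip(d + v, 0.0, 1.0)
        val = np.array([measure_power(x) for x in d])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = d[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest   # hand off to variable-step INC around this operating point
```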
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
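A toy sketch of a projected dual update driven by delayed stochastic gradients is given below to illustrate the asynchronous incremental idea; grad_fn, the fixed delay, and the scalar dual variable are assumptions made for illustration, not the paper's algorithm.

```python
def delayed_dual_descent(grad_fn, lam0, step=0.01, delay=3, n_iter=200):
    """Projected dual descent on a scalar multiplier lambda >= 0, where the
    gradient applied at time t was computed `delay` iterations earlier
    (sketch of updating with stale/delayed stochastic gradients).
    grad_fn(lam, t) is an assumed interface returning a stochastic gradient."""
    lam = float(lam0)
    history = []
    for t in range(n_iter):
        history.append(grad_fn(lam, t))    # gradient computed at time t ...
        g = history[max(0, t - delay)]     # ... but an older one is applied
        lam = max(0.0, lam - step * g)     # projection onto lambda >= 0
    return lam
```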
NASA Astrophysics Data System (ADS)
Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting
2010-11-01
An imaging spectrometer on a space remote sensing satellite requires a shortwave band ranging from 2.1 μm to 3 μm, which is one of the most important bands in remote sensing. We designed an infrared sub-system of the imaging spectrometer using a homemade 640x1 InGaAs shortwave infrared sensor operating in a focal plane array (FPA) system that requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 °C. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark current noise, and adopts an incremental PID algorithm to generate a PWM wave in order to control the temperature of the sensor. The FPGA design is composed of four modules, all coded in VHDL and implemented in the FPGA device APA300. Experiments show that the intelligent temperature control system succeeds in controlling the temperature of the sensor.
Kutlak, Roman; van Deemter, Kees; Mellish, Chris
2016-01-01
This article presents a computational model of the production of referring expressions under uncertainty over the hearer's knowledge. Although situations where the hearer's knowledge is uncertain have seldom been addressed in the computational literature, they are common in ordinary communication, for example when a writer addresses an unknown audience, or when a speaker addresses a stranger. We propose a computational model composed of three complementary heuristics based on, respectively, an estimation of the recipient's knowledge, an estimation of the extent to which a property is unexpected, and the question of what is the optimum number of properties in a given situation. The model was tested in an experiment with human readers, in which it was compared against the Incremental Algorithm and human-produced descriptions. The results suggest that the new model outperforms the Incremental Algorithm in terms of the proportion of correctly identified entities and in terms of the perceived quality of the generated descriptions. PMID:27630592
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential to separate correlated sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient based multiplicative updating rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
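For context, the standard multiplicative-update NMF that the incremental, volume-constrained variant builds on can be sketched as follows; the volume constraint on the mixing matrix and the incremental (online) update mode of the paper are not included.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-9, seed=0):
    """Standard batch multiplicative-update NMF, X ≈ W H, minimizing the
    Frobenius reconstruction error (Lee-Seung updates)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update sources
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix
    return W, H
```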
An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.
Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha
2017-02-01
Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.
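As background, a basic batch ELM, a random hidden layer followed by least-squares output weights, is sketched below; the eT2ELM described above replaces this fixed hidden layer with automatically generated and pruned interval type-2 fuzzy nodes. The hidden-layer size and activation are illustrative choices.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Basic batch ELM: random input weights and biases, tanh hidden layer,
    output weights obtained by least squares (pseudoinverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```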
Toward translational incremental similarity-based reasoning in breast cancer grading
NASA Astrophysics Data System (ADS)
Tutac, Adina E.; Racoceanu, Daniel; Leow, Wee-Keng; Müller, Henning; Putti, Thomas; Cretu, Vladimir
2009-02-01
One of the fundamental issues in bridging the gap between the proliferation of Content-Based Image Retrieval (CBIR) systems in the scientific literature and the deficiency of their usage in the medical community is the characteristic of CBIR of accessing information by images and/or text only. Yet, the way physicians reason about patients leads intuitively to a case representation. Hence, a proper solution to overcome this gap is to consider a CBIR approach inspired by Case-Based Reasoning (CBR), which naturally introduces medical knowledge structured by cases. Moreover, in a CBR system, the knowledge is incrementally added and learned. The purpose of this study is to initiate a translational solution from CBIR algorithms to clinical practice, using a CBIR/CBR hybrid approach. Therefore, we advance the idea of translational incremental similarity-based reasoning (TISBR), using combined CBIR and CBR characteristics: incremental learning of medical knowledge, medical case-based structure of the knowledge (CBR), image usage to retrieve similar cases (CBIR), and the similarity concept (central to both paradigms). For this purpose, three major axes are explored: indexing, case retrieval and search refinement, applied to Breast Cancer Grading (BCG), a powerful breast cancer prognosis exam. The effectiveness of this strategy is currently evaluated for indexing over cases provided by the Pathology Department of Singapore National University Hospital. With its current accuracy, TISBR opens interesting perspectives for complex reasoning in future medical research, paving the way to better knowledge traceability and a better acceptance rate of computer-aided diagnosis assistance among practitioners.
An Improved Incremental Learning Approach for KPI Prognosis of Dynamic Fuel Cell System.
Yin, Shen; Xie, Xiaochen; Lam, James; Cheung, Kie Chung; Gao, Huijun
2016-12-01
The key performance indicator (KPI) has an important practical value with respect to the product quality and economic benefits for modern industry. To cope with the KPI prognosis issue under nonlinear conditions, this paper presents an improved incremental learning approach based on available process measurements. The proposed approach takes advantage of the algorithmic overlap of locally weighted projection regression (LWPR) and partial least squares (PLS), implementing the PLS-based prognosis in each locally linear model produced by the incremental learning process of LWPR. The global prognosis results, including KPI prediction and process monitoring, are obtained from the corresponding normalized weighted means of all the local models. The statistical indicators for prognosis are enhanced as well by the design of novel KPI-related and KPI-unrelated statistics with suitable control limits for non-Gaussian data. For application-oriented purposes, the process measurements from real datasets of a proton exchange membrane fuel cell system are employed to demonstrate the effectiveness of KPI prognosis. The proposed approach is finally extended to a long-term voltage prediction for potential reference in further fuel cell applications.
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
Incremental principal component pursuit for video background modeling
Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt
2017-03-14
An incremental Principal Component Pursuit (PCP) algorithm is presented for video background modeling that processes one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
Reducing Surface Clutter in Cloud Profiling Radar Data
NASA Technical Reports Server (NTRS)
Tanelli, Simone; Pak, Kyung; Durden, Stephen; Im, Eastwood
2008-01-01
An algorithm has been devised to reduce ground clutter in the data products of the CloudSat Cloud Profiling Radar (CPR), which is a nadir-looking radar instrument, in orbit around the Earth, that measures power backscattered by clouds as a function of distance from the instrument. Ground clutter contaminates the CPR data in the lowest 1 km of the atmospheric profile, heretofore making it impossible to use CPR data to satisfy the scientific interest in studying clouds and light rainfall at low altitude. The algorithm is based partly on the fact that the CloudSat orbit is such that the geodetic altitude of the CPR varies continuously over a range of approximately 25 km. As the geodetic altitude changes, the radar timing parameters are changed at intervals defined by flight software in order to keep the troposphere inside a data-collection time window. However, within each interval, the surface of the Earth continuously "scans through" (that is, it moves across) a few range bins of the data time window. For each radar profile, only a few samples [one for every range-bin increment (Δr = 240 m)] of the surface-clutter signature are available around the range bin in which the peak of the surface return is observed, but samples in consecutive radar profiles are offset slightly (by amounts much less than Δr) with respect to each other according to the relative change in geodetic altitude. As a consequence, in a case in which the surface area under examination is homogeneous (e.g., an ocean surface), a sequence of consecutive radar profiles of the surface in that area contains samples of the surface response with range resolution Δp much finer than the range-bin increment (Δp << Δr). Once the high-resolution surface response has thus become available, the profile of surface clutter can be accurately estimated by use of a conventional maximum-correlation scheme: A translated and scaled version of the high-resolution surface response is fitted to the observed low-resolution profile. The translation and scaling factors that optimize the fit in a maximum-correlation sense represent (1) the true position of the surface relative to the sampled surface peak and (2) the magnitude of the surface backscatter. The performance of this algorithm has been tested on CloudSat data acquired over an ocean surface. A preliminary analysis of the test data showed a surface-clutter-rejection ratio over flat surfaces of >10 dB and a reduction of the contaminated altitude over the ocean from about 1 km to about 0.5 km. The algorithm has been embedded in CloudSat L1B processing as of Release 04 (July 2007), and the estimated flat-surface clutter is removed from the observed reflectivity profile in the L2B-GEOPROF product (see CloudSat product documentation for details and performance at http://www.cloudsat.cira.colostate.edu/dataSpecs.php?prodid=1).
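The clutter estimate described above amounts to fitting a translated and scaled copy of the oversampled surface response to each observed low-resolution profile by maximizing correlation. The following is a schematic sketch of that matching step on synthetic data; the grid spacings, brute-force search and amplitude fit are illustrative stand-ins, not the operational CPR code.

```python
import numpy as np

dr = 240.0                      # range-bin increment (m)
fine = 10.0                     # oversampled grid spacing (m), built from profile-to-profile offsets
r_fine = np.arange(-2000, 2000, fine)
surface_hi = np.exp(-0.5 * (r_fine / 300.0) ** 2)   # stand-in high-resolution surface response

r_bins = np.arange(-1920, 1921, dr)                 # coarse range bins of one profile
true_shift, true_scale = 87.0, 2.4
observed = true_scale * np.interp(r_bins - true_shift, r_fine, surface_hi)
observed += 0.02 * np.random.randn(observed.size)   # noisy observed low-resolution profile

# brute-force maximum-correlation search over sub-bin translations
best_shift, best_corr = 0.0, -np.inf
for shift in np.arange(-dr, dr, fine):
    template = np.interp(r_bins - shift, r_fine, surface_hi)
    corr = np.dot(template, observed) / (np.linalg.norm(template) * np.linalg.norm(observed))
    if corr > best_corr:
        best_shift, best_corr = shift, corr

template = np.interp(r_bins - best_shift, r_fine, surface_hi)
scale_hat = np.dot(template, observed) / np.dot(template, template)  # least-squares amplitude
clutter_estimate = scale_hat * template                               # profile to subtract
print(best_shift, scale_hat)
```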
New approach to statistical description of fluctuating particle fluxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenko, V. V.
2009-01-15
The probability density functions (PDFs) of the increments of fluctuating particle fluxes are investigated. It is found that the PDFs have heavy power-law tails decreasing as x^(−α−1) at x → ∞. This makes it possible to describe these PDFs in terms of fractionally stable distributions (FSDs) q(x; α, β, θ, λ). The parameters α, β, θ, and λ were estimated statistically using as an example the time samples of fluctuating particle fluxes measured in the edge plasma of the L-2M stellarator. Two series of fluctuating fluxes measured before and after boronization of the vacuum chamber were processed. It is shown that the increments of fluctuating fluxes are well described by FSDs. The effect of boronization on the parameters of FSDs is analyzed. An algorithm for statistically estimating the FSD parameters and a procedure for processing experimental data are described.
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
Realizable optimal control for a remotely piloted research vehicle. [stability augmentation
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
The design of a control system using the linear-quadratic regulator (LQR) control law theory for time-invariant systems in conjunction with an incremental gradient procedure is presented. The incremental gradient technique reduces the full-state feedback controller design, generated by the LQR algorithm, to a realizable design. With a realizable controller, the feedback gains are based only on the available system outputs instead of being based on the full-state outputs. The design is for a remotely piloted research vehicle (RPRV) stability augmentation system. The design includes methods for accounting for noisy measurements, discrete controls with zero-order-hold outputs, and computational delay errors. Results from simulation studies of the RPRV response to an elevator step input, together with frequency-analysis techniques, are included to illustrate these effects and their influence on the controller design.
Smoothing-Based Relative Navigation and Coded Aperture Imaging
NASA Technical Reports Server (NTRS)
Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher
2017-01-01
This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple, small spacecraft in a formation, and a small, long range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that will approach an exact solution, as opposed to one with linearization approximation that is typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.
Konias, Sokratis; Chouvarda, Ioanna; Vlahavas, Ioannis; Maglaveras, Nicos
2005-09-01
Current approaches for mining association rules usually assume that the mining is performed in a static database, where the problem of missing attribute values does not practically exist. However, these assumptions do not hold in some medical databases, such as in a home care system. In this paper, a novel uncertainty rule algorithm is illustrated, namely URG-2 (Uncertainty Rule Generator), which addresses the problem of mining dynamic databases containing missing values. This algorithm requires only one pass over the initial dataset in order to generate the item set, while new metrics corresponding to the notion of Support and Confidence are used. URG-2 was evaluated over two medical databases, randomly introducing multiple missing values for each record's attributes (rates of 5-20%, in 5% increments) in the initial dataset. Compared with the classical approach (records with missing values are ignored), the proposed algorithm was more robust in mining rules from datasets containing missing values. In all cases, the difference in preserving the initial rules ranged between 30% and 60% in favour of URG-2. Moreover, due to its incremental nature, URG-2 saved over 90% of the time required for thorough re-mining. Thus, the proposed algorithm can offer a preferable solution for mining in dynamic relational databases.
Grid Integration of Single Stage Solar PV System using Three-level Voltage Source Converter
NASA Astrophysics Data System (ADS)
Hussain, Ikhlaq; Kandpal, Maulik; Singh, Bhim
2016-08-01
This paper presents a single stage solar PV (photovoltaic) grid integrated power generating system using a three level voltage source converter (VSC) operating at low switching frequency of 900 Hz with robust synchronizing phase locked loop (RS-PLL) based control algorithm. To track the maximum power from solar PV array, an incremental conductance algorithm is used and this maximum power is fed to the grid via three-level VSC. The use of single stage system with three level VSC offers the advantage of low switching losses and the operation at high voltages and high power which results in enhancement of power quality in the proposed system. Simulated results validate the design and control algorithm under steady state and dynamic conditions.
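The maximum power point tracking mentioned above relies on the incremental conductance method, which compares the incremental conductance dI/dV with the negative instantaneous conductance -I/V and nudges the operating voltage accordingly. A minimal sketch of one tracking step is given below; the step size and interface are illustrative assumptions, not the paper's controller.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_step=0.5):
    """One incremental-conductance MPPT update: returns the new voltage reference."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += dv_step        # power rising at constant voltage: move up
        elif di < 0:
            v_ref -= dv_step
    else:
        g_inc = di / dv             # incremental conductance dI/dV
        if g_inc > -i / v:
            v_ref += dv_step        # left of the MPP: increase voltage
        elif g_inc < -i / v:
            v_ref -= dv_step        # right of the MPP: decrease voltage
        # g_inc == -i/v: at the MPP, hold v_ref
    return v_ref

# toy usage: one update from two successive PV array measurements
print(inc_cond_step(v=30.2, i=7.9, v_prev=30.0, i_prev=8.0, v_ref=30.2))
```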
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.
Self-adaptive Solution Strategies
NASA Technical Reports Server (NTRS)
Padovan, J.
1984-01-01
The development of enhancements to current generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations which lead to improved algorithms that avoid the need for global level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.
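For context, the baseline scheme being enhanced is the incremental-iterative Newton-Raphson procedure: the external load is applied in increments, and equilibrium (Newton) iterations are performed within each increment. A generic sketch on a one-dimensional nonlinear problem follows; the residual and tangent are illustrative stand-ins for assembled finite element quantities, not the enhanced formulations of the abstract.

```python
import numpy as np

def newton_raphson_incremental(residual, tangent, u0, loads, tol=1e-10, max_iter=25):
    """Apply the external load in increments; iterate to equilibrium within each increment."""
    u = np.array(u0, dtype=float)
    for f_ext in loads:                       # load increments
        for _ in range(max_iter):             # equilibrium (Newton) iterations
            r = f_ext - residual(u)           # out-of-balance force
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(tangent(u), r)
            u = u + du                        # incremental update of the solution
    return u

# toy problem: internal force f_int(u) = k*u + alpha*u**3, written as a 1x1 system
k, alpha = 2.0, 0.5
residual = lambda u: np.array([k * u[0] + alpha * u[0] ** 3])
tangent = lambda u: np.array([[k + 3 * alpha * u[0] ** 2]])
print(newton_raphson_incremental(residual, tangent, [0.0], loads=[np.array([1.0]), np.array([2.0])]))
```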
Cascaded face alignment via intimacy definition feature
NASA Astrophysics Data System (ADS)
Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin
2017-09-01
Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment by using a locally lightweight feature, namely intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when testing on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed, 20% improvement in terms of alignment accuracy and saves an order of magnitude on memory requirement.
van der Maas, Marloes E; Mantjes, Gertjan; Steuten, Lotte M G
2017-04-01
Antibiotics are often recommended as treatment for patients with chronic obstructive pulmonary disease (COPD) exacerbations. However, not all COPD exacerbations are caused by bacterial infections and there is consequently considerable misuse and overuse of antibiotics among patients with COPD. This poses a severe burden on healthcare resources such as increased risk of developing antibiotic resistance. The biomarker procalcitonin (PCT) displays specificity to distinguish bacterial inflammations from nonbacterial inflammations and may therefore help to rationalize antibiotic prescriptions. We report in this study, a three-country comparison of the health and economic consequences of a PCT biomarker-guided prescription and clinical decision-making strategy compared to current practice in hospitalized patients with COPD exacerbations. A decision tree was developed, comparing the expected costs and effects of the PCT algorithm to current practice in the Netherlands, Germany, and the United Kingdom. The time horizon of the model captured the length of hospital stay and a societal perspective was also adopted. The primary health outcome was the duration of antibiotic therapy. The incremental cost-effectiveness ratio was defined as the incremental costs per antibiotic day avoided. The incremental cost savings per day on antibiotic therapy avoided were (in Euros) €90 in the Netherlands, €125 in Germany, and €52 in the United Kingdom. Probabilistic sensitivity analyses showed that in the majority of simulations, the PCT biomarker strategy was superior to current practice (the Netherlands: 58%, Germany: 58%, and the United Kingdom: 57%). In conclusion, the PCT biomarker algorithm to optimize antibiotic prescriptions in COPD is likely to be cost-effective compared to current practice. Both the percentage of patients who start with antibiotic treatment as well as the duration of antibiotic therapy are reduced with the PCT decision algorithm, leading to a decrease in total costs per patient. Economic analysis based on real-life data is recommended for further research. Biomarker-driven prescription algorithms are important instruments for personalized medicine in COPD. This also attests to the emerging convergence of biomarker innovations and the broader field of Health Technology Assessment (HTA).
Hoenigl, Martin; Graff-Zivin, Joshua; Little, Susan J.
2016-01-01
Background. In nonhealthcare settings, widespread screening for acute human immunodeficiency virus (HIV) infection (AHI) is limited by cost and decision algorithms to better prioritize use of resources. Comparative cost analyses for available strategies are lacking. Methods. To determine cost-effectiveness of community-based testing strategies, we evaluated annual costs of 3 algorithms that detect AHI based on HIV nucleic acid amplification testing (EarlyTest algorithm) or on HIV p24 antigen (Ag) detection via Architect (Architect algorithm) or Determine (Determine algorithm) as well as 1 algorithm that relies on HIV antibody testing alone (Antibody algorithm). The cost model used data on men who have sex with men (MSM) undergoing community-based AHI screening in San Diego, California. Incremental cost-effectiveness ratios (ICERs) per diagnosis of AHI were calculated for programs with HIV prevalence rates between 0.1% and 2.9%. Results. Among MSM in San Diego, EarlyTest was cost-saving (ie, ICERs per AHI diagnosis less than $13,000) when compared with the 3 other algorithms. Cost analyses relative to regional HIV prevalence showed that EarlyTest was cost-effective (ie, ICERs less than $69,547) for similar populations of MSM with an HIV prevalence rate >0.4%; Architect was the second best alternative for HIV prevalence rates >0.6%. Conclusions. Identification of AHI by the dual EarlyTest screening algorithm is likely to be cost-effective not only among at-risk MSM in San Diego but also among similar populations of MSM with HIV prevalence rates >0.4%. PMID:26508512
Wang, Lingling; Fu, Li
2018-01-01
In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation systems (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time-scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time-scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which differs from the conventional algorithms simulated in a pure sculling environment. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during the velocity update, with the advantage of lower computational load compared with conventional algorithms. PMID:29346323
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soufi, M; Asl, A Kamali; Geramifar, P
2015-06-15
Purpose: The objective of this study was to find the best seed localization parameters in the random walk algorithm's application to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable image noise robustness. Also its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB codes. The validation and verification of the algorithm have been done with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (with incremental steps of 10 mm) and different tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) has been applied to reconstruct the phantom PET images with different pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages, at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, 5% increment) SUVmax for background seeds. Also, for investigation of algorithm performance on clinical data, 19 patients with lung tumor were studied. The resulting contours from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting the pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels have been obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and diagnosis.
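The seed localization step above amounts to thresholding the image against fractions of SUVmax: voxels above a high fraction become foreground seeds and voxels below a low fraction become background seeds, which are then handed to the random walk solver. Below is a minimal sketch of that seeding step on a synthetic array; the random walk segmentation itself is not shown and the phantom values are illustrative.

```python
import numpy as np

def suv_seeds(img, fg_frac=0.70, bg_frac=0.30):
    """Label foreground/background seeds from fractions of SUVmax (0 = unlabeled)."""
    suv_max = img.max()
    seeds = np.zeros(img.shape, dtype=np.int8)
    seeds[img >= fg_frac * suv_max] = 1   # foreground seeds: at least 70% of SUVmax
    seeds[img <= bg_frac * suv_max] = 2   # background seeds: up to 30% of SUVmax
    return seeds

# synthetic "hot" spherical lesion on a noisy low-uptake background
z, y, x = np.mgrid[-20:21, -20:21, -20:21]
img = 8.0 * (x**2 + y**2 + z**2 <= 10**2) + 1.0 + 0.2 * np.random.rand(41, 41, 41)
seeds = suv_seeds(img)
print((seeds == 1).sum(), (seeds == 2).sum())
```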
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
This study compares conventional grab sampling to incremental sampling methodology (ISM) to characterize metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
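The Monte Carlo comparison can be pictured as repeatedly drawing either individual grab samples or composites of many increments from the same skewed concentration field and comparing the spread of the resulting means. The sketch below uses an assumed lognormal contamination field with purely illustrative numbers, not the site data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
field = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)   # skewed Pb-like concentration population

def grab_mean(n_grabs=9):
    """Mean of a handful of individual grab samples."""
    return rng.choice(field, size=n_grabs).mean()

def ism_sample(n_increments=30):
    """One incremental sample: a physical composite of many increments, analyzed once."""
    return rng.choice(field, size=n_increments).mean()

grab_means = [grab_mean() for _ in range(2000)]
ism_means = [np.mean([ism_sample() for _ in range(3)]) for _ in range(2000)]  # e.g. 3 replicate ISM samples
print("grab-sample mean spread (sd):", np.std(grab_means))
print("ISM mean spread (sd):       ", np.std(ism_means))
```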
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
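The incremental (delta or correction) form replaces a direct solve of A x = b with repeated solves for a correction: an approximate operator is applied to Δx, driven by the residual of the exact system, and x is updated as x ← x + Δx, so the converged answer still satisfies the original equations. A small sketch of that idea follows; the approximate operator here is an illustrative stand-in, not the spatially split approximate factorization of the paper.

```python
import numpy as np

def incremental_iterative_solve(A, b, A_approx, x0=None, tol=1e-10, max_iter=200):
    """Delta/correction form: solve A_approx * dx = b - A x, then x += dx."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(max_iter):
        r = b - A @ x                      # residual of the *exact* system
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        dx = np.linalg.solve(A_approx, r)  # cheap/approximate solve for the increment
        x += dx
    return x, max_iter

# toy example: approximate operator = lower triangle (incl. diagonal) of a diagonally dominant A
rng = np.random.default_rng(0)
A = np.diag(np.arange(5.0, 10.0)) + 0.3 * rng.random((5, 5))
b = rng.random(5)
x, iters = incremental_iterative_solve(A, b, np.tril(A))
print(iters, np.linalg.norm(A @ x - b))
```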
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Incremental triangulation by way of edge swapping and local optimization
NASA Technical Reports Server (NTRS)
Wiltberger, N. Lyn
1994-01-01
This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criterion value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
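The MaxMin (Delaunay) swap criterion referred to above can be implemented with the classical in-circle test: for two triangles sharing an edge, the edge is swapped when the opposite vertex of one triangle lies inside the circumcircle of the other. The sketch below shows only that local test with a simple orientation assumption; HARLEY's actual implementation is not reproduced.

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """True if point d lies inside the circumcircle of triangle (a, b, c), assumed counter-clockwise."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2],
    ])
    return np.linalg.det(m) > 0.0

def should_swap(shared, opp1, opp2):
    """Delaunay (MaxMin) criterion for edge 'shared' = (p, q) between triangles (p, q, opp1) and (q, p, opp2)."""
    p, q = shared
    return in_circumcircle(p, q, opp1, opp2)

# toy usage: a thin pair of triangles whose shared edge should be swapped
p, q = np.array([0.0, 0.0]), np.array([2.0, 0.0])
t1, t2 = np.array([1.0, 0.2]), np.array([1.0, -0.2])
print(should_swap((p, q), t1, t2))
```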
A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.
Singh, Narinder; Singh, S B
2017-01-01
A modified variant of gray wolf optimization algorithm, namely, mean gray wolf optimization algorithm has been developed by modifying the position update (encircling behavior) equations of gray wolf optimization algorithm. The proposed variant has been tested on 23 standard benchmark well-known test functions (unimodal, multimodal, and fixed-dimension multimodal), and the performance of modified variant has been compared with particle swarm optimization and gray wolf optimization. Proposed algorithm has also been applied to the classification of 5 data sets to check feasibility of the modified variant. The results obtained are compared with many other meta-heuristic approaches, ie, gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the performance of modified variant is able to find best solutions in terms of high level of accuracy in classification and improved local optima avoidance.
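For reference, the position-update (encircling) equations that the modified variant alters are, in standard gray wolf optimization, driven by the three best wolves (alpha, beta, delta). Below is a minimal sketch of that standard update for one candidate in one iteration; the "mean" modification described in the abstract is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def gwo_position_update(x, x_alpha, x_beta, x_delta, a, rng):
    """Standard GWO update of one wolf's position from the three leaders."""
    candidates = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.size), rng.random(x.size)
        A = 2 * a * r1 - a                 # coefficient vector A (a decreases from 2 to 0 over iterations)
        C = 2 * r2                         # coefficient vector C
        D = np.abs(C * leader - x)         # encircling distance to this leader
        candidates.append(leader - A * D)
    return np.mean(candidates, axis=0)     # X(t+1) = (X1 + X2 + X3) / 3

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=3)
x_a, x_b, x_d = rng.uniform(-1, 1, (3, 3))
print(gwo_position_update(x, x_a, x_b, x_d, a=2.0, rng=rng))
```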
Stream Clustering of Growing Objects
NASA Astrophysics Data System (ADS)
Siddiqui, Zaigham Faraz; Spiliopoulou, Myra
We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream e.g. streams of
Kariuki, Jacob K; Gona, Philimon; Leveille, Suzanne G; Stuart-Shor, Eileen M; Hayman, Laura L; Cromwell, Jerry
2018-06-01
The non-lab Framingham algorithm, which substitutes body mass index for lipids in the laboratory-based (lab-based) Framingham algorithm, has been validated among African Americans (AAs). However, its cost-effectiveness and economic tradeoffs have not been evaluated. This study examines the incremental cost-effectiveness ratio (ICER) of two cardiovascular disease (CVD) prevention programs guided by the non-lab versus lab-based Framingham algorithm. We simulated the World Health Organization CVD prevention guidelines on a cohort of 2690 AA participants in the Atherosclerosis Risk in Communities (ARIC) cohort. Costs were estimated using Medicare fee schedules (diagnostic tests, drugs & visits), Bureau of Labor Statistics (RN wages), and estimates for managing incident CVD events. Outcomes were assumed to be true positive cases detected at a data-driven treatment threshold. Both algorithms had the best balance of sensitivity/specificity at the moderate risk threshold (>10% risk). Over 12 years, 82% and 77% of 401 incident CVD events were accurately predicted via the non-lab and lab-based Framingham algorithms, respectively. There were 20 fewer false negative cases in the non-lab approach, translating into over $900,000 in savings over 12 years. The ICER was -$57,153 for every extra CVD event prevented when using the non-lab algorithm. The approach guided by the non-lab Framingham strategy dominated the lab-based approach with respect to both costs and predictive ability. Consequently, the non-lab Framingham algorithm could potentially provide a highly effective screening tool at lower cost to address the high burden of CVD, especially among AAs and in resource-constrained settings where lab tests are unavailable.
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low accuracy of prediction in the positive samples and the poor overall classification effects caused by unbalanced sample data of MicroRNA (miRNA) targets, we propose a support vector machine-integration of under-sampling and weight (SVM-IUSM) algorithm in this paper, an ensemble-learning-based under-sampling algorithm. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates the abnormal ones in negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the integrated miRNA target classifier is obtained by combining multiple weak classifiers through a voting mechanism. The experiment revealed that the SVM-IUSM, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy of positive targets and the overall effect of classification, but also enhance the generalization ability of the miRNA target classifier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
Hoenigl, Martin; Graff-Zivin, Joshua; Little, Susan J
2016-02-15
In nonhealthcare settings, widespread screening for acute human immunodeficiency virus (HIV) infection (AHI) is limited by cost and decision algorithms to better prioritize use of resources. Comparative cost analyses for available strategies are lacking. To determine cost-effectiveness of community-based testing strategies, we evaluated annual costs of 3 algorithms that detect AHI based on HIV nucleic acid amplification testing (EarlyTest algorithm) or on HIV p24 antigen (Ag) detection via Architect (Architect algorithm) or Determine (Determine algorithm) as well as 1 algorithm that relies on HIV antibody testing alone (Antibody algorithm). The cost model used data on men who have sex with men (MSM) undergoing community-based AHI screening in San Diego, California. Incremental cost-effectiveness ratios (ICERs) per diagnosis of AHI were calculated for programs with HIV prevalence rates between 0.1% and 2.9%. Among MSM in San Diego, EarlyTest was cost-saving (ie, ICERs per AHI diagnosis less than $13,000) when compared with the 3 other algorithms. Cost analyses relative to regional HIV prevalence showed that EarlyTest was cost-effective (ie, ICERs less than $69,547) for similar populations of MSM with an HIV prevalence rate >0.4%; Architect was the second best alternative for HIV prevalence rates >0.6%. Identification of AHI by the dual EarlyTest screening algorithm is likely to be cost-effective not only among at-risk MSM in San Diego but also among similar populations of MSM with HIV prevalence rates >0.4%.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm, the genetic algorithm (GA), and the deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest that the best use of resources for the network design problem would be to improve the prior estimates of the flux uncertainties rather than to invest these resources in running a complex evolutionary optimisation algorithm. The authors recommend that, if time and computational resources allow, multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which could be ranked based on their utility and practicality.
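The incremental optimisation routine discussed above is essentially a greedy construction: at each step the candidate station that most improves the current network's score is added, until the target network size is reached. The sketch below shows only that loop with a generic scoring function; the Bayesian-inversion posterior-uncertainty computation is not reproduced, and uncertainty_reduction is an assumed placeholder.

```python
def incremental_network_design(candidates, n_stations, uncertainty_reduction):
    """Greedy (incremental) selection: add the station giving the largest score gain at each step."""
    network = []
    remaining = list(candidates)
    for _ in range(n_stations):
        best_site, best_score = None, -float("inf")
        for site in remaining:
            score = uncertainty_reduction(network + [site])   # e.g. posterior flux-uncertainty reduction
            if score > best_score:
                best_site, best_score = site, score
        network.append(best_site)
        remaining.remove(best_site)
    return network

# toy usage: sites on a line, score rewards spread-out networks (a stand-in for the real cost function)
sites = [0.0, 1.0, 2.0, 5.0, 9.0, 10.0]
score = lambda net: sum(abs(a - b) for i, a in enumerate(net) for b in net[i + 1:]) if len(net) > 1 else 0.0
print(incremental_network_design(sites, 3, score))
```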
Productivity losses in chronic obstructive pulmonary disease: a population-based survey.
Erdal, Marta; Johannessen, Ane; Askildsen, Jan Erik; Eagan, Tomas; Gulsvik, Amund; Grønseth, Rune
2014-01-01
We aimed to estimate incremental productivity losses (sick leave and disability) of spirometry-defined chronic obstructive pulmonary disease (COPD) in a population-based sample and in hospital-recruited patients with COPD. Furthermore, we examined predictors of productivity losses by multivariate analyses. We performed four quarterly telephone interviews of 53 and 107 population-based patients with COPD and controls, as well as 102 hospital-recruited patients with COPD below retirement age. Information was gathered regarding annual productivity loss, exacerbations of respiratory symptoms and comorbidities. Incremental productivity losses were estimated by multivariate quantile median regression according to the human capital approach, adjusting for sex, age, smoking habits, education and lung function. Main effect variables were COPD/control status, number of comorbidities and exacerbations of respiratory symptoms. Altogether 55%, 87% and 31% of population-based COPD cases, controls and hospital patients, respectively, had a paid job at baseline. The annual incremental productivity losses were 5.8 (95% CI 1.4 to 10.1) and 330.6 (95% CI 327.8 to 333.3) days, comparing population-recruited and hospital-recruited patients with COPD to controls, respectively. There were significantly higher productivity losses associated with female sex and less education. Additional adjustments for comorbidities, exacerbations and FEV1% predicted explained all productivity losses in the population-based sample, as well as nearly 40% of the productivity losses in hospital-recruited patients. Annual incremental productivity losses were more than 50 times higher in hospital-recruited patients with COPD than that of population-recruited patients with COPD. To ensure a precise estimation of societal burden, studies on patients with COPD should be population-based.
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team has developed a series of flight control concepts designed to demonstrate the benefits of a neural network-based adaptive controller. The objective of the team is to develop and flight-test control systems that use neural network technology to optimize the performance of the aircraft under nominal conditions as well as stabilize the aircraft under failure conditions. Failure conditions include locked or failed control surfaces as well as unforeseen damage that might occur to the aircraft in flight. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This set of open-loop flight tests was performed in preparation for a future phase of flights in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. An examination of flight data shows that addition of the flight-identified aerodynamic derivative increments into the simulation improved the pitch handling qualities of the aircraft.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid region-of-interest images is achieved with bi-cubic interpolation. The correlation matching with sub-pixel grid technique was previously reported in electronic speckle photography (Sjödahl 1994). This technique is applied here for solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original-pixel-grid correlation matching in a large field of view and the sub-pixel interpolated-image-grid correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics.
Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the targeted sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
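The two-step scheme described above first locates the integer-pixel shift by ordinary cross-correlation and then repeats the matching on an interpolated sub-pixel grid restricted to a small window around that first estimate. The following is a one-dimensional schematic of the idea, with cubic-spline interpolation standing in for the bi-cubic image interpolation; window sizes, upsampling factor and the synthetic scene are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def integer_shift(ref, tgt):
    """Step 1: integer-pixel shift from the peak of the cross-correlation."""
    corr = np.correlate(tgt - tgt.mean(), ref - ref.mean(), mode="full")
    return np.argmax(corr) - (len(ref) - 1)

def subpixel_shift(ref, tgt, upsample=5, window=4):
    """Step 2: repeat the matching on an upsampled grid around the integer estimate."""
    k0 = integer_shift(ref, tgt)
    x = np.arange(len(ref))
    ref_spline = CubicSpline(x, ref)
    best_shift, best_corr = float(k0), -np.inf
    for s in np.arange(k0 - window / 2, k0 + window / 2, 1.0 / upsample):
        xs = x - s
        valid = (xs >= 0) & (xs <= len(ref) - 1)
        template = ref_spline(xs[valid])          # reference shifted to the candidate sub-pixel offset
        corr = np.corrcoef(template, tgt[valid])[0, 1]
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return best_shift

# toy scene: a smooth signal shifted by a fractional amount
x = np.arange(128)
scene = np.sin(0.2 * x) + 0.5 * np.sin(0.57 * x)
shifted = CubicSpline(x, scene)(np.clip(x - 3.4, 0, 127))
print(subpixel_shift(scene, shifted))   # should be close to 3.4
```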
Derivative Analysis of AVIRIS Hyperspectral Data for the Detection of Plant Stress
NASA Technical Reports Server (NTRS)
Estep, Lee; Berglund, Judith
2001-01-01
A remote sensing campaign was conducted over a U.S. Department of Agriculture test site at Shelton, Nebraska. The test field was set off in blocks that were differentially treated with nitrogen. Four replicates of 0-kg/ha to 200-kg/ha, in 50-kg/ha increments, were present. Low-altitude AVIRIS hyperspectral data were collected over the site in 224 spectral bands. Simultaneously, ground data were collected to support the airborne imagery. In an effort to evaluate published, derivative-based algorithms for the detection of plant stress, different derivative-based approaches were applied to the collected AVIRIS image cube. The results indicate that, given good quality hyperspectral imagery, derivative techniques compare favorably with simple, well known band ratio algorithms for detection of plant stress.
Hybrid intelligent optimization methods for engineering problems
NASA Astrophysics Data System (ADS)
Pehlivanoglu, Yasin Volkan
The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Also, mathematical modeling of natural phenomena is mostly based on differentials. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of yield is not a simple one and is mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular non-gradient-based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodologies. For example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as premature convergence, low accuracy, or large computational time. The premature convergence is sometimes inevitable due to the lack of diversity. As the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and quantification studies, we developed new mutation strategies and operators to provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. They were applied to different aeronautical engineering problems in order to study the efficiency of these new approaches. These implementations were: applications to selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning problems of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, a 3D radar cross section minimization problem for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, we observed that the new algorithms outperform the current popular algorithms. The principal role of this multi-frequency approach was to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provided local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduced random and controlled diversity. Since they are still population-based techniques, these methods were as robust as the plain GA or PSO algorithms.
Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO were efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Cheng, Jiahao
2018-02-01
Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation-propagation have been recently developed for studying complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve the simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
A Geostatistical Scaling Approach for the Generation of Non Gaussian Random Variables and Increments
NASA Astrophysics Data System (ADS)
Guadagnini, Alberto; Neuman, Shlomo P.; Riva, Monica; Panzeri, Marco
2016-04-01
We address manifestations of non-Gaussian statistical scaling displayed by many variables, Y, and their (spatial or temporal) increments. Evidence of such behavior includes symmetry of increment distributions at all separation distances (or lags) with sharp peaks and heavy tails which tend to decay asymptotically as lag increases. Variables reported to exhibit such distributions include quantities of direct relevance to hydrogeological sciences, e.g. porosity, log permeability, electrical resistivity, soil and sediment texture, sediment transport rate, rainfall, measured and simulated turbulent fluid velocity, and other. No model known to us captures all of the documented statistical scaling behaviors in a unique and consistent manner. We recently proposed a generalized sub-Gaussian model (GSG) which reconciles within a unique theoretical framework the probability distributions of a target variable and its increments. We presented an algorithm to generate unconditional random realizations of statistically isotropic or anisotropic GSG functions and illustrated it in two dimensions. In this context, we demonstrated the feasibility of estimating all key parameters of a GSG model underlying a single realization of Y by analyzing jointly spatial moments of Y data and corresponding increments. Here, we extend our GSG model to account for noisy measurements of Y at a discrete set of points in space (or time), present an algorithm to generate conditional realizations of corresponding isotropic or anisotropic random field, and explore them on one- and two-dimensional synthetic test cases.
NASA Astrophysics Data System (ADS)
He, Jianbin; Yu, Simin; Cai, Jianping
2016-12-01
Lyapunov exponent is an important index for describing chaotic systems behavior, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to eigenvalue method, the more accurate calculations of Lyapunov exponent can be obtained with the increment of iterations, and the limits also exist. However, due to the finite precision of computer and other reasons, the results will be numeric overflow, unrecognized, or inaccurate, which can be stated as follows: (1) The iterations cannot be too large, otherwise, the simulation result will appear as an error message of NaN or Inf; (2) If the error message of NaN or Inf does not appear, then with the increment of iterations, all Lyapunov exponents will get close to the largest Lyapunov exponent, which leads to inaccurate calculation results; (3) From the viewpoint of numerical calculation, obviously, if the iterations are too small, then the results are also inaccurate. Based on the analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms via QR orthogonal decomposition and SVD orthogonal decomposition approaches so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
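A standard way to keep this calculation well conditioned, in line with the QR-based improvement discussed above, is to re-orthonormalize the propagated tangent vectors with a QR decomposition at every iteration and accumulate the logarithms of the diagonal of R. Below is a minimal sketch for a discrete-time map, using the Hénon map as an example; the map, parameters and iteration counts are illustrative, and the paper's specific improved algorithms are not reproduced.

```python
import numpy as np

def lyapunov_qr(jacobian, step, x0, n_iter=5000, n_transient=100):
    """Lyapunov exponents of a discrete map via repeated QR re-orthonormalization."""
    x = np.array(x0, dtype=float)
    for _ in range(n_transient):              # discard the transient
        x = step(x)
    Q = np.eye(x.size)
    log_r = np.zeros(x.size)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x) @ Q)  # propagate and re-orthonormalize tangent vectors
        log_r += np.log(np.abs(np.diag(R)))   # accumulate growth along orthogonal directions
        x = step(x)
    return log_r / n_iter

# Hénon map example: x' = 1 - a*x^2 + y, y' = b*x
a, b = 1.4, 0.3
step = lambda v: np.array([1 - a * v[0] ** 2 + v[1], b * v[0]])
jac = lambda v: np.array([[-2 * a * v[0], 1.0], [b, 0.0]])
print(lyapunov_qr(jac, step, [0.1, 0.1]))   # roughly (0.42, -1.62) for these parameters
```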
2010-06-01
Sampling (MIS)? • Technique of combining many increments of soil from a number of points within exposure area • Developed by Enviro Stat (Trademarked...Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous...into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.
Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-10-21
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
Registering Ground and Satellite Imagery for Visual Localization
2012-08-01
reckoning, inertial, stereo, light detection and ranging (LIDAR), cellular radio, and visual. As no sensor or algorithm provides perfect localization in...by metric localization approaches to confine the region of a map that needs to be searched. Simultaneous Localization and Mapping (SLAM) (5, 6), using...estimate the metric location of the camera. Se et al. (7) use SIFT features for both appearance-based global localization and incremental 3D SLAM. Johns and
Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O
2014-09-01
The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years and body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for the V̇O2 (step = 1.7 [0.6] L·min⁻¹ and ramp = 1.5 [0.4] L·min⁻¹) and heart rate (HR) (step = 133 [21] b·min⁻¹ and ramp = 124 [15] b·min⁻¹) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggested that the protocol selection, but not the regression-based algorithm, can affect the assessment of the V̇O2 and HR at the AT.
Algorithms For Integrating Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms have been developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
Ultra-high resolution computed tomography imaging
Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.
2002-01-01
A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
NASA Astrophysics Data System (ADS)
Lei, Meizhen; Wang, Liqiang
2018-01-01
The Halbach-type linear oscillatory motor (HT-LOM) is multi-variable, highly coupled, nonlinear and uncertain, and difficult to control satisfactorily with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism and the adaptive algorithm. The integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and stroke command. The simulation results indicate that the proposed control scheme can achieve satisfactory stroke tracking performance and is robust with respect to parameter variations and external disturbances.
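For readers unfamiliar with the incremental (velocity-form) PID update that the IAFC builds on, the toy sketch below may help. It is not the authors' controller: the fuzzy inference and online adaptation are replaced by a crude error-dependent scale factor, and all gains and limits are made-up values:

```python
class IncrementalPID:
    """Velocity-form (incremental) PID: u_k = u_{k-1} + du_k.
    A toy scale factor stands in for the fuzzy scale factor / adaptive tuning
    described in the abstract; it shrinks the gains when the error is small."""

    def __init__(self, kp, ki, kd, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.u = 0.0
        self.e1 = 0.0   # e_{k-1}
        self.e2 = 0.0   # e_{k-2}

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        scale = min(1.0, abs(e))            # placeholder for the fuzzy/adaptive scale factor
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2)) * scale
        self.u = max(self.u_min, min(self.u_max, self.u + du))
        self.e2, self.e1 = self.e1, e
        return self.u
```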
Improved pulse laser ranging algorithm based on high speed sampling
NASA Astrophysics Data System (ADS)
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. Firstly, theoretical simulation models are built and analyzed, including the laser emission and the pulse laser ranging algorithm. An improved pulse ranging algorithm is developed that combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip as a fusion of the matched filter algorithm and the CFD algorithm. Finally, a laser ranging experiment is carried out on the hardware system to compare the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm alone. The test results demonstrate that the laser ranging hardware system realizes high speed processing and high speed sampling data transmission. The analysis shows that the improved algorithm achieves 0.3 m ranging precision, which meets the expected performance and is consistent with the theoretical simulation.
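As a rough illustration of how a matched filter and CFD can be fused in software (the paper's FPGA implementation is in Verilog and is not reproduced here), the following Python sketch correlates the received waveform with the pulse template and then locates the CFD zero crossing near the correlation peak; the delay, attenuation factor and interpolation are illustrative choices:

```python
import numpy as np

def matched_filter_cfd_toa(rx, template, fs, delay_samples=4, attenuation=0.5):
    """Estimate pulse time-of-arrival: matched filter for SNR gain, then
    constant fraction discrimination (CFD) on the filtered waveform."""
    # Matched filter = correlation with the time-reversed pulse template.
    mf = np.convolve(rx, template[::-1], mode="same")
    # CFD signal: attenuated copy minus a delayed copy; its zero crossing near
    # the peak is (ideally) independent of pulse amplitude.
    cfd = attenuation * mf - np.roll(mf, delay_samples)
    peak = int(np.argmax(mf))
    lo = max(peak - 2 * delay_samples, 0)
    hi = min(peak + 2 * delay_samples, len(cfd) - 1)
    for i in range(lo, hi):
        if cfd[i] == 0.0 or (cfd[i] > 0) != (cfd[i + 1] > 0):      # sign change
            frac = abs(cfd[i]) / (abs(cfd[i]) + abs(cfd[i + 1]))   # linear interpolation
            return (i + frac) / fs
    return peak / fs   # fall back to the matched-filter peak
```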
Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples and the sign of the FO is determined by an FFT-based interpolation discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on the argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
Ghorbani, Nima; Watson, P J
2005-06-01
This study examined the incremental validity of Hardiness scales in a sample of Iranian managers. Along with measures of the Five Factor Model and of Organizational and Psychological Adjustment, Hardiness scales were administered to 159 male managers (M age = 39.9, SD = 7.5) who had worked in their organizations for 7.9 yr. (SD=5.4). Hardiness predicted greater Job Satisfaction, higher Organization-based Self-esteem, and perceptions of the work environment as being less stressful and constraining. Hardiness also correlated positively with Assertiveness, Emotional Stability, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness and negatively with Depression, Anxiety, Perceived Stress, Chance External Control, and a Powerful Others External Control. Evidence of incremental validity was obtained when the Hardiness scales supplemented the Five Factor Model in predicting organizational and psychological adjustment. These data documented the incremental validity of the Hardiness scales in a non-Western sample and thus confirmed once again that Hardiness has a relevance that extends beyond the culture in which it was developed.
Huerga, Helena; Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline
2017-01-01
Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). Prospective observational cohort of ambulatory (either severely ill or CD4<200cells/μl or with Body Mass Index<17Kg/m2) and hospitalized symptomatic HIV-positive adults in Kenya. Incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Among 474 patients included, 44.1% were severely ill, 69.6% had CD4<200cells/μl, 59.9% had initiated ART, 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4-55.6) to 84.0% (95%CI:77.3-89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1-69.8) to 82.1% (95%CI:75.1-87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8-81.0) to 87.8% (95%CI:81.6-92.5), when using clinical signs and Xpert. LAM positive patients had an increased risk of 2-months mortality (aOR:2.7; 95%CI:1.5-4.9). LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4<200cells/μl) or hospitalized. LAM allows same day treatment initiation in patients at higher risk of death and in those not able to produce sputum.
NASA Technical Reports Server (NTRS)
Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.
2004-01-01
The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and the three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight and Mars Orbit Sample Retrieval for the re-flight mission.
A Framework and Toolkit for the Construction of Multimodal Learning Interfaces
1998-04-29
human communication modalities in the context of a broad class of applications, specifically those that support state manipulation via parameterized actions. The multimodal semantic model is also the basis for a flexible, domain independent, incrementally trainable multimodal interpretation algorithm based on a connectionist network. The second major contribution is an application framework consisting of reusable components and a modular, distributed system architecture. Multimodal application developers can assemble the components in the framework into a new application,
Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.
Kervrann, C; Legland, D; Pardini, L
2004-06-01
Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
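The simpler parametric baseline described above (fit a decaying function to the per-section mean intensities, then rescale each section) can be sketched in a few lines; the robust reweighting step below is only a crude stand-in for the paper's robust estimation, and the exponential decay model and constants are assumptions:

```python
import numpy as np

def correct_depth_attenuation(stack, n_irls=5):
    """Correct depth attenuation in a CLSM stack (z, y, x).
    Fits I(z) ~ a * exp(-b z) to the per-section mean intensities with a few
    iteratively reweighted least-squares passes (down-weighting outlier sections),
    then rescales every section back to the z = 0 level."""
    z = np.arange(stack.shape[0], dtype=float)
    means = stack.reshape(stack.shape[0], -1).mean(axis=1)
    logm = np.log(np.maximum(means, 1e-12))
    w = np.ones_like(z)
    for _ in range(n_irls):
        # weighted linear fit of log-mean vs depth: logm ~ log(a) - b z
        A = np.vstack([np.ones_like(z), -z]).T
        coef, *_ = np.linalg.lstsq(A * w[:, None], logm * w, rcond=None)
        log_a, b = coef
        resid = logm - (log_a - b * z)
        s = np.median(np.abs(resid)) + 1e-12
        w = 1.0 / (1.0 + (resid / (3.0 * s)) ** 2)   # down-weight deviating sections
    decay = np.exp(-b * z)
    return stack / decay[:, None, None]
```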
NASA Astrophysics Data System (ADS)
Imada, Keita; Nakamura, Katsuhiko
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called “bridging”, based on bottom-up parsing of positive samples and a search over rule sets. The sizes of the rule sets and the computation time depend on the search strategies. In addition to the global search for synthesizing minimal rule sets and the serial search, another method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Abdelmoula, Nouha; Harthong, Barthélémy; Imbault, Didier; Dorémus, Pierre
2017-12-01
The multi-particle finite element method involving assemblies of meshed particles interacting through finite-element contact conditions is adopted to study the plastic flow of a granular material with highly deformable elastic-plastic grains. In particular, it is investigated whether the flow rule postulate applies for such materials. Using a spherical stress probing method, the influence of incremental stress on plastic strain increment vectors was assessed for numerical samples compacted along two different loading paths up to different values of relative density. Results show that the numerical samples studied behave reasonably well according to an associated flow rule, except in the vicinity of the loading point where the influence of the stress increment proved to be very significant. A plausible explanation for the non-uniqueness of the direction of plastic flow is proposed, based on the idea that the resistance of the numerical sample to plastic straining can vary by an order of magnitude depending on the direction of the accumulated stress. The above-mentioned dependency of the direction of plastic flow on the direction of the stress increment was related to the difference in strength between shearing and normal stressing at the scale of contact surfaces between particles.
Cost-Effectiveness of Automated Digital Microscopy for Diagnosis of Active Tuberculosis.
Jha, Swati; Ismail, Nazir; Clark, David; Lewis, James J; Omar, Shaheed; Dreyer, Andries; Chihota, Violet; Churchyard, Gavin; Dowdy, David W
2016-01-01
Automated digital microscopy has the potential to improve the diagnosis of tuberculosis (TB), particularly in settings where molecular testing is too expensive to perform routinely. The cost-effectiveness of TB diagnostic algorithms using automated digital microscopy remains uncertain. Using data from a demonstration study of an automated digital microscopy system (TBDx, Applied Visual Systems, Inc.), we performed an economic evaluation of TB diagnosis in South Africa from the health system perspective. The primary outcome was the incremental cost per new TB diagnosis made. We considered costs and effectiveness of different algorithms for automated digital microscopy, including as a stand-alone test and with confirmation of positive results with Xpert MTB/RIF ('Xpert', Cepheid, Inc.). Results were compared against both manual microscopy and universal Xpert testing. In settings willing to pay $2000 per incremental TB diagnosis, universal Xpert was the preferred strategy. However, where resources were not sufficient to support universal Xpert, and a testing volume of at least 30 specimens per day could be ensured, automated digital microscopy with Xpert confirmation of low-positive results could facilitate the diagnosis of 79-84% of all Xpert-positive TB cases, at 50-60% of the total cost. The cost-effectiveness of this strategy was $1280 per incremental TB diagnosis (95% uncertainty range, UR: $340-$3440) in the base case, but improved under conditions likely reflective of many settings in sub-Saharan Africa: $677 per diagnosis (95% UR: $450-$935) when sensitivity of manual smear microscopy was lowered to 0.5, and $956 per diagnosis (95% UR: $40-$2910) when the prevalence of multidrug-resistant TB was lowered to 1%. Although universal Xpert testing is the preferred algorithm for TB diagnosis when resources are sufficient, automated digital microscopy can identify the majority of cases and halve the cost of diagnosis and treatment when resources are more scarce and multidrug-resistant TB is not common.
Generation of referring expressions: assessing the Incremental Algorithm.
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-07-01
A substantial amount of recent work in natural language generation has focused on the generation of ''one-shot'' referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We test this hypothesis by eliciting referring expressions from human subjects and computing the similarity between the expressions elicited and the ones generated by algorithms. It turns out that the success of the IA depends substantially on the ''preference order'' (PO) employed by the IA, particularly in complex domains. While some POs cause the IA to produce referring expressions that are very similar to expressions produced by human subjects, others cause the IA to perform worse than its main competitors; moreover, it turns out to be difficult to predict the success of a PO on the basis of existing psycholinguistic findings or frequencies in corpora. We also examine the computational complexity of the algorithms in question and argue that there are no compelling reasons for preferring the IA over some of its main competitors on these grounds. We conclude that future research on the generation of referring expressions should explore alternatives to the IA, focusing on algorithms, inspired by the Greedy Algorithm, which do not work with a fixed PO. Copyright © 2011 Cognitive Science Society, Inc.
A Novel Method for Reconstructing Broken Contour Lines Extracted from Scanned Topographic Maps
NASA Astrophysics Data System (ADS)
Wang, Feng; Liu, Pingzhi; Yang, Yun; Wei, Haiping; An, Xiaoya
2018-05-01
It is known that after segmentation and morphological operations on scanned topographic maps, gaps occur in contour lines. It is also well known that filling these gaps and reconstructing contour lines with high accuracy and completeness is not an easy problem. In this paper, a novel method is proposed, dedicated to automatically or semi-automatically filling gaps and reconstructing broken contour lines in binary images. The key part, the auto-matching and reconnecting of end points, is discussed in depth after introducing the reconstruction procedure, in which some key algorithms and mechanisms are presented and realized, including a multiple incremental backward trace to obtain the weighted average direction angle of end points, a maximum constraint angle control mechanism based on multiple gradient ranks, the combination of weighted Euclidean distance and deviation angle to determine the optimum matching end point, bidirectional parabola control, etc. Lastly, experimental comparisons on typical samples are conducted between the proposed method and another representative method; the results indicate that the former achieves higher accuracy and completeness, and better stability and applicability.
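A toy version of the end-point matching score, combining a weighted Euclidean distance with the deviation angle under a maximum-angle constraint, might look as follows; the weights, the angle limit and the way direction angles are obtained are assumptions, not the paper's exact formulation:

```python
import math

def best_matching_endpoint(p, p_dir, candidates, w_dist=1.0, w_angle=2.0,
                           max_angle=math.radians(60)):
    """Pick the candidate end point that best continues the broken contour at p.

    p          -- (x, y) of the current end point
    p_dir      -- direction angle (radians) at p, e.g. the weighted average
                  direction from a multi-step backward trace
    candidates -- list of (x, y, dir) tuples for opposite end points
    Returns the best candidate, or None if every candidate violates the angle limit.
    """
    best, best_cost = None, float("inf")
    for (x, y, d) in candidates:
        to_cand = math.atan2(y - p[1], x - p[0])
        dev = abs((to_cand - p_dir + math.pi) % (2 * math.pi) - math.pi)  # wrap to [0, pi]
        if dev > max_angle:                      # maximum constraint angle control
            continue
        dist = math.hypot(x - p[0], y - p[1])
        cost = w_dist * dist + w_angle * dev     # weighted Euclidean distance + deviation angle
        if cost < best_cost:
            best, best_cost = (x, y, d), cost
    return best
```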
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power and photovoltaic power, has been widely adopted. New energy sources are connected to the distribution network in the form of distributed generation and consumed by local loads. However, as the scale of distributed generation connected to the network grows, the optimization of its power output becomes more and more important and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called the complex power increment; the essence of this method is the analysis of the power grid under steady power flow. After analyzing the results we can obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
An Expert System toward Building an Earth Science Knowledge Graph
NASA Astrophysics Data System (ADS)
Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.
2017-12-01
In this ongoing work, we aim to build foundations of Cognitive Computing for Earth Science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing Knowledge Graphs for Earth Science (KG4ES) using existing corpus of journal papers and reports. One of the key challenges in any machine learning, especially deep learning applications, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES, based on ever evolving training data. We also adopt the evaluation instrument based on common research methodologies used in Earth science research, especially in Atmospheric Science. Second, we have developed an algorithm to infer new knowledge that can exploit the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program designed for computing angular momentum biases for reaction wheels used for providing spacecraft pointing in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission to prevent operation of reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization problem in which the maximum wheel speed limit is a hard constraint and a cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantities of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
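A minimal sketch of the same formulation, hard speed limit as a penalty, a cost that grows below the low-speed threshold, Nelder-Mead as the search routine, is shown below using SciPy; the wheel momentum profiles, limits and bias values are made-up toy data, not Cassini parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical wheel-rate history as a function of the initial bias vector:
# with angular momentum conserved, rate(t) = bias + h(t), where h(t) is a
# precomputed per-wheel momentum profile (toy data here).
t = np.linspace(0.0, 1.0, 500)
h = np.stack([800 * np.sin(2 * np.pi * t),
              600 * np.cos(2 * np.pi * t),
              400 * np.sin(4 * np.pi * t)])

W_MAX, W_LOW = 2000.0, 300.0   # rpm: hard upper limit, undesirable low-speed threshold

def cost(bias):
    rates = np.abs(bias[:, None] + h)
    if rates.max() > W_MAX:                      # hard constraint -> huge penalty
        return 1e9 + rates.max()
    low = np.clip(W_LOW - rates, 0.0, None)      # cost grows as speed drops below threshold
    return float(np.sum(low ** 2))

res = minimize(cost, x0=np.array([1000.0, 1000.0, 1000.0]), method="Nelder-Mead")
print(res.x, res.fun)
```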
Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Rapid changes of market demands and pressures of competition require manufacturers to maintain highly flexible manufacturing systems to cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper to generate suitable process plans for the target products dynamically and in real time, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in the previous paper, and suitable process plans are searched for and generated to cope with both dynamic changes of the product specifications and disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols in order to generate suitable process plans in the dynamic manufacturing environment.
Product Quality Modelling Based on Incremental Support Vector Machine
NASA Astrophysics Data System (ADS)
Wang, J.; Zhang, W.; Qin, B.; Shi, W.
2012-05-01
Incremental support vector machine (ISVM) is a new learning method developed in recent years based on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds a specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but can also preserve the important samples. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
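A simplified sketch of one MISVM-style incremental step using scikit-learn's linear SVC is given below; the KKT-violation test, the distance threshold and the retraining set are written as assumptions based on the description above, and labels are assumed to be in {-1, +1}:

```python
import numpy as np
from sklearn.svm import SVC

def misvm_update(model, X_sv, y_sv, X_new, y_new, max_dist=1.0):
    """One simplified MISVM-style incremental step (linear kernel, labels in {-1, +1}).

    1. Margin vectors = new samples violating the current KKT conditions (y * f(x) < 1).
    2. Margin vectors whose geometric distance to the hyperplane exceeds `max_dist`
       are discarded as likely noise.
    3. The old support vectors plus the remaining margin vectors retrain the SVM.
    """
    f = model.decision_function(X_new)
    w_norm = np.linalg.norm(model.coef_)             # valid for the linear kernel only
    violates = y_new * f < 1.0                       # KKT-violating samples
    dist = np.abs(f) / w_norm                        # distance to the decision hyperplane
    keep = violates & (dist <= max_dist)
    X_train = np.vstack([X_sv, X_new[keep]])
    y_train = np.concatenate([y_sv, y_new[keep]])
    new_model = SVC(kernel="linear", C=model.C).fit(X_train, y_train)
    return new_model, new_model.support_vectors_, y_train[new_model.support_]
```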
2013-05-28
those of the support vector machine and relevance vector machine, and the model runs more quickly than the other algorithms. When one class occurs...incremental support vector machine algorithm for online learning when fewer than 50 data points are available. (a) Papers published in peer-reviewed journals...learning environments, where data processing occurs one observation at a time and the classification algorithm improves over time with new
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Shawn
This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction via triangulation from high dimensional point cloud data. The code was based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive final optimum compared with the up-to-date algorithms.
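The kernel distance-based sparsification test can be written compactly: a new sample enters the dictionary only when its kernel-space distance to the closest dictionary element exceeds a threshold. A minimal sketch (the RBF kernel, the threshold value and the data layout are illustrative assumptions):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def maybe_add_to_dictionary(dictionary, x, threshold=0.1, kernel=rbf):
    """Kernel distance-based online sparsification (in the spirit of OSKTD):
    a new sample joins the dictionary only if its kernel-space distance to the
    nearest dictionary element exceeds `threshold`.

    d^2(x, d_i) = k(x, x) - 2 k(x, d_i) + k(d_i, d_i)
    """
    if not dictionary:
        dictionary.append(x)
        return True
    d2_min = min(kernel(x, x) - 2.0 * kernel(x, d) + kernel(d, d) for d in dictionary)
    if d2_min > threshold:
        dictionary.append(x)
        return True
    return False
```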
Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy
Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao
2016-01-01
Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
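The greedy, first-order incremental flavor of the search can be illustrated with a small mRMR-style stand-in; the exact relevance and significance measures of MRMSn are not reproduced here, and mutual information estimates from scikit-learn serve only as an approximation:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_regulators(expr, target_idx, k=5):
    """Greedy first-order incremental search for the regulators of one gene.

    expr: (samples, genes) expression matrix. At each step the candidate with the
    highest relevance to the target minus its mean redundancy with the already
    selected regulators is added (an mRMR-style surrogate for the paper's
    relevance/significance criterion)."""
    y = expr[:, target_idx]
    candidates = [g for g in range(expr.shape[1]) if g != target_idx]
    relevance = dict(zip(candidates, mutual_info_regression(expr[:, candidates], y)))
    selected = []
    while len(selected) < k and candidates:
        best, best_score = None, -np.inf
        for g in candidates:
            if selected:
                redundancy = mutual_info_regression(expr[:, selected], expr[:, g]).mean()
            else:
                redundancy = 0.0
            score = relevance[g] - redundancy
            if score > best_score:
                best, best_score = g, score
        selected.append(best)
        candidates.remove(best)
    return selected
```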
NASA Astrophysics Data System (ADS)
Zheng, Yan
2015-03-01
The Internet of things (IoT), focusing on providing users with information exchange and intelligent control, has attracted much attention from researchers all over the world since the beginning of this century. The IoT consists of a large number of sensor nodes and data processing units, and its most important features can be summarized as energy confinement, efficient communication and high redundancy. As the number of sensor nodes increases, communication efficiency and the available communication bandwidth become bottlenecks. Much research work is based on cases in which the number of joins is small; however, this is not appropriate for the increasing multi-join queries across the whole Internet of things. To improve the communication efficiency between parallel units in the distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The storage information relations and the network communication cost are considered in this algorithm, and an optimized information exchange rule is established. The experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network. Therefore, the execution efficiency of multi-join queries between different nodes can be improved.
Measuring the self-similarity exponent in Lévy stable processes of financial time series
NASA Astrophysics Data System (ADS)
Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.
2013-11-01
Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially with short length time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long-memory in (fractional) Lévy stable motions. In this paper, we show empirically by Monte Carlo simulation that GM algorithms are able to calculate accurately the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially with a short length time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both GM2 and GHE algorithms are the most accurate to study financial series. In addition to that, we provide empirical evidence, based on the accuracy of GM algorithms to estimate the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
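For comparison purposes, the GHE estimator used in the benchmark fits the scaling of q-th order structure functions of the increments; a minimal version (this is the standard GHE recipe, not the GM algorithms themselves) is sketched below:

```python
import numpy as np

def generalized_hurst_exponent(x, q=1, tau_max=19):
    """Generalized Hurst exponent H(q) from the scaling of the q-th order
    structure function:  E|x(t+tau) - x(t)|^q  ~  tau^(q*H(q))."""
    x = np.asarray(x, dtype=float)
    taus = np.arange(1, tau_max + 1)
    sq = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(sq), 1)   # log-log linear fit
    return slope / q

# Sanity check: for a Brownian motion the self-similarity exponent is 0.5.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(100_000))
print(generalized_hurst_exponent(bm, q=1))   # approximately 0.5
```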
User-Adapted Recommendation of Content on Mobile Devices Using Bayesian Networks
NASA Astrophysics Data System (ADS)
Iwasaki, Hirotoshi; Mizuno, Nobuhiro; Hara, Kousuke; Motomura, Yoichi
Mobile devices, such as cellular phones and car navigation systems, are essential to daily life. People acquire necessary information and preferred content over communication networks anywhere, anytime. However, usability issues arise from the simplicity of user interfaces themselves. Thus, a recommendation of content that is adapted to a user's preference and situation will help the user select content. In this paper, we describe a method to realize such a system using Bayesian networks. This user-adapted mobile system is based on a user model that provides recommendation of content (i.e., restaurants, shops, and music that are suitable to the user and situation) and that learns incrementally based on accumulated usage history data. However, sufficient samples are not always guaranteed, since a user model would require combined dependency among users, situations, and contents. Therefore, we propose the LK method for modeling, which complements incomplete and insufficient samples using knowledge data, and CPT incremental learning for adaptation based on a small number of samples. In order to evaluate the methods proposed, we applied them to restaurant recommendations made on car navigation systems. The evaluation results confirmed that our model based on the LK method can be expected to provide better generalization performance than that of the conventional method. Furthermore, our system would require much less operation than current car navigation systems from the beginning of use. Our evaluation results also indicate that learning a user's individual preference through CPT incremental learning would be beneficial to many users, even with only a few samples. As a result, we have developed the technology of a system that becomes more adapted to a user the more it is used.
Risk markers for disappearance of pediatric Web resources
Hernández-Borges, Angel A.; Jiménez-Sosa, Alejandro; Torres-Álvarez de Arcaya, Maria L.; Macías-Cervi, Pablo; Gaspar-Guardado, Maria A.; Ruíz-Rabaza, Ana
2005-01-01
Objectives: The authors sought to find out whether certain Webometric indexes of a sample of pediatric Web resources, and some tests based on them, could be helpful predictors of their disappearance. Methods: The authors performed a retrospective study of a sample of 363 pediatric Websites and pages they had followed for 4 years. Main measurements included: number of resources that disappeared, number of inbound links and their annual increment, average daily visits to the resources in the sample, sample compliance with the quality criteria of 3 international organizations, and online time of the Web resources. Results: On average, 11% of the sample disappeared annually. However, 13% of these were available again at the end of follow up. Disappearing and surviving Websites did not show differences in the variables studied. However, surviving Web pages had a higher number of inbound links and higher annual increment in inbound links. Similarly, Web pages that survived showed higher compliance with recognized sets of quality criteria than those that disappeared. A subset of 14 quality criteria whose compliance accounted for 90% of the probability of online permanence was identified. Finally, a progressive increment of inbound links was found to be a marker of good prognosis, showing high specificity and positive predictive value (88% and 94%, respectively). Conclusions: The number of inbound links and annual increment of inbound links could be useful markers of the permanence probability for pediatric Web pages. Strategies that assure the Web editors' awareness of their Web resources' popularity could stimulate them to improve the quality of their Websites. PMID:16059427
Computing aggregate properties of preimages for 2D cellular automata
NASA Astrophysics Data System (ADS)
Beer, Randall D.
2017-11-01
Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm—incremental aggregation—that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.
Statistically Controlling for Confounding Constructs Is Harder than You Think
Westfall, Jacob; Yarkoni, Tal
2016-01-01
Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
Noise and complexity in human postural control: interpreting the different estimations of entropy.
Rhea, Christopher K; Silver, Tobin A; Hong, S Lee; Ryu, Joong Hyun; Studenka, Breanna E; Hughes, Charmayne M L; Haddad, Jeffrey M
2011-03-17
Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses.
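Of the three measures, Sample Entropy has the most compact definition: the negative log ratio of template matches of length m+1 to those of length m, within tolerance r and excluding self-matches. A minimal sketch (m, r and the Chebyshev distance follow the usual convention; this is not the authors' exact code):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy of a 1-D series: -ln(A/B), where B counts template matches
    of length m and A those of length m+1, within tolerance r (in units of the
    series' standard deviation), excluding self-matches."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x) - m                      # number of templates used for both lengths

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n)])
        count = 0
        for i in range(n - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev
            count += np.sum(dist <= tol)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```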
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm based on wave field separation, exploiting the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, it is not ideal and effective to denoise with this method alone. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with a Hausdorff dimension constraint, we propose a new method for seismic random noise attenuation. First of all, we apply the EMD algorithm to adaptively decompose the seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use a threshold correlation filtering process to separate the valid signal and the random noise effectively. Compared with the traditional EMD method, the results show that the new method of seismic random noise attenuation has a better suppression effect. Implementation: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high frequency noise, the IMF sets can be divided into three categories: the first category is the effective wave composition at the larger scales; the second category is the noise part at the smaller scales; the third category is the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, and an appropriate time window size, initial step and increment are selected to calculate the Hausdorff instantaneous dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On the basis of the previous steps, according to the dimension difference between the random noise and the effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 for each IMF component, to separate the residual noise. Using the IMF components after dimension filtering and the effective wave IMF components from the first selection for reconstruction, we obtain the de-noised result.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on the discrete Fourier transform measurement matrix is derived theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix, and the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels and the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, whereas the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize reconstruction denoising, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
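A minimal numerical sketch of the core reconstruction step, illuminating an object with rows of a discrete Fourier transform matrix and recovering it with the Moore-Penrose pseudo-inverse, is given below; the 1-D scene, noise level, and measurement count are assumptions for brevity and do not reproduce the authors' simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64        # number of object pixels (1-D object for brevity)
m = 64        # number of sampling measurements (reduce m to see the reconstruction degrade)

# Object: a simple 1-D scene (placeholder for the 2-D object in the paper).
x = np.zeros(n)
x[[10, 25, 26, 40]] = [1.0, 0.5, 0.8, 0.3]

# Presetting light field: m rows of the n x n discrete Fourier transform matrix.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
A = F[rng.choice(n, size=m, replace=False)]

# Bucket-detector measurements with a little detector noise, then pseudo-inverse recovery.
y = A @ x + 0.01 * rng.normal(size=m)
x_hat = np.real(np.linalg.pinv(A) @ y)

print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```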
The Incremental Multiresolution Matrix Factorization Algorithm
Ithapu, Vamsi K.; Kondor, Risi; Johnson, Sterling C.; Singh, Vikas
2017-01-01
Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct “global” factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision. PMID:29416293
Muhlbaier, Michael D; Topalis, Apostolos; Polikar, Robi
2009-01-01
We have previously introduced an incremental learning algorithm Learn(++), which learns novel information from consecutive data sets by generating an ensemble of classifiers with each data set, and combining them by weighted majority voting. However, Learn(++) suffers from an inherent "outvoting" problem when asked to learn a new class ω_new introduced by a subsequent data set, as earlier classifiers not trained on this class are guaranteed to misclassify ω_new instances. The collective votes of earlier classifiers, for an inevitably incorrect decision, then outweigh the votes of the new classifiers' correct decision on ω_new instances--until there are enough new classifiers to counteract the unfair outvoting. This forces Learn(++) to generate an unnecessarily large number of classifiers. This paper describes Learn(++).NC, specifically designed for efficient incremental learning of multiple new classes using significantly fewer classifiers. To do so, Learn(++).NC introduces dynamically weighted consult and vote (DW-CAV), a novel voting mechanism for combining classifiers: individual classifiers consult with each other to determine which ones are most qualified to classify a given instance, and decide how much weight, if any, each classifier's decision should carry. Experiments on real-world problems indicate that the new algorithm performs remarkably well with substantially fewer classifiers, not only as compared to its predecessor Learn(++), but also as compared to several other algorithms recently proposed for similar problems.
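DW-CAV itself negotiates instance-specific weights among the classifiers and is not reproduced here; the sketch below only shows the weighted-majority-voting backbone on which Learn(++)-style ensembles are built, with AdaBoost-style log-odds weights and random data chunks standing in for the consecutive data sets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_informative=6, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train one classifier per incremental "data set" (here: random chunks of the training data).
chunks = np.array_split(np.arange(len(X_tr)), 5)
ensemble, weights = [], []
for idx in chunks:
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr[idx], y_tr[idx])
    err = 1.0 - clf.score(X_tr[idx], y_tr[idx])
    ensemble.append(clf)
    weights.append(np.log((1.0 - err) / max(err, 1e-6)))   # AdaBoost-style log-odds weight

def weighted_majority_vote(X):
    votes = np.zeros((len(X), 3))
    for clf, w in zip(ensemble, weights):
        votes[np.arange(len(X)), clf.predict(X)] += w      # each classifier casts a weighted vote
    return votes.argmax(axis=1)

print("ensemble accuracy:", (weighted_majority_vote(X_te) == y_te).mean())
```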
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
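The experimental design, sub-sampling cohorts of increasing size and fitting several linear regression models on connectivity-like features, can be outlined as below. Synthetic features stand in for the HCP rsFC/rsFCS matrices, Pearson correlation between predicted and observed scores is used as the accuracy measure (a common choice, assumed here), scikit-learn's LinearSVR stands in for LSVR, and RVR is omitted because there is no standard scikit-learn implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
from sklearn.svm import LinearSVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_total, n_features = 700, 500
X = rng.normal(size=(n_total, n_features))                  # stand-in for rsFC features
beta = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
y = X @ beta + rng.normal(scale=2.0, size=n_total)          # stand-in behavioural score

models = {
    "OLS": LinearRegression(),
    "LASSO": Lasso(alpha=0.1),
    "Ridge": Ridge(alpha=1.0),
    "ElasticNet": ElasticNet(alpha=0.1, l1_ratio=0.5),
    "LSVR": LinearSVR(C=1.0, max_iter=10000),
}

for n in (20, 50, 100, 200, 400, 700):
    sub = rng.choice(n_total, size=n, replace=False)        # sub-sample a cohort of size n
    X_tr, X_te, y_tr, y_te = train_test_split(X[sub], y[sub], test_size=0.25, random_state=0)
    accs = {name: np.corrcoef(m.fit(X_tr, y_tr).predict(X_te), y_te)[0, 1]
            for name, m in models.items()}
    print(n, {k: round(v, 2) for k, v in accs.items()})
```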
The Development of a 3D LADAR Simulator Based on a Fast Target Impulse Response Generation Approach
NASA Astrophysics Data System (ADS)
Al-Temeemy, Ali Adnan
2017-09-01
A new laser detection and ranging (LADAR) simulator has been developed, using MATLAB and its graphical user interface, to simulate direct detection time-of-flight LADAR systems and to produce 3D simulated scanning images under a wide variety of conditions. This simulator models each stage from the laser source to data generation and can be considered an efficient simulation tool to use when developing LADAR systems and their data processing algorithms. The novel approach proposed for this simulator is to generate the actual target impulse response. This approach is fast and able to meet demanding scanning requirements without the loss of fidelity that usually accompanies increases in speed. This leads to a more efficient LADAR simulator and opens up the possibility of simulating LADAR beam propagation more accurately by using a large number of laser footprint samples. The approach is to select only the parts of the target that lie in the laser beam's angular field by mathematically deriving the required equations and calculating the target angular ranges. The performance of the new simulator has been evaluated under different scanning conditions, with the results showing significant increases in processing speed in comparison to conventional approaches, which are also implemented in this study as a point of comparison. The results also show the simulator's ability to simulate phenomena related to the scanning process, such as noise type, scanning resolution and laser beam width.
Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J
2009-03-01
A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, in both simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
Kawatkar, Aniket A; Jacobsen, Steven J; Levy, Gerald D; Medhekar, Swati S; Venkatasubramaniam, Kumarapuram V; Herrinton, Lisa J
2012-11-01
To quantify the incremental direct medical expenditure associated with rheumatoid arthritis (RA) in the US population from a payer's perspective. A probability-weighted sample of adult respondents from the Medical Expenditure Panel Survey (2008) was used to identify a cohort of patients with RA and compared to a control cohort without RA. Annual expenditure outcomes, including total expenditure and subgroups related to pharmacy, office-based visits, emergency department visits, hospital inpatient stays, and residual expenditures were estimated. Differences between the RA and control cohort were adjusted for sociodemographic factors, employment status, insurance coverage, health behavior, and health status using a generalized linear model with log link and gamma distribution. Statistical inferences on difference in expenditures between RA and non-RA controls were based on nonparametric cluster bootstrapping using percentiles. The adjusted average annual total expenditure of the RA cohort in 2008 US dollars (USD) was $13,012 (95% confidence interval [95% CI] $1,737-$47,081), while that of the control cohort was $4,950 (95% CI $567-$17,425). The incremental total expenditure of the RA patients as compared to non-RA controls was $2,085 (95% CI $250-$7,822). RA patients also had a significantly higher pharmacy expenditure of $5,825 (95% CI $446-$30,998) that was on average $1,380 (95% CI $94-$7,492) higher as compared to the controls. The summated total incremental expenditure of all RA patients in the US was $22.3 billion (2008 USD). RA exerts considerable incremental economic burden on US health care, which is primarily driven by the incremental pharmacy expenditure. Copyright © 2012 by the American College of Rheumatology.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
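The 'delta' or correction form referred to here solves A x = b by repeatedly forming a residual and solving an approximate system for the correction rather than for x itself. A generic sketch is below; the diagonal (Jacobi-like) approximate operator and the synthetic test matrix are illustrative stand-ins for the approximate-factorization operator of the flow solver.

```python
import numpy as np

def incremental_iterative_solve(A, b, M_solve, tol=1e-10, max_iter=500):
    """Solve A x = b in incremental ('delta'/correction) form:
    repeatedly solve M * dx = r for the correction dx, where M approximates A."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        r = b - A @ x                  # current residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        dx = M_solve(r)                # approximate solve for the correction
        x = x + dx                     # incremental update of the solution
    return x, max_iter

rng = np.random.default_rng(0)
n = 100
A = np.diag(rng.uniform(2.0, 3.0, n)) + rng.normal(scale=0.01, size=(n, n))  # diagonally dominant test matrix
b = rng.normal(size=n)

d = np.diag(A)
x, iters = incremental_iterative_solve(A, b, M_solve=lambda r: r / d)        # Jacobi-like approximate operator
print(iters, np.linalg.norm(A @ x - b))
```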
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.
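The six standard metrics listed above, together with a frequency-based group vote over their top hits, can be sketched as follows; random vectors stand in for the ATR FT-IR spectra, and the top-ten group-voting rule is a simplified reading of the scheme described in the paper.

```python
import numpy as np

def search_scores(query, library):
    """For each standard metric, return library indices sorted best-first."""
    d = library - query
    dq = np.diff(query)
    dl = np.diff(library, axis=1)
    metrics = {
        "dot_product":   -(library @ query),                 # larger is better, so negate
        "correlation":   -np.array([np.corrcoef(s, query)[0, 1] for s in library]),
        "abs_diff":       np.abs(d).sum(axis=1),
        "sqrt_abs_diff":  np.sqrt(np.abs(d)).sum(axis=1),
        "abs_diff_deriv": np.abs(dl - dq).sum(axis=1),
        "sq_diff_deriv":  ((dl - dq) ** 2).sum(axis=1),
    }
    return {name: np.argsort(score) for name, score in metrics.items()}

def group_vote(query, library, labels, top=10):
    """Count how often each category appears in the top hits of every standard
    metric, and return the most frequent category."""
    counts = {}
    for ranking in search_scores(query, library).values():
        for idx in ranking[:top]:
            counts[labels[idx]] = counts.get(labels[idx], 0) + 1
    return max(counts, key=counts.get)

rng = np.random.default_rng(0)
library = rng.random((40, 200))                    # placeholder spectra
labels = ["leaf", "stem", "seed coat", "hull"] * 10
print(group_vote(library[7] + 0.05 * rng.normal(size=200), library, labels))
```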
K-Nearest Neighbor Algorithm Optimization in Text Categorization
NASA Astrophysics Data System (ADS)
Chen, Shufeng
2018-01-01
K-Nearest Neighbor (KNN) classification is one of the simplest data mining methods. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has shortcomings such as a large amount of sample computation and a strong dependence on the capacity of the sample library. In this paper, a method of representative-sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity calculations. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
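A minimal sketch of the idea, shrinking the sample library to representative prototypes before the k-nearest-neighbour search, is given below; k-means centroids stand in for the CURE-based representative samples, and synthetic data replace the text corpus, so the numbers are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=50, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Replace each class by a small set of representative prototypes (k-means centroids
# stand in for the CURE-based representative samples of the paper).
protos, proto_labels = [], []
for c in np.unique(y_tr):
    km = KMeans(n_clusters=20, n_init=5, random_state=0).fit(X_tr[y_tr == c])
    protos.append(km.cluster_centers_)
    proto_labels.append(np.full(20, c))
protos = np.vstack(protos)
proto_labels = np.concatenate(proto_labels)

full_knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
quick_knn = KNeighborsClassifier(n_neighbors=5).fit(protos, proto_labels)
print("full library :", full_knn.score(X_te, y_te), f"({len(X_tr)} samples)")
print("prototypes   :", quick_knn.score(X_te, y_te), f"({len(protos)} samples)")
```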
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on p-thresholding technique for CS-MRI image reconstruction. The use of p-thresholding function promotes sparsity in the image which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
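The p-thresholding iteration is a small modification of ISTA in which the soft-thresholding (shrinkage) operator is replaced by a p-power shrinkage. The sketch below applies it to a toy 1-D sparse-recovery problem rather than to actual k-space MRI data; the particular p-shrinkage formula is one common definition and, like the parameter choices, is an assumption of this illustration.

```python
import numpy as np

def p_shrink(x, lam, p):
    """p-shrinkage operator (reduces to soft thresholding for p = 1)."""
    mag = np.abs(x)
    shrunk = np.maximum(mag - lam ** (2 - p) * np.maximum(mag, 1e-12) ** (p - 1), 0.0)
    return np.sign(x) * shrunk

def ista_p(A, y, lam=0.5, p=0.5, n_iter=500):
    """Iterative thresholding: x <- shrink_p(x + A^T (y - A x) / L, lam / L)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = p_shrink(x + (A.T @ (y - A @ x)) / L, lam / L, p)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)      # under-sampling measurement matrix
y = A @ x_true                                # noiseless measurements

x_hat = ista_p(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```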
Multirate sampled-data yaw-damper and modal suppression system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1990-01-01
A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.
Statistical modelling as an aid to the design of retail sampling plans for mycotoxins in food.
MacArthur, Roy; MacDonald, Susan; Brereton, Paul; Murray, Alistair
2006-01-01
A study has been carried out to assess appropriate statistical models for use in evaluating retail sampling plans for the determination of mycotoxins in food. A compound gamma model was found to be a suitable fit. A simulation model based on the compound gamma model was used to produce operating characteristic curves for a range of parameters relevant to retail sampling. The model was also used to estimate the minimum number of increments necessary to minimize the overall measurement uncertainty. Simulation results showed that measurements based on retail samples (for which the maximum number of increments is constrained by cost) may produce fit-for-purpose results for the measurement of ochratoxin A in dried fruit, but are unlikely to do so for the measurement of aflatoxin B1 in pistachio nuts. In order to produce a more accurate simulation, further work is required to determine the degree of heterogeneity associated with batches of food products. With appropriate parameterization in terms of physical and biological characteristics, the systems developed in this study could be applied to other analyte/matrix combinations.
The incremental impact of cardiac MRI on clinical decision-making.
Rajwani, Adil; Stewart, Michael J; Richardson, James D; Child, Nicholas M; Maredia, Neil
2016-01-01
Despite a significant expansion in the use of cardiac MRI (CMR), there is inadequate evaluation of its incremental impact on clinical decision-making over and above other well-established modalities. We sought to determine the incremental utility of CMR in routine practice. 629 consecutive CMR studies referred by 44 clinicians from 9 institutions were evaluated. Pre-defined algorithms were used to determine the incremental influence on diagnostic thinking, influence on clinical management and thus the overall clinical utility. Studies were also subdivided and evaluated according to the indication for CMR. CMR provided incremental information to the clinician in 85% of cases, with incremental influence on diagnostic thinking in 85% of cases and incremental impact on management in 42% of cases. The overall incremental utility of CMR exceeded 90% in 7 out of the 13 indications, whereas in settings such as the evaluation of unexplained ventricular arrhythmia or mild left ventricular systolic dysfunction, this was <50%. CMR was frequently able to inform and influence decision-making in routine clinical practice, even with analyses that accepted only incremental clinical information and excluded a redundant duplication of imaging. Significant variations in yield were noted according to the indication for CMR. These data support a wider integration of CMR services into cardiac imaging departments. These data are the first to objectively evaluate the incremental value of a UK CMR service in clinical decision-making. Such data are essential when seeking justification for a CMR service.
Finding Minimum-Power Broadcast Trees for Wireless Networks
NASA Technical Reports Server (NTRS)
Arabshahi, Payman; Gray, Andrew; Das, Arindam; El-Sharkawi, Mohamed; Marks, Robert, II
2004-01-01
Some algorithms have been devised for use in a method of constructing tree graphs that represent connections among the nodes of a wireless communication network. These algorithms provide for determining the viability of any given candidate connection tree and for generating an initial set of viable trees that can be used in any of a variety of search algorithms (e.g., a genetic algorithm) to find a tree that enables the network to broadcast from a source node to all other nodes while consuming the minimum amount of total power. The method yields solutions better than those of a prior algorithm known as the broadcast incremental power algorithm, albeit at a slightly greater computational cost.
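In the wireless broadcast model underlying this work, each node transmits once at the power needed to reach its farthest child (power growing with distance to an exponent α), so a candidate tree's cost and viability are easy to evaluate. The sketch below shows such an evaluation; the node positions, α = 2, and the two example trees are illustrative, and the actual tree-generation and search algorithms are not reproduced.

```python
import numpy as np

def tree_total_power(positions, parent, alpha=2.0):
    """Total broadcast power of a tree given as a parent[] array (parent[root] = -1).
    Each node transmits once, at the power needed to reach its farthest child."""
    need = np.zeros(len(parent))
    for child, par in enumerate(parent):
        if par < 0:
            continue
        d = np.linalg.norm(positions[child] - positions[par])
        need[par] = max(need[par], d ** alpha)        # wireless multicast advantage
    return need.sum()

def is_viable(parent, root=0):
    """A candidate tree is viable if every node reaches the root by following
    parent links, with no cycles and no second root."""
    for node in range(len(parent)):
        seen, cur = set(), node
        while cur != root:
            if cur in seen or parent[cur] < 0:
                return False
            seen.add(cur)
            cur = parent[cur]
    return True

rng = np.random.default_rng(0)
pos = rng.random((6, 2))                 # 6 nodes in the unit square, node 0 = source
star = [-1, 0, 0, 0, 0, 0]               # source transmits directly to everyone
chain = [-1, 0, 1, 2, 3, 4]               # relay chain
for tree in (star, chain):
    print(is_viable(tree), round(tree_total_power(pos, tree), 3))
```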
Investigating the incremental validity of cognitive variables in early mathematics screening.
Clarke, Ben; Shanley, Lina; Kosty, Derek; Baker, Scott K; Cary, Mari Strand; Fien, Hank; Smolkowski, Keith
2018-03-26
The purpose of this study was to investigate the incremental validity of a set of domain general cognitive measures added to a traditional screening battery of early numeracy measures. The sample consisted of 458 kindergarten students of whom 285 were designated as severely at-risk for mathematics difficulty. Hierarchical multiple regression results indicated that Wechsler Abbreviated Scales of Intelligence (WASI) Matrix Reasoning and Vocabulary subtests, and Digit Span Forward and Backward measures explained a small, but unique portion of the variance in kindergarten students' mathematics performance on the Test of Early Mathematics Ability-Third Edition (TEMA-3) when controlling for Early Numeracy Curriculum Based Measurement (EN-CBM) screening measures (R² change = .01). Furthermore, the incremental validity of the domain general cognitive measures was relatively stronger for the severely at-risk sample. We discuss results from the study in light of instructional decision-making and note the findings do not justify adding domain general cognitive assessments to mathematics screening batteries. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Cost-effectiveness of WHO-Recommended Algorithms for TB Case Finding at Ethiopian HIV Clinics.
Adelman, Max W; McFarland, Deborah A; Tsegaye, Mulugeta; Aseffa, Abraham; Kempker, Russell R; Blumberg, Henry M
2018-01-01
The World Health Organization (WHO) recommends active tuberculosis (TB) case finding and a rapid molecular diagnostic test (Xpert MTB/RIF) to detect TB among people living with HIV (PLHIV) in high-burden settings. Information on the cost-effectiveness of these recommended strategies is crucial for their implementation. We conducted a model-based cost-effectiveness analysis comparing 2 algorithms for TB screening and diagnosis at Ethiopian HIV clinics: (1) WHO-recommended symptom screen combined with Xpert for PLHIV with a positive symptom screen and (2) current recommended practice algorithm (CRPA; based on symptom screening, smear microscopy, and clinical TB diagnosis). Our primary outcome was US$ per disability-adjusted life-year (DALY) averted. Secondary outcomes were additional true-positive diagnoses, and false-negative and false-positive diagnoses averted. Compared with CRPA, combining a WHO-recommended symptom screen with Xpert was highly cost-effective (incremental cost of $5 per DALY averted). Among a cohort of 15 000 PLHIV with a TB prevalence of 6% (900 TB cases), this algorithm detected 8 more true-positive cases than CRPA, and averted 2045 false-positive and 8 false-negative diagnoses compared with CRPA. The WHO-recommended algorithm was marginally costlier ($240 000) than CRPA ($239 000). In sensitivity analysis, the symptom screen/Xpert algorithm was dominated at low Xpert sensitivity (66%). In this model-based analysis, combining a WHO-recommended symptom screen with Xpert for TB diagnosis among PLHIV was highly cost-effective ($5 per DALY averted) and more sensitive than CRPA in a high-burden, resource-limited setting.
Alzahouri, Kazem; Bahrami, Stéphane; Durand-Zaleski, Isabelle; Guillemin, Francis; Roux, Christian
2013-01-01
FRAX™ is a fracture prediction algorithm to determine a patient's absolute fracture risk. There is a growing consensus that osteoporosis treatment should be based on individual 10-year fracture probability, as calculated in the FRAX™ algorithm, rather than on T-scores alone. Our objective was to evaluate the cost-effectiveness of five years of branded alendronate therapy in postmenopausal French women with a known FRAX™ score. A Markov cohort state transition model using FRAX™ values and whenever possible population-specific data and probabilities. We estimated the incremental cost-effectiveness ratio (ICER) of alendronate versus no treatment in postmenopausal women with FRAX™ ranging from 10% to 3%. Number of women to treat (NNT) for preventing hip fracture, costs, quality-adjusted life-years, incremental cost-effectiveness ratios. The incremental cost-effectiveness ratios (ICER) compared to no treatment at age 70 ranged from €104,183 to €413,473 per QALY when FRAX™ decreased from 10% to 3%. The NNTs for preventing one hip fracture ranged from 97 to 388 according to age (50-80 years) and FRAX™. Sensitivity analyses showed that the main determinants of cost-effectiveness were adherence to therapy and cost of treatment. Using French costs of branded drug and current estimates of treatment efficacy, alendronate therapy for 70-year-old women with 10-year probability of hip fracture of 10% just meets the accepted cost-effectiveness threshold. Improving treatment adherence and/or decreasing treatment cost lowers the ICER. The model however underestimates the potential benefit by excluding other fractures. Copyright © 2012 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.
A comparative study of two prediction models for brain tumor progression
NASA Astrophysics Data System (ADS)
Zhou, Deqi; Tran, Loc; Wang, Jihong; Li, Jiang
2015-03-01
MR diffusion tensor imaging (DTI) technique together with traditional T1 or T2 weighted MRI scans supplies rich information sources for brain cancer diagnoses. These images form large-scale, high-dimensional data sets. Due to the fact that significant correlations exist among these images, we assume low-dimensional geometry data structures (manifolds) are embedded in the high-dimensional space. Those manifolds might be hidden from radiologists because it is challenging for human experts to interpret high-dimensional data. Identification of the manifold is a critical step for successfully analyzing multimodal MR images. We have developed various manifold learning algorithms (Tran et al. 2011; Tran et al. 2013) for medical image analysis. This paper presents a comparative study of an incremental manifold learning scheme (Tran et al. 2013) versus the deep learning model (Hinton et al. 2006) in the application of brain tumor progression prediction. The incremental manifold learning is a variant of manifold learning algorithms designed to handle large-scale datasets, in which a representative subset of the original data is sampled first to construct a manifold skeleton and the remaining data points are then inserted into the skeleton by following their local geometry. The incremental manifold learning algorithm aims at mitigating the computational burden associated with traditional manifold learning methods for large-scale datasets. Deep learning is a recently developed multilayer perceptron model that has achieved state-of-the-art performance in many applications. A recent technique named "Dropout" can further boost the deep model by preventing weight coadaptation to avoid over-fitting (Hinton et al. 2012). We applied the two models on multiple MRI scans from four brain tumor patients to predict tumor progression and compared the performances of the two models in terms of average prediction accuracy, sensitivity, specificity and precision. The quantitative performance metrics were calculated as averages over the four patients. Experimental results show that both the manifold learning and deep neural network models produced better results compared to using raw data and principal component analysis (PCA), and the deep learning model is a better method than manifold learning on this data set. The averaged sensitivity and specificity by deep learning are comparable with those by the manifold learning approach, while its precision is considerably higher. This means that the predicted abnormal points by deep learning are more likely to correspond to the actual progression region.
Research on Abnormal Detection Based on Improved Combination of K - means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and internally compact. Then, based on the training samples, the SVDD algorithm is used to construct the minimum hyperspheres. Class membership of a sample is determined by computing its distance to the minimum hyperspheres constructed by SVDD: if the distance from a test sample to the center of a hypersphere is less than its radius, the test sample belongs to that class; otherwise it does not. After several such comparisons, the final decision for the test sample is obtained, enabling effective detection. In this paper, the KDD CUP 99 data set is used to evaluate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false alarm rate, making it an effective network security protection method.
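A compact sketch of the detection pipeline, clustering each class and fitting one closed boundary per cluster, is shown below; scikit-learn's OneClassSVM is used as a stand-in for the SVDD hypersphere (the two are closely related for an RBF kernel), and synthetic "normal" and "attack" classes replace the KDD CUP 99 data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
classes = {
    "normal": rng.normal(loc=0.0, scale=1.0, size=(500, 4)),   # stand-in "normal" traffic
    "attack": rng.normal(loc=4.0, scale=1.0, size=(500, 4)),   # stand-in "attack" traffic
}

# 1) Cluster the training samples of each class so each class is internally compact,
# 2) fit one SVDD-like boundary (OneClassSVM) per cluster.
models = {}
for name, X in classes.items():
    km = KMeans(n_clusters=3, n_init=5, random_state=0).fit(X)
    models[name] = [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X[km.labels_ == k])
                    for k in range(3)]

def classify(x):
    """A test sample belongs to a class if it falls inside any of that class's
    boundaries; otherwise it is flagged as anomalous."""
    for name, boundaries in models.items():
        if any(b.decision_function(x.reshape(1, -1))[0] >= 0 for b in boundaries):
            return name
    return "anomaly"

print(classify(np.zeros(4)), classify(np.full(4, 4.0)), classify(np.full(4, 10.0)))
```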
Parallel approach for bioinspired algorithms
NASA Astrophysics Data System (ADS)
Zaporozhets, Dmitry; Zaruba, Daria; Kulieva, Nina
2018-05-01
In this paper, a probabilistic parallel approach based on a population heuristic, such as a genetic algorithm, is suggested. The authors propose using a multithreading approach at the micro level, at which new alternative solutions are generated. On each iteration, several threads that independently use the same population to generate new solutions can be started. After all threads have finished, a selection operator combines the obtained results into the new population. To confirm the effectiveness of the suggested approach, the authors have developed software with which experimental computations can be carried out. A classic optimization problem is considered – finding a Hamiltonian cycle in a graph. Experiments show that, owing to the parallel approach at the micro level, an increase in running speed can be obtained on graphs with 250 or more vertices.
Collecting, preparing, crossdating, and measuring tree increment cores
Phipps, R.L.
1985-01-01
Techniques for collecting and handling increment tree cores are described. Procedures include those for cleaning and caring for increment borers, extracting the sample from a tree, core surfacing, crossdating, and measuring. (USGS)
Analysis of A Drug Target-based Classification System using Molecular Descriptors.
Lu, Jing; Zhang, Pin; Bi, Yi; Luo, Xiaomin
2016-01-01
Drug-target interaction is an important topic in drug discovery and drug repositioning. The KEGG database offers a drug annotation and classification using a target-based classification system. In this study, we investigated five target-based classes: (I) G protein-coupled receptors; (II) Nuclear receptors; (III) Ion channels; (IV) Enzymes; (V) Pathogens, using molecular descriptors to represent each drug compound. Two popular feature selection methods, maximum relevance minimum redundancy and incremental feature selection, were adopted to extract the important descriptors. Meanwhile, an optimal prediction model based on the nearest neighbor algorithm was constructed, which achieved the best result in identifying drug target-based classes. Finally, some key descriptors were discussed to uncover their important roles in the identification of drug-target classes.
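The ranking-then-growing procedure (maximum relevance minimum redundancy followed by incremental feature selection with a nearest-neighbour model) can be sketched as follows; univariate mutual information stands in for the full mRMR criterion, and synthetic descriptors replace the KEGG drug data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=60, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# Rank descriptors by relevance (mutual information with the class label is a
# simplified stand-in for the maximum-relevance-minimum-redundancy criterion).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_k, best_acc = 0, 0.0
for k in range(1, len(ranking) + 1):                  # incremental feature selection
    feats = ranking[:k]
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X[:, feats], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best feature subset: top {best_k} descriptors, CV accuracy {best_acc:.3f}")
```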
A hybrid incremental projection method for thermal-hydraulics applications
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Bakosi, Jozsef; Nadiga, Balasubramanya T.; Berndt, Markus; Francois, Marianne M.; Stagg, Alan K.; Xia, Yidong; Luo, Hong
2016-07-01
A new second-order accurate, hybrid, incremental projection method for time-dependent incompressible viscous flow is introduced in this paper. The hybrid finite-element/finite-volume discretization circumvents the well-known Ladyzhenskaya-Babuška-Brezzi conditions for stability, and does not require special treatment to filter pressure modes by either Rhie-Chow interpolation or by using a Petrov-Galerkin finite element formulation. The use of a co-velocity with a high-resolution advection method and a linearly consistent edge-based treatment of viscous/diffusive terms yields a robust algorithm for a broad spectrum of incompressible flows. The high-resolution advection method is shown to deliver second-order spatial convergence on mixed element topology meshes, and the implicit advective treatment significantly increases the stable time-step size. The algorithm is robust and extensible, permitting the incorporation of features such as porous media flow, RANS and LES turbulence models, and semi-/fully-implicit time stepping. A series of verification and validation problems are used to illustrate the convergence properties of the algorithm. The temporal stability properties are demonstrated on a range of problems with 2 ≤ CFL ≤ 100. The new flow solver is built using the Hydra multiphysics toolkit. The Hydra toolkit is written in C++ and provides a rich suite of extensible and fully-parallel components that permit rapid application development, supports multiple discretization techniques, provides I/O interfaces, dynamic run-time load balancing and data migration, and interfaces to scalable popular linear solvers, e.g., in open-source packages such as HYPRE, PETSc, and Trilinos.
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real world sensor data, and the output from the simulated digital control system can be compared to that of the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps, with progress measured in completed and tested code units. In model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
NASA Technical Reports Server (NTRS)
Korivi, V. M.; Taylor, A. C., III; Newman, P. A.; Hou, G. J.-W.; Jones, H. E.
1992-01-01
An incremental strategy is presented for iteratively solving very large systems of linear equations, which are associated with aerodynamic sensitivity derivatives for advanced CFD codes. It is shown that the left-hand side matrix operator and the well-known factorization algorithm used to solve the nonlinear flow equations can also be used to efficiently solve the linear sensitivity equations. Two airfoil problems are considered as an example: subsonic low Reynolds number laminar flow and transonic high Reynolds number turbulent flow.
NASA Astrophysics Data System (ADS)
Abdellah, Skoudarli; Mokhtar, Nibouche; Amina, Serir
2015-11-01
The H.264/AVC video coding standard is used in a wide range of applications, from video conferencing to high-definition television, owing to its high compression efficiency. This efficiency is mainly acquired from the newly allowed prediction schemes, including variable block modes. However, these schemes require high complexity to select the optimal mode. Consequently, complexity reduction in the H.264/AVC encoder has recently become a very challenging task in the video compression domain, especially when implementing the encoder in real-time applications. Fast mode decision algorithms play an important role in reducing the overall complexity of the encoder. In this paper, we propose an adaptive fast inter-mode algorithm based on motion activity, temporal stationarity, and spatial homogeneity. This algorithm predicts the motion activity of the current macroblock from its neighboring blocks and identifies temporally stationary regions and spatially homogeneous regions using adaptive threshold values based on content features of the video. Extensive experimental work has been carried out in the High profile, and the results show that the proposed source-coding algorithm effectively reduces the computational complexity by 53.18% on average compared with the reference software encoder, while maintaining the high coding efficiency of H.264/AVC, incurring only a 0.097 dB loss in total peak signal-to-noise ratio and a 0.228% increase in the total bit rate.
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods, while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. In addition, an appropriate k-neighborhood can help to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM); the algorithm is cost-effective and low-risk for the analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples the output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm proposed in this paper contains three main innovations. First, the algorithm saves time efficiently by classifying the response space with the ELM. Second, the algorithm avoids a loss of test precision when the number of impulse-response samples is reduced. Third, a new test-signal generator process and a test structure for the test generation algorithm are presented, both of which are very simple. Finally, the above improvements and functionality are confirmed in experiments. PMID:25610458
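The learning core here is an extreme learning machine: a single hidden layer with random, fixed input weights and an output layer solved in closed form by a pseudo-inverse. A minimal ELM classifier sketch follows, with synthetic data standing in for the sampled output responses of the device under test.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

class ELMClassifier:
    """Single-hidden-layer extreme learning machine: random input weights,
    least-squares (pseudo-inverse) output weights."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(y.max() + 1)[y]                         # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T    # closed-form readout
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Stand-in for fault classes derived from sampled output responses of the DUT.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
elm = ELMClassifier().fit(X_tr, y_tr)
print("fault classification accuracy:", (elm.predict(X_te) == y_te).mean())
```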
NASA Technical Reports Server (NTRS)
Glass, Charles E.; Boyd, Richard V.; Sternberg, Ben K.
1991-01-01
The overall aim is to provide base technology for an automated vision system for on-board interpretation of geophysical data. During the first year's work, it was demonstrated that geophysical data can be treated as patterns and interpreted using single neural networks. Current research is developing an integrated vision system comprising neural networks, algorithmic preprocessing, and expert knowledge. This system is to be tested incrementally using synthetic geophysical patterns, laboratory generated geophysical patterns, and field geophysical patterns.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of a sample and the net analyte signals of the calibration samples was then calculated and used as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on the local calibration set of each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
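The select-then-fit idea can be sketched as below; for brevity, plain Euclidean distance between spectra is used as the similarity index in place of the net-analyte-signal distance defined in the paper, and synthetic spectra stand in for the near-infrared meat data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(X_cal, y_cal, x_new, n_local=40, n_components=5):
    """Select the calibration samples most similar to the unknown sample and fit a
    local PLS model on them (Euclidean spectral distance stands in for the
    NAS-based similarity index of the paper)."""
    d = np.linalg.norm(X_cal - x_new, axis=1)
    local = np.argsort(d)[:n_local]
    pls = PLSRegression(n_components=n_components).fit(X_cal[local], y_cal[local])
    return float(pls.predict(x_new.reshape(1, -1)).ravel()[0])

# Synthetic 'spectra': a varying baseline plus a band whose strength tracks the property y.
rng = np.random.default_rng(0)
wav = np.linspace(0, 1, 150)
def make_set(n):
    y = rng.uniform(0, 1, n)
    X = (np.outer(rng.uniform(0.5, 1.5, n), np.sin(2 * np.pi * wav))     # baseline variation
         + np.outer(y, np.exp(-((wav - 0.6) ** 2) / 0.002))              # analyte band
         + 0.01 * rng.normal(size=(n, wav.size)))
    return X, y

X_cal, y_cal = make_set(300)
X_new, y_new = make_set(5)
preds = [local_pls_predict(X_cal, y_cal, x) for x in X_new]
print(np.round(preds, 2), np.round(y_new, 2))
```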
BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.
Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter
2013-02-01
Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data without the necessity to assume a noise model. These methods have been previously combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of streamline tractography algorithms or the assumption of a noise distribution. Moreover, the BootGraph can be applied to common DTI data sets without further modifications and shows a high repeatability. Thus, it is very well suited for longitudinal studies and meta-studies based on DTI. Copyright © 2012 Elsevier Inc. All rights reserved.
KODA, MASAHIKO; TOKUNAGA, SHIHO; MATONO, TOMOMITSU; SUGIHARA, TAKAAKI; NAGAHARA, TAKAKAZU; MURAWAKI, YOSHIKAZU
2011-01-01
The purpose of the present study was to compare the size and configuration of the ablation zones created by SuperSlim and CoAccess electrodes, using various ablation algorithms in ex vivo bovine liver and in clinical cases. In the experimental study, we ablated explanted bovine liver using 2 types of electrodes and 4 ablation algorithms (combinations of incremental power supply, stepwise expansion and additional low-power ablation) and evaluated the ablation area and time. In the clinical study, we compared the ablation volume and the shape of the ablation zone between both electrodes in 23 hepatocellular carcinoma (HCC) cases with the best algorithm (incremental power supply, stepwise expansion and additional low-power ablation) as derived from the experimental study. In the experimental study, the ablation area and time by the CoAccess electrode were significantly greater compared to those by the SuperSlim electrode for the single-step (algorithm 1, p=0.0209 and 0.0325, respectively) and stepwise expansion algorithms (algorithm 2, p=0.0002 and <0.0001, respectively; algorithm 3, p= 0.006 and 0.0407, respectively). However, differences were not significant for the additional low-power ablation algorithm. In the clinical study, the ablation volume and time in the CoAccess group were significantly larger and longer, respectively, compared to those in the SuperSlim group (p=0.0242 and 0.009, respectively). Round ablation zones were acquired in 91.7% of the CoAccess group, while irregular ablation zones were obtained in 45.5% of the SuperSlim group (p=0.0428). In conclusion, the CoAccess electrode achieves larger and more uniform ablation zones compared with the SuperSlim electrode, though it requires longer ablation times in experimental and clinical studies. PMID:22977647
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple-object tracking on the one hand and on reference model adaptation for single-object trackers on the other hand. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.
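Not part of the paper, but as a rough sketch of the propagate-weight-resample cycle that such a particle-filter tracker is built on (the appearance likelihood here is a hypothetical placeholder for, e.g., a colour-histogram similarity to the reference model):

```python
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=2.0, rng=np.random):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    particles : (N, 2) array of image positions (x, y)
    weights   : (N,) normalized importance weights
    likelihood: callable mapping an (N, 2) array of positions to (N,) scores,
                e.g. similarity of local appearance to the reference model.
    """
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: evaluate the appearance likelihood of every hypothesis.
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    # Resample (systematic) to concentrate particles on likely states.
    positions = (np.arange(n) + rng.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
    return particles, weights, particles.mean(axis=0)  # mean as state estimate
```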
Li, Dachuan; Li, Qing; Cheng, Nong; Song, Jingyan
2014-01-01
This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address the above motion planning problem, we propose an extension of the closed-loop rapid belief trees, the closed-loop random belief trees (CL-RBT), which incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using the closed-loop prediction, while selecting candidate paths with low uncertainty using efficient covariance update and propagation. The algorithm can operate in real-time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints. PMID:25412217
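For orientation, a minimal sketch of the covariance bookkeeping that this kind of uncertainty-aware planner relies on, assuming plain Kalman prediction/update matrices rather than the factored covariance form used by CL-RBT; the per-segment models are illustrative assumptions:

```python
import numpy as np

def propagate_covariance(P, A, Q):
    """Kalman prediction: uncertainty grows with the process noise Q."""
    return A @ P @ A.T + Q

def update_covariance(P, H, R):
    """Kalman measurement update: uncertainty shrinks where measurements help."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def path_uncertainty(P0, segments):
    """Score a candidate path by the accumulated trace of the covariance,
    given per-segment (A, Q, H, R) models along the closed-loop prediction."""
    P, score = P0, 0.0
    for A, Q, H, R in segments:
        P = update_covariance(propagate_covariance(P, A, Q), H, R)
        score += np.trace(P)
    return score  # lower score = lower accumulated state-estimation uncertainty
```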
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth
Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
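For the second-order case, the numerically stable pairwise-update formulas generalized in this work reduce to the classical count/mean/M2 merge; a small sketch of that classical case (not the paper's arbitrary-order implementation):

```python
import numpy as np

def merge_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    """Combine (count, mean, sum of squared deviations) of two data partitions.

    m2 denotes sum((x - mean)**2); the variance is m2 / n.
    """
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

# Merging two partitions reproduces the moments of the full data set.
x = np.random.rand(1000)
a, b = x[:400], x[400:]
n, mean, m2 = merge_moments(len(a), a.mean(), ((a - a.mean())**2).sum(),
                            len(b), b.mean(), ((b - b.mean())**2).sum())
assert np.isclose(mean, x.mean()) and np.isclose(m2 / n, x.var())
```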
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
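A loose sketch of the kind of line-by-line statistics accumulation such an algorithm depends on, here only a running mean and covariance updated one scan line at a time; the MNF-specific noise statistics and transform are omitted:

```python
import numpy as np

class RunningBandStatistics:
    """Accumulates mean and covariance of spectra, one image line at a time."""

    def __init__(self, n_bands):
        self.n = 0
        self.mean = np.zeros(n_bands)
        self.cov_sum = np.zeros((n_bands, n_bands))  # sum of outer products of deviations

    def add_line(self, line):
        """line: (n_pixels, n_bands) array of spectra from one scan line."""
        for x in line:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov_sum += np.outer(delta, x - self.mean)  # Welford-style update

    @property
    def covariance(self):
        return self.cov_sum / max(self.n - 1, 1)
```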
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static and dynamic performance specifications and smooth control action. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance in step responses, such as small overshoot, fast settling time, and shorter rise time and steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization algorithms are an effective and promising tuning approach for complex greenhouse production. PMID:22163927
Different types of maximum power point tracking techniques for renewable energy systems: A survey
NASA Astrophysics Data System (ADS)
Khan, Mohammad Junaid; Shukla, Praveen; Mustafa, Rashid; Chatterji, S.; Mathew, Lini
2016-03-01
Global demand for electricity is increasing while energy production from fossil fuels is declining; the obvious choice of a clean energy source that is abundant and could provide security for future development is energy from the sun. The output characteristic of a photovoltaic generator is nonlinear and exhibits multiple peaks, including many local peaks and one global peak, under non-uniform irradiance. To track the global peak, maximum power point tracking (MPPT) is an important component of photovoltaic systems. Many review articles discuss conventional techniques such as perturb and observe (P&O), incremental conductance and ripple correlation control, but very few attempts have been made to survey intelligent MPPT techniques. This paper therefore also discusses algorithms based on fuzzy logic, ant colony optimization, genetic algorithms, artificial neural networks, particle swarm optimization, the firefly algorithm, extremum seeking control and hybrid methods applied to maximum power point tracking in photovoltaic systems under changing irradiance conditions.
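As a concrete reference for one of the conventional techniques surveyed (incremental conductance), a minimal update step might look like the following sketch; the duty-cycle sign convention and step size are assumptions, not taken from the survey:

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update.

    At the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V. The sign
    convention below assumes a converter in which decreasing the duty cycle
    raises the panel operating voltage (as in a boost stage); it would be
    mirrored for other topologies.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            duty -= step              # same voltage, more current: move toward higher V
        elif di < 0:
            duty += step
    else:
        g = di / dv                   # incremental conductance
        if g > -i / v:                # left of the MPP (dP/dV > 0): raise V
            duty -= step
        elif g < -i / v:              # right of the MPP (dP/dV < 0): lower V
            duty += step
    return duty                       # unchanged when the MPP condition holds
```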
Xiong, Zheng; He, Yinyan; Hattrick-Simpers, Jason R; Hu, Jianjun
2017-03-13
The creation of composition-processing-structure relationships currently represents a key bottleneck for data analysis for high-throughput experimental (HTE) material studies. Here we propose an automated phase diagram attribution algorithm for HTE data analysis that uses a graph-based segmentation algorithm and Delaunay tessellation to create a crystal phase diagram from high throughput libraries of X-ray diffraction (XRD) patterns. We also propose the sample-pair based objective evaluation measures for the phase diagram prediction problem. Our approach was validated using 278 diffraction patterns from a Fe-Ga-Pd composition spread sample with a prediction precision of 0.934 and a Matthews Correlation Coefficient score of 0.823. The algorithm was then applied to the open Ni-Mn-Al thin-film composition spread sample to obtain the first predicted phase diagram mapping for that sample.
Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei
2018-02-01
Diagnosis of Parkinson's disease (PD) based on speech data has been proved to be an effective approach in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Earlier work by the authors showed that instance selection can improve classification accuracy; however, the relationship between speech samples and features has received no attention until now. Therefore, a new diagnosis algorithm for PD is proposed in this paper that simultaneously selects speech samples and features based on a relevant feature weighting algorithm and the multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm obtains a clear improvement in classification accuracy. It achieves a mean classification accuracy of 82.5%, which is 30.5% higher than that of the relevant algorithm. Besides, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech marker extraction.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predictions in the EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
New multirate sampled-data control law structure and synthesis algorithm
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.; Yang, Gen-Sheng
1992-01-01
A new multirate sampled-data control law structure is defined and a new parameter-optimization-based synthesis algorithm for that structure is introduced. The synthesis algorithm can be applied to multirate, multiple-input/multiple-output, sampled-data control laws having a prescribed dynamic order and structure, and a priori specified sampling/update rates for all sensors, processor states, and control inputs. The synthesis algorithm is applied to design two-input, two-output tip position controllers of various dynamic orders for a sixth-order, two-link robot arm model.
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.
Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E
2018-01-01
The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier samples and unstable across repeated clustering runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. Then, a clustering evaluation method based on distance and density is designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
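A loose sketch of this style of initialization, with density taken as the reciprocal of an average distance and centers chosen greedily to be both dense and far from the centers already picked; the exact weighting used in the paper is not reproduced here:

```python
import numpy as np

def init_centers(X, k):
    """Pick k initial centers that have high local density and are far apart."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    density = 1.0 / (d.mean(axis=1) + 1e-12)                     # reciprocal mean distance
    centers = [int(np.argmax(density))]                          # densest point first
    for _ in range(k - 1):
        dist_to_centers = d[:, centers].min(axis=1)
        score = dist_to_centers * density                        # far from chosen centers AND dense
        score[centers] = -np.inf
        centers.append(int(np.argmax(score)))
    return X[centers]
```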
Taylor, Katrina; Seegmiller, Jeffrey; Vella, Chantal A
2016-11-01
To determine whether a decremental protocol could elicit a higher maximal oxygen consumption (VO2max) than an incremental protocol in trained participants. A secondary aim was to examine whether cardiac-output (Q) and stroke-volume (SV) responses differed between decremental and incremental protocols in this sample. Nineteen runners/triathletes were randomized to either the decremental or incremental group. All participants completed an initial incremental VO2max test on a treadmill, followed by a verification phase. The incremental group completed 2 further incremental tests. The decremental group completed a second VO2max test using the decremental protocol, based on their verification phase. The decremental group then completed a final incremental test. During each test, VO2, ventilation, and heart rate were measured, and cardiac variables were estimated with thoracic bioimpedance. Repeated-measures analysis of variance was conducted with an alpha level set at .05. There were no significant main effects for group (P = .37) or interaction (P = .10) over time (P = .45). VO2max was similar between the incremental (57.29 ± 8.94 mL·kg^-1·min^-1) and decremental (60.82 ± 8.49 mL·kg^-1·min^-1) groups over time. Furthermore, Q and SV were similar between the incremental (Q 22.72 ± 5.85 L/min, SV 119.64 ± 33.02 mL/beat) and decremental groups (Q 20.36 ± 4.59 L/min, SV 109.03 ± 24.27 mL/beat) across all 3 trials. The findings suggest that the decremental protocol does not elicit a higher VO2max than an incremental protocol but may be used as an alternative protocol to measure VO2max in runners and triathletes.
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points obtained when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
Simple-random-sampling-based multiclass text classification algorithm.
Liu, Wuying; Wang, Lin; Yi, Mianzhu
2014-01-01
Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a serious concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory that stores labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
Dielectric properties-based method for rapid and nondestructive moisture sensing in almonds
USDA-ARS?s Scientific Manuscript database
A dielectric-based method is presented for moisture determination in almonds independent of bulk density. The dielectric properties of almond were measured between 5 and 15 GHz, in 1-GHz increments, for samples with moisture contents ranging from 4.8% to 16.5%, wet basis, bulk densities ranging ...
Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco
2015-01-01
In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
NASA Astrophysics Data System (ADS)
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends on both the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is clearly present and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which subsequently allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results show that the proposed algorithm is fast and stable in comparison not only to conventional fixed-step counterparts, but also to previous variable-step-size algorithms.
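The described adaptation can be sketched in a few lines: whenever the sign of the duty-cycle perturbation flips between consecutive steps, the perturbation magnitude is scaled down by a constant factor. The numerical values and the P&O-style direction logic below are illustrative assumptions, not the paper's exact controller:

```python
import math

def adaptive_perturb(power, prev_power, prev_delta, step, shrink=0.5, min_step=1e-4):
    """P&O-style duty-cycle perturbation with adaptive step scaling.

    A sign change between the previous and the new perturbation indicates
    oscillation around the MPP, so the step size is scaled down by `shrink`.
    """
    direction = math.copysign(1.0, prev_delta)
    if power < prev_power:          # last move reduced power: reverse direction
        direction = -direction
    delta = direction * step
    if delta * prev_delta < 0:      # sign flip => oscillation detected
        step = max(step * shrink, min_step)
        delta = direction * step
    return delta, step              # new perturbation and (possibly shrunk) step size
```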
NASA Technical Reports Server (NTRS)
Watson, Brian; Kamat, M. P.
1990-01-01
Element-by-element preconditioned conjugate gradient (EBE-PCG) algorithms have been advocated for use in parallel/vector processing environments as being superior to the conventional LDL^T decomposition algorithm for single load cases. Although there may be some advantages in using such algorithms for a single load case, when it comes to situations involving multiple load cases, the LDL^T decomposition algorithm would appear to be decidedly more cost-effective. The authors have outlined an EBE-PCG algorithm suitable for multiple load cases and compared its effectiveness to the highly efficient LDL^T decomposition scheme. The proposed algorithm offers almost no advantages over the LDL^T algorithm for the linear problems investigated on the Alliant FX/8. However, there may be some merit in the algorithm in solving nonlinear problems with load incrementation, but that remains to be investigated.
Using Data Assimilation Diagnostics to Assess the SMAP Level-4 Soil Moisture Product
NASA Technical Reports Server (NTRS)
Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe
2018-01-01
The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx.2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.
ERIC Educational Resources Information Center
Garcia, Dru; Joseph, Laurice M.; Alber-Morgan, Sheila; Konrad, Moira
2014-01-01
The purpose of this study was to examine the efficiency of an incremental rehearsal oral versus an incremental rehearsal written procedure on a sample of primary grade children's weekly spelling performance. Participants included five second and one first grader who were in need of help with their spelling according to their teachers. An…
Reifman, Jaques; Feldman, Earl E.; Wei, Thomas Y. C.; Glickert, Roger W.
2003-01-01
The control of emissions from fossil-fired boilers, wherein substances are injected above the primary combustion zone, employs multi-layer feedforward artificial neural networks for modeling static nonlinear relationships between the distribution of injected substances into the upper region of the furnace and the emissions exiting the furnace. Multivariable nonlinear constrained optimization algorithms use the mathematical expressions from the artificial neural networks to provide the optimal substance distribution that minimizes emission levels for a given total substance injection rate. Based upon the optimal operating conditions from the optimization algorithms, the incremental substance cost per unit of emissions reduction, and the open-market price per unit of emissions reduction, the intelligent emissions controller allows for the determination of whether it is more cost-effective to achieve additional increments in emission reduction through the injection of additional substance or through the purchase of emission credits on the open market. This is of particular interest to fossil-fired electrical power plant operators. The intelligent emission controller is particularly adapted for determining the economical control of such pollutants as oxides of nitrogen (NOx) and carbon monoxide (CO) emitted by fossil-fired boilers by the selective introduction of multiple inputs of substances (such as natural gas, ammonia, oil, water-oil emulsion, coal-water slurry and/or urea, and combinations of these substances) above the primary combustion zone of fossil-fired boilers.
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...
2017-01-31
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
Incremental Validity of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF).
Siegling, A B; Vesely, Ashley K; Petrides, K V; Saklofske, Donald H
2015-01-01
This study examined the incremental validity of the adult short form of the Trait Emotional Intelligence Questionnaire (TEIQue-SF) in predicting 7 construct-relevant criteria beyond the variance explained by the Five-factor model and coping strategies. Additionally, the relative contributions of the questionnaire's 4 subscales were assessed. Two samples of Canadian university students completed the TEIQue-SF, along with measures of the Big Five, coping strategies (Sample 1 only), and emotion-laden criteria. The TEIQue-SF showed consistent incremental effects beyond the Big Five or the Big Five and coping strategies, predicting all 7 criteria examined across the 2 samples. Furthermore, 2 of the 4 TEIQue-SF subscales accounted for the measure's incremental validity. Although the findings provide good support for the validity and utility of the TEIQue-SF, directions for further research are emphasized.
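Incremental validity of this kind is commonly quantified as the gain in explained variance when the trait EI score is added to a baseline regression model; a generic hierarchical-regression sketch (ordinary least squares via numpy, with hypothetical predictor matrices, not the study's actual analysis code):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def incremental_r2(baseline, added, y):
    """Delta R^2 from adding predictors (e.g. the TEIQue-SF score) beyond a
    baseline block (e.g. Big Five scores, coping strategies)."""
    return r_squared(np.column_stack([baseline, added]), y) - r_squared(baseline, y)
```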
Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria
NASA Astrophysics Data System (ADS)
Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.
2009-04-01
Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates driven by ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the off-line algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted to Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the information necessary for performing the comparison at plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude of each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems such as the differing dates of the field campaigns (<1999) and the acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and propose a framework for combining terrestrial and satellite-based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from the model results; however, the correlation between satellite-derived and terrestrial estimates is relatively poor. The different scales, 9 km² for MODIS versus 1000 m² for the sample plots, together with the heterogeneous landscape may explain the low correlation, particularly as the correlation increases when strongly fragmented sites are left out.
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily converges to a poor local optimum. In this paper, we modify the global k-means algorithm to first eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
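For reference, the incremental center-addition structure of the underlying global k-means procedure can be sketched as follows; this is the naive exhaustive form, and the MinMax weighting and singleton-elimination refinements of the paper are not shown:

```python
import numpy as np
from sklearn.cluster import KMeans

def global_kmeans(X, k):
    """Add one cluster center at a time, as in the global k-means algorithm."""
    centers = X.mean(axis=0, keepdims=True)           # the 1-means solution
    for m in range(2, k + 1):
        best_err, best_centers = np.inf, None
        for x in X:                                    # try every point as the new center
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=m, init=init, n_init=1).fit(X)
            if km.inertia_ < best_err:
                best_err, best_centers = km.inertia_, km.cluster_centers_
        centers = best_centers
    return centers
```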
Probabilistic Multi-Person Tracking Using Dynamic Bayes Networks
NASA Astrophysics Data System (ADS)
Klinger, T.; Rottensteiner, F.; Heipke, C.
2015-08-01
Tracking-by-detection is a widely used practice in recent tracking systems. These usually rely on independent single-frame detections that are handled as observations in a recursive estimation framework. If these observations are imprecise, the generated trajectory is prone to being updated towards a wrong position. In contrast to existing methods, our novel approach uses a Dynamic Bayes Network in which the state vector of a recursive Bayes filter, as well as the location of the tracked object in the image, are modelled as unknowns. These unknowns are estimated in a probabilistic framework taking into account a dynamic model and a state-of-the-art pedestrian detector and classifier. The classifier is based on the Random Forest algorithm and is capable of being trained incrementally, so that new training samples can be incorporated at runtime. This allows the classifier to adapt to the changing appearance of a target and to unlearn outdated features. The approach is evaluated on a publicly available benchmark. The results confirm that our approach is well suited for tracking pedestrians over long distances while at the same time achieving comparatively good geometric accuracy.
NASA Astrophysics Data System (ADS)
Moon, Byung-Young
2005-12-01
A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. This method can be applied to structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm. The ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the new generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out to assess the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures using compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution can also be greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
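As a generic illustration of obtaining MVDR weights through conjugate-gradient iterations rather than an explicit autocorrelation-matrix inversion (not the MICCG/SICCG formulations themselves), one can iteratively solve R u = d for the steering vector d and then normalize:

```python
import numpy as np

def cg_solve(R, b, iters=50, tol=1e-10):
    """Conjugate-gradient solution of R x = b for a Hermitian positive-definite R."""
    x = np.zeros_like(b)
    r = b - R @ x
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp)
        x = x + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mvdr_weights(R, d):
    """w = R^{-1} d / (d^H R^{-1} d), computed without forming R^{-1}."""
    u = cg_solve(R, d)
    return u / np.vdot(d, u)
```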
Jordan, Jake; Gage, Heather; Benton, Barbara; Lalji, Amyn; Norton, Christine; Andreyev, H Jervoise N
2017-01-01
Over 20 distressing gastrointestinal symptoms affect many patients after pelvic radiotherapy, but in the United Kingdom few are referred for assessment. Algorithm-based treatment delivered by either a consultant gastroenterologist or a clinical nurse specialist has been shown in a randomized trial to be statistically and clinically more effective than provision of a self-help booklet. In this study, we assessed cost-effectiveness. Outcomes were measured at baseline (pre-randomization) and 6 months. Change in quality-adjusted life years (QALYs) was the primary outcome for the economic evaluation; a secondary analysis used change in the bowel subset score of the modified Inflammatory Bowel Disease Questionnaire (IBDQ-B). Intervention costs (British pounds, 2013) covered visits with the gastroenterologist or nurse, investigations, medications and treatments. Incremental outcomes and incremental costs were estimated simultaneously using multivariate linear regression. Uncertainty was handled non-parametrically using bootstrap with replacement. The mean (SD) cost of treatment was £895 (499) for the nurse and £1101 (567) for the consultant. The nurse was dominated by usual care, which was cheaper and achieved better outcomes. The mean cost per QALY gained from the consultant, compared to usual care, was £250,455; comparing the consultant to the nurse, it was £25,875. Algorithmic care produced better outcomes compared to the booklet only, as reflected in the IBDQ-B results, at a cost of ~£1,000. Algorithmic treatment of radiation bowel injury by a consultant or a nurse results in significant symptom relief for patients but was not found to be cost-effective according to the National Institute for Health and Care Excellence (NICE) criteria.
Structure-guided Protein Transition Modeling with a Probabilistic Roadmap Algorithm.
Maximova, Tatiana; Plaku, Erion; Shehu, Amarda
2016-07-07
Proteins are macromolecules in perpetual motion, switching between structural states to modulate their function. A detailed characterization of the precise yet complex relationship between protein structure, dynamics, and function requires elucidating transitions between functionally-relevant states. Doing so challenges both wet and dry laboratories, as protein dynamics involves disparate temporal scales. In this paper we present a novel, sampling-based algorithm to compute transition paths. The algorithm exploits two main ideas. First, it leverages known structures to initialize its search and define a reduced conformation space for rapid sampling. This is key to address the insufficient sampling issue suffered by sampling-based algorithms. Second, the algorithm embeds samples in a nearest-neighbor graph where transition paths can be efficiently computed via queries. The algorithm adapts the probabilistic roadmap framework that is popular in robot motion planning. In addition to efficiently computing lowest-cost paths between any given structures, the algorithm allows investigating hypotheses regarding the order of experimentally-known structures in a transition event. This novel contribution is likely to open up new venues of research. Detailed analysis is presented on multiple-basin proteins of relevance to human disease. Multiscaling and the AMBER ff14SB force field are used to obtain energetically-credible paths at atomistic detail.
Sample size calculation in economic evaluations.
Al, M J; van Hout, B A; Michel, B C; Rutten, F F
1998-06-01
A simulation method is presented for sample size calculation in economic evaluations. As input the method requires: the expected difference and variance of costs and effects, their correlation, the significance level (alpha) and the power of the testing method and the maximum acceptable ratio of incremental effectiveness to incremental costs. The method is illustrated with data from two trials. The first compares primary coronary angioplasty with streptokinase in the treatment of acute myocardial infarction, in the second trial, lansoprazole is compared with omeprazole in the treatment of reflux oesophagitis. These case studies show how the various parameters influence the sample size. Given the large number of parameters that have to be specified in advance, the lack of knowledge about costs and their standard deviation, and the difficulty of specifying the maximum acceptable ratio of incremental effectiveness to incremental costs, the conclusion of the study is that from a technical point of view it is possible to perform a sample size calculation for an economic evaluation, but one should wonder how useful it is.
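A rough sketch of such a simulation-based calculation, here using a net-monetary-benefit reformulation of the acceptable cost-effectiveness ratio rather than necessarily the exact test statistic of the paper; all numerical inputs are placeholders, not values from the two trials:

```python
import numpy as np
from scipy import stats

def required_n(delta_e, delta_c, sd_e, sd_c, rho, wtp, alpha=0.05, power=0.8,
               sims=5000, n_grid=range(25, 3001, 25), seed=0):
    """Smallest per-arm n at which the incremental net monetary benefit
    (wtp * delta_e - delta_c) is detected with the desired power."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)
    cov = np.array([[sd_e**2, rho * sd_e * sd_c],
                    [rho * sd_e * sd_c, sd_c**2]])
    sd_nmb = np.sqrt(wtp**2 * sd_e**2 - 2 * wtp * rho * sd_e * sd_c + sd_c**2)
    for n in n_grid:
        # Sampling distribution of the mean effect/cost differences (two arms of size n).
        draws = rng.multivariate_normal([delta_e, delta_c], 2 * cov / n, size=sims)
        nmb = wtp * draws[:, 0] - draws[:, 1]
        se = sd_nmb * np.sqrt(2.0 / n)
        if np.mean(np.abs(nmb) / se > z) >= power:
            return n
    return None  # power not reached within the grid
```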
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient detection of object edges in digital images, this paper studies traditional methods and an algorithm based on the support vector machine (SVM). It is shown that the Canny edge detection algorithm produces some pseudo-edges and has poor noise immunity. In order to provide a reliable edge extraction method, a new detection algorithm based on the fuzzy support vector machine (FSVM) is proposed. It consists of several steps: first, the classification samples are trained and different membership functions are assigned to different samples. Then, a new training set is formed by increasing the penalty on some wrongly classified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and added-noise experiments show that this method has good noise immunity.
Noise and Complexity in Human Postural Control: Interpreting the Different Estimations of Entropy
Rhea, Christopher K.; Silver, Tobin A.; Hong, S. Lee; Ryu, Joong Hyun; Studenka, Breanna E.; Hughes, Charmayne M. L.; Haddad, Jeffrey M.
2011-01-01
Background Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. Methods and Findings The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. Conclusions The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses. PMID:21437281
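For reference, Sample Entropy, one of the three measures compared in this study, can be computed directly as follows (an unoptimized implementation; m = 2 and r = 0.2 times the signal SD are common conventions, not prescriptions from this study):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -ln of the conditional probability that sequences matching
    for m points (within tolerance r, Chebyshev distance) also match for m+1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(length):
        # N - m templates are used for both lengths, as in the standard definition.
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)       # matching template pairs of length m
    a = count_matches(m + 1)   # matching template pairs of length m + 1
    return np.inf if a == 0 else -np.log(a / b)
```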
Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds
NASA Astrophysics Data System (ADS)
Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert
2014-06-01
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data, takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network for the decision class output. The network consists of three sequential functional modules. The first module performs feature extraction, converting the input cluster into a set of singular-value features, or feature vector. The feature vector is then input into the feature normalization module to normalize and balance it before being fed to the neural net classifier for classification. The neural net can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training is resumed until the neural net has incrementally learned the new novel data. The associative memory capability of the neural net enables this incremental learning. The back propagation algorithm or a support vector machine can be utilized for the classification and recognition.
A suffix arrays based approach to semantic search in P2P systems
NASA Astrophysics Data System (ADS)
Shi, Qingwei; Zhao, Zheng; Bao, Hu
2007-09-01
Building a semantic search system on top of peer-to-peer (P2P) networks is becoming an attractive and promising alternative owing to its scalability, data freshness and low search cost. In this paper, we present a Suffix Arrays based algorithm for Semantic Search (SASS) in P2P systems, which constructs distributed Semantic Overlay Networks (SONs) for full-text search in P2P networks. For each node in the P2P network, SASS distributes document indices based on a set of suffix arrays, through which clusters are created according to the words or phrases shared between documents; the search cost for a given query is therefore reduced by scanning only semantically related documents. In contrast to recently proposed SON schemes that rely on metadata or predefined classes, SASS is an unsupervised approach for decentralized generation of SONs. SASS is also an incremental, linear-time algorithm that efficiently handles node updates in P2P networks. Our simulation results demonstrate that SASS yields high search efficiency in dynamic environments.
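The distributed SASS index is not described in enough detail to reproduce here, but the suffix-array machinery it builds on is standard. The sketch below uses a deliberately simple O(n² log n) construction and a prefix binary search; production systems would use a linear-time construction, as the abstract implies.

```python
def build_suffix_array(text):
    """Suffix array of `text`: starting positions of all suffixes in
    lexicographic order (simple construction, adequate for illustration)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_pattern(text, sa, pattern):
    """Return positions where `pattern` occurs, via binary search over the
    suffix array comparing only the first len(pattern) characters."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                                  # leftmost suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:                                  # leftmost suffix with prefix > pattern
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "peer to peer semantic search"
sa = build_suffix_array(text)
print(find_pattern(text, sa, "peer"))   # occurrence positions, e.g. [0, 8]
```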
An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning
Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco
2015-01-01
Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130
NASA Astrophysics Data System (ADS)
Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco
2018-02-01
The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
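The AT-TPC implementation belongs to its own analysis framework; the sketch below only illustrates the core RANSAC idea on synthetic data: repeatedly fit a 3D line through two randomly chosen hits and keep the model with the most inliers, which separates a straight track from uncorrelated noise hits.

```python
import numpy as np

def ransac_line_3d(points, n_iter=500, tol=0.5, seed=None):
    """Fit a 3D line to `points` (N x 3) with RANSAC: sample two hits, form a
    candidate line, and keep the model whose inlier set (perpendicular
    distance <= tol) is largest."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, direction = points[i], points[j] - points[i]
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction /= norm
        # Perpendicular distance of every hit to the candidate line.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ direction, direction), axis=1)
        inliers = dist <= tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, direction)
    return best_model, best_inliers

# Example: a straight track plus uniform noise hits.
rng = np.random.default_rng(2)
t = rng.uniform(0, 50, 300)
track = np.column_stack([t, 0.5 * t, -0.2 * t]) + rng.normal(0, 0.1, (300, 3))
noise = rng.uniform(-10, 60, (100, 3))
model, inliers = ransac_line_3d(np.vstack([track, noise]), tol=0.5, seed=3)
print(inliers.sum(), "inlier hits")   # mostly the 300 track hits
```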
Incremental soil sampling, root water uptake, or be great through others
USDA-ARS?s Scientific Manuscript database
Ray Allmaras pursued several research topics in relation to residue and tillage research. He looked for new tools to help explain soil responses to tillage, including disk permeameters and image analysis. The incremental sampler developed by Pikul and Allmaras allowed small-depth increment, volumetr...
ERIC Educational Resources Information Center
Hampton, David D.; Lembke, Erica S.
2016-01-01
The purpose of this study was to examine 4 early writing measures used to monitor the early writing progress of 1st-grade students. We administered the measures to 23 1st-grade students biweekly for a total of 16 weeks. We obtained 3-min samples and conducted analyses for each 1-min increment. We scored samples using 2 different methods: correct…
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on the extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.
Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.
Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso
2018-07-01
There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not grown at a comparable rate. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that is used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimate is refined through segmentation-guided filtering to obtain the final depth map. This approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.
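A minimal sketch of the cluster-then-retrieve idea follows; the descriptors, cluster count and repository contents are placeholders, and the paper's learned adaptive combination of descriptors and the segmentation-guided filtering stage are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

# Illustrative repository: each entry is (color_descriptor, depth_map).
# The descriptor here is just a random stand-in; the paper learns an adaptive
# combination of richer image descriptors.
rng = np.random.default_rng(3)
descriptors = rng.random((200, 64))          # 200 color+depth exemplars
depth_maps = rng.random((200, 32, 32))       # matching depth maps

# 1) Cluster the repository by color structure and build one prior per cluster.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(descriptors)
prior_depth = np.stack([depth_maps[kmeans.labels_ == k].mean(axis=0)
                        for k in range(10)])

# 2) For a query image, vote with its nearest exemplars for a cluster; that
#    cluster's representative depth becomes the prior estimate.
knn = NearestNeighbors(n_neighbors=5).fit(descriptors)
query = rng.random((1, 64))
_, idx = knn.kneighbors(query)
cluster = np.bincount(kmeans.labels_[idx[0]]).argmax()
depth_prior = prior_depth[cluster]           # to be refined by guided filtering
print(depth_prior.shape)
```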
Tractable Goal Selection with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2009-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
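The JPL algorithm itself is not reproduced in the abstract; the sketch below only illustrates the general flavor of fast, incremental goal admission under fixed (non-temporal) resource capacities, with illustrative goal and resource names.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    reward: float
    resource_use: dict   # e.g. {"power": 10.0, "downlink": 2.0}

class IncrementalGoalSelector:
    """Greedy, priority-ordered selection under fixed resource capacities.
    Last-minute requests are admitted incrementally: only the remaining
    capacity is consulted, without re-planning everything."""

    def __init__(self, capacities):
        self.remaining = dict(capacities)
        self.selected = []

    def try_admit(self, goal):
        if all(self.remaining.get(r, 0.0) >= use
               for r, use in goal.resource_use.items()):
            for r, use in goal.resource_use.items():
                self.remaining[r] -= use
            self.selected.append(goal.name)
            return True
        return False          # oversubscribed: goal is rejected

# Initial batch, admitted in order of decreasing reward ...
selector = IncrementalGoalSelector({"power": 100.0, "downlink": 50.0})
batch = [Goal("image_A", 8.0, {"power": 40.0, "downlink": 20.0}),
         Goal("image_B", 6.0, {"power": 40.0, "downlink": 20.0}),
         Goal("calibrate", 5.0, {"power": 30.0, "downlink": 5.0})]
for g in sorted(batch, key=lambda g: g.reward, reverse=True):
    selector.try_admit(g)

# ... and a just-in-time request arriving later is handled incrementally.
selector.try_admit(Goal("image_C", 7.0, {"power": 15.0, "downlink": 5.0}))
print(selector.selected)
```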
ELF: An Extended-Lagrangian Free Energy Calculation Module for Multiple Molecular Dynamics Engines.
Chen, Haochuan; Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng
2018-06-18
Extended adaptive biasing force (eABF), a collective variable (CV)-based importance-sampling algorithm, has proven to be very robust and efficient compared with the original ABF algorithm. Its implementation in Colvars, a software addition to molecular dynamics (MD) engines, is, however, currently limited to NAMD and LAMMPS. To broaden the scope of eABF and its variants, like its generalized form (egABF), and make them available to other MD engines, e.g., GROMACS, AMBER, CP2K, and openMM, we present a PLUMED-based implementation, called extended-Lagrangian free energy calculation (ELF). This implementation can be used as a stand-alone gradient estimator for other CV-based sampling algorithms, such as temperature-accelerated MD (TAMD) and extended-Lagrangian metadynamics (MtD). ELF provides the end user with a convenient framework to help select the best-suited importance-sampling algorithm for a given application without any commitment to a particular MD engine.
Network Sampling and Classification:An Investigation of Network Model Representations
Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.
2011-01-01
Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that the alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on the limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773
Pebay, Philippe; Terriberry, Timothy B.; Kolla, Hemanth; ...
2016-03-29
Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Such formulas are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results, and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate among the above mentioned formulas, with the utilization of the compound moments for a practical large-scale scientific application.
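For the familiar second-order case, the incremental (one-pass) update and the pairwise combination used in parallel reductions look as follows; the arbitrary-order, weighted and compound variants surveyed in the paper generalize these two routines.

```python
import numpy as np

def one_pass_moments(x):
    """Welford's one-pass update of count, mean and M2 (sum of squared
    deviations); numerically stable compared with E[x^2] - E[x]^2."""
    n, mean, m2 = 0, 0.0, 0.0
    for xi in x:
        n += 1
        delta = xi - mean
        mean += delta / n
        m2 += delta * (xi - mean)
    return n, mean, m2

def combine(a, b):
    """Merge (n, mean, M2) from two partitions, as done when results from
    parallel workers are reduced."""
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    m2 = m2a + m2b + delta * delta * na * nb / n
    return n, mean, m2

x = np.random.default_rng(4).normal(5.0, 2.0, 10_000)
left, right = one_pass_moments(x[:6000]), one_pass_moments(x[6000:])
n, mean, m2 = combine(left, right)
print(mean, m2 / (n - 1))   # matches np.mean(x) and np.var(x, ddof=1)
```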
Fuzzy CMAC With incremental Bayesian Ying-Yang learning and dynamic rule construction.
Nguyen, M N
2010-04-01
Inspired by the philosophy of ancient Chinese Taoism, Xu's Bayesian ying-yang (BYY) learning technique performs clustering by harmonizing the training data (yang) with the solution (ying). In our previous work, the BYY learning technique was applied to a fuzzy cerebellar model articulation controller (FCMAC) to find the optimal fuzzy sets; however, this is not suitable for time series data analysis. To address this problem, we propose an incremental BYY learning technique in this paper, with the idea of sliding window and rule structure dynamic algorithms. Three contributions are made as a result of this research. First, an online expectation-maximization algorithm incorporated with the sliding window is proposed for the fuzzification phase. Second, the memory requirement is greatly reduced since the entire data set no longer needs to be obtained during the prediction process. Third, the rule structure dynamic algorithm with dynamically initializing, recruiting, and pruning rules relieves the "curse of dimensionality" problem that is inherent in the FCMAC. Because of these features, the experimental results of the benchmark data sets of currency exchange rates and Mackey-Glass show that the proposed model is more suitable for real-time streaming data analysis.
NASA Astrophysics Data System (ADS)
Meng, Qinggang; Lee, M. H.
2007-03-01
Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool use, and in infants during early cognitive development.
2013-06-01
lenses of unconsolidated sand and rounded river gravel overlain by as much as 5 m of silt. Gravel consists mostly of quartz and metamorphic rock with... [List-of-figures residue removed; recoverable captions: "Example of multi-increment sampling using a systematic-random sampling design for collecting two separate..."; "The small arms firing Range 16 Record berms at Fort Wainwright"; "Location of berms sampled using ISM and grab..."]
Lee, Pius; Liu, Yang
2014-01-01
We report the progress of an ongoing effort by the Air Resources Laboratory, NOAA to build a prototype regional Chemical Analysis System (ARLCAS). The ARLCAS focuses on providing long-term analysis of the three dimensional (3D) air-pollutant concentration fields over the continental U.S. It leverages expertise from the NASA Earth Science Division-sponsored Air Quality Applied Science Team (AQAST) for the state-of-science knowledge in atmospheric and data assimilation sciences. The ARLCAS complies with national operational center requirement protocols and aims to have the modeling system to be maintained by a national center. Meteorology and chemistry observations consist of land-, air- and space-based observed and quality-assured data. We develop modularized testing to investigate the efficacies of the various components of the ARLCAS. The sensitivity testing of data assimilation schemes showed that with the increment of additional observational data sets, the accuracy of the analysis chemical fields also increased incrementally in varying margins. The benefit is especially noted for additional data sets based on a different platform and/or a different retrieval algorithm. We also described a plan to apply the analysis chemical fields in environmental surveillance at the Centers for Disease Control and Prevention. PMID:25514141
Efficient traffic grooming with dynamic ONU grouping for multiple-OLT-based access network
NASA Astrophysics Data System (ADS)
Zhang, Shizong; Gu, Rentao; Ji, Yuefeng; Wang, Hongxiang
2015-12-01
Fast bandwidth growth urges large-scale, high-density access scenarios, where clustered deployment of multiple Passive Optical Network (PON) systems can be adopted as an appropriate solution to fulfill the huge bandwidth demands, especially for future 5G mobile networks. However, the lack of interaction between different optical line terminals (OLTs) wastes part of the bandwidth resources. To increase bandwidth efficiency, as well as reduce bandwidth pressure at the edge of the network, we propose a centralized flexible PON architecture based on Time- and Wavelength-Division Multiplexing PON (TWDM PON). It provides flexible affiliation between optical network units (ONUs) and different OLTs to support access-network traffic localization. Specifically, a dynamic ONU grouping algorithm (DGA) is provided to minimize OLT outbound traffic. Simulation results show that the DGA obtains an average 25.23% traffic gain increment across different OLT numbers when the ONU number is small, and that the traffic gain increases dramatically as the ONU number grows. As the DGA can be deployed easily as an application running above the centralized control plane, the proposed architecture can help improve network efficiency for future traffic-intensive access scenarios.
Modifications to Improve Data Acquisition and Analysis for Camouflage Design
1983-01-01
terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the...a histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient...however, an optimal process and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the
Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A
2010-06-01
To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180 000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality-adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.
The generalization ability of SVM classification based on Markov sampling.
Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang
2015-06-01
Previous works studying the generalization ability of the support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples increases, but also yields sparser classifiers when the size of the dataset is large relative to the input dimension.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Space-time quantitative source apportionment of soil heavy metal concentration increments.
Yang, Yong; Christakos, George; Guo, Mingwu; Xiao, Lu; Huang, Wei
2017-04-01
Assessing the space-time trends and detecting the sources of heavy metal accumulation in soils has important consequences for the prevention and treatment of soil heavy metal pollution. In this study, we collected soil samples in the eastern part of the Qingshan district, Wuhan city, Hubei Province, China, during the period 2010-2014. The Cd, Cu, Pb and Zn concentrations in soils exhibited significant accumulation during 2010-2014. The spatiotemporal Kriging (STK) technique, based on a quantitative characterization of soil heavy metal concentration variations in terms of non-separable variogram models, was employed to estimate the spatiotemporal soil heavy metal distribution in the study region. Our findings showed that the Cd, Cu, and Zn concentrations have an obvious incremental tendency from the southwestern to the central part of the study region, whereas the Pb concentrations exhibited an obvious tendency from the northern part to the central part of the region. Then, spatial overlay analysis (SOA) was used to obtain absolute and relative concentration increments over adjacent 1- or 5-year periods during 2010-2014. The spatial distribution of soil heavy metal concentration increments showed that the larger increments occurred in the center of the study region. Lastly, principal component analysis combined with multiple linear regression (PCA-MLR) was employed to quantify the source apportionment of the soil heavy metal concentration increments in the region. Our results led to the conclusion that the sources of soil heavy metal concentration increments should be ascribed to industry, agriculture and traffic; in particular, 82.5% of the soil heavy metal concentration increment during 2010-2014 was ascribed to industrial/agricultural sources. In summary, STK and SOA were used to obtain the spatial distribution of heavy metal concentration increments in soils, and PCA-MLR was used to quantify their source apportionment.
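A highly simplified, synthetic-data sketch of the PCA-plus-regression apportionment idea is shown below; it is not the study's workflow, and the loadings and source strengths are invented purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic data: rows = sampling sites, columns = concentration increments
# of Cd, Cu, Pb, Zn driven by two hidden sources (e.g. industry, traffic).
rng = np.random.default_rng(5)
industry = rng.gamma(2.0, 1.0, 300)
traffic = rng.gamma(2.0, 1.0, 300)
increments = np.column_stack([
    0.8 * industry + 0.1 * traffic,          # Cd
    0.6 * industry + 0.3 * traffic,          # Cu
    0.1 * industry + 0.9 * traffic,          # Pb
    0.5 * industry + 0.5 * traffic,          # Zn
]) + rng.normal(0, 0.05, (300, 4))

# 1) PCA on standardized increments to extract candidate source factors.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(increments))

# 2) Regress each metal's increment on the factor scores; coefficients and
#    explained variance indicate how the sources contribute to each metal.
for metal, y in zip(["Cd", "Cu", "Pb", "Zn"], increments.T):
    reg = LinearRegression().fit(scores, y)
    print(metal, "R^2 =", round(reg.score(scores, y), 3), "coef =", reg.coef_.round(2))
```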
NASA Astrophysics Data System (ADS)
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. 
The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
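The thesis's Bayesian LS-SVM machinery is not reproduced here; as a stand-in, the following sketch uses scikit-learn's one-class SVM to illustrate the "train on baseline data only, flag novel behavior online" pattern described for the anomaly detector.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Baseline (healthy) feature vectors only, e.g. condition-indicator features.
rng = np.random.default_rng(6)
baseline = rng.normal(0.0, 1.0, (500, 4))

detector = make_pipeline(StandardScaler(),
                         OneClassSVM(kernel="rbf", gamma="scale", nu=0.05))
detector.fit(baseline)

# Online data: nominal samples plus a drifted (faulty) segment.
nominal = rng.normal(0.0, 1.0, (50, 4))
faulty = rng.normal(3.0, 1.0, (50, 4))
print((detector.predict(nominal) == 1).mean())   # mostly flagged as healthy
print((detector.predict(faulty) == 1).mean())    # mostly flagged as anomalous
```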
Izquierdo-Sotorrío, Eva; Holgado-Tello, Francisco P.; Carrasco, Miguel Á.
2016-01-01
This study examines the relationships between perceived parental acceptance and children’s behavioral problems (externalizing and internalizing) from a multi-informant perspective. Using mothers, fathers, and children as sources of information, we explore the informant effect and incremental validity. The sample was composed of 681 participants (227 children, 227 fathers, and 227 mothers). Children’s (40% boys) ages ranged from 9 to 17 years (M = 12.52, SD = 1.81). Parents and children completed both the Parental Acceptance Rejection/Control Questionnaire (PARQ/Control) and the check list of the Achenbach System of Empirically Based Assessment (ASEBA). Statistical analyses were based on the correlated uniqueness multitrait-multimethod matrix (model MTMM) by structural equations and different hierarchical regression analyses. Results showed a significant informant effect and a different incremental validity related to which combination of sources was considered. A multi-informant perspective rather than a single one increased the predictive value. Our results suggest that mother–father or child–father combinations seem to be the best way to optimize the multi-informant method in order to predict children’s behavioral problems based on perceived parental acceptance. PMID:27242582
Incremental Beliefs of Ability, Achievement Emotions and Learning of Singapore Students
ERIC Educational Resources Information Center
Luo, Wenshu; Lee, Kerry; Ng, Pak Tee; Ong, Joanne Xiao Wei
2014-01-01
This study investigated the relationships of students' incremental beliefs of math ability to their achievement emotions, classroom engagement and math achievement. A sample of 273 secondary students in Singapore were administered measures of incremental beliefs of math ability, math enjoyment, pride, boredom and anxiety, as well as math classroom…
Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems
NASA Technical Reports Server (NTRS)
Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith
1988-01-01
Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift
NASA Astrophysics Data System (ADS)
Modarresi, N.; Rezakhah, S.
The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for estimating the Hurst parameter of such DSI processes is presented and applied to a period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments, and compare its performance with the celebrated FA, DFA and DMA methods on simulated fractional Brownian motion (fBm) data.
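For the plain self-similar case with stationary increments (e.g. fBm), a textbook estimator regresses the log variance of increments on the log lag, since Var[X(t+τ) − X(t)] ∝ τ^(2H); the sketch below is that standard estimator, not the paper's scale-interval method.

```python
import numpy as np

def hurst_from_increments(x, lags=range(1, 21)):
    """Estimate H for a self-similar process with stationary increments:
    Var[X(t+tau) - X(t)] ~ tau^(2H), so the log-log slope of the increment
    variance versus the lag equals 2H."""
    lags = np.asarray(list(lags))
    variances = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0

# Quick check on ordinary Brownian motion (H = 0.5 by construction).
bm = np.cumsum(np.random.default_rng(7).standard_normal(100_000))
print(hurst_from_increments(bm))   # close to 0.5
```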
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boardman, Beth Leigh
The common theme of this dissertation is sampling-based motion planning with the two key contributions being in the area of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents' positions and velocities. Given that a solution exists, we prove that livelocks and deadlock will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents. The analysis for each of the above mentioned algorithms is confirmed in simulations.
An end-to-end workflow for engineering of biological networks from high-level specifications.
Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun
2012-08-17
We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
The Draft EPA Subsurface Vapor Intrusion Guidance Document was established to "address the incremental increases in exposures and risks from subsurface contaminants that may be intruding into indoor air". The document utilizes attenuation factors based on indoor air/soil gas or i...
NASA Astrophysics Data System (ADS)
Gao, F.; Leng, S. L.; Zhu, Z.; Li, X. J.; Hu, X.; Song, H. Z.
2018-04-01
Cu2Se nanopowders were synthesized by the hydrothermal method and then hot-pressed into bulk pellets. The effects of different preparation conditions on the structure and thermoelectric properties of the Cu2Se nanocrystalline bulk alloys were investigated. The resistivity and Seebeck coefficient increase with increasing hot-pressing temperature, while they decrease with increasing hot-pressing time, except for the Seebeck coefficient of the sample hot-pressed for 30 min. Based on the power factors and dimensionless thermoelectric figure-of-merit (ZT) values, the optimum hot-pressing parameters are 700°C and 30 min.
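For reference, the quantities invoked by the abstract are the standard thermoelectric definitions below (symbols: S Seebeck coefficient, σ = 1/ρ electrical conductivity, κ thermal conductivity, T absolute temperature); no values from the paper are implied.

```latex
% Standard definitions of the power factor and the dimensionless
% figure of merit for a thermoelectric material.
\[
  \mathrm{PF} = S^{2}\sigma = \frac{S^{2}}{\rho},
  \qquad
  ZT = \frac{S^{2}\sigma}{\kappa}\,T = \frac{S^{2}T}{\rho\,\kappa}
\]
```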
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sidesway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment-resisting steel frames of various heights is discussed in detail. RTR uncertainty is handled by incremental dynamic analysis (IDA), modelling uncertainties are considered through the backbone curves and hysteresis loops of the components, and cognitive uncertainty is represented by three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the response surface method are obtained from the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves incorporating the various sources of uncertainty mentioned above are derived from a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on the response surface coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in the fragility curves and the mean annual frequency.
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted to those obtained using step-by-step incremental integration.
Advanced Image Enhancement Method for Distant Vessels and Structures in Capsule Endoscopy
Pedersen, Marius
2017-01-01
This paper proposes an advanced method for contrast enhancement of capsule endoscopic images, with the main objective to obtain sufficient information about the vessels and structures in more distant (or darker) parts of capsule endoscopic images. The proposed method (PM) combines two algorithms for the enhancement of darker and brighter areas of capsule endoscopic images, respectively. The half-unit weighted-bilinear algorithm (HWB) proposed in our previous work is used to enhance darker areas according to the darker map content of its HSV's component V. Enhancement of brighter areas is achieved thanks to the novel threshold weighted-bilinear algorithm (TWB) developed to avoid overexposure and enlargement of specular highlight spots while preserving the hue, in such areas. The TWB performs enhancement operations following a gradual increment of the brightness of the brighter map content of its HSV's component V. In other words, the TWB decreases its averaged weights as the intensity content of the component V increases. Extensive experimental demonstrations were conducted, and, based on evaluation of the reference and PM enhanced images, a gastroenterologist (Ø.H.) concluded that the PM enhanced images were the best ones based on the information about the vessels, contrast in the images, and the view or visibility of the structures in more distant parts of the capsule endoscopy images. PMID:29225668
Lucena-Santos, Paola; Trindade, Inês A; Oliveira, Margareth; Pinto-Gouveia, José
2017-05-19
Given the clinical usefulness of the CFQ-BI (Cognitive Fusion Questionnaire-Body Image; the only existing measure to assess the body-image-related cognitive fusion), the present study aimed to confirm its one-factor structure, to verify its measurement invariance between clinical and non-clinical samples, to analyze its internal consistency and sensitivity to detect differences between samples, as well as to explore the incremental and convergent validities of the CFQ-BI scores in Brazilian samples. This was a cross-sectional study, which was conducted in clinical (women with overweight or obesity in treatment for weight loss) and non-clinical samples (women from the general population). The one-factor structure was confirmed showing factorial measurement invariance across clinical and non-clinical samples. The CFQ-BI scores presented an excellent internal consistency, were able to discriminate clinical and non-clinical samples, and were positively associated with binge eating severity, general cognitive fusion, and psychological inflexibility. Furthermore, body-image-related cognitive fusion scores (CFQ-BI) presented incremental validity over a general measure of cognitive fusion in the prediction of binge eating symptoms. This study demonstrated that CFQ-BI is a short scale with reliable and robust scores in Brazilian samples, presenting incremental and convergent validities, measurement invariance, and sensitivity to detect differences between clinical and non-clinical groups of women, enabling comparative studies between them.
Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang
2016-11-16
The use of speech-based data in the classification of Parkinson disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been an increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit-nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied iteratively to select optimal training speech samples, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is used to train models on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using recently deposited public datasets and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the highest degree of improved classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm was found to exhibit higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
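A simplified sketch of multi-edit nearest-neighbour editing followed by a random forest is given below; the fold counts, neighbour count and synthetic data are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, train_test_split
from sklearn.neighbors import KNeighborsClassifier

def multi_edit_nn(X, y, k=3, n_folds=3, max_iter=10, seed=0):
    """Simplified multi-edit nearest-neighbour editing: repeatedly classify
    each fold with a k-NN trained on the other folds and discard the
    misclassified (likely noisy / overlapping) samples."""
    rng = np.random.default_rng(seed)
    for _ in range(max_iter):
        keep = np.ones(len(y), dtype=bool)
        folds = KFold(n_splits=n_folds, shuffle=True,
                      random_state=int(rng.integers(1 << 30)))
        for train_idx, test_idx in folds.split(X):
            knn = KNeighborsClassifier(n_neighbors=k).fit(X[train_idx], y[train_idx])
            keep[test_idx] &= knn.predict(X[test_idx]) == y[test_idx]
        if keep.all():
            break
        X, y = X[keep], y[keep]
    return X, y

# Noisy synthetic features standing in for the speech recordings.
X, y = make_classification(n_samples=600, n_features=20, flip_y=0.15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
X_ed, y_ed = multi_edit_nn(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_ed, y_ed)
print("kept", len(y_ed), "of", len(y_tr), "samples; test acc =",
      round(rf.score(X_te, y_te), 3))
```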
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found that, in differentially private FSM, the amount of required noise is proportional to the number of candidate sequences. If we can effectively reduce the number of unpromising candidate sequences, the utility-privacy tradeoff can be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
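PFS2 itself is not reproduced here; the sketch below only shows the Laplace mechanism that such algorithms use to perturb candidate supports, with a naive uniform split of the privacy budget as an explicit assumption.

```python
import numpy as np

def noisy_supports(true_supports, epsilon, sensitivity=1.0, seed=None):
    """Laplace mechanism: add Lap(sensitivity / epsilon) noise to each support
    count. With more candidate sequences the per-count budget shrinks, which
    is why pruning unpromising candidates helps."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return {seq: count + rng.laplace(0.0, scale)
            for seq, count in true_supports.items()}

# Toy candidate supports out of 1000 sequence records.
supports = {("a",): 410, ("b",): 390, ("a", "b"): 120, ("c",): 15}
total_epsilon = 1.0
per_count_epsilon = total_epsilon / len(supports)   # naive uniform split (assumption)
noisy = noisy_supports(supports, per_count_epsilon, seed=8)
threshold = 100
print([seq for seq, s in noisy.items() if s >= threshold])
```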
De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P
2014-10-01
Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
Goal Selection for Embedded Systems with Oversubscribed Resources
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
Onboard Run-Time Goal Selection for Autonomous Operations
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; McLaren, David
2010-01-01
We describe an efficient, online goal selection algorithm for use onboard spacecraft and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby demonstrate the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
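A rough Python sketch of how individual scan points can be binned into such a panoramic layer; the angular resolutions and the choice of a range layer are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def scan_to_panorama(points, h_res_deg=0.1, v_res_deg=0.1):
    """Map 3D points from a single terrestrial laser scan (assumed centred on
    the scanner origin) onto a panoramic grid indexed by the horizontal and
    vertical scan angles. Each cell stores the range, giving a range image;
    intensity or colour layers can be built the same way."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0                 # 0..360
    elevation = np.degrees(np.arcsin(z / np.maximum(rng, 1e-9)))   # -90..90
    cols = (azimuth / h_res_deg).astype(int)
    rows = ((elevation + 90.0) / v_res_deg).astype(int)
    panorama = np.full((int(180.0 / v_res_deg) + 1,
                        int(360.0 / h_res_deg) + 1), np.nan)
    panorama[rows, cols] = rng
    return panorama

pts = np.random.randn(1000, 3) * 5.0
print(scan_to_panorama(pts).shape)
```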
Word Decoding Development in Incremental Phonics Instruction in a Transparent Orthography
ERIC Educational Resources Information Center
Schaars, Moniek M.; Segers, Eliane; Verhoeven, Ludo
2017-01-01
The present longitudinal study aimed to investigate the development of word decoding skills during incremental phonics instruction in Dutch as a transparent orthography. A representative sample of 973 Dutch children in the first grade (M[subscript age] = 6;1, SD = 0;5) was exposed to incremental subsets of Dutch grapheme-phoneme correspondences…
Comparison of the Incremental Validity of the Old and New MCAT.
ERIC Educational Resources Information Center
Wolf, Fredric M.; And Others
The predictive and incremental validity of both the Old and New Medical College Admission Test (MCAT) was examined and compared with a sample of over 300 medical students. Results of zero order and incremental validity coefficients, as well as prediction models resulting from all possible subsets regression analyses using Mallow's Cp criterion,…
SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.
Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui
2013-04-01
Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
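A simplified Python illustration of the summation idea, assuming count-like text vectors: cluster entropy is evaluated from per-cluster feature sums, so a tentative assignment needs only an updated sum and never a KL-divergence against zero-valued centroid features (this is a sketch of the motivation, not the exact SAIL objective):

```python
import numpy as np

def cluster_entropy(feature_sum):
    """Shannon entropy of the empirical distribution obtained by normalising
    the summed feature counts of a cluster. Zero features contribute nothing,
    so no infinite values arise, unlike KL-divergence against a centroid with
    zero entries."""
    total = feature_sum.sum()
    if total == 0:
        return 0.0
    p = feature_sum[feature_sum > 0] / total
    return float(-(p * np.log2(p)).sum())

def entropy_if_added(cluster_sum, doc):
    """Entropy of the cluster after tentatively adding one document, computed
    from the updated summation only (no pass over the cluster members)."""
    return cluster_entropy(cluster_sum + doc)

# Toy usage with sparse, count-like text vectors.
cluster_sum = np.array([5.0, 0.0, 3.0, 0.0])
doc = np.array([1.0, 2.0, 0.0, 0.0])
print(cluster_entropy(cluster_sum), entropy_if_added(cluster_sum, doc))
```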
NASA Astrophysics Data System (ADS)
Ham, Yoo-Geun; Song, Hyo-Jong; Jung, Jaehee; Lim, Gyu-Ho
2017-04-01
This study introduces a modified version of the incremental analysis updates (IAU), called the nonstationary IAU (NIAU) method, to enhance the assimilation accuracy of the IAU while retaining the continuity of the analysis. Analogous to the IAU, the NIAU is designed to add analysis increments at every model time step to improve the continuity in intermittent data assimilation. However, unlike the IAU, the NIAU method applies time-evolved forcing, employing the forward operator, as corrections to the model. In terms of the accuracy of the analysis field, the NIAU solution is better than that of the IAU, whose analysis is performed at the start of the time window over which the IAU forcing is added. This is because, in linear systems, the NIAU solution equals that of an intermittent data assimilation method at the end of the assimilation interval. To obtain the filtering property in the NIAU, the forward operator used to propagate the increment is reconstructed with only the dominant singular vectors. An illustration of these advantages of the NIAU is given using the simple 40-variable Lorenz model.
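For context, a minimal Python sketch of the classical IAU forcing that the NIAU modifies, using a hypothetical placeholder model; the NIAU's time-evolved forcing via the forward operator and its singular-vector filtering are not shown:

```python
import numpy as np

def model_step(x, dt):
    """Placeholder one-step forecast model (hypothetical linear decay)."""
    return x + dt * (-0.1 * x)

def iau_window(x, increment, n_steps, dt):
    """Classical IAU: the analysis increment is split evenly and added as a
    constant forcing at every model time step of the window, which keeps the
    trajectory continuous instead of inserting the whole increment at once."""
    forcing = increment / n_steps
    for _ in range(n_steps):
        x = model_step(x, dt) + forcing
    return x

x0 = np.array([1.0, 2.0])
analysis_increment = np.array([0.3, -0.2])
print(iau_window(x0, analysis_increment, n_steps=6, dt=0.1))
```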
P. David Jones; Laurence R. Schimleck; Richard F. Daniels; Alexander Clark; Robert C. Purnell
2008-01-01
A necessary objective for tree-breeding programs, with a focus on wood quality, is the measurement of wood properties on a whole-tree basis, however, the time and cost involved limits the numbers of trees sampled. Near infrared (NIR) spectroscopy provides an alternative and recently, it has been demonstrated that calibrations based on milled increment cores and whole-...
Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection.
Barreto, A M E C; Takei, K; E C, Sabino; Bellesa, M A O; Salles, N A; Barreto, C C; Nishiya, A S; Chamone, D F
2008-02-01
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using an s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, and 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm) which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
Pham, Huy P; Sireci, Anthony N; Kim, Chong H; Schwartz, Joseph
2014-09-01
Both plasma- and recombinant activated factor VII (rFVIIa)-based algorithms can be used to correct coagulopathy in pre-liver transplant patients with acute liver failure requiring intracranial pressure monitor (ICPM) placement. A decision model was created to compare the cost-effectiveness of these methods. A 70-kg patient could receive either 1 round of plasma followed by coagulation testing or 2 units of plasma and 40 μg/kg rFVIIa. The ICPM is placed without coagulation testing after rFVIIa administration. In the plasma algorithm, the probability of ICPM placement was estimated based on the expected international normalized ratio (INR) after plasma administration. Risks of rFVIIa thrombosis and transfusion reactions were also included. The model was run for patients with INRs ranging from 2 to 6 with concomitant adjustments to model parameters. The model supported the initial use of rFVIIa for ICPM placement as a cost-effective treatment when INR ≥2 (with an incremental cost-effectiveness ratio of at most US$7088.02). © The Author(s) 2014.
Chen, Lei; Zhang, Yu-Hang; Zheng, Mingyue; Huang, Tao; Cai, Yu-Dong
2016-12-01
Compound-protein interactions play important roles in every cell via the recognition and regulation of specific functional proteins. The correct identification of compound-protein interactions can lead to a good comprehension of this complicated system and provide useful input for the investigation of various attributes of compounds and proteins. In this study, we attempted to understand this system by extracting properties from both proteins and compounds, in which proteins were represented by gene ontology and KEGG pathway enrichment scores and compounds were represented by molecular fragments. Advanced feature selection methods, including minimum redundancy maximum relevance, incremental feature selection, and the basic machine learning algorithm random forest, were used to analyze these properties and extract core factors for the determination of actual compound-protein interactions. Compound-protein interactions reported in The Binding Databases were used as positive samples. To improve the reliability of the results, the analytic procedure was executed five times using different negative samples. Simultaneously, five optimal prediction methods based on a random forest and yielding maximum MCCs of approximately 77.55 % were constructed and may be useful tools for the prediction of compound-protein interactions. This work provides new clues to understanding the system of compound-protein interactions by analyzing extracted core features. Our results indicate that compound-protein interactions are related to biological processes involving immune, developmental and hormone-associated pathways.
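A hedged Python sketch of the incremental feature selection loop on synthetic data, using scikit-learn; mutual information stands in for minimum redundancy maximum relevance, and cross-validated accuracy stands in for the MCC reported in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

# Toy data standing in for the GO/KEGG protein features and compound
# fragment features described in the abstract.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

# Rank features (here by mutual information, as a stand-in for mRMR).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

# Incremental feature selection: grow the feature set along the ranking and
# keep the prefix that maximises cross-validated performance.
best_k, best_score = 1, -np.inf
for k in range(1, len(ranking) + 1):
    cols = ranking[:k]
    score = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                            X[:, cols], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))
```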
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong; Jia, Zupeng
2017-06-01
The Multi-Material Arbitrary Lagrangian Eulerian (MMALE) method is an effective way to simulate multi-material flow with severe surface deformation. Compared with the traditional Arbitrary Lagrangian Eulerian (ALE) method, the MMALE method allows for multiple materials in a single cell, which overcomes the difficulties in the grid refinement process. In recent decades, much research has been conducted on the Lagrangian, rezoning and surface reconstruction phases, but less attention has been paid to the multi-material remapping phase, especially for three-dimensional problems, due to two complex geometric problems: polyhedron subdivision and polyhedron intersection. In this paper, we propose a "Clipping and Projecting" algorithm for polyhedron intersection whose basic idea comes from the commonly used method by Grandy (1999) [29] and Jia et al. (2013) [34]. Our new algorithm solves the geometric problem by an incremental modification of the topology based on segment-plane intersections. A comparison with Jia et al. (2013) [34] shows our new method improves the efficiency by 55% to 65% when calculating polyhedron intersections. Moreover, the instability caused by geometric degeneracy can be thoroughly avoided because the geometric integrity is preserved in the new algorithm. We also focus on the polyhedron subdivision process and describe an algorithm which can automatically and precisely tackle the various situations, including convex, non-convex and multiple subdivisions. Numerical studies indicate that, by using our polyhedron subdivision and intersection algorithm, the volume conservation of the remapping phase can be exactly preserved in the MMALE simulation.
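The segment-plane intersection primitive on which the incremental topology modification rests can be sketched as follows in Python; this is only the elementary geometric test, not the authors' full Clipping and Projecting procedure:

```python
import numpy as np

def segment_plane_intersection(p0, p1, plane_point, plane_normal, eps=1e-12):
    """Intersection of the segment p0-p1 with the plane defined by a point and
    a normal. Returns the intersection point, or None if the segment is
    parallel to the plane or the crossing lies outside the segment."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = np.asarray(plane_normal, float)
    d = p1 - p0
    denom = n.dot(d)
    if abs(denom) < eps:            # segment parallel to the plane
        return None
    t = n.dot(np.asarray(plane_point, float) - p0) / denom
    if t < 0.0 or t > 1.0:          # crossing outside the segment
        return None
    return p0 + t * d

print(segment_plane_intersection([0, 0, -1], [0, 0, 2],
                                 plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))
```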
VIP: Vortex Image Processing Package for High-contrast Direct Imaging
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean
2017-07-01
We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied with Jupyter notebook tutorials illustrating the main functionalities of the library.
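As a rough illustration of the incremental-PCA idea for large ADI cubes, the following Python sketch uses scikit-learn's IncrementalPCA on a synthetic cube; it is not VIP's API, and real ADI processing would also derotate the residual frames before combining them:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

def lowrank_psf_residuals(cube, n_components=5, batch=50):
    """Fit a low-rank (PCA) model of the stellar PSF from an ADI cube of shape
    (n_frames, ny, nx), processing frames in batches so a datacube larger than
    memory could be streamed in chunks, then return the residuals after
    subtracting the reconstruction."""
    n, ny, nx = cube.shape
    flat = cube.reshape(n, ny * nx)
    ipca = IncrementalPCA(n_components=n_components)
    for i in range(0, n, batch):
        ipca.partial_fit(flat[i:i + batch])
    recon = ipca.inverse_transform(ipca.transform(flat))
    return (flat - recon).reshape(n, ny, nx)

cube = np.random.randn(120, 64, 64)   # synthetic stand-in for an ADI sequence
print(lowrank_psf_residuals(cube).shape)
```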
NASA Astrophysics Data System (ADS)
Mabit, Lionel; Meusburger, Katrin; Iurian, Andra-Rada; Owens, Philip N.; Toloza, Arsenio; Alewell, Christine
2014-05-01
Soil and sediment related research for terrestrial agri-environmental assessments requires accurate depth incremental sampling of soil and exposed sediment profiles. Existing coring equipment does not allow collecting soil/sediment increments at millimetre resolution. Therefore, the authors have designed an economic, portable, hand-operated surface soil/sediment sampler - the Fine Increment Soil Collector (FISC) - which allows extensive control of the soil/sediment sampling process and easy recovery of the material collected by using a simple screw-thread extraction system. In comparison with existing sampling tools, the FISC has the following advantages and benefits: (i) it permits sampling of soil/sediment at the top of the profile; (ii) it is easy to adjust so as to collect soil/sediment at mm resolution; (iii) it is simple to operate by a single person; (iv) incremental sampling can be performed in the field or in the laboratory; (v) it permits precise evaluation of bulk density at millimetre vertical resolution; and (vi) sample size can be tailored to analytical requirements. To illustrate the usefulness of the FISC in sampling soil and sediments for 7Be measurements - 7Be being a well-known cosmogenic soil tracer and fingerprinting tool - the sampler was tested in a forested soil located 45 km southeast of Vienna in Austria. Fine-resolution increments (i.e. 2.5 mm) directly affect the measurement of the total 7Be inventory and, above all, the shape of the 7Be exponential depth profile, which is needed to assess soil movement rates. The FISC can improve the determination of the depth distributions of other Fallout Radionuclides (FRN) - such as 137Cs, 210Pbex and 239+240Pu - which are frequently used for soil erosion and sediment transport studies and/or sediment fingerprinting. Such a device also offers great potential to investigate FRN depth distributions associated with fallout events such as those associated with nuclear emergencies. Furthermore, prior to remediation activities - such as topsoil removal - in contaminated soils and sediments (e.g. by heavy metals, pesticides or nuclear power plant accident releases), basic environmental assessment often requires the determination of the extent and the depth penetration of the different contaminants, a precision that can be provided by using the FISC.
Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matulef, Kevin Michael
The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.
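The second result concerns weighted subsampling of a stream; a minimal Python sketch of the standard static-weight building block (Efraimidis-Spirakis A-Res reservoir sampling) is shown below, noting that the report's contribution is the extension to dynamically changing weights, which is not reproduced here:

```python
import heapq
import random

def weighted_reservoir_sample(stream, k):
    """Efraimidis-Spirakis A-Res: keep the k stream items with the largest
    keys u**(1/w), where u ~ Uniform(0,1) and w is the item's weight. This
    yields a weighted sample without replacement from a single pass over the
    stream (static weights only)."""
    heap = []  # min-heap of (key, item)
    for item, weight in stream:
        key = random.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

stream = [(f"edge_{i}", 1.0 + (i % 7)) for i in range(10_000)]
print(weighted_reservoir_sample(stream, k=5))
```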
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.
Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh
2017-06-01
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. https://github.com/opencobra/cobratoolbox . ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
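A bare-bones Python sketch of the coordinate hit-and-run step on a polytope {x : Ax <= b}; the rounding preprocessing that gives CHRR its name, and any flux-balance equality constraints, are omitted:

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_samples, rng=None):
    """Coordinate hit-and-run inside the polytope {x : A x <= b}. At each step
    a coordinate direction is chosen at random, the feasible interval along
    that direction is computed from the constraints, and the next point is
    drawn uniformly from that interval."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, float)
    samples = []
    for _ in range(n_samples):
        j = rng.integers(A.shape[1])
        slack = b - A @ x          # nonnegative while x is feasible
        col = A[:, j]
        # x_j may move by t wherever col * t <= slack for every constraint.
        upper = np.min(slack[col > 0] / col[col > 0]) if np.any(col > 0) else np.inf
        lower = np.max(slack[col < 0] / col[col < 0]) if np.any(col < 0) else -np.inf
        x = x.copy()
        x[j] += rng.uniform(lower, upper)
        samples.append(x)
    return np.array(samples)

# Unit box example: -x <= 0 and x <= 1 in 3 dimensions.
A = np.vstack([-np.eye(3), np.eye(3)])
b = np.concatenate([np.zeros(3), np.ones(3)])
print(coordinate_hit_and_run(A, b, x0=[0.5, 0.5, 0.5], n_samples=1000).mean(axis=0))
```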
ERIC Educational Resources Information Center
Canivez, Gary L.; Watkins, Marley W.; James, Trevor; Good, Rebecca; James, Kate
2014-01-01
Background: Subtest and factor scores have typically provided little incremental predictive validity beyond the omnibus IQ score. Aims: This study examined the incremental validity of Wechsler Intelligence Scale for Children-Fourth UK Edition (WISC-IV[superscript UK]; Wechsler, 2004a, "Wechsler Intelligence Scale for Children-Fourth UK…
NASA Astrophysics Data System (ADS)
Yager, Kevin; Albert, Thomas; Brower, Bernard V.; Pellechia, Matthew F.
2015-06-01
The domain of Geospatial Intelligence Analysis is rapidly shifting toward a new paradigm of Activity Based Intelligence (ABI) and information-based Tipping and Cueing. General requirements for an advanced ABIAA system present significant challenges in architectural design, computing resources, data volumes, workflow efficiency, data mining and analysis algorithms, and database structures. These sophisticated ABI software systems must include advanced algorithms that automatically flag activities of interest in less time and within larger data volumes than can be processed by human analysts. In doing this, they must also maintain the geospatial accuracy necessary for cross-correlation of multi-intelligence data sources. Historically, serial architectural workflows have been employed in ABIAA system design for tasking, collection, processing, exploitation, and dissemination. These simpler architectures may produce implementations that solve short term requirements; however, they have serious limitations that preclude them from being used effectively in an automated ABIAA system with multiple data sources. This paper discusses modern ABIAA architectural considerations providing an overview of an advanced ABIAA system and comparisons to legacy systems. It concludes with a recommended strategy and incremental approach to the research, development, and construction of a fully automated ABIAA system.
Wong, Carlos K H; Siu, Shing-Chung; Wan, Eric Y F; Jiao, Fang-Fang; Yu, Esther Y T; Fung, Colman S C; Wong, Ka-Wai; Leung, Angela Y M; Lam, Cindy L K
2016-05-01
The aim of the present study was to develop a simple nomogram that can be used to predict the risk of diabetes mellitus (DM) in the asymptomatic non-diabetic subjects based on non-laboratory- and laboratory-based risk algorithms. Anthropometric data, plasma fasting glucose, full lipid profile, exercise habits, and family history of DM were collected from Chinese non-diabetic subjects aged 18-70 years. Logistic regression analysis was performed on a random sample of 2518 subjects to construct non-laboratory- and laboratory-based risk assessment algorithms for detection of undiagnosed DM; both algorithms were validated on data of the remaining sample (n = 839). The Hosmer-Lemeshow test and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the calibration and discrimination of the DM risk algorithms. Of 3357 subjects recruited, 271 (8.1%) had undiagnosed DM defined by fasting glucose ≥7.0 mmol/L or 2-h post-load plasma glucose ≥11.1 mmol/L after an oral glucose tolerance test. The non-laboratory-based risk algorithm, with scores ranging from 0 to 33, included age, body mass index, family history of DM, regular exercise, and uncontrolled blood pressure; the laboratory-based risk algorithm, with scores ranging from 0 to 37, added triglyceride level to the risk factors. Both algorithms demonstrated acceptable calibration (Hosmer-Lemeshow test: P = 0.229 and P = 0.483) and discrimination (AUC 0.709 and 0.711) for detection of undiagnosed DM. A simple-to-use nomogram for detecting undiagnosed DM has been developed using validated non-laboratory-based and laboratory-based risk algorithms. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Fate and Transport of Tungsten at Camp Edwards Small Arms Ranges
2007-08-01
area into the lower berm and/or trough. A similar approach was used in the lower berm area with samples collected from soil sloughing from the...bucket auger to collect samples beneath the bullet pockets and the trough. A multi-increment, subsurface soil sample was made by combining the...range. From these soil profiles, a total of 72 multi-increment subsurface soil samples was collected (Table 2). The auger was cleaned between holes
Serious injury prediction algorithm based on large-scale data and under-triage control.
Nishimoto, Tetsuya; Mukaigawa, Kosuke; Tominaga, Shigeru; Lubbe, Nils; Kiuchi, Toru; Motomura, Tomokazu; Matsumoto, Hisashi
2017-01-01
The present study was undertaken to construct an algorithm for an advanced automatic collision notification system based on national traffic accident data compiled by Japanese police. While US research into the development of a serious-injury prediction algorithm is based on a logistic regression algorithm using the National Automotive Sampling System/Crashworthiness Data System, the present injury prediction algorithm was based on comprehensive police data covering all accidents that occurred across Japan. The particular focus of this research is to improve the rescue of injured vehicle occupants in traffic accidents, and the present algorithm assumes the use of an onboard event data recorder data from which risk factors such as pseudo delta-V, vehicle impact location, seatbelt wearing or non-wearing, involvement in a single impact or multiple impact crash and the occupant's age can be derived. As a result, a simple and handy algorithm suited for onboard vehicle installation was constructed from a sample of half of the available police data. The other half of the police data was applied to the validation testing of this new algorithm using receiver operating characteristic analysis. An additional validation was conducted using in-depth investigation of accident injuries in collaboration with prospective host emergency care institutes. The validated algorithm, named the TOYOTA-Nihon University algorithm, proved to be as useful as the US URGENCY and other existing algorithms. Furthermore, an under-triage control analysis found that the present algorithm could achieve an under-triage rate of less than 10% by setting a threshold of 8.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Per tree estimates with n-tree distance sampling: an application to increment core data
Thomas B. Lynch; Robert F. Wittwer
2002-01-01
Per tree estimates using the n trees nearest a point can be obtained by using a ratio of per unit area estimates from n-tree distance sampling. This ratio was used to estimate average age by d.b.h. classes for cottonwood trees (Populus deltoides Bartr. ex Marsh.) on the Cimarron National Grassland. Increment...
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Kan, Jianquan
2018-04-01
In this paper, a new method for identifying genetically modified material from terahertz spectra is proposed, based on a support vector machine (SVM) combined with affinity propagation clustering. The algorithm mainly uses affinity propagation clustering to cluster and label the unlabeled training samples, and the existing SVM training data are continuously updated during the iterative process. Because the identification model is established without manually labeling the training samples, the error caused by human-labeled samples is reduced and the identification accuracy of the model is greatly improved.
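One possible reading of this clustering-plus-SVM labeling loop, sketched in Python with scikit-learn on synthetic data; the seed labels, the majority-vote labeling rule, and the single retraining pass are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Synthetic stand-in for terahertz spectra of modified / unmodified samples.
X, y_true = make_blobs(n_samples=200, centers=2, n_features=20, random_state=0)
idx = np.concatenate([np.where(y_true == 0)[0][:5], np.where(y_true == 1)[0][:5]])
X_seed, y_seed = X[idx], y_true[idx]        # a few spectra with known class

# Step 1: affinity propagation groups the unlabeled spectra.
clusters = AffinityPropagation(random_state=0).fit_predict(X)

# Step 2: an SVM trained on the seed set labels each cluster by majority vote
# of its members, producing pseudo-labels for the whole training set.
svm = SVC(kernel="rbf").fit(X_seed, y_seed)
pseudo = np.empty(len(X), dtype=int)
for c in np.unique(clusters):
    members = clusters == c
    votes = svm.predict(X[members])
    pseudo[members] = np.bincount(votes).argmax()

# Step 3: retrain the SVM on the pseudo-labeled data; in the paper this
# clustering/labeling/retraining loop is iterated until it stabilises.
final_model = SVC(kernel="rbf").fit(X, pseudo)
print(final_model.score(X, pseudo))
```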
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
Whole arm manipulation planning based on feedback velocity fields and sampling-based techniques.
Talaei, B; Abdollahi, F; Talebi, H A; Omidi Karkani, E
2013-09-01
Changing the configuration of a cooperative whole arm manipulator is not easy while enclosing an object. This difficulty is mainly because of risk of jamming caused by kinematic constraints. To reduce this risk, this paper proposes a feedback manipulation planning algorithm that takes grasp kinematics into account. The idea is based on a vector field that imposes perturbation in object motion inducing directions when the movement is considerably along manipulator redundant directions. Obstacle avoidance problem is then considered by combining the algorithm with sampling-based techniques. As experimental results confirm, the proposed algorithm is effective in avoiding jamming as well as obstacles for a 6-DOF dual arm whole arm manipulator. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.
Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971
Fiuzy, Mohammad; Haddadnia, Javad; Mollania, Nasrin; Hashemian, Maryam; Hassanpour, Kazem
2012-01-01
Accurate diagnosis of breast cancer is of prime importance. The Fine Needle Aspiration (FNA) test, which has been used for several years in Europe, is a simple, inexpensive, noninvasive and accurate technique for detecting breast cancer. Exploiting the suitable features of the Fine Needle Aspiration results is the most important diagnostic problem in early stages of breast cancer. In this study, we introduced a new algorithm that can detect breast cancer based on combining an artificial intelligent system and Fine Needle Aspiration (FNA). We studied the features of the Wisconsin breast cancer database, which contained 569 FNA test samples (212 patient samples (malignant) and 357 healthy samples (benign)). In this research, we combined artificial intelligence approaches, such as an Evolutionary Algorithm (EA) with a Genetic Algorithm (GA), and also used exact classifier systems (here, Fuzzy C-Means (FCM)) to separate malignant from benign samples. Furthermore, we examined artificial Neural Networks (NN) to identify the model and structure. This research proposed a new algorithm for an accurate diagnosis of breast cancer. According to the Wisconsin breast cancer database (WDBC), 62.75% of samples were benign and 37.25% were malignant. After applying the proposed algorithm, we achieved a high detection accuracy of about 96.579% on 205 patients who were diagnosed as having breast cancer. It was found that the method had 93% sensitivity, 73% specificity, 65% positive predictive value, and 95% negative predictive value. If done by experts, Fine Needle Aspiration (FNA) can be a reliable replacement for open biopsy in palpable breast masses. Evaluation of FNA samples during aspiration can decrease insufficient samples. FNA can be the first line of diagnosis in women with breast masses, at least in deprived regions, and may increase health standards and clinical supervision of patients. Such a smart, economical, non-invasive, rapid and accurate system can be introduced as a useful diagnostic system for comprehensive treatment of breast cancer. Another advantage of this method is the possibility of diagnosing breast abnormalities.
Multi-INT Complex Event Processing using Approximate, Incremental Graph Pattern Search
2012-06-01
[Figure: Initial performance comparisons of the approximate, incremental graph pattern search algorithm versus SPARQL queries - total execution time (seconds) for 10 executions each of 5 random pattern searches on synthetic data sets of 1,000 to 100,000 RDF triples.]
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
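A toy one-dimensional Python illustration of the balance-heuristic weighting that underlies multiple importance sampling, combining a stratified uniform strategy with an importance strategy; the integrand and probability densities are hypothetical and the sketch is not the shader-level SSAO implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: estimate the integral of f over [0, 1] by combining a stratified
# (uniform) strategy with an importance strategy whose pdf roughly follows f,
# weighting each sample with the balance heuristic
#   w_i(x) = n_i p_i(x) / sum_j n_j p_j(x).
f = lambda x: x**3
p_uniform = lambda x: np.ones_like(x)            # pdf of strategy 1 on [0, 1]
p_importance = lambda x: 4.0 * x**3              # pdf of strategy 2 on [0, 1]

n1 = n2 = 256
x1 = (np.arange(n1) + rng.random(n1)) / n1                   # stratified uniform samples
x2 = np.maximum(rng.random(n2), 1e-12) ** 0.25               # inverse-CDF samples of 4x^3

def balance_weight(x, own_pdf):
    n_own = n1 if own_pdf is p_uniform else n2
    return n_own * own_pdf(x) / (n1 * p_uniform(x) + n2 * p_importance(x))

estimate = (np.sum(balance_weight(x1, p_uniform) * f(x1) / p_uniform(x1)) / n1
            + np.sum(balance_weight(x2, p_importance) * f(x2) / p_importance(x2)) / n2)
print(estimate)   # true value is 0.25
```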
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.
Purpose: Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. Methods: The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) is one of the most important parts in an avionic navigation system. Two problems need to be addressed to improve this system, namely, the degeneracy phenomenon and lack of samples for the standard particle filter (PF). However, the number of samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS receiver autonomous integrity monitoring (RAIM) method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed novel method is better than that of RAIM based on the PF or PSO-PF algorithm.
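A small Python sketch of the chaos ingredient alone: a logistic-map sequence mapped onto the optimization interval, as might be used to seed or perturb PSO particles; the seed value and map parameter are illustrative, and the full CPSO-PF filter is not reproduced:

```python
import numpy as np

def chaotic_initial_positions(n_particles, dim, lower, upper, mu=4.0):
    """Generate particle positions from a logistic-map chaos sequence
    z_{k+1} = mu * z_k * (1 - z_k), then map the chaotic variables from (0, 1)
    onto the optimization interval [lower, upper]. The same idea can be used
    to perturb particles that stagnate in a local optimum."""
    z = np.full(dim, 0.37)   # any seed in (0, 1) away from the map's fixed points
    positions = np.empty((n_particles, dim))
    for i in range(n_particles):
        z = mu * z * (1.0 - z)
        positions[i] = lower + z * (upper - lower)
    return positions

print(chaotic_initial_positions(n_particles=5, dim=3, lower=-1.0, upper=1.0))
```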
NASA Astrophysics Data System (ADS)
Baldi, Alfonso; Jacquot, Pierre
2003-05-01
Graphite-epoxy laminates are subjected to the "incremental hole-drilling" technique in order to investigate the residual stresses acting within each layer of the composite samples. In-plane speckle interferometry is used to measure the displacement field created by each drilling increment around the hole. Our approach features two particularities: (1) we rely on the precise repositioning of the samples in the optical set-up after each new boring step, performed by means of a high precision, numerically controlled milling machine in the workshop; (2) for each increment, we acquire three displacement fields, along the length and width of the samples and at 45°, using a single symmetrical double beam illumination and a rotary stage holding the specimens. The experimental protocol is described in detail and the experimental results are presented, including a comparison with strain gages. Speckle interferometry appears to be a suitable method to respond to the increasing demand for residual stress determination in composite samples.
Method and algorithm of automatic estimation of road surface type for variable damping control
NASA Astrophysics Data System (ADS)
Dąbrowski, K.; Ślaski, G.
2016-09-01
In this paper, the authors present an idea for road surface estimation (recognition) based on statistical analysis of suspension dynamic response signals. For the preliminary analysis, the cumulative distribution function (CDF) was used, leading to the observation that, for the same percentage of samples, different road surfaces produce response values within different ranges of limits, and, conversely, for the same limits, different percentages of samples fall within the range between the limit values. This observation was the basis for the developed algorithm, which was tested using suspension response signals recorded during road tests over various surfaces. The proposed algorithm can be an essential part of an adaptive damping control algorithm for a vehicle suspension or of an adaptive control strategy for suspension damping control.
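A hedged Python sketch of the CDF-limit idea: the fraction of response samples below a few fixed limits is used as a feature vector and matched against stored class profiles; the limit values and reference profiles are hypothetical:

```python
import numpy as np

def cdf_features(signal, limits=(0.2, 0.5, 1.0, 2.0)):
    """Fraction of samples of a suspension response signal (e.g. sprung mass
    acceleration) whose absolute value stays below each limit, i.e. a few
    points of the empirical CDF of |signal|."""
    a = np.abs(np.asarray(signal, float))
    return np.array([np.mean(a <= lim) for lim in limits])

def classify_surface(signal, reference_features, limits=(0.2, 0.5, 1.0, 2.0)):
    """Pick the reference road class whose stored CDF feature vector is
    closest to that of the measured signal."""
    feats = cdf_features(signal, limits)
    return min(reference_features,
               key=lambda name: np.linalg.norm(reference_features[name] - feats))

# Hypothetical reference feature vectors for two surface classes.
reference = {"smooth_asphalt": np.array([0.90, 0.98, 1.00, 1.00]),
             "cobblestone":    np.array([0.35, 0.60, 0.85, 0.97])}
signal = np.random.default_rng(1).normal(0.0, 0.8, size=2000)
print(classify_surface(signal, reference))
```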
Don C. Bragg
2002-01-01
This article is an introduction to the computer software used by the Potential Relative Increment (PRI) approach to optimal tree diameter growth modeling. These DOS programs extract qualified tree and plot data from the Eastwide Forest Inventory Data Base (EFIDB), calculate relative tree increment, sort for the highest relative increments by diameter class, and...
Cost-effectiveness of Collaborative Care for Depression in Human Immunodeficiency Virus Clinics
Fortney, John C; Gifford, Allen L; Rimland, David; Monson, Thomas; Rodriguez-Barradas, Maria C.; Pyne, Jeffrey M
2015-01-01
Objective To examine the cost-effectiveness of the HITIDES intervention. Design Randomized controlled effectiveness and implementation trial comparing depression collaborative care with enhanced usual care. Setting Three Veterans Health Administration (VHA) HIV clinics in the Southern US. Subjects 249 HIV-infected patients completed the baseline interview; 123 were randomized to the intervention and 126 to usual care. Intervention HITIDES consisted of an off-site HIV depression care team that delivered up to 12 months of collaborative care. The intervention used a stepped-care model for depression treatment and specific recommendations were based on the Texas Medication Algorithm Project and the VA/Department of Defense Depression Treatment Guidelines. Main outcome measure(s) Quality-adjusted life years (QALYs) were calculated using the 12-Item Short Form Health Survey, the Quality of Well Being Scale, and by converting depression-free days to QALYs. The base case analysis used outpatient, pharmacy, patient, and intervention costs. Cost-effectiveness was calculated using incremental cost effectiveness ratios (ICERs) and net health benefit (NHB). ICER distributions were generated using nonparametric bootstrap with replacement sampling. Results The HITIDES intervention was more effective and cost-saving compared to usual care in 78% of bootstrapped samples. The intervention NHB was positive and therefore deemed cost-effective using an ICER threshold of $50,000/QALY. Conclusions In HIV clinic settings this intervention was more effective and cost-saving compared to usual care. Implementation of off-site depression collaborative care programs in specialty care settings may be a strategy that not only improves outcomes for patients, but also maximizes the efficient use of limited healthcare resources. PMID:26102447
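A minimal Python sketch of the nonparametric bootstrap used to generate ICER distributions and a cost-effectiveness probability at a $50,000/QALY threshold; the per-patient cost and QALY arrays are synthetic stand-ins, not the trial data:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_icer(cost_tx, qaly_tx, cost_uc, qaly_uc, n_boot=5000, wtp=50_000.0):
    """Nonparametric bootstrap (resampling patients with replacement within
    each arm) of the incremental cost-effectiveness ratio and of net health
    benefit at a given willingness-to-pay threshold."""
    icers, nhb = [], []
    for _ in range(n_boot):
        it = rng.choice(len(cost_tx), len(cost_tx), replace=True)
        iu = rng.choice(len(cost_uc), len(cost_uc), replace=True)
        d_cost = cost_tx[it].mean() - cost_uc[iu].mean()
        d_qaly = qaly_tx[it].mean() - qaly_uc[iu].mean()
        icers.append(d_cost / d_qaly)
        nhb.append(d_qaly - d_cost / wtp)
    return np.array(icers), np.mean(np.array(nhb) > 0)

# Synthetic stand-in data for the two arms (not the trial's data).
cost_tx = rng.normal(9_000, 2_000, 123);  qaly_tx = rng.normal(0.70, 0.10, 123)
cost_uc = rng.normal(9_500, 2_000, 126);  qaly_uc = rng.normal(0.66, 0.10, 126)
icers, prob_cost_effective = bootstrap_icer(cost_tx, qaly_tx, cost_uc, qaly_uc)
print(round(prob_cost_effective, 2))
```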
ERIC Educational Resources Information Center
Simonds, Elise C.; Handel, Richard W.; Archer, Robert P.
2008-01-01
This study evaluated the incremental validity of scores from the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Symptom Checklist-90-Revised (SCL-90-R) in a sample of mental health inpatients originally published by Archer, Griffin, and Aiduk (1995). The incremental validity of scores from the SCL-90-R primary symptom dimensions…
Olmstead, Todd A; Sindelar, Jody L; Petry, Nancy M
2007-03-16
To evaluate the cost-effectiveness of a prize-based intervention as an addition to usual care for stimulant abusers. This cost-effectiveness analysis is based on a randomized clinical trial implemented within the National Drug Abuse Treatment Clinical Trials Network. The trial was conducted at eight community-based outpatient psychosocial drug abuse treatment clinics. Four hundred and fifteen stimulant abusers were assigned to usual care (N=206) or usual care plus abstinence-based incentives (N=209) for 12 weeks. Participants randomized to the incentive condition earned the chance to draw for prizes for submitting substance negative samples; the number of draws earned increased with continuous abstinence time. Incremental cost-effectiveness ratios were estimated to compare prize-based incentives relative to usual care. The primary patient outcome was longest duration of confirmed stimulant abstinence (LDA). Unit costs were obtained via surveys administered at the eight participating clinics. Resource utilizations and patient outcomes were obtained from the clinical trial. Acceptability curves are presented to illustrate the uncertainty due to the sample and to provide policy relevant information. The incremental cost to lengthen the LDA by 1 week was 258 US dollars (95% confidence interval, 191-401 US dollars). Sensitivity analyses on several key parameters show that this value ranges from 163 to 269 US dollars. Compared with the usual care group, the incentive group had significantly longer LDAs and significantly higher costs.
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. Then, the classical results of periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the decomposed time-series model to the subsampling case by using the spectrum characteristics of the NYFR. The subsampling case is common in broadband spectrum surveillance. Finally, an LFM signal with large bandwidth is used as an example to verify the proposed algorithm and to compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upperbound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality of convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
Input-output-controlled nonlinear equation solvers
NASA Technical Reports Server (NTRS)
Padovan, Joseph
1988-01-01
To upgrade the efficiency and stability of the successive substitution (SS) and Newton-Raphson (NR) schemes, the concept of input-output-controlled solvers (IOCS) is introduced. By employing the formal properties of the constrained version of the SS and NR schemes, the IOCS algorithm can handle indefiniteness of the system Jacobian, can maintain iterate monotonicity, and provide for separate control of load incrementation and iterate excursions, as well as having other features. To illustrate the algorithmic properties, the results for several benchmark examples are presented. These define the associated numerical efficiency and stability of the IOCS.
Grid Based Technologies for in silico Screening and Drug Design.
Potemkin, Vladimir; Grishina, Maria
2018-03-08
Various techniques for rational drug design are presented in this paper. The methods are based on substituting antipharmacophore atoms of the molecules of a training dataset with new atoms and/or groups of atoms that increase the atomic bioactivity increments obtained in a SAR study. Furthermore, a design methodology based on the genetic algorithm DesPot for discrete optimization and generation of new drug candidate structures is described. Additionally, a wide spectrum of SAR approaches (3D/4D QSAR interior- and exterior-based methods - BiS, CiS, ConGO, CoMIn - and the high-quality docking method ReDock) using the MERA force field and/or the AlteQ quantum chemical method for correct prognosis of bioactivity and bioactive probability is described. The design methods are now implemented at the www.chemosophia.com website for online computational services.
Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia.
Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Dencker Johansen, Mette; Christensen, Toke Folke; Tarnow, Lise; Christiansen, Jens Sandahl; Hejlesen, Ole
2014-10-01
The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD) sample-based sensitivity for the hypoglycemic threshold of 70 mg/dL was 31% (33%) for the Guardian RT algorithm and 70% (33%) for the new algorithm. The mean (SD) sample-based specificity at the same hypoglycemic threshold was 95% (8%) for the Guardian RT algorithm and 90% (16%) for the new calibration algorithm. The sensitivity of the event-based hypoglycemia detection for the hypoglycemic threshold of 70 mg/dL was 61% for the Guardian RT calibration and 89% for the new calibration algorithm. Application of the new calibration caused one false-positive instance for the event-based hypoglycemia detection, whereas the Guardian RT caused no false-positive instances. The overestimation of plasma glucose by CGM was corrected from 33.2 mg/dL in the Guardian RT algorithm to 21.9 mg/dL in the new calibration algorithm. The results suggest that the new algorithm may reduce the inaccuracy of Guardian RT CGM system within the hypoglycemic range; however, data from a larger number of patients are required to compare the clinical reliability of the two algorithms.
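For readers who want to reproduce the kinds of metrics quoted above, the following is a minimal sketch, not the study's code, of absolute relative deviation and sample-based sensitivity/specificity at a 70 mg/dL hypoglycemia threshold; the toy paired glucose values are assumptions.

```python
import numpy as np

def cgm_accuracy(cgm, ref, hypo_threshold=70.0):
    """Per-sample accuracy metrics for paired CGM and reference glucose values (mg/dL)."""
    cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
    ard = np.abs(cgm - ref) / ref * 100.0          # absolute relative deviation, %
    ref_hypo = ref < hypo_threshold                # reference says hypoglycemia
    cgm_hypo = cgm < hypo_threshold                # CGM says hypoglycemia
    sensitivity = np.mean(cgm_hypo[ref_hypo]) if ref_hypo.any() else np.nan
    specificity = np.mean(~cgm_hypo[~ref_hypo]) if (~ref_hypo).any() else np.nan
    return {"median_ARD": float(np.median(ard)), "mean_ARD": float(ard.mean()),
            "sensitivity": float(sensitivity), "specificity": float(specificity)}

# toy paired samples (CGM reading vs. plasma reference)
print(cgm_accuracy(cgm=[68, 95, 150, 60], ref=[62, 100, 170, 75]))
```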
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, traditional facial recognition algorithms reshape the facial images into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
Paz, Andrea; Crawford, Andrew J
2012-11-01
Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
Simonds, Elise C; Handel, Richard W; Archer, Robert P
2008-03-01
This study evaluated the incremental validity of scores from the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Symptom Checklist-90-Revised (SCL-90-R) in a sample of mental health inpatients originally published by Archer, Griffin, and Aiduk (1995). The incremental validity of scores from the SCL-90-R primary symptom dimensions and MMPI-2 Clinical, Content, and Restructured Clinical scales was assessed in a sample of 544 mental health inpatients using conceptually related items from the Brief Psychiatric Rating Scale (BPRS) as criteria. A series of hierarchical multiple regressions indicated that scores from the SCL-90-R primary symptom dimensions exhibited limited incremental validity (Mdn ΔR² = .01, range = 0-.01), whereas scores from MMPI-2 scales contributed additional information in the prediction of ratings on all but one BPRS item (Mdn ΔR² = .08, range = .04-.12).
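The ΔR² statistic reported above is the gain in explained variance when a block of predictors is entered after a baseline block in a hierarchical regression. A minimal sketch of that computation, assuming ordinary least squares and synthetic data rather than the study's scales:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def incremental_r2(X_base, X_added, y):
    """Delta R^2 when X_added is entered after X_base (hierarchical regression)."""
    return r_squared(np.column_stack([X_base, X_added]), y) - r_squared(X_base, y)

# toy data: does a second predictor block add anything beyond the first?
rng = np.random.default_rng(1)
base = rng.standard_normal((200, 3))
added = rng.standard_normal((200, 2))
y = base @ np.array([0.5, 0.2, 0.0]) + 0.3 * added[:, 0] + rng.standard_normal(200)
print(incremental_r2(base, added, y))
```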
Bozorgmanesh, Mohammadreza; Sardarinia, Mahsa; Hajsheikholeslami, Farhad; Azizi, Fereidoun; Hadaegh, Farzad
2016-02-01
To examine whether a body shape index (ABSI), calculated by using waist circumference (WC) adjusted for height and weight, could improve the predictive performance for cardiovascular disease (CVD) of the Framingham general CVD algorithm, and to compare its predictive performance with other anthropometric measures. We analyzed data on a 10-year population-based follow-up of 8,248 (4,471 women) individuals aged ≥30 years, free of CVD at baseline. CVD risk was estimated for a 1 SD increment in ABSI, body mass index (BMI), waist-to-hip ratio (WHpR) and waist-to-height ratio (WHtR), by incorporating them, one at a time, into multivariate accelerated failure time models. ABSI was associated with multivariate-adjusted increased risk of incident CVD among both men (1.26, 95% CI 1.09-1.46) and women (1.17, 1.03-1.32). Among men, for a one-SD increment, ABSI conferred a greater increase in the hazard of CVD [1.26 (1.09-1.46)] than did BMI [1.06 (0.94-1.20)], WC [1.15 (1.03-1.28)], WHpR [1.02 (1.01-1.03)] and WHtR [1.16 (1.02-1.31)], and the corresponding figures among women were 1.17 (1.03-1.32), 1.02 (0.90-1.16), 1.11 (0.98-1.27), 1.03 (1.01-1.05) and 1.14 (0.99-1.03), respectively. Neither ABSI nor the other anthropometric measures added to the predictive ability of the Framingham general CVD algorithm. Although ABSI could not improve the predictability of the Framingham algorithm, it provides more information than other traditional anthropometric measures in settings where information on traditional CVD risk factors is not available, and it can be used as a practical criterion to predict adiposity-related health risks in clinical assessments.
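ABSI is waist circumference adjusted for height and weight through BMI. The sketch below uses the commonly cited definition ABSI = WC / (BMI^(2/3) * height^(1/2)) with metric units; the exact adjustment used in this study may differ, and the example values are illustrative.

```python
def absi(waist_m, height_m, weight_kg):
    """A Body Shape Index: waist circumference adjusted for height and weight.
    Common definition: ABSI = WC / (BMI^(2/3) * height^(1/2)), WC and height in metres."""
    bmi = weight_kg / height_m ** 2
    return waist_m / (bmi ** (2.0 / 3.0) * height_m ** 0.5)

# illustrative person: 95 cm waist, 1.70 m tall, 80 kg
print(absi(waist_m=0.95, height_m=1.70, weight_kg=80.0))
```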
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
Unsupervised classification of multivariate geostatistical data: Two algorithms
NASA Astrophysics Data System (ADS)
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, involve a growing number of variables, and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performances of both algorithms are assessed on toy examples and a mining dataset.
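The proximity-constrained agglomerative idea can be illustrated with scikit-learn, where a k-nearest-neighbour graph built in coordinate space restricts which clusters may merge. This is a simplified stand-in for the authors' algorithm, with synthetic data and arbitrary parameter choices.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))          # irregular sample locations
values = np.column_stack([                           # two correlated "geostatistical" variables
    np.sin(coords[:, 0] / 15.0) + 0.1 * rng.standard_normal(500),
    coords[:, 1] / 100.0 + 0.1 * rng.standard_normal(500),
])

# proximity graph in the coordinate space: clusters may only merge along its edges
# (scikit-learn symmetrizes the graph internally if needed)
graph = kneighbors_graph(coords, n_neighbors=8, include_self=False)

model = AgglomerativeClustering(n_clusters=4, connectivity=graph, linkage="ward")
labels = model.fit_predict(values)                   # clustering driven by the variable values
```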
A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing
Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian
2016-01-01
Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, which is comparable to or better than other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
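The incremental and run-length-limited encodings mentioned for the metadata and quality streams can be pictured with the toy sketch below; it is an illustration of the general idea, not LW-FQZip's actual encoders, and the maximum run length is an assumption.

```python
from itertools import groupby

def delta_encode(values):
    """Incremental (delta) encoding: store the first value and successive differences."""
    prev, out = 0, []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def run_length_encode(s, max_run=255):
    """Run-length encoding with a bounded run length (run-length-limited style)."""
    out = []
    for ch, grp in groupby(s):
        n = len(list(grp))
        while n > 0:                      # split runs longer than max_run
            out.append((ch, min(n, max_run)))
            n -= max_run
    return out

print(delta_encode([1000, 1004, 1010, 1011]))   # -> [1000, 4, 6, 1]
print(run_length_encode("IIIIHHH###"))          # -> [('I', 4), ('H', 3), ('#', 3)]
```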
Decorating surfaces with bidirectional texture functions.
Zhou, Kun; Du, Peng; Wang, Lifeng; Matsushita, Yasuyuki; Shi, Jiaoying; Guo, Baining; Shum, Heung-Yeung
2005-01-01
We present a system for decorating arbitrary surfaces with bidirectional texture functions (BTF). Our system generates BTFs in two steps. First, we automatically synthesize a BTF over the target surface from a given BTF sample. Then, we let the user interactively paint BTF patches onto the surface such that the painted patches seamlessly integrate with the background patterns. Our system is based on a patch-based texture synthesis approach known as quilting. We present a graphcut algorithm for BTF synthesis on surfaces, and the algorithm works well for a wide variety of BTF samples, including those which present problems for existing algorithms. We also describe a graphcut texture painting algorithm for creating new surface imperfections (e.g., dirt, cracks, scratches) from existing imperfections found in input BTF samples. Using these algorithms, we can decorate surfaces with real-world textures that have spatially-variant reflectance, fine-scale geometry details, and surface imperfections. A particularly attractive feature of BTF painting is that it allows us to capture imperfections of real materials and paint them onto geometry models. We demonstrate the effectiveness of our system with examples.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data is subject to an upper frequency. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the upper limit. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually suffer a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values, and a multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reasons for the high performance of the proposed approach.
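One way to picture information-aware adaptive sampling is to weight rare sensed values more heavily than common ones, so the retained samples cover the value range uniformly. The sketch below is an illustrative stand-in under that assumption, not the paper's exact formulation; the histogram binning and synthetic readings are placeholders.

```python
import numpy as np

def sampling_probabilities(values, bins=10):
    """Give rarer (more informative) sensed values a higher sampling probability,
    so that the collected data cover the value range more uniformly."""
    values = np.asarray(values, float)
    hist, edges = np.histogram(values, bins=bins)
    p_value = hist / hist.sum()                        # empirical probability per bin
    info = -np.log2(np.clip(p_value, 1e-12, None))     # self-information of each bin
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    w = info[idx]
    return w / w.sum()                                 # normalised per-sample probabilities

# synthetic readings: a common baseline plus a rare high-value episode
rng = np.random.default_rng(0)
readings = np.concatenate([rng.normal(70, 2, 900), rng.normal(120, 5, 100)])
probs = sampling_probabilities(readings)
kept = rng.choice(len(readings), size=100, replace=False, p=probs)
```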
Fast Inbound Top-K Query for Random Walk with Restart.
Zhang, Chao; Jiang, Shan; Chen, Yucheng; Sun, Yidan; Han, Jiawei
2015-09-01
Random walk with restart (RWR) is widely recognized as one of the most important node proximity measures for graphs, as it captures the holistic graph structure and is robust to noise in the graph. In this paper, we study a novel query based on the RWR measure, called the inbound top-k (Ink) query. Given a query node q and a number k, the Ink query aims at retrieving k nodes in the graph that have the largest weighted RWR scores to q. Ink queries can be highly useful for various applications such as traffic scheduling, disease treatment, and targeted advertising. Nevertheless, none of the existing RWR computation techniques can accurately and efficiently process the Ink query in large graphs. We propose two algorithms, namely Squeeze and Ripple, both of which can accurately answer the Ink query in a fast and incremental manner. To identify the top-k nodes, Squeeze iteratively performs matrix-vector multiplication and estimates the lower and upper bounds for all the nodes in the graph. Ripple employs a more aggressive strategy by only estimating the RWR scores for the nodes falling in the vicinity of q; the nodes outside the vicinity do not need to be evaluated because their RWR scores are propagated from the boundary of the vicinity and are thus upper bounded. Ripple incrementally expands the vicinity until the top-k result set can be obtained. Our extensive experiments on real-life graph data sets show that Ink queries can retrieve interesting results, and the proposed algorithms are orders of magnitude faster than the state-of-the-art method.
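RWR scores themselves can be computed by simple power iteration; ranking nodes by their scores to a query node q can be sketched by running the same iteration on the reversed-edge transition matrix. This brute-force sketch ignores the Squeeze/Ripple accelerations described above, and the toy graph is an assumption.

```python
import numpy as np

def rwr_scores(P, q, restart=0.15, tol=1e-10, max_iter=1000):
    """Random walk with restart via power iteration.
    P is column-stochastic: P[i, j] = probability of stepping from node j to node i."""
    n = P.shape[0]
    e = np.zeros(n); e[q] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_next = (1.0 - restart) * (P @ r) + restart * e
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# toy directed graph; A[i, j] = 1 if there is an edge j -> i
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)
# scores *to* node q are obtained by walking on the reversed edges
P_rev = A.T / A.T.sum(axis=0, keepdims=True)
scores_to_q = rwr_scores(P_rev, q=2)
top_k = np.argsort(-scores_to_q)[:2]   # brute-force inbound top-k
```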
ERIC Educational Resources Information Center
Bryan, Craig J.; Steiner-Pappalardo, Nicole; Rudd, M. David
2009-01-01
The incremental impact of adding a mnemonic to remember suicide warning signs to the Air Force Suicide Prevention Program (AFSPP) community awareness briefing was investigated with a sample of young, junior-enlisted airmen. Participants in the standard briefing significantly increased their ability to list suicide warning signs and improved…
ERIC Educational Resources Information Center
Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott
2008-01-01
The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…
Das, Shyama; Idicula, Sumam Mary
2011-01-01
The goal of biclustering in a gene expression data matrix is to find a submatrix such that the genes in the submatrix show highly correlated activities across all conditions in the submatrix. A measure called mean squared residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within the submatrix. The MSR difference is the incremental increase in MSR when a gene or condition is added to the bicluster. In this chapter, three biclustering algorithms using an MSR threshold (MSRT) and an MSR difference threshold (MSRDT) are experimented with and compared. All these methods use seeds generated by the K-Means clustering algorithm. These seeds are then enlarged by adding more genes and conditions. The first algorithm makes use of MSRT alone. Both the second and third algorithms make use of MSRT and the newly introduced concept of MSRDT. Highly coherent biclusters are obtained using this concept. In the third algorithm, a different method is used to calculate the MSRDT. The results obtained on benchmark datasets show that these algorithms perform better than many of the metaheuristic algorithms.
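The mean squared residue and the MSR difference used by these algorithms can be written compactly. A minimal sketch follows, assuming the standard Cheng-and-Church-style definition of MSR (residue = element minus row mean minus column mean plus overall mean); the toy submatrix is illustrative.

```python
import numpy as np

def mean_squared_residue(sub):
    """MSR of a bicluster submatrix (rows = genes, columns = conditions)."""
    row_mean = sub.mean(axis=1, keepdims=True)
    col_mean = sub.mean(axis=0, keepdims=True)
    overall = sub.mean()
    residue = sub - row_mean - col_mean + overall
    return float((residue ** 2).mean())

def msr_difference(sub, candidate_row):
    """Incremental increase in MSR when one extra gene (row) joins the bicluster."""
    enlarged = np.vstack([sub, candidate_row])
    return mean_squared_residue(enlarged) - mean_squared_residue(sub)

bic = np.array([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])   # perfectly additive pattern: MSR = 0
print(mean_squared_residue(bic))
print(msr_difference(bic, np.array([5.0, 5.0, 5.0])))
```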
Interactive outlining: an improved approach using active contours
NASA Astrophysics Data System (ADS)
Daneels, Dirk; van Campenhout, David; Niblack, Carlton W.; Equitz, Will; Barber, Ron; Fierens, Freddy
1993-04-01
The purpose of our work is to outline objects in images in an interactive environment. We use an improved method based on energy-minimizing active contours, or 'snakes'. Kass et al. proposed a variational technique; Amini used dynamic programming; and Williams and Shah introduced a fast, greedy algorithm. We combine the advantages of the latter two methods in a two-stage algorithm. The first stage is a greedy procedure that provides fast initial convergence. It is enhanced with a cost term that extends over a large number of points to avoid oscillations. The second stage, when accuracy becomes important, uses dynamic programming. This step is accelerated by the use of alternating search neighborhoods and by dropping stable points from the iterations. We have also added several features for user interaction. First, the user can define points of high confidence. Mathematically, this results in an extra cost term, and in that way the robustness in difficult areas (e.g., noisy edges, sharp corners) is improved. We also give the user the possibility of incremental contour tracking, thus providing feedback on the refinement process. The algorithm has been tested on numerous photographic clip art images, and extensive tests on medical images are in progress.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practice, the fault monitoring signals of ship power equipment usually provide few samples, and the data features are non-linear. This paper adopts the least squares support vector machine (LSSVM) method to deal with the problem of fault pattern identification in the case of small-sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the search step length, which can effectively solve the problems of slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
Different realizations of Cooper-Frye sampling with conservation laws
NASA Astrophysics Data System (ADS)
Schwarz, C.; Oliinychenko, D.; Pang, L.-G.; Ryu, S.; Petersen, H.
2018-01-01
Approaches based on viscous hydrodynamics for the hot and dense stage and hadronic transport for the final dilute rescattering stage are successfully applied to the dynamic description of heavy ion reactions at high beam energies. One crucial step in such hybrid approaches is the so-called particlization, which is the transition between the hydrodynamic description and the microscopic degrees of freedom. For this purpose, individual particles are sampled on the Cooper-Frye hypersurface. In this work, four different realizations of the sampling algorithms are compared, with three of them incorporating the global conservation laws of quantum numbers in each event. The algorithms are compared within two types of scenarios: a simple ‘box’ hypersurface consisting of only one static cell and a typical particlization hypersurface for Au+Au collisions at √(s_NN) = 200 GeV. For all algorithms the mean multiplicities (or particle spectra) remain unaffected by global conservation laws in the case of large volumes. In contrast, the fluctuations of the particle numbers are affected considerably. The fluctuations of the newly developed SPREW algorithm based on the exponential weight, and the recently suggested SER algorithm based on ensemble rejection, are smaller than those without conservation laws and agree with the expectation from the canonical ensemble. The previously applied mode sampling algorithm produces dramatically larger fluctuations than expected in the corresponding microcanonical ensemble, and therefore should be avoided in fluctuation studies. This study might be of interest for the investigation of particle fluctuations and correlations, e.g. the suggested signatures for a phase transition or a critical endpoint, in hybrid approaches that are affected by global conservation laws.
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of real-time locating systems and SLAM technology for ROS-based robots. It proposes an improved particle filter locating algorithm that effectively reduces the matching time between the laser radar and the map; additional ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, the re-sampling has been reduced by about 5/6, which directly removes the corresponding matching operations in the robot's algorithm.
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm which utilizes a newly developed color-based particle filter and a novel strategy for handling occlusions is proposed in this paper. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, as a local optimizer, is integrated into the sampling stage of the particle filter; it steers samples to a region of high likelihood, and therefore fewer samples are required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples and hence the computational cost, and achieves better handling of complete occlusion over a few frames.
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.
Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David
2017-02-01
Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.
Automated protein NMR structure determination using wavelet de-noised NOESY spectra.
Dancea, Felician; Günther, Ulrich
2005-11-01
A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross-peak lists, which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures, which yield peak lists of differing noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking.
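A wavelet de-noising step of the kind described can be sketched with PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct, and vary the threshold to obtain de-noised versions of differing noise content. The wavelet choice, threshold rule, and synthetic signal below are assumptions, not the authors' settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="sym8", level=4, scale=1.0):
    """Soft-threshold wavelet de-noising; larger `scale` removes more noise
    (and yields a sparser peak list when applied to spectra)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = scale * sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# several de-noising strengths give the "incremental peak lists" of differing noise content
noisy = np.sin(np.linspace(0, 20, 1024)) + 0.3 * np.random.randn(1024)
versions = [wavelet_denoise(noisy, scale=s) for s in (0.5, 1.0, 2.0)]
```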
Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems
2010-02-01
Figures: Reachable Set; LittleDog Robot; Dog bounding up stairs. A motion planning algorithm was implemented on LittleDog, a quadruped robot; the algorithm successfully planned bounding trajectories over extremely...
Balouchestani, Mohammadreza; Krishnan, Sridhar
2014-01-01
Long-term recording of electrocardiogram (ECG) signals plays an important role in health care systems for the diagnosis and treatment of heart diseases. Clustering and classification of the collected data are essential for detecting concealed information about the P-QRS-T waves in long-term ECG recordings. Currently used algorithms have their share of drawbacks: 1) clustering and classification cannot be done in real time; 2) they suffer from huge energy consumption and sampling load. These drawbacks motivated us to develop a novel optimized clustering algorithm that can easily scan large ECG datasets to establish low-power long-term ECG recording. In this paper, we present an advanced K-means clustering algorithm based on Compressed Sensing (CS) theory as a random sampling procedure. Then, two dimensionality reduction methods, Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC), followed by sorting the data using the K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers, are applied to the proposed algorithm. We show that our algorithm based on PCA features in combination with the K-NN classifier performs better than the other methods. The proposed algorithm outperforms existing algorithms by increasing classification accuracy by 11%. In addition, the proposed algorithm achieves, for the K-NN and PNN classifiers, classification accuracies and a Receiver Operating Characteristic (ROC) area of 99.98%, 99.83%, and 99.75%, respectively.
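The PCA-plus-K-NN stage can be sketched with scikit-learn. The random matrix standing in for compressively sampled ECG features and the class labels are placeholders, not the paper's dataset, and the component/neighbour counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# stand-in for compressively sampled ECG beats: 500 "beats" x 64 random projections
X = rng.standard_normal((500, 64))
y = rng.integers(0, 3, size=500)                  # stand-in beat classes

# dimensionality reduction by PCA followed by a K-NN classifier
clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())    # cross-validated accuracy
```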
Verification and Validation of KBS with Neural Network Components
NASA Technical Reports Server (NTRS)
Wen, Wu; Callahan, John
1996-01-01
Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN-based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS that depends on a rule database and a production engine, an ANN-based system mimics the decisions of an expert without explicitly formulating if-then type rules. In fact, ANNs demonstrate their superiority when such if-then type rules are hard for a human expert to generate. Verification of traditional knowledge based systems rests on proofs of consistency and completeness of the rule knowledge base and correctness of the production engine. These techniques, however, cannot be directly applied to ANN-based components. In this position paper, we propose a verification and validation procedure for KBS with ANN-based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specifications using an ANN rule extraction algorithm.
Li, Qingli; Zhang, Jingfa; Wang, Yiting; Xu, Guoteng
2009-12-01
A molecular spectral imaging system has been developed based on microscopy and spectral imaging technology. The system is capable of acquiring molecular spectral images from 400 nm to 800 nm with 2 nm wavelength increments. The basic principles, instrumental systems, and system calibration method as well as its applications for the calculation of the stain-uptake by tissues are introduced. As a case study, the system is used for determining the pathogenesis of diabetic retinopathy and evaluating the therapeutic effects of erythropoietin. Some molecular spectral images of retinal sections of normal, diabetic, and treated rats were collected and analyzed. The typical transmittance curves of positive spots stained for albumin and advanced glycation end products are retrieved from molecular spectral data with the spectral response calibration algorithm. To explore and evaluate the protective effect of erythropoietin (EPO) on retinal albumin leakage of streptozotocin-induced diabetic rats, an algorithm based on Beer-Lambert's law is presented. The algorithm can assess the uptake by histologic retinal sections of stains used in quantitative pathology to label albumin leakage and advanced glycation end products formation. Experimental results show that the system is helpful for the ophthalmologist to reveal the pathogenesis of diabetic retinopathy and explore the protective effect of erythropoietin on retinal cells of diabetic rats. It also highlights the potential of molecular spectral imaging technology to provide more effective and reliable diagnostic criteria in pathology.
NASA Astrophysics Data System (ADS)
Streuber, Gregg Mitchell
Environmental and economic factors motivate the pursuit of more fuel-efficient aircraft designs. Aerodynamic shape optimization is a powerful tool in this effort, but is hampered by the presence of multimodality in many design spaces. Gradient-based multistart optimization uses a sampling algorithm and multiple parallel optimizations to reliably apply fast gradient-based optimization to moderately multimodal problems. Ensuring that the sampled geometries remain physically realizable requires manually developing specialized linear constraints for each class of problem. Utilizing free-form deformation geometry control allows these linear constraints to be written in a geometry-independent fashion, greatly easing the process of applying the algorithm to new problems. This algorithm was used to assess the presence of multimodality when optimizing a wing in subsonic and transonic flows, under inviscid and viscous conditions, and a blended wing-body under transonic, viscous conditions. Multimodality was present in every wing case, while the blended wing-body was found to be generally unimodal.
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve the reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, in which the principal components weighted by visual features form the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
Zheng, Li; Silliman, Stephen E.
2000-01-01
A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first-order solution is developed for the case of a transmissivity field which is isotropic and whose second-order behavior can be characterized by an exponential covariance structure. The estimates of the variance σ_Y² and the integral scale λ of the log transmissivity field are then obtained by fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two-dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σ_Y² = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σ_Y² in aquifers with values of σ_Y² up to 2.0, but the errors in those estimates were higher for σ_Y² equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.
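The sample semivariogram of head residual increments is the empirical quantity being fitted. Below is a minimal isotropic estimator, with synthetic residuals standing in for field data; the binning choice and the transect geometry are assumptions.

```python
import numpy as np

def empirical_semivariogram(coords, values, n_bins=12):
    """Isotropic sample semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per lag bin."""
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq_diff = (values[i] - values[j]) ** 2
    edges = np.linspace(0.0, lags.max(), n_bins + 1)
    which = np.digitize(lags, edges[1:-1])
    gamma = np.array([0.5 * sq_diff[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma

# head residual increments along a transect: differences of residuals at neighbouring wells
residuals = np.cumsum(np.random.randn(50))        # stand-in head residuals
hri = np.diff(residuals)                          # head residual increments (HRIs)
xy = np.column_stack([np.arange(len(hri)), np.zeros(len(hri))])
lag, gam = empirical_semivariogram(xy, hri)
```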
NASA Astrophysics Data System (ADS)
Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei
2016-07-01
A method without requirements on knowledge of the thermal properties of coatings or of their substrates would be of interest in industrial applications. Supervised machine learning regression may provide a solution to this problem. This paper compares the performance of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) for coating thickness estimation based on surface temperature increments collected via time-resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and other factors on surface temperature increments; it is theoretically possible to correlate coating thickness with the surface temperature increment. Based on these analyses, the laser power is selected so that, during heating, the temperature increment is high enough to resolve the coating thickness variation but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross-validation. Another 28 sets of data are then collected to test the performance of the models. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.
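A support-vector regression of coating thickness on temperature-increment features, with hyper-parameters tuned by 10-fold cross-validation as in the study, can be sketched with scikit-learn. The synthetic features, the assumed thickness-versus-increment relationship, and the parameter grid below are placeholders, not the paper's data or settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# stand-in features: surface temperature increments sampled at several times after heating
X = rng.uniform(1.0, 6.0, size=(61, 8))
# stand-in target: thicker coatings give smaller temperature increments (illustrative only)
y = 600.0 - 80.0 * X[:, 0] + 5.0 * rng.standard_normal(61)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(svr, {"svr__C": [1, 10, 100],
                          "svr__gamma": ["scale", 0.1, 1.0],
                          "svr__epsilon": [0.1, 1.0, 5.0]}, cv=10)  # 10-fold CV as in the study
grid.fit(X, y)
predicted_thickness_um = grid.predict(X[:5])
```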
An improved target velocity sampling algorithm for free gas elastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Walsh, Jonathan A.
We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
1997-02-01
application with a strong resemblance to a video game, concern has been raised that prior video game experience might have a moderating effect on scores. Much... such as spatial ability. The effects of computer or video game experience on work sample scores have not been systematically investigated. The purpose... of this study was to evaluate the incremental validity of prior video game experience over that of general aptitude as a predictor of work sample test
NASA Astrophysics Data System (ADS)
Fan, C.; Koeniger, P.; Wang, H.; Frechen, M.
2009-04-01
Sclerochronology, the study of periodic increments in skeletal organisms, can decipher the life history and environmental records preserved in fossil shells. Although there have been a number of studies that apply isotopic analyses to shells in the open ocean and in fresh water, investigations in brackish environments are rare. One of the common inhabitants of estuaries is the Crassostrea oyster. Kirby et al. (1998) demonstrated a close correspondence between the ligamental increments of convex and concave bands and yearly ¹⁸O cycles; Andrus and Crowe (2000) found a close correspondence between translucent growth bands on the cross-section of the hinge and yearly ¹⁸O cycles. They conclude that the morphological features on the hinge and the growth bands on the cross-section are formed annually and can be used to accurately determine age and growth rate in this species. However, Surge et al. (2001) could not find that these morphologic features have seasonal significance in C. virginica shells; therefore, these concave ridges are not reliable independent proxies of seasonality. These studies were carried out with C. virginica shells; none was carried out with natural C. gigas, which is widely distributed along Pacific coastal areas. C. gigas has been introduced from its native home to locations all over the world, ranging from North America to Australia and Europe, and it has become an important commercial harvest in many of these places. Buried Holocene oyster shells of C. gigas were sampled from a huge buried oyster reef west of the Bohai Sea, China. One of these shells was selected for high-resolution micro-sampling and stable isotope analyses testing the assumption that C. gigas ligamental increments are annual in nature. We analyzed 236 consecutive samples from the shell to show that the morphologic features both on the hinge and on the cross-section are annual, by comparing them to the ¹⁸O profiles. We tested the assumption that the morphologic features of C. gigas are delineated by convex tops and concave bottoms on the hinge and corresponding translucent growth bands on the cross-section. The shell has 13.5 ligamental increments, based on 13.5 convex bands and 13 concave bottoms on the hinge. Convex tops correspond to ¹⁸O minima (summers), whereas concave bottoms correspond to ¹⁸O maxima, which were formed during the low winter temperatures in the study area. We demonstrate that the ligamental increments of convex tops, concave bottoms and translucent growth bands in the studied C. gigas shell are suitable indicators of annual growth increments. The life spans, growth rates, and the timing of death can be determined from the ligament increments and isotope profiles of buried oyster shells.
Sensitivity analysis of complex coupled systems extended to second and higher order derivatives
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
In the design of engineering systems, "what if" questions often arise, such as: what will be the change in aircraft payload if the wing aspect ratio is incremented by 10 percent? Answers to such questions are commonly sought by incrementing the pertinent variable and reevaluating the major disciplinary analyses involved. These analyses are contributed by engineering disciplines that are usually coupled, as are aerodynamics, structures, and performance in the context of the question above. The "what if" questions can be answered precisely by computation of the derivatives. A method for calculation of the first derivatives was developed previously. An algorithm is presented for calculation of the second and higher order derivatives.
Wearable EEG via lossless compression.
Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2016-08-01
This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on an algorithm previously reported by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
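The experiment-selection criterion is the Shannon entropy of the outcomes predicted by a set of probable models. The sketch below scores a grid of candidate experiments by that entropy using brute-force search (the nested-entropy-sampling acceleration itself is not shown); the model family, posterior samples, and candidate grid are illustrative assumptions.

```python
import numpy as np

def outcome_entropy(predictions, bins=20):
    """Shannon entropy (bits) of the distribution of outcomes predicted by a set of
    probable models for one candidate experiment."""
    hist, _ = np.histogram(predictions, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 10.0, 50)              # experiment parameter grid (e.g. a time)
models = rng.uniform(0.5, 2.0, size=200)             # posterior samples of a decay-rate parameter
predicted = np.exp(-np.outer(models, candidates))    # each column: predictions for one experiment

# pick the experiment whose predicted outcomes are most spread out (maximum entropy)
best = candidates[np.argmax([outcome_entropy(col) for col in predicted.T])]
```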
LETTER TO THE EDITOR: Free-response operator characteristic models for visual search
NASA Astrophysics Data System (ADS)
Hutchinson, T. P.
2007-05-01
Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography which is applied to inspect weakly absorbing low-Z samples. Refraction-angle images which are extracted from a series of raw DEI images measured in different positions of the rocking curve of the analyser can be regarded as projections of DEI-CT. Based on them, the distribution of refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two kinds of comparison, the comparison of different extraction methods and the comparison between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, draw the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may be the best combination at present. Though all current extraction methods including the MRA method are approximate methods and cannot calculate very large refraction-angle values, the HFBP algorithm based on the MRA method is able to provide quite acceptable estimations of the distribution of refractive index decrement of the sample. The conclusion is proved by the experimental results at the Beijing Synchrotron Radiation Facility.
Scattering properties of electromagnetic waves from metal object in the lower terahertz region
NASA Astrophysics Data System (ADS)
Chen, Gang; Dang, H. X.; Hu, T. Y.; Su, Xiang; Lv, R. C.; Li, Hao; Tan, X. M.; Cui, T. J.
2018-01-01
An efficient hybrid algorithm is proposed to analyze the electromagnetic scattering properties of metal objects in the lower terahertz (THz) band. A metal object can be viewed as a perfectly electrically conducting object with a slightly rough surface in the lower THz region, so the THz field scattered from the metal object can be divided into coherent and incoherent parts. The physical optics and truncated-wedge incremental-length diffraction coefficient methods are combined to compute the coherent part, while the small perturbation method is used for the incoherent part. With the Monte Carlo method, the radar cross section of the rough metal surface is computed by the multilevel fast multipole algorithm and the proposed hybrid algorithm, respectively. The numerical results show that the proposed algorithm has good accuracy and rapidly simulates the scattering properties in the lower THz region.
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner, and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Liebetreu, John; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben
2005-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming problem, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.
2007-01-01
This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.
High performance transcription factor-DNA docking with GPU computing
2012-01-01
Background Protein-DNA docking is a very challenging problem in structural bioinformatics and has important implications in a number of applications, such as structure-based prediction of transcription factor binding sites and rational drug design. Protein-DNA docking is very computationally demanding due to the high cost of energy calculation and the statistical nature of conformational sampling algorithms. More importantly, experiments show that the docking quality depends on the coverage of the conformational sampling space. It is therefore desirable to accelerate the computation of the docking algorithm, not only to reduce computing time, but also to improve docking quality. Methods In an attempt to accelerate the sampling process and to improve the docking performance, we developed a graphics processing unit (GPU)-based protein-DNA docking algorithm. The algorithm employs a potential-based energy function to describe the binding affinity of a protein-DNA pair, and integrates Monte-Carlo simulation and a simulated annealing method to search through the conformational space. Algorithmic techniques were developed to improve the computation efficiency and scalability on GPU-based high performance computing systems. Results The effectiveness of our approach is tested on a non-redundant set of 75 TF-DNA complexes and a newly developed TF-DNA docking benchmark. We demonstrated that the GPU-based docking algorithm can significantly accelerate the simulation process and thereby improve the chance of finding near-native TF-DNA complex structures. This study also suggests that further improvement in protein-DNA docking research would require efforts from two integral aspects: improvement in computation efficiency and energy function design. Conclusions We present a high performance computing approach for improving the prediction accuracy of protein-DNA docking. The GPU-based docking algorithm accelerates the search of the conformational space and thus increases the chance of finding more near-native structures. To the best of our knowledge, this is the first ad hoc effort of applying GPU or GPU clusters to the protein-DNA docking problem. PMID:22759575
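As a hedged illustration of the conformational search machinery named above (Monte-Carlo simulation combined with simulated annealing), the CPU-only sketch below anneals a toy six-dimensional "pose" under a made-up energy function; the potential-based TF-DNA energy, the GPU kernels, and the actual move and cooling parameters are not reproduced here.

    # Metropolis / simulated-annealing search loop on a toy 6-D "pose" and energy.
    # The energy function, move size and cooling schedule are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    def energy(conf):
        # stand-in "binding energy": a rugged multi-well function of the pose vector
        return np.sum(conf ** 2) + np.sum(np.cos(3.0 * conf))

    conf = rng.normal(size=6)                 # initial pose (e.g., 3 translations + 3 rotations)
    e = energy(conf)
    best_conf, best_e = conf.copy(), e

    T = 5.0                                   # initial temperature
    for step in range(20000):
        trial = conf + rng.normal(scale=0.1, size=6)          # random pose perturbation
        e_trial = energy(trial)
        if e_trial < e or rng.random() < np.exp(-(e_trial - e) / T):
            conf, e = trial, e_trial                          # Metropolis acceptance
            if e < best_e:
                best_conf, best_e = conf.copy(), e
        T *= 0.9997                                           # geometric cooling

    print("best energy found:", round(best_e, 3))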
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
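A small sketch of the evaluation loop described above, under stated assumptions: two classes are simulated as a mean plus independent Gaussian noise, the points are clustered with k-means, and the clustering error is the number of points assigned inconsistently with the generating process under the best cluster-to-class matching. The means, variance and replication count are illustrative, and only one of the five studied algorithms (k-means) is shown.

    # Model-based evaluation sketch: classes = mean + independent noise, cluster with
    # k-means, count points assigned inconsistently with the generating process.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    means = np.array([[0.0, 0.0], [2.5, 2.5]])     # two "expression pattern" templates (assumed)
    sigma, n_per_class = 1.0, 100

    errors = []
    for rep in range(20):                          # replications to average over
        X = np.vstack([m + sigma * rng.normal(size=(n_per_class, 2)) for m in means])
        y = np.repeat(np.arange(len(means)), n_per_class)
        labels = KMeans(n_clusters=len(means), n_init=10, random_state=rep).fit_predict(X)

        C = np.zeros((len(means), len(means)), dtype=int)     # class-vs-cluster confusion matrix
        for t, l in zip(y, labels):
            C[t, l] += 1
        row, col = linear_sum_assignment(-C)                  # best cluster-to-class matching
        errors.append(len(y) - C[row, col].sum())

    print("mean clustering error over replications:", np.mean(errors))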
Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.
2000-01-01
This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.
Real-Time Detection of Dust Devils from Pressure Readings
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri
2009-01-01
A method for real-time detection of dust devils at a given location is based on identifying the abrupt, temporary decreases in atmospheric pressure that are characteristic of dust devils as they travel through that location. The method was conceived for use in a study of dust devils on the Martian surface, where bandwidth limitations encourage the transmission of only those blocks of data that are most likely to contain information about features of interest, such as dust devils. The method, which is a form of intelligent data compression, could readily be adapted to use for the same purpose in scientific investigation of dust devils on Earth. In this method, the readings of an atmospheric- pressure sensor are repeatedly digitized, recorded, and processed by an algorithm that looks for extreme deviations from a continually updated model of the current pressure environment. The question in formulating the algorithm is how to model current normal observations and what minimum magnitude deviation can be considered sufficiently anomalous as to indicate the presence of a dust devil. There is no single, simple answer to this question: any answer necessarily entails a compromise between false detections and misses. For the original Mars application, the answer was sought through analysis of sliding time windows of digitized pressure readings. Windows of 5-, 10-, and 15-minute durations were considered. The windows were advanced in increments of 30 seconds. Increments of other sizes can also be used, but computational cost increases as the increment decreases and analysis is performed more frequently. Pressure models were defined using a polynomial fit to the data within the windows. For example, the figure depicts pressure readings from a 10-minute window wherein the model was defined by a third-degree polynomial fit to the readings and dust devils were identified as negative deviations larger than both 3 standard deviations (from the mean) and 0.05 mbar in magnitude. An algorithm embodying the detection scheme of this example was found to yield a miss rate of just 8 percent and a false-detection rate of 57 percent when evaluated on historical pressure-sensor data collected by the Mars Pathfinder lander. Since dust devils occur infrequently over the course of a mission, prioritizing observations that contain successful detections could greatly conserve bandwidth allocated to a given mission. This technique can be used on future Mars landers and rovers, such as Mars Phoenix and the Mars Science Laboratory.
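A minimal sketch of the detection rule described above, assuming a one-sample-per-second synthetic pressure trace: each 10-minute window is fit with a third-degree polynomial, and negative residuals exceeding both 3 standard deviations and 0.05 mbar are flagged. The window length, step size and injected event are illustrative.

    # Sliding-window dust-devil detector sketch on a synthetic pressure trace.
    # Sampling rate, baseline drift and the injected pressure drop are assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    dt = 1.0                                        # one reading per second (assumed)
    t = np.arange(0, 3600, dt)
    pressure = 6.9 + 0.0001 * t + 0.005 * rng.normal(size=t.size)   # slowly drifting baseline
    pressure[1800:1815] -= 0.08                     # inject a dust-devil-like pressure drop

    window = int(600 / dt)                          # 10-minute window
    step = int(30 / dt)                             # advance by 30 seconds
    detections = []
    for start in range(0, t.size - window, step):
        sl = slice(start, start + window)
        coeffs = np.polyfit(t[sl], pressure[sl], deg=3)
        residual = pressure[sl] - np.polyval(coeffs, t[sl])
        threshold = max(3.0 * residual.std(), 0.05)
        hits = np.where(residual < -threshold)[0]
        if hits.size:
            detections.append(t[sl][hits[0]])

    print("candidate dust-devil times (s):", sorted(set(detections)))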
Harrison, Michelle; Collins, Curtis D
2015-03-01
Procalcitonin has emerged as a promising biomarker of bacterial infection. Published literature demonstrates that use of procalcitonin testing and an associated treatment pathway reduces duration of antibiotic therapy without impacting mortality. The objective of this study was to determine the financial impact of utilizing a procalcitonin-guided treatment algorithm in hospitalized patients with sepsis. The study was a cost-minimization and cost-utility analysis of a hypothetical cohort of adult ICU patients with suspected bacterial infection and sepsis. Utilizing published clinical and economic data, a decision analytic model was developed from the U.S. hospital perspective. Effectiveness and utility measures were defined using cost per clinical episode and cost per quality-adjusted life year (QALY). Upper and lower sensitivity ranges were determined for all inputs. Univariate and probabilistic sensitivity analyses assessed the robustness of our model and variables. Incremental cost-effectiveness ratios (ICERs) were calculated and compared to predetermined willingness-to-pay thresholds. Base-case results predicted that the use of a procalcitonin-guided treatment algorithm dominated standard care, with improved quality (0.0002 QALYs) and decreased overall treatment costs ($65). The model was sensitive to a number of key variables that had the potential to impact results, including algorithm adherence (<42.3%), number and cost of procalcitonin tests ordered (≥9 and >$46), days of antimicrobial reduction (<1.6 d), incidence of nephrotoxicity and rate of nephrotoxicity reduction. The combination of procalcitonin testing with an evidence-based treatment algorithm may improve patients' quality of life while decreasing costs in ICU patients with suspected bacterial infection and sepsis; however, results were highly dependent on a number of variables and assumptions.
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do that, the score function must be estimated from samples of the observation signals (mixtures of speech and music). The accuracy and the speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared to a separating algorithm based on the Minimum Mean Square Error estimator, indicate that it achieves better performance and shorter processing time.
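For illustration only, the sketch below implements a bare-bones natural-gradient separation update on two synthetic super-Gaussian sources; it uses a fixed tanh score function rather than the Gaussian-mixture kernel density estimate proposed above, and the mixing matrix, step size and iteration count are assumptions.

    # Bare-bones natural-gradient separation sketch with a fixed tanh score function
    # (a stand-in for the GMM-based kernel density estimate); signals, mixing matrix
    # and step size are toy assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 20000
    s = np.vstack([rng.laplace(size=n),            # surrogate "speech" source
                   rng.laplace(size=n)])           # surrogate "music" source
    A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
    x = A @ s                                      # observed mixtures

    W = np.eye(2)
    mu = 0.05
    for _ in range(500):                           # batch natural-gradient iterations
        y = W @ x
        phi = np.tanh(y)                           # fixed surrogate score function
        W += mu * (np.eye(2) - (phi @ y.T) / n) @ W

    print("W @ A (should approach a scaled permutation):")
    print(np.round(W @ A, 3))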
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more unimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.
Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen
2014-01-01
Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
TrustRank: a Cold-Start tolerant recommender system
NASA Astrophysics Data System (ADS)
Zou, Haitao; Gong, Zhiguo; Zhang, Nan; Zhao, Wei; Guo, Jingzhi
2015-02-01
The explosive growth of the World Wide Web leads to the fast-advancing development of e-commerce techniques. Recommender systems, which use personalised information filtering techniques to generate a set of items suitable to a given user, have received considerable attention. User- and item-based algorithms are two popular techniques for the design of recommender systems. These two algorithms are known to have Cold-Start problems, i.e., they are unable to effectively handle Cold-Start users who have an extremely limited number of purchase records. In this paper, we develop TrustRank, a novel recommender system which handles the Cold-Start problem by leveraging the user-trust networks which are commonly available for e-commerce applications. A user-trust network is formed by friendships or trust relationships that users specify among themselves. While it is straightforward to conjecture that a user-trust network is helpful for improving the accuracy of recommendations, a key challenge in using user-trust networks to facilitate Cold-Start users is that these users also tend to have a very limited number of trust relationships. To address this challenge, we propose a pre-processing propagation of the Cold-Start users' trust network. In particular, by applying the personalised PageRank algorithm, we expand the friends of a given user to include others with purchase records similar to his/her original friends. To make this propagation algorithm scalable to a large number of users, as required by real-world recommender systems, we devise an iterative computation algorithm of the original personalised TrustRank which can incrementally compute trust vectors for Cold-Start users. We conduct extensive experiments to demonstrate the consistent improvement provided by our proposed algorithm over existing recommender algorithms in the accuracy of recommendations for Cold-Start users.
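The propagation step can be pictured with a small personalised PageRank power iteration over a toy trust graph, as sketched below; the graph, damping factor and iteration count are illustrative assumptions, not the paper's incremental computation scheme.

    # Personalised PageRank sketch over a tiny directed trust graph.
    # The graph, damping factor and iteration count are illustrative assumptions.
    import numpy as np

    trust = {0: [1], 1: [2, 3], 2: [3], 3: [0], 4: [0]}     # user -> trusted users
    n_users = 5

    # column-stochastic transition matrix of the trust graph
    P = np.zeros((n_users, n_users))
    for u, friends in trust.items():
        for v in friends:
            P[v, u] = 1.0 / len(friends)

    def personalized_pagerank(seed, alpha=0.85, n_iter=100):
        e = np.zeros(n_users)
        e[seed] = 1.0                        # restart (personalisation) vector
        r = e.copy()
        for _ in range(n_iter):
            r = alpha * P @ r + (1 - alpha) * e
        return r

    scores = personalized_pagerank(seed=4)   # Cold-Start user 4 with a single trust edge
    print("propagated trust scores:", np.round(scores, 3))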
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
One Step at a Time: SBM as an Incremental Process.
ERIC Educational Resources Information Center
Conrad, Mark
1995-01-01
Discusses incremental SBM budgeting and answers questions regarding resource equity, bookkeeping requirements, accountability, decision-making processes, and purchasing. Approaching site-based management as an incremental process recognizes that every school system engages in some level of site-based decisions. Implementation can be gradual and…
Fiuzy, Mohammad; Haddadnia, Javad; Mollania, Nasrin; Hashemian, Maryam; Hassanpour, Kazem
2012-01-01
Background Accurate diagnosis of breast cancer is of prime importance. The Fine Needle Aspiration test, or "FNA", which has been used for several years in Europe, is a simple, inexpensive, noninvasive and accurate technique for detecting breast cancer. Extracting the suitable features from the Fine Needle Aspiration results is the most important diagnostic problem in the early stages of breast cancer. In this study, we introduce a new algorithm that can detect breast cancer by combining an artificial intelligence system and Fine Needle Aspiration (FNA). Methods We studied the features of the Wisconsin breast cancer database, which contained about 569 FNA test samples (212 patient samples (malignant) and 357 healthy samples (benign)). In this research, we combined Artificial Intelligence approaches, such as an Evolutionary Algorithm (EA) with a Genetic Algorithm (GA), and also used exact classifier systems (here, Fuzzy C-Means (FCM)) to separate malignant from benign samples. Furthermore, we examined artificial Neural Networks (NN) to identify the model and structure. This research proposes a new algorithm for an accurate diagnosis of breast cancer. Results According to the Wisconsin breast cancer (WDBC) database, 62.75% of samples were benign and 37.25% were malignant. After applying the proposed algorithm, we achieved a high detection accuracy of about "96.579%" on 205 patients who were diagnosed as having breast cancer. It was found that the method had 93% sensitivity, 73% specificity, 65% positive predictive value, and 95% negative predictive value, respectively. If done by experts, Fine Needle Aspiration (FNA) can be a reliable replacement for open biopsy in palpable breast masses. Evaluation of FNA samples during aspiration can decrease insufficient samples. FNA can be the first line of diagnosis in women with breast masses, at least in deprived regions, and may increase health standards and clinical supervision of patients. Conclusion Such a smart, economical, non-invasive, rapid and accurate system can be introduced as a useful diagnostic system for comprehensive treatment of breast cancer. Another advantage of this method is the possibility of diagnosing breast abnormalities. If done by experts, FNA can be a reliable replacement for open biopsy in palpable breast masses. Evaluation of FNA samples during aspiration can decrease insufficient samples. PMID:25352966
Gao, Wei; Zhang, Ya; Wang, Jianguo
2014-01-01
The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of a high quality recovery from the sparse sampling data. Recently, the algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with the L2-norm regularization term, which leads to reconstruction quality deteriorating while the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand of more dose reduction. In this paper, we replaced the L2-norm regularization term with the L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and reserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through the numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and other two typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem by IRLS strategy, L1-DL could reconstruct the image more exactly.
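The weighting strategy can be illustrated on a tiny synthetic sparse-coding problem: the L1-regularized coefficients are obtained by repeatedly solving weighted L2 (ridge-like) problems, as sketched below. The random dictionary, regularization weight and stopping rule are assumptions and do not reproduce the dictionary-learning CT pipeline.

    # Toy IRLS illustration: L1-regularized coefficients via repeated weighted L2 solves.
    # The random dictionary and all parameters are assumptions, not the CT pipeline.
    import numpy as np

    rng = np.random.default_rng(5)
    D = rng.normal(size=(30, 60))                    # stand-in "dictionary": 30-dim signals, 60 atoms
    x_true = np.zeros(60)
    x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]           # sparse ground-truth coefficients
    y = D @ x_true + 0.01 * rng.normal(size=30)      # noisy observation

    lam, eps = 0.1, 1e-3
    x = np.linalg.lstsq(D, y, rcond=None)[0]         # dense initial estimate
    for _ in range(50):                              # IRLS iterations
        w = 1.0 / (np.abs(x) + eps)                  # weights so that w_i * x_i^2 approximates |x_i|
        # weighted ridge step: min ||y - D x||^2 + lam * sum_i w_i * x_i^2
        x = np.linalg.solve(D.T @ D + lam * np.diag(w), D.T @ y)

    print("recovered support:", np.where(np.abs(x) > 0.1)[0])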
Distributed Coordination of Energy Storage with Distributed Generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Stoorvogel, Antonie A.
2016-07-18
With a growing emphasis on energy efficiency and system flexibility, a great effort has been made recently in developing distributed energy resources (DER), including distributed generators and energy storage systems. This paper first formulates an optimal coordination problem considering constraints at both system and device levels, including power balance constraint, generator output limits, storage energy and power capacity and charging/discharging efficiencies. An algorithm is then proposed to dynamically and automatically coordinate DERs in a distributed manner. With the proposed algorithm, the agent at each DER only maintains a local incremental cost and updates it through information exchange with a few neighbors, without relying on any central decision maker. Simulation results are used to illustrate and validate the proposed algorithm.
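A much-simplified sketch of the consensus idea follows: each agent holds a local incremental cost, exchanges it only with neighbors, and a single leader agent feeds back the system power mismatch. Quadratic generator costs, the ring topology, the gains, and the omission of storage dynamics and device limits are all simplifying assumptions.

    # Consensus-style sketch: neighbour-only exchange of local incremental costs,
    # with one leader agent injecting the system power mismatch. Costs, topology
    # and gains are illustrative; storage dynamics and device limits are omitted.
    import numpy as np

    a = np.array([0.04, 0.06, 0.05, 0.07])    # quadratic cost coefficients of 4 DERs
    b = np.array([2.0, 1.8, 2.2, 1.9])        # linear cost coefficients
    demand = 120.0                            # system load to balance

    # row-stochastic consensus weights over a ring communication graph
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])

    lam = b.copy()                            # local incremental cost estimates
    eps = 0.002                               # mismatch feedback gain (leader = agent 0)
    for _ in range(2000):
        P = (lam - b) / (2 * a)               # each DER's output at its current incremental cost
        lam = W @ lam                         # information exchange with neighbours only
        lam[0] += eps * (demand - P.sum())    # leader injects the power-balance error

    P = (lam - b) / (2 * a)
    print("consensus incremental cost:", np.round(lam, 3))
    print("DER outputs:", np.round(P, 2), "total:", round(P.sum(), 2))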
Optical-fiber-based Mueller optical coherence tomography.
Jiao, Shuliang; Yu, Wurong; Stoica, George; Wang, Lihong V
2003-07-15
An optical-fiber-based multichannel polarization-sensitive Mueller optical coherence tomography (OCT) system was built to acquire the Jones or Mueller matrix of a scattering medium, such as biological tissue. For the first time to our knowledge, fiber-based polarization-sensitive OCT was dynamically calibrated to eliminate the polarization distortion caused by the single-mode optical fiber in the sample arm, thereby overcoming a key technical impediment to the application of optical fibers in this technology. The round-trip Jones matrix of the sampling fiber was acquired from the reflecting surface of the sample for each depth scan (A scan) with our OCT system. A new rigorous algorithm was then used to retrieve the calibrated polarization properties of the sample. This algorithm was validated with experimental data. The skin of a rat was imaged with this fiber-based system.
Robust algorithm for aligning two-dimensional chromatograms.
Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel
2012-11-06
Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
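A stripped-down illustration of importance-sampling reliability estimation in this spirit is given below; it uses a fixed sampling density centred near an assumed failure region for a toy limit state, and omits the adaptive, incremental growth of the sampling domain and the sensitivity coefficients.

    # Importance-sampling sketch for a toy limit state g(x) < 0, with a fixed
    # sampling density shifted toward the failure region (the adaptive growth of
    # the sampling domain in the AIS method is omitted here).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    def g(x):
        # toy limit state: failure when the capacity 3 is exceeded by x1 + x2
        return 3.0 - x[:, 0] - x[:, 1]

    n = 20000
    shift = np.array([1.5, 1.5])                # importance density centred near the failure surface
    x = rng.normal(loc=shift, size=(n, 2))      # samples from the importance density h

    # weights: ratio of the nominal standard-normal pdf to the importance pdf h
    log_w = stats.norm.logpdf(x).sum(axis=1) - stats.norm.logpdf(x, loc=shift).sum(axis=1)
    pf = np.mean(np.exp(log_w) * (g(x) < 0))
    print("estimated failure probability:", pf)  # exact value ~ 1.7e-2 for this toy case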
Simulation tools for particle-based reaction-diffusion dynamics in continuous space
2014-01-01
Particle-based reaction-diffusion algorithms facilitate the modeling of the diffusional motion of individual molecules and the reactions between them in cellular environments. A physically realistic model, depending on the system at hand and the questions asked, would require different levels of modeling detail such as particle diffusion, geometrical confinement, particle volume exclusion or particle-particle interaction potentials. Higher levels of detail usually correspond to an increased number of parameters and higher computational cost. Certain systems, however, require these investments to be modeled adequately. Here we present a review of the current field of particle-based reaction-diffusion software packages operating on continuous space. Four nested levels of modeling detail are identified, each capturing an increasing amount of detail. Their applicability to different biological questions is discussed, ranging from plain diffusion simulations to sophisticated and expensive models that bridge towards coarse-grained molecular dynamics. PMID:25737778
Local reconstruction in computed tomography of diffraction enhanced imaging
NASA Astrophysics Data System (ADS)
Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia
2007-07-01
Computed tomography of diffraction enhanced imaging (DEI-CT) based on synchrotron radiation source has extremely high sensitivity of weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration(BPF)-type algorithm based on PI-line segments to reconstruct region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with small field of view based on synchrotron radiation source.
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimensional data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, the Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features that are associated with phenotypes, independent of each other, and that achieve high classification performance with the nearest neighbor algorithm. Based on a comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for high-dimensional biomedical datasets.
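Very roughly, the two-step idea can be sketched with scikit-learn's mutual-information estimator standing in for MIC: rank features by association with the phenotype, drop candidates that are redundant with already-selected ones, and score the subset with a nearest-neighbour classifier. The estimator substitution, the thresholds and the synthetic data are all assumptions; this is not the McTwo algorithm itself.

    # Rough two-step filter: relevance ranking by mutual information, redundancy
    # pruning by correlation, subset evaluation with 1-NN. All thresholds and the
    # synthetic "large p, small n" data are assumptions.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(7)
    n, p = 100, 500
    X = rng.normal(size=(n, p))
    y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

    relevance = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(relevance)[::-1]

    selected = []
    for j in order[:50]:                          # scan the top-ranked candidates
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > 0.8 for k in selected)
        if not redundant:
            selected.append(j)
        if len(selected) == 5:
            break

    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X[:, selected], y, cv=5).mean()
    print("selected features:", selected, " 1-NN accuracy:", round(acc, 3))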
Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.
Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît
2011-01-01
Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of the Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes possible the analysis of large macromolecule surface shape description with high resolution. Experimental results show that compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
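The surface-restricted idea can be sketched as a multi-source Dijkstra over the surface mesh graph, seeded with zero depth at vertices that touch the convex hull, so that every other surface vertex receives the length of the shortest on-surface path to the hull. The tiny hand-made graph below is an assumption; a real SES would come from a molecular-surface generator.

    # Multi-source Dijkstra sketch: approximate travel depth on a toy surface graph.
    # The graph, edge lengths and hull vertices are illustrative assumptions.
    import heapq

    # surface graph: vertex -> [(neighbour, edge length), ...]
    graph = {
        0: [(1, 1.0), (2, 1.2)],
        1: [(0, 1.0), (3, 0.9)],
        2: [(0, 1.2), (3, 1.1), (4, 2.0)],
        3: [(1, 0.9), (2, 1.1), (4, 1.5)],
        4: [(2, 2.0), (3, 1.5)],
    }
    hull_vertices = [0, 1]                    # vertices lying on the convex hull (depth 0)

    depth = {v: float("inf") for v in graph}
    heap = []
    for v in hull_vertices:
        depth[v] = 0.0
        heapq.heappush(heap, (0.0, v))

    while heap:
        d, v = heapq.heappop(heap)
        if d > depth[v]:
            continue                          # stale heap entry
        for u, length in graph[v]:
            if d + length < depth[u]:
                depth[u] = d + length
                heapq.heappush(heap, (depth[u], u))

    print("approximate travel depth per surface vertex:", depth)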
Local linear discriminant analysis framework using sample neighbors.
Fan, Zizhu; Xu, Yong; Zhang, David
2011-07-01
The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. The algorithms of LDA usually perform well under the following two assumptions. The first assumption is that the global data structure is consistent with the local data structure. The second assumption is that the input data classes are Gaussian distributions. However, in real-world applications, these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without needing to satisfy the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to different types of local data structure, our LLDA framework incorporates several different forms of linear feature extraction approaches, such as the classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample. They are suitable for learning large-scale databases especially when the input data dimensions are very high and can achieve high classification accuracy. Extensive experiments show that the proposed algorithms can obtain good classification results.
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, the trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coded method which takes the spatial neighborhood information of the image patch and the computation burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experiment results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710
How Hierarchical Topics Evolve in Large Text Corpora.
Cui, Weiwei; Liu, Shixia; Wu, Zhuofeng; Wei, Hao
2014-12-01
Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
Targeting Ballistic Lunar Capture Trajectories Using Periodic Orbits in the Sun-Earth CRTBP
NASA Technical Reports Server (NTRS)
Cooley, D.S.; Griesemer, Paul Ricord; Ocampo, Cesar
2009-01-01
A particular periodic orbit in the Earth-Sun circular restricted three body problem is shown to have the characteristics needed for a ballistic lunar capture transfer. An injection from a circular parking orbit into the periodic orbit serves as an initial guess for a targeting algorithm. By targeting appropriate parameters incrementally in increasingly complicated force models and using precise derivatives calculated from the state transition matrix, a reliable algorithm is produced. Ballistic lunar capture trajectories in restricted four body systems are shown to be able to be produced in a systematic way.
Pseudo-updated constrained solution algorithm for nonlinear heat conduction
NASA Technical Reports Server (NTRS)
Tovichakchaikul, S.; Padovan, J.
1983-01-01
This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.
Soh, Harold; Demiris, Yiannis
2014-01-01
Human beings not only possess the remarkable ability to distinguish objects through tactile feedback but are further able to improve upon recognition competence through experience. In this work, we explore tactile-based object recognition with learners capable of incremental learning. Using the sparse online infinite Echo-State Gaussian process (OIESGP), we propose and compare two novel discriminative and generative tactile learners that produce probability distributions over objects during object grasping/palpation. To enable iterative improvement, our online methods incorporate training samples as they become available. We also describe incremental unsupervised learning mechanisms, based on novelty scores and extreme value theory, when teacher labels are not available. We present experimental results for both supervised and unsupervised learning tasks using the iCub humanoid, with tactile sensors on its five-fingered anthropomorphic hand, and 10 different object classes. Our classifiers perform comparably to state-of-the-art methods (C4.5 and SVM classifiers) and findings indicate that tactile signals are highly relevant for making accurate object classifications. We also show that accurate "early" classifications are possible using only 20-30 percent of the grasp sequence. For unsupervised learning, our methods generate high quality clusterings relative to the widely-used sequential k-means and self-organising map (SOM), and we present analyses into the differences between the approaches.
Baldanzi, S; Ricci, G; Bottari, M; Chico, L; Simoncini, C; Siciliano, G
2017-07-01
DM1 is an autosomal-dominant disorder characterized by muscle weakness, myotonia, and multisystemic involvement. According to the current literature, fatigue and daytime sleepiness are among the main symptoms of DM1. Oxidative stress has been proposed to be one of the pathogenic factors of fatigue consequent to DM1. In this study, we investigated the dimensions of experienced fatigue and physiological fatigue in a sample of 26 DM1 patients (17 males, 9 females, mean age 41.6 years, SD±12.7); experienced fatigue was studied with the Fatigue Severity Scale (FSS), physiological fatigue was measured through an intermittent incremental exercise of the forearm muscles using a myometer, and the trend of oxidative stress balance markers during the aerobic exercise test was collected. The occurrence of central fatigue in the sample means that central activation worsens during the motor contraction; interestingly, the FSS score was significantly correlated with MVC (before and after the effort, r_before = -0.583, p<0.01; r_after = -0.534, p<0.05) and with motor disability measured by MRC (r = -0.496, p<0.05); moreover, we found a strong tendency towards significance in the association with baseline lactate (r = 0.378, p = 0.057). Results are discussed to define whether or not, on clinical and laboratory grounds, such an exercise training protocol may be suitable for the proper management of DM1 patients; proper assessment of fatigue should be included in algorithms for data collection in DM1 patient registries.
Classification and authentication of unknown water samples using machine learning algorithms.
Kundu, Palash K; Panchariya, P C; Kundu, Madhusree
2011-07-01
This paper proposes the development of real-life water sample classification and authentication based on machine learning algorithms. The proposed techniques use experimental measurements from a pulse voltammetry method based on an electronic tongue (E-tongue) instrumentation system with silver and platinum electrodes. E-tongues include arrays of solid-state ion sensors, transducers (even of different types), data collectors and data analysis tools, all oriented to the classification of liquid samples and the authentication of unknown liquid samples. The time-series signal and the corresponding raw data represent the measurement from a multi-sensor system. The E-tongue system, implemented in a laboratory environment for six different ISI (Bureau of Indian Standards) certified water samples (Aquafina, Bisleri, Kingfisher, Oasis, Dolphin, and McDowell), was the data source for developing two types of machine learning algorithms, classification and regression. A water data set consisting of six sample classes with 4402 features was considered. A PCA (principal component analysis) based classification and authentication tool was developed in this study as the machine learning component of the E-tongue system. A proposed partial least squares (PLS) based classifier, dedicated to authenticating a specific category of water sample, evolved as an integral part of the E-tongue instrumentation system. The developed PCA- and PLS-based E-tongue system yielded encouraging overall authentication accuracy, with excellent performance for the aforesaid categories of water samples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
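A hedged sketch of the PCA/PLS machinery on synthetic E-tongue-like feature vectors is given below: PCA scores summarize the class structure, and a PLS regression fitted on a one-vs-rest indicator acts as an authenticator for one brand. Sample counts, feature dimension and class means are made up and do not correspond to the reported data set.

    # PCA + PLS sketch on synthetic "E-tongue" feature vectors. Class means, sample
    # counts and feature dimension are assumptions, not the paper's data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(8)
    n_classes, n_per_class, n_features = 6, 40, 200
    centers = rng.normal(scale=3.0, size=(n_classes, n_features))
    X = np.vstack([c + rng.normal(size=(n_per_class, n_features)) for c in centers])
    y = np.repeat(np.arange(n_classes), n_per_class)

    # PCA: project the voltammetric feature vectors onto a few principal components
    scores = PCA(n_components=3).fit_transform(X)
    print("PCA score matrix shape:", scores.shape)

    # PLS authenticator for class 0: regress a 0/1 indicator on the features
    target = (y == 0).astype(float)
    pls = PLSRegression(n_components=3).fit(X, target)
    authenticated = pls.predict(X).ravel() > 0.5
    print("class-0 authentication accuracy:", np.mean(authenticated == (y == 0)))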
The IMS Software Integration Platform
1993-04-12
products to incorporate all data shared by the IMS applications. Some entities (time series, images, algorithm-specific parameters) must be managed... dbwhoami, dbcancel. Transaction management: dbcommit, dbrollback. Key counter assignment: dbgetcounter. String handling: cstr_to_pad, pad_to_cstr. Error... increment *value; string manipulation: int cstr_to_pad(array, string, array_length); char *array, *string; int array_length; int pad_to_cstr(string
Mobile robot motion estimation using Hough transform
NASA Astrophysics Data System (ADS)
Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu
2018-05-01
This paper proposes an algorithm for the estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking the problem of estimating mobile robot localization down into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
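The core voting step can be illustrated with a standard Hough accumulator over the straight-line parameters (theta, rho) for one synthetic range scan, as below; comparing the theta marginals of two scans would then give a rotation estimate. The grid resolutions and the synthetic "wall" are assumptions.

    # Hough-transform sketch: vote 2-D range-scan points into a (theta, rho)
    # straight-line parameter accumulator. Grid resolutions and the synthetic
    # scan are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(9)
    # synthetic scan: points along a wall y = 0.5 x + 1, plus noise
    xs = rng.uniform(-5, 5, 200)
    pts = np.column_stack([xs, 0.5 * xs + 1.0 + 0.02 * rng.normal(size=xs.size)])

    thetas = np.deg2rad(np.arange(0, 180, 1.0))          # candidate line-normal directions
    rho_max = np.hypot(*np.abs(pts).max(axis=0)) + 1.0
    rho_bins = np.linspace(-rho_max, rho_max, 200)

    acc = np.zeros((thetas.size, rho_bins.size - 1), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)    # rho for every candidate theta
        idx = np.digitize(rho, rho_bins) - 1
        valid = (idx >= 0) & (idx < rho_bins.size - 1)
        acc[np.arange(thetas.size)[valid], idx[valid]] += 1

    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print("dominant line: theta = %.1f deg, rho = %.2f" % (np.degrees(thetas[i]), rho_bins[j]))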
Grating-based phase contrast tomosynthesis imaging: Proof-of-concept experimental studies
Li, Ke; Ge, Yongshuai; Garrett, John; Bevins, Nicholas; Zambelli, Joseph; Chen, Guang-Hong
2014-01-01
Purpose: This paper concerns the feasibility of x-ray differential phase contrast (DPC) tomosynthesis imaging using a grating-based DPC benchtop experimental system, which is equipped with a commercial digital flat-panel detector and a medical-grade rotating-anode x-ray tube. An extensive system characterization was performed to quantify its imaging performance. Methods: The major components of the benchtop system include a diagnostic x-ray tube with a 1.0 mm nominal focal spot size, a flat-panel detector with 96 μm pixel pitch, a sample stage that rotates within a limited angular span of ±30°, and a Talbot-Lau interferometer with three x-ray gratings. A total of 21 projection views acquired with 3° increments were used to reconstruct three sets of tomosynthetic image volumes, including the conventional absorption contrast tomosynthesis image volume (AC-tomo) reconstructed using the filtered-backprojection (FBP) algorithm with the ramp kernel, the phase contrast tomosynthesis image volume (PC-tomo) reconstructed using FBP with a Hilbert kernel, and the differential phase contrast tomosynthesis image volume (DPC-tomo) reconstructed using the shift-and-add algorithm. Three inhouse physical phantoms containing tissue-surrogate materials were used to characterize the signal linearity, the signal difference-to-noise ratio (SDNR), the three-dimensional noise power spectrum (3D NPS), and the through-plane artifact spread function (ASF). Results: While DPC-tomo highlights edges and interfaces in the image object, PC-tomo removes the differential nature of the DPC projection data and its pixel values are linearly related to the decrement of the real part of the x-ray refractive index. The SDNR values of polyoxymethylene in water and polystyrene in oil are 1.5 and 1.0, respectively, in AC-tomo, and the values were improved to 3.0 and 2.0, respectively, in PC-tomo. PC-tomo and AC-tomo demonstrate equivalent ASF, but their noise characteristics quantified by the 3D NPS were found to be different due to the difference in the tomosynthesis image reconstruction algorithms. Conclusions: It is feasible to simultaneously generate x-ray differential phase contrast, phase contrast, and absorption contrast tomosynthesis images using a grating-based data acquisition setup. The method shows promise in improving the visibility of several low-density materials and therefore merits further investigation. PMID:24387511
Distributed autonomous systems: resource management, planning, and control algorithms
NASA Astrophysics Data System (ADS)
Smith, James F., III; Nguyen, ThanhVu H.
2005-05-01
Distributed autonomous systems, i.e., systems that have separated distributed components, each of which, exhibit some degree of autonomy are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy allowing it to change course immediately without consulting with any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or to terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types significant theoretical and simulation results will be presented.
Assimilation of gridded terrestrial water storage observations from GRACE into a land surface model
NASA Astrophysics Data System (ADS)
Girotto, Manuela; De Lannoy, Gabriëlle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-05-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km2 at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater. These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
Assimilation of Gridded Terrestrial Water Storage Observations from GRACE into a Land Surface Model
NASA Technical Reports Server (NTRS)
Girotto, Manuela; De Lannoy, Gabrielle J. M.; Reichle, Rolf H.; Rodell, Matthew
2016-01-01
Observations of terrestrial water storage (TWS) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission have a coarse resolution in time (monthly) and space (roughly 150,000 km2 at midlatitudes) and vertically integrate all water storage components over land, including soil moisture and groundwater. Data assimilation can be used to horizontally downscale and vertically partition GRACE-TWS observations. This work proposes a variant of existing ensemble-based GRACE-TWS data assimilation schemes. The new algorithm differs in how the analysis increments are computed and applied. Existing schemes correlate the uncertainty in the modeled monthly TWS estimates with errors in the soil moisture profile state variables at a single instant in the month and then apply the increment either at the end of the month or gradually throughout the month. The proposed new scheme first computes increments for each day of the month and then applies the average of those increments at the beginning of the month. The new scheme therefore better reflects submonthly variations in TWS errors. The new and existing schemes are investigated here using gridded GRACE-TWS observations. The assimilation results are validated at the monthly time scale, using in situ measurements of groundwater depth and soil moisture across the U.S. The new assimilation scheme yields improved (although not in a statistically significant sense) skill metrics for groundwater compared to the open-loop (no assimilation) simulations and compared to the existing assimilation schemes. A smaller impact is seen for surface and root-zone soil moisture, which have a shorter memory and receive smaller increments from TWS assimilation than groundwater. These results motivate future efforts to combine GRACE-TWS observations with observations that are more sensitive to surface soil moisture, such as L-band brightness temperature observations from Soil Moisture Ocean Salinity (SMOS) or Soil Moisture Active Passive (SMAP). Finally, we demonstrate that the scaling parameters that are applied to the GRACE observations prior to assimilation should be consistent with the land surface model that is used within the assimilation system.
Adaptive and predictive control of a simulated robot arm.
Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo
2013-06-01
In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns the motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (the locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales for robot arms with a high number of degrees of freedom (DOFs) using a simulated model of a robot arm of the new generation of lightweight robots (LWRs).
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
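For readers unfamiliar with how a temperature transition is accepted in simulated tempering, the following is a minimal sketch of the standard Metropolis criterion that the Suwa-Todo scheme replaces. It assumes a precomputed temperature ladder `betas` and weight factors `g`; the function name, the neighbouring-state proposal, and the defaults are illustrative and are not taken from the paper.

```python
import numpy as np

def st_temperature_move(E, m, betas, g, rng):
    """One Metropolis-style simulated-tempering temperature move.

    E     : potential energy of the current configuration
    m     : index of the current temperature state
    betas : array of inverse temperatures beta_m
    g     : array of dimensionless weight factors g_m
    """
    # propose a neighbouring temperature state at random
    n = m + rng.choice([-1, 1])
    if n < 0 or n >= len(betas):
        return m  # reject moves outside the temperature ladder
    # Metropolis acceptance for the joint distribution
    # pi(x, m) ~ exp(-beta_m * E(x) + g_m)
    log_ratio = -(betas[n] - betas[m]) * E + (g[n] - g[m])
    if np.log(rng.random()) < min(0.0, log_ratio):
        return n  # accept: switch to the new temperature
    return m      # reject: keep the current temperature
```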
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model and a Markov random field (MRF). A Gamma mixture model is used to obtain the initial classification result, which is then optimized with the MRF technique based on the spatial correlation between pixels. Additionally, morphology methods are employed to extract the airport region of interest (ROI), in which the suspected aircraft target samples are screened to reduce false alarms and increase the detection performance. Finally, this paper presents the aircraft target detection results, which have been verified by simulation tests.
Remote sensing estimation of colored dissolved organic matter (CDOM) in optically shallow waters
NASA Astrophysics Data System (ADS)
Li, Jiwei; Yu, Qian; Tian, Yong Q.; Becker, Brian L.
2017-06-01
It is not well understood how bottom reflectance of optically shallow waters affects the algorithm performance of colored dissolved organic matters (CDOM) retrieval. This study proposes a new algorithm that considers bottom reflectance in estimating CDOM absorption from optically shallow inland or coastal waters. The field sampling was conducted during four research cruises within the Saginaw River, Kawkawlin River and Saginaw Bay of Lake Huron. A stratified field sampling campaign collected water samples, determined the depth at each sampling location and measured optical properties. The sampled CDOM absorption at 440 nm broadly ranged from 0.12 to 8.46 m-1. Field sample analysis revealed that bottom reflectance does significantly change water apparent optical properties. We developed a CDOM retrieval algorithm (Shallow water Bio-Optical Properties algorithm, SBOP) that effectively reduces uncertainty by considering bottom reflectance in shallow waters. By incorporating the bottom contribution in upwelling radiances, the SBOP algorithm was able to explain 74% of the variance of CDOM values (RMSE = 0.22 and R2 = 0.74). The bottom effect index (BEI) was introduced to efficiently separate optically shallow and optically deep waters. Based on the BEI, an adaptive approach was proposed that references the amount of bottom effect in order to identify the most suitable algorithm (optically shallow water algorithm [SBOP] or optically deep water algorithm [QAA-CDOM]) to improve CDOM estimation (RMSE = 0.22 and R2 = 0.81). Our results potentially help to advance the capability of remote sensing in monitoring carbon pools at the land-water interface.
Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo
2008-01-01
Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image. PMID:27873984
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable fidelity approximation-based design optimization approaches can realize effective simulation and efficiency optimization of the design space using approximation models with different levels of fidelity and have been widely used in different fields. As the foundations of variable fidelity approximation models, the selection of sample points of variable-fidelity approximation, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
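As a rough illustration of the maximin Latin hypercube idea underlying the nested designs, the sketch below searches for a single (non-nested) maximin Latin hypercube by random restarts. It is only meant to show the design criterion; the paper's successive local enumeration and modified novel global harmony search construction, and the nesting of low- and high-fidelity point sets, are not reproduced, and all names and defaults are hypothetical.

```python
import numpy as np

def maximin_lhd(n, d, n_restarts=200, seed=0):
    """Random-restart search for an n-point, d-dimensional Latin hypercube
    design with a large minimum pairwise distance (maximin criterion)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        # one random Latin hypercube: every column is a random permutation
        # of the n strata, jittered inside each stratum
        X = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
        diff = X[:, None, :] - X[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)
        score = dist.min()                 # maximin criterion
        if score > best_score:
            best, best_score = X, score
    return best, best_score
```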
A method for feature selection of APT samples based on entropy
NASA Astrophysics Data System (ADS)
Du, Zhenyu; Li, Yihong; Hu, Jinsong
2018-05-01
By studying known APT attack events in depth, this paper proposes a feature selection method for APT samples and a logic expression generation algorithm, IOCG (Indicator of Compromise Generator). The algorithm can automatically generate machine-readable IOCs (Indicators of Compromise), addressing the limitations of existing IOCs, whose logical relationships and number of logical items are fixed, whose scale is large, and which cannot be generated automatically from samples. At the same time, it reduces the time spent processing redundant and useless APT samples, improves the sharing rate of analysis information, and supports an active response to a complex and volatile APT attack situation. The samples were divided into a training set and an experimental set, and the algorithm was then used to generate logical expressions from the training set; these were compared with those produced by the IOC_Aware plug-in, both in terms of the expressions themselves and of the detection results. The experimental results show that the algorithm is effective and can improve the detection effect.
Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach
NASA Astrophysics Data System (ADS)
Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew
2017-05-01
This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem, and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia, using rainfall data with five input meteorological variables over the period 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. Using computational results, the proposed method is also compared with CLR in the maximum likelihood framework solved by the expectation-maximization algorithm, multiple linear regression, artificial neural networks, and support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations.
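A toy version of the clusterwise linear regression idea can be written as a k-means-like alternation between assigning each sample to the linear model that explains it best and refitting each model by least squares. This sketch is only meant to convey the concept; the paper's incremental nonsmooth-optimization algorithm is different, and the initialization and iteration limits below are arbitrary assumptions.

```python
import numpy as np

def clusterwise_linear_regression(X, y, k=3, n_iter=50, seed=0):
    """Toy clusterwise linear regression: alternately assign each sample to the
    linear model that fits it best, then refit each model by least squares."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xb = np.column_stack([X, np.ones(n)])      # add intercept column
    labels = rng.integers(0, k, size=n)        # random initial partition
    coefs = np.zeros((k, Xb.shape[1]))
    for _ in range(n_iter):
        for j in range(k):                     # refit each cluster's model
            idx = labels == j
            if idx.sum() >= Xb.shape[1]:
                coefs[j], *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        residuals = (Xb @ coefs.T - y[:, None]) ** 2   # squared error per model
        new_labels = residuals.argmin(axis=1)          # reassign samples
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, coefs
```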
Label consistent K-SVD: learning a discriminative dictionary for recognition.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2013-11-01
A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly. An incremental dictionary learning algorithm is also presented for situations with limited memory resources. It yields dictionaries such that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
Miao, Qiguang; Cao, Ying; Xia, Ge; Gong, Maoguo; Liu, Jiachen; Song, Jianfeng
2016-11-01
AdaBoost has attracted much attention in the machine learning community because of its excellent performance in combining weak classifiers into strong classifiers. However, AdaBoost tends to overfit to the noisy data in many applications. Accordingly, improving the antinoise ability of AdaBoost plays an important role in many applications. The sensitiveness to the noisy data of AdaBoost stems from the exponential loss function, which puts unrestricted penalties to the misclassified samples with very large margins. In this paper, we propose two boosting algorithms, referred to as RBoost1 and RBoost2, which are more robust to the noisy data compared with AdaBoost. RBoost1 and RBoost2 optimize a nonconvex loss function of the classification margin. Because the penalties to the misclassified samples are restricted to an amount less than one, RBoost1 and RBoost2 do not overfocus on the samples that are always misclassified by the previous base learners. Besides the loss function, at each boosting iteration, RBoost1 and RBoost2 use numerically stable ways to compute the base learners. These two improvements contribute to the robustness of the proposed algorithms to the noisy training and testing samples. Experimental results on the synthetic Gaussian data set, the UCI data sets, and a real malware behavior data set illustrate that the proposed RBoost1 and RBoost2 algorithms perform better when the training data sets contain noisy data.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motion for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-01-01
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027
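The basic normalized cross correlation that FNCC reproduces more efficiently can be sketched as follows; the array shapes, the use of the on-line RSS mean, and the nearest-reference-point decision rule are simplifying assumptions, and the proposed FNCC's computational shortcuts and use of reference-point RSS variations are not shown.

```python
import numpy as np

def ncc_localize(online_rss, fingerprints, positions):
    """Pick the reference point whose RSS fingerprint has the highest
    normalized cross correlation with the on-line RSS samples.

    online_rss   : (n_samples, n_aps) on-line RSS measurements
    fingerprints : (n_rps, n_aps) mean RSS recorded at each reference point
    positions    : (n_rps, 2) coordinates of the reference points
    """
    s = online_rss.mean(axis=0)                         # average on-line RSS
    s = (s - s.mean()) / (s.std() + 1e-12)              # zero mean, unit std
    f = fingerprints - fingerprints.mean(axis=1, keepdims=True)
    f = f / (f.std(axis=1, keepdims=True) + 1e-12)
    ncc = f @ s / len(s)                                # correlation per RP
    return positions[np.argmax(ncc)], ncc
```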
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J
2010-03-01
PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.
2010-01-01
Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306
Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong
2015-01-01
The quality of a potato is directly related to its edible value and industrial value. Hollow heart of potato, a physiological disease that occurs inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method that uses semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared with reflection and transmission hyperspectral images, a semi-transmission hyperspectral image is clearer and contains the internal quality information of agricultural products. In this study, 224 potato samples (149 normal samples and 75 hollow samples) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1040 nm) of the potato samples; the average spectra of the regions of interest were then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition accuracy on the test set was only 87.5%. In order to simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projections algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition accuracy of 94.64% on the test set was obtained by using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and the grid search algorithm, were used to optimize the SVM model parameters: the penalty parameter c and the kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to give the optimal model parameters (c=10.6591, g=0.3497), and a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining the semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
SST algorithms in ACSPO reanalysis of AVHRR GAC data from 2002-2013
NASA Astrophysics Data System (ADS)
Petrenko, B.; Ignatov, A.; Kihai, Y.; Zhou, X.; Stroup, J.
2014-05-01
In response to a request from the NOAA Coral Reef Watch Program, NOAA SST Team initiated reprocessing of 4 km resolution GAC data from AVHRRs flown onboard NOAA and MetOp satellites. The objective is to create a longterm Level 2 Advanced Clear-Sky Processor for Oceans (ACSPO) SST product, consistent with NOAA operations. ACSPO-Reanalysis (RAN) is used as input in the NOAA geo-polar blended Level 4 SST and potentially other Level 4 SST products. In the first stage of reprocessing (reanalysis 1, or RAN1), data from NOAA-15, -16, -17, -18, -19, and Metop-A and -B, from 2002-present have been processed with ACSPO v2.20, and matched up with quality controlled in situ data from in situ Quality Monitor (iQuam) version 1. The ~12 years time series of matchups were used to develop and explore the SST retrieval algorithms, with emphasis on minimizing spatial biases in retrieved SSTs, close reproduction of the magnitudes of true SST variations, and maximizing temporal, spatial and inter-platform stability of retrieval metrics. Two types of SST algorithms were considered: conventional SST regressions, and recently developed incremental regressions. The conventional equations were adopted in the EUMETSAT OSI-SAF formulation, which, according to our previous analyses, provide relatively small regional biases and well-balanced combination of precision and sensitivity, in its class. Incremental regression equations were specifically elaborated to automatically correct for model minus observation biases, always present when RTM simulations are employed. Improved temporal stability was achieved by recalculation of SST coefficients from matchups on a daily basis, with a +/-45 day window around the current date. This presentation describes the candidate SST algorithms considered for the next round of ACSPO reanalysis, RAN2.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
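The "lossy plus residual coding" principle that guarantees a specifiable maximum absolute error can be illustrated with a small sketch in which a truncated SVD stands in for the matrix/tensor decomposition and a uniform quantizer handles the residual; the arithmetic coding stage and the actual decomposition models of the paper are omitted, and the rank and error bound below are arbitrary.

```python
import numpy as np

def near_lossless_encode(eeg, rank=8, max_abs_error=1.0):
    """Lossy-plus-residual sketch: truncated-SVD lossy layer plus a uniformly
    quantized residual that bounds the reconstruction error.

    eeg : (channels, samples) multichannel EEG matrix
    """
    U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # low-rank approximation
    step = 2.0 * max_abs_error
    q = np.round((eeg - lossy) / step).astype(np.int32)  # quantized residual
    return (U[:, :rank], s[:rank], Vt[:rank]), q, step

def near_lossless_decode(factors, q, step):
    U, s, Vt = factors
    # rounding error of the residual is at most step/2 = max_abs_error
    return (U * s) @ Vt + q * step
```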
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low. The difference in laboratory analysis methods was slightly greater than field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results indicate there is less of a difference among samples collected with grab field sampling and analyzed for TSS and concentration of fines in SSC. Even though differences are present, the presence of strong correlations between SSC and TSS concentrations provides the opportunity to develop site specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
NASA Astrophysics Data System (ADS)
Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi
2017-11-01
K-nearest neighbors (KNN) is a common algorithm used for classification, and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between the testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n^3) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
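For orientation, the classical routine that the quantum circuit accelerates is ordinary k-nearest-neighbours voting under Hamming distance on binary feature vectors, sketched below; this is only the classical analogue and says nothing about the quantum implementation.

```python
import numpy as np
from collections import Counter

def knn_hamming(test_vec, train_vecs, train_labels, k=5):
    """Classical k-nearest-neighbours classification with Hamming distance
    on binary feature vectors."""
    # Hamming distance = number of differing bits
    dists = (train_vecs != test_vec).sum(axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```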
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered as a graph of real world phenomena such as biological networks, ecological networks, technological networks, information networks and particularly social networks. Recently, major studies are reported for the characterization of social networks due to a growing trend in analysis of online social networks as dynamic complex large-scale graphs. Due to the large scale and limited access of real networks, the network model is characterized using an appropriate part of a network by sampling approaches. In this paper, a new sampling algorithm based on distributed learning automata has been proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.
ERIC Educational Resources Information Center
Zhang, Mo; Deane, Paul
2015-01-01
In educational measurement contexts, essays have been evaluated and formative feedback has been given based on the end product. In this study, we used a large sample collected from middle school students in the United States to investigate the factor structure of the writing process features gathered from keystroke logs and the association of that…
Patterns of diametric growth in stem-analyzed laurel trees (Cordia alliodora) in a Panamanian forest
Bernard R Parresol; Margaret S. Devall
2013-01-01
Based on cross-dated increment cores, yearly diameters of trees were reconstructed for 21 laurels (Cordia alliodora) growing in a natural secondary forest on Gigante Peninsula, Panama. From this sample of dominant-codominant trees, ages were 14-35 years with an average of 25 years. Growth typically slowed at 7 years old, indicating effects of...
Lønne, Greger; Johnsen, Lars Gunnar; Aas, Eline; Lydersen, Stian; Andresen, Hege; Rønning, Roar; Nygaard, Øystein P
2015-04-15
Randomized clinical trial with 2-year follow-up. To compare the cost-effectiveness of X-stop to minimally invasive decompression in patients with symptomatic lumbar spinal stenosis. Lumbar spinal stenosis is the most common indication for operative treatment in the elderly. Although surgery is more costly than nonoperative treatment, health outcomes over more than 2 years were shown to be significantly better. Surgical treatment with minimally invasive decompression is widely used. X-stop has been introduced as another minimally invasive technique showing good results compared with nonoperative treatment. We enrolled 96 patients aged 50 to 85 years, with symptoms of neurogenic intermittent claudication within a 250-m walking distance and 1- or 2-level lumbar spinal stenosis, randomized to either minimally invasive decompression or X-stop. Quality-adjusted life-years were based on EuroQol EQ-5D. The hospital unit costs were estimated by means of the top-down approach. Each cost unit was converted into a monetary value by dividing the overall cost by the number of cost units produced. The analysis of costs and health outcomes is presented as the incremental cost-effectiveness ratio. The study was terminated after a midway interim analysis because of a significantly higher reoperation rate in the X-stop group (33%). The incremental cost for X-stop compared with minimally invasive decompression was €2832 (95% confidence interval: 1886-3778), whereas the incremental health gain was 0.11 quality-adjusted life-years (95% confidence interval: -0.01 to 0.23). Based on the incremental cost and effect, the incremental cost-effectiveness ratio was €25,700. The majority of the bootstrap samples fell in the northeast quadrant of the cost-effectiveness plane, giving a 50% likelihood that X-stop is cost-effective at the extra cost of €25,700 (incremental cost-effectiveness ratio) per quality-adjusted life-year. The significantly higher cost of X-stop is mainly due to implant cost and the significantly higher reoperation rate. Level of Evidence: 2.
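The incremental cost-effectiveness ratio quoted above follows directly from the reported incremental cost and incremental health gain; as a quick arithmetic check (rounded as in the abstract):

```latex
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
            \;=\; \frac{2832\ \text{EUR}}{0.11\ \text{QALY}}
            \;\approx\; 25\,700\ \text{EUR per QALY gained}
```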
Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de
2017-11-05
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected on line using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
Shu, Tongxin; Xia, Min; Chen, Jiahong; de Silva, Clarence
2017-01-01
Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected on line using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy. PMID:29113087
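The core idea of data-driven adaptive sampling, shortening the sampling interval when the monitored parameter changes quickly and lengthening it when readings are stable, can be sketched as below. The thresholds, the halving/expansion factors, and the interval bounds are illustrative assumptions and are not the published DDASA rules.

```python
def adaptive_sampling_interval(prev_value, new_value, interval,
                               threshold=0.1, t_min=60.0, t_max=3600.0):
    """Data-driven adjustment of the sampling interval (in seconds): sample
    more often when the monitored parameter (e.g., DO or turbidity) changes
    quickly, and back off to save battery when it is stable."""
    if prev_value is None or prev_value == 0:
        return interval
    rel_change = abs(new_value - prev_value) / abs(prev_value)
    if rel_change > threshold:
        interval = max(t_min, interval / 2)     # rapid change: sample faster
    else:
        interval = min(t_max, interval * 1.5)   # stable: sample less often
    return interval
```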
2013-01-01
Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202
Dellicour, Simon; Lecocq, Thomas
2013-10-01
GCALIGNER 1.0 is a computer program designed to construct a preliminary data comparison matrix from chemical data obtained by GC without MS information. The alignment algorithm is based on the comparison of the retention times of each detected compound in a sample. In this paper, we test the efficiency of GCALIGNER on three datasets of the chemical secretions of bumble bees. The algorithm performs the alignment with a low error rate (<3%). GCALIGNER 1.0 is a useful, simple and free program based on an algorithm that enables the alignment of table-type data from GC. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
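A simplified view of retention-time-only alignment is a greedy matching of peaks across samples within a tolerance window, as in the sketch below; the data layout, tolerance, and merging rule are assumptions for illustration and do not reproduce GCALIGNER's actual algorithm.

```python
def align_by_retention_time(samples, tol=0.05):
    """Greedy alignment of GC peaks across samples by retention time only.

    samples : list of lists of (retention_time, area) tuples, one list per sample
    Returns rows of (anchor retention time, matched area per sample),
    with None where a sample has no peak within the tolerance.
    """
    # collect every distinct retention time as a candidate column anchor
    anchors = sorted({round(rt, 3) for peaks in samples for rt, _ in peaks})
    merged = []
    for rt in anchors:
        if merged and abs(rt - merged[-1]) <= tol:
            continue                      # too close to the previous anchor
        merged.append(rt)
    table = []
    for rt in merged:
        row = []
        for peaks in samples:
            match = [a for t, a in peaks if abs(t - rt) <= tol]
            row.append(match[0] if match else None)
        table.append((rt, row))
    return table
```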
Layered clustering multi-fault diagnosis for hydraulic piston pump
NASA Astrophysics Data System (ADS)
Du, Jun; Wang, Shaoping; Zhang, Haiyan
2013-04-01
Efficient diagnosis is very important for improving the reliability and performance of an aircraft hydraulic piston pump, and it is one of the key technologies in a prognostic and health management system. In practice, due to the harsh working environment and heavy working loads, multiple faults of an aircraft hydraulic pump may occur simultaneously after long periods of operation. However, most existing diagnosis methods can only distinguish pump faults that occur individually. Therefore, a new method needs to be developed to realize effective diagnosis of simultaneous multiple faults of an aircraft hydraulic pump. In this paper, a new method based on a layered clustering algorithm is proposed to diagnose multiple faults of an aircraft hydraulic pump that occur simultaneously. Intensive failure mechanism analyses of the five main types of faults are carried out, and based on these analyses the optimal combination and layout of diagnostic sensors is attained. A three-layered diagnosis reasoning engine is designed according to the faults' risk priority numbers and the characteristics of different fault feature extraction methods. The most serious failures are first distinguished with individual signal processing. For the less pronounced faults, i.e., swash plate eccentricity and incremental clearance increases between piston and slipper, a clustering diagnosis algorithm based on the statistical average relative power difference (ARPD) is proposed. By effectively enhancing the fault features of these two faults, the ARPDs calculated from vibration signals are employed to complete the hypothesis testing. The ARPDs of the different faults follow different probability distributions. Compared with the classical fast Fourier transform-based spectrum diagnosis method, the experimental results demonstrate that the proposed algorithm can diagnose the multiple faults, which occur synchronously, with higher precision and reliability.
Network control processor for a TDMA system
NASA Astrophysics Data System (ADS)
Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.
Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm that combines compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, within the compressed sensing framework. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality for degraded images while accurately restoring them.
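The TwIST update itself is compact: each iteration mixes the two previous estimates with a soft-thresholded gradient step. The sketch below shows it for a generic sparse recovery problem min_x 0.5*||y - Ax||^2 + lam*||x||_1, assuming the sensing matrix A is suitably normalized; the step parameters and regularization weight are illustrative, and the paper's frequency-domain pipeline is not reproduced.

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def twist_reconstruct(A, y, lam=0.01, alpha=1.9, beta=1.0, n_iter=200):
    """Bare-bones TwIST (two-step iterative shrinkage/thresholding) for
    sparse recovery from compressive measurements y = A x + noise."""
    x_prev = A.T @ y                                        # initial estimate
    x = soft_threshold(x_prev + A.T @ (y - A @ x_prev), lam)
    for _ in range(n_iter):
        # soft-thresholded gradient step at the current iterate
        grad_step = soft_threshold(x + A.T @ (y - A @ x), lam)
        # two-step TwIST combination of the last two iterates
        x_next = (1 - alpha) * x_prev + (alpha - beta) * x + beta * grad_step
        x_prev, x = x, x_next
    return x
```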
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when the battery power level and the data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, referred to as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and with the patient in different positions. This module transmits the ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
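The "modified moving average except in the region of each QRS complex" can be illustrated with a short sketch that smooths the signal everywhere but leaves a guard band around detected peaks untouched; the peak indices are assumed to come from a separate detector, the window and guard widths are hypothetical, and the bit-error and baseline-drift stages of PRASMMA are omitted.

```python
import numpy as np

def modified_moving_average(ecg, peak_indices, window=5, guard=20):
    """Moving-average smoothing that skips a guard band around each detected
    QRS peak, so the sharp complexes are not flattened."""
    protected = np.zeros(len(ecg), dtype=bool)
    for p in peak_indices:
        lo, hi = max(0, p - guard), min(len(ecg), p + guard + 1)
        protected[lo:hi] = True                        # keep QRS samples as-is
    kernel = np.ones(window) / window
    smoothed = np.convolve(ecg, kernel, mode="same")   # plain moving average
    return np.where(protected, ecg, smoothed)
```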
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility. While in the patient groups with worse health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the SF-6D to estimate utility. Much of the appeal in using QALYs as measure of effectiveness in economic evaluations is in the comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of the use of incremental cost-utility ratios for decision-making.
Hertel, Nadine; Kotchie, Robert W; Samyshkin, Yevgeniy; Radford, Matthew; Humphreys, Samantha; Jameson, Kevin
2012-01-01
Frequent exacerbations which are both costly and potentially life-threatening are a major concern to patients with chronic obstructive pulmonary disease (COPD), despite the availability of several treatment options. This study aimed to assess the lifetime costs and outcomes associated with alternative treatment regimens for patients with severe COPD in the UK setting. A Markov cohort model was developed to predict lifetime costs, outcomes, and cost-effectiveness of various combinations of a long-acting muscarinic antagonist (LAMA), a long-acting beta agonist (LABA), an inhaled corticosteroid (ICS), and roflumilast in a fully incremental analysis. Patients willing and able to take ICS, and those refusing or intolerant to ICS were analyzed separately. Efficacy was expressed as relative rate ratios of COPD exacerbation associated with alternative treatment regimens, taken from a mixed treatment comparison. The analysis was conducted from the UK National Health Service (NHS) perspective. Parameter uncertainty was explored using one-way and probabilistic sensitivity analysis. Based on the results of the fully incremental analysis a cost-effectiveness frontier was determined, indicating those treatment regimens which represent the most cost-effective use of NHS resources. For ICS-tolerant patients the cost-effectiveness frontier suggested LAMA as initial treatment. Where patients continue to exacerbate and additional therapy is required, LAMA + LABA/ICS can be a cost-effective option, followed by LAMA + LABA/ICS + roflumilast (incremental cost-effectiveness ratio [ICER] versus LAMA + LABA/ICS: £16,566 per quality-adjusted life-year [QALY] gained). The ICER in ICS-intolerant patients, comparing LAMA + LABA + roflumilast versus LAMA + LABA, was £13,764/QALY gained. The relative rate ratio of exacerbations was identified as the primary driver of cost-effectiveness. The treatment algorithm recommended in UK clinical practice represents a cost-effective approach for the management of COPD. The addition of roflumilast to the standard of care regimens is a clinical and cost-effective treatment option for patients with severe COPD, who continue to exacerbate despite existing bronchodilator therapy.
How enhanced molecular ions in Cold EI improve compound identification by the NIST library.
Alon, Tal; Amirav, Aviv
2015-12-15
Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library (such as NIST) search algorithm compares different EI mass spectra in the library's database with the measured EI mass spectrum, assigning each of them a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced Molecular Ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as library identification probabilities for the correct library mass spectra increase, despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. This behavior which was previously observed by analyzing Cold EI mass spectra can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound more than low-mass ions and therefore carries more weight in library search identification algorithms. These ions are uniquely abundant in Cold EI, which therefore enables enhanced compound characterization along with improved NIST library based identification. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images have greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. To this end, we use a multi-view plane-sweep algorithm with semi-global matching (SGM) optimization, which is parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of the input sequences. One important aspect in reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
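The key idea of sampling more densely near the camera and more sparsely far away can be illustrated by sampling planes uniformly in inverse depth, which corresponds to a constant step in image space for a fronto-parallel sweep. This is a simplified stand-in for the cross-ratio derivation in the paper.

```python
import numpy as np

def linear_planes(z_near, z_far, n):
    """Equidistant sampling in scene space (metric depth)."""
    return np.linspace(z_near, z_far, n)

def inverse_planes(z_near, z_far, n):
    """Sampling uniform in inverse depth: roughly constant step in image space
    (disparity), hence dense near the camera and sparse far away. A simplified
    illustration of image-space sampling, not the paper's cross-ratio derivation."""
    inv = np.linspace(1.0 / z_near, 1.0 / z_far, n)
    return 1.0 / inv

near, far = 2.0, 100.0
print(np.round(linear_planes(near, far, 6), 1))   # [  2.   21.6  41.2  60.8  80.4 100. ]
print(np.round(inverse_planes(near, far, 6), 1))  # [  2.    2.5   3.3   4.9   9.3 100. ]
```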
Nonlinear Dynamic Characteristics of Oil-in-Water Emulsions
NASA Astrophysics Data System (ADS)
Yin, Zhaoqi; Han, Yunfeng; Ren, Yingyu; Yang, Qiuyi; Jin, Ningde
2016-08-01
In this article, the nonlinear dynamic characteristics of oil-in-water emulsions under the addition of a surfactant were experimentally investigated. First, based on vertical upward oil-water two-phase flow experiments in a 20 mm inner diameter (ID) test pipe, dynamic response signals of oil-in-water emulsions were recorded using a vertical multiple electrode array (VMEA) sensor. Afterwards, the recurrence plot (RP) algorithm and the multi-scale weighted complexity entropy causality plane (MS-WCECP) were employed to analyse the nonlinear characteristics of the signals. The results show that determinism decreases and randomness increases as the surfactant concentration increases. This article provides a novel method for revealing the nonlinear dynamic characteristics, complexity, and randomness of oil-in-water emulsions from experimental measurement signals.
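As a reference for the first analysis step, a minimal recurrence plot computation is sketched below; the embedding dimension, delay, threshold, and toy signal are placeholder choices, not those used in the study.

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=None):
    """Binary recurrence matrix R[i, j] = 1 if the delay-embedded states i and j
    are closer than eps (Euclidean distance). If eps is None, it is set to 10%
    of the maximum phase-space diameter, a common heuristic."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    states = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dists.max()
    return (dists <= eps).astype(int)

# Toy signal standing in for a VMEA voltage trace.
t = np.linspace(0, 20, 400)
signal = np.sin(t) + 0.1 * np.random.randn(t.size)
rp = recurrence_plot(signal)
print(rp.shape, rp.mean())   # matrix size and recurrence rate
```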
Vehicle Maneuver Detection with Accelerometer-Based Classification.
Cervantes-Villanueva, Javier; Carrillo-Zapata, Daniel; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F
2016-09-29
In the mobile computing era, smartphones have become instrumental tools for developing innovative mobile context-aware systems. In that sense, their use in the vehicular domain eases the development of novel, personal transportation solutions. In this context, the present work introduces a mechanism to perceive the current kinematic state of a vehicle on the basis of accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture addresses the computational limitations of such devices by carrying out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of an ad hoc mobile application developed for this purpose.
Design, implementation and application of distributed order PI control.
Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan
2013-05-01
In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainty than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency-domain criteria, which can be widely applied in many fields of fractional order systems, control, and signal processing. In addition, an auto-tuning strategy and a genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguishing features of the discussed methods in fairways. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
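As a rough illustration of what a discrete distributed order PI law can look like (a generic sketch, not the controller designed in the paper), the distributed order integral can be approximated by a weighted sum of fractional order integrators, each discretized with Grünwald-Letnikov coefficients. The gains, order grid, and weights below are placeholders.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients for a fractional *integral* of order alpha
    (the operator D^(-alpha)): w_0 = 1, w_j = w_{j-1} * (1 - (1 - alpha)/j)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (1.0 - alpha) / j)
    return w

class DistributedOrderPI:
    """u_k = Kp*e_k + Ki * sum_m c_m * I^(alpha_m) e_k, with the distributed-order
    integral approximated on a grid of orders alpha_m with quadrature weights c_m.
    Placeholder tuning, not the design from the cited paper."""
    def __init__(self, kp=1.0, ki=0.5, orders=(0.25, 0.5, 0.75, 1.0),
                 order_weights=(0.25, 0.25, 0.25, 0.25), dt=0.01, horizon=500):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.orders, self.cw = orders, order_weights
        self.w = [gl_weights(a, horizon) for a in orders]
        self.errors = []                      # most recent error first

    def step(self, error):
        self.errors.insert(0, error)
        self.errors = self.errors[:len(self.w[0])]
        e = np.array(self.errors)
        integral = 0.0
        for a, c, w in zip(self.orders, self.cw, self.w):
            # Truncated GL approximation of I^a e at the current step.
            integral += c * (self.dt ** a) * np.dot(w[:len(e)], e)
        return self.kp * error + self.ki * integral

# Toy closed loop: first-order plant dx/dt = -x + u tracking a unit step.
ctrl, x, dt = DistributedOrderPI(), 0.0, 0.01
for k in range(300):
    u = ctrl.step(1.0 - x)
    x += dt * (-x + u)
print(round(x, 3))   # rises toward the 1.0 setpoint
```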
A decision support system using combined-classifier for high-speed data stream in smart grid
NASA Astrophysics Data System (ADS)
Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun
2016-11-01
Large volumes of high-speed streaming data are generated continuously by large power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted by power grid enterprises. Among decision-making algorithms, the incremental decision tree is the most widely used. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier returns more accurate answers than existing ones.
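The abstract does not detail how the CBC and MTC cooperate. One plausible minimal arrangement, answering from a small cache of recent labeled instances when a close match exists and otherwise falling back to an incrementally trained tree, is sketched below; the cooperation scheme, the distance threshold, and the use of river's Hoeffding tree as a stand-in for the main tree classifier are all assumptions.

```python
import math
from collections import deque
from river import tree   # Hoeffding tree as a stand-in for the main tree classifier

class CombinedStreamClassifier:
    """Illustrative composite of a cache-based classifier (CBC) and a main tree
    classifier (MTC). The cooperation scheme and thresholds here are assumptions,
    not the architecture from the paper."""

    def __init__(self, cache_size=200, radius=0.1):
        self.cache = deque(maxlen=cache_size)   # recent (features, label) pairs
        self.radius = radius                    # "close enough" distance
        self.mtc = tree.HoeffdingTreeClassifier()

    def _nearest(self, x):
        best, best_d = None, math.inf
        for xc, yc in self.cache:
            d = math.sqrt(sum((x[k] - xc[k]) ** 2 for k in x))
            if d < best_d:
                best, best_d = yc, d
        return best, best_d

    def predict_one(self, x):
        label, dist = self._nearest(x)
        if label is not None and dist <= self.radius:
            return label                        # fast path: answer from the cache
        return self.mtc.predict_one(x)          # fall back to the main tree

    def learn_one(self, x, y):
        self.cache.append((x, y))
        self.mtc.learn_one(x, y)

# Usage with a toy stream of dict features (as river expects).
clf, correct = CombinedStreamClassifier(), 0
for i in range(1000):
    x = {"load": (i % 100) / 100.0, "freq": 50.0 + 0.01 * (i % 7)}
    y = int(x["load"] > 0.5)
    correct += int(clf.predict_one(x) == y)
    clf.learn_one(x, y)
print(correct / 1000)
```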
Analytical three-point Dixon method: With applications for spiral water-fat imaging.
Wang, Dinghui; Zwart, Nicholas R; Li, Zhiqiang; Schär, Michael; Pipe, James G
2016-02-01
The goal of this work is to present a new three-point analytical approach with flexible even or uneven echo increments for water-fat separation, and to evaluate its feasibility with spiral imaging. Two sets of possible water and fat solutions are first found analytically. Then, two field maps of the B0 inhomogeneity are obtained by linear regression. The initial identification of the true solution is facilitated by the root-mean-square error of the linear regression and the incorporation of a fat spectrum model. The resolved field map after a region-growing algorithm is refined iteratively for spiral imaging. The final water and fat images are recalculated using a joint water-fat separation and deblurring algorithm. Successful implementations were demonstrated with three-dimensional gradient-echo head imaging and single-breathhold abdominal imaging. Spiral, high-resolution T1-weighted brain images were shown with sharpness comparable to the reference Cartesian images. With appropriate choices of uneven echo increments, it is feasible to resolve the aliasing of the field map voxel-wise. High-quality water-fat spiral imaging can be achieved with the proposed approach. © 2015 Wiley Periodicals, Inc.
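A compact way to see the underlying signal model (a generic VARPRO-style search, not the analytical solution derived in the paper) is a voxel-wise search over candidate field-map values with a linear least-squares solve for water and fat at each candidate. The echo times and the single-peak fat model below are placeholder simplifications.

```python
import numpy as np

FAT_FREQ_HZ = -440.0          # single-peak fat model at 3 T (placeholder)

def dixon_voxel(signal, echo_times, psi_grid):
    """Voxel-wise water/fat estimate from three (or more) complex echoes.
    For each candidate field-map value psi, demodulate the echoes and solve
    [1, c_n] [W, F]^T = s_n by least squares; keep the psi with the smallest
    residual. A generic sketch, not the paper's analytical solution."""
    c = np.exp(2j * np.pi * FAT_FREQ_HZ * echo_times)       # fat phasors per echo
    A = np.column_stack([np.ones_like(c), c])
    best = None
    for psi in psi_grid:
        demod = signal * np.exp(-2j * np.pi * psi * echo_times)
        wf, *_ = np.linalg.lstsq(A, demod, rcond=None)
        rms = np.linalg.norm(A @ wf - demod)
        if best is None or rms < best[0]:
            best = (rms, psi, wf)
    _, psi, (water, fat) = best
    return abs(water), abs(fat), psi

# Simulate one voxel: 70% water, 30% fat, 25 Hz off-resonance, uneven echoes.
te = np.array([1.2e-3, 2.3e-3, 3.8e-3])                     # seconds
voxel = (0.7 + 0.3 * np.exp(2j * np.pi * FAT_FREQ_HZ * te)) * np.exp(2j * np.pi * 25.0 * te)
print(dixon_voxel(voxel, te, np.linspace(-150, 150, 301)))  # ~ (0.7, 0.3, 25.0)
```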
Kim, Song-Ju; Aono, Masashi; Hara, Masahiko
2010-07-01
We propose a model - the "tug-of-war (TOW) model" - to conduct unique parallel searches using many nonlocally correlated search agents. The model is based on the property of a single-celled amoeba, the true slime mold Physarum, which maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a "nonlocal correlation" among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). This nonlocal correlation was shown to be useful for decision making in the face of a dilemma. The multi-armed bandit problem is to determine the optimal strategy for maximizing the total reward under incompatible demands: either exploiting the rewards obtainable from already collected information, or exploring new information to acquire higher payoffs at the cost of some risk. Our model can efficiently manage this "exploration-exploitation dilemma" and exhibits good performance. The average accuracy rate of our model is higher than those of well-known algorithms such as the modified ε-greedy algorithm and the modified softmax algorithm, especially for relatively difficult problems. Moreover, our model flexibly adapts to changing environments, a property essential for living organisms surviving in uncertain environments.
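A much-simplified numerical caricature of the TOW idea, in which the arms share a conserved total displacement so that evidence for one arm immediately counts against the others, is sketched below. The update rule, oscillation term, and parameters are illustrative and do not reproduce the model's actual dynamics.

```python
import math
import random

def tow_bandit(probs, steps=1000, amp=1.0, seed=0):
    """Toy tug-of-war-style player for Bernoulli bandit arms. Each arm carries a
    displacement x_k; a reward pushes its arm forward, a miss pulls it back, and
    subtracting the mean displacement enforces a conservation law so gains of
    one arm are losses of the others. A caricature of TOW, not its exact dynamics."""
    rng = random.Random(seed)
    k = len(probs)
    x = [0.0] * k
    hits = 0
    for t in range(steps):
        # Oscillation term nudges exploration (phase-shifted per arm).
        osc = [amp * math.sin(2 * math.pi * t / 50.0 + 2 * math.pi * i / k)
               for i in range(k)]
        arm = max(range(k), key=lambda i: x[i] + osc[i])
        reward = rng.random() < probs[arm]
        hits += reward
        x[arm] += 1.0 if reward else -1.0
        mean = sum(x) / k
        x = [v - mean for v in x]            # conservation: displacements sum to 0
    return hits / steps

print(tow_bandit([0.4, 0.6]))   # typically close to the better arm's 0.6 payout
```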
Optimization of diesel engine performance by the Bees Algorithm
NASA Astrophysics Data System (ADS)
Azfanizam Ahmad, Siti; Sunthiram, Devaraj
2018-03-01
Biodiesel has recently been receiving great attention in the world market due to the depletion of fossil fuel reserves. Biodiesel is also an alternative to diesel No. 2 fuel, as it is biodegradable and oxygenated. However, evidence suggests that biodiesel does not fully match the characteristics of diesel No. 2 fuel, since its use has been reported to increase the brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC, in order to optimize the operating condition of a diesel engine running on biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) over the biodiesel percentage in the fuel mixture, the engine speed, and the engine load. The results showed that a brake power of 58.33 kW, a brake torque of 310.33 N.m, and a BSFC of 200.29/(kW.h) were the optimum values. Compared with the values obtained by other algorithms, the BA produced comparable brake power and better brake torque and BSFC. This finding demonstrates that the BA can be used to optimize the performance of a diesel engine based on the optimum values of brake power, brake torque, and BSFC.
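A generic Bees Algorithm skeleton is sketched below; the objective function, site counts, and patch size are placeholders, not the engine model or settings used in the study.

```python
import random

def bees_algorithm(objective, bounds, n_scouts=30, m_sites=10, e_sites=3,
                   nep=7, nsp=3, patch=0.1, iters=100, seed=1):
    """Generic Bees Algorithm for maximisation over box bounds.
    bounds: list of (low, high) per decision variable. All settings are
    illustrative defaults, not the tuning used in the cited study."""
    rng = random.Random(seed)

    def random_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def neighbour(p, size):
        return [min(hi, max(lo, v + rng.uniform(-size, size) * (hi - lo)))
                for v, (lo, hi) in zip(p, bounds)]

    pop = [random_point() for _ in range(n_scouts)]
    for _ in range(iters):
        pop.sort(key=objective, reverse=True)
        new_pop = []
        for i in range(m_sites):                    # local (neighbourhood) search
            recruits = nep if i < e_sites else nsp  # more bees on elite sites
            candidates = [neighbour(pop[i], patch) for _ in range(recruits)]
            candidates.append(pop[i])               # never lose the site's best
            new_pop.append(max(candidates, key=objective))
        while len(new_pop) < n_scouts:              # remaining bees scout randomly
            new_pop.append(random_point())
        pop = new_pop
    return max(pop, key=objective)

# Toy stand-in for an engine performance map: peak near speed=2500 rpm, load=0.8.
def toy_brake_torque(x):
    speed, load = x
    return -((speed - 2500.0) / 1000.0) ** 2 - (load - 0.8) ** 2

best = bees_algorithm(toy_brake_torque, bounds=[(1000.0, 4000.0), (0.0, 1.0)])
print([round(v, 2) for v in best])   # near [2500, 0.8]
```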
Research of high power and stable laser in portable Raman spectrometer based on SHINERS technology
NASA Astrophysics Data System (ADS)
Cui, Yongsheng; Yin, Yu; Wu, Yulin; Ni, Xuxiang; Zhang, Xiuda; Yan, Huimin
2013-08-01
The intensity of Raman scattered light is very weak, only 10^-12 to 10^-6 of the incident light. In order to obtain the required sensitivity, traditional Raman spectrometers tend to be heavy and bulky, so they are mostly used as indoor test instruments. With the Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) method, the Raman signal can be enhanced significantly, and a portable Raman spectrometer combined with SHINERS can be widely used in various fields. In such a portable SHINERS-based spectrometer, the laser source must be stable and able to output monochromatic, narrow-band light with stable power. While the laser is operating, temperature changes can induce wavelength drift and thereby affect the power stability of the excitation light, so the working temperature of the laser must be strictly controlled. In order to ensure the stability of the laser power and output current, this paper adopts the WLD3343 constant-current laser driver chip from Wavelength Electronics together with a P89LPC935 MCU to drive an LML-785.0BF-XX laser diode (LD). With this scheme, the Raman spectrometer can be small in size and the drive current kept constant, while functions such as soft start, over-current protection, and over-voltage protection are provided. A continuously adjustable output can be realized under control, satisfying the requirement for high output power. A MAX1968 chip is adopted for accurate control of the laser temperature, which also meets the demand for miniaturization. In terms of temperature control, the traditional (positional) PID algorithm suffers from a large integral truncation effect, which easily causes a steady-state error. Each output of the incremental PID algorithm is independent of the absolute actuator position, and the output coefficients can be controlled to avoid full-scale output and excessive adjustment, so the system reaches equilibrium noticeably faster. A variable-integral incremental digital PID algorithm is therefore used in the TEC temperature control system. The experimental results show that, compared with other schemes, the laser output power in our scheme is more stable and reliable, the peak power is higher, the temperature can be controlled to within +/-0.1°C, and the device is smaller. Using this laser source, together with SHINERS technology and the spectrometer, high-quality Raman spectra of materials can be obtained.
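For reference, the incremental (velocity-form) PID update contrasted with the positional form above computes only the change in drive output at each step, so a large accumulated integral can never slam the output to full scale. The gains and the toy thermal model below are placeholders, not the tuning of the described TEC loop.

```python
class IncrementalPID:
    """Velocity-form PID: delta_u_k = Kp*(e_k - e_{k-1}) + Ki*e_k
    + Kd*(e_k - 2*e_{k-1} + e_{k-2}). Only the increment is computed, so the
    output never jumps to full scale from an accumulated integral term.
    Gains are illustrative, not tuned for any particular TEC driver."""

    def __init__(self, kp, ki, kd, u_min, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.e1 = self.e2 = 0.0     # e_{k-1}, e_{k-2}
        self.u = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.u = min(self.u_max, max(self.u_min, self.u + du))
        self.e2, self.e1 = self.e1, e
        return self.u

# Toy usage: regulate a crude first-order thermal model toward 25.0 degC.
pid = IncrementalPID(kp=0.8, ki=0.2, kd=0.05, u_min=-1.0, u_max=1.0)
temp = 30.0
for _ in range(200):
    drive = pid.step(25.0, temp)
    temp += 0.05 * (-0.1 * (temp - 22.0) + 2.0 * drive)   # placeholder plant model
print(round(temp, 2))   # settles near 25.0
```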
QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.
Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L
2016-10-01
In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
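The abstract does not spell out the QUEST procedure. One way to picture the quantile idea, learning during supervised training which quantile of a feature separates the classes and then re-estimating that quantile from unlabeled data after deployment, is sketched below; this is purely an illustration under those assumptions, not the published algorithm.

```python
import numpy as np

def train_quantile(features, labels):
    """Supervised phase (controlled environment): pick the quantile level q such
    that thresholding the feature at its q-quantile best separates the two classes."""
    best_q, best_acc = 0.5, 0.0
    for q in np.linspace(0.05, 0.95, 91):
        thr = np.quantile(features, q)
        acc = np.mean((features > thr).astype(int) == labels)
        if acc > best_acc:
            best_q, best_acc = q, acc
    return best_q

def deploy(features_onsite, q):
    """Unsupervised phase (on-site): re-estimate the same quantile from local,
    unlabeled data, so the threshold adapts to the deployment site without
    transmitting training data or labels."""
    thr = np.quantile(features_onsite, q)
    return (features_onsite > thr).astype(int)

# Toy data: class 1 is "louder"; the deployed site has a shifted noise floor.
rng = np.random.default_rng(0)
train_x = np.r_[rng.normal(1.0, 0.3, 500), rng.normal(2.0, 0.3, 500)]
train_y = np.r_[np.zeros(500, int), np.ones(500, int)]
q = train_quantile(train_x, train_y)
site_x = np.r_[rng.normal(1.5, 0.3, 500), rng.normal(2.5, 0.3, 500)]
print(q, deploy(site_x, q).mean())   # roughly half the on-site samples labeled 1
```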
Gerety, Gregg; Bebakar, Wan Mohamad Wan; Chaykin, Louis; Ozkaya, Mesut; Macura, Stanislava; Hersløv, Malene Lundgren; Behnke, Thomas
2016-05-01
This 26-week, multicenter, randomized, open-label, parallel-group, treat-to-target trial in adults with type 2 diabetes compared the efficacy and safety of treatment intensification algorithms with twice-daily (BID) insulin degludec/insulin aspart (IDegAsp). Patients randomized 1:1 to IDegAsp BID used either a 'Simple' algorithm (twice-weekly dose adjustments based on a single pre-breakfast and pre-evening meal self-monitored plasma glucose [SMPG] measurement; IDegAsp[BIDSimple], n = 136) or a 'Stepwise' algorithm (once-weekly dose adjustments based on the lowest of 3 pre-breakfast and 3 pre-evening meal SMPG values; IDegAsp[BIDStep-wise], n = 136). After 26 weeks, mean change from baseline in glycated hemoglobin (HbA1c) with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise] (-15 mmol/mol versus -14 mmol/mol; 95% confidence interval [CI] upper limit, <4 mmol/mol) (baseline HbA1c: 66.3 mmol/mol with IDegAsp[BIDSimple] and 66.6 mmol/mol with IDegAsp[BIDStep-wise]). The proportion of patients who achieved HbA1c <7.0% (<53 mmol/mol) at the end of the trial was 66.9% with IDegAsp[BIDSimple] and 62.5% with IDegAsp[BIDStep-wise]. Fasting plasma glucose levels were reduced with each titration algorithm (-1.51 mmol/L with IDegAsp[BIDSimple] versus -1.95 mmol/L with IDegAsp[BIDStep-wise]). Weight gain was 3.8 kg with IDegAsp[BIDSimple] versus 2.6 kg with IDegAsp[BIDStep-wise], and rates of overall confirmed hypoglycemia (5.16 episodes per patient-year of exposure [PYE] versus 8.93 PYE) and nocturnal confirmed hypoglycemia (0.78 PYE versus 1.33 PYE) were significantly lower with IDegAsp[BIDStep-wise] versus IDegAsp[BIDSimple]. There were no significant differences in insulin dose increments between groups. Treatment intensification with IDegAsp[BIDSimple] was noninferior to IDegAsp[BIDStep-wise]. Both titration algorithms were well tolerated; however, the more conservative step-wise algorithm led to less weight gain and fewer hypoglycemic episodes. Clinicaltrials.gov: NCT01680341.
NASA Astrophysics Data System (ADS)
Hu, Zixi; Yao, Zhewei; Li, Jinglai
2017-03-01
Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To this end, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample infinite-dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, where the covariance operator of the proposal distribution is adjusted based on the sampling history to improve the simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally, we provide numerical examples to demonstrate the performance of the proposed method.
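The standard pCN proposal that the paper extends can be written in a few lines; the finite-dimensional toy target and the fixed prior covariance below are placeholders for the infinite-dimensional setting, and the paper's adaptive covariance update itself is not shown.

```python
import numpy as np

def pcn_mcmc(neg_log_like, prior_cov, n_samples=5000, beta=0.2, seed=0):
    """Preconditioned Crank-Nicolson MCMC with a zero-mean Gaussian prior N(0, C).
    Proposal: v = sqrt(1 - beta^2) * u + beta * xi, with xi ~ N(0, C). Because the
    proposal is prior-reversible, the acceptance ratio involves only the likelihood,
    which is what makes the method robust to dimension. (Standard pCN; the cited
    paper's adaptive variant additionally updates the proposal covariance.)"""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(prior_cov)
    dim = prior_cov.shape[0]
    u = L @ rng.standard_normal(dim)            # start from a prior draw
    phi_u = neg_log_like(u)
    chain = np.empty((n_samples, dim))
    for k in range(n_samples):
        xi = L @ rng.standard_normal(dim)
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
        phi_v = neg_log_like(v)
        if np.log(rng.uniform()) < phi_u - phi_v:   # accept with prob min(1, e^{phi_u - phi_v})
            u, phi_u = v, phi_v
        chain[k] = u
    return chain

# Toy example: Gaussian prior N(0, I) in 50 dimensions, data y = u[0] + noise.
dim, y, sigma = 50, 1.3, 0.5
phi = lambda u: 0.5 * ((y - u[0]) / sigma) ** 2
samples = pcn_mcmc(phi, np.eye(dim))
print(samples[:, 0].mean())   # posterior mean of u[0], roughly y / (1 + sigma^2)
```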