Sample records for random access algorithm

  1. Random-access algorithms for multiuser computer communication networks. Doctoral thesis, 1 September 1986-31 August 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papantoni-Kazakos, P.; Paterakis, M.

    1988-07-01

    For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels, where the noise appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.

  2. Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks

    PubMed Central

    Ranganathan, Radha; Kannan, Kathiravan

    2015-01-01

    IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism (i.e., random binary exponential backoff, BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules the medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes the contending nodes according to their queue lengths and waiting times. Each node computes its unique backoff interval using fuzzy logic based on the input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066
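
The random binary exponential backoff that the abstract above identifies as the source of unfairness can be sketched as follows. The cw_min and cw_max values are common IEEE 802.11 parameter choices used here for illustration; this is the baseline mechanism, not the paper's fuzzy-logic replacement.

```python
import random

def beb_backoff(collisions, cw_min=16, cw_max=1024):
    """Binary exponential backoff (BEB): the contention window doubles
    on every collision (capped at cw_max), and the backoff slot is
    drawn uniformly at random from [0, CW - 1]."""
    cw = min(cw_min * (2 ** collisions), cw_max)
    return random.randint(0, cw - 1)
```

Each collision doubles the contention window, so a loaded network spreads retransmissions over a wider, but still uniformly random, interval; it is exactly this uniform draw that the fuzzy-logic scheme replaces with a prioritized one.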

  3. Using Motion Planning to Determine the Existence of an Accessible Route in a CAD Environment

    ERIC Educational Resources Information Center

    Pan, Xiaoshan; Han, Charles S.; Law, Kincho H.

    2010-01-01

    We describe an algorithm based on motion-planning techniques to determine the existence of an accessible route through a facility for a wheeled mobility device. The algorithm is based on LaValle's work on rapidly exploring random trees and is enhanced to take into consideration the particularities of the accessible route domain. Specifically, the…

  4. Quantum random access memory.

    PubMed

    Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo

    2008-04-25

    A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
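
The addressing saving claimed above can be illustrated classically: routing one query down a binary tree of depth n touches only the n switches on the path, while a conventional fanout decoder drives all N - 1 internal switches. This is classical bookkeeping only, not a quantum simulation:

```python
def route_to_cell(address_bits):
    """Follow the address bits down a binary routing tree of depth n.
    Only the n switches on the path are set, versus the
    N - 1 = 2**n - 1 switches a conventional fanout decoder drives."""
    cell = 0
    switches_thrown = 0
    for bit in address_bits:
        cell = (cell << 1) | bit   # descend left (0) or right (1)
        switches_thrown += 1
    return cell, switches_thrown
```

For a 10-bit address this throws 10 switches to reach one of 1024 cells, which is the O(log N) versus O(N) gap the abstract refers to.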

  5. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand-assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals linked via a satellite repeater. Channel assignments are made, following a BCC queueing discipline, by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand-assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  6. Solving large test-day models by iteration on data and preconditioned conjugate gradient.

    PubMed

    Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A

    1999-12-01

    A preconditioned conjugate gradient method was implemented into an iteration-on-data program for estimation of breeding values, and its convergence characteristics were studied. An algorithm was used as a reference in which one fixed effect was solved by the Gauss-Seidel method, and other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. Animal model data comprised 665,629 lactation milk yields and random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information of 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] were required with the preconditioned conjugate gradient. To solve the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
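
A minimal sketch of the preconditioned conjugate gradient iteration described above, using a plain Jacobi (diagonal) preconditioner in place of the paper's block-diagonal one; the function and argument names are illustrative:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient with a Jacobi (diagonal)
    preconditioner, a simplified stand-in for the block-diagonal
    preconditioner described in the abstract. A is a dense symmetric
    positive-definite matrix (list of lists); b is the right-hand side."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = b[:]                                   # r = b - A*0
    z = [r[i] / A[i][i] for i in range(n)]     # z = M^-1 r, with M = diag(A)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        if rz == 0.0 or dot(r, r) ** 0.5 < tol:
            break
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Only a handful of vectors (x, r, z, p) must be held in memory, which is why the paper's version needs just four full-length vectors in RAM while streaming the data from disk each round.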

  7. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match the processor addressing rate with the memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
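
The cache behavior described above can be made concrete with a toy direct-mapped cache model; the line count and line size below are arbitrary assumptions for illustration:

```python
def hit_rate(addresses, num_lines=64, line_size=8):
    """Hit rate of a toy direct-mapped cache: address a belongs to
    block a // line_size, which maps to line (block % num_lines);
    a hit occurs when that line already holds the same block."""
    lines = [None] * num_lines
    hits = 0
    for a in addresses:
        block = a // line_size
        idx = block % num_lines
        if lines[idx] == block:
            hits += 1
        else:
            lines[idx] = block   # miss: fetch the block into the line
    return hits / len(addresses)
```

Sequential access reuses each fetched block (7 of every 8 accesses hit with an 8-word line), while a large power-of-two stride maps every access to the same conflicting line and misses every time; this is the locality effect the abstract says programmers should exploit.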

  8. A novel strategy for load balancing of distributed medical applications.

    PubMed

    Logeswaran, Rajasvaran; Chen, Li-Choo

    2012-04-01

    Current trends in medicine, specifically in the electronic handling of medical applications ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for distributing tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
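
The abstract above does not spell out the Random Sender Initiated Algorithm, so the following is only a loose, hypothetical interpretation of sender-initiated random load balancing; the threshold and the one-probe forwarding rule are invented for illustration:

```python
import random

def assign_tasks(num_tasks, num_nodes, threshold=5, seed=1):
    """Hypothetical sender-initiated balancing: a task arrives at a
    random node; if that node's queue has reached `threshold`, the
    sender probes one other random node and forwards the task there
    when that node's queue is shorter."""
    rng = random.Random(seed)
    queues = [0] * num_nodes
    for _ in range(num_tasks):
        node = rng.randrange(num_nodes)
        if queues[node] >= threshold:
            other = rng.randrange(num_nodes)
            if queues[other] < queues[node]:
                node = other   # forward to the less loaded node
        queues[node] += 1
    return queues
```

The probe-and-forward step is what distinguishes sender-initiated schemes from pure random node selection, which simply keeps the first random choice regardless of load.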

  9. Satellite Doppler data processing using a microcomputer

    NASA Technical Reports Server (NTRS)

    Schmid, P. E.; Lynn, J. J.

    1977-01-01

    A microcomputer developed to compute ground radio beacon position locations from satellite measurements of Doppler frequency shift is described. Both the computational algorithms and the microcomputer hardware incorporating them are discussed. Results are presented in which the microcomputer, in conjunction with the NIMBUS-6 random access measurement system, provides real-time calculation of beacon latitude and longitude.

  10. A Random Access Algorithm for Frequency Hopped Spread Spectrum Packet Radio Networks

    DTIC Science & Technology

    1986-03-01

    ...in section 4.3, the probability p is given by expression (20) (equation garbled in source). We now present a... "Algorithms," Univ. of Connecticut, EECS Dept. Technical Report UCT/DEECS/TR-86-1, January 1986. Also, submitted for publication. [19] W. Peterson and E...

  11. Unified Method for Delay Analysis of Random Multiple Access Algorithms.

    DTIC Science & Technology

    1985-08-01

    ...packets in the first cell of the stack. The rules of the algorithm yield a recurrence relation for the w_i's (equation garbled in source). ... "for computer communications", in Proc. 1970 Fall Joint Computer Conf., AFIPS Press, vol. 37, 1970, pp. 281-285. [15] N. D. Vvedenskaya and B. S...

  12. Locating Encrypted Data Hidden Among Non-Encrypted Data Using Statistical Tools

    DTIC Science & Technology

    2007-03-01

    ...length of a compressed sequence). If a bit sequence can be significantly compressed, then it is not random. Lempel-Ziv Compression Test: this test... communication, targeting, and a host of other tasks. This software will most assuredly contain classified data or algorithms requiring protection in... containing the classified data and algorithms. As the program is executed the soldier would have access to the common unclassified tasks; however, to...

  13. Aspects of GPU performance in algorithms with random memory access

    NASA Astrophysics Data System (ADS)

    Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.

    2017-10-01

    The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that memory access time increases tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a problem common to all NVIDIA GPUs, arising from their architecture. A few modifications of the numerical algorithm were suggested to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5-times speedup.

  14. Cluster-Based Multipolling Sequencing Algorithm for Collecting RFID Data in Wireless LANs

    NASA Astrophysics Data System (ADS)

    Choi, Woo-Yong; Chatterjee, Mainak

    2015-03-01

    With the growing use of RFID (Radio Frequency Identification), it is becoming important to devise ways to read RFID tags in real time. Access points (APs) of IEEE 802.11-based wireless Local Area Networks (LANs) are being integrated with RFID networks so that real-time RFID data can be collected efficiently. Several schemes, such as multipolling methods based on the dynamic search algorithm and random sequencing, have been proposed. However, as the number of RFID readers associated with an AP increases, it becomes difficult for the dynamic search algorithm to derive the multipolling sequence in real time. Though multipolling methods can eliminate the polling overhead, the performance of multipolling methods based on random sequencing still needs to be enhanced. To that end, we propose a real-time cluster-based multipolling sequencing algorithm that eliminates more than 90% of the polling overhead, particularly when the dynamic search algorithm fails to derive the multipolling sequence in real time.

  15. A stochastic simulation method for the assessment of resistive random access memory retention reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw

    This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO2 device with a double-layer ZnO/ZrO2 one, and obtains results which are in good agreement with experimental data.

  16. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for determining breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms use signal averaging, weighting and personalization to reduce estimation errors. Evaluation was performed using data from a previously conducted human study. It is concluded that these features in combination significantly reduce the random error compared to the signal averaging algorithm alone.
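
The CO2 compensation idea above comes down to one line of arithmetic: scale the measured ethanol level by the ratio of a nominal end-expiratory CO2 fraction to the CO2 actually measured in the diluted sample. The 4.2% reference value is an assumed placeholder, not a figure from the paper:

```python
def estimate_brac(ethanol_signal, co2_signal, co2_alveolar=4.2):
    """Compensate a diluted breath sample: multiply the measured
    ethanol concentration by the ratio of a nominal end-expiratory
    CO2 fraction (4.2% by volume, an assumed reference value) to the
    CO2 fraction measured in the diluted sample."""
    dilution_factor = co2_alveolar / co2_signal
    return ethanol_signal * dilution_factor
```

A sample showing half the nominal CO2 is treated as diluted two-fold, so its ethanol reading is doubled; signal averaging, weighting and personalization would then refine this basic estimate.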

  17. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach

    PubMed Central

    Tan, Robin; Perkowski, Marek

    2017-01-01

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is demanded. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy of random forest alone and the 96.31% accuracy of the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems. PMID:28230745
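
A hypothetical sketch of the two-stage idea above: accept or reject when the random-forest probability is decisive, and fall back to the wavelet distance measure only in the uncertain band. All thresholds are invented for illustration and are not the paper's values:

```python
def two_stage_verify(rf_prob, wavelet_dist,
                     accept=0.9, reject=0.1, dist_max=0.5):
    """Stage one: trust the random-forest probability when it is
    decisive. Stage two: resolve the uncertain middle band with a
    wavelet distance test (smaller distance means a better match)."""
    if rf_prob >= accept:
        return True
    if rf_prob <= reject:
        return False
    return wavelet_dist <= dist_max
```

Combining the two classifiers this way lets the cheap probabilistic stage handle clear cases while the distance measure arbitrates only the ambiguous ones, which is the general shape of a probabilistic-threshold cascade.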

  18. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    PubMed

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is demanded. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy of random forest alone and the 96.31% accuracy of the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.

  19. Crowdsourcing reproducible seizure forecasting in human and canine epilepsy

    PubMed Central

    Wagenaar, Joost; Abbot, Drew; Adkins, Phillip; Bosshard, Simone C.; Chen, Min; Tieng, Quang M.; He, Jialune; Muñoz-Almaraz, F. J.; Botella-Rocamora, Paloma; Pardo, Juan; Zamora-Martinez, Francisco; Hills, Michael; Wu, Wei; Korshunova, Iryna; Cukierski, Will; Vite, Charles; Patterson, Edward E.; Litt, Brian; Worrell, Gregory A.

    2016-01-01

    See Mormann and Andrzejak (doi:10.1093/brain/aww091) for a scientific commentary on this article. Accurate forecasting of epileptic seizures has the potential to transform clinical epilepsy care. However, progress toward reliable seizure forecasting has been hampered by lack of open access to long duration recordings with an adequate number of seizures for investigators to rigorously compare algorithms and results. A seizure forecasting competition was conducted on kaggle.com using open access chronic ambulatory intracranial electroencephalography from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide bandwidth intracranial electroencephalographic monitoring. Data were provided to participants as 10-min interictal and preictal clips, with approximately half of the 60 GB data bundle labelled (interictal/preictal) for algorithm training and half unlabelled for evaluation. The contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data, and a randomly selected 40% of data segments were scored and results broadcast on a public leaderboard. The contest ran from August to November 2014, and 654 participants submitted 17,856 classifications of the unlabelled test data. The top performing entry scored 0.84 area under the classification curve. Following the contest, additional held-out unlabelled data clips were provided to the top 10 participants and they submitted classifications for the new unseen data. The resulting areas under the classification curves were well above chance forecasting, but did show a mean 6.54 ± 2.45% (min, max: 0.30, 20.2) decline in performance. The kaggle.com model using open access data and algorithms generated reproducible research that advanced seizure forecasting. The overall performance from multiple contestants on unseen data was better than a random predictor, demonstrating the feasibility of seizure forecasting in canine and human epilepsy. PMID:27034258

  20. Kanerva's sparse distributed memory: An associative memory algorithm well-suited to the Connection Machine

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1988-01-01

    The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.

  1. Parsing Citations in Biomedical Articles Using Conditional Random Fields

    PubMed Central

    Zhang, Qing; Cao, Yong-Gang; Yu, Hong

    2011-01-01

    Citations are used ubiquitously in biomedical full-text articles and play an important role in representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems will significantly benefit from a tool that automatically extracts the content of a citation. In this study, we applied the supervised machine-learning algorithm Conditional Random Fields (CRFs) to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Year). With a subset of HTML-format open-access PubMed Central articles, we report an overall 97.95% F1-score. The citation parser can be accessed at: http://www.cs.uwm.edu/~qing/projects/cithit/index.html. PMID:21419403

  2. A tensor network approach to many-body localization

    NASA Astrophysics Data System (ADS)

    Yu, Xiongjie; Pekker, David; Clark, Bryan

    Understanding the many-body localized phase requires access to eigenstates in the middle of the many-body spectrum. While exact diagonalization is able to access these eigenstates, it is restricted to system sizes of about 22 spins. To overcome this limitation, we develop tensor network algorithms that increase the accessible system size by an order of magnitude. We describe both our new algorithms and the additional physics about MBL we can extract from them. For example, we demonstrate the power of these methods by verifying the breakdown of the Eigenstate Thermalization Hypothesis (ETH) in the many-body localized phase of the random field Heisenberg model, showing the saturation of entanglement in the MBL phase, and generating eigenstates that differ by local excitations. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.

  3. Android Malware Classification Using K-Means Clustering Algorithm

    NASA Astrophysics Data System (ADS)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to or damage a computer system without the user's knowledge; attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, Virus Total and Malgenome, were selected to demonstrate the K-Means clustering algorithm. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each type of dataset, such as Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and WEKA tools to evaluate the built clusters. The proposed K-Means clustering algorithm shows promising results, with high accuracy when tested using the Random Forest algorithm.
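
The clustering step above is standard k-means; a self-contained sketch on toy 2-D feature vectors (the points below are placeholders, not the Virus Total/Malgenome features):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on small feature vectors: assign each point to
    its nearest centroid, recompute centroids as cluster means, and
    repeat until the centroids stop moving."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # initial centroids from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        new = [tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl
               else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:                # converged
            break
        centroids = new
    return centroids, clusters
```

In the paper's setting each point would be a nine-feature vector per app and k = 3 (ransomware, scareware, goodware); a separate supervised classifier such as Random Forest then checks the cluster quality.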

  4. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
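
The sorting structure described above can be emulated in software, with a binary heap standing in for the systolic priority queue hardware; `PriorityStack` is an illustrative name, not the authors':

```python
import heapq

class PriorityStack:
    """Software stand-in for a systolic priority queue: the stack
    sequential-decoding algorithm always extends the best-metric path,
    so the queue must deliver the maximum-metric entry first (emulated
    here with a binary heap rather than systolic hardware)."""

    def __init__(self):
        self._heap = []

    def push(self, metric, path):
        heapq.heappush(self._heap, (-metric, path))  # negate: max first

    def pop_best(self):
        neg_metric, path = heapq.heappop(self._heap)
        return -neg_metric, path
```

A software heap costs O(log n) per operation, whereas the systolic schemes in the paper sustain one insert or extract per clock cycle, which is what makes the decoding speed constant.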

  5. Design of redundant array of independent DVD libraries based on iSCSI

    NASA Astrophysics Data System (ADS)

    Chen, Yupeng; Pan, Longfa

    2003-04-01

    This paper presents a new approach to realizing a redundant array of independent DVD libraries (RAID-LoIP) using iSCSI technology and traditional RAID algorithms. Our design achieves high-performance optical storage with the following features: large storage size, high access rate, random access, long distances between DVD libraries, block I/O storage, and long storage life. Our RAID-LoIP system can be a good solution for broadcasting media asset storage systems.

  6. Crowdsourcing reproducible seizure forecasting in human and canine epilepsy.

    PubMed

    Brinkmann, Benjamin H; Wagenaar, Joost; Abbot, Drew; Adkins, Phillip; Bosshard, Simone C; Chen, Min; Tieng, Quang M; He, Jialune; Muñoz-Almaraz, F J; Botella-Rocamora, Paloma; Pardo, Juan; Zamora-Martinez, Francisco; Hills, Michael; Wu, Wei; Korshunova, Iryna; Cukierski, Will; Vite, Charles; Patterson, Edward E; Litt, Brian; Worrell, Gregory A

    2016-06-01

    See Mormann and Andrzejak (doi:10.1093/brain/aww091) for a scientific commentary on this article. Accurate forecasting of epileptic seizures has the potential to transform clinical epilepsy care. However, progress toward reliable seizure forecasting has been hampered by lack of open access to long duration recordings with an adequate number of seizures for investigators to rigorously compare algorithms and results. A seizure forecasting competition was conducted on kaggle.com using open access chronic ambulatory intracranial electroencephalography from five canines with naturally occurring epilepsy and two humans undergoing prolonged wide bandwidth intracranial electroencephalographic monitoring. Data were provided to participants as 10-min interictal and preictal clips, with approximately half of the 60 GB data bundle labelled (interictal/preictal) for algorithm training and half unlabelled for evaluation. The contestants developed custom algorithms and uploaded their classifications (interictal/preictal) for the unknown testing data, and a randomly selected 40% of data segments were scored and results broadcast on a public leaderboard. The contest ran from August to November 2014, and 654 participants submitted 17,856 classifications of the unlabelled test data. The top performing entry scored 0.84 area under the classification curve. Following the contest, additional held-out unlabelled data clips were provided to the top 10 participants and they submitted classifications for the new unseen data. The resulting areas under the classification curves were well above chance forecasting, but did show a mean 6.54 ± 2.45% (min, max: 0.30, 20.2) decline in performance. The kaggle.com model using open access data and algorithms generated reproducible research that advanced seizure forecasting.
The overall performance from multiple contestants on unseen data was better than a random predictor, demonstrating the feasibility of seizure forecasting in canine and human epilepsy. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain.

  7. Online Bagging and Boosting

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2005-01-01

    Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by presenting some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time.
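
The single-pass property described above rests on a simple trick: online bagging replaces bootstrap resampling with per-example Poisson(1) weights, so each base model trains on every incoming example a small random number of times. A minimal sketch of that weighting step:

```python
import math
import random

def poisson1(rng):
    """Draw k ~ Poisson(lambda = 1) using Knuth's multiplication method."""
    limit = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def online_bag_weights(num_models, rng):
    """Per-example training weights for online bagging: base model m
    trains on the incoming example k_m times, with k_m ~ Poisson(1).
    In the limit of a long data stream this matches the example
    multiplicities of a bootstrap sample, with no need to store or
    revisit the data."""
    return [poisson1(rng) for _ in range(num_models)]
```

Because a bootstrap sample of n items contains each item Binomial(n, 1/n) ≈ Poisson(1) times for large n, the online ensemble converges toward the batch-bagged one.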

  8. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining is characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms do not take both characteristics into account. Addressing these traits and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent concept and search-space partition theory; the second task is to structure frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.

  9. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms, widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization to locate code hotspots; afterward, a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughputs of the complete designs are shown. This manuscript outlines a low-cost system, mapped using very-large-scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination of on-chip memory and SDRAM for the Nios II processor.

  10. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data at large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  11. An efficient parallel-processing method for transposing large matrices in place.

    PubMed

    Portnoff, M R

    1999-01-01

    We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
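
    The blocked access pattern the abstract describes can be sketched for the simpler square-matrix case: swaps are performed tile by tile so that each swap touches only two small blocks, which is the cache-friendly behavior the paper exploits (the paper's full method also handles rectangular matrices in place, which this simple pairwise swap cannot):

```python
def transpose_square_in_place(a, n, tile=64):
    """In-place transpose of an n x n matrix stored row-major in flat list `a`.

    Off-diagonal tiles (bi, bj) with bj > bi are swapped with their mirror
    tiles; diagonal tiles swap only above their own diagonal, so every
    element pair is exchanged exactly once."""
    for bi in range(0, n, tile):
        for bj in range(bi, n, tile):
            for i in range(bi, min(bi + tile, n)):
                # On diagonal tiles, only swap strictly above the diagonal.
                start = i + 1 if bi == bj else bj
                for j in range(start, min(bj + tile, n)):
                    a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

n = 5
a = list(range(n * n))
transpose_square_in_place(a, n, tile=2)
```

    Because each tile pair is independent of every other, the outer two loops are the natural unit of work to hand to separate processors, which is the parallelism the paper describes.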

  12. Integration of spectral, spatial and morphometric data into lithological mapping: A comparison of different Machine Learning Algorithms in the Kurdistan Region, NE Iraq

    NASA Astrophysics Data System (ADS)

    Othman, Arsalan A.; Gloaguen, Richard

    2017-09-01

    Lithological mapping in mountainous regions is often impeded by limited accessibility due to relief. This study aims to evaluate (1) the performance of different supervised classification approaches using remote sensing data and (2) the use of additional information such as geomorphology. We exemplify the methodology in the Bardi-Zard area in NE Iraq, a part of the Zagros Fold-Thrust Belt, known for its chromite deposits. We highlight the improvement of remote sensing geological classification by integrating geomorphic features and spatial information into the classification scheme. We performed a Maximum Likelihood (ML) classification method alongside two Machine Learning Algorithms (MLA), Support Vector Machine (SVM) and Random Forest (RF), to allow the joint use of geomorphic features, Band Ratio (BR), Principal Component Analysis (PCA), spatial information (spatial coordinates) and multispectral data of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite. The RF algorithm showed reliable results and discriminated serpentinite; talus and terrace deposits; red argillites with conglomerates and limestone; limy conglomerates and limestone conglomerates; tuffites interbedded with basic lavas; limestone and metamorphosed limestone; and reddish green shales. The best overall accuracy (∼80%) was achieved by the Random Forest (RF) algorithm in the majority of the sixteen tested combination datasets.

  13. Time Delay Measurements of Key Generation Process on Smart Cards

    DTIC Science & Technology

    2015-03-01

    random number generator is available (Chatterjee & Gupta, 2009). The ECC algorithm will grow in usage as information becomes more and more secure. Figure...Worldwide Mobile Enterprise Security Software 2012–2016 Forecast and Analysis), mobile identity and access management is expected to grow by 27.6 percent...iPad, tablets) as well as 80000 BlackBerry phones. The mobility plan itself will be deployed in three phases over 2014, with the first phase

  14. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware enters consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. Such camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either the visible characteristics of the picture (such as dynamic range or focus) or its artistic aspects (e.g., by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  15. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
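
    For reference, the interpolation step of plain SMOTE, which CURE-SMOTE refines by first selecting representative points with CURE clustering, can be sketched as follows (pure-Python and illustrative; the CURE step and the RF training are omitted):

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Plain SMOTE: create `n_new` synthetic minority samples by linear
    interpolation between a random minority point and one of its k nearest
    minority neighbors. `minority` is a list of feature tuples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbors of `base` (excluding it) by squared distance.
        neighbors = sorted(
            (p for p in minority if p != base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)))[:k]
        nbr = rng.choice(neighbors)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, nbr)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.2)]
new_points = smote(minority, n_new=10)
```

    Every synthetic point lies on a segment between two real minority points, which is why SMOTE can introduce noise near class boundaries; CURE-SMOTE's contribution is to interpolate only from representative points.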

  16. Demonstration of Qubit Operations Below a Rigorous Fault Tolerance Threshold With Gate Set Tomography (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2017-02-15

    Maunz2 Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone...information processors have been demonstrated experimentally using superconducting circuits1–3, electrons in semiconductors4–6, trapped atoms and...qubit quantum information processor has been realized14, and single- qubit gates have demonstrated randomized benchmarking (RB) infidelities as low as 10

  17. Infrared fiber coupled acousto-optic tunable filter spectrometer

    NASA Technical Reports Server (NTRS)

    Levin, K. H.; Kindler, E.; Ko, T.; Lee, F.; Tran, D. C.; Tapphorn, R. M.

    1990-01-01

    A spectrometer design is introduced which combines an acousto-optic tunable filter (AOTF) and IR-transmitting fluoride-glass fibers. The AOTF crystal is fabricated from TeO2 and permits random access to any wavelength in less than 50 microseconds; the resulting spectrometer is tested for the remote analysis of gases and hydrocarbons. The AOTF spectrometer, when operated with a high-speed frequency synthesizer and optimized algorithms, permits accurate high-speed spectroscopy in the mid-IR spectral region.

  18. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual routers enable the coexistence of different networks on the same physical facility and have lately attracted a great deal of attention from researchers. As the number of IPv6 addresses in virtual routers is rapidly increasing, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. Both the average and worst-case time complexity of WBT's lookup and update operations are O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT reduces Static Random Access Memory (SRAM) cost by more than 80% in comparison to separation schemes. WBT also achieves the smallest average search depth compared with other algorithms of its class.
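
    The longest-prefix lookup that such structures accelerate can be illustrated with a plain binary trie (this sketch shows only the lookup semantics; it does not reproduce WBT's weight-balanced structure, FIB merging, or prefix-value keying):

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = {}   # bit character -> TrieNode
        self.next_hop = None

def insert(root, prefix_bits, next_hop):
    """Insert a route given its prefix as a bit string."""
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Walk the trie along the address bits, remembering the last next hop
    seen: that is the longest matching prefix."""
    node, best = root, None
    for bit in addr_bits:
        node = node.children.get(bit)
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "001", "A")      # shorter, less specific route
insert(root, "00110", "B")    # longer, more specific route
hop = lookup(root, "0011010")
```

    The address "0011010" matches both stored prefixes, and the walk correctly returns the next hop of the longer one. A trie's depth grows with prefix length, which is precisely what tree-based schemes such as WBT bound at O(log N).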

  19. GTRAC: fast retrieval from compressed collections of genomic variants

    PubMed Central

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-01-01

    Motivation: The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of databases lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. Results: We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. Availability and Implementation: The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC Contact: kedart@stanford.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27587665

  20. GTRAC: fast retrieval from compressed collections of genomic variants.

    PubMed

    Tatwawadi, Kedar; Hernaez, Mikel; Ochoa, Idoia; Weissman, Tsachy

    2016-09-01

    The dramatic decrease in the cost of sequencing has resulted in the generation of huge amounts of genomic data, as evidenced by projects such as the UK10K and the Million Veteran Project, with the number of sequenced genomes ranging in the order of 10 K to 1 M. Due to the large redundancies among genomic sequences of individuals from the same species, most of the medical research deals with the variants in the sequences as compared with a reference sequence, rather than with the complete genomic sequences. Consequently, millions of genomes represented as variants are stored in databases. These databases are constantly updated and queried to extract information such as the common variants among individuals or groups of individuals. Previous algorithms for compression of this type of databases lack efficient random access capabilities, rendering querying the database for particular variants and/or individuals extremely inefficient, to the point where compression is often relinquished altogether. We present a new algorithm for this task, called GTRAC, that achieves significant compression ratios while allowing fast random access over the compressed database. For example, GTRAC is able to compress a Homo sapiens dataset containing 1092 samples in 1.1 GB (compression ratio of 160), while allowing for decompression of specific samples in less than a second and decompression of specific variants in 17 ms. GTRAC uses and adapts techniques from information theory, such as a specialized Lempel-Ziv compressor, and tailored succinct data structures. The GTRAC algorithm is available for download at: https://github.com/kedartatwawadi/GTRAC CONTACT: kedart@stanford.edu Supplementary data are available at Bioinformatics online.

  1. Mycology at a distance.

    PubMed

    Koneman, Elmer; Gade, Wayne

    2002-01-01

    GermWare Mycology is an image-rich, CD-ROM-based instruction program divided into tutorial and reference programs. The tutorial program, designed for new students, provides only for sequential progress through each of the subject modules, so that each page of information is seen. In contrast, the reference program provides the more experienced learner with random, direct access to each facet of information. The aspergilli, the agents of chromomycosis, the dermatophytes, the dimorphic fungi, the hyaline molds, the dematiaceous molds, the yeasts, and the zygomycetes are divided into separate modules. The tutorial program also includes an opening 'isolation procedures' module, in which details of specimen collection, culture media, and microscopic techniques are presented. The random access program includes system maps separating out each of the fungal species, and flow diagrams allowing an algorithmic approach to species identification. A global map is also included through which each fungal species can be directly accessed by a simple click of the mouse. Random access to information on the ecology, clinical presentations, pathology and therapy of the various mycotic diseases is also a feature of the reference program. A series of self-assessment exercises is included at the end of each module, with immediate 'pop-up' feedback to both correct and incorrect answers. The entire program includes over 2500 screens and over 700 color images and diagrams. GermWare Mycology is available through the Colorado Association for Continuing Medical Laboratory Education (CACMLE), which can also provide continuing education credits for individuals who complete a separate examination. For more information contact CACMLE at (303) 321-1734 or info@cacmle.org.

  2. Template Based Low Data Rate Speech Encoder

    DTIC Science & Technology

    1993-09-30

    Nasality Distinguishes In/ from d/ 95.6 96.9 1m/ from /b/, etc. Sustention Distinguishes /f/ from /p/, $7.5 88.3 ibi from N/, Al from /0 8. etc. Sibilation...processor performs mainly Processor Workstation input/output (I/O) operations. The dynamic random access memory (DRAM) has 16 million bytes of...storage capacity. To execute the 800-b/s voice algorithm, the following amount of memory is needed: 5 MB for tables, 1.5 MB for it "program, and 30 KB for

  3. Hidden long evolutionary memory in a model biochemical network

    NASA Astrophysics Data System (ADS)

    Ali, Md. Zulfikar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-04-01

    We introduce a minimal model for the evolution of functional protein-interaction networks using a sequence-based mutational algorithm, and apply the model to study neutral drift in networks that yield oscillatory dynamics. Starting with a functional core module, random evolutionary drift increases network complexity even in the absence of specific selective pressures. Surprisingly, we uncover a hidden order in sequence space that gives rise to long-term evolutionary memory, implying strong constraints on network evolution due to the topology of accessible sequence space.

  4. Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency.

    PubMed

    Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E

    2018-03-01

    We assessed the validity of physician billing codes and hospital admission using International Classification of Diseases 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates for failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and specificity was 89% (87%, 90%). The algorithm capturing arteriovenous access placement and catheter insertion had a positive predictive value greater than 90% and arteriovenous access surgical revisions had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. Low positive predictive values for surgical revisions algorithm suggest that administrative data should only be used to rule out the occurrence of an event.

  5. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.

    PubMed

    Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza

    2012-05-01

    Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task that engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention, and various techniques have been applied. Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. MRF seeks a label field that minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to approach the minimum solution, at a heavy computational cost; for this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, combining ant colony optimization (ACO) with a Gossiping algorithm, that can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find a better path using neighborhood information, and this interaction causes the algorithm to converge to an optimal solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in both speed and accuracy.

  6. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
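
    A basic two-matrix randomized DMD can be sketched with NumPy as follows (this is the multi-pass variant for illustration; the paper's single-pass algorithm avoids revisiting the data, and the synthetic test system below is my own):

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=5, seed=0):
    """Randomized DMD: compress the snapshot matrices with a random sketch,
    then run exact DMD on the small projected matrices.
    X holds snapshots x_0..x_{m-1}, Y the shifted snapshots x_1..x_m."""
    rng = np.random.default_rng(seed)
    # Random range finder: Q approximately spans the column space of X.
    omega = rng.standard_normal((X.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(X @ omega)
    # Project both snapshot matrices into the low-dimensional space.
    Xc, Yc = Q.T @ X, Q.T @ Y
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    # Reduced operator ~ U* A U in the sketched coordinates.
    A_tilde = U.T @ Yc @ Vt.T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Q @ (U @ W)  # lift DMD modes back to the full space
    return eigvals, modes

# Synthetic rank-2 linear dynamics with known eigenvalues 0.9 and 0.5.
rng = np.random.default_rng(1)
P = rng.standard_normal((50, 2))       # spatial modes
lam = np.array([0.9, 0.5])             # true dynamic eigenvalues
z = np.ones(2)
snaps = []
for _ in range(30):
    snaps.append(P @ z)
    z = lam * z
data = np.stack(snaps, axis=1)
X, Y = data[:, :-1], data[:, 1:]
eigvals, modes = randomized_dmd(X, Y, rank=2)
```

    Because the sketch captures the low-dimensional column space of the data, the eigenvalues recovered from the small projected matrices match the true dynamics, exactly the oversampling-controlled approximation the abstract describes.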

  7. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
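
    A stripped-down cousin of these predictors is a plain two-state Markov chain fitted to an observed occupancy sequence; it shows the prediction task in miniature (the paper's HMM, MLP and RNN models are more expressive, and the simulated trace below is my own):

```python
def fit_markov(states):
    """Estimate transition probabilities of a two-state (0 = idle, 1 = busy)
    Markov chain from an observed occupancy sequence, with Laplace
    smoothing so unseen transitions keep nonzero probability."""
    counts = [[1, 1], [1, 1]]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def predict_next(trans, current):
    """Predict the most likely next channel state given the current one."""
    return max((0, 1), key=lambda s: trans[current][s])

# Simulated occupancy: long busy bursts separated by long idle gaps,
# a crude stand-in for an alternating renewal process.
history = ([0] * 8 + [1] * 4) * 20
trans = fit_markov(history)
nxt = predict_next(trans, 1)
```

    With bursty traffic, both states are "sticky" (the chain predicts the channel stays in its current state), which is the baseline any HMM or neural predictor must beat.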

  8. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
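
    The idea behind such indexed random access, recording byte offsets so a reader can seek straight to record i instead of parsing from the top, can be sketched generically on a toy record file (this illustrates only the concept; OpenMS's actual mzML handling uses the indexedmzML offset list and a full XML parser):

```python
import os
import tempfile

def build_index(path, tag=b"<spectrum "):
    """Scan the file once, recording the byte offset where each record
    starts, so later reads can seek directly to record i."""
    offsets = []
    with open(path, "rb") as f:
        while True:
            offset = f.tell()
            line = f.readline()
            if not line:
                break
            if tag in line:
                offsets.append(offset)
    return offsets

def read_record(path, offsets, i, end_tag=b"</spectrum>"):
    """Random access: seek straight to record i and read until its
    closing tag, never touching the rest of the file."""
    chunk = []
    with open(path, "rb") as f:
        f.seek(offsets[i])
        while True:
            line = f.readline()
            chunk.append(line)
            if not line or end_tag in line:
                break
    return b"".join(chunk)

# Write a toy file of 100 numbered records and fetch the third one directly.
records = "".join(
    f'<spectrum id="{k}">\npeaks {k}\n</spectrum>\n' for k in range(100))
with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as tmp:
    tmp.write(records)
    path = tmp.name
offsets = build_index(path)
third = read_record(path, offsets, 2)
os.unlink(path)
```

    The index costs one sequential pass to build, after which each record is an O(1) seek plus a short read, which is what keeps memory use constant even for files far larger than RAM.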

  9. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry

    PubMed Central

    Röst, Hannes L.; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    Motivation In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Results Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Availability Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS. PMID:25927999

  10. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data from their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms, and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
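
    The two ingredients the abstract names, an access-correlation matrix mined from log sessions and a heuristic placement built on it, can be sketched as follows (a toy greedy heuristic of my own for illustration; the paper's algorithm differs in detail):

```python
from collections import defaultdict

def access_correlation(sessions, n_items):
    """Count how often each pair of items is requested in the same session."""
    corr = [[0] * n_items for _ in range(n_items)]
    for session in sessions:
        items = sorted(set(session))
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                corr[a][b] += 1
                corr[b][a] += 1
    return corr

def greedy_placement(corr, n_nodes):
    """Assign each item to the node where it is least correlated with the
    items already stored there, so co-accessed items land on different
    nodes and can be fetched in parallel."""
    n_items = len(corr)
    # Place the most contended items first.
    order = sorted(range(n_items), key=lambda i: -sum(corr[i]))
    nodes = defaultdict(list)
    placement = {}
    for item in order:
        best = min(range(n_nodes),
                   key=lambda nd: (sum(corr[item][j] for j in nodes[nd]),
                                   len(nodes[nd])))
        nodes[best].append(item)
        placement[item] = best
    return placement

# Toy log: tiles 0 and 1 are usually viewed together, as are tiles 2 and 3.
sessions = [[0, 1], [0, 1], [0, 1], [2, 3], [2, 3], [0, 1, 2]]
corr = access_correlation(sessions, 4)
placement = greedy_placement(corr, n_nodes=2)
```

    On this log, the heuristic separates each strongly co-accessed pair across the two nodes, so a typical session's blocks can be served by parallel I/O.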

  11. Radiation effects in reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Quinn, Heather

    2017-04-01

    Field-programmable gate arrays (FPGAs) are co-processing hardware used in image and signal processing. FPGAs are programmed with custom implementations of an algorithm; these highly parallel hardware designs are faster than software implementations. This flexibility and speed have made FPGAs attractive for many space programs that need in situ, high-speed signal processing for data categorization and data compression. Most commercial FPGAs are affected by the space radiation environment, though. Problems with total ionizing dose (TID) have restricted the use of flash-based FPGAs, and static random access memory based FPGAs must be mitigated to suppress errors from single-event upsets. This paper provides a review of radiation effects issues in reconfigurable FPGAs and discusses methods for mitigating these problems. With careful design it is possible to use these components effectively and resiliently.

  12. Systems aspects of COBE science data compression

    NASA Technical Reports Server (NTRS)

    Freedman, I.; Boggess, E.; Seiler, E.

    1993-01-01

    A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific use and software testing. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.

  13. Adaptive random walks on the class of Web graphs

    NASA Astrophysics Data System (ADS)

    Tadić, B.

    2001-09-01

    We study random walks with adaptive move strategies on a class of directed graphs with a variable wiring diagram. The graphs are grown from evolution rules compatible with the dynamics of the world-wide Web [B. Tadić, Physica A 293, 273 (2001)], and are characterized by a pair of power-law distributions of out- and in-degree for each value of the parameter β, which measures the degree of rewiring in the graph. The walker adapts its move strategy according to locally available information on both the out-degree of the visited node and the in-degree of the target node. A standard random walk, on the other hand, uses the out-degree only. We compute the distribution of connected subgraphs visited by an ensemble of walkers, the average access time and the survival probability of the walks. We discuss these properties of the walk dynamics relative to the changes in the global graph structure when the control parameter β is varied. For β ≥ 3, corresponding to the world-wide Web, the access time of the walk to a given level of hierarchy on the graph is much shorter than that of the standard random walk on the same graph. By reducing the amount of rewiring towards the rigidity limit β → β_c ≲ 0.1, corresponding to the range of naturally occurring biochemical networks, the survival probabilities of the adaptive and standard random walks become increasingly similar. The adaptive random walk can be used as an efficient message-passing algorithm on this class of graphs for large degrees of rewiring.
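The contrast between the two move strategies can be illustrated on a toy directed graph. The step functions below are a hedged sketch: the paper's walkers use locally available degree information, but the exact in-degree weighting used here is an assumption.

```python
import random

def standard_step(graph, node, rng):
    # standard random walk: uniform over out-neighbours (out-degree only)
    return rng.choice(graph[node])

def adaptive_step(graph, node, rng):
    # adaptive walk: weight each out-neighbour by its in-degree, i.e. use
    # information about the target nodes as well as the visited node
    in_deg = {v: 0 for v in graph}
    for u in graph:
        for v in graph[u]:
            in_deg[v] += 1
    neighbours = graph[node]
    weights = [in_deg[v] for v in neighbours]
    return rng.choices(neighbours, weights=weights, k=1)[0]

# toy graph: "hub" has in-degree 3, "leaf" has in-degree 1
graph = {"s": ["hub", "leaf"], "x": ["hub"], "y": ["hub"],
         "hub": ["s"], "leaf": ["s"]}
rng = random.Random(0)
std_hits = sum(standard_step(graph, "s", rng) == "hub" for _ in range(2000))
rng = random.Random(0)
adp_hits = sum(adaptive_step(graph, "s", rng) == "hub" for _ in range(2000))
```

Over many draws, the adaptive rule funnels the walker towards high in-degree nodes (here roughly 75% versus 50% of steps from "s"), which is the mechanism behind its shorter access times on scale-free graphs.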

  14. Implementation of real-time digital signal processing systems

    NASA Technical Reports Server (NTRS)

    Narasimha, M.; Peterson, A.; Narayan, S.

    1978-01-01

    Special-purpose hardware implementation of DFT computers and digital filters is considered in the light of newly introduced algorithms and IC devices. Recent work by Winograd on high-speed convolution techniques for computing short-length DFTs has motivated the development of algorithms that are more efficient than the FFT for evaluating the transform of longer sequences. Among these, prime factor algorithms appear suitable for special-purpose hardware implementation. Architectural considerations in designing DFT computers based on these algorithms are discussed. With the availability of monolithic multiplier-accumulators, a direct implementation of IIR and FIR filters, using random-access memories in place of shift registers, appears attractive. The memory-addressing scheme involved in such implementations is discussed, and a simple counter setup to address the data memory in the realization of FIR filters is described. The combination of a set of simple filters (a weighting network) and a DFT computer is shown to realize a bank of uniform bandpass filters. The usefulness of this concept in arriving at a modular design for a million-channel spectrum analyzer based on microprocessors is discussed.

  15. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cellophane tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask plus a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forest machine learning algorithms.

  16. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X.

    PubMed

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A; Sondhi, S L

    2017-12-13

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than those accessible to direct many-body exact diagonalization in the spin variables. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. © 2017 The Author(s).

  17. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.

    2017-10-01

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than those accessible to direct many-body exact diagonalization in the spin variables. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  18. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
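The baseline the authors start from, a walk with uniformly random directions and a fixed move length, is easy to reproduce. The sketch below implements only that reference walker; the paper's adaptive agent, which alters its direction rule from experience, is not reproduced here.

```python
import math
import random

def simple_random_walk(n_steps, step_len=1.0, seed=0):
    """Fixed-length steps in uniformly random directions: the baseline
    walker against which the adaptive agent is compared."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random direction each step
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        path.append((x, y))
    return path

path = simple_random_walk(1000)
```

A plain walk like this diffuses normally (mean squared displacement growing linearly in time); the paper's point is that changing the direction rule based on the agent's own moving experience yields super-diffusion even though every step has the same length.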

  19. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.

  20. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem, and there are currently no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by it, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a significant improvement over existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.

  1. HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual

    DTIC Science & Technology

    1991-05-01

    algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data set...Description of How DSS Works DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows...balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized
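A pathname-keyed hash table like the one the excerpt describes can be sketched as follows. The class, bin layout, and record format are hypothetical illustrations of the idea, not HECDSS's actual on-disk structure:

```python
class PathnameStore:
    """Toy pathname-hashed record store: each pathname is hashed into a
    fixed set of bins, so any data set can be reached directly instead of
    scanning the file sequentially."""

    def __init__(self, n_bins=64):
        self.bins = [[] for _ in range(n_bins)]

    def _bin(self, pathname):
        return hash(pathname) % len(self.bins)

    def put(self, pathname, record):
        slot = self.bins[self._bin(pathname)]
        for i, (p, _) in enumerate(slot):
            if p == pathname:                 # overwrite an existing data set
                slot[i] = (pathname, record)
                return
        slot.append((pathname, record))       # efficiently add a new data set

    def get(self, pathname):
        for p, r in self.bins[self._bin(pathname)]:
            if p == pathname:
                return r
        return None

store = PathnameStore()
store.put("/BASIN/GAGE1/FLOW/01JAN1990/1DAY/OBS/", [1.2, 3.4])
```

Lookup cost depends only on the bin occupancy, not the file size, which is the "quick access to data sets" property the excerpt attributes to the hash design.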

  2. Equivalent Discrete-Time Channel Modeling for Molecular Communication With Emphasize on an Absorbing Receiver.

    PubMed

    Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam

    2017-01-01

    This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.

  3. Photon-counting intensified random-access charge injection device

    NASA Astrophysics Data System (ADS)

    Norton, Timothy J.; Morrissey, Patrick F.; Haas, Patrick; Payne, Leslie J.; Carbone, Joseph; Kimble, Randy A.

    1999-11-01

    At NASA GSFC we are developing a high resolution solar-blind photon counting detector system for UV space based astronomy. The detector comprises a high gain MCP intensifier fiber- optically coupled to a charge injection device (CID). The detector system utilizes an FPGA based centroiding system to locate the center of photon events from the intensifier to high accuracy. The photon event addresses are passed via a PCI interface with a GPS derived time stamp inserted per frame to an integrating memory. Here we present imaging performance data which show resolution of MCP tube pore structure at an MCP pore diameter of 8 micrometer. This data validates the ICID concept for intensified photon counting readout. We also discuss correction techniques used in the removal of fixed pattern noise effects inherent in the centroiding algorithms used and present data which shows the local dynamic range of the device. Progress towards development of a true random access CID (RACID 810) is also discussed and astronomical data taken with the ICID detector system demonstrating the photon event time-tagging mode of the system is also presented.

  4. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
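For small networks, the attractors ADAM reports can also be found by brute force over the full state space. The sketch below enumerates the synchronous dynamics of a toy Boolean network; it illustrates what an attractor is, while ADAM itself avoids this exponential enumeration by using polynomial algebra.

```python
from itertools import product

def find_attractors(update, n):
    """Exhaustively simulate a synchronous Boolean network with n nodes and
    collect its attractors (the cycles of the state transition graph)."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = set()
        s = state
        while s not in seen:      # iterate until the trajectory repeats
            seen.add(s)
            s = update(s)
        cycle = [s]               # s lies on a cycle: walk it once
        t = update(s)
        while t != s:
            cycle.append(t)
            t = update(t)
        attractors.add(frozenset(cycle))
    return attractors

# toy 2-node network x' = y, y' = x: two fixed points and one 2-cycle
att = find_attractors(lambda s: (s[1], s[0]), 2)
```

This exhaustive approach is exactly what becomes infeasible as the number of variables grows, which motivates the algebraic method the abstract describes.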

  5. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), which are designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
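The Monte Carlo randomization idea, reshuffling until the no-immediate-repetition constraint holds, can be sketched directly. The rejection sampler below is an illustration of that technique, not the authors' Excel implementation:

```python
import random

def shuffle_no_repeats(items, rng, max_tries=10000):
    """Monte Carlo approach: reshuffle the item list until no item is
    immediately repeated, then return that ordering."""
    pool = list(items)
    for _ in range(max_tries):
        rng.shuffle(pool)
        if all(a != b for a, b in zip(pool, pool[1:])):
            return pool[:]
    raise RuntimeError("no valid ordering found within max_tries")

rng = random.Random(42)
seq = shuffle_no_repeats(["A"] * 3 + ["B"] * 3 + ["C"] * 2, rng)
```

Note the bias the abstract warns about: rejecting orderings with repeats preserves item frequencies but skews the distribution of transitions between items, which is why the authors turn to transition tables.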

  6. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
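Function (1) of the program, the auto power spectrum of a windowed record, can be sketched as below. A naive DFT stands in for the program's fast Fourier transform, and the choice of a Hann window is an assumption for illustration:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (an FFT would be used in practice)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def power_spectrum(x):
    """Window the record (Hann window assumed here) and return |X_k|^2 / n."""
    n = len(x)
    w = [0.5 - 0.5 * math.cos(2.0 * math.pi * t / (n - 1)) for t in range(n)]
    X = dft([xi * wi for xi, wi in zip(x, w)])
    return [abs(Xk) ** 2 / n for Xk in X]

# a pure tone at bin 4 of a 32-point record should peak at k = 4
sig = [math.sin(2.0 * math.pi * 4 * t / 32) for t in range(32)]
spec = power_spectrum(sig)
```

Cross spectra, correlations, coherence, and transfer functions in the program's list are all built from the same windowed transforms of pairs of channels.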

  7. Distributed Data-aggregation Consensus for Sensor Networks: Relaxation of Consensus Concept and Convergence Property

    DTIC Science & Technology

    2014-08-01

    consensus algorithm called randomized gossip is more suitable [7, 8]. In asynchronous randomized gossip algorithms, pairs of neighboring nodes exchange...messages and perform updates in an asynchronous and unattended manner, and they also... The class of broadcast gossip algorithms [9, 10, 11, 12] are...dynamics [2] and asynchronous pairwise randomized gossip [7, 8], broadcast gossip algorithms do not require that nodes know the identities of their
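The asynchronous pairwise randomized gossip the excerpt refers to can be sketched as follows: at each wake-up, one randomly chosen pair of neighbors averages its values, preserving the network sum while driving every node towards the global average. The graph, round count, and seed below are illustrative.

```python
import random

def randomized_gossip(values, edges, rounds, seed=0):
    """Asynchronous pairwise gossip: each round, a random edge wakes up
    and both endpoints replace their values with the pair average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# 4-node ring with initial values averaging to 6
x = randomized_gossip([0.0, 4.0, 8.0, 12.0],
                      [(0, 1), (1, 2), (2, 3), (3, 0)], rounds=2000)
```

No node ever needs global information, only the identity of the neighbor it averages with, which is the property the excerpt contrasts with broadcast gossip.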

  8. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    PubMed

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.

  9. An improved algorithm of image processing technique for film thickness measurement in a horizontal stratified gas-liquid two-phase flow

    NASA Astrophysics Data System (ADS)

    Kuntoro, Hadiyan Yusuf; Hudaya, Akhmad Zidni; Dinaryanto, Okto; Majid, Akmal Irfan; Deendarlianto

    2016-06-01

    Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as those in the chemical, petroleum and nuclear industries. One of the developing methods and techniques is the image processing technique. This technique is widely used in two-phase flow research because of its non-intrusive capability to process large amounts of visualization data containing many complexities. Moreover, this technique allows the capture of direct visual information about the flow that is difficult to obtain with other methods and techniques. The main objective of this paper is to present an improved algorithm of the image processing technique, developed from a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (hL) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also aimed at developing a high-quality database of stratified flow, which is currently scanty. In the present work, the measurement results showed satisfactory agreement with previous works.

  10. An improved algorithm of image processing technique for film thickness measurement in a horizontal stratified gas-liquid two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuntoro, Hadiyan Yusuf, E-mail: hadiyan.y.kuntoro@mail.ugm.ac.id; Majid, Akmal Irfan; Deendarlianto, E-mail: deendarlianto@ugm.ac.id

    Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as those in the chemical, petroleum and nuclear industries. One of the developing methods and techniques is the image processing technique. This technique is widely used in two-phase flow research because of its non-intrusive capability to process large amounts of visualization data containing many complexities. Moreover, this technique allows the capture of direct visual information about the flow that is difficult to obtain with other methods and techniques. The main objective of this paper is to present an improved algorithm of the image processing technique, developed from a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (hL) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also aimed at developing a high-quality database of stratified flow, which is currently scanty. In the present work, the measurement results showed satisfactory agreement with previous works.

  11. Multimodal biometric approach for cancelable face template generation

    NASA Astrophysics Data System (ADS)

    Paul, Padma Polash; Gavrilova, Marina

    2012-06-01

    Due to the rapid growth of biometric technology, template protection has become crucial to secure the integrity of the biometric security system and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions for securing biometric identification and verification systems. We present a novel technique for a robust cancelable template generation algorithm that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. A proposed algorithm based on multi-fold random projection and a fuzzy communication scheme is used for this purpose. In cancelable template generation, one of the main difficulties is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered using fusion of different feature subsets and projection into a new feature domain. By applying the multimodal technique at the feature level, we enhance the interclass variability and hence improve the performance of the system. We have tested the system with classifier fusion for different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.

  12. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely used RSA public-key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, many deterministic algorithms, such as Euler's algorithm, Kraitchik's algorithm, and variants of Pollard's algorithms, have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with that of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
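Pollard's rho algorithm, the deterministic baseline in this comparison, is compact enough to sketch (the Floyd cycle-detection variant; the hill-climbing competitor is not reproduced here):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection; returns a nontrivial
    factor of composite n, or None on failure (retry with another c)."""
    f = lambda v: (v * v + c) % n   # pseudo-random iteration map
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)                    # tortoise: one step
        y = f(f(y))                 # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None

n = 8051  # = 83 * 97, a small classic example
factor = pollard_rho(n)
```

The expected running time is roughly O(n^(1/4)) iterations, which is why even this simple sketch outpaces generic metaheuristic search on moduli of moderate size.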

  13. RandSpg: An open-source program for generating atomistic crystal structures with specific spacegroups

    NASA Astrophysics Data System (ADS)

    Avery, Patrick; Zurek, Eva

    2017-04-01

    A new algorithm, RANDSPG, that can be used to generate trial crystal structures with specific space groups and compositions is described. The program has been designed for systems where the atoms are independent of one another, and it is therefore primarily suited towards inorganic systems. The structures that are generated adhere to user-defined constraints such as: the lattice shape and size, stoichiometry, set of space groups to be generated, and factors that influence the minimum interatomic separations. In addition, the user can optionally specify if the most general Wyckoff position is to be occupied or constrain select atoms to specific Wyckoff positions. Extensive testing indicates that the algorithm is efficient and reliable. The library is lightweight, portable, dependency-free and is published under a license recognized by the Open Source Initiative. A web interface for the algorithm is publicly accessible at http://xtalopt.openmolecules.net/randSpg/randSpg.html. RANDSPG has also been interfaced with the XTALOPT evolutionary algorithm for crystal structure prediction, and it is illustrated that the use of symmetric lattices in the first generation of randomly created individuals decreases the number of structures that need to be optimized to find the global energy minimum.

  14. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. First, the preprocessing method is used to eliminate the correlation between adjacent pixels. Second, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and the random bit stream are used in the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and is thus an excellent candidate for secure image communication applications.
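A pixel-diffusion stage of this general kind can be sketched as follows: each pixel is XOR-ed with a keystream byte and chained through the previous ciphertext byte. A seeded software PRNG stands in for the physical random bits of the CCSRL system, and the exact chaining rule is an assumption for illustration.

```python
import random

def diffuse(pixels, keystream):
    """Diffusion: XOR each pixel with a keystream byte and with the
    previous ciphertext byte, so a one-pixel change propagates forward."""
    out, prev = [], 0
    for p, k in zip(pixels, keystream):
        c = p ^ k ^ prev
        out.append(c)
        prev = c
    return out

def undiffuse(cipher, keystream):
    """Inverse of diffuse: recover each pixel from the ciphertext chain."""
    out, prev = [], 0
    for c, k in zip(cipher, keystream):
        out.append(c ^ k ^ prev)
        prev = c
    return out

rng = random.Random(7)                      # stand-in for physical random bits
key = [rng.randrange(256) for _ in range(8)]
img = [10, 20, 30, 40, 50, 60, 70, 80]      # toy 8-pixel "image"
enc = diffuse(img, key)
```

In the paper's scheme, a confusion stage driven by the control matrix permutes pixel positions before a diffusion stage like this changes their values.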

  15. 76 FR 73676 - Certain Dynamic Random Access Memory Devices, and Products Containing Same; Receipt of Complaint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... INTERNATIONAL TRADE COMMISSION [DN 2859] Certain Dynamic Random Access Memory Devices, and.... International Trade Commission has received a complaint entitled In Re Certain Dynamic Random Access Memory... certain dynamic random access memory devices, and products containing same. The complaint names Elpida...

  16. 75 FR 16507 - In the Matter of Certain Semiconductor Chips Having Synchronous Dynamic Random Access Memory...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ... Semiconductor Chips Having Synchronous Dynamic Random Access Memory Controllers and Products Containing Same... synchronous dynamic random access memory controllers and products containing same by reason of infringement of... semiconductor chips having synchronous dynamic random access memory controllers and products containing same...

  17. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.

    PubMed

    Yang, Shengxiang

    2008-01-01

    In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised; the random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining population diversity, while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is used as the base for creating immigrants via mutation, which are then inserted into the population. In this way, diversity is not only maintained but maintained more efficiently, adapting genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. A sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
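    The elitism-based immigrants idea can be sketched in a few lines: each generation, mutate the current elite to create immigrants that replace the worst individuals. The sketch below is a simplified illustration on a OneMax fitness, not the paper's exact generational scheme; all parameter values are assumptions.

```python
import random

def onemax(ind):
    # Toy fitness: count of 1-bits.
    return sum(ind)

def mutate(ind, rate, rng):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (rng.random() < rate) for bit in ind]

def evolve(n_bits=20, pop_size=30, gens=40, imm_ratio=0.2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    n_imm = int(pop_size * imm_ratio)
    for _ in range(gens):
        pop.sort(key=onemax, reverse=True)
        elite = pop[0]
        # Elitism-based immigrants: mutate the previous generation's
        # elite to create immigrants that replace the worst individuals.
        immigrants = [mutate(elite, 0.1, rng) for _ in range(n_imm)]
        # Simplified reproduction: keep the elite, fill the rest by
        # mutating individuals drawn from the better half.
        parents = pop[:pop_size // 2]
        children = [mutate(rng.choice(parents), 0.05, rng)
                    for _ in range(pop_size - n_imm - 1)]
        pop = [elite] + children + immigrants
    return max(pop, key=onemax)
```

In a dynamic environment the fitness function would change over time; the immigrants keep diversity near the elite so the population can follow the moving optimum.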

  18. Ad Hoc Access Gateway Selection Algorithm

    NASA Astrophysics Data System (ADS)

    Jie, Liu

    With the continuous development of mobile communication technology, the Ad Hoc access network has become a hot research topic. Ad Hoc access network nodes can be used to extend the capacity and multi-hop communication range of a mobile communication system, even into adjacent communities, improving edge data rates. For a mobile node in an Ad Hoc network to reach the internet, communication with peer nodes on the internet must pass through a gateway. The key issues for Ad Hoc access networks are therefore gateway discovery, gateway selection when multiple gateways are available, and handover between different gateways. Considering both the mobile node and the gateway, we propose an improved gateway selection algorithm based on the average number of hops, the average access time, and the stability of routes. The algorithm shortens the access time of Ad Hoc nodes and preserves the continuity of communication during handover between gateways, improving the quality of communication across the network.

  19. Access Restoration Project Task 1.2 Report 2 (of 2) Algorithms for Debris Volume and Water Depth Computation : Appendix A

    DOT National Transportation Integrated Search

    0000-01-01

    In the Access Restoration Project Task 1.2 Report 1, the algorithms for detecting roadway debris piles and flooded areas were described in detail. Those algorithms take CRS data as input and automatically detect the roadway obstructions. Although the ...

  20. Graphic matching based on shape contexts and reweighted random walks

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun

    2018-04-01

    Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local descriptor, shape contexts, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptors to control the probability matrix of the random walk during iteration. We calculate a bias matrix from the descriptors and use it during iteration to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, and comparisons with other algorithms, confirm that the new method produces excellent results in graphic matching.
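    At its core, the reweighted random walk is a personalized-PageRank-style fixed-point iteration in which a bias vector (here, the role played by the shape-context similarities) reweights the random jumps. A minimal sketch; the matrix, bias, and parameter names are illustrative, not the paper's:

```python
def reweighted_walk(P, bias, alpha=0.8, iters=100):
    """Iterate x <- alpha * P @ x + (1 - alpha) * bias.

    P is a stochastic transition matrix (list of rows) and `bias` is a
    normalized similarity vector that reweights the random jumps.
    Pure-Python, illustrative only.
    """
    n = len(bias)
    x = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [alpha * Px[i] + (1 - alpha) * bias[i] for i in range(n)]
    return x
```

In the matching setting, each entry of `x` scores one candidate correspondence; the final one-to-one assignment is then read off by discretizing this score vector.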

  1. Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems

    PubMed Central

    Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban

    2017-01-01

    An important way to limit malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to restrict access more tightly, but existing approaches have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms, and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes simply to display the set of files accessible to a user on a moderately sized system. In our approach, we make these cubic algorithms linear. We do this by reformulating the set-theoretic approach of the NGAC standard into a graph-theoretic approach and then applying standard graph algorithms. We can thus answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear time graph algorithms. We also provide a default linear time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but is in reality an automatically generated structure, abstracted from the underlying access control graph, that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that provides a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. This may help turn from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
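    In the graph formulation above, the user-privilege-review question ("which files can this user access?") reduces to a breadth-first reachability query that runs in O(V + E) time. A minimal sketch with an invented node-naming convention; the real NGAC graph distinguishes assignment and association edges, which this toy collapses into plain directed edges:

```python
from collections import deque

def accessible(graph, user):
    """BFS from a user node; every reachable 'file:' node is accessible.

    `graph` maps node -> list of neighbour nodes. Runs in O(V + E),
    the linear bound discussed above. Node names are illustrative.
    """
    seen = {user}
    queue = deque([user])
    files = set()
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                if nxt.startswith("file:"):
                    files.add(nxt)
    return files
```

The reverse question (which users can access a file) is the same traversal over the reversed edges.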

  2. Exploring compression techniques for ROOT IO

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The range of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
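    The size-versus-CPU tradeoff can be measured directly with the standard zlib module (DEFLATE, the traditional algorithm mentioned above); a small sketch, with the chosen levels and data purely illustrative:

```python
import time
import zlib

def tradeoff(data, levels=(1, 6, 9)):
    """Compare compressed size and decompression time across DEFLATE
    levels. Returns {level: (compressed_size, decompress_seconds)}."""
    results = {}
    for level in levels:
        blob = zlib.compress(data, level)
        t0 = time.perf_counter()
        assert zlib.decompress(blob) == data  # round-trip sanity check
        results[level] = (len(blob), time.perf_counter() - t0)
    return results
```

Running this on a representative sample of one's own event data is a cheap way to pick a level before committing terabytes to it.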

  3. 76 FR 80964 - Certain Dynamic Random Access Memory Devices, and Products Containing Same; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-27

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-821] Certain Dynamic Random Access Memory... importation, and the sale within the United States after importation of certain dynamic random access memory... certain dynamic random access memory devices, and products containing same that infringe one or more of...

  4. Comparison between different thickness umbrella-shaped expandable radiofrequency electrodes (SuperSlim and CoAccess): Experimental and clinical study

    PubMed Central

    KODA, MASAHIKO; TOKUNAGA, SHIHO; MATONO, TOMOMITSU; SUGIHARA, TAKAAKI; NAGAHARA, TAKAKAZU; MURAWAKI, YOSHIKAZU

    2011-01-01

    The purpose of the present study was to compare the size and configuration of the ablation zones created by SuperSlim and CoAccess electrodes, using various ablation algorithms in ex vivo bovine liver and in clinical cases. In the experimental study, we ablated explanted bovine liver using 2 types of electrodes and 4 ablation algorithms (combinations of incremental power supply, stepwise expansion and additional low-power ablation) and evaluated the ablation area and time. In the clinical study, we compared the ablation volume and the shape of the ablation zone between both electrodes in 23 hepatocellular carcinoma (HCC) cases with the best algorithm (incremental power supply, stepwise expansion and additional low-power ablation) as derived from the experimental study. In the experimental study, the ablation area and time by the CoAccess electrode were significantly greater compared to those by the SuperSlim electrode for the single-step (algorithm 1, p=0.0209 and 0.0325, respectively) and stepwise expansion algorithms (algorithm 2, p=0.0002 and <0.0001, respectively; algorithm 3, p=0.006 and 0.0407, respectively). However, differences were not significant for the additional low-power ablation algorithm. In the clinical study, the ablation volume and time in the CoAccess group were significantly larger and longer, respectively, compared to those in the SuperSlim group (p=0.0242 and 0.009, respectively). Round ablation zones were acquired in 91.7% of the CoAccess group, while irregular ablation zones were obtained in 45.5% of the SuperSlim group (p=0.0428). In conclusion, the CoAccess electrode achieves larger and more uniform ablation zones compared with the SuperSlim electrode, though it requires longer ablation times in experimental and clinical studies. PMID:22977647

  5. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    PubMed Central

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-01-01

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks. PMID:27754380

  6. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    PubMed

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.

  7. What do first-time mothers worry about? A study of usage patterns and content of calls made to a postpartum support telephone hotline.

    PubMed

    Osman, Hibah; Chaaya, Monique; El Zein, Lama; Naassan, Georges; Wick, Livia

    2010-10-15

    Telephone hotlines designed to address common concerns in the early postpartum period could be a useful resource for parents. Our aim was to test the feasibility of using a telephone hotline as an intervention in a randomized controlled trial. We also aimed to test the use of algorithms to address parental concerns through a telephone hotline. Healthy first-time mothers were recruited from the postpartum wards of hospitals throughout Lebanon. Participants were given the number of a 24-hour telephone hotline that they could access for the first four months after delivery. Calls were answered by a midwife using algorithms developed by the study team whenever possible. Callers with medical complaints were referred to their physicians. Call patterns and content were recorded and analyzed. Eighty-four of the 353 women enrolled (24%) used the hotline. Sixty percent of the women who used the service called more than once, and all callers reported they were satisfied with the service. The midwife received an average of three calls per day, and most calls occurred during the first four weeks postpartum. Our algorithms were used to answer questions in 62.8% of calls, and 18.6% of calls required referral to a physician. Of the questions related to mothers, 66% were about breastfeeding. Sixty percent of questions related to the infant were about routine care and 23% were about excessive crying. Utilization of a telephone hotline service for postpartum support is highest in the first four weeks postpartum. Most questions concern breastfeeding, routine newborn care, and management of a fussy infant. It is feasible to test a telephone hotline as an intervention in a randomized controlled trial. Algorithms can be developed to provide standardized answers to the most common questions.

  8. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
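    A creeping random search of the kind described can be sketched in a few lines: perturb the current best estimate with Gaussian steps, accept improvements, and tighten the step size after failures (one common convergence-speeding modification). The details below are illustrative, not the report's exact scheme:

```python
import random

def creeping_random_search(f, x0, sigma0=1.0, shrink=0.98, iters=500, seed=2):
    """Minimize f by Gaussian perturbations of the current best point.

    Accept a candidate only if it improves the objective; on failure,
    shrink the step spread ("creep" toward the best point found).
    """
    rng = random.Random(seed)
    best, fbest, sigma = list(x0), f(x0), sigma0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
        else:
            sigma *= shrink  # tighten the search around the best point
    return best, fbest
```

Because only improvements are accepted, the method needs no gradients and tolerates constraints: an infeasible candidate can simply be scored as +inf and rejected, which mirrors how such searches handle the limits and inequality constraints mentioned above.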

  9. A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales

    NASA Astrophysics Data System (ADS)

    Elliott, Frank W.; Majda, Andrew J.

    1995-03-01

    A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm involves the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.

  10. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
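    In software form, the summing of subrecords reduces to a few lines. The sketch below triggers on upward threshold crossings and averages fixed-length segments; it is a simplified, illustrative version of the scheme above, not the hardware design:

```python
def random_decrement(signal, threshold, length):
    """Average all fixed-length subrecords that start where the signal
    crosses the threshold upward; the average is the randec signature."""
    segments = []
    for i in range(1, len(signal) - length):
        if signal[i - 1] < threshold <= signal[i]:  # upward crossing
            segments.append(signal[i:i + length])
    if not segments:
        return []
    # Column-wise average across all triggered subrecords.
    return [sum(vals) / len(segments) for vals in zip(*segments)]
```

Averaging cancels the zero-mean random content and leaves the free-decay response, from which the damping level can be read off.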

  11. A novel asynchronous access method with binary interfaces

    PubMed Central

    2008-01-01

    Background Traditionally, synchronous access strategies require users to comply with one or more time constraints in order to communicate intent with a binary human-machine interface (e.g., mechanical, gestural or neural switches). Asynchronous access methods are preferable, but have not been used with binary interfaces in the control of devices that require more than two commands to be successfully operated. Methods We present the mathematical development and evaluation of a novel asynchronous access method that may be used to translate sporadic activations of binary interfaces into distinct outcomes for the control of devices requiring an arbitrary number of commands. With this method, users are required to activate their interfaces only when the device under control behaves erroneously. A recursive algorithm, incorporating contextual assumptions relevant to all possible outcomes, is then used to obtain an informed estimate of user intention. We evaluate this method by simulating a control task in which a model user must track a series of target commands. Results When compared to random selection, the proposed asynchronous access method offers a significant reduction in the number of interface activations required from the user. Conclusion This novel access method offers a variety of advantages over traditional synchronous access strategies and may be adapted to a wide variety of contexts, with primary relevance to applications involving direct object manipulation. PMID:18959797

  12. Dynamic fair node spectrum allocation for ad hoc networks using random matrices

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Lemieux, George; Chester, Dave; Sonnenberg, Jerry

    2015-05-01

    Dynamic Spectrum Access (DSA) is widely seen as a solution to the problem of limited spectrum because of its ability to adapt the operating frequency of a radio. Mobile Ad Hoc Networks (MANETs) can extend high-capacity mobile communications over large areas where fixed and tethered-mobile systems are not available. In one use case with high potential impact, cognitive radio employs spectrum sensing to facilitate the identification of allocated frequencies not currently accessed by their primary users. Primary users own the rights to radiate at a specific frequency and geographic location, while secondary users opportunistically attempt to radiate at a specific frequency when the primary user is not using it. We populate a spatial radio environment map (REM) database with known information that can be leveraged in an ad hoc network to facilitate fair path use of the DSA-discovered links. Utilization of high-resolution geospatial data layers in RF propagation analysis is directly applicable. Random matrix theory (RMT) is useful for simulating network layer usage in nodes via a Wishart adjacency matrix. We use the Dijkstra algorithm for discovering ad hoc network node connection patterns. We present a method for analysts to dynamically allocate node-to-node path and link resources using fair division. User allocation of limited resources as a function of time must be dynamic and based on system fairness policies. In this context, fair means that the first request for an available asset is not envied as long as the asset is not yet allocated or tasked, which prevents cycling of the system. This solution may also save money by offering a Pareto-efficient, repeatable process. We use a water-fill queue algorithm that incorporates Shapley value marginal contributions for allocation.
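    The one concrete algorithmic ingredient named above for discovering node connection patterns is Dijkstra's algorithm; a minimal sketch over a weighted adjacency dict (link weights here are illustrative stand-ins for whatever cost the REM assigns):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path costs from `src` over a weighted adjacency dict
    (node -> {neighbour: weight}). Classic lazy-deletion heap version."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

In the scheme above, these path costs would then feed the fair-division step that decides which node-to-node paths each user is allocated.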

  13. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm which integrates several independent Random Forest classifiers, each based on a different sensor feature set, to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were utilized for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training faster than other methods. The ensemble classifier system based on the Random Forest (RF) algorithm can achieve high recognition accuracy and fast calculation.

  14. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
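    For small networks, attractors can be found by the brute-force dynamics enumeration that ADAM's algebraic methods are designed to avoid; the contrast makes the exponential blow-up in the number of variables concrete. A sketch for synchronous Boolean networks, with an illustrative API:

```python
from itertools import product

def attractors(update, n):
    """Exhaustively simulate a synchronous Boolean network of n nodes.

    `update` maps a state tuple to the next state tuple. Returns the
    set of attractors, each as a frozenset of states in the cycle.
    Cost is O(2^n) states -- exactly the blow-up discussed above.
    """
    found = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        # `state` now repeats: keep only the states inside the cycle.
        cycle = [s for s, idx in seen.items() if idx >= seen[state]]
        found.add(frozenset(cycle))
    return found
```

Fixed points appear as one-state cycles; ADAM instead finds them by solving the equivalent polynomial system, which stays fast on the sparse, robust networks typical of biology.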

  15. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-05-29

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
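    The kernel being parallelized here is Brandes' betweenness centrality algorithm. A sequential sketch for unweighted graphs shows the two phases (forward BFS with shortest-path counting, backward dependency accumulation) that the lock-free version runs concurrently over sources:

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm, sequential sketch, unweighted adjacency dict.
    Counts ordered (s, t) pairs; halve the scores for undirected use."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # Phase 1: BFS from s, tracking shortest-path counts (sigma)
        # and predecessors along shortest paths.
        dist = {s: 0}
        sigma = {v: 0 for v in graph}
        sigma[s] = 1
        preds = {v: [] for v in graph}
        order = []
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: back-propagate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

The per-source loops are independent, which is what makes the kernel amenable to the massively multithreaded machines benchmarked above.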

  16. A random walk approach to quantum algorithms.

    PubMed

    Kendon, Vivien M

    2006-12-15

    The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.

  17. Modeling and predicting the Spanish Bachillerato academic results over the next few years using a random network model

    NASA Astrophysics Data System (ADS)

    Cortés, J.-C.; Colmenar, J.-M.; Hidalgo, J.-I.; Sánchez-Sánchez, A.; Santonja, F.-J.; Villanueva, R.-J.

    2016-01-01

    Academic performance is a concern of paramount importance in Spain, where around 30% of the students in the last two years of high school, before entering the labor market or the university, do not achieve the minimum knowledge required by the Spanish educational law in force. In order to analyze this problem, we propose a random network model to study the dynamics of academic performance in Spain. Our approach is based on the idea that both good and bad study habits are a mixture of personal decisions and the influence of classmates. Moreover, in order to account for the uncertainty in the estimation of model parameters, we perform numerous simulations, taking as model parameters those that best fit the data as returned by the Differential Evolution algorithm. This technique permits forecasting model trends over the next few years using confidence intervals.

  18. An improved label propagation algorithm based on node importance and random walk for community detection

    NASA Astrophysics Data System (ADS)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting much of the information carried by nodes and edges, and they do not consider each node on its own merits. The label propagation algorithm (LPA) is a near-linear-time algorithm which aims to find the communities in a network; its high efficiency has attracted many researchers, and in recent years several improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in the LPA. Secondly, a new metric, IAS (importance and similarity), is calculated from node importance and the similarity matrix; it is used to avoid the random selection in the original LPA and improve the algorithm's stability. Finally, tests on real-world and synthetic networks are given. The results show that this algorithm performs better than existing methods in finding community structure.
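    The baseline that NILPA improves on is plain label propagation. A toy version is sketched below; ties are broken by smallest label rather than at random, which makes this sketch deterministic (precisely the randomness the paper's importance and similarity measures are designed to remove):

```python
def label_propagation(graph, max_iters=100):
    """Minimal LPA sketch: each node repeatedly adopts the most common
    label among its neighbours, until no label changes. Ties broken by
    smallest label so this toy version is deterministic."""
    labels = {v: v for v in graph}
    for _ in range(max_iters):
        changed = False
        for v in sorted(graph):
            counts = {}
            for w in graph[v]:
                counts[labels[w]] = counts.get(labels[w], 0) + 1
            if not counts:
                continue  # isolated node keeps its own label
            top = max(counts.values())
            best = min(l for l in counts if counts[l] == top)
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels
```

Nodes sharing a final label form one detected community; NILPA replaces both the update order and the tie-breaking with importance- and similarity-driven choices.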

  19. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To compress multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed algorithm offers high security and good compression performance.
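
    The compression half of such a scheme (DCT followed by spectrum cutting) can be illustrated with a naive 1-D sketch. The chaotic DFrRT encryption stage is omitted, and the transform below is the textbook unnormalised DCT-II/DCT-III pair, not the paper's implementation:

```python
import math

def dct2(x):
    """Naive DCT-II: X_k = sum_n x_n cos(pi*k*(2n+1)/(2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct2(X):
    """Matching inverse (DCT-III with 1/N and 2/N scaling)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(
        X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
        for k in range(1, N)) for n in range(N)]

def cut_spectrum(X, keep):
    """'Spectrum cutting': keep the first `keep` low-frequency
    coefficients and zero out the rest."""
    return [c if k < keep else 0.0 for k, c in enumerate(X)]

signal = [math.sin(2 * math.pi * n / 16) for n in range(16)]
compressed = cut_spectrum(dct2(signal), keep=8)
restored = idct2(compressed)     # lossy reconstruction from half the spectrum
```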

  20. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution: N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
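
    For comparison, the textbook inversion method for exponential deviates, the plain baseline that fast register-based generators like this one compete against, is only a few lines:

```python
import math
import random

def exponential_deviates(n, lam=1.0, seed=42):
    """Inversion method: if U ~ Uniform(0,1), then -ln(1-U)/lam is
    exponentially distributed with rate lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = exponential_deviates(100_000)
mean = sum(samples) / len(samples)   # should be close to 1/lam = 1
```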

  1. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind are systematized. Three types of such algorithms are distinguished: the projection, the mesh, and the projection-mesh methods. The possibilities for using these algorithms to solve practically important problems are investigated in detail. The disadvantage of the mesh algorithms, namely the necessity of calculating the values of the kernels of the integral equations at fixed points, is identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
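
    A minimal Monte Carlo (von Neumann series) sketch for a second-kind Fredholm equation is shown below. The constant kernel is chosen so the exact solution is phi = 2 everywhere; the kernel, the source term f, and the continuation probability are illustrative, not taken from the paper:

```python
import random

def fredholm_mc(f, K, x0, walks=20_000, p_cont=0.5, seed=1):
    """Monte Carlo estimator sketch for
        phi(x) = f(x) + integral_0^1 K(x,y) phi(y) dy.
    A random walk continues with probability p_cont; the weight is
    corrected by K/p_cont so the expected score equals phi(x0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, w, score = x0, 1.0, f(x0)
        while rng.random() < p_cont:
            y = rng.random()             # next point ~ Uniform(0,1)
            w *= K(x, y) / p_cont        # importance weight
            score += w * f(y)
            x = y
        total += score
    return total / walks

# With f = 1 and K = 0.5 the exact solution is phi(x) = 2 for all x.
phi = fredholm_mc(lambda x: 1.0, lambda x, y: 0.5, 0.3)
```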

  2. Prediction of Protein-Protein Interaction Sites by Random Forest Algorithm with mRMR and IFS

    PubMed Central

    Li, Bi-Qing; Feng, Kai-Yan; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2012-01-01

    Prediction of protein-protein interaction (PPI) sites is one of the most challenging problems in computational biology. Although great progress has been made by employing various machine learning approaches with numerous characteristic features, the problem is still far from being solved. In this study, we developed a novel predictor based on Random Forest (RF) algorithm with the Minimum Redundancy Maximal Relevance (mRMR) method followed by incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility. We also included five 3D structural features to predict protein-protein interaction sites and achieved an overall accuracy of 0.672997 and MCC of 0.347977. Feature analysis showed that 3D structural features such as Depth Index (DPX) and surface curvature (SC) contributed most to the prediction of protein-protein interaction sites. It was also shown via site-specific feature analysis that the features of individual residues from PPI sites contribute most to the determination of protein-protein interaction sites. It is anticipated that our prediction method will become a useful tool for identifying PPI sites, and that the feature analysis described in this paper will provide useful insights into the mechanisms of interaction. PMID:22937126
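
    The bagging-and-voting idea behind Random Forest can be sketched with bootstrapped decision stumps standing in for full trees. The one-feature toy data set is illustrative; the paper's physicochemical features and mRMR/IFS selection stages are not modelled here:

```python
import random
from collections import Counter

def train_stump(X, y):
    """Best single-feature threshold split by classification error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                pred = [1 if sign * (row[f] - t) > 0 else 0 for row in X]
                err = sum(p != label for p, label in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda row: 1 if sign * (row[f] - t) > 0 else 0

def random_forest_sketch(X, y, n_trees=15, seed=0):
    """Bootstrap-aggregated stumps voting by majority."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(row):
        votes = Counter(s(row) for s in stumps)
        return votes.most_common(1)[0][0]
    return predict

# Toy data: label 1 when the single feature exceeds 0.5.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
predict = random_forest_sketch(X, y)
```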

  3. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  4. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    The constrained spectral clustering (CSC) method can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields asymptotically similar results as its model size increases; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with random-projection dimensionality reduction in the process of spectral ensemble clustering. We demonstrate through theoretical analysis and empirical results that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering, in which each point has a corresponding weight. PMID:29312447
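
    The random-projection step used for dimensionality reduction can be sketched as a Gaussian Johnson-Lindenstrauss projection; the matrix sizes and tolerances below are illustrative:

```python
import math
import random

def random_projection(points, k, seed=0):
    """Gaussian random projection sketch: R has i.i.d. N(0, 1/k) entries,
    so pairwise distances are preserved up to small distortion with high
    probability."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)]
         for _ in range(k)]
    return [[sum(r[j] * p[j] for j in range(d)) for r in R] for p in points]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Project 5 random points from 500 dimensions down to 200.
rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(5)]
low = random_projection(pts, k=200)
```

    In the cluster-ensemble setting the projected points, rather than the originals, are fed to the consensus clustering step, which is what makes the ensemble scalable.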

  5. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process will induce instability in the iterative inversion regardless of whether it uses a robust limited-memory BFGS (L-BFGS) algorithm. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.

  6. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is formulated as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been conducted under different types of traffic sources.
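
    The minimum spanning tree half of the MSTMCF model can be computed with Kruskal's algorithm and a union-find structure; the toy graph below is illustrative, not a network from the paper:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: sort edges by weight and add each edge that
    joins two previously disconnected components (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst, total = [], 0.0
    for w, u, v in sorted(edges):           # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# 4-node toy network; the MST picks weights 1 + 2 + 3 = 6.
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal_mst(4, edges)
```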

  7. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
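
    Standard pairwise gossip, the baseline the geographic scheme improves on, can be simulated in a few lines; the ring topology and round count are illustrative:

```python
import random

def gossip_average(values, neighbors, rounds=30_000, seed=0):
    """Standard pairwise gossip: at each step a random node averages its
    value with a random neighbour; all values converge to the global mean
    while the sum stays constant."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ring of 8 sensors holding values 0..7 (true mean = 3.5); the ring is
# exactly the slow-mixing case the paper highlights.
n = 8
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
values = [float(i) for i in range(n)]
result = gossip_average(values, neighbors)
```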

  8. Study Of Genetic Diversity Between Grasspea Landraces Using Morphological And Molecular Marker

    NASA Astrophysics Data System (ADS)

    Sedehi, Abbasali Vahabi; Lotfi, Asefeh; Solooki, Mahmood

    2008-01-01

    Grass pea is a beneficial crop for Iran, since it offers major advantages such as high grain and forage quality, high drought tolerance, a medium level of salinity tolerance, and good native germplasm variation accessible for breeding programs. This study was carried out to evaluate morphological traits of grass pea landraces using a randomized complete block design with 3 replications at the Research Farm of Isfahan University of Technology. To evaluate genetic diversity, 14 grass pea landraces from various locations in Iran were investigated using 32 RAPD & ISJ primers at the Biocenter of the University of Zabol. Analysis of variance indicated highly significant differences among the 14 grass pea landraces for the morphological traits. The average polymorphism percentage of the RAPD primers was 73.9%. Among the primers used, 12 random primers showed polymorphism, and a total of 56 different bands were observed in the genotypes. The Jafar-abad and Sar-chahan genotypes, with a similarity coefficient of 66%, and the Khoram-abad 2 and Khoram-abad 7 genotypes, with a similarity coefficient of 3%, were the most related and the most distinct genotypes, respectively. Fourteen of the 17 semi-random primers produced 70 polymorphic bands, comprising 56% of the 126 bands produced in total. Genetic relatedness among populations was investigated using the Jaccard coefficient and the unweighted pair group method with arithmetic mean (UPGMA) algorithm. The results of this research verified the possibility of using RAPD & ISJ markers for estimating genetic diversity, managing genetic resources, and identifying repetitive accessions in grass pea.

  9. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. The algorithm is designed based on dDCA and improves on it by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively: it is capable of triggering the rescheduling process optimally, with much less run time needed to decide the rescheduling action. As such, the proposed algorithm is able to minimize the number of rescheduling events under the defined objective and to keep the scheduling process stable and efficient.

  10. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
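
    Of the three algorithms, leaf removal is the easiest to sketch for minimum vertex cover; the rule and example graph below are a generic illustration, not the paper's average-case analysis:

```python
def leaf_removal_cover(adj):
    """Leaf-removal sketch for minimum vertex cover: while some vertex has
    degree 1, put its unique neighbour into the cover and delete that
    neighbour (covering all of its edges at once). On graphs that leave no
    leftover core, the resulting cover is optimal."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    cover = set()
    while True:
        leaf = next((v for v, nb in adj.items() if len(nb) == 1), None)
        if leaf is None:
            break
        (u,) = adj[leaf]            # the neighbour covers the leaf's edge
        cover.add(u)
        for w in adj.pop(u):        # delete u and all of its edges
            adj[w].discard(u)
    return cover

# Path 0-1-2-3: an optimal cover has size 2.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cover = leaf_removal_cover(adj)
```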

  11. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency of intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turnaround Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
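
    A randomized selection policy for non-preemptive scheduling can be sketched as follows; the burst times and the assumption that all jobs arrive at time zero are illustrative, not PicOS internals:

```python
import random

def randomized_schedule(burst_times, seed=7):
    """Randomized selection sketch: repeatedly pick the next job to run
    uniformly at random from the ready queue (non-preemptive)."""
    rng = random.Random(seed)
    ready = list(range(len(burst_times)))
    clock, waits = 0, {}
    while ready:
        job = ready.pop(rng.randrange(len(ready)))
        waits[job] = clock                # time spent waiting before running
        clock += burst_times[job]
    awt = sum(waits.values()) / len(waits)
    # With all arrivals at t=0, turnaround = waiting + burst.
    att = awt + sum(burst_times) / len(burst_times)
    return waits, awt, att

waits, awt, att = randomized_schedule([3, 1, 2])
```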

  12. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks, to neural networks, to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We explain various alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
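
    The Hit-and-Run benchmark can be sketched for a polytope {x : Ax <= b}: from the current point, pick a random direction, intersect the line with the polytope, and jump to a uniform point on that chord. The unit-square instance below is illustrative:

```python
import math
import random

def hit_and_run(A, b, x0, steps=1000, seed=3):
    """Hit-and-Run sketch for (asymptotically) uniform sampling of
    the polytope {x : Ax <= b}, starting from an interior point x0."""
    rng = random.Random(seed)
    x = list(x0)
    samples = []
    for _ in range(steps):
        d = [rng.gauss(0, 1) for _ in x]          # random direction
        t_lo, t_hi = -math.inf, math.inf
        for a_row, b_i in zip(A, b):              # chord endpoints
            ad = sum(ai * di for ai, di in zip(a_row, d))
            ax = sum(ai * xi for ai, xi in zip(a_row, x))
            if abs(ad) < 1e-12:
                continue                          # direction parallel to face
            t = (b_i - ax) / ad
            if ad > 0:
                t_hi = min(t_hi, t)
            else:
                t_lo = max(t_lo, t)
        t = rng.uniform(t_lo, t_hi)               # uniform point on the chord
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(list(x))
    return samples

# Unit square 0 <= x, y <= 1 written as Ax <= b.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
samples = hit_and_run(A, b, [0.5, 0.5])
```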

  13. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, some important problems remain to be solved, one of which is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model and a magnetic sensor array, we propose a nonlinear optimization method using a random complex algorithm, applied to the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional orientation parameters. The stability and noise tolerance of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, with respect to the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range of viable initial guess values.

  14. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: minimize ||Lx|| subject to ||Ax - b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
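
    The RSVD/TRSVD building block can be sketched with a basic Gaussian range finder. This is the generic randomized SVD recipe (Halko-Martinsson-Tropp style), not the authors' MTRSVD solver; the LSQR inner iteration and the regularization matrix L are omitted:

```python
import numpy as np

def rsvd(A, k, q=5, seed=0):
    """Randomized SVD sketch: sample the range of A with a Gaussian test
    matrix of k+q columns (q is the oversampling parameter), project onto
    that range, and take the SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], k + q))   # random test matrix
    Q, _ = np.linalg.qr(A @ G)                     # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k, :]              # rank-k truncation (TRSVD)

# Exact rank-3 matrix: the rank-3 TRSVD should reproduce it almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = rsvd(A, k=3)
err = np.linalg.norm(A - U @ (s[:, None] * Vt))
```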

  15. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming to resolve the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, built on the "one-time pad" idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme the character of a "one-time pad" and greatly improves the security of the algorithm without a significant increase in complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids having to negotiate its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen-plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
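
    The "fresh random parameter per encryption, transported with the ciphertext" idea can be sketched with a generic stream cipher. Here SHA-256 in counter mode stands in for the paper's chaotic keystream, and a plainly prepended nonce stands in for the information-hiding embedding:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Pseudorandom keystream: hash(key || nonce || counter) blocks."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """A fresh random nonce per encryption yields a different keystream
    every time ("one-time pad" idea); the nonce travels with the blob."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

msg = b"pixel data"
blob = encrypt(b"shared-key", msg)
```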

  16. Pure random search for ambient sensor distribution optimisation in a smart home environment.

    PubMed

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2011-01-01

    Smart homes are living spaces equipped with technology that allows individuals to remain in their own homes for longer, rather than being institutionalised. Sensors are the fundamental physical layer within any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. Positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution follows either (a) a total coverage approach or (b) a human assessment approach. These methods for sensor arrangement are not data-driven, are unempirical, and are frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that uses inhabitants' spatial frequency data as the search space would produce better sensor distributions than the current method of sensor deployment by engineers. Seven human engineers were tasked to create sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked to create matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human-engineer-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when engineers had full access to these data. These results thus confirmed the hypothesis.
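
    Pure Random Search itself is only a few lines: draw candidates uniformly from the search space and keep the best. The `coverage` objective below is a hypothetical stand-in for the inhabitants' spatial-frequency data:

```python
import random

def pure_random_search(score, bounds, samples=5000, seed=0):
    """PRS sketch: sample candidate layouts uniformly at random and keep
    the best-scoring one seen so far."""
    rng = random.Random(seed)
    best_x, best_s = None, float("-inf")
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        s = score(x)
        if s > best_s:
            best_x, best_s = x, s
    return best_x, best_s

# Toy utility: sensor coverage peaks when the sensor sits at (2, 3).
def coverage(x):
    return -((x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2)

layout, util = pure_random_search(coverage, [(0, 5), (0, 5)])
```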

  17. Land cover and land use mapping of the iSimangaliso Wetland Park, South Africa: comparison of oblique and orthogonal random forest algorithms

    NASA Astrophysics Data System (ADS)

    Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad

    2016-01-01

    In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.

  18. Comparison of genetic algorithm methods for fuel management optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-12-31

    The CIGARO system was developed for genetic algorithm fuel management optimization. Tests were performed to find the best probability for the fuel-location swap mutation operator and to compare the genetic algorithm with a truly random search method. The tests showed that the fuel swap probability should be between 0% and 10%; a value of 50% definitely hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
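
    A toy GA with a tunable fuel-location swap mutation probability can be sketched as follows; the permutation encoding, selection scheme, and fitness function are illustrative stand-ins, not CIGARO's core-physics objective:

```python
import random

def ga_loading_pattern(n_positions=12, pop_size=30, generations=60,
                       swap_prob=0.05, seed=0):
    """GA sketch: individuals are permutations of fuel positions; the
    swap mutation exchanges two locations with probability swap_prob."""
    rng = random.Random(seed)
    target = list(range(n_positions))          # hypothetical ideal ordering

    def fitness(ind):                          # stand-in objective
        return sum(a == b for a, b in zip(ind, target))

    def crossover(p1, p2):                     # simple order crossover
        cut = rng.randrange(n_positions)
        head = p1[:cut]
        return head + [g for g in p2 if g not in head]

    pop = [rng.sample(range(n_positions), n_positions) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                     # elitism keeps the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)   # truncation selection
            child = crossover(p1, p2)
            if rng.random() < swap_prob:       # fuel-location swap mutation
                i, j = rng.sample(range(n_positions), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=fitness)
    return best, fitness(best)

best, fit = ga_loading_pattern()
```

    Re-running this with swap_prob near 0.5 illustrates the paper's finding: excessive mutation disrupts good loading patterns faster than selection can preserve them.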

  19. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm performs better than conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  20. Random Bits Forest: a Strong Classifier/Regressor for Big Data

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li

    2016-07-01

    Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed well in testing with an independent data set, a real psoriasis genome-wide association study (GWAS).

  1. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
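
    The skip-ahead idea behind subquadratic sparse graph generation can be illustrated with the classic Batagelj-Brandes G(n,p) sampler, a simpler homogeneous special case of the paper's kernel graphs:

```python
import math
import random

def fast_gnp(n, p, seed=0):
    """Batagelj-Brandes style G(n,p) generation: instead of testing all
    O(n^2) vertex pairs, skip ahead by geometrically distributed gaps, so
    the run time is proportional to the number of edges generated."""
    rng = random.Random(seed)
    edges = []
    lp = math.log(1.0 - p)
    v, w = 1, -1
    while v < n:
        r = rng.random()
        w += 1 + int(math.log(1.0 - r) / lp)   # geometric skip to next edge
        while w >= v and v < n:                # carry the index over rows
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))               # edge between v and w (w < v)
    return edges

edges = fast_gnp(1000, 0.01)   # expected edge count ~ 4995
```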

  2. Fault Detection of Aircraft System with Random Forest Algorithm and Similarity Measure

    PubMed Central

    Park, Wookje; Jung, Sikhang

    2014-01-01

    A fault detection algorithm was developed using a similarity measure and the random forest algorithm. The resulting algorithm was applied to an unmanned aerial vehicle (UAV) that we prepared. The similarity measure was designed with the help of distance information, and its usefulness was verified by proof. Fault decisions were made by calculating a weighted similarity measure. Twelve available coefficients from the healthy- and faulty-status data groups were used to make the decision. The similarity measure weighting was obtained through the random forest algorithm (RFA), which provides data priority. In order to obtain a fast decision response, a limited number of coefficients was also considered. The relation between detection rate and the amount of feature data was analyzed and illustrated. Through repeated trials of the similarity calculation, a useful amount of data was determined. PMID:25057508

  3. An energy-efficient and secure hybrid algorithm for wireless sensor networks using a mobile data collector

    NASA Astrophysics Data System (ADS)

    Dayananda, Karanam Ravichandran; Straub, Jeremy

    2017-05-01

    This paper proposes a new hybrid algorithm for security, which incorporates both distributed and hierarchal approaches. It uses a mobile data collector (MDC) to collect information in order to save energy of sensor nodes in a wireless sensor network (WSN) as, in most networks, these sensor nodes have limited energy. Wireless sensor networks are prone to security problems because, among other things, it is possible to use a rogue sensor node to eavesdrop on or alter the information being transmitted. To prevent this, this paper introduces a security algorithm for MDC-based WSNs. A key use of this algorithm is to protect the confidentiality of the information sent by the sensor nodes. The sensor nodes are deployed in a random fashion and form group structures called clusters. Each cluster has a cluster head. The cluster head collects data from the other nodes using the time-division multiple access protocol. The sensor nodes send their data to the cluster head for transmission to the base station node for further processing. The MDC acts as an intermediate node between the cluster head and base station. The MDC, using its dynamic acyclic graph path, collects the data from the cluster head and sends it to base station. This approach is useful for applications including warfighting, intelligent building and medicine. To assess the proposed system, the paper presents a comparison of its performance with other approaches and algorithms that can be used for similar purposes.

  4. 78 FR 35645 - Certain Static Random Access Memories and Products Containing Same; Commission Determination...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-13

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-792] Certain Static Random Access Memories and Products Containing Same; Commission Determination Affirming a Final Initial Determination..., and the sale within the United States after importation of certain static random access memories and...

  5. Individual pore and interconnection size analysis of macroporous ceramic scaffolds using high-resolution X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca

    2016-08-15

    The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterial community, it is rarely used to measure the interconnection size because of the lack of suitable algorithms. In addition, the discrete nature of μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, were examined using computer-based simulations and additional high-resolution scanning. Accurate correction charts were then developed and used to reduce the scanning errors. Only after these corrections did the μCT- and SEM-based results converge, validating the novel algorithm. Using the novel algorithm, materials scientists with access to all geometrical properties of individual pores and interconnections will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between geometric and biological interaction. - Highlights: •An algorithm is developed to analyze all pores and interconnections individually. •After pore isolation, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.

  6. Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Woo, Simon S.; James, Mark; Paloulian, George K.

    2012-01-01

    As communication and networking technologies advance, networks will become highly complex and heterogeneous, interconnecting different network domains. There is a need to provide user authentication and data protection in order to further facilitate critical mission operations, especially in the tactical and mission-critical net-centric networking environment. The Autonomous Information Unit (AIU) technology was designed to provide fine-grain data access and user control in a net-centric system-testing environment to meet these objectives. The AIU is a fundamental capability designed to enable fine-grain data access and user control in cross-domain networking environments, where an AIU is composed of the mission data, metadata, and policy. An AIU provides a mechanism to establish trust among deployed AIUs based on recombining shared secrets, and it authenticates and verifies users with a username, X.509 certificate, enclave information, and classification level. The AIU achieves data protection through (1) splitting data into multiple information pieces using Shamir's secret sharing algorithm, (2) encrypting each individual information piece using military-grade AES-256 encryption, and (3) randomizing the position of the encrypted data based on the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Therefore, it becomes virtually impossible for attackers to compromise the data, since an attacker would need to obtain all distributed information as well as the encryption key and the random seeds to properly arrange the data. In addition, since policy can be associated with data in the AIU, different user access and data control strategies can be included. The AIU technology can greatly enhance information assurance and security management in bandwidth-limited and ad hoc net-centric environments. In addition, AIU technology is applicable to general complex network domains and applications where distributed user authentication and data protection are necessary. The AIU achieves fine-grain data access and user control, reducing security risk significantly, simplifying the complexity of various security operations, and providing high information assurance across different network domains.
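    The position-randomization step can be illustrated with the seeded in-place Fisher-Yates shuffle. This is a generic sketch of the technique, not the AIU implementation; a receiver that knows the seed can replay the swap sequence in reverse to restore the original order:

```python
import random

def fisher_yates_shuffle(items, seed):
    """Unbiased in-place Fisher-Yates shuffle driven by a seeded RNG."""
    rng = random.Random(seed)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items

def fisher_yates_unshuffle(items, seed):
    """Regenerate the same swap sequence, then apply it in reverse order;
    each swap is its own inverse, so this restores the original layout."""
    rng = random.Random(seed)
    swaps = [rng.randint(0, i) for i in range(len(items) - 1, 0, -1)]
    for i, j in zip(range(1, len(items)), reversed(swaps)):
        items[i], items[j] = items[j], items[i]
    return items

original  = list(range(10))
scrambled = fisher_yates_shuffle(list(original), seed=42)
restored  = fisher_yates_unshuffle(list(scrambled), seed=42)
```

    Without the seed, an attacker sees only a uniformly random permutation of the encrypted pieces, which is the property the AIU design relies on.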

  7. Improved Writing-Conductor Designs For Magnetic Memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.

    1994-01-01

    Writing currents reduced to practical levels. Improved conceptual designs for writing conductors in micromagnet/Hall-effect random-access integrated-circuit memory reduce the electrical current needed to magnetize the micromagnet in each memory cell. Basic concept of micromagnet/Hall-effect random-access memory presented in "Magnetic Analog Random-Access Memory" (NPO-17999).

  8. 78 FR 25767 - Certain Static Random Access Memories and Products Containing Same; Commission Determination To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-02

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-792] Certain Static Random Access Memories and Products Containing Same; Commission Determination To Review in Part a Final Initial... States after importation of certain static random access memories and products containing the same by...

  9. Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance

    NASA Astrophysics Data System (ADS)

    Kornfeld, Gertrude H.

    1987-09-01

    Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift-variant recursive convolution algorithm that describes thermal properties well was invented, and because each picture element (pixel) carries a radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.

  10. PCM-Based Durable Write Cache for Fast Disk I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhuo; Wang, Bin; Carpenter, Patrick

    2012-01-01

    Flash-based solid-state devices (FSSDs) have been adopted within the memory hierarchy to improve the performance of hard disk drive (HDD) based storage systems. However, with the fast development of storage-class memories, new storage technologies with better performance and higher write endurance than FSSDs are emerging, e.g., phase-change memory (PCM). Understanding how to leverage these state-of-the-art storage technologies for modern computing systems is important to solve challenging data-intensive computing problems. In this paper, we propose to leverage PCM for a hybrid PCM-HDD storage architecture. We identify the limitations of traditional LRU caching algorithms for PCM-based caches, and develop a novel hash-based write caching scheme called HALO to improve random write performance of hard disks. To address the limited durability of PCM devices and solve the degraded spatial locality in traditional wear-leveling techniques, we further propose novel PCM management algorithms that provide effective wear-leveling while maximizing access parallelism. We have evaluated this PCM-based hybrid storage architecture using applications with a diverse set of I/O access patterns. Our experimental results demonstrate that the HALO caching scheme leads to an average reduction of 36.8% in execution time compared to the LRU caching scheme, and that the SFC wear leveling extends the lifetime of PCM by a factor of 21.6.

  11. Spectroscopic diagnosis of laryngeal carcinoma using near-infrared Raman spectroscopy and random recursive partitioning ensemble techniques.

    PubMed

    Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei

    2009-06-01

    In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy associated with the ensemble recursive partitioning algorithm based on random forests for identifying cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) can be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.

  12. Computer methods for sampling from the gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.E.; Tadikamalla, P.R.

    1978-01-01

    Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
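    As an illustration of the rejection-sampling style these generators share, the squeeze/rejection method of Marsaglia and Tsang (a later method, not one of the algorithms surveyed in this 1978 paper) draws Gamma(α, 1) variates for α ≥ 1 from normal and uniform deviates:

```python
import math
import random

def gamma_variate(alpha, rng=random):
    """Marsaglia-Tsang method for Gamma(alpha, 1), valid for alpha >= 1."""
    if alpha < 1:
        raise ValueError("this sketch covers alpha >= 1 only")
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0:
            continue                     # reject: candidate not positive
        u = rng.random()
        # cheap "squeeze" acceptance test first, exact log test as fallback
        if u < 1.0 - 0.0331 * x ** 4:
            return d * v
        if math.log(u) < 0.5 * x * x + d * (1.0 - v + math.log(v)):
            return d * v

rng = random.Random(1)
samples = [gamma_variate(2.5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)       # E[Gamma(2.5, 1)] = 2.5
```

    The squeeze step accepts the vast majority of candidates without an expensive logarithm, which is exactly the kind of speed optimization the surveyed algorithms compete on.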

  13. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madduri, Kamesh; Ediger, David; Jiang, Karl

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
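    For reference, the sequential Brandes algorithm that such parallel kernels build on can be sketched as follows (unweighted, undirected graph as adjacency lists; a sketch of the underlying algorithm, not the lock-free implementation above):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: exact betweenness centrality for an unweighted,
    undirected graph given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS phase: shortest-path counts sigma and predecessor lists
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulation phase: dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        while order:
            w = order.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}   # undirected: halve

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
bc = betweenness(path)   # interior nodes of a path carry all shortest paths
```

    The per-source loop is what parallel versions distribute across threads; the shared `bc` accumulation is where lock-free atomic updates become necessary.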

  14. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of these algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.

  15. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously, with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local-search factors are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
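    The random-key encoding can be illustrated with the standard decoding rule (sort positions by their key values to obtain a job sequence). This is a generic Bean-style sketch with a hypothetical chromosome, not GAspLA itself, which additionally handles job splitting and machine assignment:

```python
def decode_random_keys(keys):
    """Sort gene positions by their random-key values; the resulting
    index order is read as the job processing sequence."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# One key per job (hypothetical chromosome from a GA population)
chromosome = [0.71, 0.12, 0.55, 0.33]
sequence = decode_random_keys(chromosome)
```

    Because any real-valued vector decodes to a valid permutation, crossover never produces infeasible offspring; the difficulty the paper addresses is mapping a local-search move on the schedule back onto the keys with minimal relocation of gene values.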

  16. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated through prediction. Aiming at the large random drift error of MEMS (Micro-Electro-Mechanical System) gyroscopes, an improved learning algorithm for the Radial Basis Function (RBF) Neural Network (NN), based on K-means clustering and Orthogonal Least Squares (OLS), is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines the candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007°/s and a prediction time of 2.4169e-006 s.
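    The K-means step that refines candidate RBF centers can be sketched in one dimension (a generic Lloyd-style K-means on mock drift samples, not the paper's full K-means + OLS pipeline):

```python
def kmeans_1d(points, centers, iters=20):
    """Lloyd's iterations: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in clusters.items()]
    return centers

drift = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]   # mock gyro drift samples
centers = kmeans_1d(drift, centers=[0.0, 5.0])
```

    In the RBF NN, each converged center becomes the center of one radial basis function; OLS then selects the subset of candidates that best explains the training data, which is what keeps the network structure small.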

  17. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study

    NASA Astrophysics Data System (ADS)

    Polan, Daniel F.; Brady, Samuel L.; Kaufman, Robert A.

    2016-09-01

    There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool designed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, variance, Gaussian, and Kuwahara filters. Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols, and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.

  18. DEVELOPMENT AND PERFORMANCE OF TEXT-MINING ALGORITHMS TO EXTRACT SOCIOECONOMIC STATUS FROM DE-IDENTIFIED ELECTRONIC HEALTH RECORDS.

    PubMed

    Hollister, Brittany M; Restrepo, Nicole A; Farber-Eger, Eric; Crawford, Dana C; Aldrich, Melinda C; Non, Amy

    2017-01-01

    Socioeconomic status (SES) is a fundamental contributor to health, and a key factor underlying racial disparities in disease. However, SES data are rarely included in genetic studies due in part to the difficulty of collecting these data when studies were not originally designed for that purpose. The emergence of large clinic-based biobanks linked to electronic health records (EHRs) provides research access to large patient populations with longitudinal phenotype data captured in structured fields as billing codes, procedure codes, and prescriptions. SES data, however, are often not explicitly recorded in structured fields, but rather recorded in the free text of clinical notes and communications. The content and completeness of these data vary widely by practitioner. To enable gene-environment studies that consider SES as an exposure, we sought to extract SES variables from racial/ethnic minority adult patients (n=9,977) in BioVU, the Vanderbilt University Medical Center biorepository linked to de-identified EHRs. We developed several measures of SES using information available within the de-identified EHR, including broad categories of occupation, education, insurance status, and homelessness. Two hundred patients were randomly selected for manual review to develop a set of seven algorithms for extracting SES information from de-identified EHRs. The algorithms consist of 15 categories of information, with 830 unique search terms. SES data extracted from manual review of 50 randomly selected records were compared to data produced by the algorithm, resulting in positive predictive values of 80.0% (education), 85.4% (occupation), 87.5% (unemployment), 63.6% (retirement), 23.1% (uninsured), 81.8% (Medicaid), and 33.3% (homelessness), suggesting some categories of SES data are easier to extract in this EHR than others. The SES data extraction approach developed here will enable future EHR-based genetic studies to integrate SES information into statistical analyses. Ultimately, incorporation of measures of SES into genetic studies will help elucidate the impact of the social environment on disease risk and outcomes.

  19. What do first-time mothers worry about? A study of usage patterns and content of calls made to a postpartum support telephone hotline

    PubMed Central

    2010-01-01

    Background Telephone hotlines designed to address common concerns in the early postpartum could be a useful resource for parents. Our aim was to test the feasibility of using a telephone hotline as an intervention in a randomized controlled trial. We also aimed to test the use of algorithms to address parental concerns through a telephone hotline. Methods Healthy first-time mothers were recruited from postpartum wards of hospitals throughout Lebanon. Participants were given the number of a 24-hour telephone hotline that they could access for the first four months after delivery. Calls were answered by a midwife using algorithms developed by the study team whenever possible. Callers with medical complaints were referred to their physicians. Call patterns and content were recorded and analyzed. Results Eighty-four of the 353 women enrolled (24%) used the hotline. Sixty percent of the women who used the service called more than once, and all callers reported they were satisfied with the service. The midwife received an average of three calls per day and most calls occurred during the first four weeks postpartum. Our algorithms were used to answer questions in 62.8% of calls and 18.6% of calls required referral to a physician. Of the questions related to mothers, 66% were about breastfeeding. Sixty percent of questions related to the infant were about routine care and 23% were about excessive crying. Conclusions Utilization of a telephone hotline service for postpartum support is highest in the first four weeks postpartum. Most questions are related to breastfeeding, routine newborn care, and management of a fussy infant. It is feasible to test a telephone hotline as an intervention in a randomized controlled trial. Algorithms can be developed to provide standardized answers to the most common questions. PMID:20946690

  20. Random access with adaptive packet aggregation in LTE/LTE-A.

    PubMed

    Zhou, Kaijie; Nikaein, Navid

    While random access presents a promising solution for efficient uplink channel access, the preamble collision rate can significantly increase when a massive number of devices simultaneously access the channel. To address this issue and improve the reliability of random access, an adaptive packet aggregation method is proposed. With the proposed method, a device does not trigger a random access for every single packet. Instead, it starts a random access when the number of aggregated packets reaches a given threshold. This method reduces the packet collision rate at the expense of extra latency, which is used to accumulate multiple packets into a single transmission unit. Therefore, the tradeoff between packet loss rate and channel access latency has to be carefully balanced. We use a semi-Markov model to derive the packet loss rate and channel access latency as functions of the packet aggregation number. Hence, the optimal number of aggregated packets can be found, which keeps the loss rate below the desired value while minimizing the access latency. We also apply the idea of packet aggregation to power saving, where a device aggregates as many packets as possible until the latency constraint is reached. Simulations are carried out to evaluate our methods. We find that the packet loss rate and/or power consumption are significantly reduced with the proposed method.
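    The collision/latency tradeoff can be illustrated with a simple slotted back-of-envelope model (this stands in for the paper's semi-Markov analysis; the preamble count and device numbers below are assumed): aggregating k packets per access divides the number of simultaneous access attempts by roughly k, and an attempt collides when any other simultaneous attempt draws the same preamble.

```python
def collision_probability(attempts, preambles):
    """P(at least one other simultaneous attempt picks my preamble),
    assuming each attempt draws a preamble uniformly at random."""
    return 1.0 - (1.0 - 1.0 / preambles) ** (attempts - 1)

M = 54          # contention-based preambles per cell (assumed configuration)
devices = 300   # devices with one pending packet each (assumed load)

# k packets aggregated per access -> ~devices/k simultaneous attempts
p_by_k = {k: collision_probability(max(1, devices // k), M)
          for k in (1, 2, 4, 8)}
```

    The probabilities fall monotonically with k, while the time spent accumulating k packets grows; the paper's contribution is choosing k so the loss rate stays below target at minimum added latency.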

  1. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  2. Near-Neighbor Algorithms for Processing Bearing Data

    DTIC Science & Technology

    1989-05-10

    ...neighbor algorithms need not be universally more cost-effective than brute force methods. While the data access time of near-neighbor techniques scales with...the number of objects N better than brute force, the cost of setting up the data structure could scale worse than...for the near neighbors NN2 1 (i). Depending on the particular NN algorithm, the cost of accessing near neighbors for each ai ∈ S1 scales as either N

  3. 76 FR 2336 - Dynamic Random Access Memory Semiconductors From the Republic of Korea: Final Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... DEPARTMENT OF COMMERCE International Trade Administration [C-580-851] Dynamic Random Access Memory... administrative review of the countervailing duty order on dynamic random access memory semiconductors from the... following events have occurred since the publication of the preliminary results of this review. See Dynamic...

  4. ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog

    NASA Technical Reports Server (NTRS)

    Gray, F. P., Jr. (Editor)

    1979-01-01

    A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access files. A description of the matrices written on these files is contained herein.

  5. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    NASA Astrophysics Data System (ADS)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain, and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.

  6. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations

    PubMed Central

    Ho, ThienLuan; Oh, Seung-Rohk

    2017-01-01

    Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using the warp-shuffle operation instead of accessing shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results on real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times over a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively. PMID:29016700
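    As a sequential reference for the k-differences problem (the kind of CPU baseline the GPU algorithm is compared against; a standard Sellers-style dynamic-programming sketch, not the warp-shuffle implementation):

```python
def approx_find(text, pattern, k):
    """Sellers' dynamic programming: return end positions in text where
    pattern matches with at most k edits (insert/delete/substitute)."""
    m = len(pattern)
    prev = list(range(m + 1))     # column for the empty text prefix
    hits = []
    for j, tc in enumerate(text):
        curr = [0]                # row 0 stays 0: a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == tc else 1
            curr.append(min(prev[i] + 1,          # skip text character
                            curr[i - 1] + 1,      # skip pattern character
                            prev[i - 1] + cost))  # match / substitute
        if curr[m] <= k:
            hits.append(j)        # pattern ends here within k edits
        prev = curr
    return hits

# "bXd" matches "bcd" (ending at index 3) with one substitution
positions = approx_find("abcdefg", "bXd", 1)
```

    Each text column depends only on the previous one, which is also the dependency that the GPU version resolves by exchanging column values between threads of a warp.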

  7. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
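    A minimal flood-fill sketch illustrates the labeling task itself (a naive BFS version for clarity; the paper's block-based decision-tree algorithm achieves the same result with far fewer memory accesses):

```python
from collections import deque

def label_components(grid):
    """4-connected component labeling by BFS flood fill.
    grid: 2-D list of 0/1; returns (label image, number of components)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and labels[r][c] == 0:
                current += 1                       # found a new component
                labels[r][c] = current
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1]]
labels, n = label_components(image)   # two 4-connected components
```

    The flood fill touches each foreground pixel and all four of its neighbors; block-based two-pass algorithms reduce exactly these neighborhood reads, which is where their speedup comes from.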

  8. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, realizing reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.

  9. Focusing light through random scattering media by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-01-01

    The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, namely the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control algorithms for focusing through scattering media are compared in both numerical simulation and experiment. The four-element division algorithm is found to be particularly advantageous for improving the average convergence rate of focusing.

  10. Management of Central Venous Access Device-Associated Skin Impairment: An Evidence-Based Algorithm.

    PubMed

    Broadhurst, Daphne; Moureau, Nancy; Ullman, Amanda J

    Patients relying on central venous access devices (CVADs) for treatment are frequently complex. Many have multiple comorbid conditions, including renal impairment, nutritional deficiencies, hematologic disorders, or cancer. These conditions can impair the skin surrounding the CVAD insertion site, resulting in an increased likelihood of skin damage when standard CVAD management practices are employed. Supported by the World Congress of Vascular Access (WoCoVA), an international advisory panel developed an evidence- and consensus-based algorithm to improve the identification and diagnosis of CVAD-associated skin impairment (CASI), guide clinical decision-making, and improve clinician confidence in managing CASI. A scoping review of the relevant literature on CASI management was undertaken in March 2014, and the results were distributed to the advisory panel, composed of clinicians with expertise in wounds, vascular access, pediatrics, geriatric care, home care, intensive care, infection control, and acute care, who developed the CASI algorithm using a two-phase modified Delphi technique. The algorithm focuses on the identification and treatment of skin injury, exit-site infection, noninfectious exudate, and skin irritation/contact dermatitis. It comprises three domains: assessment, skin protection, and patient comfort. External validation of the algorithm was achieved by a prospective pre- and posttest design, using clinical scenarios and self-reported clinician confidence (Likert scale), and incorporating algorithm feasibility and face validity endpoints. The CASI algorithm was found to significantly increase participants' confidence in the assessment and management of skin injury (P = .002), skin irritation/contact dermatitis (P = .001), and noninfectious exudate (P < .01). A majority of participants reported the algorithm as easy to understand (24/25; 96%) and as containing all necessary information (24/25; 96%). Twenty-four of 25 (96%) stated that they would recommend the tool to guide management of CASI.

  11. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and information science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can easily be generalized to random images generated by different physical processes. PMID:24976499
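    The extractor in this paper is its own image-based construction; as a generic illustration of how raw physical bits can be unbiased, the classic von Neumann extractor (a standard textbook technique, not the authors' algorithm) can be sketched as:

```python
def von_neumann_extract(bits):
    """Classic von Neumann extractor for biased i.i.d. bits.

    Consecutive non-overlapping pairs are read: 01 -> 0, 10 -> 1,
    while 00 and 11 are discarded. The output bits are unbiased
    whenever the input bits are independent with a fixed bias.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # the first bit of an unequal pair is unbiased
    return out
```

The price of this simplicity is throughput: on average at most a quarter of the input bits survive, which is one motivation for extractors that avoid such post-processing.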

  12. Electromagnetic interference-aware transmission scheduling and power control for dynamic wireless access in hospital environments.

    PubMed

    Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio

    2011-11-01

    We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption, subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem, which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.

  13. 75 FR 44283 - In the Matter of Certain Dynamic Random Access Memory Semiconductors and Products Containing Same...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Random Access Memory Semiconductors and Products Containing Same, Including Memory Modules; Notice of a... importation of certain dynamic random access memory semiconductors and products containing same, including memory modules, by reason of infringement of certain claims of U.S. Patent Nos. 5,480,051; 5,422,309; 5...

  14. Planning nonlinear access paths for temporal bone surgery.

    PubMed

    Fauser, Johannes; Sakas, Georgios; Mukhopadhyay, Anirban

    2018-05-01

    Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. We define a specific motion planning problem in [Formula: see text] with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present [Formula: see text]-RRT-Connect: two suitable motion planners based on bidirectional Rapidly exploring Random Tree (RRT) to solve this problem efficiently. The benefits of [Formula: see text]-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier-Splines when applied to this specific problem. With this work, we demonstrate that preoperative and intra-operative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.

  15. Rendering of 3D-wavelet-compressed concentric mosaic scenery with progressive inverse wavelet synthesis (PIWS)

    NASA Astrophysics Data System (ADS)

    Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin

    2000-05-01

    The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data amount of the concentric mosaics, a compression scheme based on a 3D wavelet transform was proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide this random data access and to reduce the calculation needed for the data access requests to a minimum. A mixed cache is used in PIWS, where the entropy-decoded wavelet coefficient, the intermediate result of lifting, and the fully synthesized pixel are all stored at the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is tagged with a state indicating what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experimental results.
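    PIWS relies on the in-place calculation property of lifting. As a minimal illustration of that property (1D integer Haar lifting with a predict/update pair, not the paper's 3D transform), the forward step and its exact integer inverse might look like:

```python
def haar_lift_forward(x):
    """Integer Haar lifting on an even-length list of ints.

    Predict: detail = odd - even. Update: approx = even + floor(detail/2).
    Returns (approx, detail) lists; exactly invertible in integer arithmetic,
    which is what allows lifting to compute in place.
    """
    s, d = [], []
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]       # predict step
        approx = x[i] + (detail >> 1)  # update step (floor division by 2)
        d.append(detail)
        s.append(approx)
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order to recover the signal."""
    x = []
    for approx, detail in zip(s, d):
        even = approx - (detail >> 1)
        odd = detail + even
        x.extend([even, odd])
    return x
```

Because each lifting step overwrites one operand with a value that the inverse can reconstruct, a renderer can keep coefficient, intermediate, and pixel in the same memory unit, as the mixed cache described above does.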

  16. Validation of psoriatic arthritis diagnoses in electronic medical records using natural language processing

    PubMed Central

    Cai, Tianxi; Karlson, Elizabeth W.

    2013-01-01

    Objectives To test whether data extracted from full-text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted, and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing (NLP), 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operating characteristic (ROC) curve was used to identify the optimal algorithm, and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95% CI 55%–58%). Using a combination of coded data and NLP, the random forest algorithm reached a PPV of 90% (95% CI 86%–93%) at a sensitivity of 87% (95% CI 83%–91%) in the training data. The PPV was 93% (95% CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC curve (p < 0.001). Conclusions Using NLP with text notes from electronic medical records significantly improved the performance of the prediction algorithm. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955

  17. System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft

    NASA Technical Reports Server (NTRS)

    Pullen, Samuel P.; Parkinson, Bradford W.

    1994-01-01

    This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.

  18. Automatic planning of needle placement for robot-assisted percutaneous procedures.

    PubMed

    Belbachir, Esia; Golkar, Ehsan; Bayle, Bernard; Essert, Caroline

    2018-04-18

    Percutaneous procedures allow interventional radiologists to perform diagnoses or treatments guided by an imaging device, typically a computed tomography (CT) scanner with a high spatial resolution. To reduce exposure to radiations and improve accuracy, robotic assistance to needle insertion is considered in the case of X-ray guided procedures. We introduce a planning algorithm that computes a needle placement compatible with both the patient's anatomy and the accessibility of the robot within the scanner gantry. Our preoperative planning approach is based on inverse kinematics, fast collision detection, and bidirectional rapidly exploring random trees coupled with an efficient strategy of node addition. The algorithm computes the allowed needle entry zones over the patient's skin (accessibility map) from 3D models of the patient's anatomy, the environment (CT, bed), and the robot. The result includes the admissible robot joint path to target the prescribed internal point, through the entry point. A retrospective study was performed on 16 patients datasets in different conditions: without robot (WR) and with the robot on the left or the right side of the bed (RL/RR). We provide an accessibility map ensuring a collision-free path of the robot and allowing for a needle placement compatible with the patient's anatomy. The result is obtained in an average time of about 1 min, even in difficult cases. The accessibility maps of RL and RR covered about a half of the surface of WR map in average, which offers a variety of options to insert the needle with the robot. We also measured the average distance between the needle and major obstacles such as the vessels and found that RL and RR produced needle placements almost as safe as WR. The introduced planning method helped us prove that it is possible to use such a "general purpose" redundant manipulator equipped with a dedicated tool to perform percutaneous interventions in cluttered spaces like a CT gantry.

  19. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and the costs are assumed to be fuzzy-random variables. Bounds on the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, computing the recourse requires solving an infinite number of second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a fuzzy-random simulation technique to compute the recourse function, and we discuss the convergence of this simulation. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than those obtained using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithms, and tabu search.

  20. Random sequential adsorption of cubes

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
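    The random sequential adsorption protocol itself is simple to state: drop objects at random and accept a trial only if it overlaps nothing already placed. A minimal sketch, simplified to axis-aligned squares in a 2D box (the paper treats oriented cubes, and the names here are illustrative), might read:

```python
import random

def rsa_squares(side, box, attempts, rng):
    """Random sequential adsorption of axis-aligned squares in a square box.

    Squares of edge `side` are dropped at uniformly random positions; a
    trial is accepted only if it overlaps no previously placed square.
    Two equal axis-aligned squares overlap iff both coordinate offsets
    are smaller than the edge length.
    """
    placed = []
    for _ in range(attempts):
        x = rng.uniform(0, box - side)
        y = rng.uniform(0, box - side)
        if all(abs(x - px) >= side or abs(y - py) >= side for px, py in placed):
            placed.append((x, y))
    return placed
```

Saturation (the jamming limit studied in the abstract) corresponds to the regime where further attempts essentially never succeed; orientation sampling, the focus of the paper, adds a rotation draw and a more involved intersection test.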

  1. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator and by inaccuracies in computer function evaluation and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner, and a simple model was developed which explained the qualitative aspects of the errors.
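    The construction described can be sketched with the standard Cholesky approach: two independent standard normals are mixed so that the pair has exactly the requested means, standard deviations, and correlation. This is the textbook method, not the paper's FORTRAN routine, and the names are illustrative.

```python
import math
import random

def bivariate_normal_pair(mu1, mu2, s1, s2, rho, rng):
    """Draw one (X, Y) pair from a bivariate normal with correlation rho.

    Uses two independent standard normals z1, z2 and the Cholesky factor
    of the 2x2 correlation matrix: Y mixes rho*z1 with sqrt(1-rho^2)*z2.
    """
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

As the abstract notes, the construction is exact in theory; the only error sources in practice are the underlying uniform generator and floating-point evaluation of `gauss` and `sqrt`.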

  2. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    PubMed Central

    Foulley, Jean-Louis; Van Dyk, David A

    2000-01-01

    This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models. PMID:14736399

  3. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.

  4. A different Deutsch-Jozsa

    NASA Astrophysics Data System (ADS)

    Bera, Debajyoti

    2015-06-01

    One of the early achievements of quantum computing was demonstrated by Deutsch and Jozsa (Proc R Soc Lond A Math Phys Sci 439(1907):553, 1992) regarding classification of a particular type of Boolean functions. Their solution demonstrated an exponential speedup compared to classical approaches to the same problem; however, their solution was the only known quantum algorithm for that specific problem so far. This paper demonstrates another quantum algorithm for the same problem, with the same exponential advantage compared to classical algorithms. The novelty of this algorithm is the use of quantum amplitude amplification, a technique that is the key component of another celebrated quantum algorithm developed by Grover (Proceedings of the twenty-eighth annual ACM symposium on theory of computing, ACM Press, New York, 1996). A lower bound for randomized (classical) algorithms is also presented which establishes a sound gap between the effectiveness of our quantum algorithm and that of any randomized algorithm with similar efficiency.
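    For contrast with the quantum algorithms discussed, the best classical randomized strategy for the constant-vs-balanced promise problem is plain sampling: evaluate the function on a few random inputs and declare it constant only if every value agrees. A hedged sketch with illustrative names:

```python
import random

def probably_constant(f, n_bits, samples, rng):
    """Classical randomized test for the constant-vs-balanced promise.

    Evaluates f on `samples` uniformly random n_bit inputs. If two values
    ever differ, f is certainly not constant; otherwise 'constant' is
    reported, with error probability at most 2**-(samples - 1) when f is
    balanced (each extra sample matches the first with probability 1/2).
    """
    first = f(rng.getrandbits(n_bits))
    for _ in range(samples - 1):
        if f(rng.getrandbits(n_bits)) != first:
            return False  # a disagreement proves f is balanced
    return True
```

This makes the gap quantitative: the classical error shrinks only exponentially in the number of queries, whereas the Deutsch-Jozsa algorithm decides the promise with a single quantum query.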

  5. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    PubMed Central

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local-search factors are implemented. We developed algorithms that adapt the results of the local search back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
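    The random-key chromosome representation mentioned above decodes into a schedule by sorting: each job carries a key in [0, 1), and the job order is the order of ascending keys. A minimal sketch of this standard decoding step (not GAspLA itself):

```python
def decode_random_keys(keys):
    """Decode a random-key chromosome into a job permutation.

    Jobs are ordered by ascending key value; this is the standard
    random-key decoding used in genetic algorithms for sequencing
    problems, and any key vector decodes to a feasible permutation.
    """
    return sorted(range(len(keys)), key=lambda j: keys[j])
```

The appeal of the encoding is that crossover and mutation can act on the keys freely without ever producing an infeasible sequence; the difficulty the paper addresses is writing local-search improvements back into the keys with minimal relocation.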

  6. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.

  7. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following holds: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and application of high-speed X-ray camera technology.

  8. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    NASA Astrophysics Data System (ADS)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids from random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster; it obtains 50% of its chromosomes through deterministic calculation and 50% from randomly generated numbers. This study modifies the GenClust algorithm so that 100% of the chromosomes are obtained through deterministic calculation. The results provide performance comparisons, expressed as Mean Square Error, of centroid determination in the K-Means method using the original GenClust method, the modified GenClust method, and classic K-Means.
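    The sensitivity to centroid initialization that motivates GenClust can be seen in a toy sketch of Lloyd's algorithm that takes its initial centroids as input, so different initializations (random versus deterministic) can be compared by their resulting Mean Square Error. This is a generic 1D illustration, not the GenClust implementation.

```python
def kmeans_1d(points, centroids, iters=50):
    """Lloyd's K-Means in 1D with caller-supplied initial centroids.

    Alternates assignment (nearest centroid) and update (cluster mean),
    then returns the final centroids and the mean squared error, the
    quality measure used to compare initialization strategies.
    """
    centroids = list(centroids)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # empty clusters keep their previous centroid
        centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
    mse = sum(min((p - c) ** 2 for c in centroids) for p in points) / len(points)
    return centroids, mse
```

Running this with several initializations and keeping the lowest-MSE result mimics, in miniature, what the genetic search over centroid chromosomes automates.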

  9. SU-D-201-06: Random Walk Algorithm Seed Localization Parameters in Lung Positron Emission Tomography (PET) Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Asl, A Kamali; Geramifar, P

    2015-06-15

    Purpose: The objective of this study was to find the best seed localization parameters for applying the random walk algorithm to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and tumor delineation in these images is therefore a challenging task. The random walk algorithm, a graph-based image segmentation technique, has reliable robustness to image noise, and its fast computation and fast editing characteristics make it powerful for clinical purposes. We implemented the random walk algorithm using MATLAB code. Validation and verification of the algorithm were performed with a 4D-NCAT phantom with spherical lung lesions of different diameters from 20 to 90 mm (in incremental steps of 10 mm) and tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with pixel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels at different percentages of the maximum Standardized Uptake Value (SUVmax): at least (70%, 80%, 90%, and 100%) of SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) of SUVmax for background seeds. To investigate algorithm performance on clinical data, 19 patients with lung tumors were also studied. The contours resulting from the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% of SUVmax as foreground seeds and pixels up to 30% of SUVmax as background seeds. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation support its application for radiation treatment planning and diagnosis.
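    The winning seed rule (at least 70% of SUVmax for foreground, up to 30% for background) is a simple percent-of-maximum threshold. A tiny sketch of that selection step; the function name and the dict-of-pixel-values representation are illustrative, not the study's MATLAB code:

```python
def select_seeds(pixels, fg_frac=0.70, bg_frac=0.30):
    """Pick foreground/background seeds by percentage of the maximum value.

    Mirrors the thresholds found best in the study: pixels at or above
    fg_frac of the maximum become foreground seeds, pixels at or below
    bg_frac become background seeds, and everything in between is left
    for the random walker to label.
    """
    peak = max(pixels.values())
    fg = [p for p, v in pixels.items() if v >= fg_frac * peak]
    bg = [p for p, v in pixels.items() if v <= bg_frac * peak]
    return fg, bg
```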

  10. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed with the PSO algorithm because of its strong global searching ability and robustness to random initial values; however, the PSO algorithm's convergence rate around the global optimum is slow. When the change in the fitness function falls below a predefined value, the search is therefore switched to the LPM to accelerate the process. With the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the single PSO algorithm and the LPM in terms of global searching capability and convergence rate. Moreover, the PSO-LPM algorithm is also robust to random initial values.
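    The PSO half of the hybrid can be sketched in its textbook global-best form: each particle's velocity mixes inertia with pulls toward its personal best and the swarm best. This is a generic minimizer run here on a test function, not the trajectory-planning code; parameter values are illustrative.

```python
import random

def pso(f, dim, n_particles=30, iters=200, rng=None):
    """Minimal global-best particle swarm optimization.

    Standard velocity update: inertia (w) plus cognitive (c1) and social
    (c2) pulls with fresh random weights each step. Returns the best
    position found and its objective value.
    """
    rng = rng or random.Random()
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The slow final convergence visible even on a smooth test function is exactly the weakness the paper patches by handing PSO's best solution to the pseudospectral method as an initial guess.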

  11. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    NASA Astrophysics Data System (ADS)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus model (RANSAC). RANSAC is used to classify tracks, including pile-up, to remove uncorrelated noise hits, and to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
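The core RANSAC idea (hypothesize a model from a minimal random sample, score it by inliers, keep the best) can be sketched in a generic 2D line-fitting form. The `ransac_line` helper and the synthetic data are illustrative, not the AT-TPC implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def ransac_line(points, n_iter=200, tol=0.1):
    """Generic 2D RANSAC line fit: hypothesize a line from two random
    hits, count inliers by perpendicular distance, keep the best model."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        norm = np.hypot(x2 - x1, y2 - y1)
        if norm == 0:
            continue
        # perpendicular distance of every hit to the candidate line
        dist = np.abs((x2 - x1) * (points[:, 1] - y1)
                      - (y2 - y1) * (points[:, 0] - x1)) / norm
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic "track": 30 collinear hits plus 10 uncorrelated noise hits
t = np.linspace(0, 1, 30)
track = np.column_stack([t, 2 * t + 0.5])
noise = rng.uniform(0, 2, (10, 2))
hits = np.vstack([track, noise])
mask = ransac_line(hits)
```

In a TPC setting the same loop runs in 3D, and removing each found track's inliers and repeating is one common way to separate pile-up.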

  12. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
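Otsu's between-class variance, the objective the flower pollination search maximizes, can be sketched as follows. This is a generic implementation; the toy histogram and the exhaustive single-threshold search are illustrations only (metaheuristics take over when several thresholds make exhaustive search too expensive):

```python
import numpy as np

def otsu_variance(hist, thresholds):
    """Between-class variance of Otsu's criterion for a sorted list of
    thresholds; multi-threshold methods maximize this over tuples."""
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(len(hist))
    mu_total = (levels * hist).sum() / hist.sum()
    bounds = [0, *list(thresholds), len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mass = hist[lo:hi].sum()
        if mass == 0:
            continue
        w = mass / hist.sum()                      # class probability
        mu = (levels[lo:hi] * hist[lo:hi]).sum() / mass
        var += w * (mu - mu_total) ** 2
    return var

# bimodal toy histogram: one threshold, so exhaustive search is feasible
hist = [0, 10, 30, 10, 0, 0, 10, 30, 10, 0]
best_t = max(range(1, len(hist)), key=lambda t: otsu_variance(hist, [t]))
```

With k thresholds and L gray levels the search space grows combinatorially, which is exactly why the paper replaces exhaustive search with a population-based optimizer.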

  13. Spin-the-bottle Sort and Annealing Sort: Oblivious Sorting via Round-robin Random Comparisons

    PubMed Central

    Goodrich, Michael T.

    2013-01-01

    We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a temperature parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require Ω(n2 log n) expected time in order to succeed, and that in O(n2 log n) time this algorithm succeeds with high probability for any input. We also show there is a specification of Annealing sort that runs in O(n log n) time and succeeds with very high probability. PMID:24550575
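A simplified version of the temperature-bounded compare-exchange idea can be sketched as follows. The geometric cooling schedule here is a crude stand-in for the paper's carefully tuned annealing sequence, so the asymptotics differ, but the data-oblivious character (comparison positions never depend on the data) is preserved:

```python
import random

def annealing_sort(a, rounds_per_temp=None, seed=0):
    """Sketch of Annealing sort: data-oblivious compare-exchanges whose
    reach shrinks with a cooling 'temperature'.  Simplified schedule;
    the paper's precise temperature/repetition sequence is more involved."""
    rng = random.Random(seed)
    n = len(a)
    r = rounds_per_temp or max(1, n)
    temp = n
    while temp >= 1:
        for _ in range(r):
            for i in range(n - 1):
                # partner chosen at random within the current temperature
                j = min(n - 1, i + rng.randint(1, temp))
                if a[i] > a[j]:
                    a[i], a[j] = a[j], a[i]   # oblivious compare-exchange
        temp //= 2
    return a

rnd = random.Random(1)
data = list(range(50))
rnd.shuffle(data)
result = annealing_sort(data)
```

At temperature 1 the inner loop degenerates to repeated adjacent passes, which guarantees a sorted output in this simplified variant; the interesting part is how much ordering the higher temperatures achieve with long-range random comparisons first.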

  14. Multimedia transmission in MC-CDMA using adaptive subcarrier power allocation and CFO compensation

    NASA Astrophysics Data System (ADS)

    Chitra, S.; Kumaratharan, N.

    2018-02-01

    Multicarrier code division multiple access (MC-CDMA) is one of the most effective techniques in fourth-generation (4G) wireless technology, owing to its high data rate, high spectral efficiency and resistance to multipath fading. However, MC-CDMA systems are severely degraded by carrier frequency offset (CFO), which arises from Doppler shift and oscillator instabilities. CFO destroys the orthogonality among subcarriers and causes intercarrier interference (ICI). The water filling algorithm (WFA) is an efficient resource allocation algorithm for distributing power among subcarriers in time-dispersive channels, but the conventional WFA fails to consider the effect of CFO. To perform subcarrier power allocation with reduced CFO and to improve the capacity of the MC-CDMA system, a residual-CFO-compensated adaptive subcarrier power allocation algorithm is proposed in this paper. The proposed technique allocates power only to subcarriers with a high channel-to-noise power ratio. The performance of the proposed method is evaluated using random binary data and an image as source inputs. Simulation results show that the proposed modified WFA offers superior bit error rate performance and ICI reduction in both power allocation and image compression, for high-quality multimedia transmission in the presence of CFO and imperfect channel state information.
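The baseline the paper modifies is the classic water-filling allocation, which can be sketched as follows (generic textbook form, solved here by bisection on the water level; this is not the proposed CFO-compensated variant):

```python
import numpy as np

def water_filling(cnr, total_power):
    """Classic water-filling: power_k = max(0, mu - 1/cnr_k), with the
    water level mu chosen so that sum(power) == total_power."""
    inv = 1.0 / np.asarray(cnr, dtype=float)   # inverse channel-to-noise ratios
    lo, hi = 0.0, total_power + inv.max()      # mu is bracketed in [lo, hi]
    for _ in range(100):                       # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.clip(mu - inv, 0.0, None).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.clip(0.5 * (lo + hi) - inv, 0.0, None)

# three subcarriers: two good, one in a deep fade
p = water_filling([10.0, 5.0, 0.1], total_power=1.0)
```

Note how the faded subcarrier receives zero power: this is the "allocate only to subcarriers with high channel-to-noise ratio" behavior the abstract describes, before any CFO compensation is layered on top.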

  15. Design of an Efficient CAC for a Broadband DVB-S/DVB-RCS Satellite Access Network

    NASA Astrophysics Data System (ADS)

    Inzerilli, Tiziano; Montozzi, Simone

    2003-07-01

    This paper deals with efficient utilization of network resources in an advanced broadband satellite access system. It proposes a technique for admission control of IP streams with guaranteed QoS which does not interfere with the particular BoD (Bandwidth on Demand) algorithm that handles access to uplink bandwidth, an essential part of a DVB-RCS architecture. This feature of the admission control greatly simplifies its integration into the satellite network. The purpose of this admission control algorithm in particular is to suitably and dynamically configure the overall traffic control parameters, in the access terminal of the user and service segment, with a simple approach which does not introduce limitations and/or constraints on the BoD algorithm. Performance of the proposed algorithm is evaluated through Opnet simulations using an ad-hoc platform modeling DVB-based satellite access. The results presented in this paper were obtained within the SATIP6 project, which is sponsored within the 5th EU Research Programme, IST. The aims of the project are to evaluate and demonstrate key issues of the integration of satellite-based access networks into the Internet in order to support multimedia services over wide areas. The satellite link layer is based on DVB-S on the forward link and DVB-RCS on the return link. Adaptation and optimization of the DVB-RCS access standard to support QoS provision are central issues of the project. They are handled through an integration of Connection Admission Control (CAC), Traffic Shaping and Policing techniques.

  16. Employing canopy hyperspectral narrowband data and random forest algorithm to differentiate palmer amaranth from colored cotton

    USDA-ARS?s Scientific Manuscript database

    Palmer amaranth (Amaranthus palmeri S. Wats.) invasion negatively impacts cotton (Gossypium hirsutum L.) production systems throughout the United States. The objective of this study was to evaluate canopy hyperspectral narrowband data as input into the random forest machine learning algorithm to dis...

  17. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
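For contrast, the classical baseline being sped up is plain Monte Carlo mean estimation, whose standard error shrinks only as O(1/sqrt(n)); the quantum algorithm reaches the same accuracy with near-quadratically fewer samples. A generic classical sketch (`mc_estimate` is an illustrative name):

```python
import random

def mc_estimate(subroutine, n_samples, seed=0):
    """Classical Monte Carlo estimate of the mean output of a randomized
    subroutine with bounded variance.  Standard error ~ sigma / sqrt(n)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += subroutine(rng)
    return total / n_samples

# toy subroutine with bounded variance: indicator of a biased coin flip
est = mc_estimate(lambda rng: 1.0 if rng.random() < 0.3 else 0.0, 100_000)
```

To halve the error classically you need four times the samples; the quantum estimator described in the abstract needs roughly twice as many, which is the near-quadratic speedup.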

  18. Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem

    PubMed Central

    Wang, Hefeng; Wu, Lian-Ao

    2016-01-01

    An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834

  19. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  20. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.

  1. Breakthrough in current-in-plane tunneling measurement precision by application of multi-variable fitting algorithm.

    PubMed

    Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H

    2017-09-01

    We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.

  2. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE), which bypass the resource-intensive brute-force method, require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR-code) or color images are used as a source.
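The textbook DRPE scheme under attack applies two random phase masks, one in the input plane and one in the Fourier plane. A generic numerical sketch of DRPE itself (not the paper's brute-force attack):

```python
import numpy as np

rng = np.random.default_rng(7)

def drpe_encrypt(img, phi1, phi2):
    """Double Random Phase Encoding: phase mask phi1 in the input plane,
    phase mask phi2 in the Fourier plane (textbook 4f-system model)."""
    field = img * np.exp(1j * phi1)
    spectrum = np.fft.fft2(field) * np.exp(1j * phi2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phi1, phi2):
    """Inverse path: undo the Fourier-plane mask, then the input mask."""
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * phi2)
    return np.abs(np.fft.ifft2(spectrum) * np.exp(-1j * phi1))

img = rng.random((32, 32))
phi1, phi2 = rng.uniform(0, 2 * np.pi, (2, 32, 32))
recovered = drpe_decrypt(drpe_encrypt(img, phi1, phi2), phi1, phi2)
```

The two masks (phi1, phi2) are the key pair a brute-force attack must search, which is what makes exhaustive attacks resource-intensive and a priori knowledge about the source image so valuable.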

  3. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high speed decoding techniques for extended single and double error correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.

  4. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user's trajectory data; secondly, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroid; finally, noise is added to the polygon centroid by the differential privacy method, the noisy centroid replaces the protected point, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithms run fast, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
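The centroid-plus-noise step can be sketched with the standard Laplace mechanism. This is a simplified sketch: the centroid is taken as the plain vertex mean rather than the area-weighted centroid, and the sensitivity value is an assumed parameter, not one derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def private_centroid(polygon, epsilon, sensitivity=1.0):
    """Laplace mechanism applied to a polygon centroid.  Noise scale
    sensitivity/epsilon is the standard calibration for the Laplace
    mechanism; both values here are illustrative assumptions."""
    centroid = np.asarray(polygon, dtype=float).mean(axis=0)
    noise = rng.laplace(scale=sensitivity / epsilon, size=centroid.shape)
    return centroid + noise

# unit-square-like polygon around a protected point; epsilon is the
# privacy budget: smaller epsilon means more noise, stronger privacy
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
noisy = private_centroid(square, epsilon=1.0)
```

The released trajectory then contains `noisy` in place of the protected point, so the true location is hidden inside both the polygon construction and the calibrated noise.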

  5. Sample-Based Surface Coloring

    PubMed Central

    Bürger, Kai; Krüger, Jens; Westermann, Rüdiger

    2011-01-01

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392

  6. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    PubMed Central

    Ahmed, Afaz Uddin; Tariqul Islam, Mohammad; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed access femtocell network. ANN is used for user classification process and AP algorithm is used to optimize the ANN training process. AP selects the best possible training samples for faster ANN training cycle. The users are distinguished by using the difference of received signal strength in a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without AP algorithm takes 5 indoor users and 10 outdoor users to attain an error-free operation. While integrating AP algorithm with ANN, the system takes 60% less training samples reducing the training time up to 50%. This procedure makes the femtocell more effective for closed access operation. PMID:25133214

  7. A novel user classification method for femtocell network by using affinity propagation algorithm and artificial neural network.

    PubMed

    Ahmed, Afaz Uddin; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed access femtocell network. ANN is used for user classification process and AP algorithm is used to optimize the ANN training process. AP selects the best possible training samples for faster ANN training cycle. The users are distinguished by using the difference of received signal strength in a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without AP algorithm takes 5 indoor users and 10 outdoor users to attain an error-free operation. While integrating AP algorithm with ANN, the system takes 60% less training samples reducing the training time up to 50%. This procedure makes the femtocell more effective for closed access operation.

  8. Optimized random phase only holograms.

    PubMed

    Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto

    2018-02-15

    We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
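The underlying G-S iteration that produces a Fourier phase-only hologram can be sketched as follows. This is a generic Gerchberg-Saxton loop; the ORAP idea from the abstract is to run such a loop once per optical system and then reuse the optimized phase for new targets:

```python
import numpy as np

rng = np.random.default_rng(5)

def gs_phase_hologram(target, n_iter=50):
    """Generic Gerchberg-Saxton loop: alternate between enforcing the
    target amplitude in the image plane and the phase-only constraint
    in the Fourier (hologram) plane."""
    phase = rng.uniform(0, 2 * np.pi, target.shape)  # random starting phase
    for _ in range(n_iter):
        # Fourier plane: keep only the phase (phase-only hologram)
        hologram = np.exp(1j * np.angle(np.fft.fft2(target * np.exp(1j * phase))))
        # image plane: keep the phase, re-impose the target amplitude
        phase = np.angle(np.fft.ifft2(hologram))
    return hologram

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
holo = gs_phase_hologram(target)
```

The costly part is the iteration itself; the paper's contribution is amortizing it into a one-time ORAP per optical configuration instead of re-running it per target.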

  9. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. 
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
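The fully constrained corner of that null space, where both row and column totals are fixed exactly, corresponds to the classic checkerboard swap, which can be sketched as follows (a generic swap null model, not the Tuning Peg algorithm itself, which relaxes these constraints continuously):

```python
import random

def swap_randomize(matrix, n_swaps=1000, seed=0):
    """Classic checkerboard-swap null model: each accepted swap exchanges
    a 2x2 submatrix [[1,0],[0,1]] <-> [[0,1],[1,0]], which preserves all
    row and column totals exactly."""
    rng = random.Random(seed)
    m = [row[:] for row in matrix]
    rows, cols = len(m), len(m[0])
    for _ in range(n_swaps):
        r1, r2 = rng.sample(range(rows), 2)
        c1, c2 = rng.sample(range(cols), 2)
        if m[r1][c1] == m[r2][c2] == 1 and m[r1][c2] == m[r2][c1] == 0:
            m[r1][c1] = m[r2][c2] = 0
            m[r1][c2] = m[r2][c1] = 1
    return m

obs = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
rand_m = swap_randomize(obs)
```

The Tuning Peg idea is to let the randomized matrix drift away from these fixed marginals by a tunable amount per axis, tracing a path between this fully constrained model and fully unconstrained ones.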

  10. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm.

    PubMed

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-24

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map reflecting the visiting habits of the occupants is established through training with the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm refines the localization by fusing the accessibility map with the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization.
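The A-star refinement step can be sketched on a toy occupancy grid. This is a generic A* implementation with unit move costs; in the paper, the costs would instead be informed by the accessibility map:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = blocked).  Cell costs
    could be weighted by an accessibility map; here every free cell
    costs 1, with an admissible Manhattan heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Replacing the constant move cost of 1 with a cost inversely related to a cell's accessibility score would bias the search toward habitually visited routes, which is the fusion the abstract describes.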

  11. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm

    PubMed Central

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-01

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot services, appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' positions in indoor environments based on passive infrared (PIR) sensors, using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map reflecting the visiting habits of the occupants is established through training with the indoor environment and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm refines the localization by fusing the accessibility map with the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization. PMID:29364188

  12. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging at suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  13. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
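The voxel-sampling idea, estimating the objective's gradient from a random subset of its terms at each step, can be sketched on a least-squares surrogate (illustrative problem and parameters, not the IMRT dose objective):

```python
import numpy as np

rng = np.random.default_rng(9)

def sampled_descent(A, b, sample_frac=0.2, lr0=0.3, n_iter=500):
    """Steepest descent where each step estimates the gradient of
    ||Ax - b||^2 from a random subset of rows (the "voxels"), giving an
    unbiased but noisy gradient estimate."""
    n, d = A.shape
    x = np.zeros(d)
    k = max(1, int(sample_frac * n))
    for t in range(n_iter):
        idx = rng.choice(n, size=k, replace=False)      # sampled voxels
        grad = 2 * A[idx].T @ (A[idx] @ x - b[idx]) / k  # unbiased estimate
        x -= lr0 / (1 + 0.05 * t) * grad                # decaying step
    return x

A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_hat = sampled_descent(A, b)
```

Each step costs only a fraction of a full gradient evaluation, and resampling different voxels every step means the noise averages out over the run, which is the source of the speedup the abstract reports.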

  14. Parallel Algorithms for Groebner-Basis Reduction

    DTIC Science & Technology

    1987-09-25

    Technical report: Parallel Algorithms for Groebner-Basis Reduction, produced under the Productivity Engineering in the UNIX Environment effort. (Only fragments of the report documentation page survive in this record; no abstract is available.)

  15. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    NASA Astrophysics Data System (ADS)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed by the growth matrix. The index weights are calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can dynamically and effectively perform access selection in the heterogeneous wireless networks of SDG and reduce the network blocking probability.
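The entropy-weight step can be sketched as follows (a generic entropy-weight method on a toy decision matrix; the criteria and values shown are illustrative assumptions):

```python
import numpy as np

def entropy_weights(decision):
    """Entropy-weight method: criteria whose values vary more across the
    candidate networks carry more information and receive larger weights;
    a criterion that is identical everywhere gets weight zero."""
    X = np.asarray(decision, dtype=float)
    P = X / X.sum(axis=0)                      # normalize each criterion column
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(len(X))   # entropy per criterion
    w = (1 - e) / (1 - e).sum()                # divergence -> weights
    return w

# rows: candidate networks; columns: e.g. bandwidth, delay score, cost score
decision = [[10, 0.9, 3], [20, 0.8, 3], [90, 0.7, 3]]
w = entropy_weights(decision)
```

In the proposed algorithm these objective weights would then be adjusted by rough-set analysis before grey relational ranking of the candidate networks.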

  16. Parallel Algorithms for Switching Edges in Heterogeneous Graphs.

    PubMed

    Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-06-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
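A single sequential edge switch, the operation the parallel algorithms distribute, can be sketched as follows (generic sketch; the simplicity checks mirror the no-self-loop and no-parallel-edge constraints discussed above):

```python
import random

def edge_switch(edges, n_switches=100, seed=0):
    """Sequential edge-switch sketch: pick two edges (a,b) and (c,d),
    propose (a,d) and (c,b), and accept only if the graph stays simple.
    Every accepted switch preserves every vertex degree exactly."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    for _ in range(n_switches):
        (a, b), (c, d) = rng.sample(sorted(edge_set), 2)
        if a == d or c == b:                       # would create a self-loop
            continue
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if new1 in edge_set or new2 in edge_set:   # would create a parallel edge
            continue
        edge_set -= {(a, b), (c, d)}
        edge_set |= {new1, new2}
    return sorted(edge_set)

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
switched = edge_switch(cycle)
```

The parallel difficulty is visible even here: two concurrent switches may propose edges touching the same vertices or the same edge slots, so the invariants checked inside the loop must be enforced across processors.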

  17. Parallel Algorithms for Switching Edges in Heterogeneous Graphs☆

    PubMed Central

    Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav

    2017-01-01

    An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with that of the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors. PMID:28757680

  18. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save patients' lives. Medical datasets typically include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency; it requires the elimination of inappropriate and repeated data from the dataset before final diagnosis, which can be done using any of the feature selection algorithms available in data mining. Feature selection is a vital step for increasing classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection that eliminates irrelevant features from the original dataset. The bat algorithm was modified using simple random sampling to select random instances from the dataset; ranking against the global best features then identifies the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to evaluate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of the Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).

  19. Optical network unit placement in Fiber-Wireless (FiWi) access network by Moth-Flame optimization algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Puja; Prakash, Shashi

    2017-07-01

    Hybrid wireless-optical broadband access network (WOBAN), or Fiber-Wireless (FiWi), is the integration of a wireless access network and an optical network. This hybrid multi-domain network adopts the advantages of the wireless and optical domains and serves the demands of technology-savvy users. FiWi is cost-effective, robust, flexible, high-capacity, reliable and self-organizing. Solving the Optical Network Unit (ONU) placement problem in FiWi simplifies the network design and enhances performance in terms of cost efficiency and increased throughput. Several individual-based algorithms, such as Simulated Annealing (SA) and Tabu Search, have been suggested for ONU placement, but these algorithms suffer from premature convergence (trapping in a local optimum). The present work undertakes the deployment of FiWi and proposes a novel nature-inspired heuristic paradigm, the Moth-Flame Optimization (MFO) algorithm, for the placement of multiple optical network units. MFO is a population-based algorithm, and population-based algorithms are better at avoiding local optima. The simulation results are compared with the existing Greedy and Simulated Annealing algorithms for optimizing the positions of ONUs. To the best of our knowledge, the MFO algorithm has been used for the first time in this domain, and it provides very promising and competitive results: MFO converges faster than the existing Greedy and SA strategies and returns a lower value of the overall cost function. The performance of the MFO algorithm has been analyzed by varying the 'b' parameter. The results also exhibit the dependence of the objective function on the distribution of wireless users.
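    For illustration, a minimal sketch of the moth-flame spiral update on a toy quadratic cost (a textbook-style MFO with an invented function name and parameters, not the authors' ONU placement formulation): moths fly toward flames, the best solutions found so far, along the logarithmic spiral M' = F + D * exp(b*t) * cos(2*pi*t), where D = |F - M| and the 'b' parameter shapes the spiral.

```python
import math
import random

def mfo_minimize(cost, dim, n_moths=20, iters=200, lo=-10.0, hi=10.0, b=1.0, seed=1):
    """Toy Moth-Flame Optimization: moths spiral around flames (the best
    solutions in the current population) via the logarithmic-spiral update
    M' = F + D * exp(b*t) * cos(2*pi*t), with D = |F - M| per dimension."""
    rng = random.Random(seed)
    moths = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_moths)]
    best = min(moths, key=cost)[:]
    for it in range(iters):
        flames = [m[:] for m in sorted(moths, key=cost)]  # copies avoid aliasing
        if cost(flames[0]) < cost(best):
            best = flames[0][:]
        # The number of flames shrinks linearly over the iterations.
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / iters))
        for i, m in enumerate(moths):
            f = flames[min(i, n_flames - 1)]  # flame assigned to this moth
            for d in range(dim):
                dist = abs(f[d] - m[d])
                t = rng.uniform(-1.0, 1.0)
                m[d] = f[d] + dist * math.exp(b * t) * math.cos(2.0 * math.pi * t)
                m[d] = min(hi, max(lo, m[d]))  # clamp to the search box
    final = min(moths, key=cost)
    return final if cost(final) < cost(best) else best

best = mfo_minimize(lambda v: sum(x * x for x in v), dim=2)
```

    Because a moth assigned to its own position has D = 0, the best solutions are not perturbed, and the shrinking flame count concentrates the population around the best flame late in the run.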

  20. Relaxation of Distributed Data Aggregation for Underwater Acoustic Sensor Networks

    DTIC Science & Technology

    2014-03-31

    Excerpts (table of contents: Section 3.1, Gossip algorithms for distributed averaging; Section 3.2, Distributed particle filtering): ...algorithm that had direct access to all of the measurements. We use gossip algorithms (discussed in Section 3.1) to diffuse information across the network. We begin by discussing gossip algorithms, which we use to synchronize and spread information.

  1. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications that require high-performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelizing these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the sequential data access and achieve high parallelism, we develop new parallel algorithms for both DBSCAN and OPTICS, designed using graph-algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups of up to 27.5 on 40 cores on a shared-memory architecture and up to 5,765 using 8,192 cores on a distributed-memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results of comparable quality to the classical algorithms.
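    The connected-components primitive that the parallel DBSCAN builds on can be illustrated with a plain sequential union-find sketch (the distributed version in the paper is more involved; this toy version only shows the merging idea that lets density-connected regions found by different processors be unified):

```python
def find(parent, x):
    """Find the root of x, compressing the path as we go (path halving)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    """Merge the components containing a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def connected_components(n, edges):
    """Label vertices 0..n-1 by the root of their component."""
    parent = list(range(n))
    for a, b in edges:
        union(parent, a, b)
    return [find(parent, v) for v in range(n)]

# Vertices 0-1-2 form one component, 4-5 another, and 3 is isolated.
labels = connected_components(6, [(0, 1), (1, 2), (4, 5)])
```

    In the DBSCAN analogy, each edge corresponds to a pair of density-reachable core points, so merging components merges clusters.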

  2. Seismic noise attenuation using an online subspace tracking algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank-based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear-algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than traditional truncated singular value decomposition (TSVD) based subspace tracking algorithms. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; specifically, it outperforms the TSVD-based singular spectrum analysis method by leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.

  3. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area; a null hypothesis based on random assignment of predictions achieves eight such successes in only 2.87% of trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would be achieved in 53% of random trials under the null hypothesis.

  4. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads, and several of the results indicated that random policies are as effective as more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.

  5. Studies of the DIII-D disruption database using Machine Learning algorithms

    NASA Astrophysics Data System (ADS)

    Rea, Cristina; Granetz, Robert; Meneghini, Orso

    2017-10-01

    A Random Forests machine learning algorithm, trained on a large database of both disruptive and non-disruptive DIII-D discharges, predicts disruptive behavior in DIII-D with about 90% accuracy. Several algorithms have been tested, and Random Forests was found superior in performance for this particular task. Over 40 plasma parameters are included in the database, with data for each parameter taken from 500k time slices. We focused on a subset of non-dimensional plasma parameters deemed to be good predictors based on physics considerations. Both binary (disruptive/non-disruptive) and multi-label (labeled by the elapsed time before disruption) classification problems are investigated. The Random Forests algorithm provides insight into the available dataset by ranking the relative importance of the input features: q95 and the Greenwald density fraction (n/nG) are found to be the most relevant parameters for discriminating between DIII-D disruptive and non-disruptive discharges. A comparison with the Gradient Boosted Trees algorithm is shown, and first results from the application of regression algorithms are presented. Work supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0014264 and DE-FG02-95ER54309.

  6. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then refines the initial codebook built from the representative vectors supplied by the PCA stage. The PCA performs an orthogonal transformation that converts a set of potentially correlated variables into a set of linearly uncorrelated variables. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
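    The LBG refinement stage is essentially a k-means-style iteration between nearest-codeword assignment and centroid update. A minimal sketch with a deterministic initial codebook (an illustration only; the paper seeds LBG from PCA-projected groups, which is not reproduced here):

```python
def lbg_codebook(vectors, k, iters=20):
    """LBG refinement: assign each training vector to its nearest codeword,
    then move every codeword to the centroid of its group; repeat."""
    codebook = [list(v) for v in vectors[:k]]  # deterministic initial codebook
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            # nearest codeword by squared Euclidean distance
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            groups[j].append(v)
        for i, grp in enumerate(groups):
            if grp:  # an empty cell keeps its previous codeword
                codebook[i] = [sum(col) / len(grp) for col in zip(*grp)]
    return codebook

# Two well-separated clusters of training vectors.
training = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
codebook = sorted(lbg_codebook(training, k=2))
```

    On this toy data the two codewords converge to the cluster centroids (1/3, 1/3) and (31/3, 31/3); a better initial codebook (as the PCA grouping provides) mainly reduces the number of iterations and the risk of poor local minima.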

  7. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs.

  8. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective function values), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference is that the SA algorithm follows a single path, or trajectory, in parameter space from the starting point to the globally optimal solution, while the RBSA algorithm takes many trajectories; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
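The conventional SA loop described above can be sketched as follows (an illustrative one-dimensional objective and annealing schedule invented for the sketch, not the RBSA implementation): worse configurations are accepted with probability exp(-delta/T), and both the temperature and the region from which new configurations are drawn shrink over time.

```python
import math
import random

def simulated_annealing(objective, lo, hi, iters=5000, t0=1.0, seed=3):
    """Conventional SA: start from a random configuration, always accept
    improvements, accept worse configurations with probability
    exp(-delta/T), and shrink the temperature and the proposal radius."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = objective(x)
    best_x, best_f = x, fx
    radius = (hi - lo) / 2.0
    for i in range(iters):
        temp = t0 * (1.0 - i / iters) + 1e-9  # linear annealing schedule
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        fc = objective(cand)
        # Metropolis acceptance rule
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        radius *= 0.999  # the selectable region shrinks as optimization continues
    return best_x, best_f

# Multimodal 1-D objective: quadratic bowl plus oscillation (global minimum < 0).
x_best, f_best = simulated_annealing(lambda x: (x - 2.0) ** 2 + math.sin(8.0 * x),
                                     -10.0, 10.0)
```

    RBSA differs by spawning many such trajectories in promising regions instead of following a single one.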

  9. Greedy Gossip With Eavesdropping

    NASA Astrophysics Data System (ADS)

    Ustebay, Deniz; Oreshkin, Boris N.; Coates, Mark J.; Rabbat, Michael G.

    2010-07-01

    This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
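    The greedy pairwise-averaging step of GGE can be sketched on a small ring topology (a toy shared-memory simulation; the real protocol relies on wireless broadcast and eavesdropping to learn neighbor values, which is abstracted away here):

```python
import random

def greedy_gossip(values, neighbors, rounds=20000, seed=5):
    """GGE sketch: a randomly activated node gossips with the neighbor whose
    value differs most from its own (values known from overheard broadcasts),
    and the pair replaces both values with their average."""
    x = [float(v) for v in values]
    rng = random.Random(seed)
    for _ in range(rounds):
        i = rng.randrange(len(x))
        j = max(neighbors[i], key=lambda k: abs(x[k] - x[i]))  # greedy choice
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# Ring of 8 nodes with initial values 0..7; the consensus value is the mean 3.5.
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
final = greedy_gossip(range(8), ring)
```

    Each pairwise average preserves the network-wide sum, so the values can only converge to the true average; the greedy neighbor choice is what accelerates convergence relative to uniform random gossip.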

  10. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    NASA Astrophysics Data System (ADS)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and PBR algorithms have been shown to give better-quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms; the modified algorithms are shown to give better-quality pictures than the original PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute-intensive, and few attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs.
We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have also executed the algorithms on a 4-node IBM SP2 parallel computer; the execution time on an EH(3,1) is better than that of the 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.

  11. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which makes the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously with a key that is easily distributed, stored or memorized. The input image is divided into four blocks to compress and encrypt, and the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map, and the random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, along with its acceptable compression performance.
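    The key-controlled construction can be illustrated as follows: a logistic map seeded by the key generates the first row of a circulant matrix, so the whole measurement matrix is reproducible from the small key (x0, mu) rather than being stored in full. This is a simplified sketch of the idea, not the paper's exact construction:

```python
def logistic_sequence(x0, mu, n, burn_in=100):
    """Iterate the logistic map x <- mu*x*(1-x). The chaotic orbit is fully
    determined by the small key (x0, mu), so only the key must be shared."""
    x = x0
    for _ in range(burn_in):  # discard transients
        x = mu * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def circulant(first_row):
    """Circulant matrix: each row is the previous one shifted right by one."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

# Key-controlled first row -> key-controlled 8x8 measurement matrix (toy size).
key_x0, key_mu = 0.3579, 3.99
phi = circulant(logistic_sequence(key_x0, key_mu, 8))
```

    The sensitivity of the chaotic map means a slightly different key yields a completely different matrix, which is what gives the scheme its encryption property.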

  12. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During microscanning, a controlled vibrating mirror is typically used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image compensated for known imager-system blur while simultaneously estimating the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on currently available high-performance processors, because the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm significantly reduces the computational burden compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.

  13. Speeding up image quality improvement in random phase-free holograms using ringing artifact characteristics.

    PubMed

    Nagahama, Yuki; Shimobaba, Tomoyoshi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-05-01

    A holographic projector utilizes holography techniques. However, there are several barriers to realizing holographic projections. One is deterioration of hologram image quality caused by speckle noise and ringing artifacts. The combination of the random phase-free method and the Gerchberg-Saxton (GS) algorithm has improved the image quality of holograms. However, the GS algorithm requires significant computation time. We propose faster methods for image quality improvement of random phase-free holograms using the characteristics of ringing artifacts.

  14. Multi-label spacecraft electrical signal classification method based on DBN and random forest

    PubMed Central

    Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng

    2017-01-01

    Spacecraft electrical signal characteristic data contain a large amount of data with high-dimensional features, high computational complexity, and low identification rates, which causes great difficulty in fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed approach mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. First, wavelet denoising is used to pre-process the data. Second, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and is compared with other algorithms. The experimental results show that, compared with other algorithms, the proposed method achieves excellent accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data. PMID:28486479

  15. Multi-label spacecraft electrical signal classification method based on DBN and random forest.

    PubMed

    Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng

    2017-01-01

    Spacecraft electrical signal characteristic data contain a large amount of data with high-dimensional features, high computational complexity, and low identification rates, which causes great difficulty in fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed approach mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. First, wavelet denoising is used to pre-process the data. Second, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and is compared with other algorithms. The experimental results show that, compared with other algorithms, the proposed method achieves excellent accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data.

  16. Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation

    PubMed Central

    Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan

    2013-01-01

    The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
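    As a concrete illustration of randomized distributed data aggregation in the same family, the classic push-sum protocol can be sketched as below (this is the well-known push-sum scheme, not the authors' push-flow algorithm; it shows how sum/weight pairs pushed to random peers converge to the global average that the orthogonalization method is built on):

```python
import random

def push_sum(values, rounds=100, seed=11):
    """Push-sum averaging: each node keeps a (sum, weight) pair; every round
    it keeps half and pushes half to a uniformly random peer. Each node's
    ratio sum/weight converges to the global average."""
    n = len(values)
    rng = random.Random(seed)
    s = [float(v) for v in values]
    w = [1.0] * n
    for _ in range(rounds):
        next_s = [0.0] * n
        next_w = [0.0] * n
        for i in range(n):
            target = rng.randrange(n)  # uniformly random peer (may be itself)
            next_s[i] += s[i] / 2.0
            next_w[i] += w[i] / 2.0
            next_s[target] += s[i] / 2.0
            next_w[target] += w[i] / 2.0
        s, w = next_s, next_w
    return [si / wi for si, wi in zip(s, w)]

estimates = push_sum([10.0, 0.0, 4.0, 2.0])  # true average is 4.0
```

    Mass conservation (total sum and total weight never change) is the invariant that makes the ratio converge to the correct average; the paper's push-flow algorithm adds the resilience to node failures that plain schemes like this one lack.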

  17. DMSP SSJ4 Data Restoration, Classification, and On-Line Data Access

    NASA Technical Reports Server (NTRS)

    Wing, Simon; Bredekamp, Joseph H. (Technical Monitor)

    2000-01-01

    Compress and clean raw data files for permanent storage: we have identified various error conditions/types and developed algorithms to remove these errors/noises, including the more complicated noise in the newer data sets (status = 100% complete). Internet access to the compacted raw data: it is now possible to access the raw data via our web site, http://www.jhuapl.edu/Aurora/index.html. The software to read and plot the compacted raw data is also available from the same web site, so users can download the raw data and read, plot, or manipulate it as they wish on their own computers; users are also able to access the cleaned data sets. Internet access to the color spectrograms: this task has also been completed, and the spectrograms can be accessed from the web site mentioned above. Improve the particle precipitation region classification: the algorithm for this task has been developed and implemented, and as a result the accuracies improved; the web site now routinely distributes the results of applying the new algorithm to the cleaned data set. Mark the classification regions on the spectrograms: the software to mark the classification regions in the spectrograms has been completed and is also available from our web site.

  18. 75 FR 14467 - In the Matter of: Certain Dynamic Random Access Memory Semiconductors and Products Containing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... Access Memory Semiconductors and Products Containing Same, Including Memory Modules; Notice of... the sale within the United States after importation of certain dynamic random access memory semiconductors and products containing same, including memory modules, by reason of infringement of certain...

  19. Random Walk Quantum Clustering Algorithm Based on Space

    NASA Astrophysics Data System (ADS)

    Xiao, Shufen; Dong, Yumin; Ma, Hongyang

    2018-01-01

    In a random quantum walk, which is a quantum simulation of the classical walk, data points interact when selecting the appropriate walk strategy by taking advantage of quantum-entanglement features; thus, the results obtained with the quantum walk differ from those obtained with the classical walk. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and shown to be superior to existing clustering algorithms, namely Kmeans, PCA + Kmeans, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed, and specific suggestions are provided.

  20. An improved genetic algorithm and its application in the TSP problem

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Qin, Jinlei

    2011-12-01

    The paper first reviews the concept of genetic algorithms and the state of current research in detail. On this basis, the simple genetic algorithm and an improved version are described and applied to an instance of the TSP, where the advantage of genetic algorithms in solving NP-hard problems is clearly shown. In addition, the partially matched crossover operator is extended into an extended crossover operator to improve efficiency when solving the TSP: crossover can be performed between random positions of two randomly chosen individuals, without being restricted by position on the chromosome. Finally, a nine-city TSP is solved using the improved genetic algorithm with the extended crossover method; the solution process is much more efficient and the optimal solution is found much faster.
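
The extended crossover described above builds on the partially matched crossover (PMX). As a hedged illustration of the underlying idea (a minimal textbook PMX between two tour permutations, not the authors' exact extended operator):

```python
import random

def pmx_crossover(p1, p2):
    """Partially matched crossover (PMX) between two tour permutations.
    Copies a random segment from p1, then fills the remaining positions
    from p2, resolving conflicts through the segment's gene mapping."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    for i in list(range(a)) + list(range(b, n)):
        gene = p2[i]
        # follow the p1 -> p2 mapping until the gene is conflict-free
        while gene in child[a:b]:
            gene = p2[p1.index(gene)]
        child[i] = gene
    return child
```

The result is always a valid permutation, which is the property that makes PMX suitable for tour representations.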

  1. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches.

    PubMed

    Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is highly important for improving the expected poor model results when predictor-response correlations are low. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms: random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the predictive performance of all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors by their individual performance was outperformed by the two procedures that accounted for predictor interaction.
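
The evaluation protocol used here (five repetitions of tenfold cross-validation) can be sketched generically. The split generator below is an illustrative sketch of that protocol, not the authors' code:

```python
import random

def repeated_kfold(n, k=10, repeats=5, seed=0):
    """Yield (train, test) index lists for `repeats` repetitions
    of k-fold cross-validation over n samples."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)                       # new random partition each repeat
        folds = [idx[i::k] for i in range(k)]  # k disjoint folds
        for f in range(k):
            test = set(folds[f])
            train = [i for i in idx if i not in test]
            yield train, sorted(test)
```

Comparing learners on identical splits, as done in the study, removes split-to-split variance from the comparison.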

  2. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    NASA Astrophysics Data System (ADS)

    Mahmoud Al-Qudah, Dua'a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    Web proxy caching reduces response time by storing copies of pages between the client and server sides: if a requested page is cached in the proxy, there is no need to access the server. Because cache is limited in size and expensive compared to other storage, a cache replacement algorithm determines which page to evict when the cache is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and the Randomized Policy may discard important pages just before they are used, and cannot be well optimized because intelligently evicting a page requires a decision before replacement. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifiers' prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded NB, and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43%, and 0.22% over using NB with all features, using BFS, and using IWSS embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91%, and 0.04% over using the J48 classifier with all features, using IWSS embedded NB, and using BFS, respectively. Meanwhile, IWSS embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO do, reducing the computation time of NB by 0.1383 and of J48 by 2.998.
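
For reference, the conventional LRU policy contrasted above can be sketched with an ordered dictionary. This is an illustrative baseline only, unrelated to the proposed classifier-based scheme:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal Least Recently Used cache: on overflow, evict the entry
    that has gone longest without access."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def get(self, key):
        if key not in self.pages:
            return None
        self.pages.move_to_end(key)  # mark as most recently used
        return self.pages[key]

    def put(self, key, value):
        if key in self.pages:
            self.pages.move_to_end(key)
        self.pages[key] = value
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
```

The weakness the abstract points out is visible here: eviction looks only at recency, never at whether the evicted page is about to be requested.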

  3. Grouper: A Compact, Streamable Triangle Mesh Data Structure.

    PubMed

    Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek

    2013-05-08

    We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access.

  4. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. 
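
The supervised cross-validation idea, holding out entire known subtypes rather than random folds, can be sketched as follows. The triple format is a hypothetical convenience for illustration, not the benchmark's actual schema:

```python
def leave_subtype_out(samples):
    """samples: list of (features, class_label, subtype) triples.
    Yield (train, test) index lists, holding out one whole subtype at a
    time so the test set contains only subtypes unseen during training."""
    for held_out in sorted({s[2] for s in samples}):
        test = [i for i, s in enumerate(samples) if s[2] == held_out]
        train = [i for i, s in enumerate(samples) if s[2] != held_out]
        yield train, test
```

Unlike random k-fold splits, every test sample here belongs to a subtype absent from the training set, which is what yields the lower, more realistic performance estimates the abstract reports.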

  5. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
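
The compositing step, weighting radiatively consistent database profiles by their likelihood given the observed radiances, can be sketched in a simplified scalar form. This sketch assumes independent channels with a single common error sigma; the operational algorithm uses a full error covariance:

```python
import math

def bayesian_composite(obs, database, sigma):
    """Weight each database entry (simulated radiances, rain rate) by the
    Gaussian likelihood of its radiances given the observation, then
    return the weighted mean rain rate."""
    num = 0.0
    den = 0.0
    for radiances, rain_rate in database:
        d2 = sum((o - r) ** 2 for o, r in zip(obs, radiances))
        w = math.exp(-0.5 * d2 / sigma ** 2)
        num += w * rain_rate
        den += w
    return num / den if den > 0 else None
```

When the observation closely matches one simulated profile, that profile dominates the composite; broad matches instead average over many consistent profiles, which is the source of the low/high bias trend noted above.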

  6. A registry-based randomized trial comparing radial and femoral approaches in women undergoing percutaneous coronary intervention: the SAFE-PCI for Women (Study of Access Site for Enhancement of PCI for Women) trial.

    PubMed

    Rao, Sunil V; Hess, Connie N; Barham, Britt; Aberle, Laura H; Anstrom, Kevin J; Patel, Tejan B; Jorgensen, Jesse P; Mazzaferri, Ernest L; Jolly, Sanjit S; Jacobs, Alice; Newby, L Kristin; Gibson, C Michael; Kong, David F; Mehran, Roxana; Waksman, Ron; Gilchrist, Ian C; McCourt, Brian J; Messenger, John C; Peterson, Eric D; Harrington, Robert A; Krucoff, Mitchell W

    2014-08-01

    This study sought to determine the effect of radial access on outcomes in women undergoing percutaneous coronary intervention (PCI) using a registry-based randomized trial. Women are at increased risk of bleeding and vascular complications after PCI. The role of radial access in women is unclear. Women undergoing cardiac catheterization or PCI were randomized to radial or femoral arterial access. Data from the CathPCI Registry and trial-specific data were merged into a final study database. The primary efficacy endpoint was Bleeding Academic Research Consortium type 2, 3, or 5 bleeding or vascular complications requiring intervention. The primary feasibility endpoint was access site crossover. The primary analysis cohort was the subgroup undergoing PCI; sensitivity analyses were conducted in the total randomized population. The trial was stopped early for a lower than expected event rate. A total of 1,787 women (691 undergoing PCI) were randomized at 60 sites. There was no significant difference in the primary efficacy endpoint between radial or femoral access among women undergoing PCI (radial 1.2% vs. 2.9% femoral, odds ratio [OR]: 0.39; 95% confidence interval [CI]: 0.12 to 1.27); among women undergoing cardiac catheterization or PCI, radial access significantly reduced bleeding and vascular complications (0.6% vs. 1.7%; OR: 0.32; 95% CI: 0.12 to 0.90). Access site crossover was significantly higher among women assigned to radial access (PCI cohort: 6.1% vs. 1.7%; OR: 3.65; 95% CI: 1.45 to 9.17); total randomized cohort: (6.7% vs. 1.9%; OR: 3.70; 95% CI: 2.14 to 6.40). More women preferred radial access. In this pragmatic trial, which was terminated early, the radial approach did not significantly reduce bleeding or vascular complications in women undergoing PCI. Access site crossover occurred more often in women assigned to radial access. (SAFE-PCI for Women; NCT01406236). Copyright © 2014 American College of Cardiology Foundation. 
Published by Elsevier Inc. All rights reserved.

  7. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from [Formula: see text] images and then adapted to patient-specific clinical CT scans. As the accuracy of the model is dependent on the precision of the original segmentation, it is extremely important to have accurate [Formula: see text] segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo [Formula: see text] images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. We tested the proposed approach in ten [Formula: see text] data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy results due to the probability density model constituted by the region term and shape prior information weighed by a confidence map. The weighted combination of the distance-based shape prior with a region term into random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.

  8. A Toolbox for Ab Initio 3-D Reconstructions in Single-particle Electron Microscopy

    PubMed Central

    Voss, Neil R; Lyumkis, Dmitry; Cheng, Anchi; Lau, Pick-Wei; Mulder, Anke; Lander, Gabriel C; Brignole, Edward J; Fellmann, Denis; Irving, Christopher; Jacovetty, Erica L; Leung, Albert; Pulokas, James; Quispe, Joel D; Winkler, Hanspeter; Yoshioka, Craig; Carragher, Bridget; Potter, Clinton S

    2010-01-01

    Structure determination of a novel macromolecular complex via single-particle electron microscopy depends upon overcoming the challenge of establishing a reliable 3-D reconstruction using only 2-D images. There are a variety of strategies that deal with this issue, but not all of them are readily accessible and straightforward to use. We have developed a “toolbox” of ab initio reconstruction techniques that provide several options for calculating 3-D volumes in an easily managed and tightly controlled work-flow that adheres to standard conventions and formats. This toolbox is designed to streamline the reconstruction process by removing the necessity for bookkeeping, while facilitating transparent data transfer between different software packages. It currently includes procedures for calculating ab initio reconstructions via random or orthogonal tilt geometry, tomograms, and common lines, all of which have been tested using the 50S ribosomal subunit. Our goal is that the accessibility of multiple independent reconstruction algorithms via this toolbox will improve the ease with which models can be generated, and provide a means of evaluating the confidence and reliability of the final reconstructed map. PMID:20018246

  9. Standardizing Interfaces for External Access to Data and Processing for the NASA Ozone Product Evaluation and Test Element (PEATE)

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt A.; Fleig, Albert J.

    2008-01-01

    NASA's traditional science data processing systems have focused on specific missions, providing data access, processing and services to the funded science teams of those specific missions. Recently NASA has been modifying this stance, changing the focus from Missions to Measurements. Where a specific Mission has a discrete beginning and end, the Measurement considers long-term data continuity across multiple missions. Total Column Ozone, a critical measurement of atmospheric composition, has been monitored for decades on a series of Total Ozone Mapping Spectrometer (TOMS) instruments. Some important European missions also monitor ozone, including the Global Ozone Monitoring Experiment (GOME) and SCIAMACHY. With the U.S./European cooperative launch of the Dutch Ozone Monitoring Instrument (OMI) on NASA's Aura satellite, and the GOME-2 instrument on MetOp, the ozone monitoring record has been further extended. In conjunction with the U.S. Department of Defense (DoD) and the National Oceanic and Atmospheric Administration (NOAA), NASA is now preparing to evaluate data and algorithms for the next-generation Ozone Mapping and Profiler Suite (OMPS), which will launch on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) in 2010. NASA is constructing the Science Data Segment (SDS), which comprises several elements to evaluate the various NPP data products and algorithms. The NPP SDS Ozone Product Evaluation and Test Element (PEATE) will build on the heritage of the TOMS and OMI mission-based processing systems. The overall measurement-based system that will encompass these efforts is the Atmospheric Composition Processing System (ACPS). We have extended the system to include access to publicly available data sets from other instruments where feasible, including non-NASA missions as appropriate.
The heritage system was largely monolithic, providing a very controlled processing flow from the ingest of satellite data to the ultimate archive of specific operational data products. The ACPS allows more open access with standard protocols including HTTP, SOAP/XML, RSS and various REST incarnations. External entities can be granted access to various modules within the system, including an extended data archive, metadata searching, production planning and processing. Data access is provided with very fine-grained access control. It is possible to easily designate certain datasets as being available to the public, restricted to groups of researchers, or limited strictly to the originator. This can be used, for example, to release one's best validated data to the public, but restrict the "new version" of data processed with a new, unproven algorithm until it is ready. Similarly, the system can provide access to algorithms, both as modifiable source code (where possible) and fully integrated executable Algorithm Plugin Packages (APPs). This enables researchers to download publicly released versions of the processing algorithms and easily reproduce the processing remotely, while interacting with the ACPS. The algorithms can be modified, allowing better experimentation and rapid improvement. The modified algorithms can be easily integrated back into the production system for large-scale bulk processing to evaluate improvements. The system includes complete provenance tracking of algorithms, data and the entire processing environment. The origin of any data or algorithms is recorded, and the entire history of the processing chains is stored so that a researcher can understand the entire data flow. Provenance is captured in a form suitable for the system to guarantee scientific reproducibility of any data product it distributes, even in cases where the physical data products themselves have been deleted due to space constraints. 
We are currently working on Semantic Web ontologies for representing the various provenance information. A new web site focused on consolidating information about the measurement, processing system, and data access has been established to encourage interaction with the overall scientific community. We will describe the system, its data processing capabilities, and the methods the community can use to interact with its standard interfaces.

  10. A comparative study of controlled random search algorithms with application to inverse aerofoil design

    NASA Astrophysics Data System (ADS)

    Manzanares-Filho, N.; Albuquerque, R. B. F.; Sousa, B. S.; Santos, L. G. C.

    2018-06-01

    This article presents a comparative study of some versions of the controlled random search algorithm (CRSA) in global optimization problems. The basic CRSA, originally proposed by Price in 1977 and improved by Ali et al. in 1997, is taken as a starting point. Then, some new modifications are proposed to improve the efficiency and reliability of this global optimization technique. The performance of the algorithms is assessed using traditional benchmark test problems commonly invoked in the literature. This comparative study points out the key features of the modified algorithm. Finally, a comparison is also made in a practical engineering application, namely the inverse aerofoil shape design.
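
Price's basic CRSA keeps a population, reflects a randomly chosen point through the centroid of a random simplex, and replaces the current worst point on improvement. The sketch below illustrates that basic scheme only; the variants compared in the article add secondary trial points and local refinements omitted here:

```python
import random

def crs(f, bounds, pop_size=50, iters=2000, seed=1):
    """Minimal controlled random search: minimize f over box `bounds`
    (list of (lo, hi) per dimension)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(x) for x in pop]
    for _ in range(iters):
        idx = rng.sample(range(pop_size), dim + 1)
        # centroid of the first dim points, reflect the last one through it
        centroid = [sum(pop[i][d] for i in idx[:-1]) / dim for d in range(dim)]
        trial = [2 * centroid[d] - pop[idx[-1]][d] for d in range(dim)]
        if all(lo <= t <= hi for t, (lo, hi) in zip(trial, bounds)):
            worst = max(range(pop_size), key=vals.__getitem__)
            ft = f(trial)
            if ft < vals[worst]:
                pop[worst], vals[worst] = trial, ft
    best = min(range(pop_size), key=vals.__getitem__)
    return pop[best], vals[best]
```

Because only the worst member is ever replaced, the population contracts gradually toward promising regions, which is the behavior the modified versions aim to accelerate.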

  11. Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.

    2017-08-01

    We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
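
For context, the permanent can be computed exactly for small matrices with Ryser's inclusion-exclusion formula, which is a standard baseline for checking any sampling-based estimate (it is not the paper's quantum-inspired sampler):

```python
from itertools import combinations

def permanent_ryser(A):
    """Exact permanent of a square matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

Ryser's formula still costs O(2^n n), which is why polynomial-time approximation schemes such as the one above are of interest for structured matrix classes.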

  12. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    NASA Astrophysics Data System (ADS)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers appeal to researchers because of their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can generate prime numbers from randomly generated or sequential numbers. The testing in this study aims to determine which algorithm is better suited to generating large primes in terms of time complexity. The tests were supported by an application written in Java, with code optimization and maximum memory usage configured so that the testing process could run simultaneously and the results obtained would be objective.
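
Both sieves are standard; a minimal sketch of each (in Python rather than the study's Java, for brevity) shows the structural difference. Eratosthenes marks composite multiples directly, while Sundaram removes numbers of the form i + j + 2ij and maps the survivors k to the odd primes 2k + 1:

```python
def eratosthenes(n):
    """All primes <= n, by marking multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i in range(n + 1) if is_prime[i]]

def sundaram(n):
    """All primes <= n via the Sieve of Sundaram: mark i + j + 2ij,
    then map each unmarked k >= 1 to the odd prime 2k + 1 (plus 2)."""
    k = (n - 1) // 2
    marked = [False] * (k + 1)
    for i in range(1, k + 1):
        j = i
        while i + j + 2 * i * j <= k:
            marked[i + j + 2 * i * j] = True
            j += 1
    return [2] + [2 * m + 1 for m in range(1, k + 1) if not marked[m]]
```

Both run in roughly O(n log n) marking steps, so the study's comparison comes down to constant factors and memory behavior at large n.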

  13. Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication

    NASA Astrophysics Data System (ADS)

    Jakovetic, Dusan; Xavier, João; Moura, José M. F.

    2011-08-01

    We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only to a node), convex node objectives, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation and data storage costs. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.

  14. Selective epidemic vaccination under the performant routing algorithms

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing strategies are used to overcome congestion. However, our main result unexpectedly shows that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms, compared with the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of traffic transport efficiency. This work proposes overcoming this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination eradicates the virus better than a purely random intervention under the performant routing strategies.
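
One simple selective rule, vaccinating the highest-degree nodes first, can be sketched as below. This degree-based criterion is an illustrative stand-in: the paper ties its selection to the routing algorithm in use, where traffic load rather than degree determines a node's importance:

```python
def selective_vaccination(adjacency, k):
    """Return the k nodes to vaccinate, highest degree first
    (ties broken by node name). `adjacency` maps each node to
    the set of its neighbours."""
    return sorted(adjacency, key=lambda v: (-len(adjacency[v]), v))[:k]
```

Under load-based routing the analogous rule would rank nodes by the traffic routed through them instead of by degree.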

  15. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm based on wave field separation, exploiting the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data into a series of intrinsic mode functions (IMFs) at different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. We then use threshold correlation filtering to separate the valid signal from the random noise effectively. Compared with the traditional EMD method, the results show that the new method has a better suppression effect. Implementation: The EMD algorithm decomposes the seismic signals into IMF sets, whose spectra are then analyzed. Since most random noise is high-frequency noise, the IMF sets can be divided into three categories: the first is the effective-wave components at larger scales; the second is the noise components at smaller scales; the third is the IMF components containing random noise. The third kind of IMF component is then processed with the Hausdorff dimension algorithm, selecting an appropriate time window size, initial step and increment to calculate the Hausdorff instantaneous dimension of each component. The dimension of random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract from each IMF component the sample points whose fractal dimension is less than or equal to 1.05, to separate the residual noise. Reconstructing from the IMF components after dimension filtering, together with the effective-wave IMF components from the first selection, yields the de-noised result.

  16. A Wearable Healthcare System With a 13.7 μA Noise Tolerant ECG Processor.

    PubMed

    Izumi, Shintaro; Yamashita, Ken; Nakano, Masanao; Kawaguchi, Hiroshi; Kimura, Hiromitsu; Marumoto, Kyoji; Fuchikami, Takaaki; Fujimori, Yoshikazu; Nakajima, Hiroshi; Shiga, Toshikazu; Yoshimoto, Masahiko

    2015-10-01

    To prevent lifestyle diseases, wearable bio-signal monitoring systems for daily-life monitoring have attracted attention. Wearable systems have strict size and weight constraints, which impose significant limitations on the battery capacity and the signal-to-noise ratio of bio-signals. This report describes an electrocardiograph (ECG) processor for use with a wearable healthcare system. It comprises an analog front end, a 12-bit ADC, a robust Instantaneous Heart Rate (IHR) monitor, a 32-bit Cortex-M0 core, and 64 Kbyte of Ferroelectric Random Access Memory (FeRAM). The IHR monitor uses a short-term autocorrelation (STAC) algorithm to improve heart-rate detection accuracy even in noisy conditions. The ECG processor chip consumes 13.7 μA for the heart-rate logging application.
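
The short-term autocorrelation idea, picking the lag that maximizes the autocorrelation over a plausible heart-period range, can be sketched as follows. This is a hedged floating-point sketch of the general technique; the chip's fixed-point STAC implementation details are not given in the abstract:

```python
def estimate_period(signal, min_lag, max_lag):
    """Estimate the dominant period (in samples) of a quasi-periodic
    signal by maximizing the short-term autocorrelation over a
    bounded lag range."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]  # remove DC offset
    best_lag, best = min_lag, float('-inf')
    for lag in range(min_lag, max_lag + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best:
            best, best_lag = r, lag
    return best_lag
```

Because the autocorrelation integrates over many beats, an isolated noise burst perturbs the estimate far less than it would a single-threshold R-peak detector, which is why such schemes tolerate noisy conditions.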

  17. Assessing Class-Wide Consistency and Randomness in Responses to True or False Questions Administered Online

    ERIC Educational Resources Information Center

    Pawl, Andrew; Teodorescu, Raluca E.; Peterson, Joseph D.

    2013-01-01

    We have developed simple data-mining algorithms to assess the consistency and the randomness of student responses to problems consisting of multiple true or false statements. In this paper we describe the algorithms and use them to analyze data from introductory physics courses. We investigate statements that emerge as outliers because the class…
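    One simple way such a consistency/randomness screen might work (a hypothetical sketch, not the authors' algorithm) is to compare each student's true/false vector with the class consensus and flag agreement that stays within the binomial chance band around 50%. All data here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_students, n_items = 40, 30
consensus_key = rng.integers(0, 2, n_items)        # majority answer per item

# Careful students: start from the consensus and flip ~10% of answers.
responses = np.tile(consensus_key, (n_students, 1))
flips = rng.random((n_students, n_items)) < 0.1
responses = np.where(flips, 1 - responses, responses)

# Student 0 is made to agree on exactly half the items, mimicking a
# purely random responder sitting at the chance level.
responses[0, ::2] = consensus_key[::2]
responses[0, 1::2] = 1 - consensus_key[1::2]

agreement = (responses == consensus_key).mean(axis=1)
band = 2 * np.sqrt(0.25 / n_items)      # ~2 binomial std devs around 0.5
flagged = np.where(np.abs(agreement - 0.5) <= band)[0]
```

Students whose agreement falls inside the chance band are candidates for "random" response patterns; outliers far above it look consistent.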

  18. Improved belief propagation algorithm finds many Bethe states in the random-field Ising model on random graphs

    NASA Astrophysics Data System (ADS)

    Perugini, G.; Ricci-Tersenghi, F.

    2018-01-01

    We first present an empirical study of the Belief Propagation (BP) algorithm when run on the random-field Ising model defined on random regular graphs in the zero-temperature limit. We introduce the notion of extremal solutions of the BP equations and use them to fix a fraction of spins in their ground-state configuration. At the phase transition point the fraction of unconstrained spins percolates, and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions, we design a new and very easy-to-implement BP scheme that is able to output a large number of stable fixed points. On the one hand, this new algorithm is able to provide the minimum-energy configuration with high probability in a competitive time. On the other hand, we find that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new and relevant questions about the physics of this class of models.

  19. Digital health technology and trauma: development of an app to standardize care.

    PubMed

    Hsu, Jeremy M

    2015-04-01

    Standardized practice results in less variation, therefore reducing errors and improving outcome. Optimal trauma care is achieved through standardization, as is evidenced by the widespread adoption of the Advanced Trauma Life Support approach. The challenge for an individual institution is how does one educate and promulgate these standardized processes widely and efficiently? In today's world, digital health technology must be considered in the process. The aim of this study was to describe the process of developing an app, which includes standardized trauma algorithms. The objective of the app was to allow easy, real-time access to trauma algorithms, and therefore reduce omissions/errors. A set of trauma algorithms, relevant to the local setting, was derived from the best available evidence. After obtaining grant funding, a collaborative endeavour was undertaken with an external specialist app developing company. The process required 6 months to translate the existing trauma algorithms into an app. The app contains 32 separate trauma algorithms, formatted as a single-page flow diagram. It utilizes specific smartphone features such as 'pinch to zoom', jump-words and pop-ups to allow rapid access to the desired information. Improvements in trauma care outcomes result from reducing variation. By incorporating digital health technology, a trauma app has been developed, allowing easy and intuitive access to evidenced-based algorithms. © 2015 Royal Australasian College of Surgeons.

  20. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    A data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse; materialized views are stored in the data warehouse to support efficient processing of on-line analytical queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms that select a set of views to materialize in a data warehouse so as to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query_cost view_selection problem. First, the cost graph and cost model of the query_cost view_selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, producing legal solutions becomes more and more difficult: many solutions are eliminated, and the time needed to produce valid solutions grows. We therefore present an improved algorithm that combines simulated annealing with the genetic algorithm to solve the query_cost view_selection problem. Finally, simulation experiments are used to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
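    The simulated-annealing side of such a combined scheme can be sketched on a toy instance: a 0/1 vector marks which views to materialize, infeasible selections (response time over the limit) are penalized, and single-view flips are accepted with the usual Metropolis rule. The costs, gains, and response-time limit below are invented; the penalty treatment is one common choice, not necessarily the paper's.

```python
import math, random

random.seed(3)
n_views = 8
maint_cost = [5, 3, 8, 2, 7, 4, 6, 1]   # maintenance cost if materialized
query_gain = [9, 4, 7, 3, 8, 5, 6, 2]   # response time saved if materialized
base_response = 40                       # response time with nothing materialized
limit = 25                               # required query response time

def response_time(sel):
    return base_response - sum(g for g, s in zip(query_gain, sel) if s)

def cost(sel):
    if response_time(sel) > limit:       # infeasible: penalize heavily
        return 1e9
    return sum(c for c, s in zip(maint_cost, sel) if s)

sel = [1] * n_views                      # feasible start: materialize everything
best, best_cost, temp = sel[:], cost(sel), 10.0
for _ in range(5000):
    cand = sel[:]
    cand[random.randrange(n_views)] ^= 1          # flip one view in/out
    delta = cost(cand) - cost(sel)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        sel = cand
        if cost(sel) < best_cost:
            best, best_cost = sel[:], cost(sel)
    temp *= 0.999                                  # geometric cooling
```

The annealing run keeps the best feasible selection seen, trading maintenance cost against the response-time constraint.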

  1. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
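    The sparse-recovery premise behind sparse sampling can be illustrated with a standard greedy solver. Orthogonal matching pursuit is used here as one common choice, and the Gaussian measurement model is an assumption for illustration, not the camera's actual readout.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from measurements y = A @ x."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(4)
n_pixels, n_meas, n_hits = 64, 32, 3     # few true hits -> sparse image
x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, n_hits, replace=False)] = rng.uniform(1.0, 2.0, n_hits)
A = rng.standard_normal((n_meas, n_pixels)) / np.sqrt(n_meas)  # random measurements
x_rec = omp(A, A @ x_true, n_hits)
```

With far fewer measurements than pixels, the few true X-ray hits are still recovered, which is the case (a) sparsity condition the abstract describes.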

  2. Holographic memories with encryption-selectable function

    NASA Astrophysics Data System (ADS)

    Su, Wei-Chia; Lee, Xuan-Hao

    2006-03-01

    Volume holographic storage has received increasing attention owing to its potential for high storage capacity and access rate. Meanwhile, encrypted holographic memory using random-phase encoding is attractive to the optical community due to the growing demand for information protection. In this paper, encryption-selectable holographic storage algorithms in LiNbO3 using angular multiplexing are proposed and demonstrated. Encryption-selectable holographic memory is an advanced concept in secure storage for content protection: it offers the flexibility to encrypt the data, or not, optionally during the recording process. In our system design, switching between encrypted and non-encrypted storage is accomplished with a random phase pattern and a uniform phase pattern. Based on a 90-degree geometry, the input patterns, both encrypted and non-encrypted, are stored via angular multiplexing with reference plane waves at different incident angles. An image is optionally encrypted by sliding a ground glass into one of the recording waves, or removing it, in each exposure. The ground glass is the encryption key; it is also the key an authorized user needs to decrypt the encrypted information.

  3. Dynamic Resource Allocation and Access Class Barring Scheme for Delay-Sensitive Devices in Machine to Machine (M2M) Communications.

    PubMed

    Li, Ning; Cao, Chao; Wang, Cong

    2017-06-15

    Supporting simultaneous access of machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered machine-to-machine (M2M) communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. In M2M communications, since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by delay-sensitive devices means that there will be more resources available to delay-tolerant ones. Our goal is to optimize the random access scheme, which can not only satisfy the requirements of delay-sensitive devices, but also take the communication quality of delay-tolerant ones into consideration. We discuss this problem from the perspective of delay-sensitive services by adjusting the resource allocation and ACB scheme for these devices dynamically. Simulation results show that our proposed scheme realizes good performance in satisfying the delay-sensitive services as well as increasing the utilization rate of the random access resources allocated to them.
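    The core trade-off behind dynamically adjusting the ACB factor can be seen from a standard slotted-random-access approximation: with M contending devices, barring factor p, and N preambles, the expected number of singleton (successful) preambles is M·p·(1 − p/N)^(M−1), which peaks near p = N/M. The device and preamble counts below are illustrative, not taken from the paper.

```python
import numpy as np

def expected_successes(M, N, p):
    """Expected number of preambles picked by exactly one unbarred device.

    Each device transmits with probability p and picks one of N preambles
    uniformly, so the count on a given preamble is Binomial(M, p/N).
    """
    return M * p * (1.0 - p / N) ** (M - 1)

M, N = 200, 54                    # contending devices vs. RACH preambles (assumed)
grid = np.linspace(0.01, 1.0, 1000)
best_p = float(grid[np.argmax(expected_successes(M, N, grid))])
```

As the load M grows, the throughput-maximizing barring factor shrinks like N/M, which is why a fixed ACB factor wastes either resources (too small) or successes (too large).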

  4. Nonconvergence of the Wang-Landau algorithms with multiple random walkers.

    PubMed

    Belardinelli, R E; Pereyra, V D

    2016-05-01

    This paper discusses some convergence properties in the entropic sampling Monte Carlo methods with multiple random walkers, particularly in the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m-independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multiple dimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time, t; then, the average over m walkers is performed. It is observed that the error goes as 1/sqrt[m]. However, if the number of walkers increases above a certain critical value m>m_{x}, the error reaches a constant value (i.e., it saturates). This occurs for both algorithms; however, it is shown that for a given system, the 1/t algorithm is more efficient and accurate than the similar version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above a critical value m_{x}, since it does not reduce the error in the calculation. Therefore, the number of walkers does not guarantee convergence.
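    The single-walker Wang-Landau update that the multiple-walker variants build on can be sketched on a toy system: estimating the density of states of the sum of two dice (true multiplicities 1, 2, ..., 6, ..., 2, 1). The fixed halving schedule below is a simplification of the usual flatness criterion, and all parameters are illustrative.

```python
import math, random

random.seed(5)
ln_g = {s: 0.0 for s in range(2, 13)}   # log density of states, sums 2..12
d1, d2 = 1, 1
ln_f = 1.0
for _ in range(20):                     # halve the modification factor 20 times
    for _ in range(20000):
        nd1, nd2 = d1, d2
        if random.random() < 0.5:       # propose rerolling one die
            nd1 = random.randint(1, 6)
        else:
            nd2 = random.randint(1, 6)
        # Wang-Landau rule: accept with probability min(1, g(old)/g(new))
        if (ln_g[d1 + d2] >= ln_g[nd1 + nd2]
                or random.random() < math.exp(ln_g[d1 + d2] - ln_g[nd1 + nd2])):
            d1, d2 = nd1, nd2
        ln_g[d1 + d2] += ln_f           # penalize the state just visited
    ln_f *= 0.5

ratio = math.exp(ln_g[7] - ln_g[2])     # true multiplicity ratio is 6/1 = 6
```

The m-walker variants the paper studies run m copies of this update and average the resulting ln g estimates, which is where the 1/sqrt(m) error scaling (and its saturation above m_x) appears.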

  5. Integrated semiconductor-magnetic random access memory system

    NASA Technical Reports Server (NTRS)

    Katti, Romney R. (Inventor); Blaes, Brent R. (Inventor)

    2001-01-01

    The present disclosure describes a non-volatile magnetic random access memory (RAM) system having a semiconductor control circuit and a magnetic array element. The integrated magnetic RAM system uses CMOS control circuit to read and write data magnetoresistively. The system provides a fast access, non-volatile, radiation hard, high density RAM for high speed computing.

  6. 75 FR 20564 - Dynamic Random Access Memory Semiconductors from the Republic of Korea: Extension of Time Limit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-20

    ... DEPARTMENT OF COMMERCE International Trade Administration [C-580-851] Dynamic Random Access Memory Semiconductors from the Republic of Korea: Extension of Time Limit for Preliminary Results of Countervailing Duty... access memory semiconductors from the Republic of Korea, covering the period January 1, 2008 through...

  7. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of their intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable-normalizing-constant and model-degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm; it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  8. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.

  9. Tensor hypercontracted ppRPA: Reducing the cost of the particle-particle random phase approximation from O(r^6) to O(r^4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shenvi, Neil; Yang, Yang; Yang, Weitao

    In recent years, interest in the random-phase approximation (RPA) has grown rapidly. At the same time, tensor hypercontraction has emerged as an intriguing method to reduce the computational cost of electronic structure algorithms. In this paper, we combine the particle-particle random phase approximation with tensor hypercontraction to produce the tensor-hypercontracted particle-particle RPA (THC-ppRPA) algorithm. Unlike previous implementations of ppRPA which scale as O(r^6), the THC-ppRPA algorithm scales asymptotically as only O(r^4), albeit with a much larger prefactor than the traditional algorithm. We apply THC-ppRPA to several model systems and show that it yields the same results as traditional ppRPA to within mH accuracy. Our method opens the door to the development of post-Kohn Sham functionals based on ppRPA without the excessive asymptotic cost of traditional ppRPA implementations.

  10. Tensor hypercontracted ppRPA: Reducing the cost of the particle-particle random phase approximation from O(r 6) to O(r 4)

    NASA Astrophysics Data System (ADS)

    Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-07-01

    In recent years, interest in the random-phase approximation (RPA) has grown rapidly. At the same time, tensor hypercontraction has emerged as an intriguing method to reduce the computational cost of electronic structure algorithms. In this paper, we combine the particle-particle random phase approximation with tensor hypercontraction to produce the tensor-hypercontracted particle-particle RPA (THC-ppRPA) algorithm. Unlike previous implementations of ppRPA which scale as O(r6), the THC-ppRPA algorithm scales asymptotically as only O(r4), albeit with a much larger prefactor than the traditional algorithm. We apply THC-ppRPA to several model systems and show that it yields the same results as traditional ppRPA to within mH accuracy. Our method opens the door to the development of post-Kohn Sham functionals based on ppRPA without the excessive asymptotic cost of traditional ppRPA implementations.

  11. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles; however, little has been studied on comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for a thorough empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.

  12. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    NASA Astrophysics Data System (ADS)

    Zhang, G.

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.

  13. Precise algorithm to generate random sequential adsorption of hard polygons at saturation.

    PubMed

    Zhang, G

    2018-04-01

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
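    The one-dimensional analog of such a saturation algorithm (Rényi's car-parking problem) shows the key idea: track only the gaps that can still accept a particle and stop exactly when none remain, reaching saturation in finite time rather than by extrapolation. This sketch is not the paper's 2D polygon algorithm, just the simplest instance of the approach; the line length is arbitrary.

```python
import random

def rsa_parking_saturation_density(L, seed=6):
    """Place unit intervals on [0, L] by RSA until exactly saturated."""
    rng = random.Random(seed)
    gaps = [(0.0, L)]                # empty stretches that can still fit a car
    placed = 0
    while gaps:
        # choose a gap with probability proportional to its feasible measure
        weights = [b - a - 1.0 for a, b in gaps]
        i = rng.choices(range(len(gaps)), weights=weights)[0]
        a, b = gaps.pop(i)
        x = a + rng.random() * (b - a - 1.0)   # left end uniform in [a, b-1]
        placed += 1
        for lo, hi in ((a, x), (x + 1.0, b)):  # keep only still-usable gaps
            if hi - lo > 1.0:
                gaps.append((lo, hi))
    return placed / L

density = rsa_parking_saturation_density(1000.0)
```

For large L the density approaches Rényi's parking constant (about 0.7476); because the algorithm samples only over feasible gaps, it terminates exactly at saturation instead of running an infinite-time limit.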

  14. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
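    The basic randomized low-rank approximation that such implementations build on can be sketched in a few lines of NumPy: a Gaussian range finder, optional power iterations, then an SVD of the small projected matrix. Parameter names here follow common usage and are not the package's API.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, power_iters=2, seed=7):
    """Rank-k SVD via a Gaussian range finder (a standard sketch,
    not the article's exact implementation)."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))
    for _ in range(power_iters):        # power iterations sharpen spectral decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sampled range
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 5)) @ rng.standard_normal((5, 200))  # exactly rank 5
U, s, Vt = randomized_svd(A, k=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

Only the small (k + oversample)-column matrices are ever factorized, which is the source of the speed and memory advantages the abstract reports over classical deterministic techniques.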

  15. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the signal-to-interference ratio (SIR) is ensured and the energy required to transmit from one node to another is obtained at the same time. Therefore, an optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our proposal achieves an improvement of 10% compared with the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that solves the problem of combining power, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm achieves better performance (by 15%) than the CSMA-CDMA protocol without optimization. We show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339

  16. Algorithmic Approach With Clinical Pathology Consultation Improves Access to Specialty Care for Patients With Systemic Lupus Erythematosus.

    PubMed

    Chen, Lei; Welsh, Kerry J; Chang, Brian; Kidd, Laura; Kott, Marylee; Zare, Mohammad; Carroll, Kelley; Nguyen, Andy; Wahed, Amer; Tholpady, Ashok; Pung, Norin; McKee, Donna; Risin, Semyon A; Hunter, Robert L

    2016-09-01

    Harris Health System (HHS) is a safety net system providing health care to the underserved of Harris County, Texas. There was a 6-month waiting period for a rheumatologist consult for patients with suspected systemic lupus erythematosus (SLE). The objective of the intervention was to improve access to specialty care. An algorithmic approach to testing for SLE was implemented initially through the HHS referral center. The algorithm was further offered as a "one-click" order for physicians, with automated reflex testing, interpretation, and case triaging by clinical pathology. Data review revealed that prior to the intervention, 80% of patients did not have complete laboratory workups available at the first rheumatology visit. Implementation of algorithmic testing and triaging of referrals by pathologists resulted in decreasing the waiting time for a rheumatologist by 50%. Clinical pathology intervention and case triaging can improve access to care in a county health care system. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. A Heuristics Approach for Classroom Scheduling Using Genetic Algorithm Technique

    NASA Astrophysics Data System (ADS)

    Ahmad, Izah R.; Sufahani, Suliadi; Ali, Maselan; Razali, Siti N. A. M.

    2018-04-01

    Reshuffling and arranging classrooms based on audience capacity, available facilities, lecturing time, and other factors can make classroom scheduling highly complex. To enhance productivity in classroom planning, this paper proposes a heuristic approach to timetabling optimization. A new algorithm was produced to handle the timetabling problem in a university. The proposed heuristic approach leads to better utilization of the available classroom space for a given timetable of courses at the university. A genetic algorithm, implemented in the Java programming language, was used in this study with the aim of reducing conflicts and optimizing fitness. The algorithm considers the number of students in each class, class time, class size, the time availability of each classroom, and the lecturer in charge of each class.
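    A miniature genetic algorithm in the spirit described might encode each class as a (room, slot) gene and count double-bookings as the fitness penalty. The paper's implementation was in Java; this Python sketch with invented sizes and data is only illustrative.

```python
import random

random.seed(8)
n_classes, n_rooms, n_slots = 12, 3, 5
lecturer = [c % 4 for c in range(n_classes)]      # 4 lecturers, round-robin

def conflicts(genome):
    """Count room and lecturer double-bookings in a timetable."""
    seen_room, seen_lect, bad = set(), set(), 0
    for c, (room, slot) in enumerate(genome):
        bad += (room, slot) in seen_room          # room double-booked
        bad += (lecturer[c], slot) in seen_lect   # lecturer double-booked
        seen_room.add((room, slot))
        seen_lect.add((lecturer[c], slot))
    return bad

def random_genome():
    return [(random.randrange(n_rooms), random.randrange(n_slots))
            for _ in range(n_classes)]

pop = [random_genome() for _ in range(60)]
for _ in range(200):
    pop.sort(key=conflicts)
    parents = pop[:20]                            # truncation selection
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, n_classes)      # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:                 # mutation: reassign one class
            child[random.randrange(n_classes)] = (
                random.randrange(n_rooms), random.randrange(n_slots))
        children.append(child)
    pop = parents + children
best = min(pop, key=conflicts)
```

Constraints such as class size versus room capacity would enter the same way, as additional penalty terms in `conflicts`.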

  18. Prostate cancer prediction using the random forest algorithm that takes into account transrectal ultrasound findings, age, and serum levels of prostate-specific antigen.

    PubMed

    Xiao, Li-Hong; Chen, Pei-Ran; Gou, Zhong-Ping; Li, Yong-Zhong; Li, Mei; Xiang, Liang-Cheng; Feng, Ping

    2017-01-01

    The aim of this study is to evaluate the ability of the random forest algorithm that combines data on transrectal ultrasound findings, age, and serum levels of prostate-specific antigen to predict prostate carcinoma. Clinico-demographic data were analyzed for 941 patients with prostate diseases treated at our hospital, including age, serum prostate-specific antigen levels, transrectal ultrasound findings, and pathology diagnosis based on ultrasound-guided needle biopsy of the prostate. These data were compared between patients with and without prostate cancer using the Chi-square test, and then entered into the random forest model to predict diagnosis. Patients with and without prostate cancer differed significantly in age and serum prostate-specific antigen levels (P < 0.001), as well as in all transrectal ultrasound characteristics (P < 0.05) except uneven echo (P = 0.609). The random forest model based on age, prostate-specific antigen and ultrasound predicted prostate cancer with an accuracy of 83.10%, sensitivity of 65.64%, and specificity of 93.83%. Positive predictive value was 86.72%, and negative predictive value was 81.64%. By integrating age, prostate-specific antigen levels and transrectal ultrasound findings, the random forest algorithm shows better diagnostic performance for prostate cancer than either diagnostic indicator on its own. This algorithm may help improve diagnosis of the disease by identifying patients at high risk for biopsy.

  19. Temperature dependent characteristics of the random telegraph noise on contact resistive random access memory

    NASA Astrophysics Data System (ADS)

    Chang, Liang-Shun; Lin, Chrong Jung; King, Ya-Chin

    2014-01-01

    The temperature-dependent characteristics of random telegraph noise (RTN) on contact resistive random access memory (CRRAM) are studied in this work. In addition to the bi-level switching, the occurrence of middle states in the RTN signal is investigated. Based on its unique temperature-dependent characteristics, a new temperature-sensing scheme is proposed for applications in ultra-low-power sensor modules.

  20. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things.

    PubMed

    Yi, Meng; Chen, Qingkui; Xiong, Neal N

    2016-11-03

    This paper considers the distributed access and control problem of a massive wireless sensor network data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of massive service access requests arriving at a virtual data center, this paper designs a massive sensing-data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. First, this paper proposes a synergistically distributed buffer access model that separates resource information from location information. Second, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal group migration scheduling algorithm based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes: it enhances the accessibility of service requests, reduces network delay, and achieves higher load-balancing capacity and a higher resource utilization rate.

  1. The Evolution of Random Number Generation in MUVES

    DTIC Science & Technology

    2017-01-01

    ...MUVES, including the mathematical basis and statistical justification for algorithms used in the code. The working code provided produces results identical to the current... questionable numerical and statistical properties. The development of the modern system is traced through software change requests, resulting in a random number...

  2. Optimizing Constrained Single Period Problem under Random Fuzzy Demand

    NASA Astrophysics Data System (ADS)

    Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin

    2008-09-01

    In this paper, we consider the multi-product, multi-constraint newsboy problem with random fuzzy demands and a total discount. The demand for the products is often stochastic in the real world, but the estimation of the distribution-function parameters may be done in a fuzzy manner, so an appropriate option for modeling product demand is the random fuzzy variable. The objective of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on the order quantity for products, and a budget restriction; we also consider batch sizes for product orders. Finally, we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is transformed into a multi-objective mixed-integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto, and TOPSIS is presented for the developed model. An illustrative example demonstrates the performance of the developed model and algorithm.
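    Underneath the fuzzy-random extensions sits the classical newsboy building block: order up to the critical fractile (p − c)/(p − s) of the demand distribution, where p is price, c unit cost, and s salvage value. A minimal sketch for normally distributed demand, with illustrative numbers:

```python
from statistics import NormalDist

def newsboy_quantity(price, cost, salvage, mu, sigma):
    """Expected-profit-maximizing order for demand ~ N(mu, sigma^2)."""
    fractile = (price - cost) / (price - salvage)   # critical fractile
    return NormalDist(mu, sigma).inv_cdf(fractile)

# Illustrative numbers: sell at 10, buy at 4, salvage unsold units at 2.
q_opt = newsboy_quantity(price=10.0, cost=4.0, salvage=2.0, mu=100.0, sigma=20.0)
```

The paper's model replaces the crisp normal demand with random fuzzy variables and adds the warehouse, batch-size, and budget constraints on top of this single-product optimum.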

  3. Approximate ground states of the random-field Potts model from graph cuts

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay

    2018-05-01

    While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.

  4. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for constrained resources of consumed bandwidth B and time T. Our solution to this network challenge is studied here for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in a significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and is shown to perform best on regular grids in terms of the product metric of speed and efficiency.
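The proliferating-walker idea can be illustrated with a toy simulation. This is not the paper's algorithm (which modulates the proliferation rate in time under a bandwidth budget); it only contrasts a single random walker with walkers that periodically duplicate, on a small periodic grid, and the grid size and duplication period are illustrative choices:

```python
import random

def cover(grid_n, steps, proliferate_every=0, seed=1):
    """Random-walk coverage of an n x n periodic grid.
    proliferate_every=0: a single walker; k>0: every walker spawns a copy every
    k steps (a crude stand-in for the modulated proliferation in the abstract).
    Returns the fraction of grid sites visited."""
    rng = random.Random(seed)
    walkers = [(0, 0)]
    visited = {(0, 0)}
    for t in range(1, steps + 1):
        moved = []
        for (x, y) in walkers:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            p = ((x + dx) % grid_n, (y + dy) % grid_n)
            visited.add(p)
            moved.append(p)
        walkers = moved
        if proliferate_every and t % proliferate_every == 0:
            walkers = walkers + walkers   # each walker spawns one copy
    return len(visited) / (grid_n * grid_n)
```

With the same step budget per walker, the proliferating variant reaches a far larger covered fraction, at the cost of many more transmitted packets.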

  5. 76 FR 55417 - In the Matter of Certain Dynamic Random Access Memory and Nand Flash Memory Devices and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ... Access Memory and Nand Flash Memory Devices and Products Containing Same; Notice of Institution of... importation, and the sale within the United States after importation of certain dynamic random access memory and NAND flash memory devices and products containing same by reason of infringement of certain claims...

  6. Prediction of Baseflow Index of Catchments using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, B.; Hatfield, K.

    2017-12-01

    We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using a surrogate of catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elasticnet, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on the baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models, soil, land-use, and climate data, and other publicly available ancillary geospatial data. Eighty percent of the catchments were used to train the ML algorithms, and the remaining 20% of the catchments were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation using exhaustive grid search was used to tune the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation.
The most promising algorithms exceeding an accuracy score (r-square) of 0.7 on test data include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
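As an illustration of the baseflow-separation step that precedes the BFI calculation, here is a one-parameter Lyne-Hollick recursive digital filter, a common alternative to the HYSEP filter named above; the filter choice, the alpha value, and the zero initialisation of quickflow are assumptions of this sketch, not details from the study:

```python
def baseflow_index(q, alpha=0.925):
    """One-pass Lyne-Hollick recursive digital filter.
    Quickflow: f_i = alpha*f_{i-1} + (1+alpha)/2 * (q_i - q_{i-1});
    baseflow is the remainder, clipped to [0, q_i].
    Returns BFI = sum(baseflow) / sum(streamflow)."""
    quick = 0.0                # quickflow initialised to zero (a modelling choice)
    total = baseflow = 0.0
    prev = q[0]
    for qi in q:
        quick = alpha * quick + 0.5 * (1.0 + alpha) * (qi - prev)
        b = qi - quick if quick > 0.0 else qi   # subtract only positive quickflow
        b = max(min(b, qi), 0.0)
        baseflow += b
        total += qi
        prev = qi
    return baseflow / total if total > 0 else 0.0
```

A constant hydrograph yields BFI = 1 (all baseflow), while a flashy hydrograph with sharp peaks yields a value well below 1.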

  7. Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sczyrba, Alex; Pratap, Abhishek; Canon, Shane

    2011-03-22

    Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references, we will also present assembly quality comparisons between the two assemblers.

  8. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.

    PubMed

    Fayyaz S, S Kiavash; Liu, Xiaoyue Cathy; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations who have restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time of day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time of day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as data dimensions increase (e.g. all times of day on a large network). In this paper, we introduce a new algorithm that is mathematically elegant and computationally efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computation time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis.

  9. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis

    PubMed Central

    Fayyaz S., S. Kiavash; Zhang, Guohui

    2017-01-01

    The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations who have restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time of day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time of day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as data dimensions increase (e.g. all times of day on a large network). In this paper, we introduce a new algorithm that is mathematically elegant and computationally efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George’s transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computation time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. PMID:28981544
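The core computation both of these records describe, earliest arrival between OD pairs under a timetable, can be sketched as a Dijkstra-style label-setting search over scheduled connections. The timetable layout and function below are illustrative assumptions, not the authors' C++ toolbox:

```python
import heapq

def earliest_arrival(timetable, origin, dest, depart_at):
    """Earliest-arrival query over a GTFS-like timetable.
    timetable: {stop: [(dep_time, arr_time, next_stop), ...]} (times in minutes).
    A connection is usable only if its departure is no earlier than the time
    we reach its stop; labels are settled in order of arrival time."""
    best = {origin: depart_at}
    pq = [(depart_at, origin)]
    while pq:
        t, stop = heapq.heappop(pq)
        if stop == dest:
            return t
        if t > best.get(stop, float("inf")):
            continue                      # stale queue entry
        for dep, arr, nxt in timetable.get(stop, []):
            if dep >= t and arr < best.get(nxt, float("inf")):
                best[nxt] = arr
                heapq.heappush(pq, (arr, nxt))
    return None                           # destination unreachable at this hour
```

Running such a query once per minute of the service day, for every OD pair, is exactly the data-dimension blow-up the abstract describes.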

  10. Evaluation of an Algorithm to Predict Menstrual-Cycle Phase at the Time of Injury.

    PubMed

    Tourville, Timothy W; Shultz, Sandra J; Vacek, Pamela M; Knudsen, Emily J; Bernstein, Ira M; Tourville, Kelly J; Hardy, Daniel M; Johnson, Robert J; Slauterbeck, James R; Beynnon, Bruce D

    2016-01-01

    Women are 2 to 8 times more likely to sustain an anterior cruciate ligament (ACL) injury than men, and previous studies indicated an increased risk for injury during the preovulatory phase of the menstrual cycle (MC). However, investigations of risk rely on retrospective classification of MC phase, and no tools for this have been validated. To evaluate the accuracy of an algorithm for retrospectively classifying MC phase at the time of a mock injury based on MC history and salivary progesterone (P4) concentration. Descriptive laboratory study. Research laboratory. Thirty-one healthy female collegiate athletes (age range, 18-24 years) provided serum or saliva (or both) samples at 8 visits over 1 complete MC. Self-reported MC information was obtained on a randomized date (1-45 days) after mock injury, which is the typical timeframe in which researchers have access to ACL-injured study participants. The MC phase was classified using the algorithm as applied in a stand-alone computational fashion and also by 4 clinical experts using the algorithm and additional subjective hormonal history information to help inform their decision. To assess algorithm accuracy, phase classifications were compared with the actual MC phase at the time of mock injury (ascertained using urinary luteinizing hormone tests and serial serum P4 samples). Clinical expert and computed classifications were compared using κ statistics. Fourteen participants (45%) experienced anovulatory cycles. The algorithm correctly classified MC phase for 23 participants (74%): 22 (76%) of 29 who were preovulatory/anovulatory and 1 (50%) of 2 who were postovulatory. Agreement between expert and algorithm classifications ranged from 80.6% (κ = 0.50) to 93% (κ = 0.83). Classifications based on same-day saliva sample and optimal P4 threshold were the same as those based on MC history alone (87.1% correct). 
Algorithm accuracy varied during the MC but at no time were both sensitivity and specificity levels acceptable. These findings raise concerns about the accuracy of previous retrospective MC-phase classification systems, particularly in a population with a high occurrence of anovulatory cycles.

  11. TITRATION: A Randomized Study to Assess 2 Treatment Algorithms with New Insulin Glargine 300 units/mL.

    PubMed

    Yale, Jean-François; Berard, Lori; Groleau, Mélanie; Javadi, Pasha; Stewart, John; Harris, Stewart B

    2017-10-01

    It was uncertain whether an algorithm that involves increasing insulin dosages by 1 unit/day may cause more hypoglycemia with the longer-acting insulin glargine 300 units/mL (GLA-300). The objective of this study was to compare safety and efficacy of 2 titration algorithms, INSIGHT and EDITION, for GLA-300 in people with uncontrolled type 2 diabetes mellitus, mainly in a primary care setting. This was a 12-week, open-label, randomized, multicentre pilot study. Participants were randomly assigned to 1 of 2 algorithms: they either increased their dosage by 1 unit/day (INSIGHT, n=108) or the dose was adjusted by the investigator at least once weekly, but no more often than every 3 days (EDITION, n=104). The target fasting self-monitored blood glucose was in the range of 4.4 to 5.6 mmol/L. The percentages of participants reaching the primary endpoint of fasting self-monitored blood glucose ≤5.6 mmol/L without nocturnal hypoglycemia were 19.4% (INSIGHT) and 18.3% (EDITION). At week 12, 26.9% (INSIGHT) and 28.8% (EDITION) of participants achieved a glycated hemoglobin value of ≤7%. No differences in the incidence of hypoglycemia of any category were noted between algorithms. Participants in both arms of the study were much more satisfied with their new treatment as assessed by the Diabetes Treatment Satisfaction Questionnaire. Most health-care professionals (86%) preferred the INSIGHT over the EDITION algorithm. The frequency of adverse events was similar between algorithms. A patient-driven titration algorithm of 1 unit/day with GLA-300 is effective and comparable to the previously tested EDITION algorithm and is preferred by health-care professionals. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.

  12. Data transmission system and method

    NASA Technical Reports Server (NTRS)

    Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)

    2010-01-01

    A method of transmitting data packets in which randomness is added to the schedule is presented. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.

  13. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
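As a concrete instance of the first correspondence, a Dijkstra search for the minimal-energy path across a grid of i.i.d. random site energies is a toy version of the polymer-in-a-random-medium problem; the grid size, energy distribution, and 4-neighbour moves here are illustrative choices, not the paper's setup:

```python
import heapq
import random

def polymer_energy(n, seed=0):
    """Minimal total site energy of a path from (0, 0) to (n-1, n-1) on an
    n x n grid with i.i.d. uniform(0, 1) site energies, found greedily by
    Dijkstra's algorithm (each grid site is 'paid' once when entered)."""
    rng = random.Random(seed)
    e = [[rng.random() for _ in range(n)] for _ in range(n)]
    dist = {(0, 0): e[0][0]}
    pq = [(e[0][0], 0, 0)]
    while pq:
        d, x, y = heapq.heappop(pq)
        if (x, y) == (n - 1, n - 1):
            return d
        if d > dist[(x, y)]:
            continue                      # stale entry
        for nx, ny in ((x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n:
                nd = d + e[nx][ny]
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, nx, ny))
```

The greedy property is what makes this exact: the lowest tentative label settled at each step can never be improved later, mirroring extremal dynamics.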

  14. Thermodynamic cost of computation, algorithmic complexity and the information metric

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.

  15. Linking search space structure, run-time dynamics, and problem difficulty : a step toward demystifying tabu search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2004-09-01

    Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.

  16. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    PubMed

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. 
The results showed that the modification of the original boosting algorithm could be run in 1% of the time used by the original algorithm, with negligible differences in accuracy and bias. This modification may be used to speed up the computation of genome-assisted evaluation in large data sets such as those obtained from consortiums. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
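The "random boosting" modification can be sketched in a few lines: ordinary gradient boosting for squared loss, except that each weak learner only sees a randomly chosen subset of the markers. The use of regression stumps as the weak learner and all parameter values below are assumptions of this illustrative re-implementation, not the authors' code:

```python
import random

def fit_stump(X, r, feats):
    """Best single-split regression stump on residuals r, restricted to `feats`."""
    best = None
    for j in feats:
        for thr in sorted({x[j] for x in X}):
            left = [ri for x, ri in zip(X, r) if x[j] <= thr]
            right = [ri for x, ri in zip(X, r) if x[j] > thr]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((ri - ml) ** 2 for ri in left)
                   + sum((ri - mr) ** 2 for ri in right))
            if best is None or sse < best[0]:
                best = (sse, j, thr, ml, mr)
    _, j, thr, ml, mr = best
    return lambda x: ml if x[j] <= thr else mr

def random_boost(X, y, rounds=50, shrink=0.1, m=1, seed=0):
    """Gradient boosting for squared loss where every weak learner is fit on a
    random subset of m features -- the 'random boosting' idea in the abstract."""
    rng = random.Random(seed)
    p = len(X[0])
    pred = [0.0] * len(X)
    stumps = []
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]   # current residuals
        feats = rng.sample(range(p), m)            # random marker subset
        h = fit_stump(X, r, feats)
        stumps.append(h)
        pred = [pi + shrink * h(x) for pi, x in zip(pred, X)]
    return lambda x: sum(shrink * h(x) for h in stumps)
```

Restricting each round to m of p features cuts the per-round split search from O(p) to O(m) while the sequential residual fitting still drives the training error down.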

  17. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G.

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe it by extrapolation. We have previously found an algorithm to reach this limit in finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.

  18. Precise algorithm to generate random sequential adsorption of hard polygons at saturation

    DOE PAGES

    Zhang, G.

    2018-04-30

    Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe it by extrapolation. We have previously found an algorithm to reach this limit in finite computational time for spherical particles, and could thus determine the saturation density of spheres with high accuracy. Here, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides, and obtain results that are consistent with previous, extrapolation-based studies.
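The RSA process itself, as opposed to the paper's exact saturation algorithm, is easy to simulate. The sketch below adsorbs axis-aligned squares in the unit box and stops after a fixed run of consecutive rejections, so it only approaches the saturation limit that the paper reaches exactly; the square shape, box geometry, and stopping rule are illustrative choices:

```python
import random

def rsa_squares(side, max_failures=2000, seed=0):
    """Random sequential adsorption of axis-aligned side x side squares in the
    unit box: propose uniform random positions, reject any overlap, and stop
    after `max_failures` consecutive rejections (a crude jamming criterion).
    Returns the list of accepted lower-left corners."""
    rng = random.Random(seed)
    placed = []
    failures = 0
    while failures < max_failures:
        x, y = rng.uniform(0, 1 - side), rng.uniform(0, 1 - side)
        # two equal axis-aligned squares overlap iff both centre offsets < side
        if any(abs(x - px) < side and abs(y - py) < side for px, py in placed):
            failures += 1
        else:
            placed.append((x, y))
            failures = 0
    return placed

squares = rsa_squares(0.1)
density = len(squares) * 0.1 ** 2   # covered area fraction, below saturation
```

Because acceptance becomes rare near jamming, this naive stopping rule is exactly the extrapolation problem the paper's finite-time algorithm eliminates.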

  19. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383

  20. Evaluating progressive-rendering algorithms in appearance design tasks.

    PubMed

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  1. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
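For a flavour of how a parity-check matrix corrects a single bit error in a memory word, here is the (7,4) Hamming code, the simplest single-error-correcting relative of the shortened BCH codes in the report; the report's codes are longer, and this matrix is only illustrative:

```python
# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so a nonzero syndrome names the flipped position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Syndrome H * word over GF(2); all-zero for a valid codeword."""
    return [sum(hi * wi for hi, wi in zip(row, word)) % 2 for row in H]

def correct(word):
    """Single-error correction: the syndrome, read as a binary number,
    is the 1-based position of the flipped bit; zero means no error."""
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1
    return fixed
```

A RAM error-correcting circuit does the same computation in hardware: recompute the syndrome on every read and flip the indicated bit before the word leaves the memory system.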

  2. 76 FR 35238 - Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-16

    ... Static Random Access Memories and Products Containing Same, DN 2816; the Commission is soliciting... importation of certain static random access memories and products containing same. The complaint names as...

  3. Random-access technique for modular bathymetry data storage in a continental shelf wave refraction program

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1974-01-01

    A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
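The module scheme can be mimicked with ordinary file seeks: store fixed-length records and compute each module's byte offset, rather than reading the whole regional array sequentially. The module size and record layout below are illustrative assumptions, not the Langley program's format:

```python
import struct

MODULE_VALUES = 16                       # depth values per module (illustrative)
RECORD = struct.Struct("%df" % MODULE_VALUES)   # fixed-size binary record

def write_modules(path, modules):
    """Write fixed-length modules so any one can later be read by offset."""
    with open(path, "wb") as f:
        for m in modules:
            f.write(RECORD.pack(*m))

def read_module(path, index):
    """Non-sequential (random-access) read of module `index`: seek straight
    to index * record_size instead of scanning the file from the start."""
    with open(path, "rb") as f:
        f.seek(index * RECORD.size)
        return list(RECORD.unpack(f.read(RECORD.size)))
```

Only one module's worth of data is ever resident, which is the field-length saving the abstract reports, paid for with extra operating-system calls per read.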

  4. LTE-advanced random access mechanism for M2M communication: A review

    NASA Astrophysics Data System (ADS)

    Mustafa, Rashid; Sarowa, Sandeep; Jaglan, Reena Rathee; Khan, Mohammad Junaid; Agrawal, Sunil

    2016-03-01

    Machine Type Communications (MTC) enables one or more self-sufficient machines to communicate directly with one another without human interference. MTC applications include the smart grid, security, e-Health and intelligent automation systems. To support huge numbers of MTC devices, one of the challenging issues is to provide an efficient way for numerous devices to access the network while minimizing network overload. In this article, the different overload control mechanisms for random access are reviewed, which aim to avoid congestion caused by the random access channel (RACH) load of MTC devices. Past and present wireless technologies have been engineered for Human-to-Human (H2H) communications, in particular for the transmission of voice; consequently, Long Term Evolution (LTE)-Advanced, although optimized for H2H traffic, is expected to play a central role in Machine-to-Machine (M2M) communication. The distinct characteristics of M2M communications create new challenges beyond those of H2H communications. In this article, we investigate the impact of massive numbers of M2M terminals attempting random access to LTE-Advanced all at once. We discuss and review the solutions proposed by the Third Generation Partnership Project (3GPP) to alleviate the overload problem. As a result, we evaluate and compare these solutions, which can effectively eliminate congestion on the random access channel for M2M communications without affecting H2H communications.

  5. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, which in turn lead to high table power consumption. Aiming to solve the problem of the large table memory access of current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce memory accesses during table look-up, and thus reduce table power consumption. Specifically, in our scheme, we use index search technology to reduce memory accesses by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a sequential-search table look-up scheme, saving substantial power for CAVLD in H.264/AVC.
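The index-search idea can be illustrated on a toy prefix-free code: instead of scanning table entries until one matches, count the leading zeros of the code prefix and use that count as a direct index. The table below is made up for illustration; the real CAVLD tables are larger and context-dependent:

```python
# Toy prefix-free variable-length code: codeword -> symbol. These are NOT the
# real H.264 CAVLD tables; they only mimic their "zero run, then 1" shape.
CODE = {"1": "A", "01": "B", "001": "C", "0001": "D"}

def decode_sequential(bits):
    """Baseline: try table entries one after another until a codeword matches.
    Every miss is a wasted table memory access."""
    for cw, sym in CODE.items():
        if bits.startswith(cw):
            return sym, len(cw)
    raise ValueError("no codeword matches")

# Index search: key the table by the length of the leading zero run, so a
# decode is one leading-zero count plus one table probe instead of a scan.
BY_ZEROS = {cw.index("1"): (sym, len(cw)) for cw, sym in CODE.items()}

def decode_indexed(bits):
    zeros = len(bits) - len(bits.lstrip("0"))
    return BY_ZEROS[zeros]          # (symbol, codeword length)
```

In hardware the leading-zero count is a single priority-encoder operation, which is why indexing by zero-run length cuts table accesses so sharply.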

  6. Escalated convergent artificial bee colony

    NASA Astrophysics Data System (ADS)

    Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu

    2016-03-01

    Artificial bee colony (ABC) optimisation is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved competitive with popular swarm intelligence-based algorithms such as particle swarm optimisation, the firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration at the cost of exploitation. In order to obtain fast convergent behaviour in ABC while maintaining its exploitation capability, this paper modifies basic ABC in two ways. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and Lévy flight random walk-based local search, are incorporated into ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give an abandoned solution more chance to improve itself. The efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. The results are very promising and prove it to be a competitive algorithm in the field of swarm intelligence-based algorithms.

  7. Query construction, entropy, and generalization in neural-network models

    NASA Astrophysics Data System (ADS)

    Sollich, Peter

    1994-05-01

    We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and ``noninvertible'' versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.

  8. Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2015-11-01

    A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of 3D fast Fourier transform (FFT), which does not scale well for RF bigger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RF on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes bigger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RF with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.

  9. Connecting Satellite-Based Precipitation Estimates to Users

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric

    2018-01-01

    Beginning in 1997, the Merged Precipitation Group at NASA Goddard has distributed gridded global precipitation products built by combining satellite and surface gauge data. This started with the Global Precipitation Climatology Project (GPCP), then the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), and recently the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG). This 20+-year (and ongoing) activity has yielded an important set of insights and lessons learned for making state-of-the-art precipitation data accessible to the diverse communities of users. Merged-data products critically depend on the input sensors and the retrieval algorithms providing accurate, reliable estimates, but it is also important to provide ancillary information that helps users determine suitability for their application. We typically provide fields of estimated random error, and recently reintroduced the quality index concept at user request. Also at user request we have added a (diagnostic) field of estimated precipitation phase. Over time, increasingly more ancillary fields have been introduced for intermediate products that give expert users insight into the detailed performance of the combination algorithm, such as individual merged microwave and microwave-calibrated infrared estimates, the contributing microwave sensor types, and the relative influence of the infrared estimate.

  10. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    EPA Science Inventory

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  11. Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kollár, Martin

    2012-05-01

    In order to access the cell, all mobile communication technologies use a so-called random-access procedure. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request or phantom RACH is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION from MS to BTS. In this paper a mathematical model for evaluation of the Power RACH Busy Threshold (RACHBT) is described and discussed, in order to guarantee a predetermined outage probability on the RACH. It focuses on Global System for Mobile Communications (GSM); however, the obtained results can be generalized to the remaining mobile technologies (i.e., WCDMA and LTE).

  12. Subjective randomness as statistical inference.

    PubMed

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
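    The core idea of randomness as statistical inference can be sketched in a few lines. This is a deliberately simplified toy model, not the authors' quantitative models: score a binary sequence by the log-likelihood ratio of a fair-coin process against a restricted family of regular generators.

```python
import math

# Toy sketch (my own simplification, NOT the paper's model): subjective
# randomness as the log-likelihood ratio of a fair-coin process versus a
# restricted regularity family (repetitions of motifs up to length 2).

def randomness_score(seq):
    n = len(seq)
    p_random = 0.5 ** n                     # fair coin gives 2^-n to any sequence
    motifs = ['0', '1', '01', '10']         # assumed regularity set
    # a regular generator repeats one motif; uniform prior over motifs
    p_regular = sum((m * n)[:n] == seq for m in motifs) / len(motifs)
    p_regular = max(p_regular, 1e-12)       # avoid log(0) for unmatched sequences
    return math.log2(p_random / p_regular)  # lower score = "seems less random"
```

    On this score, eight heads in a row ('11111111') falls far below an irregular sequence, matching the intuition described in the abstract.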

  13. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  14. Toward a Principled Sampling Theory for Quasi-Orders

    PubMed Central

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
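    The naive baseline that the paper improves on can be sketched as follows: add random pairs, then force reflexivity and transitivity by closure. The closure step is exactly what biases the sample, which the authors' inductive correction algorithms are designed to avoid (the edge probability and item count here are illustrative):

```python
import random
from itertools import product

# Sketch of the BIASED baseline for generating a random quasi-order
# (reflexive + transitive relation): add pairs at random, then take the
# transitive closure. The closure step skews the distribution; the paper's
# doubly inductive procedure avoids exactly this bias.

def random_quasi_order(n, p=0.3, seed=None):
    rng = random.Random(seed)
    rel = {(i, i) for i in range(n)}                    # reflexivity
    rel |= {(i, j) for i, j in product(range(n), repeat=2)
            if i != j and rng.random() < p}
    changed = True
    while changed:                                      # transitive closure
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if (i, j) in rel and (j, k) in rel and (i, k) not in rel:
                rel.add((i, k))
                changed = True
    return rel
```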

  16. Fast Demand Forecast of Electric Vehicle Charging Stations for Cell Phone Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majidpour, Mostafa; Qiu, Charlie; Chung, Ching-Yen

    This paper describes the core cellphone application algorithm which has been implemented for the prediction of energy consumption at Electric Vehicle (EV) Charging Stations at UCLA. For this interactive user application, the total time of accessing the database, processing the data and making the prediction needs to be within a few seconds. We analyze four relatively fast machine-learning-based time series prediction algorithms for our prediction engine: Historical Average, k-Nearest Neighbor, Weighted k-Nearest Neighbor, and Lazy Learning. The Nearest Neighbor algorithm (k-Nearest Neighbor with k=1) shows better performance and is selected as the prediction algorithm implemented for the cellphone application. Two applications have been designed on top of the prediction algorithm: one predicts the expected available energy at the station and the other predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction, is about one second for both applications.
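    The selected predictor (k-Nearest Neighbor with k = 1) can be sketched on a univariate series as follows; the lag-window framing and the window length are illustrative assumptions, not details of the UCLA implementation:

```python
# Minimal sketch of 1-nearest-neighbour time series forecasting (assumed
# lag-window formulation; window length is arbitrary): predict the next
# value as the continuation of the most similar past window.

def knn1_forecast(history, window=4):
    """Return the value that followed the past window most similar
    (in squared Euclidean distance) to the most recent window."""
    query = history[-window:]
    best_d, best_next = float('inf'), history[-1]
    for i in range(len(history) - window):
        cand = history[i:i + window]
        d = sum((a - b) ** 2 for a, b in zip(cand, query))
        if d < best_d:
            best_d, best_next = d, history[i + window]
    return best_next
```

    On a perfectly periodic series the nearest past window matches exactly, so the forecast continues the period.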

  17. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    NASA Technical Reports Server (NTRS)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  19. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the line connecting each agent to the central point, which produces a vortex around the temporary central point. This random jump also helps cope with premature convergence, a known weakness of swarm-based optimization methods. The most important advantages of this algorithm are as follows. First, it involves a stochastic type of search with deterministic convergence. Second, because gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters. Finally, SBFO has a strong certainty of convergence, which is rare among existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions is also presented in this article.

  20. Evidence-Based Guideline: Treatment of Convulsive Status Epilepticus in Children and Adults: Report of the Guideline Committee of the American Epilepsy Society

    PubMed Central

    Shinnar, Shlomo; Gloss, David; Alldredge, Brian; Arya, Ravindra; Bainbridge, Jacquelyn; Bare, Mary; Bleck, Thomas; Dodson, W. Edwin; Garrity, Lisa; Jagoda, Andy; Lowenstein, Daniel; Pellock, John; Riviello, James; Sloan, Edward; Treiman, David M.

    2016-01-01

    CONTEXT: The optimal pharmacologic treatment for early convulsive status epilepticus is unclear. OBJECTIVE: To analyze efficacy, tolerability and safety data for anticonvulsant treatment of children and adults with convulsive status epilepticus and use this analysis to develop an evidence-based treatment algorithm. DATA SOURCES: Structured literature review using MEDLINE, Embase, Current Contents, and Cochrane library supplemented with article reference lists. STUDY SELECTION: Randomized controlled trials of anticonvulsant treatment for seizures lasting longer than 5 minutes. DATA EXTRACTION: Individual studies were rated using predefined criteria and these results were used to form recommendations, conclusions, and an evidence-based treatment algorithm. RESULTS: A total of 38 randomized controlled trials were identified, rated and contributed to the assessment. Only four trials were considered to have class I evidence of efficacy. Two studies were rated as class II and the remaining 32 were judged to have class III evidence. In adults with convulsive status epilepticus, intramuscular midazolam, intravenous lorazepam, intravenous diazepam and intravenous phenobarbital are established as efficacious as initial therapy (Level A). Intramuscular midazolam has superior effectiveness compared to intravenous lorazepam in adults with convulsive status epilepticus without established intravenous access (Level A). In children, intravenous lorazepam and intravenous diazepam are established as efficacious at stopping seizures lasting at least 5 minutes (Level A) while rectal diazepam, intramuscular midazolam, intranasal midazolam, and buccal midazolam are probably effective (Level B). No significant difference in effectiveness has been demonstrated between intravenous lorazepam and intravenous diazepam in adults or children with convulsive status epilepticus (Level A). 
Respiratory and cardiac symptoms are the most commonly encountered treatment-emergent adverse events associated with intravenous anticonvulsant drug administration in adults with convulsive status epilepticus (Level A). The rate of respiratory depression in patients with convulsive status epilepticus treated with benzodiazepines is lower than in patients with convulsive status epilepticus treated with placebo indicating that respiratory problems are an important consequence of untreated convulsive status epilepticus (Level A). When both are available, fosphenytoin is preferred over phenytoin based on tolerability but phenytoin is an acceptable alternative (Level A). In adults, compared to the first therapy, the second therapy is less effective while the third therapy is substantially less effective (Level A). In children, the second therapy appears less effective and there are no data about third therapy efficacy (Level C). The evidence was synthesized into a treatment algorithm. CONCLUSIONS: Despite the paucity of well-designed randomized controlled trials, practical conclusions and an integrated treatment algorithm for the treatment of convulsive status epilepticus across the age spectrum (infants through adults) can be constructed. Multicenter, multinational efforts are needed to design, conduct and analyze additional randomized controlled trials that can answer the many outstanding clinically relevant questions identified in this guideline. PMID:26900382

  1. A New Aloha Anti-Collision Algorithm Based on CDMA

    NASA Astrophysics Data System (ADS)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems. Collisions compromise the integrity of data transmission during communication in the RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access technology, and can effectively reduce the collision probability between tags. For the same number of tags, the algorithm reduces reader recognition time and improves overall system throughput.
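    For context, one round of the frame-slotted Aloha baseline (without the paper's grouping or CDMA extension) can be simulated in a few lines; the slot and tag counts are arbitrary:

```python
import random

# One round of plain frame-slotted Aloha (the baseline the paper extends
# with grouping and CDMA): each tag picks a slot uniformly at random;
# slots holding exactly one tag succeed, slots with more than one collide.

def aloha_round(n_tags, n_slots, seed=None):
    rng = random.Random(seed)
    slots = [0] * n_slots
    for _ in range(n_tags):
        slots[rng.randrange(n_slots)] += 1
    identified = sum(1 for s in slots if s == 1)   # successful singletons
    collided = sum(1 for s in slots if s > 1)      # collision slots
    return identified, collided
```

    Spreading tags in a second dimension (CDMA codes) lets several tags share a slot, which is how the proposed algorithm lowers the collision probability.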

  2. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density functions and covariances that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. Moreover, the proposed model can be used in both stochastic and deterministic situations.

  3. Symmetric Stream Cipher using Triple Transposition Key Method and Base64 Algorithm for Security Improvement

    NASA Astrophysics Data System (ADS)

    Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur

    2017-12-01

    Symmetric cryptography algorithms are known to have more weaknesses in the encryption process than asymmetric algorithms. A symmetric stream cipher works by XORing the plaintext with a key stream. To improve the security of the symmetric stream cipher, this work introduces a Triple Transposition Key, developed from the transposition cipher, and applies the Base64 algorithm as the final encoding step. Experiments show that the resulting ciphertext is of good quality and highly random.
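    The overall pipeline (XOR stream cipher followed by Base64 as the final encoding step) can be sketched as below. Note that the triple transposition key schedule itself is simplified to a plain repeating key here, so this is only the skeleton of the scheme, not the proposed cipher:

```python
import base64
from itertools import cycle

# Skeleton of the pipeline only: XOR stream cipher, then Base64 encoding.
# The paper's Triple Transposition Key schedule is replaced by a plain
# repeating key, so this sketch is NOT the proposed scheme.

def encrypt(plaintext: bytes, key: bytes) -> str:
    stream = bytes(p ^ k for p, k in zip(plaintext, cycle(key)))
    return base64.b64encode(stream).decode('ascii')   # final encoding step

def decrypt(token: str, key: bytes) -> bytes:
    stream = base64.b64decode(token)
    return bytes(c ^ k for c, k in zip(stream, cycle(key)))
```

    Because XOR is its own inverse, decrypting with the same key stream recovers the plaintext exactly.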

  4. smallWig: parallel compression of RNA-seq WIG files.

    PubMed

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order of magnitude improvements compared with bigWig and ensures compression rates only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. 
To substantiate this claim, we performed a statistical analysis of expression data in different transform domains and developed accompanying entropy coding methods that bridge the gap between theoretical and practical WIG file compression rates. We tested different variants of the smallWig compression algorithm on a number of integer- and real-valued (floating point) RNA-seq WIG files generated by the ENCODE project. The results reveal that, on average, smallWig offers 18-fold compression rate improvements, up to 2.5-fold compression time improvements, and 1.5-fold decompression time improvements when compared with bigWig. On the tested files, the memory usage of the algorithm never exceeded 90 KB. When more elaborate context mixing compressors were used within smallWig, the obtained compression rates were as much as 23 times better than those of bigWig. For smallWig used in the random query mode, which also supports retrieval of the summary statistics, an overhead in the compression rate of roughly 3-17% was introduced depending on the chosen system parameters. An increase in encoding and decoding time of 30% and 55% represents an additional performance loss caused by enabling random data access. We also implemented smallWig using multi-processor programming. This parallelization feature decreases the encoding delay 2-3.4 times compared with that of a single-processor implementation, with the number of processors used ranging from 2 to 8; in the same parameter regime, the decoding delay decreased 2-5.2 times. The smallWig software can be downloaded from: http://stanford.edu/~zhiyingw/smallWig/smallwig.html, http://publish.illinois.edu/milenkovic/, http://web.stanford.edu/~tsachy/. zhiyingw@stanford.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
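    The block-based random-access idea behind smallWig's query mode can be illustrated generically (zlib per block and a fixed block size are stand-ins for the paper's actual source coders and file format):

```python
import zlib

# Generic sketch of block-based compression with random access (the idea
# behind smallWig's query mode, NOT its actual format): compress fixed-size
# blocks independently so any byte range can be read by decompressing only
# the blocks that cover it.

BLOCK = 1024  # illustrative block size

def compress_blocks(data: bytes, block_size=BLOCK):
    return [zlib.compress(data[i:i + block_size])
            for i in range(0, len(data), block_size)]
    # the list position doubles as the random-access index

def read_range(blocks, offset, length, block_size=BLOCK):
    first, last = offset // block_size, (offset + length - 1) // block_size
    chunk = b''.join(zlib.decompress(blocks[b]) for b in range(first, last + 1))
    start = offset - first * block_size
    return chunk[start:start + length]
```

    The per-block headers are the "minor overhead in the compression rate" the abstract mentions: smaller blocks mean faster queries but worse compression.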

  5. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
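    The flavor of the randomized decomposition can be shown with a plain randomized SVD range-finder. The paper uses a randomized *generalized* SVD inside an iteratively reweighted least-squares loop, which this sketch does not reproduce; the oversampling amount is an assumed convention:

```python
import numpy as np

# Sketch of a randomized SVD range-finder (illustrating the dimension-
# reduction idea; the paper's randomized GENERALIZED SVD inside an IRLS
# loop is not reproduced here).

def randomized_svd(A, rank, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, rank + oversample))  # sample range of A
    Q, _ = np.linalg.qr(Y)                               # orthonormal basis
    B = Q.T @ A                                          # small projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```

    For a matrix of exact rank r, the projected problem recovers the factorization to machine precision while only ever decomposing a (rank + oversample)-sized matrix.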

  6. Combinatorial approximation algorithms for MAXCUT using random walks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Kale, Satyen

    We give the first combinatorial approximation algorithm for MaxCut that beats the trivial 0.5 factor by a constant. The main partitioning procedure is very intuitive, natural, and easily described. It essentially performs a number of random walks and aggregates the information to provide the partition. We can control the running time to get an approximation factor-running time tradeoff. We show that for any constant b > 1.5, there is an Õ(n^b) algorithm that outputs a (0.5 + δ)-approximation for MaxCut, where δ = δ(b) is some positive constant. One of the components of our algorithm is a weak local graph partitioning procedure that may be of independent interest. Given a starting vertex i and a conductance parameter φ, unless a random walk of length ℓ = O(log n) starting from i mixes rapidly (in terms of φ and ℓ), we can find a cut of conductance at most φ close to the vertex. The work done per vertex found in the cut is sublinear in n.

  7. Multivariate analysis and extraction of parameters in resistive RAMs using the Quantum Point Contact model

    NASA Astrophysics Data System (ADS)

    Roldán, J. B.; Miranda, E.; González-Cordero, G.; García-Fernández, P.; Romero-Zaliz, R.; González-Rodelas, P.; Aguilera, A. M.; González, M. B.; Jiménez-Molinos, F.

    2018-01-01

    A multivariate analysis of the parameters that characterize the reset process in Resistive Random Access Memory (RRAM) has been performed. The different correlations obtained can help to shed light on the current components that contribute to the Low Resistance State (LRS) of the technology considered. In addition, a screening method for the Quantum Point Contact (QPC) current component is presented. For this purpose, the second derivative of the current has been obtained using a novel numerical method which allows determining the QPC model parameters. Once the procedure is completed, a whole Resistive Switching (RS) series of thousands of curves is studied by means of a genetic algorithm. The extracted QPC parameter distributions are characterized in depth to obtain information about the filamentary pathways associated with the LRS in the low-voltage conduction regime.

  8. A compact Acousto-Optic Lens for 2D and 3D femtosecond based 2-photon microscopy.

    PubMed

    Kirkby, Paul A; Srinivas Nadella, K M Naga; Silver, R Angus

    2010-06-21

    We describe a high speed 3D Acousto-Optic Lens Microscope (AOLM) for femtosecond 2-photon imaging. By optimizing the design of the 4 AO Deflectors (AODs) and by deriving new control algorithms, we have developed a compact spherical AOL with a low temporal dispersion that enables 2-photon imaging at 10-fold lower power than previously reported. We show that the AOLM can perform high speed 2D raster-scan imaging (>150 Hz) without scan rate dependent astigmatism. It can deflect and focus a laser beam in a 3D random access sequence at 30 kHz and has an extended focusing range (>137 μm; 40X 0.8NA objective). These features are likely to make the AOLM a useful tool for studying fast physiological processes distributed in 3D space.

  9. Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.

    PubMed

    Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R

    2015-01-05

    Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure-activity relationship and quantitative structure-activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface or upload his or her own SD files which contain structures and activity information to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. © 2014 Wiley Periodicals, Inc.

  10. Conditional Random Field-Based Offline Map Matching for Indoor Environments

    PubMed Central

    Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram

    2016-01-01

    In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892

  11. Conditional Random Field-Based Offline Map Matching for Indoor Environments.

    PubMed

    Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram

    2016-08-16

    In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm.

  12. Robust local search for spacecraft operations using adaptive noise

    NASA Technical Reports Server (NTRS)

    Fukunaga, Alex S.; Rabideau, Gregg; Chien, Steve

    2004-01-01

    Randomization is a standard technique for improving the performance of local search algorithms for constraint satisfaction. However, it is well known that local search algorithms are sensitive to the noise values selected. We investigate the use of an adaptive noise mechanism in an iterative repair-based planner/scheduler for spacecraft operations. Preliminary results indicate that adaptive noise makes the use of randomized repair moves safe and robust; that is, it makes it possible to consistently achieve performance comparable with the best tuned noise setting without the need for manually tuning the noise parameter.
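
    As an illustration of the adaptive-noise idea (not the planner described above), here is a sketch of a WalkSAT-style repair search whose noise parameter rises when the search stagnates and falls after progress; the update constants (0.8, 0.05, the 50-flip stagnation window) are made up for the example:

```python
import random

def adaptive_walksat(clauses, num_vars, max_flips=20000, seed=0):
    """WalkSAT-style local search with an adaptive noise parameter:
    noise is reduced after progress and increased on stagnation.
    Clauses are lists of signed 1-based literals."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(num_vars + 1)]  # 1-indexed

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    noise, best, last_improved = 0.2, len(clauses) + 1, 0
    for flip in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign
        if len(unsat) < best:
            best, last_improved = len(unsat), flip
            noise = max(0.05, noise * 0.8)      # progress: reduce noise
        elif flip - last_improved > 50:
            noise = min(0.5, noise + 0.05)      # stagnation: add noise
            last_improved = flip
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))       # random-walk repair move
        else:
            def broken_after_flipping(v):       # greedy repair move
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not satisfied(c))
                assign[v] = not assign[v]
                return count
            var = min({abs(lit) for lit in clause}, key=broken_after_flipping)
        assign[var] = not assign[var]
    return None

# A small satisfiable formula: (x1 v x2)(~x1 v x3)(~x2 v ~x3)(x1 v ~x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
model = adaptive_walksat(clauses, num_vars=3)
print(model is not None)
```

The point of the mechanism is the same as in the abstract: the noise level tunes itself during the run instead of being fixed by hand in advance.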

  13. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things

    PubMed Central

    Yi, Meng; Chen, Qingkui; Xiong, Neal N.

    2016-01-01

    This paper considers the distributed access and control problem of massive wireless sensor networks’ data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and makes full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, experimental results demonstrate that this mechanism outperforms existing schemes in terms of enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and resource utilization. PMID:27827878

  14. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    The algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., an image of the local environment) and the output image (randomly generated), respectively. The algorithm randomly selects a pixel from the output image and cycles its gray level through all values. For each value, G_OUT is updated. The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a ...
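
    The GLCMs in this snippet tally how often pairs of gray levels co-occur at a fixed pixel offset. A minimal pure-Python sketch (the 3x3 image, two gray levels, and horizontal offset are illustrative choices):

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix: m[i][j] counts how often a pixel
    with gray level i has a pixel with gray level j at offset (dx, dy)."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
print(glcm(img, levels=2))   # [[1, 2], [1, 2]]
```

A texture synthesizer of the kind described above would compare such matrices for the input and output images and keep the pixel value that makes them agree most closely.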

  15. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  16. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
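
    The classical random walk on spheres step, on which the estimator above builds, jumps to a uniformly random point on the largest sphere around the current position that fits inside the domain, and stops once the walk is within a tolerance of the boundary. A sketch for the Laplace equation on the unit disk (plain WoS only, without the paper's reflection and variance-reduction techniques):

```python
import math
import random

def walk_on_spheres(x, y, g, n_samples=20000, eps=1e-3, seed=0):
    """Estimate the harmonic function on the unit disk with Dirichlet
    boundary data g at the point (x, y): average g over the exit points
    of walks that repeatedly jump to a uniform point on the largest
    circle around the current position contained in the disk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)        # distance to the boundary
            if r < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
        norm = math.hypot(px, py)               # project onto the circle
        total += g(px / norm, py / norm)
    return total / n_samples

# u(x, y) = x is harmonic, so the estimate at (0.3, 0) should be near 0.3
print(walk_on_spheres(0.3, 0.0, lambda bx, by: bx, n_samples=5000))
```

Each walk is independent, which is what makes the method embarrassingly parallel: samples can be split across workers and their averages combined.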

  17. Research on electricity consumption forecast based on mutual information and random forests algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu

    2018-02-01

    Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the relevant factors. In this paper, mutual information from information theory and the random forests algorithm from artificial intelligence are introduced into medium- and long-term electricity demand prediction. Mutual information can identify highly related factors based on the average mutual information between candidate variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm is then used to build a forecasting model for each industry according to its correlated factors. Electricity consumption data from Jiangsu Province is taken as a practical example, and the above methods are compared with methods that disregard mutual information and industry segmentation. The simulation results show that the method is scientific, effective, and provides higher prediction accuracy.
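
    The average mutual information used for factor screening can be estimated from empirical frequencies. A minimal sketch for discrete-valued variables (the toy series are illustrative, not the Jiangsu data):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits: sum over observed pairs of
    p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

xs = [0, 0, 1, 1] * 25
ys = [1 - x for x in xs]          # fully determined by xs
zs = [0, 1, 0, 1] * 25            # balanced and independent of xs
print(round(mutual_information(xs, ys), 6))   # 1.0 (the entropy of xs)
print(round(mutual_information(xs, zs), 6))   # 0.0
```

Ranking candidate variables by this score and keeping the high-scoring ones is the screening step; the retained factors then feed the per-industry forecasting model.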

  18. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  19. LP-LPA: A link influence-based label propagation algorithm for discovering community structures in networks

    NASA Astrophysics Data System (ADS)

    Berahmand, Kamal; Bouyer, Asgarali

    2018-03-01

    Community detection is an essential approach for analyzing the structural and functional properties of complex networks. Although many community detection algorithms have been presented recently, most of them are weak or limited in different ways. The Label Propagation Algorithm (LPA) is a well-known and efficient community detection technique, characterized by nearly linear running time and easy implementation. However, LPA has significant problems such as instability, randomness, and monster-community formation. In this paper, an algorithm named the node label influence policy for label propagation algorithm (LP-LPA) is proposed for detecting community structures efficiently. LP-LPA measures a link strength value for edges and a label influence value for nodes, and uses them in a new label propagation strategy that prefers strong links, selects initial nodes deliberately, avoids random behavior in tie-break states, and applies an efficient update order and update rule. These procedures resolve the randomness issue of the original LPA and stabilize the discovered communities across all runs on the same network. Experiments on synthetic networks and a wide range of real-world social networks indicate that the proposed method achieves significant accuracy and high stability. In particular, it can clearly mitigate the monster community problem when detecting communities in networks.
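
    For reference, plain LPA can be sketched in a few lines; the deterministic min-label tiebreak below stands in for the random tiebreak whose instability LP-LPA is designed to remove (the toy graph of two triangles is illustrative):

```python
from collections import Counter

def label_propagation(adj, max_rounds=20):
    """Plain synchronous LPA: every node adopts the most frequent label
    among its neighbours; ties are broken deterministically by taking
    the smallest label (original LPA breaks ties at random, which is
    exactly the source of run-to-run instability)."""
    labels = {v: v for v in adj}
    for _ in range(max_rounds):
        new = {}
        for v in adj:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            new[v] = min(l for l, c in counts.items() if c == top)
        if new == labels:
            break
        labels = new
    return labels

# Two triangles joined by the single edge (2, 3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
print(labels)
```

On this graph the propagation settles into one label per triangle; on denser graphs with many ties, the choice of tiebreak and update order changes the outcome, which motivates LP-LPA's link-strength and label-influence refinements.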

  20. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated by the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the aperture shapes and the beam intensities alternately. Specifically, the aperture shapes are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotone line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  1. Efficient Learning Algorithms with Limited Information

    ERIC Educational Resources Information Center

    De, Anindya

    2013-01-01

    The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…

  2. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    PubMed

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). 
The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections). A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilisation. The methodology can also be applied to other areas of clinical care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
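
    The validation statistics quoted above follow directly from a 2x2 confusion matrix; a minimal sketch (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a
    2x2 confusion matrix (positives = respiratory consultations)."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true positives found
        "specificity": tn / (tn + fp),   # fraction of true negatives kept
        "ppv": tp / (tp + fp),           # precision of positive labels
    }

# Illustrative counts only, not the study's data
print(diagnostic_metrics(tp=180, fp=14, fn=70, tn=936))
```

Against a gold-standard set of expert-classified records, tp/fp/fn/tn are the agreement counts between the algorithm's labels and the clinicians' labels.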

  3. A Hybrid Search Algorithm for Swarm Robots Searching in an Unknown Environment

    PubMed Central

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency. PMID:25386855

  4. A hybrid search algorithm for swarm robots searching in an unknown environment.

    PubMed

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency.

  5. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    PubMed

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
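
    The key idea, seeding K-Means with points from dense, well-separated regions, can be sketched as follows (the density estimate, radius parameter, and toy data are simplifications for illustration, not the authors' exact procedure):

```python
def dense_separated_centroids(points, k, radius):
    """Deterministic centroid seeding: rank points by local density
    (number of neighbours within `radius`), then greedily keep dense
    points that are at least `radius` apart from the seeds chosen so
    far. Replaces K-Means' random initialization, so repeated runs on
    the same data give the same clustering."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    density = [sum(dist2(p, q) <= radius ** 2 for q in points) for p in points]
    order = sorted(range(len(points)), key=lambda i: -density[i])
    centroids = []
    for i in order:
        if all(dist2(points[i], c) > radius ** 2 for c in centroids):
            centroids.append(points[i])
        if len(centroids) == k:
            break
    return centroids

# Two well-separated blobs: one seed lands in each blob
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
       (5.0, 5.0), (5.1, 5.2), (5.2, 5.1)]
print(dense_separated_centroids(pts, k=2, radius=1.0))
```

Standard Lloyd iterations can then start from these seeds; because the seeding is deterministic, the whole pipeline gives stable, comparable results across executions.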

  6. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs. It allows diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter-correction algorithms is considered, along with an algorithm for modeling PET projection data acquisition used to verify the corrections.

  7. Personalized Risk Prediction in Clinical Oncology Research: Applications and Practical Issues Using Survival Trees and Random Forests.

    PubMed

    Hu, Chen; Steingrimsson, Jon Arni

    2018-01-01

    A crucial component of making individualized treatment decisions is to accurately predict each patient's disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degrees of missing covariates.

  8. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it provides a broad stage for the application of artificial intelligence theories and methods. Because sustainable city design is essentially a constrained optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a new swarm intelligence algorithm. Its inspiration comes from the “teaching” and “learning” behavior of a classroom: the evolution of the population is realized by simulating the teacher “teaching” the students and the students “learning” from each other. It has few parameters, is efficient, conceptually simple, and easy to implement. It has been successfully applied to scheduling, planning, configuration and other fields with good results, and it has attracted increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement a random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has clear advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
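
    The teacher and learner phases of TLBO can be sketched as below; the population size, iteration count, and greedy acceptance rule are illustrative choices, and the test function is the standard sphere benchmark rather than the urban planning model:

```python
import random

def tlbo(fitness, bounds, pop_size=10, iters=80, seed=0):
    """Minimal TLBO sketch for minimization. Teacher phase: pull each
    learner toward the best solution and away from the class mean.
    Learner phase: each learner moves toward (or away from) a randomly
    chosen classmate. Moves are kept only if they improve fitness."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, d):
        lo, hi = bounds[d]
        return max(lo, min(hi, x))

    pop = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        # teacher phase
        teacher = min(pop, key=fitness)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        for i, p in enumerate(pop):
            tf = rng.choice([1, 2])                      # teaching factor
            cand = [clip(p[d] + rng.random() * (teacher[d] - tf * mean[d]), d)
                    for d in range(dim)]
            if fitness(cand) < fitness(p):
                pop[i] = cand
        # learner phase
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            a, b = pop[i], pop[j]
            sign = 1 if fitness(a) < fitness(b) else -1
            cand = [clip(a[d] + sign * rng.random() * (a[d] - b[d]), d)
                    for d in range(dim)]
            if fitness(cand) < fitness(a):
                pop[i] = cand
    return min(pop, key=fitness)

sphere = lambda x: sum(v * v for v in x)     # benchmark, optimum 0 at origin
best = tlbo(sphere, bounds=[(-5, 5), (-5, 5)])
print(sphere(best))
```

A TLBO_LS-style variant would add a local search step around the best learner after each generation; the greedy acceptance above already guarantees that the population never worsens.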

  9. Rotorcraft Blade Mode Damping Identification from Random Responses Using a Recursive Maximum Likelihood Algorithm

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.

    1982-01-01

    An on line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulation random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to .05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.

  10. Improving Hip-Worn Accelerometer Estimates of Sitting Using Machine Learning Methods.

    PubMed

    Kerr, Jacqueline; Carlson, Jordan; Godbole, Suneeta; Cadmus-Bertram, Lisa; Bellettiere, John; Hartman, Sheri

    2018-02-13

    To improve estimates of sitting time from hip-worn accelerometers used in large cohort studies by employing machine learning methods developed on free-living activPAL data. Thirty breast cancer survivors concurrently wore a hip-worn accelerometer and a thigh-worn activPAL for 7 days. A random forest classifier, trained on the activPAL data, was employed to detect sitting, standing and sit-stand transitions in 5-second windows in the hip-worn accelerometer. The classifier estimates were compared to the standard accelerometer cut point and significant differences across different bout lengths were investigated using mixed effect models. Overall, the algorithm predicted the postures with moderate accuracy (stepping 77%, standing 63%, sitting 67%, sit to stand 52% and stand to sit 51%). Daily level analyses indicated that errors in transition estimates were only occurring during sitting bouts of 2 minutes or less. The standard cut point was significantly different from the activPAL across all bout lengths, overestimating short bouts and underestimating long bouts. This is among the first algorithms for sitting and standing for hip-worn accelerometer data to be trained from entirely free-living activPAL data. The new algorithm detected prolonged sitting which has been shown to be most detrimental to health. Further validation and training in larger cohorts is warranted. This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  11. Software for Generating Strip Maps from SAR Data

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Michel, Thierry; Madsen, Soren; Chapin, Elaine; Rodriguez, Ernesto

    2004-01-01

    Jurassicprok is a computer program that generates strip-map digital elevation models and other data products from raw data acquired by an airborne synthetic-aperture radar (SAR) system. This software can process data from a variety of airborne SAR systems but is designed especially for the GeoSAR system, which is a dual-frequency (P- and X-band), single-pass interferometric SAR system for measuring elevation both at the bare ground surface and top of the vegetation canopy. Jurassicprok is a modified version of software developed previously for airborne-interferometric-SAR applications. The modifications were made to accommodate P-band interferometric processing, remove approximations that are not generally valid, and reduce processor-induced mapping errors to the centimeter level. Major additions and other improvements over the prior software include the following: a) A new, highly efficient multi-stage-modified wave-domain processing algorithm for accurately motion compensating ultra-wideband data; b) Adaptive regridding algorithms based on estimated noise and actual measured topography to reduce noise while maintaining spatial resolution; c) Exact expressions for height determination from interferogram data; d) Fully calibrated volumetric correlation data based on rigorous removal of geometric and signal-to-noise decorrelation terms; e) Strip range-Doppler image output in user-specified Doppler coordinates; f) An improved phase-unwrapping and absolute-phase-determination algorithm; g) A more flexible user interface with many additional processing options; h) Increased interferogram filtering options; and i) Ability to use disk space instead of random-access memory for some processing steps.

  12. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  13. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches

    PubMed Central

    Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils’ carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms: random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms—including the model tuning and predictor selection—were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged between 0.2 to 17.7 kg m-2, displaying a huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models’ predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Selecting predictors based on their individual performance was outperformed by the two procedures that accounted for predictor interaction. PMID:27128736

  14. Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.

    PubMed

    Rajan, K; Deo, N

    1999-09-01

    Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all the n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the n-choose-4 quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
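
    The triangle-inequality stage of bound smoothing (the simpler precursor to the tetrangle pass described above) can be sketched as a Floyd-Warshall-style sweep. This is an illustrative serial version, not the paper's parallel CREW PRAM algorithm:

```python
INF = float("inf")

def triangle_smooth(lower, upper):
    """Tighten pairwise distance bounds in place using the triangle
    inequality: for all k, d(i,j) <= d(i,k) + d(k,j) bounds the upper
    limit, and d(i,j) >= d(i,k) - d(k,j) bounds the lower limit.
    `lower` and `upper` are symmetric n x n matrices; unmeasured pairs
    start at 0 and infinity."""
    n = len(upper)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Upper bound: path through atom k.
                if upper[i][k] + upper[k][j] < upper[i][j]:
                    upper[i][j] = upper[i][k] + upper[k][j]
                # Lower bound: reverse triangle inequality via atom k.
                lb = max(lower[i][k] - upper[k][j],
                         lower[k][j] - upper[i][k])
                if lb > lower[i][j]:
                    lower[i][j] = lb
    return lower, upper
```

    The tetrangle pass plays the same role over quadruples of atoms, using Cayley-Menger determinants instead of these pairwise sums.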

  15. An algorithm of adaptive scale object tracking in occlusion

    NASA Astrophysics Data System (ADS)

    Zhao, Congmei

    2017-05-01

    Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still some problems in handling scale variations, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector was proposed. The tracking task was decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features were fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier was trained to re-acquire the target after it was lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach could estimate the object state accurately and handle object occlusion effectively.

  16. Near-optimal matrix recovery from random linear measurements.

    PubMed

    Romanov, Elad; Gavish, Matan

    2018-06-25

    In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix X0 from n measurements y_i = <A_i, X0>, where each A_i is an M-by-N measurement matrix with i.i.d. random entries, i = 1, ..., n. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object X0 to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the (r, n) plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.

  17. Garnet Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1995-01-01

    Random-access memory (RAM) devices of proposed type exploit magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout used. Provides nonvolatile storage and resists damage by ionizing radiation. Because of basic architecture and pinout requirements, most likely useful as small-capacity memory devices.

  18. Development of Curie point switching for thin film, random access, memory device

    NASA Technical Reports Server (NTRS)

    Lewicki, G. W.; Tchernev, D. I.

    1967-01-01

    Manganese bismuthide films are used in the development of a random access memory device of high packing density and nondestructive readout capability. Memory entry is by Curie point switching using a laser beam. Readout is accomplished by microoptical or micromagnetic scanning.

  19. Probability machines: consistent probability estimation using nonparametric learning machines.

    PubMed

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
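
    The probability-machine idea can be illustrated with the nearest-neighbour variant: estimate an individual probability P(Y=1 | X=x) as the fraction of positive labels among the k nearest training points. A toy 1-D sketch of that estimator, not the R packages referenced in the abstract:

```python
def knn_probability(train_x, train_y, x, k=3):
    """k-nearest-neighbour probability machine for a binary response:
    rank training points by distance to x and return the fraction of
    positive labels among the k closest. Consistency holds as k grows
    with the sample size (k/n -> 0)."""
    ranked = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    top = ranked[:k]
    return sum(train_y[i] for i in top) / k
```

    A random forest plays the same role by averaging per-tree class votes, which is why its vote proportions can be read as consistent probability estimates.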

  20. Optimal Quantum Spatial Search on Random Temporal Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser

    2017-12-01

    To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs G(n, p), where p is the probability that any two given nodes are connected: After every time interval τ, a new graph G(n, p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs are sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.

  1. A random forest algorithm for nowcasting of intense precipitation events

    NASA Astrophysics Data System (ADS)

    Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh

    2017-09-01

    Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest-based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set, and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather condition ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and it can be utilized in other forecasting problems.
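
    The class-balancing step can be illustrated with naive random oversampling. Note this merely duplicates minority examples, whereas the SMOTE technique used in the paper additionally interpolates between minority neighbours:

```python
import random

def oversample_minority(X, y, seed=0):
    """Balance a binary data set by randomly duplicating minority-class
    examples until both classes have equal counts. A simpler stand-in
    for SMOTE, which synthesizes new points rather than duplicating."""
    rng = random.Random(seed)
    pos = [i for i, lab in enumerate(y) if lab == 1]
    neg = [i for i, lab in enumerate(y) if lab == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(y))) + extra
    return [X[i] for i in idx], [y[i] for i in idx]
```

    Balancing matters here because intense-precipitation events are rare, so an unbalanced training set would bias the forest toward the "no event" class.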

  2. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, kernel CCA, has been proposed to describe nonlinear relationships between two sets of variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it is also prone to the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results imply the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
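
    The underlying randomized Kaczmarz iteration for a consistent linear system Ax = b samples row i with probability proportional to ||a_i||^2 and projects the current iterate onto that row's solution hyperplane. A plain (non-kernel) sketch of the row-action idea the paper lifts to kernel CCA:

```python
import random

def randomized_kaczmarz(A, b, iters=500, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b:
    x <- x + (b_i - <a_i, x>) / ||a_i||^2 * a_i, with row i drawn
    with probability proportional to its squared norm."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    norms = [sum(a * a for a in row) for row in A]
    x = [0.0] * n
    for _ in range(iters):
        i = rng.choices(range(m), weights=norms)[0]
        row = A[i]
        # Residual of row i, scaled by the row's squared norm.
        step = (b[i] - sum(row[j] * x[j] for j in range(n))) / norms[i]
        x = [x[j] + step * row[j] for j in range(n)]
    return x
```

    The expected error contracts geometrically at a rate governed by the scaled condition number, which is exactly the quantity the paper's convergence analysis tracks.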

  3. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    DOE PAGES

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. As a result, the usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.

  4. Modal identification of structures from the responses and random decrement signatures

    NASA Technical Reports Server (NTRS)

    Brahim, S. R.; Goglia, G. L.

    1977-01-01

    The theory and application of a method which utilizes the free response of a structure to determine its vibration parameters is described. The time-domain free response is digitized and used in a digital computer program to determine the number of modes excited, the natural frequencies, the damping factors, and the modal vectors. The technique is applied to a complex generalized payload model previously tested using the sine-sweep method and analyzed by NASTRAN. Ten modes of the payload model are identified. In case the free-decay response is not readily available, an algorithm is developed to obtain the free responses of a structure from its random responses, due to some unknown or known random input or inputs, using the random decrement technique without changing the time correlation between signals. The algorithm is tested using random responses from a generalized payload model and from the space shuttle model.
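
    The random decrement idea can be sketched directly: average the response segments that follow each up-crossing of a trigger level, so the random input averages out and an estimate proportional to the free-decay response remains. This is the classical technique in miniature, not the report's algorithm verbatim:

```python
def random_decrement(signal, trigger, seg_len):
    """Random decrement signature: collect the segments of `signal`
    that start at each up-crossing of `trigger`, then average them
    point-by-point. The random forced part cancels in the average,
    leaving an estimate of the free-decay response."""
    segments = []
    for t in range(1, len(signal) - seg_len):
        if signal[t - 1] < trigger <= signal[t]:  # up-crossing of trigger
            segments.append(signal[t:t + seg_len])
    if not segments:
        return []
    return [sum(s[k] for s in segments) / len(segments)
            for k in range(seg_len)]
```

    Modal parameters (frequencies, damping) are then extracted from this signature exactly as they would be from a measured free decay.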

  5. Dynamic defense and network randomization for computer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Adrian R.; Stout, William M. S.; Hamlet, Jason R.

    The various technologies presented herein relate to determining a network attack is taking place, and further to adjust one or more network parameters such that the network becomes dynamically configured. A plurality of machine learning algorithms are configured to recognize an active attack pattern. Notification of the attack can be generated, and knowledge gained from the detected attack pattern can be utilized to improve the knowledge of the algorithms to detect a subsequent attack vector(s). Further, network settings and application communications can be dynamically randomized, wherein artificial diversity converts control systems into moving targets that help mitigate the early reconnaissance stages of an attack. An attack(s) based upon a known static address(es) of a critical infrastructure network device(s) can be mitigated by the dynamic randomization. Network parameters that can be randomized include IP addresses, application port numbers, paths data packets navigate through the network, application randomization, etc.
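
    One ingredient, randomizing application port numbers from a shared seed so that cooperating hosts derive the same mapping, might be sketched as follows. The service names and port range are purely illustrative, not taken from the patent:

```python
import random

def remap_ports(services, seed):
    """Hypothetical moving-target sketch: derive a fresh random port
    for each service from a shared seed. Hosts that share the seed
    (rotated per epoch) compute identical mappings, while an attacker
    scanning static ports sees a moving target."""
    rng = random.Random(seed)
    ports = rng.sample(range(20000, 60000), len(services))
    return dict(zip(services, ports))
```

    In a deployment, the seed would be rotated on a schedule or on attack detection, forcing reconnaissance data to go stale.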

  6. Variable disparity estimation based intermediate view reconstruction in dynamic flow allocation over EPON-based access networks

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo

    2008-06-01

    In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA), and they are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and then 16-view images are synthesized by the IVR using VDE. Some experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can provide upper limits of transmission delays by the flow. The modeling and simulation results, including mathematical analyses, from this scheme are also provided.

  7. Show Me My Health Plans: a study protocol of a randomized trial testing a decision support tool for the federal health insurance marketplace in Missouri.

    PubMed

    Politi, Mary C; Barker, Abigail R; Kaphingst, Kimberly A; McBride, Timothy; Shacham, Enbal; Kebodeaux, Carey S

    2016-02-16

    The implementation of the ACA has improved access to quality health insurance, a necessary first step to improving health outcomes. However, access must be supplemented by education to help individuals make informed choices for plans that meet their individual financial and health needs. Drawing on a model of information processing and on prior research, we developed a health insurance decision support tool called Show Me My Health Plans. Developed with extensive stakeholder input, the current tool (1) simplifies information through plain language and graphics in an educational component; (2) assesses and reviews knowledge interactively to ensure comprehension of key material; (3) incorporates individual and/or family health status to personalize out-of-pocket cost estimates; (4) assesses preferences for plan features; and (5) helps individuals weigh information appropriate to their interests and needs through a summary page with "good fit" plans generated from a tailored algorithm. The current study will evaluate whether the online decision support tool improves health insurance decisions compared to a usual care condition (the healthcare.gov marketplace website). The trial will include 362 individuals (181 in each group) from rural, suburban, and urban settings within a 90-mile radius around St. Louis. Eligibility criteria include English-speaking individuals 18-64 years old who are eligible for the ACA marketplace plans. They will be computer-randomized to view the intervention or usual care condition. Presenting individuals with options that they can understand, tailored to their needs and preferences, could help improve decision quality. By helping individuals narrow down the complexity of health insurance plan options, decision support tools such as this one could prepare individuals to better navigate enrollment in a plan that meets their individual needs. The randomized trial was registered in clinicaltrials.gov (NCT02522624) on August 6, 2015.

  8. A comparison of Percutaneous femoral access in Endovascular Repair versus Open femoral access (PiERO): study protocol for a randomized controlled trial.

    PubMed

    Vierhout, Bastiaan P; Saleem, Ben R; Ott, Alewijn; van Dijl, Jan Maarten; de Kempenaer, Ties D van Andringa; Pierie, Maurice E N; Bottema, Jan T; Zeebregts, Clark J

    2015-09-14

    Access for endovascular repair of abdominal aortic aneurysms (EVAR) is obtained through surgical cutdown or percutaneously. The only devices suitable for percutaneous closure of the 20 French arteriotomies of the common femoral artery (CFA) are the Prostar(™) and Proglide(™) devices (Abbott Vascular). Positive effects of these devices seem to consist of a lower infection rate, and shorter operation time and hospital stay. This conclusion was published in previous reports comparing techniques in patients in two different groups (cohort or randomized). Access techniques were never compared in one and the same patient; this research simplifies comparison because patient characteristics will be similar in both groups. Percutaneous access of the CFA is compared to surgical cutdown in a single patient; in EVAR surgery, access is necessary in both groins in each patient. Randomization is performed on the introduction site of the larger main device of the endoprosthesis. The contralateral device of the endoprosthesis is smaller. When we use this type of randomization, both groups will contain a similar number of main and contralateral devices. Preoperative nose cultures and perineal cultures are obtained, to compare colonization with postoperative wound cultures (in case of a surgical site infection). Furthermore, patient comfort will be considered, using VAS-scores (Visual analog scale). Punch biopsies of the groin will be harvested to retrospectively compare skin of patients who suffered a surgical site infection (SSI) to patients who did not have an SSI. The PiERO trial is a multicenter randomized controlled clinical trial designed to show the consequences of using percutaneous access in EVAR surgery and focuses on the occurrence of surgical site infections. NTR4257 10 November 2013, NL44578.042.13.

  9. PACE: Power-Aware Computing Engines

    DTIC Science & Technology

    2005-02-01

    ...more costly than computation on our test platform, and it is memory access that dominates most lossless data compression algorithms. In fact, even... Performance and implementation concerns: A compression algorithm may be implemented with many different, yet reasonable, data structures (including... Related work: This section discusses data compression for low-bandwidth devices and optimizing algorithms for low energy. Though much work has gone...

  10. Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation

    NASA Astrophysics Data System (ADS)

    Kim, Sunwoo

    This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels, the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.

  11. Is It Ethical for Patents to Be Issued for the Computer Algorithms that Affect Course Management Systems for Distance Learning?

    ERIC Educational Resources Information Center

    Moreau, Nancy

    2008-01-01

    This article discusses the impact of patents for computer algorithms in course management systems. Referring to historical documents and court cases, the positive and negative aspects of software patents are presented. The key argument is the accessibility to algorithms comprising a course management software program such as Blackboard. The…

  12. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially reduces the computational complexity at the expense of only slight performance degradation.

  13. A weighted ℓ₁-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ₁-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ₁-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
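
    The proximal operator behind weighted ℓ₁-minimization is weighted soft-thresholding: a larger weight w_j shrinks the corresponding coefficient c_j harder, which is how prior knowledge of PC coefficient decay is injected. A one-step sketch of that operator (not the full solver or the authors' weighting scheme):

```python
def soft_threshold(c, w, lam):
    """Weighted soft-thresholding, the proximal map of lam * sum_j w_j |c_j|:
    each coefficient is pulled toward zero by lam * w_j and clipped at zero."""
    out = []
    for cj, wj in zip(c, w):
        t = lam * wj
        if cj > t:
            out.append(cj - t)
        elif cj < -t:
            out.append(cj + t)
        else:
            out.append(0.0)
    return out
```

    An iterative solver would alternate a gradient step on the data-fit term with this shrinkage step, with weights chosen large for coefficients expected to decay.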

  14. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.; Lin, Shu

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  15. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds were derived on the symbol error probability as well as the probability of false synchronization that indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  16. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  17. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a stochastic optimisation algorithm recently developed, which hybridises the Harmony Search (HS) method with the concept of swarm intelligence in the particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to monitor the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
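
    The GHSACO specifics above (modified HM, random switching, pheromone-guided selection) are the paper's own; for orientation, the basic Harmony Search improvisation loop they build on can be sketched as follows (parameter values and the sphere-function demo are illustrative assumptions):

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Basic Harmony Search: improvise each variable from harmony memory
    with probability hmcr (pitch-adjusting it with probability par),
    otherwise draw it at random; replace the worst memory member whenever
    the new harmony improves on it."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    vals = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hms)][d]      # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)      # pitch adjustment
            else:
                x = rng.uniform(lo, hi)            # random selection
            new.append(min(max(x, lo), hi))
        v = f(new)
        worst = max(range(hms), key=lambda i: vals[i])
        if v < vals[worst]:
            hm[worst], vals[worst] = new, v
    best = min(range(hms), key=lambda i: vals[i])
    return hm[best], vals[best]
```

    GHSACO replaces the uniform memory consideration with an ACO-style pheromone-weighted choice and switches randomly between the two mechanisms.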

  18. A compressed sensing X-ray camera with a multilayer architecture

    DOE PAGES

    Wang, Zhehui; Laroshenko, O.; Li, S.; ...

    2018-01-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  20. Design and implementation of encrypted and decrypted file system based on USBKey and hardware code

    NASA Astrophysics Data System (ADS)

    Wu, Kehe; Zhang, Yakun; Cui, Wenchao; Jiang, Ting

    2017-05-01

    To protect the privacy of sensitive data, an encrypted and decrypted file system based on USBKey and hardware code is designed and implemented in this paper. The system uses the USBKey and hardware code to authenticate a user. We use a random key to encrypt the file with a symmetric encryption algorithm, and the USBKey to encrypt the random key with an asymmetric encryption algorithm. At the same time, we use the MD5 algorithm to calculate the hash of the file to verify its integrity. Experiment results show that large files can be encrypted and decrypted in a very short time. The system has high efficiency and ensures the security of documents.
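
    The overall scheme, a random symmetric file key, asymmetric wrapping of that key, and an MD5 digest for integrity, can be sketched with standard-library stand-ins. The SHA-256 counter-mode keystream below is a toy substitute for a real AES cipher, and the key wrap is a stand-in for the USBKey's asymmetric operation; neither is secure as written:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (stand-in for real AES)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def protect_file(data: bytes, device_key: bytes):
    file_key = secrets.token_bytes(32)           # random per-file symmetric key
    ct = sym_encrypt(file_key, data)             # symmetric file encryption
    wrapped = sym_encrypt(device_key, file_key)  # toy stand-in for USBKey RSA wrap
    digest = hashlib.md5(data).hexdigest()       # MD5 integrity check, as in the paper
    return ct, wrapped, digest

def recover_file(ct, wrapped, digest, device_key):
    file_key = sym_encrypt(device_key, wrapped)  # unwrap the file key
    data = sym_encrypt(file_key, ct)             # decrypt the file
    assert hashlib.md5(data).hexdigest() == digest, "integrity check failed"
    return data
```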

  1. MDTri: robust and efficient global mixed integer search of spaces of multiple ternary alloys: A DIRECT-inspired optimization algorithm for experimentally accessible computational material design

    DOE PAGES

    Graf, Peter A.; Billups, Stephen

    2017-07-24

    Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.

  3. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX - RAdiation Dose study (RAD-MATRIX).

    PubMed

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-06-01

    Radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes between transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access site, we evaluate the radiation dose absorbed at wrist, thorax and eye level. Consequently, each operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study should help clarify the radiation issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wu, Shaochuan; Tan, Xuezhi

    2007-11-01

    By analyzing existing address configuration algorithms, this paper proposes a new pseudo-random dynamic address configuration (PRDAC) algorithm for mobile ad hoc networks. In PRDAC, the first node that initializes the network randomly chooses a nonlinear shift register that can generate an m-sequence. When another node joins the network, the initial node acts as an IP address configuration server: it computes an IP address according to this nonlinear shift register, then allocates the address and communicates the generator polynomial of the shift register to the new node. By this means, any node that has obtained an IP address can in turn act as a server and allocate addresses to subsequently joining nodes. PRDAC can also efficiently avoid IP conflicts and deal with network partition and merging, as prophet address (PA) allocation and the dynamic configuration and distribution protocol (DCDP) do. Furthermore, PRDAC has lower algorithmic and computational complexity and less restrictive assumptions than PA. In addition, PRDAC radically avoids address conflicts and maximizes the utilization rate of IP addresses. Analysis and simulation results show that PRDAC converges rapidly, has low overhead, and is insensitive to topological structure.
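
    The core idea, deriving collision-free addresses from an m-sequence generator, can be sketched with a linear Fibonacci LFSR: with a primitive feedback polynomial, the register visits every nonzero n-bit state exactly once per period, so no two allocations collide within one period. The tap set and the address mapping below are illustrative assumptions (the paper uses a nonlinear register):

```python
def lfsr_states(taps, seed, count):
    """Fibonacci LFSR: return successive register states.

    `taps` are 1-based bit positions of the feedback polynomial; with a
    primitive polynomial (e.g. x^4 + x + 1 -> taps [4, 1]) the sequence of
    states visits every nonzero n-bit value exactly once per period."""
    n = max(taps)
    mask = (1 << n) - 1
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # XOR of tapped bits
        state = ((state << 1) | fb) & mask
    return out

def prdac_address(state, prefix="10.0"):
    """Map an LFSR state to a hypothetical private IPv4 address (illustrative)."""
    return f"{prefix}.{(state >> 8) & 0xFF}.{state & 0xFF}"
```

    A new node receives its address plus the generator polynomial, so it can continue the sequence and serve later arrivals itself.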

  5. Randomness Testing of the Advanced Encryption Standard Finalist Candidates

    DTIC Science & Technology

    2000-03-28

    Excerpt (table fragment listing statistical tests): Random Excursions Variant; Rank; Serial; Spectral DFT; Lempel-Ziv Compression; Aperiodic Templates; Linear Complexity... (256 bits) for each of the algorithms, for a total of 80 different data sets. These data sets were selected based on the belief that they would be useful in evaluating the randomness of cryptographic algorithms. Table 2 lists the eight data types. For a description of the data types, see Appendix

  6. An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.

    PubMed

    Yoon, Yourim; Kim, Yong-Hyuk

    2013-10-01

    Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way to deploy sensors. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed significant performance improvement in quality.
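
    The Monte Carlo evaluation function mentioned above can be sketched as follows; field size, the disc sensing model, and the sample count are illustrative assumptions:

```python
import random

def mc_coverage(sensors, radius, field=(100.0, 100.0), samples=20000, seed=1):
    """Monte Carlo estimate of the fraction of a rectangular field covered
    by disc-model sensors: sample uniform points and count how many fall
    inside at least one sensing disc."""
    rng = random.Random(seed)
    w, h = field
    r2 = radius * radius
    hits = 0
    for _ in range(samples):
        x, y = rng.uniform(0, w), rng.uniform(0, h)
        if any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
            hits += 1
    return hits / samples
```

    The paper's refinement starts with few samples and grows the sample count in later GA generations, which this fixed-budget sketch omits.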

  7. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of inter-dependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
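
    A minimal sketch of the RS idea, randomized topological orderings combined with a greedy mapping to machines, under an assumed heterogeneous cost table and an earliest-finish-time heuristic (the paper's exact mapping may differ):

```python
import random

def random_topological_order(tasks, preds, rng):
    """Randomized topological sort: at each step pick uniformly among tasks
    whose predecessors are all scheduled, so any valid order is possible."""
    done, order, remaining = set(), [], set(tasks)
    while remaining:
        ready = [t for t in remaining if set(preds.get(t, ())) <= done]
        t = rng.choice(ready)
        order.append(t)
        done.add(t)
        remaining.remove(t)
    return order

def rs_schedule(tasks, preds, cost, n_machines, tries=200, seed=0):
    """RS sketch: repeat random orderings, map each task greedily to the
    machine where it finishes earliest, keep the minimum-makespan schedule.
    cost[t][m] is the runtime of task t on machine m (heterogeneous)."""
    rng = random.Random(seed)
    best = None
    for _ in range(tries):
        order = random_topological_order(tasks, preds, rng)
        machine_free = [0.0] * n_machines
        finish = {}
        for t in order:
            ready_at = max((finish[p] for p in preds.get(t, ())), default=0.0)
            m = min(range(n_machines),
                    key=lambda k: max(machine_free[k], ready_at) + cost[t][k])
            start = max(machine_free[m], ready_at)
            finish[t] = start + cost[t][m]
            machine_free[m] = finish[t]
        makespan = max(finish.values())
        if best is None or makespan < best[0]:
            best = (makespan, order)
    return best
```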

  8. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is all the more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops: processor speed doubles every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm, because a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization with the steady increase of computing power for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search, a deterministic method, is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells, a pressing topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also quickly validate theory against experimental results.
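
    A bare-bones multi-dimensional exhaustive grid search of the kind advocated here; the grid resolution is an illustrative choice, and the cost grows as steps**dimensions:

```python
import itertools

def exhaustive_search(f, bounds, steps=51):
    """Evaluate f on a uniform grid over the given (lo, hi) bounds and
    return the best grid point; deterministic, exponential in dimension."""
    axes = [[lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
            for lo, hi in bounds]
    best_x, best_v = None, float("inf")
    for x in itertools.product(*axes):
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

    For a few dimensions and modest resolution the full sweep is fast on modern hardware, which is precisely the paper's point.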

  9. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each slice is associated with a rather distinguishable phase of the lift-off event, where stationarity can be expected. The presented results are rather preliminary in nature; their aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
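
    For orientation, the simplest member of the AR family can be sketched directly: an AR(1) fit via the standard lag-1/lag-0 autocovariance ratio, and its implied power spectral density. This is a generic estimator, not the paper's algorithm:

```python
import cmath
import random  # used only to synthesize demo data

def fit_ar1(x):
    """Estimate the AR(1) coefficient as the lag-1/lag-0 autocovariance ratio."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    c1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return c1 / c0

def ar1_spectrum(phi, sigma2, freq):
    """Power spectral density of an AR(1) process at normalized frequency."""
    return sigma2 / abs(1.0 - phi * cmath.exp(-2j * cmath.pi * freq)) ** 2
```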

  10. A simplified analytical random walk model for proton dose calculation

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    We propose an analytical random walk model for proton dose calculation in a laterally homogeneous medium. A formula for the spatial fluence distribution of primary protons is derived. The variance of the spatial distribution is in the form of a distance-squared law of the angular distribution. To improve the accuracy of dose calculation in the Bragg peak region, the energy spectrum of the protons is used. The accuracy is validated against Monte Carlo simulation in water phantoms with either air gaps or a slab of bone inserted. The algorithm accurately reflects the dose dependence on the depth of the bone and can deal with small-field dosimetry. We further applied the algorithm to patients’ cases in the highly heterogeneous head and pelvis sites and used a gamma test to show the reasonable accuracy of the algorithm in these sites. Our algorithm is fast for clinical use.

  11. Identification of Genes Involved in Breast Cancer Metastasis by Integrating Protein-Protein Interaction Information with Expression Data.

    PubMed

    Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran

    2017-02-01

    The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related to breast cancer metastasis, based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The new gene selection algorithm is called PPIRF. The improved prediction accuracy fully illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.
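
    GeneRank is a PageRank-style iteration on the PPI graph in which the teleport term is weighted by expression evidence rather than being uniform. A minimal power-iteration sketch (the damping factor and iteration count are illustrative):

```python
def generank(adj, expr, d=0.85, iters=100):
    """GeneRank-style scores by power iteration: like PageRank on a PPI
    adjacency matrix, but teleport mass is proportional to expression."""
    n = len(adj)
    deg = [max(sum(row), 1) for row in adj]       # guard against isolated genes
    total = float(sum(expr))
    base = [e / total for e in expr]              # normalized expression evidence
    r = base[:]
    for _ in range(iters):
        r = [(1.0 - d) * base[i]
             + d * sum(adj[j][i] * r[j] / deg[j] for j in range(n))
             for i in range(n)]
    return r
```

    PPIRF then combines such network scores with the forest's mean-decrease-Gini importances.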

  12. Comparison of genetic algorithms with conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.

  13. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
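
    The G(x) baseline described above, greedy packing over repeated random item shuffles, can be sketched as follows. The first-fit bin choice and try count are assumptions; this is the comparison baseline, not the paper's M(x) market algorithm:

```python
import random

def greedy_shuffle_mkp(values, weights, capacities, tries=100, seed=0):
    """G(x)-style baseline for the 0-1 multiple knapsack problem: repeat a
    first-fit greedy fill over randomly shuffled item orders and keep the
    best total value found."""
    rng = random.Random(seed)
    items = list(range(len(values)))
    best_val, best_assign = 0, {}
    for _ in range(tries):
        rng.shuffle(items)
        free = list(capacities)
        assign, val = {}, 0
        for i in items:
            for k in range(len(free)):        # first knapsack the item fits in
                if weights[i] <= free[k]:
                    free[k] -= weights[i]
                    assign[i] = k
                    val += values[i]
                    break
        if val > best_val:
            best_val, best_assign = val, assign
    return best_val, best_assign
```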

  14. The SAPHIRE server: a new algorithm and implementation.

    PubMed Central

    Hersh, W.; Leone, T. J.

    1995-01-01

    SAPHIRE is an experimental information retrieval system implemented to test new approaches to automated indexing and retrieval of medical documents. Due to limitations in its original concept-matching algorithm, a modified algorithm has been implemented which allows greater flexibility in partial matching and different word order within concepts. With the concomitant growth in client-server applications and the Internet in general, the new algorithm has been implemented as a server that can be accessed via other applications on the Internet. PMID:8563413

  15. Deep learning improves prediction of CRISPR-Cpf1 guide RNA activity.

    PubMed

    Kim, Hui Kwon; Min, Seonwoo; Song, Myungjae; Jung, Soobin; Choi, Jae Woo; Kim, Younggwang; Lee, Sangeun; Yoon, Sungroh; Kim, Hyongbum Henry

    2018-03-01

    We present two algorithms to predict the activity of AsCpf1 guide RNAs. Indel frequencies for 15,000 target sequences were used in a deep-learning framework based on a convolutional neural network to train Seq-deepCpf1. We then incorporated chromatin accessibility information to create the better-performing DeepCpf1 algorithm for cell lines for which such information is available and show that both algorithms outperform previous machine learning algorithms on our own and published data sets.

  16. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale access of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed which, by improving the learning factors and the inertia weight, enhances the local and global search performance of the algorithm when solving distributed generation planning for the distribution network. Results show that the proposed method can effectively reduce system network loss and improve the economic performance of system operation with distributed generation.
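
    One common form of improved inertia weight is a linear decrease over iterations, trading global exploration early for local refinement late. A sketch on a generic objective (the paper's exact IPSO update rules for the learning factors may differ):

```python
import random

def pso(f, bounds, n_particles=30, iters=200, seed=0,
        w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
    """Particle swarm optimization with a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        w = w_start + (w_end - w_start) * t / (iters - 1)  # inertia schedule
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                lo, hi = bounds[d]
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, xs[i][:]
                if v < gval:
                    gval, gbest = v, xs[i][:]
    return gbest, gval
```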

  17. A compact acousto-optic lens for 2D and 3D femtosecond based 2-photon microscopy

    PubMed Central

    Kirkby, Paul A.; Naga Srinivas, N.K.M.; Silver, R. Angus

    2010-01-01

    We describe a high speed 3D Acousto-Optic Lens Microscope (AOLM) for femtosecond 2-photon imaging. By optimizing the design of the 4 AO Deflectors (AODs) and by deriving new control algorithms, we have developed a compact spherical AOL with a low temporal dispersion that enables 2-photon imaging at 10-fold lower power than previously reported. We show that the AOLM can perform high speed 2D raster-scan imaging (>150 Hz) without scan rate dependent astigmatism. It can deflect and focus a laser beam in a 3D random access sequence at 30 kHz and has an extended focusing range (>137 μm; 40X 0.8NA objective). These features are likely to make the AOLM a useful tool for studying fast physiological processes distributed in 3D space. PMID:20588506

  18. Communication overhead on the Intel Paragon, IBM SP2 and Meiko CS-2

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Interprocessor communication overhead is a crucial measure of the power of parallel computing systems; its impact can severely limit the performance of parallel programs. This report presents measurements of communication overhead on three contemporary commercial multicomputer systems: the Intel Paragon, the IBM SP2 and the Meiko CS-2. In each case the time to communicate between processors is presented as a function of message length. The time for global synchronization and memory access is discussed. The performance of these machines in emulating hypercubes and executing random pairwise exchanges is also investigated. It is shown that the interprocessor communication time depends heavily on the specific communication pattern required. These observations contradict the commonly held belief that communication overhead on contemporary machines is independent of the placement of tasks on processors. The information presented in this report permits the evaluation of the efficiency of parallel algorithm implementations against standard baselines.

  19. Cooperative multi-user detection and ranging based on pseudo-random codes

    NASA Astrophysics Data System (ADS)

    Morhart, C.; Biebl, E. M.

    2009-05-01

    We present an improved approach for a Round Trip Time of Flight distance measurement system. The system is intended for use in a cooperative localisation system for automotive applications. Therefore, it is designed to address a large number of communication partners per measurement cycle. By using coded signals in a time division multiple access order, we can detect a large number of pedestrian sensors with just one car sensor. We achieve this by using very short transmit bursts in combination with a real-time correlation algorithm. Furthermore, the correlation approach offers real-time data concerning the time of arrival that can serve as a trigger impulse for other communication systems. The distance accuracy of the correlation result was further increased by adding a Fourier interpolation filter. The system performance was checked with a prototype at 2.4 GHz. We reached a distance measurement accuracy of 12 cm at a range up to 450 m.
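
    The correlation step reduces to locating the peak of a sliding correlation between the known pseudo-random code and the received sample stream; the peak lag is the delay in samples. A minimal sketch, assuming bipolar chips and a noiseless channel (the prototype's Fourier interpolation refinement is omitted):

```python
def bipolar(bits):
    """Map 0/1 chips to -1/+1 for correlation."""
    return [1 if b else -1 for b in bits]

def crosscorr_delay(code, rx):
    """Return the lag maximizing the sliding correlation of `code` against
    the received stream `rx`; this lag estimates the round-trip delay."""
    n, m = len(rx), len(code)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        c = sum(code[i] * rx[lag + i] for i in range(m))
        if c > best_val:
            best_lag, best_val = lag, c
    return best_lag
```

    A delay of k samples at sampling rate fs corresponds to a round-trip range of k * c / (2 * fs).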

  20. Making working memory work: The effects of extended practice on focus capacity and the processes of updating, forward access, and random access

    PubMed Central

    Price, John M.; Colflesh, Gregory J. H.; Cerella, John; Verhaeghen, Paul

    2014-01-01

    We investigated the effects of 10 hours of practice on variations of the N-Back task to investigate the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed) whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. PMID:24486803

  1. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
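
    Two of the ingredients summarized above, plain Monte Carlo integration and the Metropolis accept/reject rule, can be sketched as follows (step size and sample counts are illustrative):

```python
import math
import random

def mc_integrate(f, lo, hi, n=100000, seed=0):
    """Plain Monte Carlo integration: average f at uniform random points,
    scaled by the interval length."""
    rng = random.Random(seed)
    s = sum(f(rng.uniform(lo, hi)) for _ in range(n))
    return (hi - lo) * s / n

def metropolis(logp, x0, step, n, seed=0):
    """Minimal 1D Metropolis sampler for an unnormalized log-density:
    propose a uniform step, accept with probability min(1, p(y)/p(x))."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    out = []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        lq = logp(y)
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq
        out.append(x)
    return out
```

    Importance sampling improves on the uniform estimator by drawing points preferentially where f is large, which is exactly the role Metropolis plays for thermodynamic averages.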

  2. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.

  3. Electrical Evaluation of RCA MWS5501D Random Access Memory, Volume 2, Appendix A

    NASA Technical Reports Server (NTRS)

    Klute, A.

    1979-01-01

    The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.

  4. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
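
    The rectangle-completion principle above can be illustrated with a short sketch (an illustrative reconstruction, not the patented implementation; the matrix contents and function names are hypothetical, and the bookkeeping of previously used subsets plus the higher-dimensional parallelepiped case are omitted):

```python
import random

def challenge_and_answer(matrix, rng):
    """Pick two character subsets in distinct rows and columns (the
    challenge); the correct response is the pair of subsets at the two
    opposite corners that complete the rectangle."""
    r1, r2 = rng.sample(range(len(matrix)), 2)
    c1, c2 = rng.sample(range(len(matrix[0])), 2)
    challenge = (matrix[r1][c1], matrix[r2][c2])
    answer = {matrix[r1][c2], matrix[r2][c1]}
    return challenge, answer

def grant_access(response, answer):
    """Access is granted only if the response completes the rectangle."""
    return set(response) == answer
```

    Because the challenged subsets never repeat and the responding corners differ every round, an eavesdropper who records one exchange learns nothing reusable.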

  5. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE PAGES

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...

    2018-03-06

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  6. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  7. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
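
    The flavour of such a Monte Carlo approximation can be sketched as follows (a generic estimator for item (1), not the author's algorithm; the function name and sample count are arbitrary):

```python
import math
import random

def delta_v_magnitude_stats(sigmas, n_samples=100_000, seed=1):
    """Monte Carlo estimate of the mean and standard deviation of |dv|,
    where the Cartesian components of dv are independent zero-mean
    Gaussians with the given (possibly unequal) standard deviations."""
    rng = random.Random(seed)
    mags = [math.sqrt(sum(rng.gauss(0.0, s) ** 2 for s in sigmas))
            for _ in range(n_samples)]
    mean = sum(mags) / n_samples
    var = sum((m - mean) ** 2 for m in mags) / (n_samples - 1)
    return mean, math.sqrt(var)
```

    For equal unit standard deviations the magnitude follows a chi distribution with three degrees of freedom (mean 2√(2/π) ≈ 1.596), which provides a quick sanity check on the estimator.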

  8. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  9. Dual operation characteristics of resistance random access memory in indium-gallium-zinc-oxide thin film transistors

    NASA Astrophysics Data System (ADS)

    Yang, Jyun-Bao; Chang, Ting-Chang; Huang, Jheng-Jie; Chen, Yu-Chun; Chen, Yu-Ting; Tseng, Hsueh-Chih; Chu, Ann-Kuo; Sze, Simon M.

    2014-04-01

    In this study, indium-gallium-zinc-oxide thin film transistors can be operated either as transistors or resistance random access memory devices. Before the forming process, current-voltage curve transfer characteristics are observed, and resistance switching characteristics are measured after a forming process. These resistance switching characteristics exhibit two behaviors, and are dominated by different mechanisms. The mode 1 resistance switching behavior is due to oxygen vacancies, while mode 2 is dominated by the formation of an oxygen-rich layer. Furthermore, an easy approach is proposed to reduce power consumption when using these resistance random access memory devices with the amorphous indium-gallium-zinc-oxide thin film transistor.

  10. Security under Uncertainty: Adaptive Attackers Are More Challenging to Human Defenders than Random Attackers

    PubMed Central

    Moisan, Frédéric; Gonzalez, Cleotilde

    2017-01-01

    Game Theory is a common approach used to understand attacker and defender motives, strategies, and allocation of limited security resources. For example, many defense algorithms are based on game-theoretic solutions that conclude that randomization of defense actions assures unpredictability, creating difficulties for a human attacker. However, many game-theoretic solutions often rely on idealized assumptions of decision making that underplay the role of human cognition and information uncertainty. The consequence is that we know little about how effective these algorithms are against human players. Using a simplified security game, we study the type of attack strategy and the uncertainty about an attacker's strategy in a laboratory experiment where participants play the role of defenders against a simulated attacker. Our goal is to compare a human defender's behavior in three levels of uncertainty (Information Level: Certain, Risky, Uncertain) and three types of attacker's strategy (Attacker's strategy: Minimax, Random, Adaptive) in a between-subjects experimental design. Best defense performance is achieved when defenders play against a minimax and a random attack strategy compared to an adaptive strategy. Furthermore, when payoffs are certain, defenders are as efficient against random attack strategy as they are against an adaptive strategy, but when payoffs are uncertain, defenders have most difficulties defending against an adaptive attacker compared to a random attacker. We conclude that given conditions of uncertainty in many security problems, defense algorithms would be more efficient if they are adaptive to the attacker actions, taking advantage of the attacker's human inefficiencies. PMID:28690557

  11. On efficient randomized algorithms for finding the PageRank vector

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ɛ: ɛ ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
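
    The first (Markov chain Monte Carlo) idea can be sketched on a tiny dense matrix, estimating the stationary distribution from the visit frequencies of one long walk (a toy version; the paper's point is precisely that such walks scale to huge sparse P where matrix-vector products are infeasible):

```python
import random

def mc_pagerank(P, n_steps=100_000, seed=0):
    """Estimate the stationary distribution p (solving p^T = p^T P) of a
    row-stochastic matrix P from the visit frequencies of a random walk."""
    rng = random.Random(seed)
    n = len(P)
    visits = [0] * n
    state = 0
    for _ in range(n_steps):
        visits[state] += 1
        # jump to the next node according to row `state` of P
        state = rng.choices(range(n), weights=P[state])[0]
    return [v / n_steps for v in visits]
```

    For a 2 × 2 chain the answer is checkable by hand: with P = [[0.5, 0.5], [0.2, 0.8]] the stationary vector is (2/7, 5/7).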

  12. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification performance point of view. In general, multiclass datasets having an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore a suitable testbed for our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite a high diagnosis performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
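
    The abstract does not spell out the exact sampling scheme, but the general idea of rebalancing a skewed multiclass set before training an ensemble can be illustrated by simple random oversampling (a generic sketch; the function name is hypothetical and this is not the paper's code):

```python
import random

def balance_by_oversampling(X, y, seed=0):
    """Simple random oversampling: duplicate randomly chosen minority-class
    rows until every class matches the largest class's sample count."""
    rng = random.Random(seed)
    by_class = {}
    for row, label in zip(X, y):
        by_class.setdefault(label, []).append(row)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        picks = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        Xb.extend(picks)
        yb.extend([label] * target)
    return Xb, yb
```

    The balanced set is then fed to the tree ensemble, so classes with fewer than 15 samples are no longer swamped by the majority classes during training.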

  13. Underwater image enhancement through depth estimation based on random forest

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han

    2017-11-01

    Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.

  14. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
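
    The central RGS trick is that each signal event itself proposes a set of cut thresholds, and the best-scoring proposal is kept. A stripped-down 2-D sketch (rectangular cuts and s/√(s+b) as the figure of merit are illustrative choices, not the package's actual interface):

```python
import math

def random_grid_search(signal, background):
    """Grid search over rectangular cuts proposed by the signal events
    themselves: event (x_i, y_i) defines the cut x >= x_i and y >= y_i.
    (With large samples, RGS draws a random subset of events as the grid.)
    Returns the best significance s/sqrt(s+b) and the winning cut."""
    best = None
    for cx, cy in signal:
        s = sum(1 for x, y in signal if x >= cx and y >= cy)
        b = sum(1 for x, y in background if x >= cx and y >= cy)
        z = s / math.sqrt(s + b)   # s >= 1: the proposing event passes
        if best is None or z > best[0]:
            best = (z, (cx, cy))
    return best
```

    Sampling cut candidates from the signal distribution is what makes the method "random" yet efficient: the grid automatically concentrates where signal actually lives instead of tiling the whole space uniformly.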

  15. Plated wire random access memories

    NASA Technical Reports Server (NTRS)

    Gouldin, L. D.

    1975-01-01

    A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.

  16. Multi-component access to a commercially available weight loss program: A randomized controlled trial

    USDA-ARS?s Scientific Manuscript database

    This study examined weight loss between a community-based, intensive behavioral counseling program (Weight Watchers PointsPlus) that included three treatment access modes and a self-help condition. A total of 292 participants were randomized to a Weight Watchers (WW; n=147) or a self-help condition (...

  17. Complementarity between entanglement-assisted and quantum distributed random access code

    NASA Astrophysics Data System (ADS)

    Hameedi, Alley; Saha, Debashis; Mironowicz, Piotr; Pawłowski, Marcin; Bourennane, Mohamed

    2017-05-01

    Collaborative communication tasks such as random access codes (RACs) employing quantum resources have manifested great potential in enhancing information processing capabilities beyond the classical limitations. The two quantum variants of RACs, namely, quantum random access code (QRAC) and the entanglement-assisted random access code (EARAC), have demonstrated equal prowess for a number of tasks. However, there do exist specific cases where one outperforms the other. In this article, we study a family of 3 →1 distributed RACs [J. Bowles, N. Brunner, and M. Pawłowski, Phys. Rev. A 92, 022351 (2015), 10.1103/PhysRevA.92.022351] and present its general construction of both the QRAC and the EARAC. We demonstrate that, depending on the function of inputs that is sought, if QRAC achieves the maximal success probability then EARAC fails to do so and vice versa. Moreover, a tripartite Bell-type inequality associated with the EARAC variants reveals the genuine multipartite nonlocality exhibited by our protocol. We conclude with an experimental realization of the 3 →1 distributed QRAC that achieves higher success probabilities than the maximum possible with EARACs for a number of tasks.

  18. Portable and Error-Free DNA-Based Data Storage.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Gabrys, Ryan; Milenkovic, Olgica

    2017-07-10

    DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.

  19. Automatic vetting of planet candidates from ground based surveys: Machine learning with NGTS

    NASA Astrophysics Data System (ADS)

    Armstrong, David J.; Günther, Maximilian N.; McCormac, James; Smith, Alexis M. S.; Bayliss, Daniel; Bouchy, François; Burleigh, Matthew R.; Casewell, Sarah; Eigmüller, Philipp; Gillen, Edward; Goad, Michael R.; Hodgkin, Simon T.; Jenkins, James S.; Louden, Tom; Metrailler, Lionel; Pollacco, Don; Poppenhaeger, Katja; Queloz, Didier; Raynard, Liam; Rauer, Heike; Udry, Stéphane; Walker, Simon R.; Watson, Christopher A.; West, Richard G.; Wheatley, Peter J.

    2018-05-01

    State of the art exoplanet transit surveys are producing ever increasing quantities of data. To make the best use of this resource, in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground based survey data. Our method uses a combination of random forests and self-organising-maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12368 injected planets against 27496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether on space-based or the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO and the need to better understand their advantages and pitfalls in an exoplanetary context.

  20. Mortality risk score prediction in an elderly population using machine learning.

    PubMed

    Rose, Sherri

    2013-03-01

    Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
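
    In miniature, the super learner's ensembling step amounts to choosing combination weights that minimize cross-validated error. A toy sketch for two algorithms and a grid over convex weights (the real method handles many algorithms, V-fold cross-validation, and more general weight optimization; names here are illustrative):

```python
def super_learner_weights(p1, p2, y, grid=101):
    """Toy version of the super learner's ensembling step: search convex
    combinations a*p1 + (1-a)*p2 over a grid of weights a, keeping the
    weight with the smallest mean squared error against the outcomes y.
    p1, p2 are the two algorithms' (cross-validated) predictions."""
    best_a, best_mse = 0.0, float("inf")
    for i in range(grid):
        a = i / (grid - 1)
        mse = sum((a * u + (1 - a) * v - t) ** 2
                  for u, v, t in zip(p1, p2, y)) / len(y)
        if mse < best_mse:
            best_a, best_mse = a, mse
    return best_a, best_mse
```

    Because the weights are fit on out-of-fold predictions, the combined learner can never do worse (in cross-validated MSE) than the best single algorithm on the grid, which is the guarantee the abstract appeals to.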

  1. The use of the multi-cumulant tensor analysis for the algorithmic optimisation of investment portfolios

    NASA Astrophysics Data System (ADS)

    Domino, Krzysztof

    2017-02-01

    Cumulant analysis plays an important role in the analysis of non-Gaussian distributed data. Share price returns are a good example of such data. The purpose of this research is to develop a cumulant-based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. The algorithm is based on the Alternating Least Squares method and involves the simultaneous minimisation of the 2nd-6th cumulants of the multidimensional random variable (the percentage share returns of many companies). The algorithm was then tested during the recent crash on the Warsaw Stock Exchange. To detect the incoming crash and provide entry and exit signals for the investment strategy, the Hurst exponent was calculated using local DFA. It was shown that the introduced algorithm is on average better than the benchmark and other portfolio determination methods, but only within an examination window determined by low values of the Hurst exponent. Notably, the algorithm is based on cumulant tensors up to the 6th order calculated for a multidimensional random variable, which is a novel idea. The algorithm can be expected to be useful in financial data analysis worldwide, as well as in the analysis of other types of non-Gaussian distributed data.
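
    For intuition, the univariate counterparts of the lower-order cumulants can be estimated from a return series as follows (the paper works with full multivariate cumulant tensors up to 6th order, whose expressions are considerably more involved; this sketch stops at the 4th):

```python
def cumulants_2_to_4(x):
    """Sample estimates of the 2nd, 3rd and 4th univariate cumulants:
    k2 = m2, k3 = m3, k4 = m4 - 3*m2**2, where m_k are central moments."""
    n = len(x)
    mean = sum(x) / n
    m2, m3, m4 = (sum((xi - mean) ** k for xi in x) / n for k in (2, 3, 4))
    return m2, m3, m4 - 3 * m2 ** 2
```

    For Gaussian data the 3rd and higher cumulants vanish, which is why driving them toward zero steers the portfolio toward returns with light, symmetric tails.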

  2. Algorithmic Approaches for Place Recognition in Featureless, Walled Environments

    DTIC Science & Technology

    2015-01-01

    [Front-matter residue: an abbreviations list (IMU, inertial measurement unit; LIDAR, light detection and ranging; RANSAC, random sample consensus; SLAM, simultaneous localization and mapping; SUSAN, smallest...) and figure captions ("Typical input image for general junction based algorithm", "Short exposure image of hallway junction taken by LIDAR").] The discipline of simultaneous localization and mapping (SLAM) has been studied intensively over the past several years. Many technical approaches

  3. FD/DAMA Scheme For Mobile/Satellite Communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Wang, Charles C.; Cheng, Unjeng; Rafferty, William; Dessouky, Khaled I.

    1992-01-01

    Integrated-Adaptive Mobile Access Protocol (I-AMAP) proposed to allocate communication channels to subscribers in first-generation MSAT-X mobile/satellite communication network. Based on concept of frequency-division/demand-assigned multiple access (FD/DAMA) where partition of available spectrum adapted to subscribers' demands for service. Requests processed, and competing requests resolved according to channel-access protocol, or free-access tree algorithm described in "Connection Protocol for Mobile/Satellite Communications" (NPO-17735). Assigned spectrum utilized efficiently.
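
    Free-access tree algorithms of this family resolve a collision by recursive random splitting. A stripped-down sketch of the collision-resolution step (a textbook binary tree-splitting scheme, not the I-AMAP protocol itself, whose details are in NPO-17735):

```python
import random

def resolve_collision(n_colliding, rng):
    """Slots needed to resolve a collision among n packets by binary tree
    splitting: each collided packet flips a fair coin; the 'heads' group
    retransmits first while the 'tails' group waits for that subtree."""
    if n_colliding <= 1:
        return 1                     # an idle slot or a successful slot
    heads = sum(rng.random() < 0.5 for _ in range(n_colliding))
    return 1 + resolve_collision(heads, rng) + \
        resolve_collision(n_colliding - heads, rng)
```

    Each of the n packets needs its own successful slot and every split costs at least one collision slot, so at least 2n - 1 slots are always required; the random coin flips keep the expected overhead close to that bound.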

  4. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^{-n/2}), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  5. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^{-n/2}), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As the result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  6. Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?

    PubMed

    Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend

    2011-10-11

    In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
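
    The core trick, letting every trial state contribute to the estimator with its selection weight instead of discarding the rejected ones, can be illustrated on a toy discrete system. This is a plain-Python sketch of one waste-recycling variant (uniform independent proposals pooled with the current state, heat-bath selection), nothing like the paper's CUDA zeolite code:

```python
import math
import random

def wr_mean_energy(energies, beta, n_sweeps=30_000, k_trials=4, seed=0):
    """Waste-recycling Monte Carlo on a discrete state space: each sweep
    pools the current state with k uniform trial states, moves to one of
    them with probability proportional to its Boltzmann weight, and lets
    the whole pool contribute to the energy estimator via those weights."""
    rng = random.Random(seed)
    n, state = len(energies), 0
    burn = n_sweeps // 10
    total, kept = 0.0, 0
    for sweep in range(n_sweeps):
        pool = [state] + [rng.randrange(n) for _ in range(k_trials)]
        w = [math.exp(-beta * energies[s]) for s in pool]
        z = sum(w)
        if sweep >= burn:                      # waste-recycled estimate
            total += sum(wi * energies[s] for wi, s in zip(w, pool)) / z
            kept += 1
        state = rng.choices(pool, weights=w)[0]   # heat-bath move
    return total / kept
```

    Because the weighted pool average is the conditional expectation of the next state's energy, it is a Rao-Blackwellized estimator: it has the same mean as the plain single-state average but lower variance, which is the statistical gain waste recycling buys.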

  7. A Pilot Randomized Controlled Trial of the ACCESS Program: A Group Intervention to Improve Social, Adaptive Functioning, Stress Coping, and Self-Determination Outcomes in Young Adults with Autism Spectrum Disorder.

    PubMed

    Oswald, Tasha M; Winder-Patel, Breanna; Ruder, Steven; Xing, Guibo; Stahmer, Aubyn; Solomon, Marjorie

    2018-05-01

    The purpose of this pilot randomized controlled trial was to investigate the acceptability and efficacy of the Acquiring Career, Coping, Executive control, Social Skills (ACCESS) Program, a group intervention tailored for young adults with autism spectrum disorder (ASD) to enhance critical skills and beliefs that promote adult functioning, including social and adaptive skills, self-determination skills, and coping self-efficacy. Forty-four adults with ASD (ages 18-38; 13 females) and their caregivers were randomly assigned to treatment or waitlist control. Compared to controls, adults in treatment significantly improved in adaptive and self-determination skills, per caregiver report, and self-reported greater belief in their ability to access social support to cope with stressors. Results provide evidence for the acceptability and efficacy of the ACCESS Program.

  8. AVE-SESAME program for the REEDA System

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.

    1981-01-01

    The REEDA system software was modified and improved to process the AVE-SESAME severe storm data. A random access file system for the AVE storm data was designed, tested, and implemented. The AVE/SESAME software was modified to incorporate the random access file input and to interface with new graphics hardware/software now available on the REEDA system. Software was developed to graphically display the AVE/SESAME data in the convention normally used by severe storm researchers. Existing software was converted for the AVE/SESAME software systems and interfaced with the graphics hardware/software available on the REEDA System. Software documentation was provided for the existing AVE/SESAME programs, outlining functional flow charts and interactive queries. All AVE/SESAME data sets were processed into random access format to allow the developed software to access the entire AVE/SESAME data base. The existing software was also modified to allow processing of different AVE/SESAME data set types, including satellite, surface, and radar data.

  9. Overview of emerging nonvolatile memory technologies

    PubMed Central

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s, when the ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was already facing physical scaling limitations. Flash memory, a variant of charge-storage memory, is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memory is approaching, however, and many new types of memory have been proposed to replace it. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets such as digital cameras, cell phones, and portable music players, and they are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memories, have also gained tremendous popularity in recent years. It is no exaggeration to say that even an inexpensive plastic memory could soon earn the ultimate commercial validation of large-scale production. This review is therefore devoted to this rapidly developing new class of memory technologies and their scaling, based on an investigation of recent progress in advanced Flash memory devices. PMID:25278820

  10. Overview of emerging nonvolatile memory technologies.

    PubMed

    Meena, Jagan Singh; Sze, Simon Min; Chand, Umesh; Tseng, Tseung-Yuen

    2014-01-01

    Nonvolatile memory technologies in Si-based electronics date back to the 1990s, when the ferroelectric field-effect transistor (FeFET) was one of the most promising devices for replacing conventional Flash memory, which was already facing physical scaling limitations. Flash memory, a variant of charge-storage memory, is widely used in consumer electronic products such as cell phones and music players, while NAND Flash-based solid-state disks (SSDs) are increasingly displacing hard disk drives as the primary storage device in laptops, desktops, and even data centers. The integration limit of Flash memory is approaching, however, and many new types of memory have been proposed to replace it. Emerging memory technologies promise to store more data at less cost than the expensive-to-build silicon chips used by popular consumer gadgets such as digital cameras, cell phones, and portable music players, and they are being investigated as potential alternatives to existing memories in future computing systems. Emerging nonvolatile memory technologies such as magnetic random-access memory (MRAM), spin-transfer torque random-access memory (STT-RAM), ferroelectric random-access memory (FeRAM), phase-change memory (PCM), and resistive random-access memory (RRAM) combine the speed of static random-access memory (SRAM), the density of dynamic random-access memory (DRAM), and the nonvolatility of Flash memory, and so become very attractive candidates for future memory hierarchies. Many other new classes of emerging memory technologies, such as transparent and plastic, three-dimensional (3-D), and quantum dot memories, have also gained tremendous popularity in recent years. It is no exaggeration to say that even an inexpensive plastic memory could soon earn the ultimate commercial validation of large-scale production. This review is therefore devoted to this rapidly developing new class of memory technologies and their scaling, based on an investigation of recent progress in advanced Flash memory devices.

  11. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field, with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, unified approaches have only recently begun to emerge; workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in a few lines of Python code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible through a Python interface. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines, employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.

  12. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks the long and unpredictable latency of remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, and latency. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we worked to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results achieved in this study, we plan to study other architectures of interest, including the development of cost models and of code generators appropriate to these architectures.

  13. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets.

  14. Novel approach for image skeleton and distance transformation parallel algorithms

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Means, Robert W.

    1994-05-01

    Image Understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid in Image Understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times depending on the object width. For each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computational and memory access intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest VLSI high- speed convolutional chips such as HNC's ViP. The algorithm speed is dependent on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 X 512 image with k being the maximum distance of the largest object. All objects in the image will be skeletonized at the same time in parallel.
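
    A sequential two-pass (chamfer-style) sketch of the city-block distance transform and the local-maximum skeleton conveys the idea; the paper's parallel convolutional implementation on the ViP chip is not reproduced here, and treating pixels outside the image as background is an assumption of this sketch:

```python
def distance_transform(img):
    """Two-pass city-block distance transform of a binary image.
    img[y][x] == 1 is object, 0 is background; each object pixel is
    labeled with the distance to its nearest background pixel."""
    h, w = len(img), len(img[0])
    BIG = h + w
    d = [[BIG if img[y][x] else 0 for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass (top-left to bottom-right)
        for x in range(w):
            if d[y][x]:
                up = d[y - 1][x] if y > 0 else 0
                left = d[y][x - 1] if x > 0 else 0
                d[y][x] = min(d[y][x], min(up, left) + 1)
    for y in range(h - 1, -1, -1):          # backward pass (bottom-right to top-left)
        for x in range(w - 1, -1, -1):
            if d[y][x]:
                down = d[y + 1][x] if y < h - 1 else 0
                right = d[y][x + 1] if x < w - 1 else 0
                d[y][x] = min(d[y][x], min(down, right) + 1)
    return d

def skeleton(d):
    """Object pixels whose distance label is a (non-strict) local maximum
    over the 4-neighbourhood."""
    h, w = len(d), len(d[0])
    out = []
    for y in range(h):
        for x in range(w):
            if d[y][x] == 0:
                continue
            nbrs = []
            if y > 0: nbrs.append(d[y - 1][x])
            if y < h - 1: nbrs.append(d[y + 1][x])
            if x > 0: nbrs.append(d[y][x - 1])
            if x < w - 1: nbrs.append(d[y][x + 1])
            if all(d[y][x] >= n for n in nbrs):
                out.append((y, x))
    return out
```

    The two raster passes replace the repeated whole-image scans described in the abstract with exactly two sweeps, which is why the transform is so amenable to parallel convolutional hardware.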

  15. A Novel Dynamic Spectrum Access Framework Based on Reinforcement Learning for Cognitive Radio Sensor Networks.

    PubMed

    Lin, Yun; Wang, Chao; Wang, Jiaxing; Dou, Zheng

    2016-10-12

    Cognitive radio sensor networks are a class of networks in which cognitive techniques can be adopted; they have many potential applications, challenges, and future research trends. According to research surveys, dynamic spectrum access is an important and necessary technology for future cognitive sensor networks. Traditional methods of dynamic spectrum access are based on spectrum holes and have some drawbacks, such as low accessibility and high interruptibility, which negatively affect the transmission performance of the sensor networks. To address this problem, this paper proposes a new initialization mechanism to establish a communication link and set up a sensor network without adopting spectrum holes to convey control information. Specifically, we first discuss a transmission channel model for analyzing the maximum accessible capacity under three different policies in a fading environment. Second, a hybrid spectrum access algorithm based on a reinforcement learning model is proposed for the power allocation problem of both the transmission channel and the control channel. Finally, extensive simulations have been conducted; the results show that the new algorithm provides a significant improvement in the tradeoff between control channel reliability and transmission channel efficiency.
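
    The reinforcement-learning flavor of spectrum access can be illustrated with a generic epsilon-greedy bandit that learns which channel succeeds most often. This is a simplified stand-in for the paper's hybrid algorithm, and the channel success probabilities below are made-up values:

```python
import random

def select_channel(q, eps, rng):
    """Epsilon-greedy: explore a random channel with probability eps,
    otherwise exploit the channel with the highest learned value."""
    if rng.random() < eps:
        return rng.randrange(len(q))
    return max(range(len(q)), key=lambda c: q[c])

def learn_channel_values(success_prob, episodes=5000, eps=0.1, alpha=0.1, seed=1):
    """Running-average estimate of each channel's access success rate."""
    rng = random.Random(seed)
    q = [0.0] * len(success_prob)
    for _ in range(episodes):
        c = select_channel(q, eps, rng)
        reward = 1.0 if rng.random() < success_prob[c] else 0.0  # 1 = successful access
        q[c] += alpha * (reward - q[c])  # exponential running average
    return q
```

    After enough episodes the learned values concentrate on the channel with the best success rate, which the sensor node then exploits most of the time.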

  16. A Novel Dynamic Spectrum Access Framework Based on Reinforcement Learning for Cognitive Radio Sensor Networks

    PubMed Central

    Lin, Yun; Wang, Chao; Wang, Jiaxing; Dou, Zheng

    2016-01-01

    Cognitive radio sensor networks are a class of networks in which cognitive techniques can be adopted; they have many potential applications, challenges, and future research trends. According to research surveys, dynamic spectrum access is an important and necessary technology for future cognitive sensor networks. Traditional methods of dynamic spectrum access are based on spectrum holes and have some drawbacks, such as low accessibility and high interruptibility, which negatively affect the transmission performance of the sensor networks. To address this problem, this paper proposes a new initialization mechanism to establish a communication link and set up a sensor network without adopting spectrum holes to convey control information. Specifically, we first discuss a transmission channel model for analyzing the maximum accessible capacity under three different policies in a fading environment. Second, a hybrid spectrum access algorithm based on a reinforcement learning model is proposed for the power allocation problem of both the transmission channel and the control channel. Finally, extensive simulations have been conducted; the results show that the new algorithm provides a significant improvement in the tradeoff between control channel reliability and transmission channel efficiency. PMID:27754316

  17. Soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  18. Trans-algorithmic nature of learning in biological systems.

    PubMed

    Shimansky, Yury P

    2018-05-02

    Learning ability is a vitally important, distinctive property of biological systems, which provides dynamic stability in non-stationary environments. Although several different types of learning have been successfully modeled using a universal computer, in general, learning cannot be described by an algorithm. In other words, an algorithmic approach to describing the functioning of biological systems is not sufficient for adequately grasping what life is. Since biosystems are parts of the physical world, one might hope that adding some physical mechanisms and principles to the concept of algorithm could provide extra possibilities for describing learning in its full generality. However, a straightforward approach to that through so-called physical hypercomputation has so far not been successful. Here an alternative approach is proposed. Biosystems are described as achieving enumeration of possible physical compositions through random incremental modifications inflicted on them by active operating resources (AORs) in the environment. Biosystems learn through algorithmic regulation of the intensity of the above modifications according to a specific optimality criterion. From the perspective of external observers, biosystems move in the space of different algorithms, driven by random modifications imposed by the environmental AORs. A particular algorithm is only a snapshot of that motion, while the motion itself is essentially trans-algorithmic. In this conceptual framework, death of unfit members of a population, for example, is viewed as a trans-algorithmic modification made in the population as a biosystem by environmental AORs. Numerous examples of AOR utilization in biosystems of different complexity, from viruses to multicellular organisms, are provided.

  19. Blood flow measurement using digital subtraction angiography for assessing hemodialysis access function

    NASA Astrophysics Data System (ADS)

    Koirala, Nischal; Setser, Randolph M.; Bullen, Jennifer; McLennan, Gordon

    2017-03-01

    Blood flow rate is a critical parameter for diagnosing dialysis access function during fistulography, where a flow rate of 600 ml/min in an arteriovenous graft or 400-500 ml/min in an arteriovenous fistula is considered the clinical threshold for fully functioning access. In this study, a flow rate computational model for calculating intra-access flow to evaluate dialysis access patency was developed and validated in an in vitro setup using digital subtraction angiography. Flow rates were computed by tracking the bolus through two regions of interest using cross correlation (XCOR) and mean arrival time (MAT) algorithms, and correlated against an in-line transonic flow meter measurement. The mean difference (mean +/- standard deviation) between XCOR and in-line flow measurements for the in vitro setup at 3, 6, 7.5, and 10 frames/s was 118+/-63, 37+/-59, 31+/-31, and 46+/-57 ml/min, respectively, while for the MAT method it was 86+/-56, 57+/-72, 35+/-85, and 19+/-129 ml/min, respectively. The results of this investigation will be helpful for selecting candidate algorithms as a blood flow computational tool is developed for clinical application.
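
    The XCOR idea of estimating bolus transit time between two regions of interest can be sketched as follows; the time-intensity curves, frame rate, vessel distance, and cross-sectional area in the usage example are illustrative assumptions, not values from the study:

```python
def xcor_delay(a, b, max_lag):
    """Lag (in frames) at which curve b best matches curve a, found by
    maximising the overlap-normalised cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        overlap = len(a) - lag
        score = sum(a[i] * b[i + lag] for i in range(overlap)) / overlap
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def flow_rate_ml_min(delay_frames, frame_rate, distance_cm, area_cm2):
    """Flow = (distance / transit time) * cross-sectional area; 1 cm^3 = 1 ml."""
    transit_s = delay_frames / frame_rate
    return (distance_cm / transit_s) * area_cm2 * 60.0
```

    For instance, a 3-frame delay at 6 frames/s over a 5 cm segment with a 0.5 cm^2 cross-section corresponds to 10 cm/s and 300 ml/min.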

  20. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  1. Removal of Stationary Sinusoidal Noise from Random Vibration Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian; Cap, Jerome S.

    In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02, only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
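
    The matrix-inversion approach can be illustrated by least-squares fitting of a sinusoid at a known frequency (solving the 2x2 normal equations for the sine and cosine amplitudes) and subtracting the fit. This is a simplified sketch of the idea, not the report's exact algorithm:

```python
import math

def remove_sine(signal, freq, fs):
    """Fit a*sin + b*cos at a known frequency by least squares
    (2x2 normal-equation inversion), then subtract the fitted tone."""
    n = len(signal)
    s = [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
    c = [math.cos(2 * math.pi * freq * i / fs) for i in range(n)]
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    ys = sum(x * y for x, y in zip(signal, s))
    yc = sum(x * y for x, y in zip(signal, c))
    det = ss * cc - sc * sc
    a = (ys * cc - yc * sc) / det  # sine amplitude
    b = (yc * ss - ys * sc) / det  # cosine amplitude (captures phase)
    return [signal[i] - a * s[i] - b * c[i] for i in range(n)]
```

    Because the sine and cosine basis captures any phase, a stationary tone of arbitrary phase is absorbed by the fit and removed from the residual.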

  2. A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun

    2014-11-01

    In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, which are usually used to suppress multiple access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
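
    The PSO half of the joint algorithm can be sketched generically; the ant-colony component and the actual MUD likelihood cost are omitted, and the inertia and acceleration constants are conventional textbook values, not those of the paper:

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimisation minimising f over R^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social acceleration
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g_i = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f
```

    In an MUD setting, f would score a candidate symbol vector against the received signal; here a simple quadratic serves as a placeholder objective.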

  3. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
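
    The first step of such a method, building a covariance matrix from a time-delay embedding of the signal, can be sketched as follows; the eigenvalue comparison against the Gaussian reference process and the statistical test are omitted from this sketch:

```python
def embed(x, dim, tau=1):
    """Time-delay embedding: each row is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    return [x[i:i + dim * tau:tau] for i in range(len(x) - (dim - 1) * tau)]

def covariance(rows):
    """Sample covariance matrix of the embedded vectors."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[sum((r[j] - mean[j]) * (r[k] - mean[k]) for r in rows) / (n - 1)
             for k in range(d)] for j in range(d)]
```

    The eigenvalues of this matrix would then be compared with those expected from a Gaussian random process of the same dimension and sample size.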

  4. A Comparison of One Time Pad Random Key Generation using Linear Congruential Generator and Quadratic Congruential Generator

    NASA Astrophysics Data System (ADS)

    Apdilah, D.; Harahap, M. K.; Khairina, N.; Husein, A. M.; Harahap, M.

    2018-04-01

    The One Time Pad algorithm always requires a key paired with the plaintext. If the key is shorter than the plaintext, the key is repeated until its length matches that of the plaintext. In this research, we use a Linear Congruential Generator and a Quadratic Congruential Generator for generating random numbers. One Time Pad uses a random number as the key for the encryption and decryption process. Key generation starts from the first letter of the plaintext. We compare these two generators in terms of encryption speed, and the result is that the combination of OTP with LCG is faster than the combination of OTP with QCG.
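
    A minimal sketch of pairing an LCG keystream with a one-time-pad XOR cipher. The LCG constants are illustrative glibc-style values, not those of the paper, and an LCG keystream is of course not cryptographically secure; the point is only matching key length to plaintext length:

```python
def lcg_key(seed, n, a=1103515245, c=12345, m=2**31):
    """LCG keystream: x_{k+1} = (a*x_k + c) mod m, reduced to one
    key byte per plaintext byte."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x & 0xFF)
    return out

def otp(data, key):
    """One-time pad: XOR each byte with its paired key byte (self-inverse)."""
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"ATTACK AT DAWN"
key = lcg_key(seed=42, n=len(plaintext))
ciphertext = otp(plaintext, key)
assert otp(ciphertext, key) == plaintext  # decryption is the same XOR
```

    Because the generator emits exactly one key byte per plaintext byte, no key repetition is needed regardless of message length.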

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    A Random Geometric Graph (RGG) is constructed by distributing n nodes uniformly at random in the unit square and connecting two nodes if their Euclidean distance is at most r, for some prescribed r. They analyze the following randomized broadcast algorithm on RGGs. At the beginning, there is only one informed node. Then in each round, each informed node chooses a neighbor uniformly at random and informs it. They prove that this algorithm informs every node in the largest component of an RGG in O(sqrt(n)/r) rounds with high probability. This holds for any value of r larger than the critical value for the emergence of a giant component. In particular, the result implies that the diameter of the giant component is Theta(sqrt(n)/r).
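
    The construction and the push broadcast process can be simulated directly; the values of n, r, and the seeds below are arbitrary illustrative choices:

```python
import math
import random

def rgg(n, r, seed=0):
    """Random geometric graph: n points uniform in the unit square,
    an edge wherever the Euclidean distance is at most r."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def component(adj, start):
    """Nodes reachable from `start` (depth-first search)."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def push_broadcast(adj, start=0, seed=0, max_rounds=100000):
    """Each round every informed node informs one neighbour chosen uniformly
    at random; returns rounds until the whole component of `start` is informed."""
    rng = random.Random(seed)
    target = component(adj, start)
    informed, rounds = {start}, 0
    while informed != target and rounds < max_rounds:
        for u in list(informed):  # snapshot: new nodes push from the next round
            if adj[u]:
                informed.add(rng.choice(adj[u]))
        rounds += 1
    return rounds
```

    On well-connected instances the number of rounds grows like the component's diameter plus a logarithmic term, consistent with the O(sqrt(n)/r) bound stated above.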

  6. Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)

    PubMed Central

    Meyer, Karin

    2008-01-01

    Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112

  7. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  8. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

    The estimation of heart rate (HR) using wearable devices is of interest in fitness applications. Photoplethysmography (PPG) is a promising approach to estimating HR due to its low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 proposes a hybrid method to effectively remove MA with low computational complexity, where two MA removal algorithms are combined by an accurate binary decision algorithm whose aim is to decide whether or not to adopt the second MA removal algorithm. Stage 2 proposes a random forest-based spectral peak-tracking algorithm whose aim is to locate the spectral peak corresponding to HR, formulating the problem of spectral peak tracking as a pattern classification problem. Experiments on the PPG datasets of the 22 subjects used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM) across the 22 PPG datasets. Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.

  9. VNIR hyperspectral background characterization methods in adverse weather conditions

    NASA Astrophysics Data System (ADS)

    Romano, João M.; Rosario, Dalton; Roth, Luz

    2009-05-01

    Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may improve an algorithm's ability to discriminate potential targets across a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage, a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling, and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, namely the global information method, the 2-stage global information method, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprised of heavy, light, and transitional fog, light and heavy rain, and low-light conditions.

  10. Collective Influence Algorithm to find influencers via optimal percolation in massively large social media

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Min, Byungjoon; Bo, Lin; Mari, Romain; Makse, Hernán A.

    2016-07-01

    We elaborate on a linear-time implementation of Collective-Influence (CI) algorithm introduced by Morone, Makse, Nature 524, 65 (2015) to find the minimal set of influencers in networks via optimal percolation. The computational complexity of CI is O(N log N) when removing nodes one-by-one, made possible through an appropriate data structure to process CI. We introduce two Belief-Propagation (BP) variants of CI that consider global optimization via message-passing: CI propagation (CIP) and Collective-Immunization-Belief-Propagation algorithm (CIBP) based on optimal immunization. Both identify a slightly smaller fraction of influencers than CI and, remarkably, reproduce the exact analytical optimal percolation threshold obtained in Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement for random graphs. However, the small augmented performance comes at the expense of increasing running time to O(N^2), rendering BP prohibitive for modern-day big-data. For instance, for big-data social networks of 200 million users (e.g., Twitter users sending 500 million tweets/day), CI finds influencers in 2.5 hours on a single CPU, while all BP algorithms (CIP, CIBP and BDP) would take more than 3,000 years to accomplish the same task.
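
    The per-node Collective Influence score, CI_ell(i) = (k_i - 1) * sum of (k_j - 1) over the frontier of the ball of radius ell around i, can be sketched as follows; the adaptive remove-and-recompute loop and the O(N log N) data structure of the full algorithm are omitted:

```python
import collections

def ci(adj, i, ell):
    """CI_ell(i) = (k_i - 1) * sum_{j in frontier(i, ell)} (k_j - 1),
    where the frontier is the set of nodes exactly ell steps from i."""
    dist = {i: 0}
    q = collections.deque([i])
    while q:                      # breadth-first search out to radius ell
        u = q.popleft()
        if dist[u] == ell:
            continue              # do not expand beyond the ball of radius ell
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    k_i = len(adj[i])
    return (k_i - 1) * sum(len(adj[j]) - 1 for j, d in dist.items() if d == ell)
```

    The full algorithm repeatedly removes the node with the largest CI value and recomputes scores only where the removal changed them, which is what keeps the overall cost near O(N log N).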

  11. Collective Influence Algorithm to find influencers via optimal percolation in massively large social media.

    PubMed

    Morone, Flaviano; Min, Byungjoon; Bo, Lin; Mari, Romain; Makse, Hernán A

    2016-07-26

    We elaborate on a linear-time implementation of Collective-Influence (CI) algorithm introduced by Morone, Makse, Nature 524, 65 (2015) to find the minimal set of influencers in networks via optimal percolation. The computational complexity of CI is O(N log N) when removing nodes one-by-one, made possible through an appropriate data structure to process CI. We introduce two Belief-Propagation (BP) variants of CI that consider global optimization via message-passing: CI propagation (CIP) and Collective-Immunization-Belief-Propagation algorithm (CIBP) based on optimal immunization. Both identify a slightly smaller fraction of influencers than CI and, remarkably, reproduce the exact analytical optimal percolation threshold obtained in Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement for random graphs. However, the small augmented performance comes at the expense of increasing running time to O(N^2), rendering BP prohibitive for modern-day big-data. For instance, for big-data social networks of 200 million users (e.g., Twitter users sending 500 million tweets/day), CI finds influencers in 2.5 hours on a single CPU, while all BP algorithms (CIP, CIBP and BDP) would take more than 3,000 years to accomplish the same task.

  12. Analysis and Reduction of Complex Networks Under Uncertainty.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger G

    2014-07-31

    This effort was a collaboration with Youssef Marzouk of MIT, Omar Knio of Duke University (at the time at Johns Hopkins University), and Habib Najm of Sandia National Laboratories. The objective of this effort was to develop the mathematical and algorithmic capacity to analyze complex networks under uncertainty. Of interest were chemical reaction networks and smart grid networks. The statements of work for USC focused on the development of stochastic reduced models for uncertain networks. The USC team was led by Professor Roger Ghanem and consisted of one graduate student and a postdoc. The contributions completed by the USC team consisted of 1) methodology and algorithms to address the eigenvalue problem, a problem of significance in the stability of networks under stochastic perturbations; 2) methodology and algorithms to characterize probability measures on graph structures with random flows, an important problem in characterizing random demand (encountered in smart grids) and random degradation (encountered in infrastructure systems), as well as modeling errors in Markov chains (with ubiquitous relevance); and 3) methodology and algorithms for treating inequalities in uncertain systems, an important problem in the context of models for material failure and network flows under uncertainty, where conditions of failure or flow are described as inequalities between the state variables.

  13. Collective Influence Algorithm to find influencers via optimal percolation in massively large social media

    PubMed Central

    Morone, Flaviano; Min, Byungjoon; Bo, Lin; Mari, Romain; Makse, Hernán A.

    2016-01-01

    We elaborate on a linear-time implementation of the Collective Influence (CI) algorithm introduced by Morone and Makse, Nature 524, 65 (2015), to find the minimal set of influencers in networks via optimal percolation. The computational complexity of CI is O(N log N) when removing nodes one by one, made possible through an appropriate data structure for processing CI. We introduce two Belief-Propagation (BP) variants of CI that perform global optimization via message passing: CI propagation (CIP) and the Collective-Immunization-Belief-Propagation algorithm (CIBP), based on optimal immunization. Both identify a slightly smaller fraction of influencers than CI and, remarkably, reproduce the exact analytical optimal percolation threshold obtained in Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement for random graphs. However, this small performance gain comes at the expense of increasing the running time to O(N^2), rendering BP prohibitive for modern-day big data. For instance, for big-data social networks of 200 million users (e.g., Twitter users sending 500 million tweets per day), CI finds influencers in 2.5 hours on a single CPU, while all BP algorithms (CIP, CIBP and BDP) would take more than 3,000 years to accomplish the same task. PMID:27455878

  14. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is to generate efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost, comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a biobjective optimization problem, the two objectives being to minimize total LPC and to minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities.
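    The biobjective selection at the heart of NSGA-II rests on Pareto dominance. As a hedged sketch (NSGA-II proper adds fast non-dominated sorting and crowding-distance selection), candidate query plans scored by hypothetical (LPC, CC) cost pairs can be ranked into Pareto fronts like this:

```python
def dominates(a, b):
    # a dominates b if it is no worse in both objectives and not identical
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def non_dominated_fronts(points):
    """Partition (LPC, CC) cost pairs into Pareto fronts, best front first.
    Naive O(n^2) sketch; assumes distinct cost pairs."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

    Plans in the first front are the trade-off-optimal candidates: no other plan is cheaper in both local processing and communication cost.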

  15. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is to generate efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost, comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a biobjective optimization problem, the two objectives being to minimize total LPC and to minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities. PMID:24963513

  16. Fourier Transform Infrared (FT-IR) and Laser Ablation Inductively Coupled Plasma-Mass Spectrometry (LA-ICP-MS) Imaging of Cerebral Ischemia: Combined Analysis of Rat Brain Thin Cuts Toward Improved Tissue Classification.

    PubMed

    Balbekova, Anna; Lohninger, Hans; van Tilborg, Geralda A F; Dijkhuizen, Rick M; Bonta, Maximilian; Limbeck, Andreas; Lendl, Bernhard; Al-Saad, Khalid A; Ali, Mohamed; Celikic, Minja; Ofner, Johannes

    2018-02-01

    Microspectroscopic techniques are widely used to complement histological studies. Due to recent developments in the field of chemical imaging, combined chemical analysis has become attractive; it facilitates a deeper analysis than single techniques or side-by-side analysis. In this study, rat brains harvested one week after induction of photothrombotic stroke were investigated. Adjacent thin cuts from the rats' brains were imaged using Fourier transform infrared (FT-IR) microspectroscopy and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The LA-ICP-MS data were normalized using an internal standard (a thin gold layer). The acquired hyperspectral data cubes were fused and subjected to multivariate analysis. Brain regions affected by stroke as well as unaffected gray and white matter were identified and classified using a model based on either partial least squares discriminant analysis (PLS-DA) or random decision forest (RDF) algorithms. The RDF algorithm demonstrated the best classification results. Improved classification was observed for the fused data in comparison to the individual data sets (either FT-IR or LA-ICP-MS). Variable importance analysis demonstrated that both molecular and elemental content contribute to the improved RDF classification. Univariate spectral analysis identified biochemical properties of the assigned tissue types. Classification of multisensor hyperspectral data sets using an RDF algorithm allows access to a novel and in-depth understanding of biochemical processes and solid chemical allocation of different brain regions.

  17. GCOM-W soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  18. A Novel Color Image Encryption Algorithm Based on Quantum Chaos Sequence

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Jin, Cong

    2017-03-01

    In this paper, a novel image encryption algorithm based on quantum chaos is proposed. The keystreams are generated from the two-dimensional logistic map, given initial conditions and parameters. A general Arnold scrambling algorithm with keys is then exploited to permute the pixels of the color components. In the diffusion process, a novel encryption algorithm, the folding algorithm, is proposed to modify the values of the diffused pixels. To obtain high randomness and complexity, the two-dimensional logistic map and the quantum chaotic map are coupled with nearest-neighboring coupled-map lattices. Theoretical analyses and computer simulations confirm that the proposed algorithm has a high level of security.
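    The general recipe, permutation by an Arnold map followed by diffusion with a chaos-derived keystream, can be sketched as follows. This toy version uses the classical (not quantum) one-dimensional logistic map and illustrative parameters, not the paper's coupled quantum-chaotic lattice or folding step:

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x <- r*x*(1-x); x0, r are key-like
    parameters (illustrative values, sensitive to initial conditions)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) & 0xFF)
    return ks

def arnold_permute(pixels, n, rounds=1):
    """Arnold cat map on an n*n image stored row-major:
    (i, j) -> (i + j, i + 2j) mod n, a bijective pixel scramble."""
    out = list(pixels)
    for _ in range(rounds):
        nxt = [0] * (n * n)
        for i in range(n):
            for j in range(n):
                ni, nj = (i + j) % n, (i + 2 * j) % n
                nxt[ni * n + nj] = out[i * n + j]
        out = nxt
    return out

def encrypt(pixels, n, x0=0.3, r=3.99):
    # Permutation stage, then XOR diffusion with the chaotic keystream
    scrambled = arnold_permute(pixels, n)
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(scrambled, ks)]
```

    Decryption reverses the two stages: XOR with the same keystream, then apply the inverse Arnold map (the cat-map matrix has determinant 1, so it is invertible mod n).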

  19. Fast Algorithms for Estimating Mixture Parameters

    DTIC Science & Technology

    1989-08-30

    The investigation is a two-year project, with the first year sponsored by the Army Research Office and the second year by the National Science Foundation (Grant...). Numerical testing of the accelerated fixed-point method was completed. The work on relaxation methods will be done under the sponsorship of the National Science Foundation during the coming year. Keywords: Fast algorithms; Mixture distribution; Random variables. (KR)
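    The fixed-point iteration at issue in mixture-parameter estimation is typically the EM update. A minimal, unaccelerated sketch for a two-component Gaussian mixture with known, equal variances (the report's accelerated variant is not specified here, and all starting values are illustrative):

```python
import math

def em_two_gaussians(data, iters=200):
    """Plain EM fixed-point iteration for a two-component Gaussian mixture
    with unit variance: alternate responsibilities (E-step) and
    weight/mean re-estimation (M-step)."""
    w, mu1, mu2, var = 0.5, min(data), max(data), 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p1 = w * math.exp(-(x - mu1) ** 2 / (2 * var))
            p2 = (1 - w) * math.exp(-(x - mu2) ** 2 / (2 * var))
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate the mixing weight and the two means
        s = sum(r)
        w = s / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
    return w, mu1, mu2
```

    Acceleration schemes (e.g., Aitken-type extrapolation of this map) aim to cut the number of such fixed-point sweeps.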

  20. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  1. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi and conjugate gradient based iterative methods using iteration on data, is presented. In the new technique, the multiplication of a vector by a matrix is reorganized into three steps instead of the commonly used two. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
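    The preconditioned conjugate gradient core can be sketched as follows. This dense, Jacobi-preconditioned toy stands in for the paper's iteration-on-data implementation, which never forms the mixed model equations explicitly:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for a symmetric
    positive-definite system A x = b (dense sketch)."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]   # diagonal preconditioner
    x = [0.0] * n
    r = b[:]                                    # residual b - A x with x = 0
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

    The matrix-vector product `Ap` is the step that iteration-on-data methods split up (here, into the three-step form described above) so that the coefficient matrix never has to be held in memory.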

  2. Empirical study of seven data mining algorithms on different characteristics of datasets for biomedical classification applications.

    PubMed

    Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi

    2017-11-02

    New data mining algorithms continue to emerge as related disciplines develop, and they differ in applicable scope and performance. Finding a suitable algorithm for a given dataset has therefore become important for biomedical researchers seeking to solve practical problems promptly. In this paper, seven widely used algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected for study. The seven algorithms were applied to the 12 top-click UCI public datasets for the task of classification, and their performances were compared. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on the unbalanced multi-class dataset. Simple algorithms, such as naïve Bayes and logistic regression, are suitable for small datasets with high correlation between the task variable and the other attributes. The k-nearest neighbor and C4.5 decision tree algorithms perform well on binary- and multi-class datasets. Support vector machine is more adept on balanced small datasets for binary-class tasks. No algorithm maintains the best performance across all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics is summarized to provide a reference for biomedical researchers and beginners in different fields.
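    Two of the dataset descriptors mentioned above, the class entropy of the task variable and the largest-to-smallest class size ratio, are straightforward to compute; the function names here are illustrative:

```python
import math
from collections import Counter

def class_entropy(labels):
    """Shannon entropy (in bits) of the task variable's class distribution."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def imbalance_ratio(labels):
    """Sample size of the largest class divided by that of the smallest."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())
```

    A balanced binary task has entropy 1 bit and ratio 1; skewed class distributions lower the entropy and raise the ratio, which is the regime where the abstract reports random forest outperforming AdaBoost.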

  3. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is presented: a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
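    The idea of RGS is to take the cut grid from the data itself: each randomly sampled signal event defines a candidate cut point. A minimal one-variable sketch (the figure of merit s/sqrt(s+b) is an illustrative choice, not part of the algorithm's definition, and all names are hypothetical):

```python
import random

def random_grid_search(signal, background, n_points=50, seed=1):
    """RGS sketch: candidate one-sided cuts are the variable values of
    randomly sampled signal events; a cut keeps events with value >= cut.
    Returns the best (cut, figure_of_merit) pair."""
    rng = random.Random(seed)
    cuts = rng.sample(signal, min(n_points, len(signal)))
    best = None
    for c in cuts:
        s = sum(1 for x in signal if x >= c)        # surviving signal
        b = sum(1 for x in background if x >= c)    # surviving background
        fom = s / (s + b) ** 0.5 if s + b else 0.0
        if best is None or fom > best[1]:
            best = (c, fom)
    return best
```

    Sampling cut points from observed events concentrates the grid where the signal actually lives, which is what makes RGS cheaper than an exhaustive grid scan.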

  4. Determining the Number of Clusters in a Data Set Without Graphical Interpretation

    NASA Technical Reports Server (NTRS)

    Aguirre, Nathan S.; Davies, Misty D.

    2011-01-01

    Cluster analysis is a data mining technique meant to simplify the process of classifying data points. The basic clustering process requires an input of data points and the number of clusters wanted. The clustering algorithm then picks starting C points for the clusters, which can be either random spatial points or random data points. It then assigns each data point to the nearest C point, where "nearest" usually means Euclidean distance, but some algorithms use another criterion. The next step is determining whether the clustering arrangement thus found is within a certain tolerance. If it falls within this tolerance, the process ends. Otherwise the C points are adjusted based on how many data points are in each cluster, and the steps repeat until the algorithm converges.
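    The loop described above translates almost line for line into code. A minimal Euclidean k-means sketch, with an optional explicit list of starting centers (`init`, an addition for reproducibility; by default it seeds from random data points as the abstract describes):

```python
import random

def kmeans(points, k, init=None, iters=100, seed=0):
    """Assign each point to its nearest center, recompute each center as
    its cluster mean, and repeat until the centers stop moving."""
    centers = list(init) if init else random.Random(seed).sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:                      # empty cluster keeps its old center
                new_centers.append(tuple(sum(d) / len(cl) for d in zip(*cl)))
            else:
                new_centers.append(tuple(centers[i]))
        if new_centers == centers:      # converged: the C points are stable
            break
        centers = new_centers
    return centers, clusters
```

    With random initialization the result can depend on the starting points, which is why the abstract's tolerance test and repeat loop matter.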

  5. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Chengguang; Drinkwater, Bruce W.

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
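    For reference, the total focusing method itself is exhaustive delay-and-sum over every transmit-receive pair of the full matrix capture. A small synthetic sketch (arbitrary units, Gaussian pulses, nearest-sample interpolation, all parameters illustrative):

```python
import numpy as np

c = 1.0                                   # wave speed (arbitrary units)
elems = np.linspace(-2, 2, 8)             # array element x-positions at z = 0
scat = (0.5, 3.0)                         # point scatterer (x, z)
dt = 0.01
t = np.arange(0, 20, dt)

def dist(ex, x, z):
    return np.hypot(x - ex, z)

# Synthetic full matrix capture: one trace per transmit/receive pair,
# a narrow pulse at the transmit -> scatterer -> receive time of flight
fmc = np.zeros((len(elems), len(elems), len(t)))
for i, tx in enumerate(elems):
    for j, rx in enumerate(elems):
        tof = (dist(tx, *scat) + dist(rx, *scat)) / c
        fmc[i, j] = np.exp(-((t - tof) / 0.05) ** 2)

# Total focusing method: delay-and-sum every pair at every image pixel
xs = np.linspace(-1, 2, 31)
zs = np.linspace(1, 5, 41)
img = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        acc = 0.0
        for i, tx in enumerate(elems):
            for j, rx in enumerate(elems):
                tau = (dist(tx, x, z) + dist(rx, x, z)) / c
                acc += fmc[i, j, min(int(round(tau / dt)), len(t) - 1)]
        img[iz, ix] = acc
peak = np.unravel_index(img.argmax(), img.shape)
```

    All 64 traces align coherently only at the true scatterer position, so the image peaks there; incoherent summation elsewhere is also why the method averages down random noise.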

  6. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  7. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    In this paper, the random grid search (RGS) is presented: a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  8. Enforcing Memory Policy Specifications in Reconfigurable Hardware

    DTIC Science & Technology

    2008-10-01

    we explain the algorithms behind our reference monitor design flow. In Section 4, we describe our access policy language including several example...NFA from this regular expression using Thompson’s Algorithm [1] as implemented by Gerzic [19]. Figure 4 shows the NFA for our policy. Notice that the... Algorithm [1] as implemented by Grail [49] to minimize the DFA. Figure 5 shows the minimized DFA for our policy. Processing the Ranges Before we can

  9. Making working memory work: the effects of extended practice on focus capacity and the processes of updating, forward access, and random access.

    PubMed

    Price, John M; Colflesh, Gregory J H; Cerella, John; Verhaeghen, Paul

    2014-05-01

    We investigated the effects of 10 h of practice on variations of the N-back task to probe the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed), whereas the processes of forward access (i.e., retrieval in order, as in a standard N-back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity, as measured by the size of the focus of attention, for the forward-access task but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Identification of cultivars and validation of genetic relationships in Mangifera indica L. using RAPD markers.

    PubMed

    Schnell, R J; Ronning, C M; Knight, R J

    1995-02-01

    Twenty-five accessions of mango were examined for random amplified polymorphic DNA (RAPD) genetic markers with 80 10-mer random primers. Of the 80 primers screened, 33 did not amplify, 19 were monomorphic, and 28 gave reproducible, polymorphic DNA amplification patterns. Eleven primers were selected from the 28 for the study. The number of bands generated was primer- and genotype-dependent, and ranged from 1 to 10. No primer gave unique banding patterns for each of the 25 accessions; however, ten different combinations of 2 primer banding patterns produced unique fingerprints for each accession. A maternal half-sib (MHS) family was included among the 25 accessions to see if genetic relationships could be detected. RAPD data were used to generate simple matching coefficients, which were analyzed phenetically and by means of principal coordinate analysis (PCA). The MHS clustered together in both the phenetic and the PCA while the randomly selected accessions were scattered with no apparent pattern. The uses of RAPD analysis for Mangifera germ plasm classification and clonal identification are discussed.

  11. Using Geographic Information Systems and Spatial Analysis Methods to Assess Household Water Access and Sanitation Coverage in the SHINE Trial.

    PubMed

    Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I

    2015-12-15

    Access to water and sanitation are important determinants of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated, and for each cluster, the median distance and the proportion of households within <500 m and >1500 m of such a water source. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
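    The cluster covariates described above reduce to a nearest-source distance computation per household. A hedged sketch using the haversine great-circle distance (the trial's GIS tooling is not specified here; function names and coordinates are illustrative):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0                         # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def cluster_water_stats(households, sources):
    """Median nearest-source distance for a cluster, plus the proportions
    of households within <500 m and beyond >1500 m of a source."""
    dists = sorted(min(haversine_m(hl, hn, sl, sn) for sl, sn in sources)
                   for hl, hn in households)
    n = len(dists)
    median = (dists[n // 2] if n % 2
              else 0.5 * (dists[n // 2 - 1] + dists[n // 2]))
    return (median,
            sum(d < 500 for d in dists) / n,
            sum(d > 1500 for d in dists) / n)
```

    Per-cluster tuples like these are exactly the kind of covariate that can then be fed into a constrained randomization to balance water access across trial arms.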

  12. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service

    PubMed Central

    Grude, Nils; Lindbaek, Morten

    2015-01-01

    Objective. To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Design. Randomized controlled trial. Setting. Out-of-hours service, Oslo, Norway. Intervention. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010–November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Subjects. Women (n = 441) aged 16–55 years. Mean age in both groups 27 years. Main outcome measures. Number of days until symptomatic resolution. Results. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was non-significantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Conclusion. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder. PMID:25961367

  13. A randomized controlled trial of a diagnostic algorithm for symptoms of uncomplicated cystitis at an out-of-hours service.

    PubMed

    Bollestad, Marianne; Grude, Nils; Lindbaek, Morten

    2015-06-01

    To compare the clinical outcome of patients presenting with symptoms of uncomplicated cystitis who were seen by a doctor, with patients who were given treatment following a diagnostic algorithm. Randomized controlled trial. Out-of-hours service, Oslo, Norway. Women with typical symptoms of uncomplicated cystitis were included in the trial in the time period September 2010-November 2011. They were randomized into two groups. One group received standard treatment according to the diagnostic algorithm, the other group received treatment after a regular consultation by a doctor. Women (n = 441) aged 16-55 years. Mean age in both groups 27 years. Number of days until symptomatic resolution. No significant differences were found between the groups in the basic patient demographics, severity of symptoms, or percentage of urine samples with single culture growth. A median of three days until symptomatic resolution was found in both groups. By day four 79% in the algorithm group and 72% in the regular consultation group were free of symptoms (p = 0.09). The number of patients who contacted a doctor again in the follow-up period and received alternative antibiotic treatment was non-significantly higher (p = 0.08) after regular consultation than after treatment according to the diagnostic algorithm. There were no cases of severe pyelonephritis or hospital admissions during the follow-up period. Using a diagnostic algorithm is a safe and efficient method for treating women with symptoms of uncomplicated cystitis at an out-of-hours service. This simplification of treatment strategy can lead to a more rational use of consultation time and a stricter adherence to National Antibiotic Guidelines for a common disorder.

  14. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial

    PubMed Central

    Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-01-01

    Background: Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Methods: Inpatients, aged 18 to 70 years, with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Results: Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031), and for ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rate (66.2%). Conclusions: A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission for depressed inpatients than treatment as usual or computerized medication choice guidance. PMID:28645191

  15. How Effective Is Algorithm-Guided Treatment for Depressed Inpatients? Results from the Randomized Controlled Multicenter German Algorithm Project 3 Trial.

    PubMed

    Adli, Mazda; Wiethoff, Katja; Baghai, Thomas C; Fisher, Robert; Seemüller, Florian; Laakmann, Gregor; Brieger, Peter; Cordes, Joachim; Malevani, Jaroslav; Laux, Gerd; Hauth, Iris; Möller, Hans-Jürgen; Kronmüller, Klaus-Thomas; Smolka, Michael N; Schlattmann, Peter; Berger, Maximilian; Ricken, Roland; Stamm, Thomas J; Heinz, Andreas; Bauer, Michael

    2017-09-01

    Treatment algorithms are considered key to improving outcomes by enhancing the quality of care. This is the first randomized controlled study to evaluate the clinical effect of algorithm-guided treatment in inpatients with major depressive disorder. Inpatients, aged 18 to 70 years, with major depressive disorder from 10 German psychiatric departments were randomized to 5 different treatment arms (from 2000 to 2005), 3 of which were standardized stepwise drug treatment algorithms (ALGO). The fourth arm proposed medications and provided less specific recommendations based on a computerized documentation and expert system (CDES); the fifth arm received treatment as usual (TAU). ALGO included 3 different second-step strategies: lithium augmentation (ALGO LA), antidepressant dose-escalation (ALGO DE), and switch to a different antidepressant (ALGO SW). Time to remission (21-item Hamilton Depression Rating Scale ≤9) was the primary outcome. Time to remission was significantly shorter for ALGO DE (n=91) compared with both TAU (n=84) (HR=1.67; P=.014) and CDES (n=79) (HR=1.59; P=.031), and for ALGO SW (n=89) compared with both TAU (HR=1.64; P=.018) and CDES (HR=1.56; P=.038). For both ALGO LA (n=86) and ALGO DE, fewer antidepressant medications were needed to achieve remission than for CDES or TAU (P<.001). Remission rates at discharge differed across groups; ALGO DE had the highest (89.2%) and TAU the lowest rate (66.2%). A highly structured algorithm-guided treatment is associated with shorter times and fewer medication changes to achieve remission for depressed inpatients than treatment as usual or computerized medication choice guidance. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  16. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm's flexibility is demonstrated by simulating ascent under dynamic loading constraints through a set of random wind profiles, with and without wind-sensing capability.

  17. Online CBT life skills programme for low mood and anxiety: study protocol for a pilot randomized controlled trial.

    PubMed

    Williams, Christopher; McClay, Carrie-Anne; Martinez, Rebeca; Morrison, Jill; Haig, Caroline; Jones, Ray; Farrand, Paul

    2016-04-27

Low mood is a common mental health problem with significant health consequences. Studies have shown that cognitive behavioural therapy (CBT) is an effective treatment for low mood and anxiety when delivered one-to-one by an expert practitioner. However, access to this talking therapy is often limited and waiting lists can be long, although a range of low-intensity interventions that can increase access to services are available. These include guided self-help materials delivered via books, classes and online packages. This project aims to pilot a randomized controlled trial of an online CBT-based life skills course with community-based individuals experiencing low mood and anxiety. Individuals with elevated symptoms of depression will be recruited directly from the community via online and newspaper advertisements. Participants will be remotely randomized to receive either immediate access or delayed access to the Living Life to the Full guided online CBT-based life skills package, with telephone or email support provided whilst they use the online intervention. The primary end point will be at 3 months post-randomization, at which point the delayed-access group will be offered the intervention. Levels of depression, anxiety, social functioning and satisfaction will be assessed. This pilot study will test the trial design and the ability to recruit participants and deliver the intervention. Drop-out rates will be assessed, and the completion and acceptability of the package will be investigated. The study will also inform a sample size power calculation for a subsequent substantive randomized controlled trial. ISRCTN ISRCTN12890709.

  18. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

Conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real time and that the measurement noise is Gaussian white noise. In practice, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Second, the expressions for the predicted measurement and covariance are reformulated, removing the restriction that the maximum delay must be one or two steps and the assumption that the Bernoulli random variables take the value one with equal probability. Third, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived from the 5th-degree spherical-radial rule and the reformulated expressions. Finally, this filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost-phase tracking example demonstrates the superiority of the proposed algorithms.
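The Bernoulli-delay measurement model described above can be illustrated with a short pure-Python sketch. The delay distribution, data and function names here are invented for illustration and are not taken from the paper:

```python
import random

def delayed_measurements(y, delay_probs, seed=0):
    """Simulate arbitrary-step randomly delayed measurements.

    y           -- list of true measurements y_0 .. y_{n-1}
    delay_probs -- delay_probs[d] = probability that the measurement
                   received at step k is the d-step-old value y_{k-d}
    """
    rng = random.Random(seed)
    z = []
    for k in range(len(y)):
        # draw the delay d from the categorical distribution,
        # clipping at k so we never index before the first sample
        r, d, acc = rng.random(), 0, 0.0
        for i, p in enumerate(delay_probs):
            acc += p
            if r < acc:
                d = i
                break
        z.append(y[max(k - d, 0)])
    return z

truth = [float(k) for k in range(10)]
received = delayed_measurements(truth, [0.7, 0.2, 0.1])
```

With `delay_probs=[1.0]` the channel is delay-free and the received sequence equals the truth; a filter built on this model must account for the fact that `received[k]` may actually be an older `truth[k-d]`.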

  19. Simulation of the mechanical behavior of random fiber networks with different microstructure.

    PubMed

    Hatami-Marbini, H

    2018-05-24

    Filamentous protein networks are broadly encountered in biological systems such as cytoskeleton and extracellular matrix. Many numerical studies have been conducted to better understand the fundamental mechanisms behind the striking mechanical properties of these networks. In most of these previous numerical models, the Mikado algorithm has been used to represent the network microstructure. Here, a different algorithm is used to create random fiber networks in order to investigate possible roles of architecture on the elastic behavior of filamentous networks. In particular, random fibrous structures are generated from the growth of individual fibers from random nucleation points. We use computer simulations to determine the mechanical behavior of these networks in terms of their model parameters. The findings are presented and discussed along with the response of Mikado fiber networks. We demonstrate that these alternative networks and Mikado networks show a qualitatively similar response. Nevertheless, the overall elasticity of Mikado networks is stiffer compared to that of the networks created using the alternative algorithm. We describe the effective elasticity of both network types as a function of their line density and of the material properties of the filaments. We also characterize the ratio of bending and axial energy and discuss the behavior of these networks in terms of their fiber density distribution and coordination number.

  20. Longitudinal data analysis with non-ignorable missing data.

    PubMed

    Tseng, Chi-hong; Elashoff, Robert; Li, Ning; Li, Gang

    2016-02-01

A common problem in longitudinal data analysis is missing data. Two types of missing patterns are generally considered in the statistical literature: monotone and non-monotone. Non-monotone missing data occur when study participants intermittently miss scheduled visits, while monotone missing data can result from discontinued participation, loss to follow-up, and mortality. Although many novel statistical approaches have been developed to handle missing data in recent years, few methods are available to handle both types of missingness simultaneously. In this article, a latent random effects model is proposed to analyze longitudinal outcomes with both monotone and non-monotone missingness in the context of missing not at random. Another significant contribution of this article is a new computational algorithm for latent random effects models. To reduce the computational burden of the high-dimensional integration problem in such models, we develop an algorithm that uses a new adaptive quadrature approach in conjunction with a Taylor series approximation of the likelihood function to simplify the E-step of the expectation-maximization algorithm. A simulation study is performed, and data from the scleroderma lung study are used to demonstrate the effectiveness of this method. © The Author(s) 2012.

  1. Adaptive mechanism-based congestion control for networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhang, Yun; Chen, C. L. Philip

    2013-03-01

To assure communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for congestion avoidance and resource management in networked systems. Unlike traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically but are dynamically adjusted by an adaptive mechanism. By integrating the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), provides better control of the queue, and achieves more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is carefully designed. Simulation studies show that the proposed ATRED algorithm performs well in varying network environments, outperforming the RED and Gentle-RED algorithms and providing more reliable service under varying network conditions.
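For context, classic RED computes its packet-drop probability linearly between two queue-length thresholds. The sketch below shows that standard rule plus a deliberately simplified adaptive-threshold step; the `adapt_thresholds` rule is a hypothetical illustration of the idea of tuning thresholds at runtime, not ATRED's actual adaptation law:

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Classic RED: zero below min_th, linear ramp up to max_p at
    max_th, certain drop beyond max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def adapt_thresholds(min_th, max_th, queue_len, target, gain=0.1):
    """Illustrative adaptive step (not the paper's rule): shift both
    thresholds to steer the queue toward a target occupancy."""
    shift = gain * (target - queue_len)
    return min_th + shift, max_th + shift
```

For example, with `min_th=10`, `max_th=30`, `max_p=0.1`, an average queue of 20 sits halfway up the ramp and yields a drop probability of 0.05.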

  2. Topology-selective jamming of fully-connected, code-division random-access networks

    NASA Technical Reports Server (NTRS)

    Polydoros, Andreas; Cheng, Unjeng

    1990-01-01

The purpose is to introduce certain models of topology-selective stochastic jamming and examine their impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time-selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and of the jamming spatial fraction are numerically established for DS and FH spreading.

  3. The Use of a Computer-Controlled Random Access Slide Projector for Rapid Information Display.

    ERIC Educational Resources Information Center

    Muller, Mark T.

    A 35mm random access slide projector operated in conjunction with a computer terminal was adapted to meet the need for a more rapid and complex graphic display mechanism than is currently available with teletypewriter terminals. The model projector can be operated manually to provide for a maintenance checkout of the electromechanical system.…

  4. 76 FR 45295 - In the Matter of Certain Static Random Access Memories and Products Containing Same; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-28

    ... supplementing the amended complaint was filed on June 28, 2011. A second amended complaint was filed on July 13... of certain static random access memories and products containing same by reason of infringement of... 13 of the `937 patent, and whether an industry in the United States exists as required by subsection...

  5. An efficient randomized algorithm for contact-based NMR backbone resonance assignment.

    PubMed

    Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal

    2006-01-15

Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.

  6. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup, mimicking the pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS improves the balance between exploration (extensive) and exploitation (intensive) of the search space, efficiently mimicking seal pup behavior in finding the best lair, and provides a new algorithm for use in global optimization problems.
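A minimal 1-D sketch of the two-state walk described above, assuming a simple greedy accept rule and an illustrative power-law Levy-like jump; all parameter values and the switching rule are invented for illustration, not taken from the paper:

```python
import random

def rss_step(x, state, rng, sigma=0.1, alpha=1.5):
    """One seal-pup move on a 1-D search space: a small Brownian
    (Gaussian) step in the normal state, a heavy-tailed Levy-like
    jump in the urgent state."""
    if state == "normal":
        return x + rng.gauss(0.0, sigma)
    u = rng.random() or 1e-12            # avoid u == 0
    step = (u ** (-1.0 / alpha) - 1.0) * sigma
    return x + step if rng.random() < 0.5 else x - step

def minimize(f, x0, iters=2000, noise_p=0.1, seed=1):
    """Greedy search keeping the best lair found so far; simulated
    predator noise (probability noise_p) triggers the urgent state."""
    rng = random.Random(seed)
    best_x, best_f = x0, f(x0)
    for _ in range(iters):
        state = "urgent" if rng.random() < noise_p else "normal"
        cand = rss_step(best_x, state, rng)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

best_x, best_f = minimize(lambda x: (x - 3.0) ** 2, 0.0)
```

The Brownian state refines the current best solution locally, while the occasional Levy jump provides the long-range exploration the abstract attributes to the urgent state.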

  7. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to the process to which they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, and hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
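The evaluation loop described above (generate points as mean plus noise, cluster them, count points clustered incorrectly relative to the generating processes) can be sketched for the two-cluster, 1-D case. The k-means implementation and all parameters below are illustrative, not the toolbox's code:

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Plain k-means on 1-D data: alternate nearest-center assignment
    and center recomputation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def clustering_error(points, true_labels, centers):
    """Misclustered count (two clusters) under the better of the two
    possible cluster-to-class matchings."""
    pred = [min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            for p in points]
    direct = sum(p != t for p, t in zip(pred, true_labels))
    return min(direct, len(points) - direct)

rng = random.Random(42)
# each class is "mean plus independent noise", as in the model
pts = ([rng.gauss(0.0, 0.5) for _ in range(50)] +
       [rng.gauss(4.0, 0.5) for _ in range(50)])
labels = [0] * 50 + [1] * 50
err = clustering_error(pts, labels, kmeans_1d(pts))
```

Repeating this over many seeds and noise variances gives error curves of the kind the toolbox tabulates.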

  8. Efficiency and effectiveness of the use of an acenocoumarol pharmacogenetic dosing algorithm versus usual care in patients with venous thromboembolic disease initiating oral anticoagulation: study protocol for a randomized controlled trial

    PubMed Central

    2012-01-01

Background Hemorrhagic events are frequent in patients on treatment with antivitamin-K oral anticoagulants due to their narrow therapeutic margin. Studies performed with acenocoumarol have shown the relationship between demographic, clinical and genotypic variants and the response to these drugs. Once the influence of these genetic and clinical factors on the dose of acenocoumarol needed to maintain a stable international normalized ratio (INR) has been demonstrated, new strategies need to be developed to predict the appropriate doses of this drug. Several pharmacogenetic algorithms have been developed for warfarin, but only three have been developed for acenocoumarol. After the development of a pharmacogenetic algorithm, the obvious next step is to demonstrate its effectiveness and utility by means of a randomized controlled trial. The aim of this study is to evaluate the effectiveness and efficiency of an acenocoumarol dosing algorithm developed by our group which includes demographic, clinical and pharmacogenetic variables (VKORC1, CYP2C9, CYP4F2 and ApoE) in patients with venous thromboembolism (VTE). Methods and design This is a multicenter, single blind, randomized controlled clinical trial. The protocol has been approved by La Paz University Hospital Research Ethics Committee and by the Spanish Drug Agency. Two hundred and forty patients with VTE in which oral anticoagulant therapy is indicated will be included. Randomization (case/control 1:1) will be stratified by center. Acenocoumarol dose in the control group will be scheduled and adjusted following common clinical practice; in the experimental arm dosing will follow an individualized algorithm developed and validated by our group. Patients will be followed for three months. The main endpoints are: 1) Percentage of patients with INR within the therapeutic range on day seven after initiation of oral anticoagulant therapy; 2) Time from the start of oral anticoagulant treatment to achievement of a stable INR within the therapeutic range; 3) Number of INR determinations within the therapeutic range in the first six weeks of treatment. Discussion To date, there are no clinical trials comparing a pharmacogenetic acenocoumarol dosing algorithm versus routine clinical practice in VTE. Implementation of this pharmacogenetic algorithm in routine clinical practice could reduce side effects and improve patient safety. Trial registration Eudra CT. Identifier: 2009-016643-18. PMID:23237631

  9. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can assist science education. Algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
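A compact pure-Python sketch of the diamond-square algorithm mentioned above; the grid size, corner seeding and roughness schedule are standard choices, not specific to this study:

```python
import random

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1)-square heightmap with diamond-square.

    Corner values seed the overall shape; `roughness` scales the
    random offsets, which halve at each subdivision level."""
    size = 2 ** n + 1
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    for r in (0, size - 1):              # seed the four corners
        for c in (0, size - 1):
            h[r][c] = rng.uniform(-1, 1)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: centers of squares
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (h[r - half][c - half] + h[r - half][c + half] +
                       h[r + half][c - half] + h[r + half][c + half]) / 4
                h[r][c] = avg + rng.uniform(-scale, scale)
        # square step: midpoints of square edges
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                s, cnt = 0.0, 0
                for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < size and 0 <= cc < size:
                        s += h[rr][cc]
                        cnt += 1
                h[r][c] = s / cnt + rng.uniform(-scale, scale)
        step, scale = half, scale / 2
    return h

terrain = diamond_square(4)   # 17 x 17 heightmap
```

Halving the offset scale at each level is what produces the fractal, self-similar relief; separate coherent-noise fields (Perlin or Simplex) would then overlay moisture and temperature on the same grid.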

  10. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    PubMed

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to learning algorithm and open to all structural based descriptors such as fragments, keys and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints.A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen as there is no change in the prediction; the interpretation is produced directly on the model's behaviour for the specific query. Models have been built using multiple learning algorithms including support vector machine and random forest. The models were built on public Ames mutagenicity data and a variety of fingerprint descriptors were used. These models produced a good performance in both internal and external validation with accuracies around 82%. The models were used to evaluate the interpretation algorithm. Interpretation was revealed that links closely with understood mechanisms for Ames mutagenicity. This methodology allows for a greater utilisation of the predictions made by black box models and can expedite further study based on the output for a (quantitative) structure activity model. Additionally the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  11. Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments

    DTIC Science & Technology

    2013-12-11

    positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for

  12. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
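A creeping random search of the kind described can be sketched as follows. This is a generic illustration, assuming a simple shrink-on-failure schedule; the paper's version additionally enforces state-variable and parameter constraints:

```python
import random

def creeping_random_search(f, x0, step=1.0, shrink=0.5,
                           levels=200, trials_per_level=20, seed=0):
    """Sequential random perturbation ("creeping") minimizer.

    Perturb the current best point by a random amount and keep the
    perturbation whenever it improves the objective; when a whole
    level of trials fails to improve, shrink the perturbation size
    so the search creeps in on the optimum."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(levels):
        improved = False
        for _ in range(trials_per_level):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= shrink   # refine the search locally
    return x, fx

best, val = creeping_random_search(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
```

Infeasible candidates could be rejected inside the trial loop in the same way a failed perturbation is, which is how parameter constraints fit naturally into this scheme.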

  13. Design of nucleic acid sequences for DNA computing based on a thermodynamic approach

    PubMed Central

    Tanaka, Fumiaki; Kameda, Atsushi; Yamamoto, Masahito; Ohuchi, Azuma

    2005-01-01

We have developed an algorithm for designing multiple sequences of nucleic acids that have a uniform melting temperature between each sequence and its complement and that do not hybridize non-specifically with each other, based on the minimum free energy (ΔGmin). Sequences that satisfy these constraints can be utilized in DNA computing, in various engineering applications such as microarrays, and in nano-fabrication. Our algorithm is a random generate-and-test algorithm: it generates a candidate sequence randomly and tests whether the sequence satisfies the constraints. The novelty of our algorithm is that the filtering method uses a greedy search to calculate ΔGmin. This effectively excludes inappropriate sequences before ΔGmin is calculated, reducing computation time drastically compared with an algorithm without the filtering. Experimental results in silico showed the superiority of the greedy search over the traditional approach based on the Hamming distance. In addition, experimental results in vitro demonstrated that the experimental free energy (ΔGexp) of 126 sequences correlated better with ΔGmin (|R| = 0.90) than with the Hamming distance (|R| = 0.80). These results validate the rationality of a thermodynamic approach. We implemented our algorithm in a graphical user interface-based program written in Java. PMID:15701762
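The generate-and-test loop can be sketched as below. Computing ΔGmin requires a thermodynamic model, so this illustration substitutes a fixed GC count as a crude melting-temperature proxy and the Hamming distance as the cross-hybridization filter, i.e. the very baseline the paper improves upon; all parameters are invented:

```python
import random

def design_sequences(n, length=20, gc=10, min_dist=8, seed=0):
    """Random generate-and-test sequence design (illustrative):
    reject candidates with the wrong GC count (melting-temperature
    proxy) or too small a Hamming distance to any accepted sequence
    (stand-in for the paper's free-energy filter)."""
    rng = random.Random(seed)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    accepted = []
    while len(accepted) < n:
        seq = "".join(rng.choice("ACGT") for _ in range(length))
        if seq.count("G") + seq.count("C") != gc:
            continue                     # reject: non-uniform melting temp
        if any(hamming(seq, s) < min_dist for s in accepted):
            continue                     # reject: too similar to the pool
        accepted.append(seq)
    return accepted

pool = design_sequences(5)
```

The paper's contribution is essentially a cheaper pre-filter for the second rejection test, so that the expensive ΔGmin computation runs only on promising candidates.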

  14. What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms

    PubMed Central

    Pilly, Praveen K.; Seitz, Aaron R.

    2009-01-01

Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as speed, contrast, duration, density, and aperture. However, as widely as RDMs are employed, they vary just as widely in their implementation details. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relation to pertinent neurophysiological data regarding the spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data provide a baseline by which different RDM algorithms can be compared, demonstrate the need for clearly reporting RDM details in the methods of papers, and pose new constraints and challenges to models of motion direction processing. PMID:19336240
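One common RDM variant, a "white-noise"-style display in which each dot independently carries signal or noise on every frame, can be sketched as follows. This is a generic illustration, not one of the four specific algorithms compared in the paper, and all parameter values are arbitrary:

```python
import math
import random

def rdm_frames(n_dots=50, n_frames=10, coherence=0.5,
               step=1.0, width=100.0, seed=0):
    """On each frame, every dot moves in the signal direction
    (rightward) with probability `coherence`, and in a uniformly
    random direction otherwise; positions wrap at the aperture edge."""
    rng = random.Random(seed)
    dots = [[rng.uniform(0, width), rng.uniform(0, width)]
            for _ in range(n_dots)]
    frames = []
    for _ in range(n_frames):
        for d in dots:
            if rng.random() < coherence:
                angle = 0.0                           # signal dot
            else:
                angle = rng.uniform(0, 2 * math.pi)   # noise dot
            d[0] = (d[0] + step * math.cos(angle)) % width
            d[1] = (d[1] + step * math.sin(angle)) % width
        frames.append([tuple(d) for d in dots])
    return frames

movie = rdm_frames()
```

Other variants differ in exactly the details the paper flags as worth reporting: whether dots are reassigned to signal or noise per frame, whether noise dots are displaced or replotted at random positions, the dot lifetime, and the frame-to-frame displacement.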

  15. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly-selected stream sites (n = 60) equally distributed among stream orders 1-4 were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and an associated "modified-random" site on each stream that was accessed via a road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites evaluated with reference condition thresholds, indicate that high proportions of the random sites (and by inference the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly-selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  16. Spatial versus sequential correlations for random access coding

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed

    2016-03-01

    Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.
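
The sequential (preparation-and-measurement) realization discussed above can be sketched numerically for the simplest 2-to-1 random access code. The sketch below is a generic illustration, not the authors' construction: two classical bits are encoded into one qubit prepared in the X-Z plane of the Bloch sphere, and either bit is decoded by a Z- or X-basis measurement, recovering the well-known quantum success probability cos^2(pi/8) ~ 0.85, above the optimal classical value of 0.75.

```python
import math

def state(a, b):
    """Bloch-sphere (X-Z plane) preparation angle for input bits (a, b)."""
    theta = {(0, 0): math.pi/4, (0, 1): -math.pi/4,
             (1, 0): 3*math.pi/4, (1, 1): -3*math.pi/4}[(a, b)]
    return (math.cos(theta/2), math.sin(theta/2))  # amplitudes on |0>, |1>

def p_correct(a, b, which):
    """Probability the receiver recovers the requested bit (0 -> a, 1 -> b)."""
    c0, c1 = state(a, b)
    if which == 0:                      # decode a: measure in the Z basis
        p0 = c0*c0                      # probability of outcome |0>
        return p0 if a == 0 else 1 - p0
    plus = (c0 + c1) / math.sqrt(2)     # decode b: measure in the X basis
    pplus = plus*plus
    return pplus if b == 0 else 1 - pplus

avg = sum(p_correct(a, b, w) for a in (0, 1) for b in (0, 1)
          for w in (0, 1)) / 8
print(round(avg, 4))  # 0.8536 = cos^2(pi/8), above the classical bound 0.75
```

Every input/question pair succeeds with the same probability here, which is what makes this preparation strategy optimal for the unassisted 2-to-1 code.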

  17. The Ettention software package.

    PubMed

    Dahmen, Tim; Marsalek, Lukas; Marniok, Nico; Turoňová, Beata; Bogachev, Sviatoslav; Trampert, Patrick; Nickels, Stefan; Slusallek, Philipp

    2016-02-01

We present a novel software package for the problem of "reconstruction from projections" in electron microscopy. The Ettention framework consists of a set of modular building blocks for tomographic reconstruction algorithms. The well-known block-iterative reconstruction method based on the Kaczmarz algorithm is implemented using these building blocks, including adaptations specific to electron tomography. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like the Xeon Phi, and (3) accessibility to microscopy end-users via integration in the IMOD package and the eTomo user interface. We also provide developers with a clean and well-structured application programming interface (API) that allows for extending the software easily, which makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Copyright © 2015 Elsevier B.V. All rights reserved.
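
The Kaczmarz iteration underlying the block-iterative method can be sketched in a few lines (a generic illustration, not Ettention's C++ API): each measurement row defines a hyperplane, and the current estimate is orthogonally projected onto each hyperplane in turn.

```python
def kaczmarz(A, b, sweeps=50):
    """Sequentially project x onto the hyperplane <a_i, x> = b_i of each row."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xj for r, xj in zip(row, x))
            norm2 = sum(r * r for r in row)
            scale = (bi - dot) / norm2          # signed distance to hyperplane
            x = [xj + scale * r for xj, r in zip(x, row)]
    return x

# Tiny consistent system standing in for projection data; solution is (1, 2).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [4.0, 7.0]
x = kaczmarz(A, b)
print([round(v, 4) for v in x])  # ~ [1.0, 2.0]
```

Block-iterative variants apply this update to groups of rows at once, which is what maps well onto GPU hardware.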

  18. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians on medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java, while allowing remote users who are not experienced programmers or algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.

  19. Dialysis-associated steal syndrome (DASS).

    PubMed

    Mohamed, Ahmed S; Peden, Eric K

    2017-03-06

In this article, we review the clinical symptoms of dialysis access steal syndrome (DASS), its evaluation and treatment options, and our approach and treatment algorithm. We reviewed the literature discussing different aspects of DASS, including its epidemiology, pathogenesis, clinical presentation, evaluation, and management options. DASS is the most dreaded complication of access surgery. Although the incidence is low, all providers caring for dialysis patients should be aware of this problem. Symptoms can range from mild to limb-threatening. Although various tests are available, the diagnosis of DASS remains a clinical one and requires thoughtful management to achieve the best outcomes. Multiple treatment options exist for steal. We present a diagnostic evaluation and management algorithm.

  20. pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2014-01-01

    pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. A Reinforcement Sensor Embedded Vertical Handoff Controller for Vehicular Heterogeneous Wireless Networks

    PubMed Central

    Li, Limin; Xu, Yubin; Soong, Boon-Hee; Ma, Lin

    2013-01-01

Vehicular communication platforms that provide real-time access to wireless networks have drawn more and more attention in recent years. IEEE 802.11p is the main radio access technology that supports communication for high-mobility terminals; however, due to its limited coverage, IEEE 802.11p is usually deployed coupled with cellular networks to achieve seamless mobility. In a heterogeneous cellular/802.11p network, vehicular communication is characterized by its short time span in association with a wireless local area network (WLAN). Moreover, for the medium access control (MAC) scheme used for WLAN, the network throughput dramatically decreases with increasing user quantity. In response to these compelling problems, we propose a reinforcement sensor (RFS) embedded vertical handoff control strategy to support mobility management. The RFS has online learning capability and can provide optimal handoff decisions in an adaptive fashion without prior knowledge. The algorithm integrates considerations including vehicular mobility, traffic load, handoff latency, and network status. Simulation results verify that the proposed algorithm can adaptively adjust the handoff strategy, allowing users to stay connected to the best network. Furthermore, the algorithm can ensure that roadside units (RSUs) are adequate, thereby guaranteeing a high-quality user experience. PMID:24193101
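
The flavor of such an online-learning handoff controller can be conveyed with a minimal epsilon-greedy value learner. This is a toy, not the paper's RFS scheme: the network names, load states, and reward model below are invented for illustration, and a one-step (bandit-style) update stands in for full reinforcement learning.

```python
import random

random.seed(1)
NETWORKS = ["cellular", "80211p"]    # actions: which network to attach to
STATES = ["low_load", "high_load"]   # simplified WLAN traffic-load observation

def reward(state, action):
    """Hypothetical link quality: 802.11p wins until its load grows."""
    if action == "80211p":
        return 1.0 if state == "low_load" else -0.5
    return 0.3                       # cellular: steady but modest quality

Q = {(s, a): 0.0 for s in STATES for a in NETWORKS}
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    s = random.choice(STATES)        # load observed at decision time
    if random.random() < epsilon:    # epsilon-greedy exploration
        a = random.choice(NETWORKS)
    else:
        a = max(NETWORKS, key=lambda x: Q[(s, x)])
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])  # one-step value update

policy = {s: max(NETWORKS, key=lambda x: Q[(s, x)]) for s in STATES}
print(policy)  # learned: 802.11p under low load, cellular under high load
```

The learner needs no prior model of the environment, which is the property the abstract highlights; the real controller additionally folds mobility, latency, and network status into its state.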

  2. Link Scheduling Algorithm with Interference Prediction for Multiple Mobile WBANs

    PubMed Central

    Le, Thien T. T.

    2017-01-01

As wireless body area networks (WBANs) become a key element in electronic healthcare (e-healthcare) systems, the coexistence of multiple mobile WBANs is becoming an issue. The network performance is negatively affected by the unpredictable movement of the human body. In such an environment, inter-WBAN interference can be caused by the overlapping transmission range of nearby WBANs. We propose a link scheduling algorithm with interference prediction (LSIP) for multiple mobile WBANs, which allows multiple mobile WBANs to transmit at the same time without causing inter-WBAN interference. In the LSIP, a superframe includes the contention access phase using carrier sense multiple access with collision avoidance (CSMA/CA) and the scheduled phase using time division multiple access (TDMA) for non-interfering nodes and interfering nodes, respectively. For interference prediction, we define a parameter called interference duration as the duration during which disparate WBANs interfere with each other. A Bayesian model is used to estimate and classify the interference using the signal-to-interference-plus-noise ratio (SINR) and the number of neighboring WBANs. The simulation results show that the proposed LSIP algorithm improves the packet delivery ratio and throughput significantly with acceptable delay. PMID:28956827

  3. Design and implementation of a random neural network routing engine.

    PubMed

    Kocak, T; Seeber, J; Terzioglu, H

    2003-01-01

The random neural network (RNN) is an analytically tractable spiked neural network model that has been implemented in software for a wide range of applications for over a decade. This paper presents a hardware implementation of the RNN model. Recently, the cognitive packet network (CPN) has been proposed as an alternative packet network architecture in which there is no routing table; instead, RNN-based reinforcement learning is used to route packets. In particular, we describe implementation details for the RNN-based routing engine of a CPN network processor chip: the smart packet processor (SPP). The SPP is a dual-port device that stores, modifies, and interprets the defining characteristics of multiple RNN models. In addition to hardware design improvements over the software implementation, such as the dual-access memory, output calculation step, and reduced output calculation module, this paper introduces a major modification to the reinforcement learning algorithm used in the original CPN specification such that the number of weight terms is reduced from 2n^2 to 2n. This not only yields significant memory savings, but also simplifies the calculations for the steady-state probabilities (neuron outputs in RNN). Simulations have been conducted to confirm proper functionality for the isolated SPP design as well as for multiple SPPs in a networked environment.

  4. Quality, language, subdiscipline and promotion were associated with article accesses on Physiotherapy Evidence Database (PEDro).

    PubMed

    Yamato, Tiê P; Arora, Mohit; Stevens, Matthew L; Elkins, Mark R; Moseley, Anne M

    2018-03-01

To quantify the relationship between the number of times articles are accessed on the Physiotherapy Evidence Database (PEDro) and the article characteristics. A secondary aim was to examine the relationship between accesses and the number of citations of articles. The study was conducted to derive prediction models for the number of accesses of articles indexed on PEDro from factors that may influence an article's accesses. All articles available on PEDro from August 2014 to January 2015 were included. We extracted variables relating to the algorithm used to present PEDro search results (research design, year of publication, PEDro score, source of systematic review (Cochrane or non-Cochrane)) plus language, subdiscipline of physiotherapy, and whether articles were promoted to PEDro users. Three predictive models were examined using multiple regression analysis. Citation and journal impact factor data were also downloaded. There were 29,313 articles indexed in this period. We identified seven factors that predicted the number of accesses. More accesses were noted for factors related to the algorithm used to present PEDro search results (synthesis research (i.e., guidelines and reviews), recent articles, Cochrane reviews, and higher PEDro score) plus publication in English and being promoted to PEDro users. The musculoskeletal, neurology, orthopaedics, sports, and paediatrics subdisciplines were associated with more accesses. We also found that there was no association between the number of accesses and citations. The number of times an article is accessed on PEDro is partly predicted by how condensed and how high in quality the evidence it contains is. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  5. Validation of the GCOM-W SCA and JAXA soil moisture algorithms

    USDA-ARS?s Scientific Manuscript database

    Satellite-based remote sensing of soil moisture has matured over the past decade as a result of the Global Climate Observing Mission-Water (GCOM-W) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  6. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access.
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates the use of memory bandwidth to check whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked-list layout increases memory usage, but lowers the number of data loads that are needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA Data Parallel Primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon like algorithm.
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e., determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
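
The remapping-vector idea can be illustrated in miniature, with plain Python standing in for CUDA and invented field names: lightweight (reaction, index) pairs are sorted instead of the history records themselves, which groups identical reaction types contiguously and drops finished histories from the work list.

```python
import random

random.seed(7)
REACTIONS = ["scatter", "fission", "capture"]

# Hypothetical neutron batch: each entry is one history's per-step state.
batch = [{"id": i, "reaction": random.choice(REACTIONS)} for i in range(12)]
batch[3]["reaction"] = batch[8]["reaction"] = "done"   # terminated histories

# Remapping vector: lightweight (reaction, index) pairs, not the records.
remap = [(n["reaction"], i) for i, n in enumerate(batch)]
remap.sort()   # stands in for WARP's high-efficiency parallel radix sort

# Completed histories sort together and are dropped from the work list,
# so no bandwidth is spent re-checking terminated neutrons.
live = [idx for rxn, idx in remap if rxn != "done"]

# Threads walk `live` in order: equal reaction types are now contiguous,
# so each per-reaction kernel runs over a dense index range.
grouped = [batch[i]["reaction"] for i in live]
print(grouped == sorted(grouped))  # True
```

Only the small index pairs move during the sort; the expensive per-history data stays in place and is reached through the remapped indices, which is the point of the design.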

  7. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
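
The method evaluated above can be sketched compactly: conjugate gradient with the diagonal (Jacobi) preconditioner, applied here to a toy symmetric positive-definite system standing in for mixed-model equations (an illustration, not the authors' animal-breeding implementation).

```python
import math

def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner M = diag(A)."""
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]   # apply M^-1 elementwise
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    z = [mi * ri for mi, ri in zip(minv, r)]   # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        z = [mi * ri for mi, ri in zip(minv, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD system; the exact solution is (1, 2, 3).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [6.0, 10.0, 17.0]
x = pcg(A, b)
print([round(v, 6) for v in x])  # [1.0, 2.0, 3.0]
```

In "iteration on data" implementations the matrix-vector product Ap is accumulated record by record from the data file rather than from a stored matrix, which is why the method needs no special equation ordering.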

  8. Dinucleotide controlled null models for comparative RNA gene prediction.

    PubMed

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is a need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance-based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm, giving a new variant of a thermodynamic, structure-based RNA gene-finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine-learning-based programs, or as a standalone RNA gene-finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered.
SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.
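
The core idea of a dinucleotide-aware null model can be illustrated for a single sequence. SISSIz itself simulates whole alignments under a phylogenetic substitution model; the first-order Markov sketch below is a much simpler stand-in that only matches the expected (not exact) dinucleotide content of one sequence.

```python
import random
from collections import defaultdict

random.seed(42)

def dinucleotide_model(seq):
    """First-order Markov transition counts estimated from a sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return counts

def simulate(seq, length):
    """Sample a control sequence with matching expected dinucleotide content."""
    counts = dinucleotide_model(seq)
    cur = random.choice(seq)
    out = [cur]
    for _ in range(length - 1):
        nxt = counts.get(cur)
        if not nxt:                      # dead end: restart from a random base
            cur = random.choice(seq)
        else:
            bases = list(nxt)
            weights = [nxt[b] for b in bases]
            cur = random.choices(bases, weights=weights)[0]
        out.append(cur)
    return "".join(out)

seq = "ACGTACGGTACCGTTACGATCGATCGGCTA" * 20
sim = simulate(seq, len(seq))
print(len(sim), set(sim) <= set(seq))  # 600 True
```

Folding such simulated controls and comparing scores against the real input is how a dinucleotide-preserving null distribution removes the bias described in the abstract.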

  9. Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements.

    PubMed

    Xiong, Chunbao; Lu, Huali; Zhu, Jinsong

    2017-02-23

Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between the GNSS multipath errors and measurement environment in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, which is a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for the real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) and autoregressive moving average model (ARMA), the modal frequency of the bridge structural system was extracted using an expanded ARMA_RDT modal identification method, which was compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. Identification results demonstrated that the proposed algorithm is applicable to analyze the dynamic displacement monitoring data of real bridge structures under ambient excitation and could identify the first five orders of the inherent frequencies of the structural system accurately. The identification error of the inherent frequency was smaller than 6%, indicating the high identification accuracy of the proposed algorithm.
Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor dynamic displacement and identify the modal parameters of bridge structures. The GNSS can monitor the working state of bridges effectively and accurately. Research results can provide references to evaluate the bearing capacity, safety performance, and durability of bridge structures during operation.

  10. Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements

    PubMed Central

    Xiong, Chunbao; Lu, Huali; Zhu, Jinsong

    2017-01-01

Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between the GNSS multipath errors and measurement environment in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, which is a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for the real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) and autoregressive moving average model (ARMA), the modal frequency of the bridge structural system was extracted using an expanded ARMA_RDT modal identification method, which was compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. Identification results demonstrated that the proposed algorithm is applicable to analyze the dynamic displacement monitoring data of real bridge structures under ambient excitation and could identify the first five orders of the inherent frequencies of the structural system accurately. The identification error of the inherent frequency was smaller than 6%, indicating the high identification accuracy of the proposed algorithm.
Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor dynamic displacement and identify the modal parameters of bridge structures. The GNSS can monitor the working state of bridges effectively and accurately. Research results can provide references to evaluate the bearing capacity, safety performance, and durability of bridge structures during operation. PMID:28241472

  11. An Architecture for Enabling Migration of Tactical Networks to Future Flexible Ad Hoc WBWF

    DTIC Science & Technology

    2010-09-01

Requirements: several multiple access schemes (TDMA, OFDMA, SC-OFDMA, FH-CDMA, DS-CDMA, hybrid access schemes, transitions between them). Dynamic... parameters and algorithms depend on the multiple access scheme. If DS-CDMA: handling of macro-diversity (linked to cooperative routing). TDMA and/or OFDMA... Transport format. Ciphering @ MAC/RLC level: SCM. Physical layer (PHY): signal processing (mod, FEC, etc.). CDMA: macro-diversity. CDMA, OFDMA

  12. Performance Analysis of Different Backoff Algorithms for WBAN-Based Emerging Sensor Networks

    PubMed Central

    Khan, Pervez; Ullah, Niamat; Ali, Farman; Ullah, Sana; Hong, Youn-Sik; Lee, Ki-Young; Kim, Hoon

    2017-01-01

The Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedure of the IEEE 802.15.6 Medium Access Control (MAC) protocol for the Wireless Body Area Network (WBAN) uses an Alternative Binary Exponential Backoff (ABEB) procedure. The backoff algorithm plays an important role in avoiding collisions in wireless networks. The Binary Exponential Backoff (BEB) algorithm used in different standards does not obtain optimum performance due to the enormous Contention Window (CW) gaps induced by packet collisions. Therefore, the IEEE 802.15.6 CSMA/CA introduced the ABEB procedure to avoid large CW gaps upon each collision. However, the ABEB algorithm may lead to a high collision rate (as the CW size is incremented on every alternate collision) and poor utilization of the channel due to the gap between subsequent CW sizes. To minimize this gap, we adopted the Prioritized Fibonacci Backoff (PFB) procedure. This procedure leads to a smooth and gradual increase in the CW size after each collision, which eventually decreases the waiting time, and the contending node can access the channel promptly with little delay; ABEB, by contrast, leads to irregular and fluctuating CW values, which eventually increase the collision rate and the waiting time before a re-transmission attempt. We analytically approach this problem by employing a Markov chain to design the PFB scheme for the CSMA/CA procedure of the IEEE 802.15.6 standard. The performance of the PFB algorithm is compared against the ABEB function of the WBAN CSMA/CA. The results show that the PFB procedure adopted for IEEE 802.15.6 CSMA/CA outperforms the ABEB procedure. PMID:28257112
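
The contrast between BEB's doubling and Fibonacci-style contention-window growth can be sketched directly. The parameters below are illustrative only; the IEEE 802.15.6 priority-dependent CW bounds are not reproduced here.

```python
def beb_cw(cw_min, cw_max, collisions):
    """Binary exponential backoff: the CW doubles after every collision."""
    cw = cw_min
    for _ in range(collisions):
        cw = min(cw * 2, cw_max)
    return cw

def fibonacci_cw(cw_min, cw_max, collisions):
    """Fibonacci-style growth: smoother CW increase after each collision."""
    a, b = cw_min, cw_min
    for _ in range(collisions):
        a, b = b, min(a + b, cw_max)
    return b

# CW trajectory over successive collisions (cw_min = 8, cw_max = 256):
print([beb_cw(8, 256, k) for k in range(6)])        # [8, 16, 32, 64, 128, 256]
print([fibonacci_cw(8, 256, k) for k in range(6)])  # [8, 16, 24, 40, 64, 104]
```

The Fibonacci trajectory grows by a factor approaching the golden ratio (~1.618) rather than 2, so the gaps between successive CW sizes stay smaller, which is the effect the PFB scheme exploits.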

  13. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    NASA Astrophysics Data System (ADS)

    Adame, J.; Warzel, S.

    2015-11-01

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.

  14. Security authentication using phase-encoded nanoparticle structures and polarized light.

    PubMed

    Carnicer, Artur; Hassanfiroozi, Amir; Latorre-Carmona, Pedro; Huang, Yi-Pai; Javidi, Bahram

    2015-01-15

Phase-encoded nanostructures such as quick response (QR) codes made of metallic nanoparticles are proposed for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light, and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from an examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes using polarimetric signatures.
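
A two-sample Kolmogorov-Smirnov statistic of the kind used to compare such signatures can be computed with the standard library alone. The "signature" samples below are synthetic and purely illustrative, not measured polarimetric data.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(xs, t):
        return bisect.bisect_right(xs, t) / len(xs)  # fraction of values <= t
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in a + b)

# Hypothetical speckle-intensity samples: a genuine tag versus a counterfeit
# whose intensity distribution is shifted.
genuine = [0.1 * i for i in range(100)]
fake = [0.1 * i + 3.0 for i in range(100)]
same = list(genuine)

print(ks_statistic(genuine, same) == 0.0)   # True: matching distributions
print(ks_statistic(genuine, fake) > 0.25)   # True: large gap, reject as fake
```

Thresholding this statistic (or feeding such distances to an SVM) turns the distributional comparison into an accept/reject authentication decision.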

  15. Exponential vanishing of the ground-state gap of the quantum random energy model via adiabatic quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adame, J.; Warzel, S., E-mail: warzel@ma.tum.de

    In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.

  16. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in both total computer time and accuracy.
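
    A minimal sketch of this kind of hybrid search, combining a genetic algorithm with penalty-based constraint control (standing in for the paper's discriminant function), might look as follows. The merit function, the interval constraint, and all population parameters are hypothetical test choices, not the paper's setup:

```python
import random

rng = random.Random(42)

def merit(x):
    # Non-monotonic merit function to maximize; peak at x = 3 (hypothetical).
    return -(x - 3.0) ** 2 + 9.0

def feasible(x):
    # Constraint region, here a simple interval; the paper's discriminant
    # function plays this role for general constraints.
    return 0.0 <= x <= 4.0

def genetic_search(pop_size=30, generations=60, sigma=0.3):
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Infeasible candidates are heavily penalized (constraint control).
        scored = sorted(pop, key=lambda x: merit(x) if feasible(x) else -1e9,
                        reverse=True)
        parents = scored[:pop_size // 2]      # elitist selection
        # Crossover (blend of two parents) plus Gaussian mutation.
        pop = parents + [
            (rng.choice(parents) + rng.choice(parents)) / 2 + rng.gauss(0, sigma)
            for _ in range(pop_size - len(parents))
        ]
    return max((x for x in pop if feasible(x)), key=merit)

best = genetic_search()
print(round(best, 2))
```

Because the surviving parents carry over unchanged, the best feasible candidate can only improve from one generation to the next, which is what makes the quick near-optimum behavior described above plausible.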

  17. Multisource passive acoustic tracking: an application of random finite set data fusion

    NASA Astrophysics Data System (ADS)

    Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung

    2010-04-01

    Multisource passive acoustic tracking is useful in animal bio-behavioral studies, replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, such data must be combined into meaningful position estimates. The random finite set framework provides a probabilistic model suitable for analysis and for synthesizing an optimal estimation algorithm. The proposed algorithm was verified using a simulation and a controlled test experiment.
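
    The random-finite-set fusion itself is involved, but the underlying geometry, combining DOA measurements from several nodes into one location estimate, can be sketched for a single source as a least-squares bearing intersection. The node positions, bearings, and source location below are made up for illustration and are not the paper's data:

```python
import math

def triangulate(nodes, bearings):
    """Least-squares source location from DOA bearings (radians) measured at
    known node positions: minimizes the summed squared perpendicular distance
    from the estimate to each bearing line."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), th in zip(nodes, bearings):
        ux, uy = math.cos(th), math.sin(th)
        # Projector onto the complement of the bearing direction: I - u u^T
        m11, m12, m22 = 1 - ux * ux, -ux * uy, 1 - uy * uy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12          # nonzero for non-parallel bearings
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three hypothetical sensor nodes observing a source at (3, 4).
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
source = (3.0, 4.0)
bearings = [math.atan2(source[1] - py, source[0] - px) for px, py in nodes]
x, y = triangulate(nodes, bearings)
print(round(x, 3), round(y, 3))  # 3.0 4.0
```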

  18. Standard random number generation for MBASIC

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the recurrence a(m+532) = a(m+37) + a(m) (mod 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
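
    The quoted recurrence can be sketched directly: each new bit is the modulo-2 sum (XOR) of the bits 532 and 495 positions back in the stream, and nonoverlapping runs of 28 adjacent bits are packed into words. The seed initialization below is an arbitrary assumption; any nonzero 532-bit state works:

```python
def tausworthe_bits(seed_bits):
    """Bit stream from the linear recurrence a(m+532) = a(m+37) + a(m) mod 2:
    each new bit is the XOR of the bits 532 and 495 places back."""
    assert len(seed_bits) == 532 and any(seed_bits)
    buf = list(seed_bits)
    while True:
        bit = buf[-532] ^ buf[-495]   # a(m) XOR a(m+37)
        buf.append(bit)
        buf.pop(0)                    # keep a sliding window of 532 bits
        yield bit

def random_28bit_words(seed_bits, count):
    """Pack nonoverlapping runs of 28 adjacent bits into integers."""
    gen = tausworthe_bits(seed_bits)
    words = []
    for _ in range(count):
        w = 0
        for _ in range(28):
            w = (w << 1) | next(gen)
        words.append(w)
    return words

seed = [(i * 2654435761 >> 7) & 1 for i in range(532)]  # arbitrary nonzero seed
words = random_28bit_words(seed, 4)
print(all(0 <= w < 2 ** 28 for w in words))  # True
```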

  19. Using Chaotic System in Encryption

    NASA Astrophysics Data System (ADS)

    Findik, Oğuz; Kahramanli, Şirzat

    In this paper, chaotic systems and the RSA encryption algorithm are combined to develop an encryption algorithm that meets modern standards. E. Lorenz's weather-forecasting equations, which are used to simulate non-linear systems, are utilized to create a chaotic map; these equations can be used to generate random numbers. To meet up-to-date standards and support both online and offline use, a new encryption technique that combines chaotic systems and the RSA encryption algorithm has been developed. The combination of the RSA algorithm and chaotic systems forms the encryption system.
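
    The chaotic-keystream side of such a scheme can be sketched by integrating the Lorenz equations with a simple Euler step and harvesting low-order digits of the trajectory. The step size, transient length, sampling interval, and byte-extraction rule are illustrative assumptions (and not cryptographically vetted); the paper's RSA combination is omitted here:

```python
def lorenz_bytes(n, x=0.1, y=0.0, z=0.0, dt=0.01,
                 sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Derive n pseudorandom bytes from a Lorenz trajectory by keeping
    low-order digits of the x coordinate at sampled time steps."""
    out = []
    # Run past an initial transient so the trajectory reaches the attractor,
    # then sample every 10th step.
    for step in range(1000 + 10 * n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if step >= 1000 and (step - 1000) % 10 == 0:
            out.append(int(abs(x) * 1e6) % 256)
    return out

stream = lorenz_bytes(16)
print(len(stream), all(0 <= b < 256 for b in stream))  # 16 True
```

A stream like this could then be XORed with plaintext or used as padding alongside RSA, which is roughly the kind of combination the abstract describes.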

  20. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    NASA Astrophysics Data System (ADS)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with an adaptive sleep cycle is presented. This DBA algorithm can allocate additional bandwidth to the end user within a single sleep cycle, whose duration changes depending on the current buffer occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle to the network load so as to serve the end user without violating strict QoS requirements under all network operating conditions.
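
    The core idea, shortening the sleep cycle as buffers fill, can be sketched with a simple interpolation. The linear policy, its millisecond bounds, and the buffer capacity below are assumptions for illustration, not the paper's actual algorithm:

```python
def sleep_cycle_ms(buffer_occupancy, min_ms=1.0, max_ms=50.0, capacity=1000):
    """Adaptive sleep interval: a nearly empty buffer allows a long sleep
    (energy saving), a nearly full buffer forces a short sleep (QoS).
    Linear interpolation between the bounds is an assumed policy."""
    load = min(buffer_occupancy / capacity, 1.0)
    return max_ms - (max_ms - min_ms) * load

print(sleep_cycle_ms(0))     # 50.0 -- idle buffer, longest sleep
print(sleep_cycle_ms(1000))  # 1.0  -- full buffer, minimal sleep
print(sleep_cycle_ms(500))   # 25.5 -- intermediate load
```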
