31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...
31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...
Algorithm for Detecting Significant Locations from Raw GPS Data
NASA Astrophysics Data System (ADS)
Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi
We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
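The density-based selection step can be sketched with a plain grid hash standing in for the paper's LSH-based random histograms. The cell size, density threshold, and toy data below are illustrative assumptions, not the authors' parameters:

```python
import random
from collections import Counter

def dense_points(points, cell=0.001, min_frac=0.05):
    """Keep GPS points that fall in grid cells holding at least min_frac
    of all points -- a crude density filter in the spirit of sampling
    around significant locations (a stand-in for LSH random histograms)."""
    def bucket(p):
        return (int(p[0] / cell), int(p[1] / cell))
    counts = Counter(bucket(p) for p in points)
    thresh = min_frac * len(points)
    return [p for p in points if counts[bucket(p)] >= thresh]

# toy data: a tight cluster of "stay" points plus scattered noise
random.seed(0)
stay = [(35.0 + random.gauss(0, 0.0002), 139.0 + random.gauss(0, 0.0002))
        for _ in range(200)]
noise = [(35.0 + random.uniform(-0.05, 0.05),
          139.0 + random.uniform(-0.05, 0.05)) for _ in range(50)]
kept = dense_points(stay + noise, cell=0.001, min_frac=0.1)
```

With these settings the filter retains essentially all of the "stay" cluster and discards the scattered noise, which is the behaviour the fixed-threshold methods struggle to achieve robustly.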
Pyrolysis of wastewater biosolids significantly reduces estrogenicity.
Hoffman, T C; Zitomer, D H; McNamara, P J
2016-11-01
Most wastewater treatment processes are not specifically designed to remove micropollutants. Many micropollutants are hydrophobic, so they remain in the biosolids and are discharged to the environment through land application of biosolids. Micropollutants encompass a broad range of organic chemicals, including natural and synthetic estrogenic compounds that reside in the environment, a.k.a. environmental estrogens. Public concern over land application of biosolids, stemming from the occurrence of micropollutants, diminishes the value of biosolids, which are an important by-product for wastewater treatment plants. This research evaluated pyrolysis, the partial decomposition of organic material in an oxygen-deprived system at high temperatures, as a biosolids treatment process that could remove estrogenic compounds from solids while producing a less hormonally active biochar for soil amendment. The estrogenicity of pyrolyzed biosolids, measured in estradiol equivalents (EEQ) by the yeast estrogen screen (YES) assay, was compared to that of primary and anaerobically digested biosolids. The estrogenic responses from primary solids and anaerobically digested solids were not statistically significantly different, but pyrolysis of anaerobically digested solids resulted in a significant reduction in EEQ; increasing the pyrolysis temperature from 100°C to 500°C increased the removal of EEQ, with greater than 95% removal occurring at or above 400°C. This research demonstrates that biosolids treatment with pyrolysis would substantially decrease (removal >95%) the estrogens associated with this biosolids product. Thus, pyrolysis of biosolids can be used to produce a valuable soil amendment product, biochar, that minimizes the discharge of estrogens to the environment. PMID:27344259
Discovering simple DNA sequences by the algorithmic significance method.
Milosavljević, A; Jurka, J
1993-08-01
A new method, 'algorithmic significance', is proposed as a tool for discovery of patterns in DNA sequences. The main idea is that patterns can be discovered by finding ways to encode the observed data concisely. In this sense, the method can be viewed as a formal version of Occam's Razor. In this paper the method is applied to discover significantly simple DNA sequences. We define DNA sequences to be simple if they contain repeated occurrences of certain 'words' and thus can be encoded in a small number of bits. This definition includes minisatellites and microsatellites. A standard dynamic programming algorithm for data compression is applied to compute the minimal encoding lengths of sequences in linear time. An electronic mail server for identification of simple sequences based on the proposed method has been installed at the Internet address pythia@anl.gov. PMID:8402207
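The core idea can be sketched as follows: if a sequence can be encoded in d fewer bits than the 2-bits-per-base null model, then under the null the probability of such a saving is at most 2^-d. The greedy word parse below is a simplified stand-in for the paper's dynamic-programming minimal encoding, and the word list and sequences are illustrative:

```python
import math

def encode_bits(seq, words):
    """Greedy left-to-right parse of seq over an alphabet of the four
    bases plus the candidate repeat words; each emitted token costs
    log2(alphabet size) bits (a stand-in for the minimal encoding)."""
    by_len = sorted(words, key=len, reverse=True)
    cost_per_token = math.log2(4 + len(words))
    i, tokens = 0, 0
    while i < len(seq):
        for w in by_len:
            if seq.startswith(w, i):
                i += len(w)
                break
        else:
            i += 1  # single base token
        tokens += 1
    return tokens * cost_per_token

def significance_bits(seq, words):
    """Bits saved relative to the 2-bits-per-base null model; by the
    algorithmic significance argument, P(saving >= d | null) <= 2**-d."""
    return 2 * len(seq) - encode_bits(seq, words)

simple = "CA" * 10                    # microsatellite-like repeat
random_like = "GATTACAGTCCGTAGACTGT"  # no strong repeat structure
d_simple = significance_bits(simple, ["CA"])
d_rand = significance_bits(random_like, ["CA"])
```

The repeat-rich sequence compresses well (large positive d, hence significantly simple), while the unstructured sequence does not.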
Algorithms for Detecting Significantly Mutated Pathways in Cancer
NASA Astrophysics Data System (ADS)
Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.
Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
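The diffusion step can be illustrated with a random-walk-with-restart kernel on a toy network, a common choice for this kind of influence measure (the network, restart probability, and iteration count here are invented for illustration, not the paper's):

```python
def rwr_influence(adj, source, restart=0.3, iters=100):
    """Random walk with restart from a mutated `source` gene on an
    unweighted graph given as an adjacency list -- a stand-in for the
    diffusion process defining each gene's neighborhood of influence."""
    n = len(adj)
    p = [0.0] * n
    p[source] = 1.0
    for _ in range(iters):
        q = [0.0] * n
        for u in range(n):
            share = (1 - restart) * p[u] / max(len(adj[u]), 1)
            for v in adj[u]:
                q[v] += share
        q[source] += restart
        s = sum(q)               # renormalise (guards against nodes
        p = [x / s for x in q]   # with no neighbours leaking mass)
    return p

# toy network: genes 0-1-2 form one pathway; genes 3-4 are disconnected
adj = [[1], [0, 2], [1], [4], [3]]
p = rwr_influence(adj, source=0)
```

Influence decays with network distance from the mutated gene and is zero outside its connected component, which is exactly the locality property the framework exploits.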
Bacteriophage cocktail significantly reduces Escherichia coli O157
Carter, Chandi D.; Parks, Adam; Abuladze, Tamar; Li, Manrong; Woolston, Joelle; Magnone, Joshua; Senecal, Andre; Kropinski, Andrew M.; Sulakvelidze, Alexander
2012-01-01
Foods contaminated with Escherichia coli O157:H7 cause more than 63,000 foodborne illnesses in the United States every year, resulting in a significant economic impact on medical costs and product liabilities. Efforts to reduce contamination with E. coli O157:H7 have largely focused on washing, application of various antibacterial chemicals, and gamma-irradiation, each of which has practical and environmental drawbacks. A relatively recent, environmentally friendly approach proposed for eliminating or significantly reducing E. coli O157:H7 contamination of foods is the use of lytic bacteriophages as biocontrol agents. We found that EcoShield™, a commercially available preparation composed of three lytic bacteriophages specific for E. coli O157:H7, significantly (p < 0.05) reduced the levels of the bacterium in experimentally contaminated beef by ≥ 94% and in lettuce by 87% after a five-minute contact time. The reduced levels of bacteria were maintained for at least one week at refrigerated temperatures. However, the one-time application of EcoShield™ did not protect the foods from recontamination with E. coli O157:H7. Our results demonstrate that EcoShield™ is effective in significantly reducing contamination of beef and lettuce with E. coli O157:H7, but does not protect against potential later contamination due to, for example, unsanitary handling of the foods post processing. PMID:23275869
Diagnostic Significance of Reduced IgA in Children
Nurkic, Jasmina; Numanovic, Fatima; Arnautalic, Lejla; Tihic, Nijaz; Halilovic, Dzenan; Jahic, Mahira
2015-01-01
Introduction: Reduced immunoglobulin A (IgA) is a frequent finding in children in daily medical practice. It is important to interpret such findings correctly and to pursue adequate further diagnostic evaluation of the patient in order to determine their significance. In children younger than 4 years, transient impairment of immunoglobulins should always be considered; maturation of the child and his immune system can lead to an improvement in the clinical picture. In older children, decreased IgA may point to serious illnesses that need to be recognized and confirmed through the appropriate diagnostic methods. At the University Clinical Center Tuzla, children with a suspected deficient immune response due to reduced IgA values undergo further diagnostic evaluation at the Polyclinic for Laboratory Medicine (Department of Immunology and Department of Microbiology), as well as at the Clinic of Radiology. Material and methods: Our study followed 91 patients during 2013 through their medical charts and evaluated the diagnostic and screening tests used. Conclusion: The significance of this paper is to draw attention to the importance of a systematic diagnostic approach to the IgA-deficient pediatric patient, of knowledge of the individual diagnostic methods, and of proper interpretation of their results. PMID:26543309
A genetic algorithm to reduce stream channel cross section data
Berenbrock, C.
2006-01-01
A genetic algorithm (GA) was used to reduce cross section data for a hypothetical example consisting of 41 data points and for 10 cross sections on the Kootenai River. The number of data points for the Kootenai River cross sections ranged from about 500 to more than 2,500. The GA was applied to reduce the number of data points to a manageable dataset because most models and other software require fewer than 100 data points for management, manipulation, and analysis. Results indicated that the program successfully reduced the data. Fitness values from the genetic algorithm were lower (better) than those in a previous study that used standard procedures of reducing the cross section data. On average, fitnesses were 29 percent lower, and several were about 50 percent lower. Results also showed that cross sections produced by the genetic algorithm were representative of the original section and that near-optimal results could be obtained in a single run, even for large problems. Other data can also be reduced by a method similar to that used for cross section data.
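A toy version of this kind of point-reduction GA is sketched below: individuals are index subsets with the endpoints always kept, and fitness is the mean vertical error of the reduced polyline at the dropped points. The fitness measure, genetic operators, and parabolic "cross section" are illustrative guesses, not the study's:

```python
import random

def interp(poly, x):
    """Piecewise-linear interpolation on a polyline of (x, z) pairs."""
    for (x0, z0), (x1, z1) in zip(poly, poly[1:]):
        if x0 <= x <= x1:
            t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
            return z0 + t * (z1 - z0)
    return poly[-1][1]

def fitness(section, keep_idx):
    """Mean vertical error at the dropped points (lower is better)."""
    sub = [section[i] for i in keep_idx]
    kept = set(keep_idx)
    dropped = [p for i, p in enumerate(section) if i not in kept]
    if not dropped:
        return 0.0
    return sum(abs(interp(sub, x) - z) for x, z in dropped) / len(dropped)

def ga_reduce(section, n_keep, pop_size=30, gens=60, seed=1):
    """Toy GA over sorted index subsets; endpoints are always retained."""
    rng = random.Random(seed)
    n = len(section)
    def rand_ind():
        return sorted([0, n - 1] + rng.sample(range(1, n - 1), n_keep - 2))
    def fit(ind):
        return fitness(section, ind)
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        nxt = pop[:5]                           # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:15], 2)      # parents from better half
            child = set(a) | set(b)             # union "crossover"
            while len(child) > n_keep:          # trim back to n_keep
                child.discard(rng.choice([i for i in child if 0 < i < n - 1]))
            if rng.random() < 0.3:              # point mutation
                inner = [i for i in child if 0 < i < n - 1]
                out = [i for i in range(1, n - 1) if i not in child]
                child.discard(rng.choice(inner))
                child.add(rng.choice(out))
            nxt.append(sorted(child))
        pop = nxt
    best = min(pop, key=fit)
    return best, fit(best)

# hypothetical cross section: a parabolic channel sampled at 41 points
section = [(float(x), -5.0 * (1 - ((x - 20) / 20.0) ** 2)) for x in range(41)]
best, err = ga_reduce(section, n_keep=9)
```

The GA concentrates the retained points where the section curves, which is why it can beat uniform decimation on real channel geometry.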
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct a ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with errors bounded by a user-defined tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
The significance of sensory appeal for reduced meat consumption.
Tucker, Corrina A
2014-10-01
Reducing meat (over-)consumption as a way to help address environmental deterioration will require a range of strategies, and any such strategies will benefit from understanding how individuals might respond to various meat consumption practices. To investigate how New Zealanders perceive such a range of practices, in this instance in vitro meat, eating nose-to-tail, entomophagy and reducing meat consumption, focus groups involving a total of 69 participants were held around the country. While it is the damaging environmental implications of intensive farming practices and the projected continuation of increasing global consumer demand for meat products that have propelled this research, when participants were asked to consider variations on the conventional meat-centric diet common to many New Zealanders, it was the sensory appeal of the alternatives considered that was deemed most problematic. While an ecological rationale for considering these 'meat' alternatives was recognised and considered important by most, transforming this value into action looks far less promising given the recurrent sensory objections to consuming different protein-based foods or to reducing meat consumption. This article considers the responses of focus group participants in relation to each of the dietary practices outlined, and offers suggestions on ways to encourage a more environmentally viable diet. PMID:24953197
Tadalafil significantly reduces ischemia reperfusion injury in skin island flaps
Kayiran, Oguz; Cuzdan, Suat S.; Uysal, Afsin; Kocer, Ugur
2013-01-01
Introduction: Numerous pharmacological agents have been used to enhance the viability of flaps. Ischemia reperfusion (I/R) injury is an unwanted, sometimes devastating complication in reconstructive microsurgery. Tadalafil, a specific inhibitor of phosphodiesterase type 5, is mainly used for erectile dysfunction and acts on vascular smooth muscle, platelets and leukocytes. Herein, the protective and therapeutic effect of tadalafil on I/R injury in a rat skin flap model is evaluated. Materials and Methods: Sixty epigastric island flaps were used to create the I/R model in 60 Wistar rats (non-ischemic group, ischemic group, medication group). Biochemical markers including total nitrite, malondialdehyde (MDA) and myeloperoxidase (MPO) were analysed. Necrosis rates were calculated and histopathologic evaluation was carried out. Results: MDA, MPO and total nitrite values were elevated in the ischemic group, whereas there was an evident drop in the medication group. Histological results revealed that early inflammatory findings (oedema, neutrophil infiltration, necrosis rate) were lower with tadalafil administration, and the differences were statistically significant (P < 0.05). Conclusions: We conclude that tadalafil has beneficial effects on epigastric island flaps against I/R injury. PMID:23960309
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster system that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach not only preserves the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is thus of practical significance and realizable value.
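The map/reduce decomposition of a flat-kernel mean shift iteration can be sketched in plain Python: the map phase buckets points by grid cell, and the reduce phase shifts each point to the mean of its neighbourhood. This is a single-machine sketch of the idea; the Hadoop cluster implementation the article describes is not reproduced here, and the toy points are illustrative:

```python
from collections import defaultdict

def mean_shift_mapreduce(points, bandwidth=1.0, iters=5):
    """Flat-kernel mean shift phrased as map/reduce rounds: map emits
    each point keyed by its grid cell; reduce averages over the 3x3
    cell neighbourhood and shifts every point there."""
    def cell(p):
        return (int(p[0] // bandwidth), int(p[1] // bandwidth))
    for _ in range(iters):
        # "map" phase: bucket points by grid cell
        buckets = defaultdict(list)
        for p in points:
            buckets[cell(p)].append(p)
        # "reduce" phase: each point moves to the mean of nearby points
        new_pts = []
        for p in points:
            cx, cy = cell(p)
            near = [q for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for q in buckets.get((cx + dx, cy + dy), [])
                    if (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 <= bandwidth ** 2]
            new_pts.append((sum(q[0] for q in near) / len(near),
                            sum(q[1] for q in near) / len(near)))
        points = new_pts
    return points

pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9)]
shifted = mean_shift_mapreduce(pts, bandwidth=1.0)
```

Points within one bandwidth collapse onto their cluster mode, while distant clusters never interact, which is what makes the per-cell bucketing a natural fit for MapReduce keys.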
El Hag, Imad A.; Elsiddig, Kamal E.; Elsafi, Mohamed E.M.O; Elfaki, Mona E.E.; Musa, Ahmed M.; Musa, Brima Y.; Elhassan, Ahmed M.
2013-01-01
Abstract Background Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods Odds ratios (OR) of the chosen predictor variables were calculated using logistic regression. The numerical score values of the predictor variables were weighted according to their respective OR. The performance of the score was evaluated by the ROC (Receiver Operating Characteristic) curve. Results Four predictor variables (Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses) correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90%, respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76%, respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) for score A and 0.856 (95% CI, 0.787-0.925) for score B, indicating good performance. Conclusion The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.
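The triage logic of such a score can be sketched as below. Note that the point weights here are hypothetical placeholders chosen only to reproduce the stated 0-12 range and cut-off of 6 for score A; the published score derives its actual weights from the fitted odds ratios:

```python
def score_a(mantoux_mm, esr_mm_hr, nocturnal_fever, discharging_sinus):
    """Illustrative TB-lymphadenitis score in the spirit of score A
    (range 0-12, cut-off 6). All point weights below are HYPOTHETICAL;
    the paper weights each predictor by its odds ratio."""
    pts = 0
    pts += 4 if mantoux_mm >= 10 else 0     # hypothetical cut-off/weight
    pts += 3 if esr_mm_hr >= 40 else 0      # hypothetical cut-off/weight
    pts += 3 if nocturnal_fever else 0
    pts += 2 if discharging_sinus else 0
    return pts

def triage(pts, cutoff=6):
    return "suspect TB lymphadenitis" if pts >= cutoff else "other workup"
```

For example, a child with a 15 mm Mantoux reading, ESR of 60 mm/hr, nocturnal fever and a discharging sinus scores the maximum of 12 and is triaged for TB work-up.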
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacón, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years, the scientific community has been concerned with increasing the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
Reducing the Time Requirement of k-Means Algorithm
Osamor, Victor Chukwudi; Adebiyi, Ezekiel Femi; Oyelade, Jelilli Olarenwaju; Doumbia, Seydou
2012-01-01
Traditional k-means and most k-means variants are still computationally expensive for large datasets, such as microarray data, which have large dimension size d. In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k. The problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this work, we develop a novel k-means algorithm, which is simple but more efficient than the traditional k-means and the recent enhanced k-means. Our new algorithm is based on the recently established relationship between principal component analysis and k-means clustering. We provide the correctness proof for this algorithm. Results obtained from testing the algorithm on three biological datasets and six non-biological datasets (three of these are real, while the other three are simulated) also indicate that our algorithm is empirically faster than other known k-means algorithms. We assessed the quality of our algorithm's clusters against the clusters of a known structure using the Hubert-Arabie Adjusted Rand index (ARI_HA). We found that when k is close to d, the quality is good (ARI_HA > 0.8), and when k is not close to d, the quality of our new k-means algorithm is excellent (ARI_HA > 0.9). In this paper, the emphasis is on the reduction of the time requirement of the k-means algorithm and its application to microarray data, motivated by the desire to create a tool for clustering and malaria research. However, the new clustering algorithm can be used for other clustering needs as long as an appropriate measure of distance between the centroids and the members is used. This has been demonstrated in this work on six non-biological datasets. PMID:23239974
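One way to illustrate the PCA/k-means connection the authors build on is to project the data onto the leading principal component and run Lloyd's iterations in that reduced space. This is a generic sketch of the idea, not the paper's algorithm, and the toy data are illustrative:

```python
import random

def top_pc(data, iters=100):
    """Leading principal component via power iteration (pure Python)."""
    d = len(data[0])
    mean = [sum(col) / len(data) for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d                 # w = (X^T X) v
        for row in centred:
            proj = sum(x * vi for x, vi in zip(row, v))
            for j in range(d):
                w[j] += proj * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def kmeans_1d(values, k, iters=50):
    """Lloyd's k-means on scalars (the PCA-projected data)."""
    centers = sorted(random.Random(0).sample(values, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in values:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# two well-separated 3-D clusters, clustered along the top PC
data = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
        [10.0, 10.0, 10.0], [11.0, 10.0, 10.0], [10.0, 11.0, 10.0]]
mean, v = top_pc(data)
proj = [sum((x - m) * vi for x, m, vi in zip(row, mean, v)) for row in data]
centers = kmeans_1d(proj, k=2)
```

Clustering in the one-dimensional projected space recovers the same two groups as full-dimensional k-means here, while each distance computation drops from d operations to one.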
NASA Astrophysics Data System (ADS)
Arevalillo-Herráez, Miguel; Gdeisat, Munther; Lilley, Francis; Burton, David R.
2016-07-01
In this paper, we present a novel algorithm to reduce the number of phase wraps in two dimensional signals in fringe projection profilometry. The technique operates in the spatial domain, and achieves a significant computational saving with regard to existing methods based on frequency shifting. The method works by estimating the modes of the first differences distribution in each axial direction. These are used to generate a tilted plane, which is subtracted from the entire phase map. Finally, the result is re-wrapped to obtain a phase map with fewer wraps. The method may be able to completely eliminate the phase wraps in many cases, or can achieve a significant phase wrap reduction that helps the subsequent unwrapping of the signal. The algorithm has been exhaustively tested across a large number of real and simulated signals, showing similar results compared to approaches operating in the frequency domain, but at significantly lower running times.
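A compact sketch of the procedure on synthetic data is given below: estimate the dominant tilt as the mode of the wrapped first differences in each axial direction, subtract the corresponding plane, and re-wrap. The histogram binning and the toy tilted phase map are illustrative; the paper's implementation details differ:

```python
import math
from collections import Counter

def wrap(a):
    """Wrap a phase value into [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def count_wraps(img):
    """Count adjacent-pixel jumps larger than pi (phase wraps)."""
    n = 0
    for r in img:
        n += sum(abs(b - a) > math.pi for a, b in zip(r, r[1:]))
    for r1, r2 in zip(img, img[1:]):
        n += sum(abs(b - a) > math.pi for a, b in zip(r1, r2))
    return n

def reduce_wraps(img, bins=64):
    """Subtract the tilted plane given by the modes of the wrapped
    first-difference distributions, then re-wrap the result."""
    def mode_diff(pairs):
        diffs = [wrap(b - a) for a, b in pairs]
        hist = Counter(int((d + math.pi) / (2 * math.pi) * bins) for d in diffs)
        top = hist.most_common(1)[0][0]
        return (top + 0.5) / bins * 2 * math.pi - math.pi  # bin centre
    dx = mode_diff([(r[j], r[j + 1]) for r in img for j in range(len(r) - 1)])
    dy = mode_diff([(img[i][j], img[i + 1][j])
                    for i in range(len(img) - 1) for j in range(len(img[0]))])
    return [[wrap(img[i][j] - (dx * j + dy * i)) for j in range(len(img[0]))]
            for i in range(len(img))]

# synthetic wrapped phase: a steep tilt produces many wraps
H, W = 32, 32
true_phase = [[0.9 * j + 0.4 * i for j in range(W)] for i in range(H)]
wrapped = [[wrap(p) for p in row] for row in true_phase]
fewer = reduce_wraps(wrapped)
```

On this purely tilted example the plane subtraction removes every wrap; on real fringe data it would typically leave a residue that is far easier to unwrap.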
ALGORITHM FOR THE EVALUATION OF REDUCED WIGNER MATRICES
Prezeau, G.; Reinecke, M.
2010-10-15
Algorithms for the fast and exact computation of Wigner matrices are described, and their application to a fast and massively parallel 4π convolution code between a beam and a sky is also presented.
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaptation, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaptation causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful in lowering overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.
Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S
2013-01-01
The use of Geographic Information Systems has increased considerably since the eighties and nineties. Shortest path search is among their most demanding applications. Several studies of shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this way is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms. PMID:24010024
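For reference, the baseline the paper modifies is classic Dijkstra; a minimal sketch over an adjacency-dict graph (the reduced-graph construction itself is not reproduced here):

```python
import heapq

def dijkstra(graph, source):
    # Classic Dijkstra over an adjacency dict
    # {node: [(neighbor, weight), ...]}. The paper's contribution is
    # to run this on a *reduced* graph while provably keeping the
    # same path cost as on the original graph.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O((V+E) log V), which is exactly the cost the reduced-graph approach amortizes by shrinking V and E before the search.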
A comparison of updating algorithms for large N reduced models
NASA Astrophysics Data System (ADS)
García Pérez, Margarita; González-Arroyo, Antonio; Keegan, Liam; Okawa, Masanori; Ramos, Alberto
2015-06-01
We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
Banish, R Michael; Albert, Lyle B J; Pourpoint, Timothee L; Alexander, J Iwan D; Sekerka, Robert F
2002-10-01
Mass and thermal diffusivity measurements conducted on Earth are prone to contamination by uncontrollable convective contributions to the overall transport. Previous studies of mass and thermal diffusivities conducted on spacecraft have demonstrated the gain in precision, and lower absolute values, resulting from the reduced convective transport possible in a low-gravity environment. We have developed and extensively tested real-time techniques for diffusivity measurements, where several measurements may be obtained on a single sample. This is particularly advantageous for low-gravity research, where there is limited experiment time. The mass diffusivity methodology uses a cylindrical sample geometry. A radiotracer, initially located at one end of the host, is used as the diffusant. The sample is positioned in a concentric isothermal radiation shield with collimation bores located at defined positions along its axis. The intensity of the radiation emitted through the collimators is measured versus time with solid-state detectors and associated energy discrimination electronics. For the mathematical algorithm that we use, only a single pair of collimation bores and detectors is necessary for single-temperature measurements. However, by employing a second, offset pair of collimation holes and radiation detectors, diffusivities can be determined at several temperatures per sample. For thermal diffusivity measurements a disk geometry is used. A heat pulse is applied in the center of the sample and the temperature response of the sample is measured at several locations. Thus, several values of the diffusivity are measured versus time. The exact analytic solution to a heat pulse in the disk geometry leads to a unique heated area and measurement locations. Knowledge of the starting time and duration of the heating pulse is not used in the data evaluation. Thus, this methodology represents an experimentally simpler and more robust scheme. PMID:12446321
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features
Amudha, P.; Karthik, S.; Sivakumari, S.
2015-01-01
Intrusion detection has become a central part of network security due to the huge number of attacks which affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Dobeli, K L; Lewis, S J; Meikle, S R; Thiele, D L; Brennan, P C
2013-01-01
Objective: To compare the dose-optimisation potential of a smoothing filtered backprojection (FBP) and a hybrid FBP/iterative algorithm to that of a standard FBP algorithm at three slice thicknesses for hepatic lesion detection with multidetector CT. Methods: A liver phantom containing a 9.5-mm opacity with a density of 10 HU below background was scanned at 125, 100, 75, 50 and 25 mAs. Data were reconstructed with standard FBP (B), smoothing FBP (A) and hybrid FBP/iterative (iDose4) algorithms at 5-, 3- and 1-mm collimation. 10 observers marked opacities using a four-point confidence scale. Jackknife alternative free-response receiver operating characteristic figure of merit (FOM), sensitivity and noise were calculated. Results: Compared with the 125-mAs/5-mm setting for each algorithm, significant reductions in FOM (p<0.05) and sensitivity (p<0.05) were found for all three algorithms for all exposures at 1-mm thickness and for all slice thicknesses at 25 mAs, with the exception of the 25-mAs/5-mm setting for the B algorithm. Sensitivity was also significantly reduced for all exposures at 3-mm thickness for the A algorithm (p<0.05). Noise for the A and iDose4 algorithms was approximately 13% and 21% lower, respectively, than for the B algorithm. Conclusion: Superior performance for hepatic lesion detection was not shown with either a smoothing FBP algorithm or a hybrid FBP/iterative algorithm compared with a standard FBP technique, even though noise reduction with thinner slices was demonstrated with the alternative approaches. Advances in knowledge: Reductions in image noise with non-standard CT algorithms do not necessarily translate to an improvement in low-contrast object detection. PMID:23392194
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
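The PSO component can be illustrated with a plain global-best particle swarm minimizer. In the paper it optimizes the BP network's initial weights and thresholds under MapReduce; this standalone sketch (standard inertia/cognitive/social parameters, our choices, not the paper's settings) just shows the velocity and position update:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_iter=100, seed=0):
    # Global-best PSO: each particle tracks its personal best, the
    # swarm tracks a global best, and velocities mix inertia with
    # attraction toward both.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g
```

In the paper's setup, `f` would be the BP network's training error as a function of its flattened initial weights and thresholds, with the fitness evaluations farmed out via MapReduce.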
Reducing aerodynamic vibration with piezoelectric actuators: a genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Hu, Zhenning; Jakiela, Mark; Pitt, Dale M.; Burnham, Jay K.
2004-07-01
Modern high performance aircraft fly at high speeds and high angles of attack. This can result in "buffet" aerodynamics, an unsteady turbulent flow that causes vibrations of the wings, tails, and body of the aircraft. This can result in decreased performance and ride quality, and fatigue failures. We are experimenting with controlling these vibrations by using piezoceramic actuators attached to the inner and outer skin of the aircraft. In this project, a tail or wing is investigated. A "generic" tail finite element model is studied in which individual actuators are assumed to exactly cover individual finite elements. Various optimizations of the orientations and power consumed by these actuators are then performed. Real coded genetic algorithms are used to perform the optimizations and a design space approximation technique is used to minimize costly finite element runs. An important result is the identification of a power consumption threshold for the entire system. Below the threshold, vibration control performance of optimized systems decreases with decreasing values of power supplied to the entire system.
Constant Modulus Algorithm with Reduced Complexity Employing DFT Domain Fast Filtering
NASA Astrophysics Data System (ADS)
Yang, Yoon Gi; Lee, Chang Su; Yang, Soo Mi
In this paper, a novel CMA (constant modulus algorithm) employing fast convolution in the DFT (discrete Fourier transform) domain is proposed. We propose a non-linear adaptation algorithm that minimizes the CMA cost function in the DFT domain. The proposed algorithm differs from a recently introduced DFT-domain CMA algorithm in that the original CMA cost function is left unchanged in developing the DFT-domain algorithm, resulting in improved convergence properties. Using the proposed approach, we can reduce the number of multiplications to O(N log₂ N), whereas the conventional CMA has a computation order of O(N²). Simulation results show that the proposed algorithm provides performance comparable to the conventional CMA.
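The core idea of combining a constant-modulus error with FFT-based fast convolution can be sketched in one block update. This is a shape-level illustration of the DFT-domain idea only, not the paper's exact recursion; the function name and step size are ours:

```python
import numpy as np

def cma_fft_block(w, x, R2=1.0, mu=1e-4):
    # One block iteration of a constant-modulus equalizer with the
    # filtering done as fast convolution via the FFT.
    # w: equalizer taps, x: one block of received samples.
    N, L = len(w), len(x)
    F = L + N - 1
    # Fast linear convolution: equalizer output for this block.
    y = np.fft.ifft(np.fft.fft(x, F) * np.fft.fft(w, F))[:L]
    # CMA error pushes |y|^2 toward the target modulus R2.
    e = y * (np.abs(y) ** 2 - R2)
    # Stochastic-gradient tap update: correlate error with the input.
    grad = np.array([np.sum(e[n:] * np.conj(x[:L - n])) for n in range(N)])
    return w - mu * grad, y
```

The FFT pair replaces the O(N²) sliding dot products of sample-by-sample CMA with O(N log₂ N) work per block, which is the complexity reduction the abstract claims.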
D'Azevedo, E.F.; Romine, C.H.
1992-09-01
The standard formulation of the conjugate gradient algorithm involves two inner product computations. The results of these two inner products are needed to update the search direction and the computed solution. In a distributed memory parallel environment, the computation and subsequent distribution of these two values requires two separate communication and synchronization phases. In this paper, we present a mathematically equivalent rearrangement of the standard algorithm that reduces the number of communication phases. We give a second derivation of the modified conjugate gradient algorithm in terms of the natural relationship with the underlying Lanczos process. We also present empirical evidence of the stability of this modified algorithm.
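For context, the two inner products in question appear in every iteration of textbook conjugate gradient; the sketch below marks them. The paper's rearrangement (not reproduced here) merges their communication into a single phase:

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=200):
    # Textbook conjugate gradient for SPD systems A x = b. The two
    # inner products marked below are the ones whose results force
    # two separate synchronization phases on distributed-memory
    # machines, since each is needed before the next vector update.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)   # inner product #1: step length
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r          # inner product #2: new residual norm
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```

Because `alpha` must be known before `r` can be updated, and `rr_new` before `p`, the two reductions cannot naively overlap; the modified algorithm's contribution is a mathematically equivalent rewrite that computes both from one communication phase.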
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library].
[Parallel PLS algorithm using MapReduce and its application in spectral modeling].
Yang, Hui-Hua; Du, Ling-Ling; Li, Ling-Qiao; Tang, Tian-Biao; Guo, Tuo; Liang, Qiong-Lin; Wang, Yi-Ming; Luo, Guo-An
2012-09-01
Partial least squares (PLS) has been widely used in spectral analysis and modeling, and it is computation-intensive and time-demanding when dealing with massive data. To solve this problem effectively, a novel parallel PLS using MapReduce is proposed, which consists of two procedures: the parallelization of data standardizing and the parallelization of principal component computing. Using NIR spectral modeling as an example, experiments were conducted on a Hadoop cluster, which is a collection of ordinary computers. The experimental results demonstrate that the proposed parallel PLS algorithm can handle massive spectra, can significantly cut down the modeling time, gains a basically linear speedup, and can be easily scaled up. PMID:23240405
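The first procedure, parallel data standardizing, reduces to computing global column means and standard deviations from per-chunk partial sums (count, sum, sum of squares), which fits the MapReduce model directly. A minimal sketch with plain Python lists standing in for HDFS chunks (function names are ours, not Hadoop API):

```python
from functools import reduce

def map_stats(chunk):
    # Map: per-chunk partial sums per column — (count, sums, sums of squares).
    n = len(chunk)
    cols = range(len(chunk[0]))
    s = [sum(row[j] for row in chunk) for j in cols]
    ss = [sum(row[j] ** 2 for row in chunk) for j in cols]
    return n, s, ss

def reduce_stats(a, b):
    # Reduce: merge partial sums from two chunks (associative, so it
    # can run in any order across the cluster).
    na, sa, ssa = a
    nb, sb, ssb = b
    return (na + nb,
            [p + q for p, q in zip(sa, sb)],
            [p + q for p, q in zip(ssa, ssb)])

def finalize(n, s, ss):
    # Global column means and (population) standard deviations.
    mean = [x / n for x in s]
    var = [q / n - m * m for q, m in zip(ss, mean)]
    return mean, [v ** 0.5 for v in var]
```

Each spectrum chunk contributes O(columns) numbers regardless of its row count, which is why this step scales essentially linearly with cluster size.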
NASA Astrophysics Data System (ADS)
Williams, Arnold C.; Pachowicz, Peter W.
2004-09-01
Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
Utilization of UV Curing Technology to Significantly Reduce the Manufacturing Cost of LIB Electrodes
Voelker, Gary; Arnold, John
2015-11-30
Previously identified novel binders and associated UV curing technology have been shown to reduce the time required to apply and finish electrode coatings from tens of minutes to less than one second. This revolutionary approach can result in dramatic increases in process speeds, significantly reduced capital (a factor of 10 to 20) and operating costs, reduced energy requirements, and reduced environmental concerns and costs due to the virtual elimination of harmful volatile organic solvents and associated solvent dryers and recovery systems. The accumulated advantages of higher speed, lower capital and operating costs, reduced footprint, elimination of VOC recovery, and reduced energy cost add up to a 90% reduction in the manufacturing cost of cathodes. When commercialized, the resulting cost reduction in lithium batteries will allow storage device manufacturers to expand their sales in the market and thereby accrue the energy savings of broader utilization of HEVs, PHEVs and EVs in the U.S.; a broad technology export market is also envisioned.
Minimum-variance reduced-order estimation algorithms from Pontryagin's minimum principle
NASA Technical Reports Server (NTRS)
Ebrahimi, Yaghoob S.
1989-01-01
A uniform derivation of minimum-variance reduced-order (MVRO) filter-smoother algorithms from Pontryagin's Minimum Principle is presented. An appropriate performance index for a general class of reduced-order estimation problems is formulated herein to yield optimal results over the entire time interval of estimation. These results provide quantitative criteria for measuring the performance of certain classes of heuristically designed, suboptimal reduced-order estimators as well as explicit guidance to the suboptimal filter design process, with both continuous and discrete filter-smoother algorithms being considered. By the duality principle, the algorithms of reduced-order estimation can be easily extended to the deterministic problems of optimal control (i.e., the regulator and linear tracking problem).
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing-data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
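The extreme learning machine used both inside the genetic wrapper and as a standalone regressor is a fixed random hidden layer plus a least-squares output fit. A generic ELM sketch (the buoy features and the paper's hyperparameters are not reproduced; names are ours):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    # Extreme learning machine regression: the hidden-layer weights W
    # and biases b are drawn at random and never trained; only the
    # output weights beta are fitted, by linear least squares.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training is a single `lstsq` call, the genetic algorithm can afford to refit an ELM for every candidate feature subset when scoring it against the Hs reconstruction error.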
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To assess the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison was carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two-Tank System (CTTS). PMID:25442399
An explicit algebraic reduced order algorithm for lithium ion cell voltage prediction
NASA Astrophysics Data System (ADS)
Senthil Kumar, V.; Gambhire, Priya; Hariharan, Krishnan S.; Khandelwal, Ashish; Kolake, Subramanya Mayya; Oh, Dukjin; Doo, Seokgwang
2014-02-01
The detailed isothermal electrochemical model for a lithium ion cell has ten coupled partial differential equations to describe the cell behavior. In an earlier publication [Journal of Power Sources, 222, 426 (2013)], a reduced order model (ROM) was developed by reducing the detailed model to a set of five linear ordinary differential equations and nonlinear algebraic expressions, using uniform reaction rate, volume averaging and profile-based approximations. An arbitrary current profile, involving charge, rest and discharge, is broken down into constant-current and linearly varying current periods. The linearly varying current period results are generic, since they include the constant-current period results as well. Hence, the linear ordinary differential equations in the ROM are solved for a linearly varying current period and an explicit algebraic algorithm is developed for lithium ion cell voltage prediction. While existing battery management system (BMS) algorithms are based on equivalent circuits and ordinary differential equations, the proposed algorithm is explicit and algebraic. These results are useful for developing a BMS algorithm for on-board applications in electric or hybrid vehicles, smartphones, etc. The algorithm is simple enough for a spreadsheet implementation and is useful for rapid analysis of laboratory data.
Dell'acqua, Flavio; Scifo, Paola; Rizzo, Giovanna; Catani, Marco; Simmons, Andrew; Scotti, Giuseppe; Fazio, Ferruccio
2010-01-15
Spherical deconvolution methods have been applied to diffusion MRI to improve diffusion tensor tractography results in brain regions with multiple fibre crossings. Recent developments, such as the introduction of non-negative constraints on the solution, allow a more accurate estimation of fibre orientations by reducing instability effects due to noise. Standard deconvolution methods do not, however, adequately model the effects of partial volume from isotropic tissue, such as gray matter or cerebrospinal fluid, which may degrade spherical deconvolution results. Here we use a newly developed spherical deconvolution algorithm based on an adaptive regularization (a damped version of the Richardson-Lucy algorithm) to reduce isotropic partial volume effects. Results from both simulated and in vivo datasets show that, compared to a standard non-negative constrained algorithm, the damped Richardson-Lucy algorithm reduces spurious fibre orientations and preserves the angular resolution of the main fibre orientations. These findings suggest that, in some brain regions, non-negative constraints alone may not be sufficient to reduce spurious fibre orientations. Considering both the speed of processing and the scan time required, this new method has the potential for better characterizing white matter anatomy and the integrity of pathological tissue. PMID:19781650
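The underlying Richardson-Lucy iteration is a multiplicative update that keeps the estimate non-negative by construction. The paper's damped variant operates on orientation distributions over the sphere, so the 1-D sketch below is only a shape-level illustration, with a crude `damping` knob of our own standing in for the adaptive regularization:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, damping=0.0):
    # Plain 1-D Richardson-Lucy deconvolution. Each iteration
    # multiplies the estimate by the back-projected ratio of observed
    # to predicted data, so non-negativity is preserved automatically.
    # damping in [0, 1) blends the update toward 1, crudely slowing
    # amplification (illustrative only, not the paper's scheme).
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        update = np.convolve(ratio, psf_flip, mode="same")
        est = est * ((1 - damping) * update + damping)
    return est
```

On noiseless data the iteration sharpens a blurred spike back toward its true location, which is the behavior the damped version tempers to avoid amplifying noise into spurious fibre orientations.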
Zong, Wei; Nielsen, Larry; Gross, Brian; Brea, Juan; Frassica, Joseph
2016-08-01
There has been a high rate of false alarms for critical electrocardiogram (ECG) arrhythmia events in intensive care units (ICUs), which may result in a 'crying-wolf' syndrome and jeopardize patient safety. This article presents an algorithm to reduce false critical arrhythmia alarms using arterial blood pressure (ABP) and/or photoplethysmogram (PPG) waveform features. We established long-duration reference alarm datasets which consist of 573 ICU waveform-alarm records (283 for the development set and 290 for the test set) with a total length of 551 patient days. Each record has continuous recordings of ECGs, ABP and/or PPG signals and contains one or multiple critical ECG alarms. The average length of a record is 23 h. There are in total 2408 critical ECG alarms (1414 in the development set and 994 in the test set), each of which was manually annotated by experts. The algorithm extracts ABP/PPG pulse features on a beat-by-beat basis. For each pulse, five event feature indicators (EFIs), which correspond to the five critical ECG alarms, are generated. At the time of a critical ECG alarm, the corresponding EFI values of the ABP/PPG pulses around the alarm time are checked to adjudicate (accept/reject) the alarm. The algorithm retains all (100%) of the true alarms and significantly reduces the false alarms. Our results suggest that the algorithm is effective and practical on account of its real-time dynamic processing mechanism and computational efficiency. PMID:27455375
Convergence and stability properties of minimal polynomial and reduced rank extrapolation algorithms
NASA Technical Reports Server (NTRS)
Sidi, A.
1983-01-01
The minimal polynomial and reduced rank extrapolation algorithms are two convergence-acceleration methods for sequences of vectors. In a recent survey these methods were tested and compared with the scalar, vector, and topological epsilon algorithms, and were observed to be more efficient than the latter. It was also observed that the two methods have similar convergence properties. The convergence and stability properties of these methods are analyzed, and the performance of the acceleration methods when applied to a class of vector sequences that includes those obtained from systems of linear equations by matrix iterative methods is discussed.
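Reduced rank extrapolation itself is compact: given iterates x_0, ..., x_{k+1}, it picks weights g_0, ..., g_k summing to one that minimize the norm of the weighted first differences, and returns the correspondingly weighted iterate. A minimal numpy sketch of this generic textbook formulation (not the exact variant analyzed in the paper):

```python
import numpy as np

def rre(X):
    # X: iterates x_0 .. x_{k+1} as rows. Returns the RRE limit
    # estimate s = sum_j g_j x_j with sum(g) = 1 chosen to minimize
    # || sum_j g_j u_j ||, where u_j = x_{j+1} - x_j.
    X = np.asarray(X, dtype=float)
    U = np.diff(X, axis=0)              # rows u_0 .. u_k
    # Eliminate the constraint sum(g) = 1 via g_0 = 1 - g_1 - ... :
    # the residual becomes u_0 + C @ eta with C columns (u_j - u_0).
    C = (U[1:] - U[0]).T
    eta, *_ = np.linalg.lstsq(C, -U[0], rcond=None)
    g = np.concatenate(([1 - eta.sum()], eta))
    return g @ X[:-1]
```

For a linear fixed-point iteration x_{n+1} = M x_n + b in d dimensions, taking k = d recovers the exact limit (I - M)^{-1} b from just d + 2 iterates, which is the mechanism behind the acceleration observed for matrix iterative methods.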
NASA Astrophysics Data System (ADS)
Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.
2016-02-01
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting that the high-frequency details are well preserved. In addition, when the search window size increases from 11×11 to 19×19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
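The direct matrix inversion baseline named in the comparison is simple to state: at each pixel, the two material images solve a 2×2 linear system against the low- and high-energy images. A sketch (the calibration matrix `A` mapping material fractions to measured values is assumed known; noise amplification from this inversion is exactly what HYPR-LR/HYPR-NLM then suppress):

```python
import numpy as np

def decompose(low, high, A):
    # Image-domain two-material decomposition by direct matrix
    # inversion: at every pixel solve A @ [f1, f2] = [low, high].
    # low, high: 2-D images at the two energies; A: 2x2 calibration
    # matrix. Returns an array of shape low.shape + (2,).
    Ainv = np.linalg.inv(A)
    stack = np.stack([low, high], axis=-1)
    return stack @ Ainv.T   # applies Ainv at every pixel at once
```

Because `Ainv` is typically ill-conditioned for spectrally similar materials, zero-mean noise in `low`/`high` is strongly amplified in the material images, motivating the edge-preserving filtering step.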
NASA Astrophysics Data System (ADS)
Andersson, Magnus; Marashi, Seyedeh Sepideh; Karlsson, Matts
2012-11-01
In the present study, aerodynamic drag (AD) has been estimated for an empty and a fully loaded conceptual timber truck (TT) using Computational Fluid Dynamics (CFD). Increasing fuel prices have challenged heavy duty vehicle (HDV) manufacturers to strive for better fuel economy, e.g. by utilizing drag-reducing external devices. Despite this knowledge, TT fleets seem to have been left in the dark. As in HDV aerodynamics, a large low-pressure wake forms behind the tractor (unloaded) and downstream of the trailer (full load), thus generating AD. As TTs travel half the time without any cargo, a focus on drag reduction is important. The full-scale TTs were simulated using the realizable k-epsilon model with grid adaption techniques for mesh independence. Our results indicate that a loaded TT reduces the AD significantly, as both wake size and turbulence kinetic energy are lowered. In contrast to HDVs, unloaded TTs have a much larger design space available for possible drag-reducing devices, e.g. plastic wrapping and/or flaps. This conceptual CFD study has given an indication of the large AD difference between the unloaded and fully loaded TT, showing the potential for significant AD improvements.
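The quantity being compared here follows the standard drag relation F_D = ½·ρ·C_d·A·v². A minimal sketch, where the drag coefficients and frontal area are chosen purely for illustration (they are not values from the study) to show how a wake-driven change in C_d translates into drag power:

```python
RHO_AIR = 1.225  # kg/m^3, sea-level air density (assumed)

def drag_force(cd, frontal_area_m2, speed_ms):
    """F_D = 0.5 * rho * Cd * A * v^2, in newtons."""
    return 0.5 * RHO_AIR * cd * frontal_area_m2 * speed_ms ** 2

def drag_power_kw(cd, frontal_area_m2, speed_ms):
    # power needed to overcome drag: P = F_D * v
    return drag_force(cd, frontal_area_m2, speed_ms) * speed_ms / 1000.0

# hypothetical coefficients: the unloaded truck's larger wake implies a higher Cd
unloaded = drag_power_kw(cd=0.9, frontal_area_m2=9.0, speed_ms=25.0)
loaded = drag_power_kw(cd=0.7, frontal_area_m2=9.0, speed_ms=25.0)
```

Since A and v are held fixed, the drag power scales linearly with C_d, which is why wake reduction feeds directly into fuel economy.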
Significantly reduced expression of the proteoglycan decorin in Alzheimer's disease fibroblasts
Brandan, Enrique; Melo, Francisco; García, María; Contreras, Maribel
1996-01-01
Aims—To investigate whether proteoglycan synthesis is altered in skin fibroblasts in patients with Alzheimer's disease compared with normal subjects. Methods—Cell lines obtained from donors with Alzheimer's disease and healthy controls were incubated with radioactive sulphate. The proteoglycans synthesised were determined and analysed by chromatography, sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and glycosaminoglycan-lyase treatment. The amount of decorin synthesised by each cell line was quantified using western blot analysis. Transcripts for human decorin were determined using northern blot analysis. Results—No significant changes in total sulphate incorporation and glycosaminoglycan (GAG) composition were detected in the incubation media of these cells. However, chromatographic and SDS-PAGE analysis of the proteoglycans secreted by the cell lines showed that a dermatan sulphate proteoglycan of 150-125 kilodaltons was substantially reduced in Alzheimer's disease fibroblasts. The molecular characteristics of this proteoglycan correspond to decorin. Western blot analysis indicated that decorin was reduced in Alzheimer's disease incubation medium compared with normal medium. Northern blotting indicated that decorin transcripts were significantly reduced in Alzheimer's disease fibroblasts compared with normal fibroblasts. Concentrations of glypican, a cell surface heparan sulphate proteoglycan, remained the same. Conclusions—These results strongly suggest that the expression and synthesis of decorin is affected in Alzheimer's disease skin fibroblasts. PMID:16696102
Hussain, Salik; Ji, Zhaoxia; Taylor, Alexia J; DeGraff, Laura M; George, Margaret; Tucker, Charles J; Chang, Chong Hyun; Li, Ruibin; Bonner, James C; Garantziotis, Stavros
2016-08-23
Commercialization of multiwalled carbon nanotubes (MWCNT)-based applications has been hampered by concerns regarding their lung toxicity potential. Hyaluronic acid (HA) is a ubiquitously found polysaccharide, which is anti-inflammatory in its native high molecular weight form. HA-functionalized smart MWCNTs have shown promise as tumor-targeting drug delivery agents and can enhance bone repair and regeneration. However, it is unclear whether HA functionalization could reduce the pulmonary toxicity potential of MWCNTs. Using in vivo and in vitro approaches, we investigated the effectiveness of MWCNT functionalization with HA in increasing nanotube biocompatibility and reducing lung inflammatory and fibrotic effects. We utilized three-dimensional cultures of differentiated primary human bronchial epithelia to translate findings from rodent assays to humans. We found that HA functionalization increased stability and dispersion of MWCNTs and reduced postexposure lung inflammation, fibrosis, and mucus cell metaplasia compared with nonfunctionalized MWCNTs. Cocultures of fully differentiated bronchial epithelial cells (cultivated at air-liquid interface) and human lung fibroblasts (submerged) displayed significant reduction in injury, oxidative stress, as well as pro-inflammatory gene and protein expression after exposure to HA-functionalized MWCNTs compared with MWCNTs alone. In contrast, neither type of nanotubes stimulated cytokine production in primary human alveolar macrophages. In aggregate, our results demonstrate the effectiveness of HA functionalization as a safer design approach to eliminate MWCNT-induced lung injury and suggest that HA functionalization works by reducing MWCNT-induced epithelial injury. PMID:27459049
Heimbauer, Lisa A.; Beran, Michael J.; Owren, Michael J.
2011-01-01
Summary A long-standing debate concerns whether humans are specialized for speech perception [1–7], which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content [2–4,7]. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words [8,9], asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuo-graphic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users [10]. Experiment 2 tested “impossibly unspeechlike” [3] sine-wave (SW) synthesis, which reduces speech to just three moving tones [11]. Although receiving only intermittent and non-contingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate, but improved in Experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human [12–14]. PMID:21723125
Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J.
1996-12-31
Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm that gives a numerically stable procedure for finding Pade approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.
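The projection at the heart of such Arnoldi-based reduction can be sketched in a few lines: build an orthonormal Krylov basis V from (A, b) and project the state-space matrices onto it, which matches the first k moments c^T A^j b of the full model. This is a plain moment-matching sketch, not the coordinate-transformed, stability-preserving variant the paper develops:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis V of the Krylov space span{b, Ab, ..., A^(k-1)b}
    via modified Gram-Schmidt; H is the projected Hessenberg matrix."""
    n = len(b)
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    H = np.zeros((k, k))
    for j in range(k - 1):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against previous vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def reduce_model(A, b, c, k):
    # projection-based reduced model: Ak = V^T A V, bk = V^T b, ck = V^T c
    V, _ = arnoldi(A, b, k)
    return V.T @ A @ V, V.T @ b, V.T @ c
```

Because A^j b lies in the Krylov space for j < k, the reduced model reproduces the moments c^T A^j b exactly for j = 0, ..., k-1; stability of Ak, however, is exactly what plain Arnoldi does not guarantee, motivating the paper's coordinate transformation.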
[SKLOF: a new algorithm to reduce the range of supernova candidates].
Tu, Liang-ping; Wei, Hui-ming; Wei, Peng; Pan, Jing-chang; Luo, A-li; Zhao, Yong-heng
2015-01-01
Supernovae (SNe) serve as the "standard candles" of cosmology. The probability of an outbreak in any given galaxy is very low, making supernovae a special, rare class of astronomical objects; only by surveying a large number of galaxies do we have a chance of finding one. A supernova in the midst of its explosion can outshine its entire host galaxy, so the spectra of such galaxies show obvious supernova features. However, the number of supernovae found so far is very small relative to the huge number of astronomical objects surveyed. The time required to search for supernova candidates is critical to whether follow-up observations can be made, so an efficient method is needed. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. By improving the LOF algorithm, we introduce a new algorithm, named SKLOF, that reduces the search range of supernova candidates in a flood of galaxy spectra. Firstly, the spectral datasets are pruned, discarding most objects that cannot be outliers. Secondly, the improved LOF algorithm is used to calculate the local outlier factors (LOFs) of the remaining spectra, which are arranged in descending order. Finally, a much smaller search range of supernova candidates is obtained for subsequent identification. The experimental results show that the algorithm is very effective: compared with the original LOF algorithm it not only preserves detection accuracy but also reduces the running time. PMID:25993860
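The prune-then-rank idea can be sketched with a small NumPy implementation: points in dense regions (small k-distance) cannot be outliers and are discarded first, and standard LOF scores are then used only to rank the surviving candidates. The pruning criterion, `k`, and `keep_frac` below are illustrative simplifications, not the actual SKLOF design:

```python
import numpy as np

def knn_dists(X, k):
    # O(N^2) pairwise distances; fine for a sketch
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]
    return D, idx

def lof_scores(X, k=5):
    """Classic LOF: reachability distances -> local reachability density -> LOF."""
    D, idx = knn_dists(X, k)
    kdist = D[np.arange(len(X)), idx[:, -1]]                  # k-distance of each point
    reach = np.maximum(kdist[idx], D[np.arange(len(X))[:, None], idx])
    lrd = 1.0 / (reach.mean(axis=1) + 1e-12)                  # local reachability density
    return lrd[idx].mean(axis=1) / lrd                        # avg neighbour lrd / own lrd

def sklof_candidates(X, k=5, keep_frac=0.2):
    # Stage 1 (pruning): points with small k-distance sit in dense regions and
    # cannot be outliers; keep only the keep_frac with the largest k-distance.
    D, idx = knn_dists(X, k)
    kdist = D[np.arange(len(X)), idx[:, -1]]
    n_keep = max(1, int(len(X) * keep_frac))
    candidates = np.argsort(kdist)[::-1][:n_keep]
    # Stage 2: rank the surviving candidates by LOF, descending.
    lof = lof_scores(X, k)
    return candidates[np.argsort(lof[candidates])[::-1]]
```

On a dense cluster with one planted outlier, the outlier survives the pruning stage and tops the LOF ranking, while ~80% of the objects never reach the expensive scoring step.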
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on a pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling algorithm can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
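The sampling scheme in the Methods section can be sketched directly: keep every boundary voxel, cluster interior voxels by their influence-matrix rows ("signatures"), and draw a few representatives per cluster. The k-means clusterer, cluster count, and per-cluster quota below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means on rows of X; returns a cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def sample_voxels(influence, boundary_mask, n_clusters=8, per_cluster=2, seed=0):
    """influence: (n_voxels, n_beamlets); rows are influence-matrix signatures."""
    boundary = np.flatnonzero(boundary_mask)       # boundary voxels are always kept
    interior = np.flatnonzero(~boundary_mask)
    labels = kmeans(influence[interior], n_clusters, seed=seed)
    rng = np.random.default_rng(seed)
    picked = []
    for j in range(n_clusters):                    # a few voxels from each cluster
        members = interior[labels == j]
        if len(members):
            picked.extend(rng.choice(members, min(per_cluster, len(members)),
                                     replace=False))
    return np.concatenate([boundary, np.array(picked, dtype=int)])
```

The optimizer then sees only the returned voxel indices, shrinking the constraint set while the cluster representatives stand in for dosimetrically similar interior voxels.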
Reducing PAPR of optical OFDM system based on PTS and companding joint algorithm
NASA Astrophysics Data System (ADS)
Jia, Yangjing; Li, Ping; Lei, Dongming; Chen, Ailin; Wang, Jinpeng; Zou, Nianyu
2015-10-01
Optical orthogonal frequency division multiplexing (OFDM) combines the advantages of wireless OFDM and optical fiber technology: it has high spectral efficiency and can effectively resist polarization mode dispersion and chromatic dispersion in the fiber link. However, a high peak-to-average power ratio (PAPR) is one of the important shortcomings of optical OFDM systems; it not only requires amplifiers with a greater dynamic range but also leads to serious fiber nonlinear effects. Reducing the PAPR of an optical OFDM system is therefore a crucial issue. This work, aiming to reduce PAPR and improve system BER, analyzes PAPR suppression techniques for optical OFDM systems. First, to improve BER, we utilize the Partial Transmit Sequence (PTS) algorithm, which multiplies the IFFT-converted signals by phase factors b(v) and searches for the b(v) that minimizes PAPR; however, this method requires considerable computation. We then exploit companding, which compresses the amplitude of large OFDM signals and expands small ones. Simulating the two algorithms separately shows that both can suppress PAPR, but the effect leaves room for improvement. Therefore, a joint PTS-and-companding algorithm is proposed, simulated, and added to the optical OFDM system. A system was set up with a fiber length of 10 km, a Mach-Zehnder modulator and a distributed feedback laser, using 4QAM and a 512-point IFFT. The results show that the joint algorithm reduces PAPR from about 12 dB to 8 dB, mitigating the high-PAPR problem, improving constellation convergence, and enhancing the transmission performance of the optical OFDM system.
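The two ingredients of the joint scheme can be sketched in NumPy: a PTS search that splits the subcarriers into blocks, IFFTs each block once, and exhaustively tries phase factors b(v); and a mu-law-style companding function that compresses large amplitudes while leaving the peak unchanged. Block count, phase set, and mu are illustrative assumptions, and this toy uses a 64-point IFFT rather than the paper's 512:

```python
import itertools
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(x, mu=4.0):
    """Compress large amplitudes and expand small ones; phase is preserved
    and the peak amplitude is unchanged, so PAPR can only decrease."""
    a = np.abs(x)
    amax = a.max()
    return amax * np.log1p(mu * a / amax) / np.log1p(mu) * np.exp(1j * np.angle(x))

def pts_papr(X, n_sub=4, phases=(1, -1, 1j, -1j)):
    """Partial transmit sequences: split the subcarriers into n_sub blocks,
    IFFT each block once, then search phase factors b(v) for the lowest PAPR."""
    N = len(X)
    parts = []
    for v in range(n_sub):
        block = np.zeros(N, dtype=complex)
        block[v * N // n_sub:(v + 1) * N // n_sub] = X[v * N // n_sub:(v + 1) * N // n_sub]
        parts.append(np.fft.ifft(block))
    best_val, best_x = np.inf, None
    for combo in itertools.product(phases, repeat=n_sub):   # exhaustive b(v) search
        x = sum(ph * part for part, ph in zip(parts, combo))
        val = papr_db(x)
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x
```

Since the all-ones phase combination reproduces the original signal, the PTS result can never be worse than the unmodified symbol; the exhaustive search over phases^n_sub is exactly the computational cost the abstract complains about, and companding is the cheap complement.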
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
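The surrogate objective is simple to state in code: score a candidate filter by the Mean Monthly Standard deviation of the retrieved CO2 it keeps, rather than by error against truth. The sketch below uses a plain per-feature threshold filter and a synthetic noise-proxy feature; the feature, threshold form, and data are illustrative assumptions, not the mission's actual filter or inputs:

```python
import numpy as np

def mean_monthly_stdev(co2, month_idx):
    """The surrogate objective: average of the per-month standard deviations
    of retrieved CO2 -- a proxy for scatter rather than error against truth."""
    return float(np.mean([co2[month_idx == m].std() for m in np.unique(month_idx)]))

def apply_threshold_filter(features, thresholds):
    # a candidate filter here is just per-feature thresholds;
    # keep only soundings that pass every one
    return np.all(features < thresholds, axis=1)

# synthetic demo: one feature that (by construction) tracks retrieval noise
rng = np.random.default_rng(4)
n = 500
month_idx = rng.integers(0, 12, n)
noise_proxy = rng.random(n)                              # stand-in input feature
co2 = 400.0 + noise_proxy * rng.standard_normal(n)       # scatter grows with feature
keep = apply_threshold_filter(noise_proxy[:, None], np.array([0.5]))
mms_all = mean_monthly_stdev(co2, month_idx)
mms_kept = mean_monthly_stdev(co2[keep], month_idx[keep])
```

A genetic algorithm would then search over thresholds (and feature subsets) to minimize `mms_kept`, which is exactly what makes the resulting filter usable as either a prefilter or a post-hoc quality label.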
Verster, Joris C; Roth, Thomas
2014-10-01
Lapses of attention are characteristic of attention-deficit/hyperactivity disorder (ADHD) and as such may impair performance of daily activities. Data from an on-road driving study were reanalyzed to determine lapses in patients with ADHD after treatment with methylphenidate and placebo. A total of 18 adult ADHD patients performed a 100-km on-road driving test and were instructed to drive with a steady lateral position and constant speed. The standard deviation of lateral position (SDLP), that is, the weaving of the car, as well as lapses and alertness, were assessed. Driving was significantly better (P = 0.006) with methylphenidate (SDLP, 18.8 cm) than with placebo (SDLP, 21.2 cm). Both the reduction in SDLP and the number of lapses (P = 0.003) confirm this significant improvement, which is further supported by subjective assessments of perceived driving performance. Although lapses were common in the placebo condition (11/18 patients), they were observed much less frequently (5/18 patients) after treatment with methylphenidate. Postdriving assessments suggest that lapses often go unnoticed by drivers. In conclusion, methylphenidate significantly improves driving of patients with ADHD by significantly reducing the number of lapses. PMID:24978156
Neu, Alicia M; Richardson, Troy; Lawlor, John; Stuart, Jayne; Newland, Jason; McAfee, Nancy; Warady, Bradley A
2016-06-01
The Standardizing Care to improve Outcomes in Pediatric End stage renal disease (SCOPE) Collaborative aims to reduce peritonitis rates in pediatric chronic peritoneal dialysis patients by increasing implementation of standardized care practices. To assess this, monthly care bundle compliance and annualized monthly peritonitis rates were evaluated from 24 SCOPE centers that were participating at collaborative launch and that provided peritonitis rates for the 13 months prior to launch. Changes in bundle compliance were assessed using either a logistic regression model or a generalized linear mixed model. Changes in average annualized peritonitis rates over time were illustrated using the latter model. In the first 36 months of the collaborative, 644 patients with 7977 follow-up encounters were included. The likelihood of compliance with follow-up care practices increased significantly (odds ratio 1.15, 95% confidence interval 1.10, 1.19). Mean monthly peritonitis rates significantly decreased from 0.63 episodes per patient year (95% confidence interval 0.43, 0.92) prelaunch to 0.42 (95% confidence interval 0.31, 0.57) at 36 months postlaunch. A sensitivity analysis confirmed that as mean follow-up compliance increased, peritonitis rates decreased, reaching statistical significance at 80% at which point the prelaunch rate was 42% higher than the rate in the months following achievement of 80% compliance. In its first 3 years, the SCOPE Collaborative has increased the implementation of standardized follow-up care and demonstrated a significant reduction in average monthly peritonitis rates. PMID:27165827
A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports
Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N
2009-01-01
Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.
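The approach can be caricatured with a toy genetic algorithm that evolves keyword-set "phrase patterns" separating abnormal from normal reports: individuals are word sets, fitness is classification accuracy, and evolution uses elitism, tournament selection, set crossover, and word-toggle mutation. The vocabulary, operators, and rates are illustrative assumptions, far simpler than the paper's phrase-pattern representation:

```python
import random

VOCAB = ["mass", "lesion", "calcification", "density",
         "benign", "screening", "routine", "unremarkable"]

def matches(pattern, report_words):
    # a report is flagged "abnormal" if it contains any pattern word
    return any(w in report_words for w in pattern)

def fitness(pattern, reports):
    """Fraction of (word_set, is_abnormal) pairs the pattern classifies correctly."""
    if not pattern:
        return 0.0
    correct = sum(matches(pattern, words) == label for words, label in reports)
    return correct / len(reports)

def evolve(reports, pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [{w for w in VOCAB if rng.random() < 0.3} for _ in range(pop_size)]
    best = max(pop, key=lambda p: fitness(p, reports))
    for _ in range(generations):
        nxt = [set(best)]                                   # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop, 2)                       # two binary tournaments
            p1 = a if fitness(a, reports) >= fitness(b, reports) else b
            a, b = rng.sample(pop, 2)
            p2 = a if fitness(a, reports) >= fitness(b, reports) else b
            # crossover: keep shared words, take the rest with probability 0.5
            child = {w for w in p1 | p2 if w in (p1 & p2) or rng.random() < 0.5}
            for w in VOCAB:                                 # mutation: toggle words
                if rng.random() < 0.1:
                    child.symmetric_difference_update({w})
            nxt.append(child)
        pop = nxt
        best = max(pop, key=lambda p: fitness(p, reports))
    return best
```

On synthetic reports where abnormal cases are marked by "mass" or "lesion", the GA recovers a pattern that classifies nearly all reports correctly, mirroring the paper's two-class (normal/abnormal) setting.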
Code of Federal Regulations, 2014 CFR
2014-04-01
... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... significantly reducing the rate of future benefit accrual. The following questions and answers concern the... a plan amendment of an applicable pension plan that significantly reduces the rate of future...
Gardiner, Eleanor J; Gillet, Valerie J; Willett, Peter; Cosgrove, David A
2007-01-01
Chemical databases are routinely clustered, with the aim of grouping molecules which share similar structural features. Ideally, medicinal chemists are then able to browse a few representatives of the cluster in order to interpret the shared activity of the cluster members. However, when molecules are clustered using fingerprints, it may be difficult to decipher the structural commonalities which are present. Here, we seek to represent a cluster by means of a maximum common substructure based on the shared functionality of the cluster members. Previously, we have used reduced graphs, where each node corresponds to a generalized functional group, as topological molecular descriptors for virtual screening. In this work, we precluster a database using any clustering method. We then represent the molecules in a cluster as reduced graphs. By repeated application of a maximum common edge substructure (MCES) algorithm, we obtain one or more reduced graph cluster representatives. The sparsity of the reduced graphs means that the MCES calculations can be performed in real time. The reduced graph cluster representatives are readily interpretable in terms of functional activity and can be mapped directly back to the molecules to which they correspond, giving the chemist a rapid means of assessing potential activities contained within the cluster. Clusters of interest are then subject to a detailed R-group analysis using the same iterated MCES algorithm applied to the molecular graphs. PMID:17309248
2014-01-01
Background Percutaneous vertebroplasty (PVP) may lead to significant radiation exposure to patients, operators, and operating room personnel; radiation exposure is therefore a concern. The aim of this study was to present a remote-control cement delivery device and to study whether it can reduce dose exposure to operators. Methods After meticulous preoperative preparation, a series of 40 osteoporosis patients were treated with unilateral-approach PVP using the new cement delivery device. We compared levels of fluoroscopic exposure to the operator standing in different places during the operation. Group A: the operator stood about 4 meters away from the X-ray tube behind a lead sheet. Group B: the operator stood adjacent to the patient, as when using a conventional manual cement delivery device. Results Over the whole operative process, the radiation dose to the operator was 0.10 ± 0.03 (0.07-0.15) μSv in group A and 12.09 ± 4.67 (10-20) μSv in group B, a difference that was statistically significant (P < 0.001). Conclusion The new cement delivery device plus meticulous preoperative preparation can significantly decrease the radiation dose to operators. PMID:25084860
Long-term stable polymer solar cells with significantly reduced burn-in loss.
Kong, Jaemin; Song, Suhee; Yoo, Minji; Lee, Ga Young; Kwon, Obum; Park, Jin Kuen; Back, Hyungcheol; Kim, Geunjin; Lee, Seoung Ho; Suh, Hongsuk; Lee, Kwanghee
2014-01-01
The inferior long-term stability of polymer-based solar cells needs to be overcome for their commercialization to be viable. In particular, an abrupt decrease in performance during initial device operation, the so-called 'burn-in' loss, has been a major contributor to the short lifetime of polymer solar cells, fundamentally impeding polymer-based photovoltaic technology. In this study, we demonstrate polymer solar cells with significantly improved lifetime, in which an initial burn-in loss is substantially reduced. By isolating trap-embedded components from pristine photoactive polymers based on the unimodality of molecular weight distributions, we are able to selectively extract a trap-free, high-molecular-weight component. The resulting polymer component exhibits enhanced power conversion efficiency and long-term stability without abrupt initial burn-in degradation. Our discovery suggests a promising possibility for commercial viability of polymer-based photovoltaics towards real solar cell applications. PMID:25483206
Huffman, Gerald P.
2012-11-13
A new method of producing liquid transportation fuels from coal and other hydrocarbons that significantly reduces carbon dioxide emissions by combining Fischer-Tropsch synthesis with catalytic dehydrogenation is claimed. Catalytic dehydrogenation (CDH) of the gaseous products (C1-C4) of Fischer-Tropsch synthesis (FTS) can produce large quantities of hydrogen while converting the carbon to multi-walled carbon nanotubes (MWCNT). Incorporation of CDH into a FTS-CDH plant converting coal to liquid fuels can eliminate all or most of the CO.sub.2 emissions from the water-gas shift (WGS) reaction that is currently used to elevate the H.sub.2 level of coal-derived syngas for FTS. Additionally, the FTS-CDH process saves large amounts of water used by the WGS reaction and produces a valuable by-product, MWCNT.
Lichter, David I.; Di Bacco, Alessandra; Blakemore, Stephen J.; Berger, Allison; Koenig, Erik; Bernard, Hugues; Trepicchio, William; Li, Bin; Neuwirth, Rachel; Chattopadhyay, Nibedita; Bolen, Joseph B.; Dorner, Andrew J.; van de Velde, Helgi; Ricci, Deborah; Jagannath, Sundar; Berenson, James R.; Richardson, Paul G.; Stadtmauer, Edward A.; Orlowski, Robert Z.; Lonial, Sagar; Anderson, Kenneth C.; Sonneveld, Pieter; San Miguel, Jesús F.; Esseltine, Dixie-Lee; Schu, Matthew
2014-01-01
Various translocations and mutations have been identified in myeloma, and certain aberrations, such as t(4;14) and del17, are linked with disease prognosis. To investigate mutational prevalence in myeloma and associations between mutations and patient outcomes, we tested a panel of 41 known oncogenes and tumor suppressor genes in tumor samples from 133 relapsed myeloma patients participating in phase 2 or 3 clinical trials of bortezomib. DNA mutations were identified in 14 genes. BRAF as well as RAS genes were mutated in a large proportion of cases (45.9%) and these mutations were mutually exclusive. New recurrent mutations were also identified, including in the PDGFRA and JAK3 genes. NRAS mutations were associated with a significantly lower response rate to single-agent bortezomib (7% vs 53% in patients with mutant vs wild-type NRAS, P = .00116, Bonferroni-corrected P = .016), as well as shorter time to progression in bortezomib-treated patients (P = .0058, Bonferroni-corrected P = .012). However, NRAS mutation did not impact outcome in patients treated with high-dose dexamethasone. KRAS mutation did not reduce sensitivity to bortezomib or dexamethasone. These findings identify a significant clinical impact of NRAS mutation in myeloma and demonstrate a clear example of functional differences between the KRAS and NRAS oncogenes. PMID:24335104
2014-01-01
Background Neoadjuvant chemotherapy (NC) is an established therapy in breast cancer, able to downstage positive axillary lymph nodes, but it might hamper their detectability. Although clinical observations suggest a lower lymph node yield (LNY) after NC, data are inconclusive and it is unclear whether NC-dependent parameters influence detection rates by axillary lymph node dissection (ALND). Methods We retrospectively analyzed the LNY in 182 patients with ALND after NC and 351 patients with primary ALND. The impact of surgery or pathological examination and of specific histomorphological alterations was evaluated. Outcome analyses regarding recurrence rates, disease-free (DFS) and overall survival (OS) were performed. Results Axillary LNY was significantly lower in the NC group than in the primary surgery group (median 13 vs. 16; p < 0.0001). The likelihood of incomplete axillary staging was four times higher in the NC group (14.8% vs. 3.4%, p < 0.0001). Multivariate analyses excluded any influence of surgeon or pathologist. However, the chemotherapy-dependent histological feature lymphoid depletion was an independent predictive factor for a lower LNY. Outcome analyses revealed no significant impact of the LNY on local and regional recurrence rates or on DFS and OS, respectively. Conclusion NC significantly reduces the LNY by ALND and has profound effects on the histomorphological appearance of lymph nodes. The current recommendations for a minimum removal of 10 lymph nodes by ALND are clearly compromised by the clinically already established concept of NC. An LNY of less than 10 by ALND after NC might not be indicative of insufficient axillary staging. PMID:24386929
Rojewska, Ewelina; Piotrowska, Anna; Makuch, Wioletta; Przewlocka, Barbara; Mika, Joanna
2016-03-01
Recent studies have highlighted the involvement of the kynurenine pathway in the pathology of neurodegenerative diseases, but the role of this system in neuropathic pain requires further extensive research. Therefore, the aim of our study was to examine the role of kynurenine 3-monooxygenase (Kmo), an enzyme that is important in this pathway, in a rat model of neuropathy after chronic constriction injury (CCI) to the sciatic nerve. For the first time, we demonstrated that the injury-induced increase in the Kmo mRNA levels in the spinal cord and the dorsal root ganglia (DRG) was reduced by chronic administration of the microglial inhibitor minocycline and that this effect paralleled a decrease in the intensity of neuropathy. Further, minocycline administration alleviated the lipopolysaccharide (LPS)-induced upregulation of Kmo mRNA expression in microglial cell cultures. Moreover, we demonstrated that not only indirect inhibition of Kmo using minocycline but also direct inhibition using Kmo inhibitors (Ro61-6048 and JM6) decreased neuropathic pain intensity on the third and the seventh days after CCI. Chronic Ro61-6048 administration diminished the protein levels of IBA-1, IL-6, IL-1beta and NOS2 in the spinal cord and/or the DRG. Both Kmo inhibitors potentiated the analgesic properties of morphine. In summary, our data suggest that in neuropathic pain model, inhibiting Kmo function significantly reduces pain symptoms and enhances the effectiveness of morphine. The results of our studies show that the kynurenine pathway is an important mediator of neuropathic pain pathology and indicate that Kmo represents a novel pharmacological target for the treatment of neuropathy. PMID:26524415
Sulakvelidze, Alexander
2013-10-01
Bacteriophages (also called 'phages') are viruses that kill bacteria. They are arguably the oldest (3 billion years old, by some estimates) and most ubiquitous (total number estimated to be 10^30-10^32) known organisms on Earth. Phages play a key role in maintaining microbial balance in every ecosystem where bacteria exist, and they are part of the normal microflora of all fresh, unprocessed foods. Interest in various practical applications of bacteriophages has been gaining momentum recently, with perhaps the most attention focused on using them to improve food safety. That approach, called 'phage biocontrol', typically includes three main types of applications: (i) using phages to treat domesticated livestock in order to reduce their intestinal colonization with, and shedding of, specific bacterial pathogens; (ii) treatments for decontaminating inanimate surfaces in food-processing facilities and other food establishments, so that foods processed on those surfaces are not cross-contaminated with the targeted pathogens; and (iii) post-harvest treatments involving direct applications of phages onto the harvested foods. This mini-review primarily focuses on the last type of intervention, which has been gaining the most momentum recently. Indeed, the results of recent studies dealing with improving food safety, and several recent regulatory approvals of various commercial phage preparations developed for post-harvest food safety applications, strongly support the idea that lytic phages may provide a safe, environmentally-friendly, and effective approach for significantly reducing contamination of various foods with foodborne bacterial pathogens. However, some important technical and nontechnical problems may need to be addressed before phage biocontrol protocols can become an integral part of routine food safety intervention strategies implemented by food industries in the USA. PMID:23670852
Bavishi, Chirag; Ather, Sameer; Bambhroliya, Arvind; Jneid, Hani; Virani, Salim S; Bozkurt, Biykem; Deswal, Anita
2014-06-01
Hyponatremia in heart failure (HF) is an established predictor of adverse outcomes in hospitalized patients with reduced ejection fraction (EF). However, there is a paucity of data in ambulatory patients with HF with preserved ejection fraction (HFpEF). We examined the prevalence, risk factors, and long-term outcomes of hyponatremia (serum sodium ≤135 mEq/L) in ambulatory HFpEF and HF with reduced EF (HFrEF) in a national cohort of 8,862 veterans treated in Veterans Affairs clinics. Multivariable logistic regression models were used to identify factors associated with hyponatremia, and multivariable Cox proportional hazard models were used for analysis of outcomes. The cohort consisted of 6,185 patients with HFrEF and 2,704 patients with HFpEF with a 2-year follow-up. Hyponatremia was present in 13.8% and 12.9% of patients in HFrEF and HFpEF, respectively. Hyponatremia was independently associated with younger age, diabetes, lower systolic blood pressure, anemia, body mass index <30 kg/m^2, and spironolactone use, whereas African-American race and statins were inversely associated. In multivariate analysis, hyponatremia remained a significant predictor of all-cause mortality in both HFrEF (hazard ratio [HR] 1.26, 95% confidence interval [CI] 1.11 to 1.44, p <0.001) and HFpEF (HR 1.40, 95% CI 1.12 to 1.75, p = 0.004) and a significant predictor of all-cause hospitalization in patients with HFrEF (HR 1.18, 95% CI 1.07 to 1.31, p = 0.001) but not in HFpEF (HR 1.08, 95% CI 0.92 to 1.27, p = 0.33). In conclusion, hyponatremia is prevalent at a similar frequency of over 10% in ambulatory patients with HFpEF and HFrEF. Hyponatremia is an independent prognostic marker of mortality across the spectrum of patients with HFpEF and HFrEF. In contrast, it is an independent predictor for hospitalization in patients with HFrEF but not in patients with HFpEF. PMID:24837261
Upton, L. M.; Brock, P. M.; Churcher, T. S.; Ghani, A. C.; Gething, P. W.; Delves, M. J.; Sala, K. A.; Leroy, D.; Sinden, R. E.
2014-01-01
To achieve malaria elimination, we must employ interventions that reduce the exposure of human populations to infectious mosquitoes. To this end, numerous antimalarial drugs are under assessment in a variety of transmission-blocking assays, which fail to measure the single crucial criterion of a successful intervention, namely its impact on case incidence within a vertebrate population (reduction in reproductive number/effect size). Consequently, any reduction in new infections due to drug treatment (and how this may be influenced by differing transmission settings) is not currently examined, limiting the translation of any findings. We describe the use of a laboratory population model to assess how individual antimalarial drugs can impact the number of secondary Plasmodium berghei infections over a cycle of transmission. We examine the impact of multiple clinical and preclinical drugs on both insect and vertebrate populations at multiple transmission settings. Both primaquine (>6 mg/kg of body weight) and NITD609 (8.1 mg/kg) have significant impacts across multiple transmission settings, but artemether and lumefantrine (57 and 11.8 mg/kg), OZ439 (6.5 mg/kg), and primaquine (<1.25 mg/kg) demonstrated potent efficacy only at lower-transmission settings. While directly demonstrating the impact of antimalarial drug treatment on vertebrate populations, we additionally calculate the effect size for each treatment, allowing for head-to-head comparison of the potential impact of individual drugs within epidemiologically relevant settings, supporting their usage within elimination campaigns. PMID:25385107
Goudra, Basavana Gouda; Singh, Preet Mohinder; Penugonda, Lakshmi C; Speck, Rebecca M; Sinha, Ashish C
2014-01-01
Background: Providing anesthesia for gastrointestinal (GI) endoscopy procedures in morbidly obese patients is challenging for a variety of reasons. The negative impact of obesity on the respiratory system, combined with the need to share the upper airway and the necessity of preserving spontaneous ventilation, together add to the difficulty. Materials and Methods: This retrospective cohort study included patients with a body mass index (BMI) >40 kg/m2 who underwent out-patient GI endoscopy between September 2010 and February 2011. Patient data were analyzed for procedure, airway management technique, and hypoxemic and cardiovascular events. Results: A total of 119 patients met the inclusion criteria. Our innovative airway management technique resulted in a lower rate of intraoperative hypoxemic events than any published data available. The frequency of desaturation episodes showed a statistically significant relation to a previous history of obstructive sleep apnea (OSA). These desaturation episodes were statistically independent of increasing BMI. Conclusion: A pre-operative history of OSA, irrespective of the associated BMI, can potentially be used as a predictor of intra-procedural desaturation. With suitable modification of the anesthesia technique, it is possible to reduce the incidence of adverse respiratory events in morbidly obese patients undergoing GI endoscopy procedures, thereby avoiding the need for endotracheal intubation. PMID:24574597
Magnitude and significance of the higher-order reduced density matrix cumulants
NASA Astrophysics Data System (ADS)
Herbert, John M.
Using full configuration interaction wave functions for Be and LiH, in both minimal and extended basis sets, we examine the absolute magnitude and energetic significance of various contributions to the three-electron reduced density matrix (3-RDM) and its connected (size-consistent) component, the 3-RDM cumulant (3-RDMC). Minimal basis sets are shown to suppress the magnitude of the 3-RDMC in an artificial manner, whereas in extended basis sets, 3-RDMC matrix elements are often comparable in magnitude to the corresponding 3-RDM elements, even in cases where this result is not required by spin angular momentum coupling. Formal considerations suggest that these observations should generalize to higher-order p-RDMs and p-RDMCs (p > 3). This result is discussed within the context of electronic structure methods based on the contracted Schrödinger equation (CSE), as solution of the CSE relies on 3- and 4-RDM "reconstruction functionals" that neglect the 3-RDMC, the 4-RDMC, or both. Although the 3-RDMC is responsible for at most 0.2% of the total electronic energy in Be and LiH, it accounts for up to 70% of the correlation energy, raising questions regarding whether (and how) the CSE can offer a useful computational methodology.
Reduced-cost sparsity-exploiting algorithm for solving coupled-cluster equations.
Brabec, Jiri; Yang, Chao; Epifanovsky, Evgeny; Krylov, Anna I; Ng, Esmond
2016-05-01
We present an algorithm for reducing the computational work involved in coupled-cluster (CC) calculations by sparsifying the amplitude correction within a CC amplitude update procedure. We provide a theoretical justification for this approach, which is based on the convergence theory of inexact Newton iterations. We demonstrate by numerical examples that, in the simplest case of the CCD equations, we can sparsify the amplitude correction by setting, on average, roughly 90% of the nonzero elements to zero without a major effect on the convergence of the inexact Newton iterations. PMID:26804120
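The sparsified-correction idea can be illustrated outside the CC context. The sketch below is a hypothetical toy (a diagonal nonlinear system, not the CCD amplitude equations): each Newton correction is truncated to its largest-magnitude elements before the update, and the inexact Newton iteration still converges.

```python
import numpy as np

def sparsify(delta, keep_frac):
    """Keep only the largest-magnitude fraction of the correction; zero the rest."""
    k = max(1, int(np.ceil(keep_frac * delta.size)))
    thresh = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= thresh, delta, 0.0)

def sparsified_newton(f, jac, x0, keep_frac=0.3, tol=1e-10, max_iter=200):
    """Inexact Newton iteration whose correction is sparsified at each step."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        delta = np.linalg.solve(jac(x), -r)  # full Newton correction
        x = x + sparsify(delta, keep_frac)   # apply only its largest entries
    return x

# Toy diagonal system x_i**2 = a_i; the exact roots are sqrt(a_i).
a = np.linspace(1.0, 4.0, 20)
x = sparsified_newton(lambda v: v**2 - a,
                      lambda v: np.diag(2.0 * v),
                      np.ones_like(a))
```

Convergence survives because inexact Newton theory only requires the applied correction to reduce the residual sufficiently, not to equal the exact Newton step.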
Sulfide-driven autotrophic denitrification significantly reduces N2O emissions.
Yang, Weiming; Zhao, Qing; Lu, Hui; Ding, Zhi; Meng, Liao; Chen, Guang-Hao
2016-03-01
The Sulfate reduction-Autotrophic denitrification-Nitrification Integrated (SANI) process builds on anaerobic carbon conversion through biological sulfate reduction and autotrophic denitrification using the sulfide byproduct of the former reaction. This study confirmed additional decreases in N2O emissions during sulfide-driven autotrophic denitrification by investigating N2O reduction, accumulation, and emission at different sulfide/nitrate (S/N) mass ratios at pH 7 in a long-term laboratory-scale granular sludge autotrophic denitrification reactor. The N2O reduction rate was linearly proportional to the sulfide concentration, which confirmed that no sulfide inhibition of N2O reductase occurred. At S/N = 5.0 g-S/g-N, the rate achieved by sulfide-driven autotrophic denitrifying granular sludge (average granule size = 701 μm) was 27.7 mg-N/g-VSS/h (i.e., 2 and 4 times greater than the rates at 2.5 and 0.8 g-S/g-N, respectively). Sulfide thus stimulates rather than inhibits N2O reduction, regardless of the granule size of the sulfide-driven autotrophic denitrifying sludge. With the 701 μm granular sludge at S/N = 5.0 g-S/g-N, the accumulations of N2O, nitrite, and free nitrous acid (FNA) were 4.7%, 11.4%, and 4.2% of those at 3.0 g-S/g-N, respectively. The accumulation of FNA can inhibit N2O reduction and increase N2O accumulation during sulfide-driven autotrophic denitrification. In addition, the N2O gas emission level from the reactor significantly increased from 14.1 ± 0.5 ppmv (0.002% of the N load) to 3707.4 ± 36.7 ppmv (0.405% of the N load) as the S/N mass ratio in the influent decreased from 2.1 to 1.4 g-S/g-N over the course of the 120-day continuous monitoring period. Sulfide-driven autotrophic denitrification may therefore significantly reduce greenhouse gas emissions from biological nutrient removal when sulfur conversion processes are applied. PMID
Guo, Li; Xu, Yan; Xu, Zhengfu; Jiang, Jingfeng
2015-10-01
Obtaining accurate ultrasonically estimated displacements along both axial (parallel to the acoustic beam) and lateral (perpendicular to the beam) directions is an important task for various clinical elastography applications (e.g., modulus reconstruction and temperature imaging). In this study, a partial differential equation (PDE)-based regularization algorithm was proposed to enhance motion tracking accuracy. More specifically, the proposed PDE-based algorithm, utilizing two-dimensional (2D) displacement estimates from a conventional elastography system, attempted to iteratively reduce noise contained in the original displacement estimates by mathematical regularization. In this study, tissue incompressibility was the physical constraint used by the above-mentioned mathematical regularization. This proposed algorithm was tested using computer-simulated data, a tissue-mimicking phantom, and in vivo breast lesion data. Computer simulation results demonstrated that the method significantly improved the accuracy of lateral tracking (e.g., a factor of 17 at 0.5% compression). From in vivo breast lesion data investigated, we have found that, as compared with the conventional method, higher quality axial and lateral strain images (e.g., at least 78% improvements among the estimated contrast-to-noise ratios of lateral strain images) were obtained. Our initial results demonstrated that this conceptually and computationally simple method could be useful for improving the image quality of ultrasound elastography with current clinical equipment as a post-processing tool. PMID:25452434
Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1991-01-01
The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served (FCFS) sequencing establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain, with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
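The FCFS rule described above can be sketched as follows: order arrivals by estimated time of arrival, then push each landing time back to satisfy the pairwise separation required behind the preceding aircraft. The wake-separation values here are hypothetical placeholders, not the report's actual spacing criteria.

```python
# Hypothetical wake-separation minima in seconds, indexed (leader, trailer).
SEP = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
       ("large", "heavy"): 60, ("large", "large"): 70}

def fcfs_schedule(arrivals):
    """arrivals: list of (eta_seconds, weight_class) tuples.
    Returns (weight_class, scheduled_time) pairs in FCFS order."""
    order = sorted(arrivals)                 # fair order by estimated arrival
    sched = []
    for i, (eta, cls) in enumerate(order):
        if i == 0:
            t = eta
        else:
            prev_t = sched[-1][1]
            prev_cls = order[i - 1][1]
            t = max(eta, prev_t + SEP[(prev_cls, cls)])  # enforce separation
        sched.append((cls, t))
    return sched

sched = fcfs_schedule([(0, "heavy"), (30, "large"), (35, "large"), (400, "heavy")])
# The two trailing "large" aircraft are delayed by separation requirements,
# while the last aircraft arrives after a gap and lands at its ETA.
```

Gaps like the one before the final arrival are what the report's time-advance step exploits to reduce delay.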
NASA Astrophysics Data System (ADS)
Ritter, Axel; Muñoz-Carpena, Rafael
2013-02-01
Summary: Success in the use of computer models for simulating environmental variables and processes requires objective model calibration and verification procedures. Several methods for quantifying the goodness-of-fit of observations against model-calculated values have been proposed, but none of them is free of limitations, and their interpretation is often ambiguous. When a single indicator is used, it may lead to incorrect verification of the model. Instead, a combination of graphical results, absolute-value error statistics (i.e., root mean square error), and normalized goodness-of-fit statistics (i.e., Nash-Sutcliffe Efficiency coefficient, NSE) is currently recommended. Interpretation of NSE values is often subjective, and may be biased by the magnitude and number of data points, data outliers, and repeated data. The statistical significance of the performance statistics is generally ignored, yet considering it helps reduce subjectivity in the proper interpretation of model performance. In this work, approximated probability distributions for two common indicators (NSE and root mean square error) are derived with bootstrapping (block bootstrapping when dealing with time series), followed by bias-corrected and accelerated calculation of confidence intervals. Hypothesis testing of the indicators exceeding threshold values is proposed in a unified framework for statistically accepting or rejecting the model performance. It is illustrated how model performance is not linearly related with NSE, which is critical for its proper interpretation. Additionally, the sensitivity of the indicators to model bias, outliers, and repeated data is evaluated. The potential of the difference between root mean square error and mean absolute error for detecting outliers is explored, showing that this may be considered a necessary but not a sufficient condition of outlier presence. The usefulness of the approach for the evaluation of model performance is illustrated with case studies including those with
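A minimal version of the bootstrap idea: compute NSE on the full data, resample with replacement, and take percentile bounds on the resampled statistics. This is a plain percentile bootstrap for illustration only; the paper uses block bootstrapping for time series and bias-corrected and accelerated (BCa) intervals.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bootstrap_nse_ci(obs, sim, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for NSE (plain resampling)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rng = np.random.default_rng(seed)
    n = len(obs)
    stats = [nse(obs[idx], sim[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return nse(obs, sim), lo, hi

# Synthetic example: a near-perfect model of a noisy sinusoidal observation.
rng = np.random.default_rng(42)
sim = np.sin(np.linspace(0.0, 6.0, 200))
obs = sim + 0.05 * rng.standard_normal(200)
point, lo, hi = bootstrap_nse_ci(obs, sim)
```

The interval, rather than the point estimate alone, is what supports the hypothesis test of NSE exceeding a threshold.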
Code of Federal Regulations, 2012 CFR
2012-04-01
... accrual or that eliminates or significantly reduces an early retirement benefit or retirement-type subsidy... or reduces an early retirement benefit or retirement-type subsidy for purposes of determining whether... retirement-type subsidy is significant for purposes of section 4980F and section 204(h)? Q-9. When...
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error
Sahoo, Prasan Kumar; Hwang, I-Shyan
2011-01-01
Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible location information of the normal nodes in a collaborative manner for an outdoor environment with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
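With three collaborating beacons at known positions, a normal node's position follows from the range equations. A minimal noise-free 2-D trilateration sketch (the paper's probabilistic error analysis is not reproduced here): subtracting the circle equations pairwise removes the quadratic terms and leaves a small linear system.

```python
import numpy as np

def trilaterate(beacons, dists):
    """Estimate a 2-D position from three beacon positions and measured
    ranges by linearizing the circle equations (pairwise subtraction
    cancels the x**2 + y**2 terms)."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# A normal node at (3, 4) ranged from beacons at (0,0), (10,0), and (0,10).
est = trilaterate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                  [5.0, np.sqrt(65.0), np.sqrt(45.0)])
```

With noisy ranges the same linear system would be solved in a least-squares sense, which is where the paper's error distribution analysis enters.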
Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M.; Daniell, Tim J.
2012-01-01
The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the processes. There is an increasing understanding associated with many of these controls on flux through the nitrogen cycle in soil systems. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N2O production from soils. PMID:23264770
Montgomery, Christopher J.; Yang, Chongguan; Parkinson, Alan R.; Chen, J.-Y.
2006-01-01
A genetic optimization algorithm has been applied to the selection of quasi-steady-state (QSS) species in reduced chemical kinetic mechanisms. The algorithm seeks to minimize the error between reduced and detailed chemistry for simple reactor calculations approximating conditions of interest for a computational fluid dynamics simulation. The genetic algorithm does not guarantee that the global optimum will be found, but much greater accuracy can be obtained than by choosing QSS species through a simple kinetic criterion or by human trial and error. The algorithm is demonstrated for methane-air combustion over a range of temperatures and stoichiometries and for homogeneous charge compression ignition engine combustion. The results are in excellent agreement with those predicted by the baseline mechanism. A factor of two reduction in the number of species was obtained for a skeletal mechanism that had already been greatly reduced from the parent detailed mechanism.
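The selection problem can be sketched as a genetic search over fixed-size subsets of species. In this toy version the error function is a synthetic set-distance stand-in for the reduced-versus-detailed reactor-calculation error the authors minimize; the population sizes and operators are illustrative, not those of the paper.

```python
import random

def genetic_select(n_species, n_select, error_fn,
                   pop_size=40, generations=60, seed=1):
    """Toy GA: evolve which n_select of n_species species to treat as
    quasi-steady-state, minimizing error_fn over fixed-size subsets."""
    rng = random.Random(seed)

    def mutate(ind):
        # Swap one selected species for one unselected species.
        new = set(ind)
        new.add(rng.choice([i for i in range(n_species) if i not in new]))
        new.remove(rng.choice(sorted(ind)))
        return frozenset(new)

    pop = [frozenset(rng.sample(range(n_species), n_select))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error_fn)            # best (lowest error) first
        elite = pop[: pop_size // 2]      # elitism keeps the best found so far
        pop = elite + [mutate(rng.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=error_fn)

# Synthetic error: distance to a known "ideal" QSS selection.
target = {0, 3, 7, 9}
best = genetic_select(12, 4, lambda ind: len(set(ind) ^ target))
```

In the paper the error function would instead run simple reactor calculations with the candidate reduced mechanism and compare against detailed chemistry.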
Stevens, Andrew J.; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D.
2014-02-11
The use of high resolution imaging methods in the scanning transmission electron microscope (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example in the study of organic systems, in tomography and during in-situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high resolution STEM images. These experiments successively reduce the number of pixels in the image (thereby reducing the overall dose while maintaining the high resolution information) and show promising results for reconstructing images from this reduced set of randomly collected measurements. We show that this approach is valid for both atomic resolution images and nanometer resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these post acquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or alignment of the microscope itself.
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the USA: Upper Illinois River Basin and Central Colorado Assessment Project Area, and Southland region, NZ.
Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu
2016-01-01
Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is generally regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The small overlap between the two resulting gene signatures illustrates that AC and SCC are truly distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945
Does maintaining a bottle of adhesive without the lid significantly reduce the solvent content?
Santana, Márcia Luciana Carregosa; Sousa Júnior, José Aginaldo de; Leal, Pollyana Caldeira; Faria-e-Silva, André Luis
2014-01-01
This study aimed to evaluate the effect of maintaining a bottle of adhesive without its lid on the solvent loss of etch-and-rinse adhesive systems. Three 2-step etch-and-rinse adhesives with different solvents (acetone, ethanol, or butanol) were used in this study. Drops of each adhesive were placed on an analytical balance and the adhesive mass was recorded until equilibrium was achieved (no significant mass alteration over time). The solvent content of each adhesive and the evaporation rate of the solvents were measured (n=3). Two bottles of each adhesive were weighed. The bottles were maintained without their lids for 8 h in a stove at 37 ºC, after which the mass loss was measured. Based on the mass alteration of the drops, the acetone-based adhesive showed the highest solvent content (46.5%, CI 95%: 35.8-54.7) and evaporation rate (1.11 %/s, CI 95%: 0.63-1.60), whereas the ethanol-based adhesive had the lowest values (10.1%, CI 95%: 4.3-16.0; 0.03 %/s, CI 95%: 0.01-0.05). However, none of the adhesive bottles exhibited significant mass loss after sitting for 8 h without their lids (% of initial content: acetone - 96.5, CI 95%: 91.8-101.5; ethanol - 99.4, CI 95%: 98.4-100.4; and butanol - 99.3, CI 95%: 98.1-100.5). In conclusion, maintaining an adhesive bottle without its lid did not induce significant solvent loss, irrespective of the solvent content and evaporation rate. PMID:25590203
Inviting consumers to downsize fast-food portions significantly reduces calorie consumption.
Schwartz, Janet; Riis, Jason; Elbel, Brian; Ariely, Dan
2012-02-01
Policies that mandate calorie labeling in fast-food and chain restaurants have had little or no observable impact on calorie consumption to date. In three field experiments, we tested an alternative approach: activating consumers' self-control by having servers ask customers if they wanted to downsize portions of three starchy side dishes at a Chinese fast-food restaurant. We consistently found that 14-33 percent of customers accepted the downsizing offer, and they did so whether or not they were given a nominal twenty-five-cent discount. Overall, those who accepted smaller portions did not compensate by ordering more calories in their entrées, and the total calories served to them were, on average, reduced by more than 200. We also found that accepting the downsizing offer did not change the amount of uneaten food left at the end of the meal, so the calorie savings during purchasing translated into calorie savings during consumption. Labeling the calorie content of food during one of the experiments had no measurable impact on ordering behavior. If anything, the downsizing offer was less effective in changing customers' ordering patterns with the calorie labeling present. These findings highlight the potential importance of portion-control interventions that specifically activate consumers' self-control. PMID:22323171
Can remotely sensed meteorological data significantly contribute to reduce costs of tsetse surveys?
Hendrickx, G; Napala, A; Rogers, D; Bastiaensen, P; Slingenbergh, J
1999-01-01
A 0.125 degree raster or grid-based Geographic Information System with data on tsetse, trypanosomiasis, animal production, agriculture, and land use has recently been developed in Togo. This paper addresses the problem of generating tsetse distribution and abundance maps from remotely sensed data, using a restricted amount of field data. A discriminant analysis model is tested using contemporary tsetse data and remotely sensed, low-resolution data acquired from the National Oceanographic and Atmospheric Administration and Meteosat platforms. A split-sample technique is adopted in which a randomly selected part of the field-measured data (training set) serves to predict the other part (predicted set). The obtained results are then compared with the field-measured data per corresponding grid square. Depending on the size of the training set, the percentage of concordant predictions varies from 80 to 95 for distribution figures and from 63 to 74 for abundance. These results confirm the potential of satellite data and multivariate analysis for the prediction, not only of tsetse distribution, but more importantly of abundance. This opens up new avenues because satellite predictions and field data may be combined to strengthen or substitute one another and thus reduce the costs of field surveys. PMID:10224542
Cleanroom Maintenance Significantly Reduces Abundance but Not Diversity of Indoor Microbiomes
Mahnert, Alexander; Vaishampayan, Parag; Probst, Alexander J.; Auerbach, Anna; Moissl-Eichinger, Christine; Venkateswaran, Kasthuri; Berg, Gabriele
2015-01-01
Cleanrooms have been considered microbially-reduced environments and are used to protect human health and industrial product assembly. However, recent analyses have deciphered a rather broad diversity of microbes in cleanrooms, whose origin as well as physiological status has not been fully understood. Here, we examined the input of intact microbial cells from a surrounding built environment into a spacecraft assembly cleanroom by applying a molecular viability assay based on propidium monoazide (PMA). The controlled cleanroom (CCR) was characterized by ~6.2 × 10(3) 16S rRNA gene copies of intact bacterial cells per m2 of floor surface, which represented only 1% of the total community that could be captured via molecular assays without the viability marker. This was in contrast to the uncontrolled adjoining facility (UAF), which had 12 times more living bacteria. Regarding diversity measures retrieved from 16S rRNA Illumina-tag analyses, we observed, however, only a minor drop in the cleanroom facility, allowing the conclusion that the number but not the diversity of microbes is strongly affected by cleaning procedures. Network analyses allowed tracking of a substantial input of living microbes to the cleanroom and a potential enrichment of survival specialists like bacterial spore formers and archaeal halophiles and mesophiles. Moreover, the cleanroom harbored a unique community including 11 exclusive genera, e.g., Haloferax and Sporosarcina, which are herein suggested as indicators of cleanroom environments. In sum, our findings provide evidence that archaea are alive in cleanrooms and that cleaning efforts and cleanroom maintenance substantially decrease the number but not the diversity of indoor microbiomes. PMID:26273838
Rifampicin and rifapentine significantly reduce concentrations of bedaquiline, a new anti-TB drug
Svensson, Elin M.; Murray, Stephen; Karlsson, Mats O.; Dooley, Kelly E.
2015-01-01
Objectives Bedaquiline is the first drug of a new class approved for the treatment of TB in decades. Bedaquiline is metabolized by cytochrome P450 (CYP) 3A4 to a less-active M2 metabolite. Its terminal half-life is extremely long (5–6 months), complicating evaluations of drug–drug interactions. Rifampicin and rifapentine, two anti-TB drugs now being optimized to shorten TB treatment duration, are potent inducers of CYP3A4. This analysis aimed to predict the effect of repeated doses of rifampicin or rifapentine on the steady-state pharmacokinetics of bedaquiline and its M2 metabolite from single-dose data using a model-based approach. Methods Pharmacokinetic data for bedaquiline and M2 were obtained from a Phase I study involving 32 individuals each receiving two doses of bedaquiline, alone or together with multiple-dose rifampicin or rifapentine. Sampling was performed over 14 days following each bedaquiline dose. Pharmacokinetic analyses were performed using non-linear mixed-effects modelling. Models were used to simulate potential dose adjustments. Results Rifamycin co-administration increased bedaquiline clearance substantially: 4.78-fold [relative standard error (RSE) 9.10%] with rifampicin and 3.96-fold (RSE 5.00%) with rifapentine. Induction of M2 clearance was equally strong. Average steady-state concentrations of bedaquiline and M2 are predicted to decrease by 79% and 75% when given with rifampicin or rifapentine, respectively. Simulations indicated that increasing the bedaquiline dosage to mitigate the interaction would yield elevated M2 concentrations during the first treatment weeks. Conclusions Rifamycin antibiotics reduce bedaquiline concentrations substantially. In line with current treatment guidelines for drug-susceptible TB, concomitant use is not recommended, even with dose adjustment. PMID:25535219
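The reported steady-state decreases follow directly from the estimated clearance inductions, since average steady-state concentration scales inversely with clearance (Css = F·Dose/(CL·τ)). A quick arithmetic check of that relationship:

```python
# Css = F * Dose / (CL * tau) scales inversely with clearance, so a
# k-fold induction of CL cuts the average Css by a fraction 1 - 1/k.
for drug, fold in [("rifampicin", 4.78), ("rifapentine", 3.96)]:
    drop = (1.0 - 1.0 / fold) * 100.0
    print(f"{drug}: {drop:.0f}% decrease in average steady-state bedaquiline")
# -> 79% with rifampicin and 75% with rifapentine, matching the abstract.
```

This also shows why a proportional dose increase cannot fix the interaction cleanly: restoring bedaquiline Css while M2 clearance is equally induced still perturbs M2 exposure during the early treatment weeks.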
Nale, Janet Y.; Spencer, Janice; Hargreaves, Katherine R.; Buckley, Anthony M.; Trzepiński, Przemysław
2015-01-01
The microbiome dysbiosis caused by antibiotic treatment has been associated with both susceptibility to and relapse of Clostridium difficile infection (CDI). Bacteriophage (phage) therapy offers target specificity and dose amplification in situ, but few studies have focused on its use in CDI treatment. This mainly reflects the lack of strictly virulent phages that target this pathogen. While it is widely accepted that temperate phages are unsuitable for therapeutic purposes due to their transduction potential, analysis of seven C. difficile phages confirmed that this impact could be curtailed by the application of multiple phage types. Here, host range analysis of six myoviruses and one siphovirus was conducted on 80 strains representing 21 major epidemic and clinically severe ribotypes. The phages had complementary coverage, lysing 18 and 62 of the ribotypes and strains tested, respectively. Single-phage treatments of ribotype 076, 014/020, and 027 strains showed an initial reduction in the bacterial load followed by the emergence of phage-resistant colonies. However, these colonies remained susceptible to infection with an unrelated phage. In contrast, specific phage combinations caused the complete lysis of C. difficile in vitro and prevented the appearance of resistant/lysogenic clones. Using a hamster model, the oral delivery of optimized phage combinations resulted in reduced C. difficile colonization at 36 h postinfection. Interestingly, free phages were recovered from the bowel at this time. In a challenge model of the disease, phage treatment delayed the onset of symptoms by 33 h compared to the time of onset of symptoms in untreated animals. These data demonstrate the therapeutic potential of phage combinations to treat CDI. PMID:26643348
Stevens, Andrew; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D
2014-02-01
The use of high-resolution imaging methods in scanning transmission electron microscopy (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example, in the study of organic systems, in tomography and during in situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high-resolution STEM images. These computational algorithms have been applied to a set of images with a reduced number of sampled pixels in the image. For a reduction in the number of pixels down to 5% of the original image, the algorithms can recover the original image from the reduced data set. We show that this approach is valid for both atomic-resolution images and nanometer-resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these postacquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or the alignment of the microscope itself. PMID:24151325
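The recovery principle can be illustrated with a much simpler stand-in for the authors' Bayesian dictionary learning: the sketch below reconstructs a 1-D signal that is sparse in a fixed DCT basis from a random subset of its samples, using orthogonal matching pursuit. The basis, signal, and sampling fraction are all illustrative choices, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal DCT-II basis: column k is a cosine atom sampled at n points.
n = 128
k = np.arange(n)
D = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
D[:, 0] /= np.sqrt(2)
D *= np.sqrt(2.0 / n)                     # now D.T @ D is the identity

# Ground truth: a signal that is 4-sparse in the DCT basis.
x_true = np.zeros(n)
x_true[[3, 17, 40, 90]] = [1.0, -0.7, 0.5, 0.3]
signal = D @ x_true

# "Acquire" only half of the samples, in analogy to a reduced-pixel scan.
mask = rng.choice(n, size=n // 2, replace=False)
A = D[mask, :]                            # sensing matrix: the kept rows of D
y = signal[mask]

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick atoms, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return support, coef

support, coef = omp(A, y, sparsity=6)     # budget slightly above the true sparsity
recovered = D[:, support] @ coef          # re-expand on the full sample grid
print(float(np.max(np.abs(recovered - signal))))
```

With half the samples kept, the sparse signal is recovered essentially exactly; the paper's dictionary-learning approach plays the role of the fixed DCT basis here, but learns the atoms from the data.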
Significantly reduced thermal diffusivity of free-standing two-layer graphene in graphene foam.
Lin, Huan; Xu, Shen; Wang, Xinwei; Mei, Ning
2013-10-18
We report on a thermal diffusivity study of suspended graphene foam (GF) using the transient electro-thermal technique. Our Raman study confirms that the GF is composed of two-layer graphene. By measuring GF samples of different lengths, we are able to exclude the radiation effect. Using Schuetz's model, the intrinsic thermal diffusivity of the free-standing two-layer graphene is determined with high accuracy without requiring knowledge of the porosity of the GF. The intrinsic thermal diffusivity of the two-layer graphene is determined to be 1.16–2.22 × 10⁻⁴ m² s⁻¹. The corresponding intrinsic thermal conductivity is 182–349 W m⁻¹ K⁻¹, about one order of magnitude lower than values reported for single-layer graphene. Extensive surface impurity defects, wrinkles, and rough edges are observed under a scanning electron microscope for the studied GF. These structural defects induce substantial phonon scattering and explain the observed significant thermal conductivity reduction. Our thermal diffusivity characterization of GF provides an advanced way to probe the thermal transport capacity of free-standing graphene with high accuracy and ease of experimental implementation. PMID:24060813
Wang, Yongbo; Gao, Xiang; Pedram, Pardis; Shahidi, Mariam; Du, Jianling; Yi, Yanqing; Gulliver, Wayne; Zhang, Hongwei; Sun, Guang
2016-01-01
Selenium (Se) is a trace element which plays an important role in adipocyte hypertrophy and adipogenesis. Some studies suggest that variations in serum Se may be associated with obesity. However, there are few studies examining the relationship between dietary Se and obesity, and findings are inconsistent. We aimed to investigate the association between dietary Se intake and a panel of obesity measurements with systematic control of major confounding factors. A total of 3214 subjects participated in the study. Dietary Se intake was determined from the Willett food frequency questionnaire. Body composition was measured using dual-energy X-ray absorptiometry. Obese men and women had the lowest dietary Se intake, being 24% to 31% lower than corresponding normal weight men and women, classified by both BMI and body fat percentage. Moreover, subjects with the highest dietary Se intake had the lowest BMI, waist circumference, and trunk, android, gynoid and total body fat percentages, with a clear dose-dependent inverse relationship observed in both gender groups. Furthermore, significant negative associations discovered between dietary Se intake and obesity measurements were independent of age, total dietary calorie intake, physical activity, smoking, alcohol, medication, and menopausal status. Dietary Se intake alone may account for 9%–27% of the observed variations in body fat percentage. The findings from this study strongly suggest that high dietary Se intake is associated with a beneficial body composition profile. PMID:26742059
Ashrafian, Hutan; Toma, Tania; Harling, Leanne; Kerr, Karen; Athanasiou, Thanos; Darzi, Ara
2014-09-01
The global epidemic of obesity continues to escalate. Obesity accounts for an increasing proportion of the international socioeconomic burden of noncommunicable disease. Online social networking services provide an effective medium through which information may be exchanged between obese and overweight patients and their health care providers, potentially contributing to superior weight-loss outcomes. We performed a systematic review and meta-analysis to assess the role of these services in modifying body mass index (BMI). Our analysis of twelve studies found that interventions using social networking services produced a modest but significant 0.64 percent reduction in BMI from baseline for the 941 people who participated in the studies' interventions. We recommend that social networking services that target obesity should be the subject of further clinical trials. Additionally, we recommend that policy makers adopt reforms that promote the use of anti-obesity social networking services, facilitate multistakeholder partnerships in such services, and create a supportive environment to confront obesity and its associated noncommunicable diseases. PMID:25201670
Colchicine Significantly Reduces Incident Cancer in Gout Male Patients: A 12-Year Cohort Study.
Kuo, Ming-Chun; Chang, Shun-Jen; Hsieh, Ming-Chia
2015-12-01
Patients with gout are more likely to develop most cancers than subjects without gout. Colchicine has been used for the treatment and prevention of gouty arthritis and has been reported to have an anticancer effect in vitro. However, to date no study has evaluated the relationship between colchicine use and incident cancers in patients with gout. This study enrolled male patients with gout identified in Taiwan's National Health Insurance Database for the years 1998 to 2011. Each gout patient was matched with 4 male controls by age and by month and year of first diagnosis, and was followed up until 2011. The study excluded those who were diagnosed with diabetes or any type of cancer within the year following enrollment. We calculated hazard ratios (HRs), age-adjusted standardized incidence ratios, and incidence per 1000 person-years to evaluate cancer risk. A total of 24,050 male patients with gout and 76,129 male nongout controls were included. Patients with gout had a higher rate of incident all-cause cancers than controls (6.68% vs 6.43%, P = 0.006). A total of 13,679 patients with gout were defined as ever-users of colchicine and 10,371 patients with gout were defined as never-users of colchicine. Ever-users of colchicine had a significantly lower HR of incident all-cause cancers than never-users after adjustment for age (HR = 0.85, 95% CI = 0.77-0.94; P = 0.001). In conclusion, colchicine use was associated with a decreased risk of incident all-cause cancers in male Taiwanese patients with gout. PMID:26683907
Thyroid function appears to be significantly reduced in Space-borne MDS mice
NASA Astrophysics Data System (ADS)
Saverio Ambesi-Impiombato, Francesco; Curcio, Francesco; Fontanini, Elisabetta; Perrella, Giuseppina; Spelat, Renza; Zambito, Anna Maria; Damaskopoulou, Eleni; Peverini, Manola; Albi, Elisabetta
It is known that prolonged space flights induce changes in the human cardiovascular, musculoskeletal and nervous systems, whose function is regulated by the thyroid gland, but, until now, no data were reported about thyroid damage during space missions. We have demonstrated in vitro that, during space missions (Italian Soyuz Mission "ENEIDE" in 2005, Shuttle STS-120 "ESPERIA" in 2007), thyroid cells cultured in vitro did not respond to thyroid stimulating hormone (TSH) treatment; they appeared healthy and alive, despite being in a pro-apoptotic state characterised by a variation of sphingomyelin metabolism and a consequent increase in ceramide content. The insensitivity to TSH was largely due to a rearrangement of specific cell membrane microdomains, acting as platforms for the TSH receptor (TEXUS-44 mission in 2008). To study whether these effects were also present in vivo, as part of the Mouse Drawer System (MDS) Tissue Sharing Program, we performed experiments in mice maintained onboard the International Space Station during the long-duration (90 days) exploration mission STS-129. After return to Earth, the thyroids isolated from the 3 animals were in part immediately frozen to study the morphological modification in space and in part immediately used to study the effect of TSH treatment. For this purpose small fragments of tissue were treated with 10⁻⁷ or 10⁻⁸ M TSH for 1 hour, using untreated fragments as controls. Then the fragments were fixed with absolute ethanol for 10 min at room temperature and centrifuged for 20 min at 3000 × g. The supernatants were used for cAMP analysis whereas the pellets were used for protein determination and for immunoblotting analysis of TSH receptor, sphingomyelinase and sphingomyelin synthase. The results showed a modification of the thyroid structure, and the values of cAMP production after treatment with 10⁻⁷ M TSH for 1 hour were significantly lower than those obtained in Earth's gravity. The treatment with TSH
Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David
2013-08-01
Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r⁴), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T). PMID:23927246
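The hypercontraction idea can be made concrete with a toy example: a 4-index tensor is written as V[p,q,r,s] = Σ_{P,Q} X[p,P]·X[q,P]·Z[P,Q]·X[r,Q]·X[s,Q], and contractions with it can then be carried out factor by factor, without ever forming the full O(r⁴) tensor. The random factors and tiny dimensions below are illustrative, not a real ERI decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
norb, naux = 6, 10                 # orbital and auxiliary dimensions (toy sizes)

# THC factors: V[p,q,r,s] = sum_{P,Q} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q]
X = rng.standard_normal((norb, naux))
Z = rng.standard_normal((naux, naux))

# Dense 4-index tensor, O(r^4) storage -- built here only to check the result.
V = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

# Contract V with a 2-index quantity T: E = sum_{pqrs} V[pqrs] T[pq] T[rs].
T = rng.standard_normal((norb, norb))
E_dense = np.einsum('pqrs,pq,rs->', V, T, T)

# Same contraction using only the factors, never forming V:
# a[P] = sum_{pq} X[p,P] X[q,P] T[p,q]   -- O(r^2 * naux) work
a = np.einsum('pP,qP,pq->P', X, X, T)
E_thc = a @ Z @ a                  # then only O(naux^2) work

print(abs(E_dense - E_thc))        # the two routes agree
```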
Using Dynamic Programming and Genetic Algorithms to Reduce Erosion Risks From Forest Roads
NASA Astrophysics Data System (ADS)
Madej, M.; Eschenbach, E.; Teasley, R.; Diaz, C.; Wartella, J.; Simi, J.
2002-12-01
Many anadromous fisheries streams in the Pacific Northwest have been damaged by various land use activities, including timber harvest and road construction. Unpaved forest roads can cause erosion and downstream sedimentation damage in anadromous fish-bearing streams. Although road decommissioning and road upgrading activities have been conducted on many of these roads, these activities have usually been implemented and evaluated on a site-specific basis without the benefit of a watershed perspective. Land managers still struggle with designing the most effective road treatment plan to minimize erosion while keeping costs reasonable across a large land base. Trade-offs between costs of different levels of treatment and the net effect on reducing sediment risks to streams need to be quantified. For example, which problems should be treated first, and by what treatment method? Is it better to fix one large problem or 100 small problems? If sediment reduction to anadromous fish-bearing streams is the desired outcome of road treatment activities, a more rigorous evaluation of risks and optimization of treatments is needed. Two approaches, Dynamic Programming (DP) and Genetic Algorithms (GA), were successfully used to determine the most effective treatment levels for roads and stream crossings in a pilot study basin with approximately 200 road segments and stream crossings and in an actual watershed with approximately 600 road segments and crossings. The optimization models determine the treatment levels for roads and crossings that maximize the total sediment saved within a watershed while maintaining the total treatment cost within the specified budget. The optimization models import GIS data on roads and crossings and export the optimal treatment level for each road and crossing to the GIS watershed model.
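The budget-constrained treatment selection described above can be sketched as a multiple-choice knapsack dynamic program: each road segment or crossing receives exactly one treatment level (including "no treatment"), and the DP maximizes total sediment saved within the budget. The segments, costs, and sediment values below are invented for illustration, not taken from the study watersheds:

```python
# Each road segment has candidate treatment levels: (cost, sediment_saved).
# Level 0 = no treatment. Costs are in integer units to keep the DP table small.
segments = [
    [(0, 0.0), (4, 12.0), (9, 20.0)],   # segment A: none / upgrade / decommission
    [(0, 0.0), (3, 8.0),  (7, 15.0)],   # segment B
    [(0, 0.0), (5, 9.0),  (11, 25.0)],  # segment C
]
budget = 15

# dp[b] = max sediment saved using at most b units of budget.
dp = [0.0] * (budget + 1)
choice = [[0] * (budget + 1) for _ in segments]
for i, levels in enumerate(segments):
    new = [0.0] * (budget + 1)
    for b in range(budget + 1):
        best, best_l = -1.0, 0
        for l, (cost, saved) in enumerate(levels):
            if cost <= b and dp[b - cost] + saved > best:
                best, best_l = dp[b - cost] + saved, l
        new[b], choice[i][b] = best, best_l
    dp = new

# Backtrack the chosen treatment level for each segment.
plan, b = [], budget
for i in range(len(segments) - 1, -1, -1):
    l = choice[i][b]
    plan.append(l)
    b -= segments[i][l][0]
plan.reverse()
print(dp[budget], plan)   # best total sediment saved and per-segment levels
```

For these toy numbers the optimum is to upgrade segment A and decommission segment C, leaving B untreated: one large fix can beat several small ones, which is exactly the trade-off the optimization quantifies.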
Zheng, Xing; Du, Xue-Lian; Jiang, Tao
2015-01-01
Objective: Previous studies investigating the relationship between reduced E-cadherin expression and the prognosis of endometrial cancer have been ambiguous and conflicting. Therefore, the aim of the present study was to evaluate the relationship between reduced expression of E-cadherin and endometrial cancer using a meta-analysis approach. Method: After PubMed and Embase were systematically searched, 8 studies were included in the final meta-analysis. After the data had been extracted, the pooled odds ratio (OR) and hazard ratio (HR) were calculated in STATA with random- or fixed-effect models depending on heterogeneity. The publication bias of the included studies was tested by Begg's funnel plot and Egger's test. Results: The pooled data showed that reduced expression of E-cadherin was significantly associated with overall survival (OS), HR=2.42, 95% CI: 1.50-3.89. Clinical parameters such as lymph node metastasis (LNM), myometrial invasion (MI), International Federation of Gynecology and Obstetrics (FIGO) stage, histological type, and pathological type were also significantly associated with reduced expression of E-cadherin. The publication bias analysis showed no significant bias. Conclusion: Endometrial cancer patients with reduced expression of E-cadherin may have a poorer prognosis than those with normal or higher expression of E-cadherin. PMID:26770483
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 17 2011-04-01 2011-04-01 false Notice requirements for certain pension plan... (CONTINUED) PENSION EXCISE TAXES § 54.4980F-1 Notice requirements for certain pension plan amendments... a plan amendment of an applicable pension plan that significantly reduces the rate of future...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 17 2013-04-01 2013-04-01 false Notice requirements for certain pension plan amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) MISCELLANEOUS EXCISE TAXES (CONTINUED) PENSION EXCISE TAXES §...
Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine
2016-01-01
Background Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools including maximal intensity projection (MIP) and minimal intensity projection (mIP) remains unknown. Purpose To evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced-dose computed tomography (RD-CT) chest acquisitions. Material and Methods Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. Results The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly represented by distortions and stair-step artifacts. Conclusion The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.
An algorithm for reducing storage requirements in computer calculation of chemical equilibria.
Tripathi, V S
1983-01-01
An algorithm for storage of stoichiometric coefficients of the possible complexes and solids in a multi-component system of metals and ligands is described, along with FORTRAN code for its implementation. The proposed algorithm results in considerable saving in storage over the conventional use of a two-dimensional array. The saving in storage is especially useful for microcomputers, and for very large problems such as those encountered in geochemical calculations. PMID:18963319
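The storage saving comes from keeping only the nonzero stoichiometric coefficients in packed parallel arrays, rather than a full species-by-component 2-D array. A sketch of that idea in the style of compressed sparse row storage (the paper's implementation is in FORTRAN; the species and coefficients below are illustrative):

```python
# Packed storage of a stoichiometric matrix: for each species keep only the
# indices of the components it contains and their coefficients.
components = ["Cu2+", "NH3", "OH-"]        # illustrative component list

# Dense view: one row per species, one column per component.
dense = [
    [1, 4, 0],   # e.g. Cu(NH3)4^2+
    [1, 0, 2],   # e.g. Cu(OH)2
    [0, 1, 1],   # e.g. NH4OH (illustrative)
]

# Packed view: parallel arrays of component indices and coefficients, with a
# row-pointer array marking where each species' entries begin.
ptr, idx, coef = [0], [], []
for row in dense:
    for j, c in enumerate(row):
        if c != 0:
            idx.append(j)
            coef.append(c)
    ptr.append(len(idx))

def coefficient(species, component):
    """Look up one coefficient from the packed arrays."""
    for k in range(ptr[species], ptr[species + 1]):
        if idx[k] == component:
            return coef[k]
    return 0

# Storage drops from len(dense) * len(components) numbers to len(coef)
# nonzero coefficients plus the two small index arrays.
print(coefficient(0, 1), coefficient(1, 2), coefficient(2, 0))
```

For geochemical problems with hundreds of possible complexes but only a few components per complex, the packed arrays stay small even as the dense matrix grows quadratically.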
Hromadka, T.V., II; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Angus, Simon D.; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumor cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17–18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost
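The GA search over inter-fraction gap vectors can be sketched with a deliberately simple surrogate in place of the spheroid simulator: a linear-quadratic kill per fraction plus exponential regrowth between fractions. All parameter values are illustrative, not fitted to EMT6/Ro data, and in this toy model tighter spacing is strictly better, so the GA should drive the gaps toward their lower bound:

```python
import math
import random

random.seed(42)

# Toy linear-quadratic (LQ) response with exponential regrowth between
# fractions; a stand-in for the paper's high-fidelity spheroid simulation.
ALPHA, BETA, DOSE = 0.3, 0.03, 2.0        # LQ parameters and dose per fraction (Gy)
LAMBDA = 0.01                             # regrowth rate per hour
N0, N_FRACTIONS = 1e6, 10

def final_cell_count(gaps):
    """Cells left after N_FRACTIONS doses separated by the given gaps (hours)."""
    surviving = math.exp(-(ALPHA * DOSE + BETA * DOSE ** 2))
    n = N0
    for gap in gaps:
        n *= surviving                    # kill from this fraction
        n *= math.exp(LAMBDA * gap)       # regrowth until the next fraction
    return n * surviving                  # final fraction, nothing after it

def mutate(gaps):
    g = list(gaps)
    g[random.randrange(len(g))] = random.randint(10, 23)   # gaps limited to 10-23 h
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# GA: selection among the better half, one-point crossover, point mutation,
# and elitism so the best protocol found so far is never lost.
pop = [[random.randint(10, 23) for _ in range(N_FRACTIONS - 1)] for _ in range(20)]
for _ in range(60):
    pop.sort(key=final_cell_count)        # ascending: fewer cells is better
    nxt = [pop[0]]                        # elitism
    while len(nxt) < len(pop):
        a, b = random.sample(pop[:10], 2)
        nxt.append(mutate(crossover(a, b)))
    pop = nxt

best = min(pop, key=final_cell_count)
print(best, final_cell_count(best))
```

Swapping `final_cell_count` for a cell-phase-aware simulator is what lets the real search discover non-obvious structure such as the 17–18 h periodicity reported above.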
Cardenas, Erick; Leigh, Mary Beth; Marsh, Terence; Tiedje, James M.; Wu, Wei-min; Luo, Jian; Ginder-Vogel, Matthew; Kitanidis, Peter K.; Criddle, Craig; Carley, Jack M; Carroll, Sue L; Gentry, Terry J; Watson, David B; Gu, Baohua; Jardine, Philip M; Zhou, Jizhong
2010-10-01
Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test if bacteria specifically responded to ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remediation was achieved with a hydraulic flow control consisting of an inner loop, where ethanol was injected, and an outer loop for flow-field protection. This strategy reduced uranium concentrations in groundwater to levels below 0.126 μM and created geochemical gradients in electron donors from the inner-loop injection well toward the outer loop and downgradient flow path. Our analysis with 15 sediment samples from the entire test area found significant indicator species that showed a high degree of adaptation to the three different hydrochemical-created conditions. Castellaniella and Rhodanobacter characterized areas with low pH, heavy metals, and low bioactivity, while sulfate-, Fe(III)-, and U(VI)-reducing bacteria (Desulfovibrio, Anaeromyxobacter, and Desulfosporosinus) were indicators of areas where U(VI) reduction occurred. The abundance of these bacteria, as well as the Fe(III) and U(VI) reducer Geobacter, correlated with the hydraulic connectivity to the substrate injection site, suggesting that the selected populations were a direct response to electron donor addition by the groundwater flow path. A false-discovery-rate approach was implemented to discard false-positive results by chance, given the large amount of data compared. PMID:20729318
NASA Astrophysics Data System (ADS)
Chen, Peng; Quarteroni, Alfio
2015-10-01
In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon, besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDEs), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
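The reduced basis ingredient can be illustrated with a minimal greedy construction: repeatedly add the parameter whose solution is worst approximated by the current basis. The sketch below uses a toy parameterized profile in place of an expensive PDE solve, and the true projection error in place of the cheap a posteriori error estimator that practical (weighted) RB methods rely on:

```python
import numpy as np

# Parameterized "solution" u(mu): a toy boundary-layer profile on a 1-D grid,
# standing in for an expensive PDE solve.
x = np.linspace(0.0, 1.0, 200)

def solve(mu):
    return np.exp(-x / mu)

train = np.linspace(0.05, 1.0, 40)     # training set of parameter values

def projection_error(u, Q):
    """Norm of the part of u not captured by the orthonormal basis Q."""
    if Q.shape[1] == 0:
        return float(np.linalg.norm(u))
    return float(np.linalg.norm(u - Q @ (Q.T @ u)))

# Greedy reduced-basis construction: at each step, add the snapshot that is
# currently worst approximated, then re-orthonormalize.
Q = np.zeros((x.size, 0))
errors = []
for _ in range(6):
    errs = [projection_error(solve(mu), Q) for mu in train]
    worst = train[int(np.argmax(errs))]
    errors.append(max(errs))
    u = solve(worst)
    u = u - Q @ (Q.T @ u)              # Gram-Schmidt against the current basis
    Q = np.column_stack([Q, u / np.linalg.norm(u)])

print(errors)  # worst-case training error shrinks as the basis grows
```

Once built, evaluating a new parameter only requires work in the small reduced space, which is what makes many-query UQ loops affordable.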
Zeng, Yi; Chen, Huashuai; Ni, Ting; Ruan, Rongping; Nie, Chao; Liu, Xiaomin; Feng, Lei; Zhang, Fengyu; Lu, Jiehua; Li, Jianxin; Li, Yang; Tao, Wei; Gregory, Simon G; Gottschalk, William; Lutz, Michael W; Land, Kenneth C; Yashin, Anatoli; Tan, Qihua; Yang, Ze; Bolund, Lars; Ming, Qi; Yang, Huanming; Min, Junxia; Willcox, D Craig; Willcox, Bradley J; Gu, Jun; Hauser, Elizabeth; Tian, Xiao-Li; Vaupel, James W
2016-06-01
On the basis of the genotypic/phenotypic data from Chinese Longitudinal Healthy Longevity Survey (CLHLS) and Cox proportional hazard model, the present study demonstrates that interactions between carrying FOXO1A-209 genotypes and tea drinking are significantly associated with lower risk of mortality at advanced ages. Such a significant association is replicated in two independent Han Chinese CLHLS cohorts (p = 0.028-0.048 in the discovery and replication cohorts, and p = 0.003-0.016 in the combined dataset). We found the associations between tea drinking and reduced mortality are much stronger among carriers of the FOXO1A-209 genotype compared to non-carriers, and drinking tea is associated with a reversal of the negative effects of carrying FOXO1A-209 minor alleles, that is, from a substantially increased mortality risk to substantially reduced mortality risk at advanced ages. The impacts are considerably stronger among those who carry two copies of the FOXO1A minor allele than those who carry one copy. On the basis of previously reported experiments on human cell models concerning FOXO1A-by-tea-compounds interactions, we speculate that results in the present study indicate that tea drinking may inhibit FOXO1A-209 gene expression and its biological functions, which reduces the negative impacts of FOXO1A-209 gene on longevity (as reported in the literature) and offers protection against mortality risk at oldest-old ages. Our empirical findings imply that the health outcomes of particular nutritional interventions, including tea drinking, may, in part, depend upon individual genetic profiles, and the research on the effects of nutrigenomics interactions could potentially be useful for rejuvenation therapies in the clinic or associated healthy aging intervention programs. PMID:26414954
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2015-12-01
We develop an experimental design algorithm to select locations for a network of observation wells that provide the maximum robust information about unknown hydraulic conductivity in a confined, anisotropic aquifer. Since the information that a design provides is dependent on an aquifer's hydraulic conductivity, a robust design is one that provides the maximum information in the worst-case scenario. The design can be formulated as a max-min optimization problem. The problem is generally non-convex, non-differentiable, and contains integer variables. We use a Genetic Algorithm (GA) to perform the combinatorial search. We employ proper orthogonal decomposition (POD) to reduce the dimension of the groundwater model, thereby reducing the computational burden posed by employing a GA. The GA algorithm exhaustively searches for the robust design across a set of hydraulic conductivities and finds an approximate design (called the High Frequency Observation Well Design) through a Monte Carlo-type search. The results from a small-scale 1-D test case validate the proposed methodology. We then apply the methodology to a realistically-scaled 2-D test case.
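The max-min structure of the robust design problem can be shown on a toy instance small enough for exhaustive search (the paper uses a GA plus POD-reduced model runs because realistic design spaces are combinatorially large). The well locations, conductivity scenarios, and "information" model below are all invented for illustration:

```python
import itertools
import math

# Candidate observation-well locations along a 1-D aquifer (illustrative).
wells = [0.1, 0.25, 0.4, 0.55, 0.7, 0.9]

# A handful of hydraulic-conductivity scenarios (illustrative units).
scenarios = [0.5, 1.0, 2.0]

def information(design, k):
    """Toy information measure: sensitivity peaks mid-aquifer, shrinks with k."""
    return sum(math.sin(math.pi * x) ** 2 / k for x in design)

# Robust (max-min) design: the pair of wells whose worst-case information
# across all conductivity scenarios is largest.
best_design, best_worst = None, -1.0
for design in itertools.combinations(wells, 2):
    worst = min(information(design, k) for k in scenarios)
    if worst > best_worst:
        best_design, best_worst = design, worst

print(best_design, round(best_worst, 4))
```

Because the worst case here is always the highest-conductivity scenario, the robust choice is simply the two most sensitive locations; with realistic groundwater models the worst-case scenario changes with the design, which is why the inner minimization must be evaluated per candidate.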
Yu, Chunhao; Wen, Xiao-Dong; Zhang, Zhiyu; Zhang, Chun-Feng; Wu, Xiaohui; He, Xin; Liao, Yang; Wu, Ningning; Wang, Chong-Zhi; Du, Wei; He, Tong-Chuan; Yuan, Chun-Su
2015-01-01
Background Colorectal cancer (CRC) is a leading cause of death worldwide. Chronic gut inflammation is recognized as a risk factor for tumor development, including CRC. American ginseng is a very commonly used ginseng species in the West. Methods A genetically engineered ApcMin/+ mouse model was used in this study. We analyzed the saponin composition of American ginseng used in this project, and evaluated its effects on the progression of high-fat-diet-enhanced CRC carcinogenesis. Results After oral ginseng administration (10–20 mg/kg/d for up to 32 wk), experimental data showed that, compared with the untreated mice, ginseng very significantly reduced tumor initiation and progression in both the small intestine (including the proximal end, middle end, and distal end) and the colon (all p < 0.01). This tumor number reduction was more obvious in those mice treated with a low dose of ginseng. The tumor multiplicity data were supported by body weight changes and gut tissue histology examinations. In addition, quantitative real-time polymerase chain reaction analysis showed that compared with the untreated group, ginseng very significantly reduced the gene expression of inflammatory cytokines, including interleukin-1α (IL-1α), IL-1β, IL-6, tumor necrosis factor-α, granulocyte-colony stimulating factor, and granulocyte-macrophage colony-stimulating factor in both the small intestine and the colon (all p < 0.01). Conclusion Further studies are needed to link our observed effects to the actions of the gut microbiome in converting the parent ginsenosides to bioactive ginseng metabolites. Our data suggest that American ginseng may have potential value in CRC chemoprevention. PMID:26199554
Perera, Meenu N; Abuladze, Tamar; Li, Manrong; Woolston, Joelle; Sulakvelidze, Alexander
2015-12-01
ListShield™, a commercially available bacteriophage cocktail that specifically targets Listeria monocytogenes, was evaluated as a bio-control agent for L. monocytogenes in various Ready-To-Eat foods. ListShield™ treatment of experimentally contaminated lettuce, cheese, smoked salmon, and frozen entrées significantly reduced (p < 0.05) L. monocytogenes contamination by 91% (1.1 log), 82% (0.7 log), 90% (1.0 log), and 99% (2.2 log), respectively. ListShield™ application, alone or combined with an antioxidant/anti-browning solution, resulted in a statistically significant (p < 0.001) 93% (1.1 log) reduction of L. monocytogenes contamination on apple slices after 24 h at 4 °C. Treatment of smoked salmon from a commercial processing facility with ListShield™ eliminated L. monocytogenes (no detectable L. monocytogenes) in both the naturally contaminated and experimentally contaminated salmon fillets. The organoleptic quality of foods was not affected by application of ListShield™, as no differences in the color, taste, or appearance were detectable. Bio-control of L. monocytogenes with lytic bacteriophage preparations such as ListShield™ can offer an environmentally-friendly, green approach for reducing the risk of listeriosis associated with the consumption of various foods that may be contaminated with L. monocytogenes. PMID:26338115
Hassan, Femeena; Geethalakshmi, V; Jeeva, J Charles; Babu, M Remya
2013-02-01
The combined effect of lime and drying on bacteria of public health significance in edible oyster (Crassostrea madrasensis) from the Munambam coastal belt (Kerala, India) was studied (without depuration). Samples were examined for Total Plate Count (TPC), Staphylococcus aureus (hygiene indicator), total coliforms, faecal coliforms, Escherichia coli (faecal indicator), faecal streptococci (faecal indicator), Salmonella, Vibrio cholerae and Listeria monocytogenes. Although the fresh oyster meat did not conform to the specifications laid down by the National Shellfish Sanitation Programme (NSSP), after treatment with lime, with and without drying, it showed a significant reduction in counts and met the required standards. The prevalence of faecal indicators in the fresh samples indicated faecal pollution in the area. The isolation of the potentially pathogenic bacterium V. parahaemolyticus from fresh samples indicates a high risk to people consuming and handling oysters in raw and semi-processed form, and may also lead to cross-contamination. The present study indicates that treatment with a natural organic product such as lime, combined with a simple preservation technique, drying, can effectively reduce the bacterial load. The study also revealed that the TPC of water and soil collected from the oyster-collection site was lower than that of the meat. PMID:24425910
Hashimoto, Takeshi; Yokokawa, Takumi; Endo, Yuriko; Iwanaka, Nobumasa; Higashida, Kazuhiko; Taguchi, Sadayoshi
2013-10-11
Highlights: •Long-term hypoxia decreased the size of LDs and lipid storage in 3T3-L1 adipocytes. •Long-term hypoxia increased basal lipolysis in 3T3-L1 adipocytes. •Hypoxia decreased lipid-associated proteins in 3T3-L1 adipocytes. •Hypoxia decreased basal glucose uptake and lipogenic proteins in 3T3-L1 adipocytes. •Hypoxia-mediated lipogenesis may be an attractive therapeutic target against obesity. -- Abstract: Background: A previous study has demonstrated that endurance training under hypoxia results in a greater reduction in body fat mass compared to exercise under normoxia. However, the cellular and molecular mechanisms that underlie this hypoxia-mediated reduction in fat mass remain uncertain. Here, we examine the effects of modest hypoxia on adipocyte function. Methods: Differentiated 3T3-L1 adipocytes were incubated at 5% O2 for 1 week (long-term hypoxia, HL) or one day (short-term hypoxia, HS) and compared with a normoxia control (NC). Results: HL, but not HS, resulted in a significant reduction in lipid droplet size and triglyceride content (by 50%) compared to NC (p < 0.01). As estimated by glycerol release, isoproterenol-induced lipolysis was significantly lowered by hypoxia, whereas the release of free fatty acids under the basal condition was prominently enhanced with HL compared to NC or HS (p < 0.01). Lipolysis-associated proteins, such as perilipin 1 and hormone-sensitive lipase, were unchanged, whereas adipose triglyceride lipase and its activator protein CGI-58 were decreased with HL in comparison to NC. Interestingly, such lipogenic proteins as fatty acid synthase, lipin-1, and peroxisome proliferator-activated receptor gamma were also decreased. Furthermore, the uptake of glucose, the major precursor of glycerol 3-phosphate for triglyceride synthesis, was significantly reduced in HL compared to NC or HS (p < 0.01). Conclusion: We conclude that hypoxia has a direct impact on reducing the triglyceride content and lipid droplet size via
Johnson, V.M.; Rogers, L.L.
1994-09-01
A goal common to both the environmental and petroleum industries is the reduction of costs and/or enhancement of profits by the optimal placement of extraction/production and injection wells. Formal optimization techniques facilitate this goal by searching among the potentially infinite number of possible well patterns for ones that best meet engineering and economic objectives. However, if a flow and transport model or reservoir simulator is being used to evaluate the effectiveness of each network of wells, the computational resources required to apply most optimization techniques to real field problems become prohibitively expensive. This paper describes a new approach to field-scale, nonlinear optimization of well patterns that is intended to make such searches tractable on conventional computer equipment. Artificial neural networks (ANNs) are trained to predict selected information that would normally be calculated by the simulator. The ANNs are then embedded in a variant of the genetic algorithm (GA), which drives the search for increasingly effective well patterns and uses the ANNs, rather than the original simulator, to evaluate the effectiveness of each pattern. Once the search is complete, the ANNs are reused in sensitivity studies to give additional information on the performance of individual or clusters of wells.
Chiou, Yih-Shyh; Tsai, Fuan
2014-06-01
This paper presents a low-complexity and high-accuracy algorithm to reduce the computational load of the traditional data-fusion algorithm with heterogeneous observations for location tracking. For the location-estimation technique with the data fusion of radio-based ranging measurement and speed-based sensing measurement, the proposed tracking scheme, based on the Bayesian filtering concept, is handled by a state space model. The location tracking problem is divided into many mutual-interaction local constraints with the inherent message-passing features of factor graphs. During each iteration cycle, the messages with reliable information are passed efficiently between the prediction phase and the correction phase to simplify the data-fusion implementation for tracking the location of the mobile terminal. Numerical simulations show that the proposed forward and one-step backward refining tracking approach that combines radio ranging with speed sensing measurements for data fusion not only can achieve an accurate location close to that of the traditional Kalman filtering data-fusion algorithm, but also has much lower computational complexity. PMID:24013831
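The predict/correct structure described above can be illustrated with a minimal 1-D Kalman-style fusion sketch: speed measurements drive the prediction step and range measurements drive the correction step. This is not the paper's factor-graph message-passing scheme, and all numbers are hypothetical.

```python
# Minimal 1-D Kalman-style fusion sketch (not the factor-graph scheme in the
# paper): speed measurements drive the prediction step, range measurements
# drive the correction step. All numbers are hypothetical.
def kalman_track(x0, p0, speeds, ranges, dt=1.0, q=0.1, r=4.0):
    """Return position estimates fusing speed (prediction) and range (correction)."""
    x, p = x0, p0
    estimates = []
    for v, z in zip(speeds, ranges):
        # Prediction: advance the position using the sensed speed.
        x = x + v * dt
        p = p + q
        # Correction: blend in the radio-ranging measurement.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Terminal moving at a true speed of 1 m/s from position 0; noisy ranges.
speeds = [1.0] * 5
ranges = [1.3, 1.8, 3.2, 4.1, 4.9]   # true positions are 1, 2, 3, 4, 5
est = kalman_track(0.0, 1.0, speeds, ranges)
```

Even with this crude model, the corrected estimates stay close to the true trajectory while damping the range noise.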
NASA Astrophysics Data System (ADS)
Andò, Bruno; Carbone, Daniele
2004-05-01
Gravity measurements are utilized at active volcanoes to detect mass changes linked to magma transfer processes and thus to recognize forerunners to paroxysmal volcanic events. Continuous gravity measurements are now increasingly performed at sites very close to active craters, where there is the greatest chance of detecting meaningful gravity changes. Unfortunately, especially under the adverse environmental conditions usually encountered at such places, gravimeters have proved to be affected by meteorological parameters, mainly by changes in atmospheric temperature. The pseudo-signal generated by these perturbations is often stronger than the signal generated by actual changes in the gravity field. Thus, the implementation of well-performing algorithms for reducing the gravity signal for the effect of meteorological parameters is vital to obtain sequences useful from the volcano-surveillance standpoint. In the present paper, a Neuro-Fuzzy algorithm, which has already been shown to accomplish this task satisfactorily, is tested over a data set from three gravimeters which worked continuously for about 50 days at a site far away from active zones, where changes due to actual fluctuation of the gravity field are expected to be within a few microgal. After reduction of the gravity series, residuals are within about 15 μGal peak-to-peak, confirming the capability of the Neuro-Fuzzy algorithm under test to perform the required task.
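For illustration only, the reduction step can be sketched as a least-squares removal of a temperature-correlated component from the gravity series; the paper's actual algorithm is Neuro-Fuzzy, and the data below are synthetic.

```python
# Illustrative only: the paper uses a Neuro-Fuzzy model; this sketch removes a
# temperature-correlated component from a synthetic gravity series by ordinary
# least squares. Data are made up.
def reduce_for_temperature(gravity, temperature):
    """Fit gravity ~ a*temperature + b and return the residual series."""
    n = len(gravity)
    mt = sum(temperature) / n
    mg = sum(gravity) / n
    cov = sum((t - mt) * (g - mg) for t, g in zip(temperature, gravity))
    var = sum((t - mt) ** 2 for t in temperature)
    a = cov / var                 # slope of the temperature admittance
    b = mg - a * mt               # intercept
    return [g - (a * t + b) for g, t in zip(gravity, temperature)]

temp = [10.0, 12.0, 14.0, 16.0, 18.0]
grav = [2.0 * t + 1.0 for t in temp]          # purely temperature-driven signal
residual = reduce_for_temperature(grav, temp) # ~0 everywhere
```

A real correction is nonlinear and lagged, which is precisely why the paper resorts to a Neuro-Fuzzy model rather than a single regression coefficient.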
Dostalek, Miroslav; Court, Michael H; Yan, Bingfang; Akhlaghi, Fatemeh
2011-01-01
BACKGROUND AND PURPOSE Patients with diabetes mellitus require pharmacotherapy with numerous medications. However, the effect of diabetes on drug biotransformation is not well understood. Our goal was to investigate the effect of diabetes on liver cytochrome P450 3As, the most abundant phase I drug-metabolizing enzymes in humans. EXPERIMENTAL APPROACH Human liver microsomal fractions (HLMs) were prepared from diabetic (n = 12) and demographically matched nondiabetic (n = 12) donors, genotyped for CYP3A4*1B and CYP3A5*3 polymorphisms. Cytochrome P450 3A4, 3A5 and 2E1 mRNA expression, protein level and enzymatic activity were compared between the two groups. KEY RESULTS Midazolam 1′- or 4-hydroxylation and testosterone 6β-hydroxylation, catalyzed by P450 3A, were markedly reduced in diabetic HLMs, irrespective of genotype. Significantly lower P450 3A4 protein and comparable mRNA levels were observed in diabetic HLMs. In contrast, neither P450 3A5 protein level nor mRNA expression differed significantly between the two groups. Concurrently, we have observed increased P450 2E1 protein level and higher chlorzoxazone 6-hydroxylation activity in diabetic HLMs. CONCLUSIONS AND IMPLICATIONS These studies indicate that diabetes is associated with a significant decrease in hepatic P450 3A4 enzymatic activity and protein level. This finding could be clinically relevant for diabetic patients who have additional comorbidities and are receiving multiple medications. To further characterize the effect of diabetes on P450 3A4 activity, a well-controlled clinical study in diabetic patients is warranted. PMID:21323901
Cho, Hanmin; Han, Seungwha; Hwang, Sun-Young
2013-01-01
We propose a real-time algorithm for recognition of speed limit signs from a moving vehicle. Linear Discriminant Analysis (LDA), required for classification, is performed using Discrete Cosine Transform (DCT) coefficients. To reduce the feature dimension in LDA, DCT coefficients are selected by a devised discriminant function derived from information obtained during training. Binarization and thinning are performed on a Region of Interest (ROI) obtained by preprocessing a detected ROI prior to the DCT, further reducing the computation time of the DCT. This process is performed on a sequence of image frames to increase the recognition hit rate. Experimental results show that arithmetic operations are reduced by about 60% compared with previous works, while hit rates reach about 100%. PMID:24453791
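The DCT-based dimension reduction above can be illustrated with a naive DCT-II. Note that the paper selects coefficients with a trained discriminant function, whereas this sketch (with hypothetical pixel values) simply keeps the first few low-order coefficients as the reduced feature vector.

```python
import math

# Naive O(N^2) DCT-II. The paper selects coefficients with a trained
# discriminant function; this sketch simply keeps the first few low-order
# coefficients as a reduced feature vector. Pixel values are hypothetical.
def dct2_coeffs(signal):
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def reduced_features(signal, keep=4):
    """Keep only the first `keep` (lowest-frequency) DCT coefficients."""
    return dct2_coeffs(signal)[:keep]

row = [12, 14, 15, 90, 91, 15, 14, 12]   # one image row crossing a sign edge
features = reduced_features(row)          # 4 numbers instead of 8
```

The k = 0 coefficient is simply the sum of the samples, so the low-order coefficients capture the coarse (and most discriminative) structure of the ROI.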
Rose, N; Andraud, M; Bigault, L; Jestin, A; Grasland, B
2016-07-19
Transmission characteristics of PCV2 have been compared between vaccinated and non-vaccinated pigs in experimental conditions. Twenty-four Specific Pathogen Free (SPF) piglets, vaccinated against PCV2 at 3 weeks of age (PCV2a recombinant CAP protein-based vaccine), were inoculated at 15 days post-vaccination with a PCV2b inoculum (6×10^5 TCID50), and put in contact with 24 vaccinated SPF piglets for 42 days post-inoculation. These piglets were distributed among six replicates of a contact trial, each involving 4 inoculated piglets mingled with 4 susceptible SPF piglets. Two replicates of a similar contact trial were made with non-vaccinated pigs. Non-vaccinated animals received a placebo at vaccination time and were inoculated the same way and at the same time as the vaccinated group. All the animals were monitored twice weekly using quantitative real-time PCR and ELISA for serology until 42 days post-inoculation. The frequency of infection and the PCV2 genome load in sera of the vaccinated pigs were significantly reduced compared to the non-vaccinated animals. The duration of infectiousness was significantly different between vaccinated and non-vaccinated groups (16.6 days [14.7; 18.4] and 26.6 days [22.9; 30.4], respectively). The transmission rate was also considerably decreased in vaccinated pigs (β = 0.09 [0.05-0.14] compared to β = 0.19 [0.11-0.32] in non-vaccinated pigs). This led to an estimated reproduction ratio of 1.5 [95% CI 0.8-2.2] in vaccinated animals versus 5.1 [95% CI 2.5-8.2] in non-vaccinated pigs when merging data from this experiment with previous trials carried out under the same conditions. PMID:27318416
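As a rough consistency check, the reported reproduction ratios are close to the product of the transmission rate and the mean infectious duration. The authors' estimates were obtained by merging data across trials, so this back-of-the-envelope calculation is only an approximation.

```python
# Naive reproduction-ratio estimate: R ~ transmission rate (per day)
# multiplied by the mean duration of infectiousness (days). The paper's
# estimates come from merged trial data; this is only a consistency check.
def reproduction_ratio(beta, duration_days):
    return beta * duration_days

# Point estimates reported in the abstract.
r_vaccinated = reproduction_ratio(0.09, 16.6)      # ~1.5
r_unvaccinated = reproduction_ratio(0.19, 26.6)    # ~5.1
```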
Girmay, Gebrerufael; Arega, Bezna; Tesfaye, Dawit; Berkvens, Dirk; Muleta, Gadisa; Asefa, Getnet
2016-03-01
African animal trypanosomosis is a great obstacle to livestock production in areas where tsetse flies play a major role. Metekel zone is among the tsetse-infested areas. Community-based tsetse fly and trypanosomosis control using targets was conducted from June 2011 to May 2012 in Metekel zone, Ethiopia, to reduce trypanosomosis and tsetse fly density. Cloth screen targets were developed, impregnated with 0.1% deltamethrin, and deployed alongside rivers by the research team together with the community animal health workers. Monthly parasitological and entomological data were collected, processed, and compared with similar data collected before control. Overall average tsetse fly (Glossina tachinoides) density decreased from 1.13 to 0.18 fly/trap/day after control. The density decreased at all sites, with no significant difference among them; however, larger decrements, of more than 12-fold and 6-fold, were observed in the dry and late dry seasons, respectively. The reduction in overall apparent prevalence of trypanosomosis caused by Trypanosoma congolense, Trypanosoma brucei, and Trypanosoma vivax, from 12.14% before control to 3.61% after, coincides with the tsetse fly reduction. At all study sites, a significant reduction was observed between before and after control. The largest decrement was observed in the late dry season, when the apparent prevalence was reduced from 7.89% before control to 1.17% after. As this approach is simple, cost-effective, and appropriate for riverine tsetse species, we recommend that it be scaled up to other similar places. PMID:26885985
Simon, Arne; Ammann, Roland A; Wiszniewsky, Gertrud; Bode, Udo; Fleischhack, Gudrun; Besuden, Mette M
2008-01-01
Background Taurolidin/Citrate (TauroLock™), a lock solution with broad-spectrum antimicrobial activity, may prevent bloodstream infection (BSI) due to coagulase-negative staphylococci (CoNS, or 'MRSE' in the case of methicillin-resistant isolates) in pediatric cancer patients with a long-term central venous access device (CVAD, Port or Broviac/Hickman catheter type). Methods In a single-center prospective 48-month cohort study we compared all patients receiving anticancer chemotherapy from April 2003 to March 2005 (group 1, heparin lock, 200 IU/ml in sterile normal saline 0.9%; Canusal® Wockhardt UK Ltd, Wrexham, Wales) and all patients from April 2005 to March 2007 (group 2, taurolidine 1.35%/sodium citrate 4%; TauroLock™, Tauropharm, Waldbüttelbrunn, Germany). Results In group 1 (heparin), 90 patients had 98 CVAD in use during the surveillance period. 14 of 30 (47%) BSI were primary Gram-positive BSI due to CoNS (n = 4) or MRSE (n = 10) [incidence density (ID), 2.30 per 1000 inpatient CVAD-utilization days]. In group 2 (TauroLock™), 89 patients had 95 CVAD in use during the surveillance period. 3 of 25 (12%) BSI were caused by CoNS (ID, 0.45). The difference in the ID between the two groups was statistically significant (P = 0.004). Conclusion The use of Taurolidin/Citrate (TauroLock™) significantly reduced the number and incidence density of primary catheter-associated BSI due to CoNS and MRSE in pediatric cancer patients. PMID:18664278
Lintas, C; Sacco, R; Garbett, K; Mirnics, K; Militerni, R; Bravaccio, C; Curatolo, P; Manzi, B; Schneider, C; Melmed, R; Elia, M; Pascucci, T; Puglisi-Allegra, S; Reichelt, K-L; Persico, A M
2009-07-01
Protein kinase C enzymes play an important role in signal transduction, regulation of gene expression and control of cell division and differentiation. The betaI and betaII isoenzymes result from alternative splicing of the PKCbeta gene (PRKCB1), previously found to be associated with autism. We performed a family-based association study in 229 simplex and 5 multiplex families, and a postmortem study of PRKCB1 gene expression in temporocortical gray matter (BA41/42) of 11 autistic patients and controls. PRKCB1 gene haplotypes are significantly associated with autism (P<0.05) and with the autistic endophenotype of enhanced oligopeptiduria (P<0.05). Temporocortical PRKCB1 gene expression was reduced on average by 35% and 31% for the PRKCB1-1 and PRKCB1-2 isoforms (P<0.01 and <0.05, respectively) according to qPCR. Protein amounts measured for the PKCbetaII isoform were similarly decreased by 35% (P=0.05). Decreased gene expression characterized patients carrying the 'normal' PRKCB1 alleles, whereas patients homozygous for the autism-associated alleles displayed mRNA levels comparable to those of controls. Whole-genome expression analysis unveiled a partial disruption in the coordinated expression of PKCbeta-driven genes, including several cytokines. These results confirm the association between autism and PRKCB1 gene variants, point toward PKCbeta roles in altered epithelial permeability, demonstrate a significant downregulation of brain PRKCB1 gene expression in autism and suggest that it could represent a compensatory adjustment aimed at limiting an ongoing dysreactive immune process. Altogether, these data underscore potential PKCbeta roles in autism pathogenesis and spur interest in the identification and functional characterization of PRKCB1 gene variants conferring autism vulnerability. PMID:18317465
A new algorithm to reduce noise in microscopy images implemented with a simple program in python.
Papini, Alessio
2012-03-01
All microscopy images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images, recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of those values; the final pixel value therefore corresponds to one read in at least two of the images at position (a, b). Applying the program to a set of images corrupted with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images needed to reduce noise below a given limit) for a lower number of total noisy pixels and high standard deviation (as with impulse noise and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of images affected by Gaussian noise. The two methods may be used serially. PMID:21898664
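A minimal sketch of the pixel-wise mode described above, assuming images are equally sized nested lists of integer pixel values (the paper's own programs are not reproduced in the abstract):

```python
from collections import Counter

# Minimal sketch of the pixel-wise mode described above (the paper's own
# Python programs are not reproduced here). Images are nested lists of
# integer pixel values, all with the same dimensions.
def mode_image(images):
    rows, cols = len(images[0]), len(images[0][0])
    out = [[0] * cols for _ in range(rows)]
    for a in range(rows):
        for b in range(cols):
            # Most common value at position (a, b) across the image stack.
            values = [img[a][b] for img in images]
            out[a][b] = Counter(values).most_common(1)[0][0]
    return out

# Three noisy recordings of the same 2x2 subject: impulse noise hits at most
# one of the three values per position, so the mode recovers the clean pixel.
imgs = [[[10, 20], [30, 40]],
        [[10, 255], [30, 40]],
        [[0, 20], [30, 40]]]
clean = mode_image(imgs)   # [[10, 20], [30, 40]]
```

This illustrates why the mode handles impulse noise so well: a salt-and-pepper outlier at (a, b) is simply outvoted by the two uncorrupted recordings, whereas averaging would smear it into the result.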
Documentation for subroutine REDUC3, an algorithm for the linear filtering of gridded magnetic data
Blakely, Richard J.
1977-01-01
Subroutine REDUC3 transforms a total field anomaly h1(x,y) , measured on a horizontal and rectangular grid, into a new anomaly h2(x,y). This new anomaly is produced by the same source as h1(x,y) , but (1) is observed at a different elevation, (2) has a source with a different direction of magnetization, and/or (3) has a different direction of residual field. Case 1 is tantamount to upward or downward continuation. Cases 2 and 3 are 'reduction to the pole', if the new inclinations of both the magnetization and regional field are 90 degrees. REDUC3 is a filtering operation applied in the wave-number domain. It first Fourier transforms h1(x,y) , multiplies by the appropriate filter, and inverse Fourier transforms the result to obtain h2(x,y). No assumptions are required about the shape of the source or how the intensity of magnetization varies within it.
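REDUC3 itself is a Fortran subroutine; its three-step recipe (Fourier transform, multiply by a filter, inverse transform) can be sketched in Python for case 1, upward continuation, where the filter is exp(-Δz·|k|). A naive DFT is used for clarity on a tiny grid with unit sample spacing, and the grid values are hypothetical.

```python
import cmath
import math

# Sketch of wavenumber-domain filtering in the spirit of REDUC3's case 1
# (upward continuation); REDUC3 itself is a Fortran subroutine. A naive DFT
# is used for clarity on a tiny grid with unit sample spacing.
def dft2(h, sign):
    """2-D DFT (sign=-1 forward, sign=+1 inverse, unnormalized)."""
    n, m = len(h), len(h[0])
    return [[sum(h[x][y] * cmath.exp(sign * 2j * math.pi * (u * x / n + v * y / m))
                 for x in range(n) for y in range(m))
             for v in range(m)] for u in range(n)]

def upward_continue(h1, dz):
    """Continue the anomaly h1 upward by dz grid units: H2 = H1 * exp(-dz*|k|)."""
    n, m = len(h1), len(h1[0])
    H = dft2(h1, -1)
    for u in range(n):
        for v in range(m):
            # Signed wavenumbers for row u, column v.
            ku = 2 * math.pi * (u if u <= n // 2 else u - n) / n
            kv = 2 * math.pi * (v if v <= m // 2 else v - m) / m
            H[u][v] *= math.exp(-dz * math.hypot(ku, kv))  # continuation filter
    H2 = dft2(H, +1)
    return [[(H2[x][y] / (n * m)).real for y in range(m)] for x in range(n)]

anomaly = [[0.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 1.0, 0.0],
           [0.0, 1.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 0.0]]
smoothed = upward_continue(anomaly, dz=1.0)   # attenuated, smoother anomaly
```

The filter leaves the mean (the DC component) untouched and attenuates every other wavenumber, which is why upward continuation smooths the anomaly; reduction to the pole would use a different multiplier in the same loop.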
Aihara, Hiroyuki; Ryou, Marvin; Kumar, Nitin; Ryan, Michele B.; Thompson, Christopher C.
2016-01-01
Background and study aims In endoscopic submucosal dissection (ESD), effective countertraction may overcome the current drawbacks of longer procedure times and increased technical demands. The objective of this study was to compare the efficacy of ESD using a novel magnetic countertraction device with that of the traditional technique. Methods Each ESD was performed on simulated gastric lesions of 30 mm diameter created at five different locations. In total, 10 ESDs were performed using this novel device and 10 were performed by the standard technique. Results The magnetic countertraction device allowed directional tissue manipulation and exposure of the submucosal space. The total procedure time was 605 ± 303.7 seconds in the countertraction group vs. 1082 ± 515.9 seconds in the control group (P=0.021). Conclusions This study demonstrated that using a novel magnetic countertraction device during ESD is technically feasible and enables the operator to dynamically manipulate countertraction such that the submucosal layer is visualized directly. Use of this device significantly reduced procedure time compared with conventional ESD techniques. PMID:24573770
NASA Astrophysics Data System (ADS)
Zhao, Chenglong; LeBrun, Thomas W.
2015-08-01
Gold nanoparticles (GNP) have wide applications ranging from nanoscale heating to cancer therapy and biological sensing. Optical trapping of GNPs as small as 18 nm has been successfully achieved with laser power as high as 855 mW, but such high powers can damage trapped particles (particularly biological systems) as well as heat the fluid, thereby destabilizing the trap. In this article, we show that counter-propagating beams (CPB) can successfully trap GNPs with laser powers reduced by a factor of 50 compared to that of a single beam. The trapping position of a GNP inside a counter-propagating trap can be easily modulated by changing either the relative power or the position of the two beams. Furthermore, we find that under our conditions, while a single beam most stably traps a single particle, the counter-propagating beams can more easily trap multiple particles. This CPB trap is compatible with the feedback control system we recently demonstrated to increase the trapping lifetimes of nanoparticles by more than an order of magnitude. Thus, we believe that the future development of advanced trapping techniques combining counter-propagating traps with control systems should significantly extend the capabilities of optical manipulation of nanoparticles for prototyping and testing 3D nanodevices and bio-sensing.
2014-01-01
Background High-throughput sequencing has opened up exciting possibilities in population and conservation genetics by enabling the assessment of genetic variation at genome-wide scales. One approach to reduce genome complexity, i.e. investigating only parts of the genome, is reduced-representation library (RRL) sequencing. Like similar approaches, RRL sequencing reduces ascertainment bias due to simultaneous discovery and genotyping of single-nucleotide polymorphisms (SNPs) and does not require reference genomes. Yet, generating such datasets remains challenging due to laboratory and bioinformatical issues. In the laboratory, current protocols require improvements with regards to sequencing homologous fragments to reduce the number of missing genotypes. From the bioinformatical perspective, the reliance of most studies on a single SNP caller disregards the possibility that different algorithms may produce disparate SNP datasets. Results We present an improved RRL (iRRL) protocol that maximizes the generation of homologous DNA sequences, thus achieving improved genotyping-by-sequencing efficiency. Our modifications facilitate generation of single-sample libraries, enabling individual genotype assignments instead of pooled-sample analysis. We sequenced ~1% of the orangutan genome with 41-fold median coverage in 31 wild-born individuals from two populations. SNPs and genotypes were called using three different algorithms. We obtained substantially different SNP datasets depending on the SNP caller. Genotype validations revealed that the Unified Genotyper of the Genome Analysis Toolkit and SAMtools performed significantly better than a caller from CLC Genomics Workbench (CLC). Of all conflicting genotype calls, CLC was only correct in 17% of the cases. Furthermore, conflicting genotypes between two algorithms showed a systematic bias in that one caller almost exclusively assigned heterozygotes, while the other one almost exclusively assigned homozygotes. Conclusions
Intelligent speckle reducing anisotropic diffusion algorithm for automated 3-D ultrasound images.
Wu, Jun; Wang, Yuanyuan; Yu, Jinhua; Shi, Xinling; Zhang, Junhua; Chen, Yue; Pang, Yun
2015-02-01
A novel 3-D filtering method is presented for speckle reduction and detail preservation in automated 3-D ultrasound images. First, texture features of an image are analyzed by using the improved quadtree (QT) decomposition. Then, the optimal homogeneous and the obvious heterogeneous regions are selected from QT decomposition results. Finally, diffusion parameters and diffusion process are automatically decided based on the properties of these two selected regions. The computing time needed for 2-D speckle reduction is very short. However, the computing time required for 3-D speckle reduction is often hundreds of times longer than 2-D speckle reduction. This may limit its potential application in practice. Because this new filter can adaptively adjust the time step of iteration, the computation time is reduced effectively. Both synthetic and real 3-D ultrasound images are used to evaluate the proposed filter. It is shown that this filter is superior to other methods in both practicality and efficiency. PMID:26366596
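The underlying idea of edge-preserving diffusion can be sketched with a generic Perona-Malik-style step; this is not the paper's quadtree-adaptive 3-D speckle filter with automatic time-step selection, and the image values are hypothetical.

```python
import math

# Generic Perona-Malik-style diffusion step (2-D, pure Python) -- not the
# paper's quadtree-adaptive 3-D speckle filter, just the underlying idea:
# smooth strongly in flat regions, weakly across edges.
def diffuse(img, kappa=15.0, dt=0.2, steps=5):
    rows, cols = len(img), len(img[0])

    def g(d):
        return math.exp(-(d / kappa) ** 2)   # edge-stopping function

    out = [row[:] for row in img]
    for _ in range(steps):
        nxt = [row[:] for row in out]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                flux = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    d = out[i + di][j + dj] - out[i][j]
                    flux += g(abs(d)) * d    # conductance-weighted neighbor flux
                nxt[i][j] = out[i][j] + dt * flux
        out = nxt
    return out

noisy = [[10, 10, 10, 10],
         [10, 30, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
smoothed = diffuse(noisy)   # the isolated spike at (1, 1) is pulled down
```

The paper's contribution sits on top of this idea: it analyzes texture with a quadtree decomposition to choose the diffusion parameters and an adaptive iteration time step automatically, which is what makes the 3-D case computationally tractable.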
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically-scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
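At toy scale, the maximal information criterion can be evaluated by exhaustive search instead of a GA: score each candidate design by its sum of squared sensitivities and keep the best. The sensitivity table below is hypothetical.

```python
from itertools import combinations

# Toy version of the maximal information criterion: pick k candidate well
# locations maximizing the sum of squared sensitivities. The paper uses a GA
# with a POD-reduced model at realistic scales; here an exhaustive search over
# a hypothetical sensitivity table suffices.
def best_design(sensitivities, k):
    """sensitivities[w][p]: sensitivity of the head at well w to parameter p."""
    def score(design):
        return sum(s * s for w in design for s in sensitivities[w])
    return max(combinations(range(len(sensitivities)), k), key=score)

# Hypothetical sensitivities of 5 candidate wells to 2 conductivity zones.
sens = [[0.1, 0.2],
        [0.9, 0.1],
        [0.3, 0.8],
        [0.2, 0.2],
        [0.5, 0.5]]
design = best_design(sens, 2)   # picks the wells with the largest squared sensitivities
```

With C(n, k) growing combinatorially, this brute force is exactly what becomes infeasible at field scale, which motivates the GA search and the POD model reduction described in the abstract.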
Verma, Pankaj Kumar; Verma, Shikha; Meher, Alok Kumar; Pande, Veena; Mallick, Shekhar; Bansiwal, Amit Kumar; Tripathi, Rudra Deo; Dhankher, Om Parkash; Chakrabarty, Debasis
2016-09-01
Arsenic (As), an acute poison and class I carcinogen, can pose a serious health risk. Staple crops like rice are the primary source of As contamination in human food. Rice grown in As-contaminated areas accumulates higher As in its edible parts. Based on our previous transcriptome data, two rice glutaredoxins (OsGrx_C7 and OsGrx_C2.1) were identified that showed up-regulated expression during As stress. Here, we report that OsGrx_C7 and OsGrx_C2.1 from rice are involved in the regulation of intracellular arsenite (AsIII). To elucidate the mechanism of OsGrx-mediated As tolerance, both OsGrxs were cloned and expressed in Escherichia coli (Δars) and Saccharomyces cerevisiae mutant strains (Δycf1, Δacr3). The expression of OsGrxs increased As tolerance in the E. coli (Δars) mutant strain (up to 4 mM AsV and up to 0.6 mM AsIII). During AsIII exposure, S. cerevisiae (Δacr3) harboring OsGrx_C7 and OsGrx_C2.1 had lower intracellular AsIII accumulation (up to 30.43% and 24.90% lower, respectively) compared to the vector control. Arsenic accumulation in the As-sensitive S. cerevisiae mutant (Δycf1) was also significantly reduced on exposure to inorganic As. The expression of OsGrxs in yeast maintained the intracellular GSH pool and increased the extracellular GSH concentration. Purified OsGrxs display in vitro GSH-disulfide oxidoreductase, glutathione reductase and arsenate reductase activities. Both OsGrxs are also involved in AsIII extrusion by altering Fps1 transcripts in yeast, and protect the cell by maintaining the cellular GSH pool. Thus, our results strongly suggest that OsGrxs play a crucial role in the maintenance of the intracellular GSH pool and redox status of the cell during both AsV and AsIII stress, and might be involved in regulating intracellular AsIII levels by modulating aquaporin expression and function. PMID:27174139
Ruuska, S A; Badger, M R; Andrews, T J; von Caemmerer, S
2000-02-01
Transgenic tobacco (Nicotiana tabacum L. cv. W38) plants with an antisense gene directed against the mRNA of the small subunit of Rubisco were used to investigate the role of O2 as an electron acceptor during photosynthesis. The reduction in Rubisco has reduced the capacity for CO2-fixation in these plants without a similar reduction in electron transport capacity. Concurrent measurements of chlorophyll fluorescence and CO2 assimilation at different CO2 and O2 partial pressures showed close linear relationships between chloroplast electron transport rates calculated from chlorophyll fluorescence and those calculated from CO2-fixation. These relationships were similar for wild-type and transgenic plants, indicating that the reduced capacity for CO2 fixation in the transgenic plants did not result in extra electron transport not associated with the photosynthetic carbon reduction (PCR) or photorespiratory carbon oxidation (PCO) cycle. This was further investigated with mass spectrometric measurements of 16O2 and 18O2 exchange made concurrently with measurements of chlorophyll fluorescence. In all tobacco lines the rates of 18O2 uptake in the dark were similar to the 18O2 uptake rates at very high CO2 partial pressures in the light. Rates of oxygenase activity calculated from 18O2 uptake at the compensation point were linearly related to the Rubisco content of leaves. The ratios of oxygenase to carboxylase rates were calculated from measurements of 16O2 evolution and 18O2 uptake at the compensation point. These ratios were lower in the transgenic plants, consistent with their higher CO2 compensation points. It is concluded that although there may be some electron transport to O2 to balance conflicting demands of NADPH to ATP requirements, this flux must decrease in proportion with the reduced demand for ATP and NADPH consumption in the transgenic lines. The altered balance between electron transport and Rubisco capacity, however, does not result in rampant electron
Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh
2015-09-01
The objective of this study was to quantify the impact of a new technology to communicate the results of an infant HIV diagnostic test on test turnaround time and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time decreased from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days in the delay before caregivers collected results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (e.g., GPRS printers) that reduce delays. PMID:26068719
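As a quick illustration of how the collection probabilities reported above relate to an odds ratio: the sketch below computes the crude (unadjusted) odds ratio from the abstract's two probabilities. The gap between this crude value and the published OR of 0.67 reflects the confounder adjustment in the study's regression model.

```python
def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Crude odds ratio from two event probabilities."""
    odds = lambda p: p / (1.0 - p)
    return odds(p_exposed) / odds(p_unexposed)

# Collection probabilities from the abstract: late vs. on-time delivery.
crude_or = odds_ratio(0.36, 0.44)   # ~0.72, vs. the adjusted OR of 0.67
```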
Andretta, I; Pomar, C; Rivest, J; Pomar, J; Radünz, J
2016-07-01
This study was developed to assess the impact on performance, nutrient balance, serum parameters and feeding costs resulting from switching from conventional to precision-feeding programs for growing-finishing pigs. A total of 70 pigs (30.4±2.2 kg BW) were used in a performance trial (84 days). The five treatments used in this experiment were a three-phase group-feeding program (control), obtained with fixed blending proportions of feeds A (high nutrient density) and B (low nutrient density), and four individual daily-phase feeding programs in which the blending proportions of feeds A and B were updated daily to meet 110%, 100%, 90% or 80% of the lysine requirements estimated using a mathematical model. Feed intake was recorded automatically by a computerized device in the feeders, and the pigs were weighed weekly during the project. Body composition traits were estimated by scanning with an ultrasound device and densitometer every 28 days. Nitrogen and phosphorus excretions were calculated by the difference between retention (obtained from densitometer measurements) and intake. Feeding costs were assessed using 2013 ingredient cost data. Feed intake, feed efficiency, back fat thickness, body fat mass and serum contents of total protein and phosphorus were similar among treatments. Feeding pigs in a daily-basis program providing 110%, 100% or 90% of the estimated individual lysine requirements also did not influence BW, body protein mass, weight gain and nitrogen retention in comparison with the animals in the group-feeding program. However, feeding pigs individually with diets tailored to match 100% of nutrient requirements made it possible to reduce (P<0.05) digestible lysine intake by 26%, estimated nitrogen excretion by 30% and feeding costs by US$7.60/pig (-10%) relative to group feeding. Precision feeding is an effective approach to make pig production more sustainable without compromising growth performance. PMID:26759074
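The daily blending rule implied by the design above reduces to a one-line mixing computation: choose the fraction of feed A so the A/B blend supplies the day's lysine target. This is an illustration only; the lysine densities below are hypothetical placeholders, not values from the study.

```python
def blend_fraction(lys_target: float, lys_a: float, lys_b: float) -> float:
    """Fraction of feed A (high nutrient density) such that the A/B blend
    supplies lys_target; clipped to the feasible [0, 1] range."""
    alpha = (lys_target - lys_b) / (lys_a - lys_b)
    return min(max(alpha, 0.0), 1.0)

# Hypothetical digestible-lysine densities (% of diet) for feeds A and B.
alpha = blend_fraction(lys_target=0.85, lys_a=1.10, lys_b=0.60)  # -> 0.5
```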
Kanamori, Keiko; Ross, Brian D.
2013-01-01
Summary Rats were given unilateral kainate injection into hippocampal CA3 region, and the effect of chronic electrographic seizures on extracellular glutamine (GLNECF) was examined in those with low and steady levels of extracellular glutamate (GLUECF). GLNECF, collected by microdialysis in awake rats for 5 h, decreased to 62 ± 4.4% of the initial concentration (n = 6). This change correlated with the frequency and magnitude of seizure activity, and occurred in the ipsilateral but not in contralateral hippocampus, nor in kainate-injected rats that did not undergo seizure (n = 6). Hippocampal intracellular GLN did not differ between the Seizure and No-Seizure Groups. These results suggested an intriguing possibility that seizure-induced decrease of GLNECF reflects not decreased GLN efflux into the extracellular fluid, but increased uptake into neurons. To examine this possibility, neuronal uptake of GLNECF was inhibited in vivo by intrahippocampal perfusion of 2-(methylamino)isobutyrate, a competitive and reversible inhibitor of the sodium-coupled neutral amino acid transporter (SNAT) subtypes 1 and 2, as demonstrated by 1.8 ± 0.17 fold elevation of GLNECF (n = 7). The frequency of electrographic seizures during uptake inhibition was reduced to 35 ± 7% (n = 7) of the frequency in pre-perfusion period, and returned to 88 ± 9% in the post-perfusion period. These novel in vivo results strongly suggest that, in this well-established animal model of temporal-lobe epilepsy, the observed seizure-induced decrease of GLNECF reflects its increased uptake into neurons to sustain enhanced glutamatergic epileptiform activity, thereby demonstrating a possible new target for anti-seizure therapies. PMID:24070846
Recknagel, Stefan; Bindl, Ronny; Kurz, Julian; Wehner, Tim; Schoengraf, Philipp; Ehrnthaller, Christian; Qu, Hongchang; Gebhard, Florian; Huber-Lang, Markus; Lambris, John D; Claes, Lutz; Ignatius, Anita
2012-04-01
Confirming clinical evidence, we recently demonstrated that a blunt chest trauma considerably impaired fracture healing in rats, possibly via the interaction of posttraumatic systemic inflammation with local healing processes, the underlying mechanisms being unknown. An important trigger of systemic inflammation is the complement system, with the potent anaphylatoxin C5a. Therefore, we investigated whether the impairment of fracture healing by a severe trauma resulted from systemically activated complement. Rats received a blunt chest trauma and a femur osteotomy stabilized with an external fixator. To inhibit the C5a-dependent posttraumatic systemic inflammation, half of the rats received a C5aR-antagonist intravenously immediately and 12 h after the thoracic trauma. Compared to the controls (control peptide), the treatment with the C5aR-antagonist led to a significantly increased flexural rigidity (three-point-bending test), an improved bony bridging of the fracture gap, and a slightly larger and qualitatively improved callus (µCT, histomorphometry) after 35 days. In conclusion, immunomodulation by a C5aR-antagonist could abolish the deleterious effects of a thoracic trauma on fracture healing, possibly by influencing the function of inflammatory and bone cells locally at the fracture site. C5a could possibly represent a target to prevent delayed bone healing in patients with severe trauma. PMID:21922535
Berenbrock, Charles E.
2015-01-01
The effects of reduced cross-sectional data points on steady-flow profiles were also determined. Thirty-five cross sections of the original steady-flow model of the Kootenai River were used. Two point-reduction methods, a standard algorithm and a genetic algorithm, were tested on all cross sections, with each cross section reduced to 10, 20, and 30 data points; that is, six tests were completed for each of the thirty-five cross sections. Generally, differences from the original water-surface elevation became smaller as the number of data points in the reduced cross sections increased, but this was not always the case, especially in the braided reach. Differences were smaller for reduced cross sections developed by the genetic algorithm method than for those developed by the standard algorithm method.
NASA Technical Reports Server (NTRS)
Herman, G. C.
1986-01-01
A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise, which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm reduces the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbances, and thick atmospheres. The results for many different operating conditions are presented.
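Since the hinge line is the intersection of the actual and desired orbital planes, its direction can be computed as the cross product of the two planes' normal (angular-momentum) vectors. A minimal sketch of that geometric step, illustrative only and not the paper's targeting algorithm:

```python
import numpy as np

def hinge_line_direction(r, v, h_desired):
    """Unit vector along the intersection of the actual and desired orbital
    planes, given position r, velocity v, and the desired plane's
    angular-momentum direction h_desired."""
    h_actual = np.cross(r, v)               # normal of the actual plane
    hinge = np.cross(h_actual, h_desired)   # line common to both planes
    return hinge / np.linalg.norm(hinge)

# Example: actual orbit in the xy-plane, desired plane tilted 0.1 rad about x.
r = np.array([7000.0, 0.0, 0.0])            # km
v = np.array([0.0, 7.5, 0.0])               # km/s -> h_actual along +z
h_desired = np.array([0.0, -np.sin(0.1), np.cos(0.1)])
d = hinge_line_direction(r, v, h_desired)   # here the hinge line is the x-axis
```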
Murakami, Yoshimasa; Tsuboi, Naoya; Inden, Yasuya; Yoshida, Yukihiko; Murohara, Toyoaki; Ihara, Zenichi; Takami, Mitsuaki
2010-01-01
Aims Managed ventricular pacing (MVP) and Search AV+ are representative dual-chamber pacing algorithms for minimizing ventricular pacing (VP). This randomized, crossover study aimed to examine the difference in ability to reduce the percentage of VP (%VP) between these two algorithms. Methods and results Symptomatic bradyarrhythmia patients implanted with a pacemaker equipped with both algorithms (Adapta DR, Medtronic) were enrolled. The %VPs of the patients were compared across two periods, with each of the two algorithms operating for 1 month. All patients were categorized into subgroups according to atrioventricular block (AVB) status at baseline: no AVB (nAVB), first-degree AVB (1AVB), second-degree AVB (2AVB), episodic third-degree AVB (e3AVB), and persistent third-degree AVB (p3AVB). Data were available from 127 patients for the analysis. For all patient subgroups except the p3AVB category, the median %VPs were lower during the MVP operation than during Search AV+ (nAVB: 0.2 vs. 0.8%, P < 0.0001; 1AVB: 2.3 vs. 27.4%, P = 0.001; 2AVB: 16.4 vs. 91.9%, P = 0.0052; e3AVB: 37.7 vs. 92.7%, P = 0.0003). Conclusion The MVP algorithm, when compared with Search AV+, offers further %VP reduction in patients implanted with a dual-chamber pacemaker, except for patients diagnosed with persistent loss of atrioventricular conduction. PMID:19762332
Technology Transfer Automated Retrieval System (TEKTRAN)
This pilot study tested whether varying protein source and quantity in a reduced energy diet would result in significant differences in weight, body composition, and renin angiotensin aldosterone system activity in midlife adults. Eighteen subjects enrolled in a 5 month weight reduction study, invol...
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V presenting integration, test status, and state analysis.
Finally, section VI
Bechet, P; Mitran, R; Munteanu, M
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference heart rate was measured using a classic direct-contact measurement system. PMID:24007088
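A toy version of the MUSIC pseudospectrum estimate on a synthetic "heartbeat" trace, assuming a signal-subspace dimension of 2 (one real sinusoid contributes two complex exponentials); the sampling rate, window length, and frequency grid are illustrative choices, not the paper's settings:

```python
import numpy as np

def music_peak_freq(x, fs, p=2, m=40, freqs=None):
    """Locate the dominant frequency of x via the MUSIC pseudospectrum."""
    if freqs is None:
        freqs = np.linspace(0.5, 3.0, 500)        # plausible heart rates, Hz
    windows = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = windows.T @ windows / len(windows)        # m-lag covariance estimate
    w, v = np.linalg.eigh(R)                      # eigenvalues ascending
    En = v[:, :-p]                                # noise subspace (m - p dims)
    n = np.arange(m)
    pseudo = [1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f * n / fs)) ** 2
              for f in freqs]                     # peaks where steering vector
    return freqs[int(np.argmax(pseudo))]          # is orthogonal to En

rng = np.random.default_rng(0)
fs = 50.0
t = np.arange(0, 10, 1 / fs)                      # 10 s synthetic record
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(len(t))
f_hat = music_peak_freq(x, fs)                    # ~1.2 Hz, i.e. ~72 beats/min
```

Choosing `p` too small or too large degrades the estimate, which is the dimensioning trade-off the paper studies.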
Hummel, H E; Eisinger, M T; Hein, D F; Breuer, M; Schmid, S; Leithold, G
2012-01-01
Pheromone effects, discovered some 130 years ago but scientifically defined just half a century ago, are a great bonus for basic and applied biology. Specifically, pest management efforts have been advanced in many insect orders, whether for purposes of monitoring, mass trapping, or mating disruption. Using a new search algorithm, nearly 20,000 entries in the pheromone literature have been counted, a number much higher than originally anticipated. This compilation contains identified and thus synthesizable structures for all major orders of insects. Among them are hundreds of agriculturally significant insect pests whose aggregated damages and costly control measures range in the multibillions of dollars annually. Unfortunately, and despite much effort within the international entomological community, the number of efficient and cheap engineering solutions for dispensing pheromones under variable field conditions is uncomfortably lagging behind. Some innovative approaches are cited from the relevant literature in an attempt to rectify this situation. Recently, specifically designed electrospun organic nanofibers have offered considerable promise. With their use, the mating communication of vineyard insects such as Lobesia botrana (Lep.: Tortricidae) can be disrupted for periods of seven weeks. PMID:23885431
NASA Astrophysics Data System (ADS)
Kauczor, Joanna; Norman, Patrick; Christiansen, Ove; Coriani, Sonia
2013-12-01
We present a reduced-space algorithm for solving the complex (damped) linear response equations required to compute the complex linear response function for the hierarchy of methods: coupled cluster singles, coupled cluster singles and iterative approximate doubles, and coupled cluster singles and doubles. The solver is the keystone element for the development of damped coupled cluster response methods for linear and nonlinear effects in resonant frequency regions.
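The reduced-space idea described above, projecting the damped equations onto a small, iteratively grown subspace and solving only the projected system, can be sketched generically. This toy uses a small symmetric matrix in place of the much larger, non-Hermitian coupled cluster Jacobian, so it illustrates the subspace mechanics only, not the paper's solver:

```python
import numpy as np

def reduced_space_solve(A, b, z, tol=1e-8):
    """Solve (A - z*I) x = b for complex z = omega + i*gamma by projecting
    onto a subspace that grows by one orthogonalized residual per step."""
    V = (b / np.linalg.norm(b))[:, None]          # initial basis: b itself
    for _ in range(A.shape[0]):
        Ar = V.conj().T @ A @ V                   # projected operator
        y = np.linalg.solve(Ar - z * np.eye(V.shape[1]), V.conj().T @ b)
        x = V @ y                                 # lift back to full space
        r = A @ x - z * x - b                     # full-space residual
        if np.linalg.norm(r) < tol:
            break
        r = r - V @ (V.conj().T @ r)              # orthogonalize against V
        V = np.hstack([V, (r / np.linalg.norm(r))[:, None]])
    return x

rng = np.random.default_rng(1)
n = 60
P = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * (P + P.T) / 2
b = rng.standard_normal(n)
z = 0.5 + 0.5j                                    # frequency + i * damping
x = reduced_space_solve(A, b, z)
```

The damping term i*gamma keeps the shifted matrix nonsingular even when the real frequency sits on a resonance, which is what makes the complex (damped) response equations solvable in resonant regions.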
De Backer, Charlotte J S; Hudders, Liselot
2014-01-01
This study explores vegetarians' and semi-vegetarians' motives for reducing their meat intake. Participants are categorized as vegetarians (remove all meat from their diet); semi-vegetarians (significantly reduce meat intake: at least three days a week); or light semi-vegetarians (mildly reduce meat intake: once or twice a week). Most differences appear between vegetarians and both groups of semi-vegetarians. Animal-rights and ecological concerns, together with taste preferences, predict vegetarianism, while an increase in health motives increases the odds of being semi-vegetarian. Even within each group, subgroups with different motives appear, and it is recommended that future researchers pay more attention to these differences. PMID:25357269
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
NASA Astrophysics Data System (ADS)
Lin, Wenwen; Yu, D. Y.; Wang, S.; Zhang, Chaoyong; Zhang, Sanqiang; Tian, Huiyu; Luo, Min; Liu, Shengqiang
2015-07-01
In addition to energy consumption, the use of cutting fluids, deposition of worn tools and certain other manufacturing activities can have environmental impacts. All these activities cause carbon emission directly or indirectly; therefore, carbon emission can be used as an environmental criterion for machining systems. In this article, a direct method is proposed to quantify the carbon emissions in turning operations. To determine the coefficients in the quantitative method, real experimental data were obtained and analysed in MATLAB. Moreover, a multi-objective teaching-learning-based optimization algorithm is proposed, and two objectives to minimize carbon emissions and operation time are considered simultaneously. Cutting parameters were optimized by the proposed algorithm. Finally, the analytic hierarchy process was used to determine the optimal solution, which was found to be more environmentally friendly than the cutting parameters determined by the design of experiments method.
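The two objectives above (carbon emission and operation time) are minimized simultaneously, so candidate cutting-parameter sets are first compared by Pareto dominance before the AHP picks a final solution. A minimal dominance filter on hypothetical (emission, time) pairs, not the article's regression-based objective models:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both objectives.
    A point is dominated if another point is no worse in both objectives."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidates: (carbon emission in kgCO2, operation time in s).
candidates = [(2.1, 55.0), (1.8, 60.0), (2.5, 50.0), (2.2, 58.0)]
front = pareto_front(candidates)   # (2.2, 58.0) is dominated by (2.1, 55.0)
```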
NASA Astrophysics Data System (ADS)
Tamascelli, D.; Rosenbach, R.; Plenio, M. B.
2015-06-01
When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, make use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency at the hand of some real world examples to which TEBD can be successfully applied and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
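The randomized-SVD idea behind RRSVD can be sketched in a few lines: probe the matrix with random vectors, orthonormalize the probes' images to get a basis for the range, then take an exact SVD of the small projected matrix. This is the generic randomized range-finder recipe under an assumed target rank `k` and oversampling `p`; the paper's routine adds its own refinements and error control:

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Approximate rank-k SVD of A via a randomized range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))  # random probe vectors
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small SVD
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Exact rank-5 test matrix: the sketch recovers it to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5)
```

The cost saving comes from replacing the full SVD of A with a QR on k+p columns plus an SVD of a (k+p) x n matrix, which is the complexity reduction the TEBD speed-up exploits.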
Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong
2014-01-01
Purpose To prospectively study CT dose reduction using the “prior image constrained compressed sensing” (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as the reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy*cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3), compared with 493.7 mGy*cm (median 345.8), 12.9 mGy (median 7.9), and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). The RD-PICCS image quality score was 2.8±0.5, improved over RD-FBP (1.7±0.7) and RD-ASIR (1.9±0.8) (p<0.001), but lower than SD (3.5±0.5) (p<0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136
Lemström, K. B.; Bruning, J. H.; Bruggeman, C. A.; Lautenschlager, I. T.; Häyry, P. J.
1994-01-01
The effect of triple drug immunosuppression (cyclosporine A 10 mg/kg/day + methylprednisolone 0.5 mg/kg/day + azathioprine 2 mg/kg/day) on rat cytomegalovirus (RCMV)-enhanced allograft arteriosclerosis was investigated in WF (AG-B2, RT1v) recipients of DA (AG-B4, RT1a) aortic allografts. The recipients were inoculated intraperitoneally with 10(5) plaque-forming units of RCMV 1 day after transplantation or left noninfected. The grafts were removed at 7 and 14 days and at 1, 3, and 6 months after transplantation. The presence of viral infection was demonstrated by plaque assays, cell proliferation by [3H]thymidine autoradiography, and vascular wall alterations by quantitative histology and immunohistochemistry. Triple drug immunosuppression reduced the presence of infectious virus in plaque assays and induced early latency of viral infection. It significantly reduced the peak adventitial inflammatory response (P < 0.05) and reduced and delayed intimal nuclear intensity and intimal thickening (P < 0.05) in RCMV-infected allografts. The proliferative response of smooth muscle cells was reduced by triple drug immunosuppression to 50% of that observed in nonimmunosuppressed RCMV-infected allografts, but the proliferative peak response was still seen at 1 month. Only low-level immune activation, i.e., expression of the interleukin-2 receptor (P < 0.05) and MHC class II, was observed under triple drug immunosuppression in the adventitia of RCMV-infected allografts, whereas there was no substantial change in the phenotypic distribution of inflammatory cells. In conclusion, although RCMV infection significantly enhances allograft arteriosclerosis also in immunosuppressed allografts, triple drug immunosuppression has no additional detrimental effect but rather a protective one on vascular wall histology. These results further suggest that RCMV-enhanced allograft arteriosclerosis may be an immunopathological condition linked to the host immune response toward the graft and
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure
NASA Astrophysics Data System (ADS)
Shin, Frances B.; Kil, David H.; Dobeck, Gerald J.
1997-07-01
In distributed underwater signal processing for area surveillance and sanitization during regional conflicts, it is often necessary to transmit raw imagery data to a remote processing station for detection-report confirmation and more sophisticated automatic target recognition (ATR) processing. Because of the limited bandwidth available for transmission, image compression is of paramount importance. At the same time, preservation of useful information that contains essential signal attributes is crucial for effective mine detection and classification in shallow water. In this paper, we present an integrated processing strategy that combines image compression and ATR algorithms for superior detection performance while achieving maximal bandwidth reduction. Our reduced-dimension image compression algorithm comprises image-content classification for the subimage-specific transformation, principal component analysis for further dimension reduction, and vector quantization to obtain a minimal information state. Next, using an integrated pattern recognition paradigm, our ATR algorithm optimally combines low-dimensional features and an appropriate classifier topology to extract maximum recognition performance from reconstructed images. Instead of assessing performance of the image compression algorithm in terms of commonly used peak signal-to-noise ratio or normalized mean-squared error criteria, we quantify our algorithm performance using a metric that reflects human and operational factors - ATR performance. Our preliminary analysis based on high-frequency sonar real data indicates that we can achieve a compression ratio of up to 57:1 with minimal sacrifice in PD and PFA. Furthermore, we discuss the concept of the classification Cramer-Rao bound in terms of data compression, sufficient statistics, and class separability to quantify the extent to which a classifier approximates the Bayes classifier.
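The principal-component stage of the compression chain described above can be sketched as follows. This is illustrative only: random data stand in for flattened subimage blocks, and the image-content classification and vector quantization stages are omitted.

```python
import numpy as np

def pca_compress(blocks, n_components):
    """blocks: (num_blocks, block_dim) matrix of flattened subimages.
    Returns reduced codes, the projection matrix, and the mean block."""
    mean = blocks.mean(axis=0)
    X = blocks - mean
    # Principal axes = top right-singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_components]                 # (n_components, block_dim)
    codes = X @ W.T                       # reduced representation
    return codes, W, mean

def pca_reconstruct(codes, W, mean):
    """Lossy reconstruction of the blocks from their reduced codes."""
    return codes @ W + mean

rng = np.random.default_rng(0)
blocks = rng.standard_normal((500, 64))   # e.g., 8x8 pixel blocks, flattened
codes, W, mean = pca_compress(blocks, n_components=16)
recon = pca_reconstruct(codes, W, mean)   # 4x dimension reduction per block
```

In the full pipeline the low-dimensional `codes` would then be vector-quantized to reach the final bit rate, and the ATR classifier would operate on images rebuilt from those quantized codes.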
Wright, H F; Hall, S; Hames, A; Hardiman, J; Mills, R; Mills, D S
2015-08-01
This study describes the impact of pet dogs on stress of primary carers of children with Autism Spectrum Disorder (ASD). Stress levels of 38 primary carers acquiring a dog and 24 controls not acquiring a dog were sampled at: Pre-intervention (17 weeks before acquiring a dog), post-intervention (3-10 weeks after acquisition) and follow-up (25-40 weeks after acquisition), using the Parenting Stress Index. Analysis revealed significant improvements in the intervention compared to the control group for Total Stress, Parental Distress and Difficult Child. A significant number of parents in the intervention group moved from clinically high to normal levels of Parental Distress. The results highlight the potential of pet dogs to reduce stress in primary carers of children with an ASD. PMID:25832799
Kirabo, Annet; Park, Sung O.; Wamsley, Heather L.; Gali, Meghanath; Baskin, Rebekah; Reinhard, Mary K.; Zhao, Zhizhuang J.; Bisht, Kirpal S.; Keserű, György M.; Cogle, Christopher R.; Sayeski, Peter P.
2013-01-01
Philadelphia chromosome–negative myeloproliferative neoplasms, including polycythemia vera, essential thrombocytosis, and myelofibrosis, are disorders characterized by abnormal hematopoiesis. Among these myeloproliferative neoplasms, myelofibrosis has the most unfavorable prognosis. Furthermore, currently available therapies for myelofibrosis have little to no efficacy in the bone marrow and hence, are palliative. We recently developed a Janus kinase 2 (Jak2) small molecule inhibitor called G6 and found that it exhibits marked efficacy in a xenograft model of Jak2-V617F–mediated hyperplasia and a transgenic mouse model of Jak2-V617F–mediated polycythemia vera/essential thrombocytosis. However, its efficacy in Jak2-mediated myelofibrosis has not previously been examined. Here, we hypothesized that G6 would be efficacious in Jak2-V617F–mediated myelofibrosis. To test this, mice expressing the human Jak2-V617F cDNA under the control of the vav promoter were administered G6 or vehicle control solution, and efficacy was determined by measuring parameters within the peripheral blood, liver, spleen, and bone marrow. We found that G6 significantly reduced extramedullary hematopoiesis in the liver and splenomegaly. In the bone marrow, G6 significantly reduced pathogenic Jak/STAT signaling by 53%, megakaryocytic hyperplasia by 70%, and the Jak2 mutant burden by 68%. Furthermore, G6 significantly improved the myeloid to erythroid ratio and significantly reversed the myelofibrosis. Collectively, these results indicate that G6 is efficacious in Jak2-V617F–mediated myelofibrosis, and given its bone marrow efficacy, it may alter the natural history of this disease. PMID:22796437
Faridi, Mohd Hafeez; Altintas, Mehmet M.; Gomez, Camilo; Duque, Juan Camilo; Vazquez-Padron, Roberto I.; Gupta, Vineet
2013-01-01
BACKGROUND CD11b/CD18 is a key adhesion receptor that mediates leukocyte adhesion, migration and immune functions. We recently identified novel compounds, leukadherins, that allosterically enhance CD11b/CD18-dependent cell adhesion and reduce inflammation in vivo, suggesting integrin activation to be a novel mechanism of action for the development of anti-inflammatory therapeutics. Since a number of well-characterized anti-CD11b/CD18 activating antibodies are currently available, we wondered if such biological agonists could also become therapeutic leads following this mechanism of action. METHODS We compared the two types of agonists using in vitro cell adhesion and wound-healing assays and using animal model systems. We also studied effects of the two types of agonists on outside-in signaling in treated cells. RESULTS Both types of agonists similarly enhanced integrin-mediated cell adhesion and decreased cell migration. However, unlike leukadherins, the activating antibodies produced significant CD11b/CD18 macro clustering and induced phosphorylation of key proteins involved in outside-in signaling. Studies using conformation reporter antibodies showed that leukadherins did not induce global conformational changes in CD11b/CD18 explaining the reason behind their lack of ligand-mimetic outside-in signaling. In vivo, leukadherins reduced vascular injury in a dose-dependent fashion, but, surprisingly, the anti-CD11b activating antibody ED7 was ineffective. CONCLUSIONS Our results suggest that small molecule allosteric agonists of CD11b/CD18 have clear advantages over the biologic activating antibodies and provide a mechanistic basis for the difference. GENERAL SIGNIFICANCE CD11b/CD18 activation represents a novel strategy for reducing inflammatory injury. Our study establishes small molecule leukadherins as preferred agonists over activating antibodies for future development as novel anti-inflammatory therapeutics. PMID:23454649
Esmaeili, M; Ghaedi, K; Shoaraye Nejati, A; Nematollahi, M; Shiralyian, H; Nasr-Esfahani, M H
2016-01-15
Peroxisomes are specialized cellular organelles that perform a variety of metabolic functions, including fatty acid oxidation and free radical elimination. The abundance of these flexible organelles varies in response to different environmental stimuli. It has been demonstrated that PEX11β, a peroxisomal membrane elongation factor, is involved in the regulation of the size, shape and number of peroxisomes. To investigate the role of PEX11β in neural differentiation of mouse embryonic stem cells (mESCs), we generated a stably transduced mESC line that drives the expression of a short hairpin RNA against the Pex11β gene following doxycycline (Dox) induction. Knock-down of Pex11β during neural differentiation significantly reduced the expression of neural progenitor cell and mature neuronal markers (p<0.05), indicating that decreased expression of PEX11β suppresses neuronal maturation. Additionally, mRNA levels of other peroxisome-related genes such as PMP70, Pex11α, Catalase, Pex19 and Pex5 were also significantly reduced by Pex11β knock-down (p<0.05). Interestingly, pretreatment of transduced mESCs with the peroxisome proliferator-activated receptor γ agonist pioglitazone (Pio) ameliorated the inhibitory effects of Pex11β knock-down on neural differentiation. Pio also significantly (p<0.05) increased the expression of neural progenitor and mature neuronal markers, as well as the expression of peroxisomal genes, in transduced mESCs. These results elucidate the importance of Pex11β expression in neural differentiation of mESCs, thereby highlighting the essential role of peroxisomes in mammalian neural differentiation. The observation that Pio recovered peroxisomal function and improved neural differentiation of Pex11β knocked-down mESCs suggests a potential new pharmacological application of Pio for neurogenesis in patients with peroxisomal defects. PMID:26562432
Herbert, Alex D; Carr, Antony M; Hoffmann, Eva
2014-01-01
Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ. PMID:25478967
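The core training idea described above can be sketched as a parameter search scored against human assignments across all training images, not just one. This is a minimal illustrative sketch, not the actual FindFoci code; the `detect` callback and the parameter names (`blur`, `thresh`) are assumptions standing in for the plugin's real parameter space.

```python
from itertools import product

def best_params(images, truths, detect, blur_opts, thresh_opts):
    """Return the (blur, thresh) pair minimizing total focus-count error
    against human ("ground truth") counts over ALL training images."""
    best, best_err = None, float("inf")
    for blur, thresh in product(blur_opts, thresh_opts):
        err = sum(abs(detect(img, blur, thresh) - t)
                  for img, t in zip(images, truths))
        if err < best_err:
            best, best_err = (blur, thresh), err
    return best
```

Scoring over the whole training set is what distinguishes this from the single-image tuning the authors show to generalize poorly.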
Grulich-Henn, J.; Lichtenstein, S.; Hörster, F.; Hoffmann, G. F.; Nawroth, P. P.; Hamann, A.
2011-01-01
Background. Metabolic risk factors like insulin resistance and dyslipidemia are frequently observed in severely obese children. We investigated the hypothesis that moderate weight reduction achieved by a low-threshold intervention is already sufficient to reduce insulin resistance and cardiovascular risk factors in severely obese children. Methods. A group of 58 severely obese children and adolescents between 8 and 17 years participating in a six-month outpatient program was studied before and after treatment. The program included behavioral treatment, dietary education and specific physical training. Metabolic parameters were measured in the fasting state; insulin resistance was evaluated in an oral glucose tolerance test. Results. Mean standard deviation score of the body mass index (SDS-BMI) in the study group dropped significantly from +2.5 ± 0.5 to +2.3 ± 0.6 (P < 0.0001) after participation in the program. A significant decrease was observed in HOMA (6.3 ± 4.2 versus 4.9 ± 2.4, P < 0.03) and in peak insulin levels (232.7 ± 132.4 versus 179.2 ± 73.3 μU/mL, P < 0.006). Significant reductions were also observed in mean levels of hemoglobin A1c, total cholesterol and LDL cholesterol. Conclusions. These data demonstrate that even moderate weight reduction is able to decrease insulin resistance and dyslipidemia in severely obese children and adolescents. PMID:21904547
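The "HOMA" index reported above is presumably HOMA-IR, whose standard formula is fasting insulin (μU/mL) times fasting glucose (mmol/L) divided by 22.5. A minimal sketch; the example values are hypothetical illustrations, not study data.

```python
def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mmol_L):
    """Homeostasis model assessment of insulin resistance (HOMA-IR):
    fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_mL * fasting_glucose_mmol_L / 22.5

# Hypothetical pre/post-intervention values (not from the study):
pre = homa_ir(28.0, 5.0)    # ~6.22
post = homa_ir(22.0, 5.0)   # ~4.89
```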
Hoogsteen, Ilse J., E-mail: i.hoogsteen@rther.umcn.nl; Pop, Lucas A.M.; Marres, Henri A.M.; Hoogen, Franciscus J.A. van den; Kaanders, Johannes H.A.M.
2006-01-01
Purpose: To evaluate the prognostic significance of hemoglobin (Hb) levels measured before and during treatment with accelerated radiotherapy with carbogen and nicotinamide (ARCON). Methods and Materials: Two hundred fifteen patients with locally advanced tumors of the head and neck were included in a phase II trial of ARCON. This treatment regimen combines accelerated radiotherapy for reduction of repopulation with carbogen breathing and nicotinamide to reduce hypoxia. In these patients, Hb levels were measured before, during, and after radiotherapy. Results: Preirradiation and postirradiation Hb levels were available for 206 and 195 patients respectively. Hb levels below normal were most frequently seen among patients with T4 (p < 0.001) and N2 (p < 0.01) disease. Patients with a larynx tumor had significantly higher Hb levels (p < 0.01) than other tumor sites. During radiotherapy, 69 patients experienced a decrease in Hb level. In a multivariate analysis there was no prognostic impact of Hb level on locoregional control, disease-free survival, and overall survival. Primary tumor site was independently prognostic for locoregional control (p = 0.018), and gender was the only prognostic factor for disease-free and overall survival (p < 0.05). High locoregional control rates were obtained for tumors of the larynx (77%) and oropharynx (72%). Conclusion: Hemoglobin level was not found to be of prognostic significance for outcome in patients with squamous cell carcinoma of the head and neck after oxygen-modifying treatment with ARCON.
Mitchell, Pamela H.; Veith, Richard C.; Becker, Kyra J.; Buzaitis, Ann; Cain, Kevin C.; Fruin, Michael; Tirschwell, David; Teri, Linda
2009-01-01
Background and Purpose Depression following stroke is prevalent, diminishing recovery and quality of life. Brief behavioral intervention, adjunctive to antidepressant therapy, has not been well evaluated for long-term efficacy in those with post-stroke depression. Methods 101 clinically depressed ischemic stroke patients within four months of index stroke were randomly assigned to an 8-week brief psychosocial-behavioral intervention plus antidepressant or usual care, including antidepressant. Primary endpoint was reduction in depressive symptom severity at 12 months following entry. Results Hamilton Rating Scale for Depression (HRSD) raw score in the intervention group was significantly lower immediately post-treatment (p < 0.001) and at 12 months (p = 0.05) compared to controls. Remission (HRSD < 10) was significantly greater immediately post-treatment and at 12 months in the intervention group compared to usual care control. The mean percent decrease (47% ± 26% intervention versus 32% ± 36% control, p = 0.02) and the mean absolute decrease (-9.2 ± 5.7 intervention versus -6.2 ± 6.4 control, p = 0.023) in HRSD at 12 months were clinically important and statistically significant in the intervention group compared to control. Conclusion A brief psychosocial/behavioral intervention is highly effective in reducing depression in both the short and long term. PMID:19661478
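The three outcome measures used above (absolute change, percent decrease, and remission against the HRSD < 10 cutoff) reduce to simple arithmetic; a minimal sketch with a hypothetical patient, not study data:

```python
def hrsd_outcome(baseline, followup, remission_cutoff=10):
    """Absolute change, percent decrease, and remission flag for HRSD scores."""
    absolute = followup - baseline                           # negative = improvement
    percent_decrease = 100.0 * (baseline - followup) / baseline
    return absolute, percent_decrease, followup < remission_cutoff

# Hypothetical patient: baseline HRSD 20, 12-month HRSD 9
# -> absolute -11, percent decrease 55.0, remission True
```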
Kurien, Biji T; Harris, Valerie M; Quadri, Syed M S; Coutinho-de Souza, Patricia; Cavett, Joshua; Moyer, Amanda; Ittiq, Bilal; Metcalf, Angela; Ramji, Husayn F; Truong, Dat; Kumar, Ramesh; Koelsch, Kristi A; Centola, Mike; Payne, Adam; Danda, Debashish; Scofield, R Hal
2015-01-01
Objectives Commercial curcumin (CU), derived from food spice turmeric (TU), has been widely studied as a potential therapeutic for a variety of oncological and inflammatory conditions. Lack of solubility/bioavailability has hindered curcumin's therapeutic efficacy in human diseases. We have solubilised curcumin in water applying heat/pressure, obtaining up to 35-fold increase in solubility (ultrasoluble curcumin (UsC)). We hypothesised that UsC or ultrasoluble turmeric (UsT) will ameliorate systemic lupus erythematosus (SLE) and Sjögren's syndrome (SS)-like disease in MRL-lpr/lpr mice. Methods Eighteen female MRL-lpr/lpr (6 weeks old) and 18 female MRL-MpJ mice (6 weeks old) were used. Female MRL-lpr/lpr mice develop lupus-like disease at the 10th week and die at an average age of 17 weeks. MRL-MpJ mice develop lupus-like disease around 47 weeks and typically die at 73 weeks. Six mice of each strain received autoclaved water only (lpr-water or MpJ-water group), UsC (lpr-CU or MpJ-CU group) or UsT (lpr-TU or MpJ-TU group) in the water bottle. Results UsC or UsT ameliorates SLE in the MRL-lpr/lpr mice by significantly reducing lymphoproliferation, proteinuria, lesions (tail) and autoantibodies. lpr-CU group had a 20% survival advantage over lpr-water group. However, lpr-TU group lived an average of 16 days shorter than lpr-water group due to complications unrelated to lupus-like illness. CU/TU treatment inhibited lymphadenopathy significantly compared with lpr-water group (p=0.03 and p=0.02, respectively) by induction of apoptosis. Average lymph node weights were 2606±1147, 742±331 and 385±68 mg, respectively, for lpr-water, lpr-CU and lpr-TU mice. Transferase dUTP nick end labelling assay showed that lymphocytes in lymph nodes of lpr-CU and lpr-TU mice underwent apoptosis. Significantly reduced cellular infiltration of the salivary glands in the lpr-TU group compared with the lpr-water group, and a trend towards reduced kidney damage was observed in
Musavian, Hanieh S; Krebs, Niels H; Nonboe, Ulf; Corry, Janet E L; Purnell, Graham
2014-04-17
Steam or hot water decontamination treatment of broiler carcasses is hampered by process limitations due to prolonged treatment times and adverse changes to the epidermis. In this study, a combination of steam with ultrasound (SonoSteam®) was investigated on naturally contaminated broilers that were processed at conventional slaughter speeds of 8,500 birds per hour in a Danish broiler plant. Industrial-scale SonoSteam equipment was installed in the evisceration room, before the inside/outside carcass washer. The SonoSteam treatment was evaluated in two separate trials performed on two different dates. Numbers of naturally occurring Campylobacter spp. and TVC were determined from paired samples of skin excised from opposite sides of the breast of the same carcass, before and after treatments. Sampling was performed at two different points on the line: i) before and after the SonoSteam treatment and ii) before the SonoSteam treatment and after 80 min of air chilling. A total of 44 carcasses were examined in the two trials. Results from the first trial showed that the mean initial Campylobacter contamination level of 2.35 log₁₀ CFU was significantly reduced (n=12, p<0.001) to 1.40 log₁₀ CFU after treatment. A significant reduction (n=11, p<0.001) was also observed with samples analyzed before SonoSteam treatment (2.64 log₁₀ CFU) and after air chilling (1.44 log₁₀ CFU). In the second trial, significant reductions (n=10, p<0.05) were obtained for carcasses analyzed before (mean level of 2.23 log₁₀ CFU) and after the treatment (mean level of 1.36 log₁₀ CFU). Significant reductions (n=11, p<0.01) were also found for Campylobacter numbers analyzed before the SonoSteam treatment (2.02 log₁₀ CFU) and after the air chilling treatment (1.37 log₁₀ CFU). The effect of air chilling without SonoSteam treatment was determined using 12 carcasses pre- and postchill. Results showed insignificant reductions of 0.09 log₁₀ from a mean initial level of
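The reductions above are expressed in log10 CFU, so a "2.35 to 1.40" change is a 0.95-log10 (roughly 9-fold) reduction. A minimal sketch of the conversion, assuming paired counts are available in raw CFU:

```python
import math

def log10_reduction(cfu_before, cfu_after):
    """Log10 reduction between paired counts given in raw CFU."""
    return math.log10(cfu_before) - math.log10(cfu_after)

# When mean levels are already reported in log10 CFU (as in the trials above),
# the reduction is just a difference, e.g. 2.35 - 1.40 = 0.95 log10.
```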
NASA Technical Reports Server (NTRS)
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. 
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW
Oguntibeju, Oluwafemi O.; Meyer, Samantha; Aboua, Yapo G.; Goboza, Mediline
2016-01-01
Background. Hypoxis hemerocallidea is a native plant that grows in the Southern African regions and is well known for its beneficial medicinal effects in the treatment of diabetes, cancer, and high blood pressure. Aim. This study evaluated the effects of Hypoxis hemerocallidea on oxidative stress biomarkers, hepatic injury, and other selected biomarkers in the liver and kidneys of healthy nondiabetic and streptozotocin- (STZ-) induced diabetic male Wistar rats. Materials and Methods. Rats were injected intraperitoneally with 50 mg/kg of STZ to induce diabetes. The plant extract-Hypoxis hemerocallidea (200 mg/kg or 800 mg/kg) aqueous solution was administered (daily) orally for 6 weeks. Antioxidant activities were analysed using a Multiskan Spectrum plate reader while other serum biomarkers were measured using the RANDOX chemistry analyser. Results. Both dosages (200 mg/kg and 800 mg/kg) of Hypoxis hemerocallidea significantly reduced the blood glucose levels in STZ-induced diabetic groups. Activities of liver enzymes were increased in the diabetic control and in the diabetic group treated with 800 mg/kg, whereas the 200 mg/kg dosage ameliorated hepatic injury. In the hepatic tissue, the oxygen radical absorbance capacity (ORAC), ferric reducing antioxidant power (FRAP), catalase, and total glutathione were reduced in the diabetic control group. However treatment with both doses improved the antioxidant status. The FRAP and the catalase activities in the kidney were elevated in the STZ-induced diabetic group treated with 800 mg/kg of the extract possibly due to compensatory responses. Conclusion. Hypoxis hemerocallidea demonstrated antihyperglycemic and antioxidant effects especially in the liver tissue. PMID:27403200
Papoiu, Alexandru DP; Chaudhry, Hunza; Hayes, Erin C; Chan, Yiong-Huak; Herbst, Kenneth D
2015-01-01
Background Itch is one of the most frequent skin complaints and its treatment is challenging. From a neurophysiological perspective, two distinct peripheral and spinothalamic pathways have been described for itch transmission: a histaminergic pathway and a nonhistaminergic pathway mediated by protease-activated receptors (PARs) 2 and 4. The nonhistaminergic itch pathway can be activated exogenously by spicules of cowhage, a tropical plant that releases a cysteine protease named mucunain that binds to and activates PAR2 and PAR4. Purpose This study was conducted to assess the antipruritic effect of a novel over-the-counter (OTC) steroid-free topical hydrogel formulation, TriCalm®, in reducing itch intensity and duration when itch was induced with cowhage, and to compare it with two other commonly used OTC anti-itch drugs. Study participants and methods This double-blinded, vehicle-controlled, randomized, crossover study recorded itch intensity and duration in 48 healthy subjects before and after skin treatment with TriCalm hydrogel, 2% diphenhydramine, 1% hydrocortisone, and hydrogel vehicle, used as a vehicle control. Results TriCalm hydrogel significantly reduced the peak intensity and duration of cowhage-induced itch when compared to the control itch curve, and was significantly superior to the two other OTC antipruritic agents and its own vehicle in antipruritic effect. TriCalm hydrogel was eight times more effective than 1% hydrocortisone and almost six times more effective than 2% diphenhydramine in antipruritic action, as evaluated by the reduction of area under the curve. Conclusion TriCalm hydrogel has a robust antipruritic effect against nonhistaminergic pruritus induced via the PAR2 pathway, and therefore it could represent a promising treatment option for itch. PMID:25941445
Rivinius, Rasmus; Helmschrott, Matthias; Ruhparwar, Arjang; Schmack, Bastian; Erbel, Christian; Gleissner, Christian A; Akhavanpoor, Mohammadreza; Frankenstein, Lutz; Darche, Fabrice F; Schweizer, Patrick A; Thomas, Dierk; Ehlermann, Philipp; Bruckner, Tom; Katus, Hugo A; Doesch, Andreas O
2016-01-01
Background Amiodarone is a frequently used antiarrhythmic drug in patients with end-stage heart failure. Given its long half-life, pre-transplant use of amiodarone has been controversially discussed, with divergent results regarding morbidity and mortality after heart transplantation (HTX). Aim The aim of this study was to investigate the effects of long-term use of amiodarone before HTX on early post-transplant atrial fibrillation (AF) and mortality after HTX. Methods Five hundred and thirty patients (age ≥18 years) receiving HTX between June 1989 and December 2012 were included in this retrospective single-center study. Patients with long-term use of amiodarone before HTX (≥1 year) were compared to those without long-term use (none or <1 year of amiodarone). Primary outcomes were early post-transplant AF and mortality after HTX. The Kaplan–Meier estimator using log-rank tests was applied for freedom from early post-transplant AF and survival. Results Of the 530 patients, 74 (14.0%) received long-term amiodarone therapy, with a mean duration of 32.3±26.3 months. Mean daily dose was 223.0±75.0 mg. Indications included AF, Wolff–Parkinson–White syndrome, ventricular tachycardia, and ventricular fibrillation. Patients with long-term use of amiodarone before HTX had significantly lower rates of early post-transplant AF (P=0.0105). Further, Kaplan–Meier analysis of freedom from early post-transplant AF showed significantly lower rates of AF in this group (P=0.0123). There was no statistically significant difference between patients with and without long-term use of amiodarone prior to HTX in 1-year (P=0.8596), 2-year (P=0.8620), 5-year (P=0.2737), or overall follow-up mortality after HTX (P=0.1049). Moreover, Kaplan–Meier survival analysis showed no statistically significant difference in overall survival (P=0.1786). Conclusion Long-term use of amiodarone in patients before HTX significantly reduces early post-transplant AF and is not associated with
Hertel, Ole; Hvidberg, Martin; Ketzel, Matthias; Storm, Lars; Stausgaard, Lizzi
2008-01-15
A proper selection of route through the urban area may significantly reduce air pollution exposure; this is the main conclusion of the presented study. Air pollution exposure is determined for two selected cohorts along the route going from home to working place, and back from working place to home. Exposure is determined with a street pollution model for three scenarios: bicycling along the shortest possible route, bicycling along a low-exposure route along less trafficked streets, and taking the shortest trip using public transport. Furthermore, calculations are performed for trips taking place both inside and outside the traffic rush hours. The results show that the accumulated air pollution exposure for the low-exposure route is between 10% and 30% lower for the primary pollutants (NO(x) and CO). However, the difference is insignificant, and in some cases even negative, for the secondary pollutants (NO(2) and PM(10)/PM(2.5)). Considering only the contribution from traffic in the travelled streets, the accumulated air pollution exposure is between 54% and 67% lower for the low-exposure route. The bus generally follows highly trafficked streets, and the accumulated exposure along the bus route is therefore between 79% and 115% higher than that of the high-exposure bicycle route (the short bicycle route). Travelling outside the rush hour time periods reduces the accumulated exposure by between 10% and 30% for the primary pollutants, and between 5% and 20% for the secondary pollutants. The study indicates that a web-based route planner for selecting the low-exposure route through the city might be a good service for the public. In addition, the public may be advised to travel outside rush hour time periods. PMID:17936337
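The accumulated-exposure comparison above amounts to summing concentration times time spent over route segments. A minimal sketch of that bookkeeping; the segment concentrations and travel times below are hypothetical, and the study itself derived concentrations from a street pollution model:

```python
def accumulated_exposure(segments):
    """segments: iterable of (concentration, minutes) pairs.
    Returns the time-integrated exposure (concentration x minutes summed)."""
    return sum(conc * minutes for conc, minutes in segments)

# Hypothetical NOx concentrations (ug/m3) and travel times per street segment:
short_route = [(80.0, 5.0), (120.0, 10.0)]   # busy streets, shorter trip
low_route = [(40.0, 7.0), (60.0, 12.0)]      # quieter streets, longer trip
```

Even though the low-exposure route takes longer, its lower segment concentrations can yield a smaller time-integrated exposure, which is the trade-off the study quantifies.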
Liu, Gangjun; Tan, Ou; Gao, Simon S.; Pechauer, Alex D.; Lee, ByungKun; Lu, Chen D.; Fujimoto, James G.; Huang, David
2015-01-01
We propose methods to align interferograms affected by trigger jitter to a reference interferogram based on the information (amplitude/phase) at a fixed-pattern noise location to reduce residual fixed-pattern noise and improve the phase stability of swept source optical coherence tomography (SS-OCT) systems. One proposed method achieved this by introducing a wavenumber shift (k-shift) in the interferograms of interest and searching for the k-shift that minimized the fixed-pattern noise amplitude. The other method calculated the relative k-shift using the phase information at the residual fixed-pattern noise location. Repeating this wavenumber alignment procedure for all A-lines of interest produced fixed-pattern noise free and phase stable OCT images. A system incorporating these correction routines was used for human retina OCT and Doppler OCT imaging. The results from the two methods were compared, and it was found that the intensity-based method provided better results. PMID:25969023
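The intensity-based method described above can be sketched as a search over integer wavenumber shifts that minimizes the residual fixed-pattern amplitude after background subtraction. This is a simplified illustration under stated assumptions: the shift is modeled as circular (real spectra are not), the fixed-pattern noise depth bin `fpn_bin` is known, and `reference` stands in for the averaged background interferogram.

```python
import numpy as np

def best_k_shift(interferogram, reference, fpn_bin, max_shift=5):
    """Return the integer k-shift that minimizes the residual fixed-pattern
    amplitude at depth bin `fpn_bin` after subtracting the reference."""
    best, best_amp = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        residual = np.roll(interferogram, s) - reference
        amp = np.abs(np.fft.fft(residual))[fpn_bin]
        if amp < best_amp:
            best, best_amp = s, amp
    return best
```

Repeating this per A-line, as the abstract describes, aligns all interferograms to the reference before image formation.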
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Anselmi, Mariella; Buonfrate, Dora; Guevara Espinoza, Angel; Prandi, Rosanna; Marquez, Monica; Gobbo, Maria; Montresor, Antonio; Albonico, Marco; Racines Orbe, Marcia; Bisoffi, Zeno
2015-01-01
Objectives To evaluate the effect of ivermectin mass drug administration on strongyloidiasis and other soil-transmitted helminthiases. Methods We conducted a retrospective analysis of data collected in Esmeraldas (Ecuador) during surveys conducted in areas where ivermectin was annually administered to the entire population for the control of onchocerciasis. Data from 5 surveys, conducted between 1990 (before the start of the distribution of ivermectin) and 2013 (six years after the interruption of the intervention), were analyzed. The surveys also comprised areas where ivermectin was not distributed because onchocerciasis was not endemic. Different laboratory techniques were used in the different surveys (direct fecal smear, formol-ether concentration, IFAT and IVD ELISA for Strongyloides stercoralis). Results In the areas where ivermectin was distributed, the strongyloidiasis prevalence fell from 6.8% in 1990 to zero in 1996 and 1999. In 2013, prevalence in children was zero with stool examination and 1.3% with serology; in adults, 0.7% and 2.7%. In areas not covered by ivermectin distribution, the prevalence was 23.5% and 16.1% in 1996 and 1999, respectively. In 2013 the prevalence was 0.6% with fecal exam and 9.3% with serology in children, and 2.3% and 17.9% in adults. Regarding other soil-transmitted helminthiases: in areas where ivermectin was distributed, the prevalence of T. trichiura was significantly reduced, while A. lumbricoides and hookworms were seemingly unaffected. Conclusions Periodic mass distribution of ivermectin had a significant impact on the prevalence of strongyloidiasis, less on trichuriasis, and apparently no effect on ascariasis and hookworm infections. PMID:26540412
Zhu, Li; Yang, Zhong-Cheng; Li, Ao; Cheng, De-Chang
2000-02-01
AIM: To investigate the changes of gastric acid production and its mechanism during the shock period of severe burn in rats. METHODS: A rat model with 30% TBSA full-thickness burn injury was employed, and gastric acid production, together with gastric mucosal blood flow (GMBF) and energy charge (EC), was measured serially within 48 h postburn. RESULTS: Gastric acid production in the acute shock period was markedly inhibited after severe burn injury. At the 3rd h postburn, the gastric juice volume, total acidity and acid output were already significantly decreased (P < 0.01), and reached the lowest point, 0.63 mL/L ± 0.20 mL/L, 10.81 mmol/L ± 2.58 mmol/L and 2.23 mmol/h ± 0.73 mmol/h respectively, at the 12th h postburn. Although restored to some degree 24 h after thermal injury, the variables above were still statistically lower, compared with those of control animals, at the 48th h postburn. The GMBF and EC were also significantly reduced after severe burns, consistent with the trend of gastric acid production changes. CONCLUSION: Gastric acid production, as well as GMBF and EC, was predominantly decreased in the early postburn stage, suggesting that gastric mucosal ischemia and hypoxia with resultant disturbance in energy metabolism, but not gastric acid proper, might be the decisive factor in the pathogenesis of acute gastric mucosal lesions (AGML) after thermal injury, and that the preventive use of anti-acid drugs during the burn shock period is unreasonable in some respects. Therefore, taking effective measures to improve gastric mucosal blood perfusion as early as possible postburn might be preferable for AGML prevention and treatment. PMID:11819529
Mai, Volker; Ukhanova, Maria; Reinhard, Mary K; Li, Manrong; Sulakvelidze, Alexander
2015-01-01
We used a mouse model to establish safety and efficacy of a bacteriophage cocktail, ShigActive™, in reducing fecal Shigella counts after oral challenge with a susceptible strain. Groups of inbred C57BL/6J mice challenged with Shigella sonnei strain S43-NalAcR were treated with a phage cocktail (ShigActive™) composed of 5 lytic Shigella bacteriophages and ampicillin. The treatments were administered (i) 1 h after, (ii) 3 h after, (iii) 1 h before and after, and (iv) 1 h before bacterial challenge. The treatment regimens elicited a 10- to 100-fold reduction in the CFUs of the challenge strain in fecal and cecum specimens compared to untreated control mice (P < 0.05). ShigActive™ treatment was at least as effective as treatment with ampicillin but had significantly less impact on the gut microbiota. Long-term safety studies did not identify any side effects or distortions in overall gut microbiota associated with bacteriophage administration. Shigella phages may be therapeutically effective in a “classical phage therapy” approach, at least during the early stages after Shigella ingestion. Oral prophylactic “phagebiotic” administration of lytic bacteriophages may help to maintain a healthy gut microbiota by killing specifically targeted bacterial pathogens in the GI tract, without deleterious side effects and without altering the normal gut microbiota. PMID:26909243
Marzano, Shin-Yi Lee; Hobbs, Houston A.; Nelson, Berlin D.; Hartman, Glen L.; Eastburn, Darin M.; McCoppin, Nancy K.
2015-01-01
ABSTRACT A recombinant strain of Sclerotinia sclerotiorum hypovirus 2 (SsHV2) was identified from a North American Sclerotinia sclerotiorum isolate (328) from lettuce (Lactuca sativa L.) by high-throughput sequencing of total RNA. The 5′- and 3′-terminal regions of the genome were determined by rapid amplification of cDNA ends. The assembled nucleotide sequence was up to 92% identical to two recently reported SsHV2 strains but contained a deletion near its 5′ terminus of more than 1.2 kb relative to the other SsHV2 strains and an insertion of 524 nucleotides (nt) that was distantly related to Valsa ceratosperma hypovirus 1. This suggests that the new isolate is a heterologous recombinant of SsHV2 with a yet-uncharacterized hypovirus. We named the new strain Sclerotinia sclerotiorum hypovirus 2 Lactuca (SsHV2L) and deposited the sequence in GenBank with accession number KF898354. Sclerotinia sclerotiorum isolate 328 was coinfected with a strain of Sclerotinia sclerotiorum endornavirus 1 and was debilitated compared to cultures of the same isolate that had been cured of virus infection by cycloheximide treatment and hyphal tipping. To determine whether SsHV2L alone could induce hypovirulence in S. sclerotiorum, a full-length cDNA of the 14,538-nt viral genome was cloned. Transcripts corresponding to the viral RNA were synthesized in vitro and transfected into a virus-free isolate of S. sclerotiorum, DK3. Isolate DK3 transfected with SsHV2L was hypovirulent on soybean and lettuce and exhibited delayed maturation of sclerotia relative to virus-free DK3, completing Koch's postulates for the association of hypovirulence with SsHV2L. IMPORTANCE A cosmopolitan fungus, Sclerotinia sclerotiorum infects more than 400 plant species and causes a plant disease known as white mold that produces significant yield losses in major crops annually. Mycoviruses have been used successfully to reduce losses caused by fungal plant pathogens, but definitive relationships between
Rashid, Mohammed H; Revazishvili, Tamara; Dean, Timothy; Butani, Amy; Verratti, Kathleen; Bishop-Lilly, Kimberly A; Sozhamannan, Shanmuga; Sulakvelidze, Alexander; Rajanna, Chythanya
2012-07-01
Five Y. pestis bacteriophages obtained from various sources were characterized to determine their biological properties, including their taxonomic classification, host range and genomic diversity. Four of the phages (YpP-G, Y, R and YpsP-G) belong to the Podoviridae family, and the fifth phage (YpsP-PST) belongs to the Myoviridae family, of the order Caudovirales comprising double-stranded DNA phages. The genomes of the four Podoviridae phages were fully sequenced and found to be almost identical to each other and to those of two previously characterized Y. pestis phages, Yepe2 and φA1122. However, despite their genomic homogeneity, they varied in their ability to lyse Y. pestis and Y. pseudotuberculosis strains. The five phages were combined to yield a "phage cocktail" (tentatively designated "YPP-100") capable of lysing all 59 Y. pestis strains in our collection. YPP-100 was examined for its ability to decontaminate three different hard surfaces (glass, gypsum board and stainless steel) experimentally contaminated with a mixture of three genetically diverse Y. pestis strains, CO92, KIM and 1670G. Five minutes of exposure to YPP-100 preparations containing phage concentrations of ca. 10(9), 10(8) and 10(7) PFU/mL completely eliminated all viable Y. pestis cells from all three surfaces, but a few viable cells were recovered from the stainless steel coupons treated with YPP-100 diluted to contain ca. 10(6) PFU/mL. However, even that highly diluted preparation significantly (p < 0.05) reduced Y. pestis levels by ≥ 99.97%. Our data support the idea that Y. pestis phages may be useful for decontaminating various hard surfaces naturally or intentionally contaminated with Y. pestis. PMID:23275868
Courtin, Fabrice; Camara, Mamadou; Rayaisse, Jean-Baptiste; Kagbadouno, Moise; Dama, Emilie; Camara, Oumou; Traoré, Ibrahima S.; Rouamba, Jérémi; Peylhard, Moana; Somda, Martin B.; Leno, Mamadou; Lehane, Mike J.; Torr, Steve J.; Solano, Philippe; Jamonneau, Vincent; Bucheton, Bruno
2015-01-01
Background Control of gambiense sleeping sickness, a neglected tropical disease targeted for elimination by 2020, relies mainly on mass screening of populations at risk and treatment of cases. This strategy is however challenged by the existence of undetected reservoirs of parasites that contribute to the maintenance of transmission. In this study, performed in the Boffa disease focus of Guinea, we evaluated the value of adding vector control to medical surveys and measured its impact on disease burden. Methods The focus was divided into two parts (screen and treat in the western part; screen and treat plus vector control in the eastern part) separated by the Rio Pongo river. Population census and baseline entomological data were collected from the entire focus at the beginning of the study, and insecticide-impregnated targets were deployed on the eastern bank only. Medical surveys were performed in both areas in 2012 and 2013. Findings In the vector control area, there was an 80% decrease in tsetse density, resulting in a significant decrease in human-tsetse contact, a decrease in disease prevalence (from 0.3% to 0.1%; p=0.01), and an almost nil incidence of new infections (<0.1%). In contrast, incidence was 10 times higher in the area without vector control (>1%, p<0.0001), while disease prevalence increased slightly (from 0.5% to 0.7%, p=0.34). Interpretation Combining medical surveys and vector control was decisive in reducing T. b. gambiense transmission and in speeding up progress towards elimination. Similar strategies could be applied in other foci. PMID:26267667
Dise, J; Liang, X; Lin, L; Teo, B
2014-06-15
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day-one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day-two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatically and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for manual digitization versus 91.4±4.4% for automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic
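The record above describes a seeded region-growing step on contrast-enhanced CT. A minimal, illustrative sketch of that idea is shown below: a plain 2-D flood fill with an intensity threshold, not the authors' spline-constrained 3-D implementation. The toy image, seed, and threshold are invented for demonstration.

```python
from collections import deque

import numpy as np


def region_grow(image, seeds, threshold):
    """Grow a region from seed pixels, accepting axis-aligned neighbors
    whose intensity is at least `threshold` (an illustrative stand-in
    for the catheter/soft-tissue contrast criterion)."""
    visited = np.zeros(image.shape, dtype=bool)
    queue = deque(seeds)
    region = []
    while queue:
        idx = queue.popleft()
        if visited[idx] or image[idx] < threshold:
            continue
        visited[idx] = True
        region.append(idx)
        # enumerate 4-connected neighbors (6-connected in 3-D)
        for axis in range(image.ndim):
            for step in (-1, 1):
                nbr = list(idx)
                nbr[axis] += step
                if 0 <= nbr[axis] < image.shape[axis]:
                    queue.append(tuple(nbr))
    return region


# toy 2-D "slice": a bright catheter-like column in dark tissue
img = np.zeros((5, 5))
img[:, 2] = 100.0
catheter = region_grow(img, [(0, 2)], threshold=50.0)
```

In the reported method, a spline fitted to the growing region would additionally reject neighbors that stray from the catheter track, which is what prevents inter-catheter cross-over.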
Ahmed, Mohammed; Singh, Ajay K; Mondal, Jahur A; Sarkar, Sisir K
2013-08-22
Water in the presence of electrolytes plays an important role in biological and industrial processes. The properties of water, such as intermolecular coupling, Fermi resonance (FR), hydrogen bonding, and the Raman cross section, were investigated by measuring Raman spectra in the OD and OH stretch regions in the presence of alkali halides (NaX; X = F, Cl, Br, I). It is observed that the changes in spectral characteristics upon the addition of NaX to D2O are similar to those obtained by the addition of H2O to D2O. The spectral width decreases more markedly upon the addition of NaX to D2O (H2O) than in the isotopically diluted water. Quantitative estimation, on the basis of integrated Raman intensity, revealed that the relative Raman cross section, σ(H)/σ(b) (where σ(H) and σ(b) are the average Raman cross sections of water in the first hydration shell of X(-) and in bulk, respectively), in D2O and H2O is higher than those in the respective isotopically diluted water. These results suggest that water in the hydration shell has reduced FR and intermolecular coupling compared to bulk water. In the isotopically diluted water, the relative Raman cross section increases with the size of the halide ion (σ(H)/σ(b) = 0.6, 1.1, 1.5, and 1.9 for F(-), Cl(-), Br(-), and I(-), respectively), which is assignable to the enhancement of the Raman cross section by charge transfer from the halide ions to the hydrating water. Nevertheless, the experimentally determined σ(H)/σ(b) is lower than the values calculated on the basis of the energy of the charge transfer state of water. The weak enhancement of σ(H)/σ(b) signifies that the charge transfer transition in the hydration shell of halide ions causes little change in the OD (OH) bond lengths of the hydrating water. PMID:23895453
A novel waveband routing algorithm in hierarchical WDM optical networks
NASA Astrophysics Data System (ADS)
Huang, Jun; Guo, Xiaojin; Qiu, Shaofeng; Luo, Jiangtao; Zhang, Zhizhong
2007-11-01
Hybrid waveband/wavelength switching in intelligent optical networks is gaining increasing academic attention. Developing algorithms that make efficient use of waveband switching capability is very challenging. In this paper, we propose a novel cross-layer routing algorithm, the waveband layered graph routing (WBLGR) algorithm, for waveband-switching-enabled optical networks. Extensive simulations show that the WBLGR algorithm can significantly improve performance in terms of reduced call blocking probability.
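The abstract gives no implementation detail, but layered-graph routing schemes are commonly realized by replicating the physical topology once per layer and running a shortest-path search over the combined auxiliary graph. The sketch below is a generic illustration under that assumption; the link costs, the `switch_cost` penalty for moving between layers, and the toy triangle topology are all hypothetical, not WBLGR itself.

```python
import heapq


def dijkstra(adj, src, dst):
    """Standard Dijkstra shortest path on a weighted adjacency dict."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]


def layered_graph(topology, n_layers, hop_cost=1.0, switch_cost=0.5):
    """Replicate the topology once per (waveband) layer; join the layer
    copies of each node with a hypothetical layer-switching cost."""
    adj = {}
    for layer in range(n_layers):
        for u, v in topology:
            adj.setdefault((u, layer), []).append(((v, layer), hop_cost))
            adj.setdefault((v, layer), []).append(((u, layer), hop_cost))
    nodes = {n for edge in topology for n in edge}
    for n in nodes:
        for a in range(n_layers):
            for b in range(n_layers):
                if a != b:
                    adj.setdefault((n, a), []).append(((n, b), switch_cost))
    return adj


links = [("A", "B"), ("B", "C"), ("A", "C")]
adj = layered_graph(links, n_layers=2)
route = dijkstra(adj, ("A", 0), ("C", 0))
```

A routing request then becomes a single shortest-path query on the layered graph, with the layer-switching edges modeling the cost of leaving or entering a waveband tunnel.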
Not Available
2005-05-01
This case study prepared for the U.S. Department of Energy's Industrial Technologies Program describes a plant-wide energy assessment conducted at the Metaldyne, Inc., forging plant in Royal Oak, Michigan. The assessment focused on reducing the plant's operating costs, inventory, and energy use. If the company were to implement all the recommendations that came out of the assessment, its total annual energy savings for electricity would be about 11.5 million kWh and annual cost savings would be $12.6 million.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
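As an illustration of how a start/stop control algorithm can change energy capture, here is a toy hysteresis-controller simulation. The power curve, wind series, and threshold values are invented for demonstration and bear no relation to the Sandia 17-m VAWT data; the point is only that an overly conservative start threshold forfeits production.

```python
def simulate_energy(wind, v_start, v_stop, power_curve):
    """Sum energy over an hourly wind-speed series under a simple
    hysteresis controller: start when wind >= v_start, stop when
    wind < v_stop (v_stop < v_start avoids start/stop chatter)."""
    running = False
    energy = 0.0
    for v in wind:
        if not running and v >= v_start:
            running = True
        elif running and v < v_stop:
            running = False
        if running:
            energy += power_curve(v)  # kW over 1 h -> kWh
    return energy


def curve(v):
    """Hypothetical power curve: cut-in at 4 m/s, rated at 10 kW."""
    return max(0.0, min(v - 4.0, 10.0))


wind = [3, 5, 6, 7, 5, 4, 6, 8, 9, 5]  # illustrative gusty series

good = simulate_energy(wind, v_start=5, v_stop=4, power_curve=curve)
bad = simulate_energy(wind, v_start=8, v_stop=7, power_curve=curve)
```

With these assumed numbers the conservative thresholds capture less than half the energy of the well-matched ones, mirroring the paper's conclusion that a bad algorithm choice can significantly reduce overall production.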
Igarashi, Hironaka; Suzuki, Yuji; Kwee, Ingrid L; Nakada, Tsutomu
2014-12-01
Recent studies on cerebrospinal fluid (CSF) homeostasis emphasize the importance of water influx into the peri-capillary (Virchow-Robin) space through aquaporin 4 (AQP-4). This water flow is believed to be functionally equivalent to the systemic lymphatic system and plays a critical role in beta-amyloid clearance. Using a newly developed molecular imaging technique capable of tracing water molecules in vivo, water influx into the CSF was quantitatively analyzed in senile plaque (SP)-bearing transgenic Alzheimer's disease (AD) model mice. The results unequivocally demonstrated that water influx into the CSF is significantly impaired in SP-bearing transgenic mice, to a degree virtually identical to that previously observed in AQP-4 knockout mice. The study strongly indicates that disturbance of AQP-4-based water flow and, hence, impairment of beta-amyloid clearance play a significant role in SP formation. PMID:25082552
Overview of an Algorithm Plugin Package (APP)
NASA Astrophysics Data System (ADS)
Linda, M.; Tilmes, C.; Fleig, A. J.
2004-12-01
Science software that runs operationally is fundamentally different from software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is largely similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect the common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.
Oron, Amir; Oron, Uri; Streeter, Jackson; De Taboada, Luis; Alexandrovich, Alexander; Trembovler, Victoria; Shohami, Esther
2012-01-20
Near-infrared transcranial laser therapy (TLT) has been found to modulate various biological processes, including those involved in traumatic brain injury (TBI). In this study, we assessed the ability of various near-infrared TLT modes (pulsed versus continuous) to produce a beneficial effect on the long-term neurobehavioral outcome and brain lesions of mice following TBI. TBI was induced by a weight-drop device, and neurobehavioral function was assessed from 1 h to 56 days post-trauma using the Neurological Severity Score (NSS). The extent of recovery is expressed as the difference in NSS (dNSS), the difference between the initial score and that at any later time point. An 808-nm Ga-Al-As diode laser was employed transcranially 4, 6, or 8 h post-trauma to illuminate the entire cortex of the brain. Mice were divided into several groups of 6-8 mice: one control group that received a sham treatment and experimental groups that received either TLT continuous wave (CW) or pulsed wave (PW) mode transcranially. MRI was performed prior to sacrifice at 56 days post-injury. From 5 to 28 days post-TBI, the NSS of the laser-treated mice were significantly lower (p<0.05) than those of the non-laser-treated control mice. The percentage of surviving mice that demonstrated full recovery at 56 days post-injury (NSS=0, as in intact mice) was highest (63%) in the group that had received TLT in the PW mode at 100 Hz. In addition, magnetic resonance imaging (MRI) analysis demonstrated significantly smaller infarct lesion volumes in laser-treated mice compared to controls. Our data suggest that non-invasive TLT of mice post-TBI provides a significant long-term functional neurological benefit, and that the pulsed laser mode at 100 Hz is the preferred mode for such treatment. PMID:22040267
Sinn, Brandon T; Kelly, Lawrence M; Freudenstein, John V
2015-08-01
The drivers of angiosperm diversity have long been sought, and the flower-arthropod association has often been invoked as the most powerful driver of the angiosperm radiation. We now know that features that influence arthropod interactions can not only affect the diversification of lineages, but also expedite or constrain their rate of extinction, which can equally influence the observed asymmetric richness of extant angiosperm lineages. The genus Asarum (Aristolochiaceae; ∼100 species) is widely distributed in north temperate forests, with substantial vegetative and floral divergence between its three major clades, Euasarum, Geotaenium, and Heterotropa. We used Binary-State Speciation and Extinction (BiSSE) model net diversification tests of character state distributions on a maximum likelihood phylogram and a coalescent Bayesian species tree, inferred from seven chloroplast markers and nuclear rDNA, to test for signals of asymmetric diversification, character state transition, and extinction rates of floral and vegetative characters. We found that reduction in vegetative growth, loss of autonomous self-pollination, and the presence of putative fungal-mimicking floral structures are significantly correlated with increased diversification in Asarum. No significant difference in model likelihood was identified between symmetric and asymmetric rates of character state transitions or extinction. We conclude that the flowers of the Heterotropa clade may have converged on some aspects of basidiomycete sporocarp morphology and that brood-site mimicry, coupled with a reduction in vegetative growth and the loss of autonomous self-pollination, may have driven diversification within Asarum. PMID:25937558
Peron, Jean Pierre Schatzmann; de Brito, Auriléia Aparecida; Pelatti, Mayra; Brandão, Wesley Nogueira; Vitoretti, Luana Beatriz; Greiffo, Flávia Regina; da Silveira, Elaine Cristina; Oliveira-Junior, Manuel Carneiro; Maluf, Mariangela; Evangelista, Lucila; Halpern, Silvio; Nisenbaum, Marcelo Gil; Perin, Paulo; Czeresnia, Carlos Eduardo; Câmara, Niels Olsen Saraiva; Aimbire, Flávio; Vieira, Rodolfo de Paula; Zatz, Mayana; Ligeiro de Oliveira, Ana Paula
2015-01-01
Cigarette smoke-induced chronic obstructive pulmonary disease (COPD) is a very debilitating disease with a very high prevalence worldwide, resulting in a substantial economic and social burden. Therefore, new therapeutic approaches to treat these patients are of unquestionable relevance. The use of mesenchymal stromal cells (MSCs) is an innovative and accessible approach for acute and chronic pulmonary diseases, mainly owing to their important immunoregulatory, anti-fibrogenic, anti-apoptotic and pro-angiogenic properties. In addition, adjuvant therapies intended to boost or synergize with their function should be tested. Low level laser (LLL) therapy is a relatively new and promising approach, with very low cost, no invasiveness and no side effects. Here, we aimed to study the effectiveness of human tube-derived MSC (htMSC) cell therapy associated with 30 mW/3 J, 660 nm LLL irradiation in experimental cigarette smoke-induced chronic obstructive pulmonary disease. C57BL/6 mice were exposed to cigarette smoke for 75 days (twice a day) and all experiments were performed on day 76. Experimental groups received htMSCs either intraperitoneally or intranasally and/or LLL irradiation, either alone or in combination. We show that the co-therapy greatly reduces lung inflammation, lowering the cellular infiltrate and pro-inflammatory cytokine secretion (IL-1β, IL-6, TNF-α and KC), followed by decreased mucus production, collagen accumulation and tissue damage. These findings appeared to be secondary to the reduction of both NF-κB and NF-AT activation in lung tissues, with a concomitant increase in IL-10. In summary, our data suggest that the concomitant use of MSCs + LLLT may be a promising therapeutic approach for lung inflammatory diseases such as COPD. PMID:26322981
Singh, Jagdeep K.; Farnie, Gillian; Bundred, Nigel J.; Simões, Bruno M; Shergill, Amrita; Landberg, Göran; Howell, Sacha; Clarke, Robert B.
2012-01-01
Purpose Breast cancer stem-like cells (CSCs) are an important therapeutic target as they are predicted to be responsible for tumour initiation, maintenance and metastases. Interleukin-8 (IL-8) is upregulated in breast cancer and associated with poor prognosis. Breast cancer cell line studies indicate that IL-8 via its cognate receptors, CXCR1 and CXCR2, is important in regulating breast CSC activity. We investigated the role of IL-8 in the regulation of CSC activity using patient-derived breast cancers and determined the potential benefit of combining CXCR1/2 inhibition with HER2-targeted therapy. Experimental design CSC activity of metastatic and invasive human breast cancers (n=19) was assessed ex vivo using the mammosphere colony forming assay. Results Metastatic fluid IL-8 level correlated directly with mammosphere formation (r=0.652; P<0.05; n=10). Recombinant IL-8 directly increased mammosphere formation/self-renewal in metastatic and invasive breast cancers (n=17). IL-8 induced activation of EGFR/HER2 and downstream signalling pathways and effects were abrogated by inhibition of SRC, EGFR/HER2, PI3K or MEK. Furthermore, lapatinib inhibited the mammosphere-promoting effect of IL-8 in both HER2-positive and negative patient-derived cancers. CXCR1/2 inhibition also blocked the effect of IL-8 on mammosphere formation and added to the efficacy of lapatinib in HER2-positive cancers. Conclusions These studies establish a role for IL-8 in the regulation of patient-derived breast CSC activity and demonstrate that IL-8/CXCR1/2 signalling is partly mediated via a novel SRC and EGFR/HER2-dependent pathway. Combining CXCR1/2 inhibitors with current HER2-targeted therapies has potential as an effective therapeutic strategy to reduce CSC activity in breast cancer and improve the survival of HER2-positive patients. PMID:23149820
Junka, Adam F; Szymczyk, Patrycja; Secewicz, Anna; Pawlak, Andrzej; Smutnicka, Danuta; Ziółkowski, Grzegorz; Bartoszewicz, Marzenna; Chlebus, Edward
2016-01-01
In our previous work we reported the impact of the hydrofluoric and nitric acid used for chemical polishing of Ti-6Al-7Nb scaffolds on reducing the number of biofilm-forming Staphylococcus aureus cells. Here, we tested the impact of the aforementioned substances on biofilm of a Gram-negative microorganism, Pseudomonas aeruginosa, a dangerous pathogen responsible for a plethora of implant-related infections. The Ti-6Al-7Nb scaffolds were manufactured using the Selective Laser Melting method. Scaffolds were subjected to chemical polishing using a mixture of nitric acid and fluoride or left intact (control group). Pseudomonal biofilm was allowed to form on the scaffolds for 24 hours and was removed by mechanical vortex shaking. The number of pseudomonal cells was estimated by means of quantitative culture and Scanning Electron Microscopy. The presence of nitric acid and fluoride on the scaffold surfaces was assessed by means of IR and X-ray spectroscopy. Quantitative data were analysed using the Mann-Whitney test (P ≤ 0.05). Our results indicate that the application of chemical polishing correlates with a significant drop in biofilm-forming pseudomonal cells on the manufactured Ti-6Al-7Nb scaffolds (p = 0.0133, Mann-Whitney test) compared to the number of biofilm-forming cells on non-polished scaffolds. As X-ray photoelectron spectroscopy revealed the presence of fluoride and nitrogen on the scaffold surface, we speculate that the drop in biofilm-forming cells may be caused by the biofilm-suppressing activity of these two elements. PMID:27150429
Phillpotts, R J; Lescott, T; Gates, A J; Jones, L
2000-01-01
Although it is unlikely that large-scale vaccination against smallpox will ever be required again, it is conceivable that the need may arise to vaccinate against a human orthopoxvirus infection. A possible example could be the emergence of monkeypox virus (MPV) as a significant human disease in Africa. Vaccinia virus (VV) recombinants, genetically modified to carry immunogenic proteins of other pathogenic organisms, have potential use as vaccines against other diseases present in this region. The immune response to parental wild-type (wt) or recombinant VV was examined by binding and functional assays relevant to protection: total IgG, IgG subclass profile, B5R gene product (gp42)-specific IgG, neutralizing antibodies, and class I-mediated cytotoxic lymphocyte activity. There was a substantial reduction in the immune response to VV after scarification with about 10(8) PFU of recombinant as compared to wt virus. These data suggest that to achieve the levels of immunity associated with protection against human orthopoxvirus infection, and to control a possible future outbreak of orthopoxvirus disease, the use of wt VV would be necessary. PMID:11155357
Uclés Moreno, Ana; Herrera López, Sonia; Reichert, Barbara; Lozano Fernández, Ana; Hernando Guil, María Dolores; Fernández-Alba, Amadeo Rodríguez
2015-01-20
This manuscript reports a new pesticide residue analysis method employing a microflow liquid chromatography system coupled to a triple quadrupole mass spectrometer (microflow-LC-ESI-QqQ-MS). The system uses an electrospray ionization source with a narrow-tip emitter to generate smaller droplets. A validation study was undertaken to establish performance characteristics of this new approach for 90 pesticide residues, including their degradation products, in three commodities (tomato, pepper, and orange). The significant benefits of the microflow-LC-MS/MS-based method were a high sensitivity gain and a notable reduction in matrix effects delivered by dilution of the sample (up to 30-fold), a result of reduced competition between matrix compounds and analytes for charge during ionization. Overall robustness and the capability to withstand long analytical runs on the microflow-LC-MS system were demonstrated (100 consecutive injections without any maintenance being required). Quality controls based on the results of internal standards added at the samples' extraction, dilution, and injection steps were also satisfactory. The LOQ values were mostly 5 μg kg(-1) for almost all pesticide residues. Other benefits were a substantial reduction in solvent usage and waste disposal, as well as a decrease in run time. The method was successfully applied to the routine analysis of 50 fruit and vegetable samples labeled as organically produced. PMID:25495653
Romesser, Paul B.; Cahlon, Oren; Scher, Eli; Zhou, Ying; Berry, Sean L.; Rybkin, Alisa; Sine, Kevin M.; Tang, Shikui; Sherman, Eric J.; Wong, Richard; Lee, Nancy Y.
2016-01-01
Background As proton beam radiation therapy (PBRT) may allow greater normal tissue sparing when compared with intensity-modulated radiation therapy (IMRT), we compared the dosimetry and treatment-related toxicities between patients treated to the ipsilateral head and neck with either PBRT or IMRT. Methods Between 01/2011 and 03/2014, 41 consecutive patients underwent ipsilateral irradiation for major salivary gland cancer or cutaneous squamous cell carcinoma. The availability of PBRT, during this period, resulted in an immediate shift in practice from IMRT to PBRT, without any change in target delineation. Acute toxicities were assessed using the National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0. Results Twenty-three (56.1%) patients were treated with IMRT and 18 (43.9%) with PBRT. The groups were balanced in terms of baseline, treatment, and target volume characteristics. IMRT plans had a greater median maximum brainstem (29.7 Gy vs. 0.62 Gy (RBE), P < 0.001), maximum spinal cord (36.3 Gy vs. 1.88 Gy (RBE), P < 0.001), mean oral cavity (20.6 Gy vs. 0.94 Gy (RBE), P < 0.001), mean contralateral parotid (1.4 Gy vs. 0.0 Gy (RBE), P < 0.001), and mean contralateral submandibular (4.1 Gy vs. 0.0 Gy (RBE), P < 0.001) dose when compared to PBRT plans. PBRT had significantly lower rates of grade 2 or greater acute dysgeusia (5.6% vs. 65.2%, P < 0.001), mucositis (16.7% vs. 52.2%, P = 0.019), and nausea (11.1% vs. 56.5%, P = 0.003). Conclusions The unique properties of PBRT allow greater normal tissue sparing without sacrificing target coverage when irradiating the ipsilateral head and neck. This dosimetric advantage seemingly translates into lower rates of acute treatment-related toxicity. PMID:26867969
Dynamic Bubble-Check Algorithm for Check Node Processing in Q-Ary LDPC Decoders
NASA Astrophysics Data System (ADS)
Lin, Wei; Bai, Baoming; Ma, Xiao; Sun, Rong
A simplified algorithm for check node processing in extended min-sum (EMS) q-ary LDPC decoders is presented in this letter. Compared with the bubble check algorithm, the so-called dynamic bubble-check (DBC) algorithm aims to further reduce the computational complexity of elementary check node (ECN) processing. By introducing two flag vectors in ECN processing, the DBC algorithm uses the minimum number of comparisons at each step. Simulation results show that the DBC algorithm uses significantly fewer comparison operations than the bubble check algorithm and shows no performance loss compared with the standard EMS algorithm on AWGN channels.
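The ECN operation that bubble-check-style algorithms accelerate amounts to extracting the smallest sums of pairs drawn from two sorted metric lists, while examining only a small "bubble" of candidate pairs. Below is a generic lazy-expansion sketch of that kernel using a min-heap; it illustrates the idea, not the DBC flag-vector scheme itself, and the metric values are invented.

```python
import heapq


def smallest_sums(a, b, k):
    """Return the k smallest values of a[i] + b[j] for two ascending
    lists. Candidate (i, j) pairs are expanded lazily from (0, 0), so
    only a frontier of pairs is ever compared, never all len(a)*len(b)."""
    heap = [(a[0] + b[0], 0, 0)]
    seen = {(0, 0)}
    out = []
    while heap and len(out) < k:
        s, i, j = heapq.heappop(heap)
        out.append(s)
        # push the two successors of (i, j) in the sum matrix
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(a) and nj < len(b) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (a[ni] + b[nj], ni, nj))
    return out


# toy sorted LLR-like metrics for two incoming messages
sums = smallest_sums([0.1, 0.7, 1.4], [0.2, 0.5, 1.1], 4)
```

The bubble check and DBC algorithms refine this frontier management to bound the number of comparisons per extracted value, which is where their complexity savings come from.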
Hankey, Brandon; Riley, Brad
2015-06-01
A shortcut review was carried out to establish whether a procalcitonin-guided algorithm could safely reduce antibiotic consumption for patients with an exacerbation of chronic obstructive pulmonary disease attending the emergency department. Four randomised controlled trials directly relevant to the three-part question were found, all combined within a later systematic review and meta-analysis. A further prospective cohort study was also found relevant to the three-part question. The author, date and country of publication, patient group studied, study type, relevant outcomes, results and study weaknesses of these papers are tabulated. The clinical bottom line is that a procalcitonin algorithm appears to be a useful strategy to guide initiation and duration of antibiotics and can reduce consumption without conferring additional risk. However, no cost-effectiveness data are available as yet and further validation studies are required prior to widespread recommendation. PMID:25991774
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume considerable bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where public-key requirements are prohibitive and cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on the research conducted in an LDRD aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent has been applied. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
A new frame-based registration algorithm.
Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S
1998-01-01
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mismatch. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834
A new frame-based registration algorithm
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.
1998-01-01
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics, which predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory of Computation ('97)] algorithmic C-C: "NIT-picking", to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models (Turing machines, finite-state models, finite automata, discrete-mathematics graph theory equivalent to physics Feynman diagrams) are identified as early-day, once-workable but limiting crutches that only impede latter-day new insights.
Sampling Within k-Means Algorithm to Cluster Large Datasets
Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George
2011-08-01
Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study both on more varied test datasets as well as on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
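The sampling idea described above can be sketched as: run standard Lloyd iterations on a random sample only, then label the full dataset in a single pass. This is a minimal illustration, not the authors' implementation; the sample-size rule (`sample_frac`) and seeding are assumptions:

```python
import random

def sample_kmeans(points, k, sample_frac=0.1, iters=20, seed=0):
    """Run Lloyd's k-means on a random sample, then label all points.

    points: list of equal-length tuples. Returns (centers, labels).
    """
    rng = random.Random(seed)
    n = max(k, int(len(points) * sample_frac))
    sample = rng.sample(points, min(n, len(points)))
    centers = rng.sample(sample, k)

    def nearest(p, cs):
        return min(range(len(cs)),
                   key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, cs[c])))

    for _ in range(iters):  # Lloyd iterations touch only the sample
        groups = [[] for _ in range(k)]
        for p in sample:
            groups[nearest(p, centers)].append(p)
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]

    labels = [nearest(p, centers) for p in points]  # one pass over the full data
    return centers, labels
```

Runtime shrinks because the iterative phase sees only the sample; the full data is visited once for the final assignment.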
Chen, Zhenhua; Chen, Xun; Wu, Wei
2013-04-28
In this paper, by applying the reduced density matrix (RDM) approach for nonorthogonal orbitals developed in the first paper of this series, efficient algorithms for matrix elements between VB structures and energy gradients in the valence bond self-consistent field (VBSCF) method were presented. Both algorithms scale only as nm^4 for integral transformation and d^2 n_β^2 for VB matrix elements and 3-RDM evaluation, while the computational costs of other procedures are negligible, where n, m, d, and n_β are the numbers of variable occupied active orbitals, basis functions, determinants, and active β electrons, respectively. Using tensor properties of the energy gradients with respect to the orbital coefficients presented in the first paper of this series, a partial orthogonal auxiliary orbital set was introduced to reduce the computational cost of VBSCF calculation in which orbitals are flexibly defined. Test calculations on the Diels-Alder reaction of butadiene and ethylene have shown that the novel algorithm is very efficient for VBSCF calculations. PMID:23635124
ERIC Educational Resources Information Center
Andrews, Ian A.
1999-01-01
Provides a crossword puzzle with an answer key corresponding to the book entitled "Significant Treasures/Tresors Parlants" that is filled with color and black-and-white prints of paintings and artifacts from 131 museums and art galleries as a sampling of the 2,200 such Canadian institutions. (CMK)
Improved local linearization algorithm for solving the quaternion equations
NASA Technical Reports Server (NTRS)
Yen, K.; Cook, G.
1980-01-01
The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Jiménez-Núñez, Francisco Gabriel; Manrique-Arija, Sara; Ureña-Garnica, Inmaculada; Romero-Barco, Carmen María; Panero-Lamothe, Blanca; Descalzo, Miguel Angel; Carmona, Loreto; Rodríguez-Pérez, Manuel; Fernández-Nebro, Antonio
2013-07-01
We evaluated the efficacy of a triage approach based on a combination of osteoporosis risk-assessment tools plus peripheral densitometry to identify low bone density accurately enough to be useful for clinical decision making in postmenopausal women. We conducted a cross-sectional diagnostic study in postmenopausal Caucasian women from primary and tertiary care. All women underwent dual-energy X-ray absorptiometric (DXA) measurement at the hip and lumbar spine and were categorized as osteoporotic or not. Additionally, patients had a nondominant heel densitometry performed with a PIXI densitometer. Four osteoporosis risk scores were tested: SCORE, ORAI, OST, and OSIRIS. All measurements were cross-blinded. We estimated the area under the curve (AUC) to predict the DXA results of 16 combinations of PIXI plus risk scores. A formula including the best combination was derived from a regression model and its predictability estimated. We included 505 women, in whom the prevalence of osteoporosis was 20 %, similar in both settings. The best algorithm was a combination of PIXI + OST + SCORE with an AUC of 0.826 (95 % CI 0.782-0.869). The proposed formula is Risk = (-12) × [PIXI + (-5)] × [OST + (-2)] × SCORE and showed little bias in the estimation (0.0016). If the formula had been implemented and the intermediate risk cutoff set at -5 to 20, the system would have saved
Kushner, Steven; Han, David; Oscar-Berman, Marlene; William Downs, B; Madigan, Margaret A; Giordano, John; Beley, Thomas; Jones, Scott; Barh, Debmayla; Simpatico, Thomas; Dushaj, Kristina; Lohmann, Raquel; Braverman, Eric R; Schoenthaler, Stephen; Ellison, David; Blum, Kenneth
2013-01-01
It is well established that inherited human aldehyde dehydrogenase 2 (ALDH-2) deficiency reduces the risk for alcoholism. Kudzu plants and extracts have been used for 1,000 years in traditional Chinese medicine to treat alcoholism. Kudzu contains daidzin, which inhibits ALDH-2 and suppresses heavy drinking in rodents. Decreased drinking due to ALDH-2 inhibition is attributed to aversive properties of acetaldehyde accumulated during alcohol consumption. However, not all of the anti-alcohol properties of daidzin are due to inhibition of ALDH-2. This is in agreement with our earlier work showing significant interaction effects of both pyrazole (ALDH-2 inhibitor) and methylpyrazole (non-inhibitor) and ethanol's depressant effects. Moreover, it has been suggested that selective ALDH-2 inhibitors reduce craving for alcohol by increasing dopamine in the nucleus accumbens (NAc). In addition, there is significant evidence related to the role of the genetics of bitter receptors (TAS2R) and their stimulation as an aversive mechanism against alcohol intake. The inclusion of bitters such as Gentian & Tangerine Peel in Declinol provides stimulation of gut TAS2R receptors, which is potentially synergistic with the effects of Kudzu. Finally, the addition of Radix Bupleuri to the Declinol formula may have some protective benefits not only in terms of ethanol-induced liver toxicity but also neurochemical actions involving endorphins, dopamine and epinephrine. With this information as a rationale, we report herein that this combination significantly reduced Alcohol Use Disorders Identification Test (AUDIT) scores administered to ten heavy drinkers (M=8, F=2; 43.2 ± 14.6 years) attending a recovery program. Specifically, from the pre-post comparison of the AUDIT scores, it was found that the score of every participant decreased after the intervention, with decreases ranging from 1 to 31. The decrease in the scores was found to be statistically significant with the p-value of 0.00298 (two-sided paired
Development and Evaluation of Algorithms for Breath Alcohol Screening
Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael
2016-01-01
Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576
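The dilution-compensation idea the abstract describes can be sketched as follows. The reference alveolar CO2 level and the proportional-scaling model are illustrative assumptions, not values from the paper, and signal averaging is the baseline feature that the paper's weighting and personalization build on:

```python
def estimate_brac(alcohol_signal, co2_signal, alveolar_co2=4.2):
    """Estimate undiluted breath alcohol concentration (BrAC).

    Averages the sensor signals, then scales the alcohol reading by the
    dilution factor inferred from CO2: a diluted sample shows a
    proportionally lower CO2 level than the assumed alveolar reference.
    alveolar_co2 (kPa) is an illustrative reference value, not taken
    from the paper.
    """
    mean_alc = sum(alcohol_signal) / len(alcohol_signal)
    mean_co2 = sum(co2_signal) / len(co2_signal)
    dilution = alveolar_co2 / mean_co2   # > 1 when the sample is diluted
    return mean_alc * dilution
```

A sample diluted by half in ambient air halves both signals, so the CO2 ratio restores the alcohol estimate to its undiluted value.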
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined; incorporating age-weighting into the algorithm was the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
A Distributed Polygon Retrieval Algorithm Using MapReduce
NASA Astrophysics Data System (ADS)
Guo, Q.; Palanisamy, B.; Karimi, H. A.
2015-07-01
The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices like 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation which is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger sizes of terrain data. Motivated by the MapReduce programming model, a well-adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed with the objective of reducing the IO and CPU loads of spatial data processing. By indexing the data based on a quad-tree approach, a significant amount of unneeded data is filtered out in the filtering stage, reducing the IO overhead. The indexed data also facilitates querying the relationship between the terrain data and the query area in less time. The results of the experiments performed in our Hadoop cluster demonstrate that our algorithm performs significantly better than the existing distributed algorithms.
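A hedged sketch of the quad-tree filtering stage described above: polygons are indexed by quadkey, the map phase discards polygons whose cells cannot match the query region, and the reduce phase groups the surviving candidates. The quadkey scheme, unit-square coordinates, and centroid-based test are simplifying assumptions; a real implementation would run under Hadoop and follow this filter with an exact geometric refinement:

```python
def quadkey(x, y, depth):
    """Quadkey of point (x, y) in the unit square at the given depth."""
    key = ""
    for _ in range(depth):
        x, y = x * 2, y * 2
        qx, qy = int(x), int(y)
        key += str(qx + 2 * qy)       # digit 0-3 selects a quadrant
        x, y = x - qx, y - qy
    return key

def map_phase(polygons, query_keys, depth):
    """Mapper: emit only polygons whose cell matches a query quadkey prefix.

    polygons: iterable of (poly_id, centroid_x, centroid_y). The quad-tree
    filter discards most data before any exact geometry test is needed.
    """
    for pid, cx, cy in polygons:
        k = quadkey(cx, cy, depth)
        for qk in query_keys:
            if k.startswith(qk):
                yield qk, pid
                break

def reduce_phase(pairs):
    """Reducer: group candidate polygon ids per query cell."""
    out = {}
    for qk, pid in pairs:
        out.setdefault(qk, []).append(pid)
    return out
```

Only candidates sharing a quadkey prefix with the query region survive the map phase, which is where the IO saving in the abstract comes from.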
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
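The basic veto algorithm that this formalism analyzes can be sketched as follows: propose the next emission scale from an invertible overestimate of the emission rate, then accept with the ratio of true to overestimated rate. The constant-overestimate choice `g0 >= f(t)` is an illustrative assumption; the paper's cutoff, second-variable, and competition variants extend this core loop:

```python
import math, random

def sudakov_veto(f, g0, t_start, t_min, rng):
    """Sample the next-emission scale t below t_start for rate f(t).

    Veto algorithm: propose from a constant overestimate g0 >= f(t) on
    [t_min, t_start], accept with probability f(t)/g0. Rejected proposals
    restart the evolution from the rejected scale, which is what makes
    the accepted density follow the Sudakov form factor of f.
    Returns t_min if no emission occurs above t_min.
    """
    t = t_start
    while True:
        t += math.log(rng.random()) / g0   # propose from the overestimate
        if t <= t_min:
            return t_min                   # no emission: evolution ends
        if rng.random() < f(t) / g0:       # veto step corrects the density
            return t
```

Competition between channels, in this picture, amounts to running such loops for each channel and keeping the highest accepted scale; the paper proves several such schemes equivalent.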
Automatic-control-algorithm effects on energy production
McNerney, G.M.
1981-01-01
Algorithm control strategy for unattended wind turbine operation is a potentially important aspect of wind energy production that has thus far escaped treatment in the literature. Early experience in automatic operation of the Sandia 17-m VAWT has demonstrated the need for a systematic study of control algorithms. To this end, a computer model has been developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model has been used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long-term energy production. An attempt has been made to generalize these results from local site and turbine characteristics to obtain general guidelines for control algorithm design.
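The kind of simulation the abstract describes (a recorded wind time series driving a turbine under a candidate control algorithm) can be sketched with a simple start/stop hysteresis controller. The threshold logic and power curve here are illustrative assumptions, not the Sandia 17-m VAWT model:

```python
def simulate_energy(wind, power_curve, v_start, v_stop, dt=1.0):
    """Total energy produced under a simple start/stop control algorithm.

    The turbine starts when wind speed reaches v_start and stops when it
    falls below v_stop (v_stop < v_start gives hysteresis, which reduces
    start/stop cycling). power_curve maps wind speed to power output.
    """
    running = False
    energy = 0.0
    for v in wind:
        if not running and v >= v_start:
            running = True
        elif running and v < v_stop:
            running = False
        if running:
            energy += power_curve(v) * dt
    return energy
```

Sweeping `v_start`/`v_stop` over an actual wind time series and comparing the returned energies is the kind of threshold selection the abstract advocates; a poor choice shows up directly as lost energy.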
A set-membership approach to blind channel equalization algorithm
NASA Astrophysics Data System (ADS)
Li, Yue-ming
2013-03-01
The constant modulus algorithm (CMA) has low computational complexity but exhibits slow convergence and possible convergence to local minima; the affine projection version of the CMA family (AP-CMA) alleviates the speed limitations of the CMA. However, computational complexity has been a weak point in the implementation of AP-CMA. To reduce the computational complexity of the adaptive filtering algorithm, a new AP-CMA algorithm based on set membership (SM-AP-CMA) is proposed. The new algorithm combines a bounded error specification on the adaptive filter with the concept of data reuse. Simulations confirmed that the convergence rate of the proposed algorithm is significantly faster, while the excess mean square error is maintained at a relatively low level and the number of updates is substantially reduced compared with its conventional counterpart.
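A hedged sketch of how a set-membership criterion reduces the update count of a CMA-style equalizer: a tap update is performed only when the constant-modulus error exceeds a bound gamma. This is a real-valued, single-step illustration of the idea, not the paper's full affine-projection (data-reusing) algorithm:

```python
def sm_cma_step(w, x, mu, r2, gamma):
    """One set-membership CMA tap update (real-valued sketch).

    w: equalizer taps; x: input window (same length); r2: constant
    modulus target; gamma: error bound. The update is skipped when the
    modulus error is already within gamma, which is where the
    set-membership saving in update count comes from.
    """
    y = sum(wi * xi for wi, xi in zip(w, x))      # equalizer output
    e = y * (r2 - y * y)                           # CMA error term
    if abs(r2 - y * y) <= gamma:
        return w, False                            # inside the set: no update
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w, True
```

Over a long data record, most samples fall inside the error bound once the equalizer has roughly converged, so the fraction of skipped updates grows and the average per-sample cost drops.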
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
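The monolithic idea (folding predictor and corrector into a single update per continuation step) can be illustrated on a scalar homotopy H(x, lam) = (1 - lam) g(x) + lam f(x). The scalar setting and the specific f and g used in the test are illustrative assumptions; the thesis applies this to very large sparse Newton-Krylov-Schur systems:

```python
def monolithic_homotopy(f, df, g, dg, x0, steps=50):
    """Trace H(x, lam) = (1 - lam) g(x) + lam f(x) from lam = 0 to 1.

    Scalar sketch of the monolithic scheme: each step advances lam and
    applies a single combined Newton-like update on H, rather than a
    separate predictor followed by an inner corrector loop that may
    over-solve intermediate problems.
    """
    x = x0
    for k in range(1, steps + 1):
        lam = k / steps
        h = (1 - lam) * g(x) + lam * f(x)
        dh = (1 - lam) * dg(x) + lam * df(x)
        x -= h / dh            # one update per continuation step
    return x
```

Starting from the easy problem g(x) = x - 1 and ending at f(x) = x^2 - 2, the traced solution lands on sqrt(2); the saving over predictor-corrector is that no intermediate H is solved to tight tolerance.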
Reduced Basis Method for Nanodevices Simulation
Pau, George Shu Heng
2008-05-23
Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost grows only marginally with the number of grid points in the confined direction.
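The greedy sampling loop mentioned above can be sketched generically: repeatedly add to the basis the parameter whose a posteriori error estimate is largest under the current basis. The `error_estimator` interface and the seed choice are assumed abstractions; in the actual method the estimator would evaluate the reduced-basis error bound for the Schrodinger-Poisson system:

```python
def greedy_sample(params, error_estimator, n_basis):
    """Greedy selection of reduced-basis snapshot parameters.

    error_estimator(mu, basis) returns an (a posteriori) error estimate
    for parameter mu given the current basis; the greedy loop always
    adds the worst-approximated parameter, as in standard RB training.
    """
    basis = [params[0]]                      # seed with an arbitrary parameter
    while len(basis) < n_basis:
        worst = max((p for p in params if p not in basis),
                    key=lambda p: error_estimator(p, basis))
        basis.append(worst)
    return basis
```

Because the estimator is cheap relative to a full solve, only the selected snapshot parameters ever require expensive high-fidelity solutions, which is where the cost reduction comes from.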
Efficient implementation of the adaptive scale pixel decomposition algorithm
NASA Astrophysics Data System (ADS)
Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.
2016-08-01
Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
Kumakech, Edward; Berggren, Vanja; Wabinga, Henry; Lillsunde-Larsson, Gabriella; Helenius, Gisela; Kaliff, Malin; Karlsson, Mats; Kirimunda, Samuel; Musubika, Caroline; Andersson, Sören
2016-01-01
The objective of this study was to determine the prevalence and some predictors for vaccine and non-vaccine types of HPV infections among bivalent HPV vaccinated and non-vaccinated young women in Uganda. This was a comparative cross sectional study 5.5 years after a bivalent HPV 16/18 vaccination (Cervarix®, GlaxoSmithKline, Belgium) pilot project in western Uganda. Cervical swabs were collected between July 2014-August 2014 and analyzed with a HPV genotyping test, CLART® HPV2 assay (Genomica, Madrid Spain) which is based on PCR followed by microarray for determination of genotype. Blood samples were also tested for HIV and syphilis infections as well as CD4 and CD8 lymphocyte levels. The age range of the participants was 15-24 years and mean age was 18.6(SD 1.4). Vaccine-type HPV-16/18 strains were significantly less prevalent among vaccinated women compared to non-vaccinated women (0.5% vs 5.6%, p 0.006, OR 95% CI 0.08(0.01-0.64). At type-specific level, significant difference was observed for HPV16 only. Other STIs (HIV/syphilis) were important risk factors for HPV infections including both vaccine types and non-vaccine types. In addition, for non-vaccine HPV types, living in an urban area, having a low BMI, low CD4 count and having had a high number of life time sexual partners were also significant risk factors. Our data concurs with the existing literature from other parts of the world regarding the effectiveness of bivalent HPV-16/18 vaccine in reducing the prevalence of HPV infections particularly vaccine HPV- 16/18 strains among vaccinated women. This study reinforces the recommendation to vaccinate young girls before sexual debut and integrate other STI particularly HIV and syphilis interventions into HPV vaccination packages. PMID:27482705
Wang, Dian; Zhang, Qiang; Eisenberg, Burton L.; Kane, John M.; Li, X. Allen; Lucas, David; Petersen, Ivy A.; DeLaney, Thomas F.; Freeman, Carolyn R.; Finkelstein, Steven E.; Hitchcock, Ying J.; Bedi, Manpreet; Singh, Anurag K.; Dundas, George; Kirsch, David G.
2015-01-01
Purpose We performed a multi-institutional prospective phase II trial to assess late toxicities in patients with extremity soft tissue sarcoma (STS) treated with preoperative image-guided radiation therapy (IGRT) to a reduced target volume. Patients and Methods Patients with extremity STS received IGRT with (cohort A) or without (cohort B) chemotherapy followed by limb-sparing resection. Daily pretreatment images were coregistered with digitally reconstructed radiographs so that the patient position could be adjusted before each treatment. All patients received IGRT to reduced tumor volumes according to strict protocol guidelines. Late toxicities were assessed at 2 years. Results In all, 98 patients were accrued (cohort A, 12; cohort B, 86). Cohort A was closed prematurely because of poor accrual and is not reported. Seventy-nine eligible patients from cohort B form the basis of this report. At a median follow-up of 3.6 years, five patients did not have surgery because of disease progression. There were five local treatment failures, all of which were in field. Of the 57 patients assessed for late toxicities at 2 years, 10.5% experienced at least one grade ≥ 2 toxicity as compared with 37% of patients in the National Cancer Institute of Canada SR2 (CAN-NCIC-SR2: Phase III Randomized Study of Pre- vs Postoperative Radiotherapy in Curable Extremity Soft Tissue Sarcoma) trial receiving preoperative radiation therapy without IGRT (P < .001). Conclusion The significant reduction of late toxicities in patients with extremity STS who were treated with preoperative IGRT and absence of marginal-field recurrences suggest that the target volumes used in the Radiation Therapy Oncology Group RTOG-0630 (A Phase II Trial of Image-Guided Preoperative Radiotherapy for Primary Soft Tissue Sarcomas of the Extremity) study are appropriate for preoperative IGRT for extremity STS. PMID:25667281
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
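The reorganization described above applies to problems of the form min ||AX - B|| with X >= 0 and many columns in B. As a point of contrast, the numpy-only sketch below solves each column independently by projected gradient — exactly the per-column redundancy the combinatorial algorithm avoids by grouping columns that share an active set and solving each group's normal equations once. The function name and solver choice are illustrative, not from the original work.

```python
import numpy as np

def nnls_multi(A, B, n_iter=5000):
    """Nonnegative least squares for every column of B by projected gradient.

    A baseline only: it repeats the gradient work for each right-hand side,
    which is the redundancy the fast combinatorial algorithm eliminates.
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    AtA, AtB = A.T @ A, A.T @ B
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        X = np.maximum(0.0, X - (AtA @ X - AtB) / L)   # gradient step + projection

    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
X_true = np.abs(rng.standard_normal((5, 8)))   # known nonnegative solution
B = A @ X_true
X = nnls_multi(A, B)
```

In the combinatorial scheme, columns of B whose solutions share the same set of zeroed (active) variables are batched, so the expensive factorization of the corresponding submatrix of A is amortized across all of them.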
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Juneja, B; Gilland, D; Hintenlang, D; Doxsee, K; Bova, F
2014-06-15
Purpose: In Compton Backscatter Imaging (CBI), the source and detector reside on the same side of the patient. We previously demonstrated the applicability of CBI systems for medical purposes using an industrial system. To assist in post-processing images from a CBI system, a forward model based on radiation absorption and scatter principles has been developed. Methods: The forward model was developed in C++ using ray tracing to track particles. The algorithm accepts phantoms of any size and resolution to calculate the fraction of incident photons scattered back to the detector, and can perform these calculations for any detector geometry and source specification. To validate the model, results were compared with MCNP-X, a Monte Carlo-based simulation code, for various combinations of source specifications, detector geometries, and phantom compositions. Results: The model verified that the backscatter signal to the detector was based on three interaction probabilities: a) attenuation of photons going into the phantom, b) Compton scatter of photons toward the detector, and c) attenuation of photons coming out of the phantom. The results from the MCNP-X simulations and the forward model differed by 1 to 5%. This difference was less than 1% for energies higher than 30 keV, but was up to 4% for lower energies. At 50 keV, the difference was less than 1% for multiple detector widths and for both homogeneous and heterogeneous phantoms. Conclusion: As part of the optimization of a medical CBI system, an efficient and accurate forward model was constructed in C++ to estimate the output of a CBI system. The model characterized the individual components contributing to CBI output and increased computational efficiency over Monte Carlo simulations. It is now used in the development of novel post-processing algorithms that reduce image blur by reversing undesired contributions from outside the region of interest.
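The three interaction probabilities named in the abstract multiply along each ray. A toy single-ray sketch for a homogeneous slab integrates that product over scattering depth; the attenuation and scatter coefficients below are illustrative placeholders, not the authors' values.

```python
import numpy as np

def backscatter_fraction(mu_in, mu_sc, mu_out, depth, n_layers=1000):
    """Fraction of incident photons returned from a homogeneous slab.

    Product of the three factors from the abstract, integrated over the
    scattering depth z: attenuation going in, Compton scatter toward the
    detector per unit depth, attenuation coming back out.
    """
    z = np.linspace(0.0, depth, n_layers)
    dz = depth / n_layers
    contrib = np.exp(-mu_in * z) * (mu_sc * dz) * np.exp(-mu_out * z)
    return contrib.sum()

f = backscatter_fraction(0.5, 0.2, 0.5, 5.0)   # coefficients in 1/cm, depth in cm
```

For a homogeneous slab this sum has the closed form mu_sc * (1 - exp(-(mu_in + mu_out) * d)) / (mu_in + mu_out), which is a handy sanity check on the discretized model.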
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately, as non-uniform banding was evident in the corrected data, and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in the worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed, with strategies for operationalizing the method emphasized. Techniques for assessing the methodology are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly reduces stray light artifacts in the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Application of fast BLMS algorithm in acoustic echo cancellation
NASA Astrophysics Data System (ADS)
Zhao, Yue; Li, Nian Q.
2013-03-01
The acoustic echo path is usually very long, ranging from several hundred to a few thousand taps. A frequency-domain adaptive filter provides a solution to acoustic echo cancellation by yielding a significant reduction in the computational burden. In this paper, a fast BLMS (Block Least-Mean-Square) algorithm in the frequency domain is realized using the FFT. The adaptation of the filter parameters is performed in the frequency domain. The proposed algorithm ensures fast convergence and reduced computational complexity. Simulation results indicate that the algorithm demonstrates good performance for acoustic echo cancellation in communication systems.
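The computational saving comes from replacing M per-sample correlations with one length-2M FFT per block (overlap-save). The sketch below is a generic constrained frequency-domain block LMS with per-bin power normalization added for robust convergence — a sketch of the idea, not the paper's exact algorithm.

```python
import numpy as np

def fdaf_echo_cancel(x, d, M=8, mu=0.5):
    """Overlap-save frequency-domain block LMS, block size = filter length M.

    x: far-end signal, d: microphone signal containing the echo.
    Returns the residual error signal after cancellation.
    """
    W = np.zeros(2 * M, dtype=complex)            # frequency-domain weights
    x_pad = np.concatenate([np.zeros(M), x])
    e_out = np.zeros(len(x))
    for b in range(len(x) // M):
        seg = x_pad[b * M : b * M + 2 * M]        # previous block + current block
        X = np.fft.fft(seg)
        y = np.real(np.fft.ifft(X * W))[M:]       # overlap-save: keep last M samples
        e = d[b * M : (b + 1) * M] - y
        e_out[b * M : (b + 1) * M] = e
        E = np.fft.fft(np.concatenate([np.zeros(M), e]))
        G = np.conj(X) * E / (np.abs(X) ** 2 + 1e-8)   # normalized correlation
        g = np.real(np.fft.ifft(G))[:M]           # gradient constraint: M causal taps
        W += mu * np.fft.fft(np.concatenate([g, np.zeros(M)]))
    return e_out

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
h = np.array([0.8, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # simulated echo path
d = np.convolve(x, h)[: len(x)]
e = fdaf_echo_cancel(x, d)
```

The gradient constraint (zeroing the last M time-domain taps before returning to the frequency domain) is what keeps this block algorithm equivalent to a length-M time-domain adaptive filter.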
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case
Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing
2014-01-01
This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
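For tour encodings, the observation that genes common to all individuals tend to survive can be made concrete: an undirected edge present in every tour of the population is a candidate to freeze, so its cost need not be recomputed in later generations. A minimal sketch of that detection step (a simplified reading of the idea, not the paper's implementation):

```python
def common_edges(population):
    """Undirected edges (city pairs) shared by every tour in the population.

    Such edges have a high chance of surviving the evolution, so a GA can
    save them away and skip re-evaluating them each generation.
    """
    def edges(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
                for i in range(len(tour))}
    pop_iter = iter(population)
    shared = edges(next(pop_iter))
    for tour in pop_iter:
        shared &= edges(tour)          # keep only edges present in every tour
    return shared

shared = common_edges([[0, 1, 2, 3], [0, 1, 3, 2]])
```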
PCB Drill Path Optimization by Combinatorial Cuckoo Search Algorithm
Lim, Wei Chen Esmonde; Kanagaraj, G.; Ponnambalam, S. G.
2014-01-01
Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198
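The drill path problem is a traveling salesman problem over hole coordinates. The toy sketch below uses 2-opt segment reversals as a stand-in for Levy-flight moves and abandons the worst fraction of nests each round — a sketch of the combinatorial cuckoo search idea, not the authors' exact operators or parameters.

```python
import math
import random

def path_len(pts, tour):
    """Length of the closed drill path visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[i - 1]])
               for i in range(len(tour)))

def cuckoo_drill_path(pts, n_nests=15, n_iter=300, pa=0.25, seed=1):
    """Toy combinatorial cuckoo search for a closed drill path."""
    rng = random.Random(seed)
    n = len(pts)
    nests = [rng.sample(range(n), n) for _ in range(n_nests)]
    for _ in range(n_iter):
        # generate a cuckoo by a random 2-opt move on a random nest
        base = nests[rng.randrange(n_nests)]
        i, j = sorted(rng.sample(range(n), 2))
        cand = base[:i] + base[i:j + 1][::-1] + base[j + 1:]
        worst = max(range(n_nests), key=lambda k: path_len(pts, nests[k]))
        if path_len(pts, cand) < path_len(pts, nests[worst]):
            nests[worst] = cand
        # abandon the worst pa fraction of nests, keeping the best ones
        nests.sort(key=lambda t: path_len(pts, t))
        for k in range(int((1 - pa) * n_nests), n_nests):
            nests[k] = rng.sample(range(n), n)
    return min(nests, key=lambda t: path_len(pts, t))

pts = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
       for i in range(8)]                    # 8 holes on a unit circle
best = cuckoo_drill_path(pts)
```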
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
Roberts, Catherine
2012-05-01
A short-cut review was carried out to establish whether ambulatory patients immobilized in a below-knee plaster of Paris cast and administered a prophylactic dose of anticoagulation with low-molecular-weight heparin (LMWH) can benefit from a reduced risk of venous thromboembolism within the next 90 days. One Cochrane Review was relevant to the question. The author, date and country of publication, patient group studied, study type, relevant outcomes, results and study weaknesses of these papers are tabulated. The clinical bottom line is that LMWH thromboprophylaxis is effective at reducing the incidence of VTE in these patients. PMID:22523146
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Addendum to 'A new hybrid algorithm for computing a fast discrete Fourier transform'
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Benjauthrit, B.
1981-01-01
The reported investigation represents a continuation of a study conducted by Reed and Truong (1979), who proposed a hybrid algorithm for computing the discrete Fourier transform (DFT). The proposed technique employs a Winograd-type algorithm in conjunction with the Mersenne prime-number theoretic transform to perform a DFT. The implementation of the technique involves a considerable number of additions. The new investigation shows an approach which can reduce the number of additions significantly. It is proposed to use Winograd's algorithm for computing the Mersenne prime-number theoretic transform in the transform portion of the hybrid algorithm.
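The Mersenne prime-number theoretic transform at the heart of the hybrid algorithm relies on 2 being a p-th root of unity modulo M = 2**p - 1 (since 2**p mod M = 1), so every kernel multiplication is a circular bit shift in hardware. A small numeric sketch of the transform pair follows; Winograd's reorganization of the multiplications, which the addendum is actually about, is omitted.

```python
def mersenne_ntt(x, p=13):
    """Length-p Mersenne number transform modulo M = 2**p - 1.

    Because 2 has multiplicative order p mod M, the kernel 2**(j*k) only
    needs exponents reduced mod p, and each product is a shift-and-add.
    """
    M = (1 << p) - 1
    return [sum(xj * pow(2, (j * k) % p, M) for j, xj in enumerate(x)) % M
            for k in range(p)]

def mersenne_intt(X, p=13):
    """Inverse transform: x_j = p^{-1} * sum_k X_k * 2^{-j*k} (mod M)."""
    M = (1 << p) - 1
    p_inv = pow(p, -1, M)        # p is invertible since M = 2**p - 1 is 1 mod p
    inv2 = pow(2, -1, M)         # 2 is invertible since M is odd
    return [p_inv * sum(Xk * pow(inv2, (j * k) % p, M)
                        for k, Xk in enumerate(X)) % M
            for j in range(p)]

x = [(7 * i + 3) % 100 for i in range(13)]   # any residues mod M work
X = mersenne_ntt(x)
back = mersenne_intt(X)
```

For p = 13 the modulus M = 8191 is itself a Mersenne prime, which guarantees all the needed inverses exist.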
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R.
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
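A single-level 2D Haar transform is the simplest instance of the wavelet mapping described: averages concentrate the image's information into one quadrant, small detail coefficients can be zeroed, and analysis then operates on the reduced set of significant coefficients. A numpy-only sketch (one level, Haar basis; the patent's actual wavelet and block scheme may differ):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform (both image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0            # vertical averages
    d = (img[0::2] - img[1::2]) / 2.0            # vertical details
    rows = np.vstack([a, d])
    aa = (rows[:, 0::2] + rows[:, 1::2]) / 2.0   # horizontal averages
    dd = (rows[:, 0::2] - rows[:, 1::2]) / 2.0   # horizontal details
    return np.hstack([aa, dd])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    h, w = c.shape
    aa, dd = c[:, : w // 2], c[:, w // 2 :]
    rows = np.empty((h, w))
    rows[:, 0::2] = aa + dd
    rows[:, 1::2] = aa - dd
    img = np.empty((h, w))
    img[0::2] = rows[: h // 2] + rows[h // 2 :]
    img[1::2] = rows[: h // 2] - rows[h // 2 :]
    return img

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test image
c = haar2d(img)
c_sig = np.where(np.abs(c) >= 1.0, c, 0.0)           # keep significant coefficients
rec = ihaar2d(c_sig)
```

On this smooth gradient image only the 4x4 approximation quadrant survives the threshold, so downstream analysis would touch a quarter of the original data.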
Star pattern recognition algorithm aided by inertial information
NASA Astrophysics Data System (ADS)
Liu, Bao; Wang, Ke-dong; Zhang, Chao
2011-08-01
Star pattern recognition is one of the key problems of celestial navigation. Traditional star pattern recognition approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose success rate is not high. The real-time performance and reliability of a CNS (Celestial Navigation System) are therefore reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by another navigation system such as an INS (Inertial Navigation System), star pattern recognition can be performed in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include at least improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, the local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched against the local sky. Compared to traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
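The aiding idea reduces to two steps: restrict the catalog to a cone around the INS-estimated boresight, then run angular-distance matching only inside that local set. A toy sketch with unit vectors (the catalog, field of view, and tolerance values are illustrative):

```python
import numpy as np

def local_catalog(catalog, boresight, fov_deg):
    """Indices of catalog stars within fov_deg of the estimated optical axis."""
    cos_fov = np.cos(np.radians(fov_deg))
    return [i for i, s in enumerate(catalog) if np.dot(s, boresight) > cos_fov]

def match_pair(obs_a, obs_b, catalog, idx, tol=1e-3):
    """Catalog pairs (from the local set only) whose angular separation
    matches the observed pair's within tol radians."""
    sep = np.arccos(np.clip(np.dot(obs_a, obs_b), -1.0, 1.0))
    matches = []
    for ii, m in enumerate(idx):
        for n in idx[ii + 1:]:
            d = np.arccos(np.clip(np.dot(catalog[m], catalog[n]), -1.0, 1.0))
            if abs(d - sep) < tol:
                matches.append((m, n))
    return matches

catalog = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [1.0, 0.0, 0.0]])
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)
boresight = np.array([0.0, 0.0, 1.0])        # INS-estimated optical axis
idx = local_catalog(catalog, boresight, 30.0)
matches = match_pair(catalog[0], catalog[1], catalog, idx)
```

The pairwise search is quadratic only in the size of the local set, which is the source of the claimed speed and memory savings over all-sky matching.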
Convergence behavior of a new DSMC algorithm.
Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert; Bird, Graeme A.
2008-10-01
The convergence rate of a new direct simulation Monte Carlo (DSMC) method, termed 'sophisticated DSMC', is investigated for one-dimensional Fourier flow. An argon-like hard-sphere gas at 273.15K and 266.644Pa is confined between two parallel, fully accommodating walls 1mm apart that have unequal temperatures. The simulations are performed using a one-dimensional implementation of the sophisticated DSMC algorithm. In harmony with previous work, the primary convergence metric studied is the ratio of the DSMC-calculated thermal conductivity to its corresponding infinite-approximation Chapman-Enskog theoretical value. As discretization errors are reduced, the sophisticated DSMC algorithm is shown to approach the theoretical values to high precision. The convergence behavior of sophisticated DSMC is compared to that of original DSMC. The convergence of the new algorithm in a three-dimensional implementation is also characterized. Implementations using transient adaptive sub-cells and virtual sub-cells are compared. The new algorithm is shown to significantly reduce the computational resources required for a DSMC simulation to achieve a particular level of accuracy, thus improving the efficiency of the method by a factor of 2.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Statistical or biological significance?
Saxon, Emma
2015-01-01
Oat plants grown at an agricultural research facility produce higher yields in Field 1 than in Field 2, under well-fertilised conditions and with similar weather exposure; all oat plants in both fields are healthy and show no sign of disease. In this study, the authors hypothesised that the soil microbial community might be different in each field, and that these differences might explain the difference in oat plant growth. They carried out a metagenomic analysis of the 16S ribosomal 'signature' sequences from bacteria in 50 randomly located soil samples in each field to determine the composition of the bacterial community. The study identified >1000 species, most of which were present in both fields. The authors identified two plant growth-promoting species that were significantly reduced in soil from Field 2 (Student's t-test, P < 0.05), and concluded that these species might have contributed to the reduced yield. PMID:26541972
Using genetic algorithms to select and create features for pattern classification. Technical report
Chang, E.I.; Lippmann, R.P.
1991-03-11
Genetic algorithms were used to select and create features and to select reference exemplar patterns for machine vision and speech pattern classification tasks. On a 15-feature machine-vision inspection task, it was found that genetic algorithms performed no better than conventional approaches to feature selection but required much more computation. For a speech recognition task, genetic algorithms required no more computation time than traditional approaches but reduced the number of features required by a factor of five (from 153 to 33 features). On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) that reduced classification error rates from 10 to almost 0 percent. Neural net and nearest-neighbor classifiers were unable to provide such low error rates using only the original features. Genetic algorithms were also used to reduce the number of reference exemplar patterns and to select the value of k for a k-nearest-neighbor classifier. On a 338-training-pattern vowel recognition problem with 10 classes, genetic algorithms simultaneously reduced the number of stored exemplars from 338 to 63 and selected k without significantly decreasing classification accuracy. In all applications, genetic algorithms were easy to apply and found good solutions in many fewer trials than would be required by an exhaustive search. Run times were long but not unreasonable. These results suggest that genetic algorithms may soon be practical for pattern classification problems as faster serial and parallel computers are developed.
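The feature-selection half of the report can be sketched as a GA over feature bitmasks. The toy fitness below rewards three "informative" features and penalizes the rest — a stand-in for the classifier accuracy the report actually optimized; the operators and parameters here are generic, not the report's configuration.

```python
import random

def ga_select(n_feat, fitness, pop=24, gens=50, p_mut=0.05, seed=0):
    """Tiny GA over feature bitmasks: tournament selection, one-point
    crossover, bit-flip mutation, and best-ever tracking (elitism)."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    best = max(P, key=fitness)
    for _ in range(gens):
        def pick():
            a, b = rng.sample(P, 2)            # binary tournament
            return a if fitness(a) >= fitness(b) else b
        Q = []
        for _ in range(pop):
            a, b = pick(), pick()
            cut = rng.randrange(1, n_feat)     # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in a[:cut] + b[cut:]]
            Q.append(child)
        P = Q
        best = max([best] + P, key=fitness)
    return best

informative = {0, 2, 4}                        # hypothetical useful features
def fitness(mask):
    return sum(mask[i] for i in informative) - 0.5 * sum(
        mask[i] for i in range(len(mask)) if i not in informative)

best = ga_select(10, fitness)
```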
Duan, Lin; Wang, Zhongyuan; Hou, Yan; Wang, Zepeng; Gao, Guandao; Chen, Wei; Alvarez, Pedro J J
2016-10-15
Metal oxides are often anchored to graphene materials to achieve greater contaminant removal efficiency. To date, the enhanced performance has mainly been attributed to the role of graphene materials as a conductor for electron transfer. Herein, we report a new mechanism via which graphene materials enhance oxidation of organic contaminants by metal oxides. Specifically, Mn3O4-rGO nanocomposites (Mn3O4 nanoparticles anchored to reduced graphene oxide (rGO) nanosheets) enhanced oxidation of 1-naphthylamine (used here as a reaction probe) compared to bare Mn3O4. Spectroscopic analyses (X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy) show that the rGO component of Mn3O4-rGO was further reduced during the oxidation of 1-naphthylamine, although rGO reduction was not the result of direct interaction with 1-naphthylamine. We postulate that rGO improved the oxidation efficiency of anchored Mn3O4 by re-oxidizing Mn(II) formed from the reaction between Mn3O4 and 1-naphthylamine, thereby regenerating the surface-associated oxidant Mn(III). The proposed role of rGO was verified by separate experiments demonstrating its ability to oxidize dissolved Mn(II) to Mn(III), which subsequently can oxidize 1-naphthylamine. The role of dissolved oxygen in re-oxidizing Mn(II) was ruled out by anoxic (N2-purged) control experiments showing similar results as O2-sparged tests. Opposite pH effects on the oxidation efficiency of Mn3O4-rGO versus bare Mn3O4 were also observed, corroborating the proposed mechanism because higher pH facilitates oxidation of surface-associated Mn(II) even though it lowers the oxidation potential of Mn3O4. Overall, these findings may guide the development of novel metal oxide-graphene nanocomposites for contaminant removal. PMID:27448035
[Algorithm for treating preoperative anemia].
Bisbe Vives, E; Basora Macaya, M
2015-06-01
Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions and improves hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or if we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameter and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or erythropoietin with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin objective will depend on the type of surgery and the patient's characteristics. PMID:26320341
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature-corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but with brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
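The mixing formulation and the brightness-temperature/emissivity conversions reduce to a few lines. The sketch below uses a simplified surface model Tb = e * Ts with atmospheric terms omitted, and the emissivity values in the demo are illustrative placeholders, not the algorithm's tie points.

```python
def effective_emissivity(c_ice, e_ice, e_water):
    """Linear ice/water mixing formulation for the surface emissivity."""
    return c_ice * e_ice + (1.0 - c_ice) * e_water

def brightness_to_emissivity(tb, t_surface):
    """Emissivity from brightness temperature via Tb = e * Ts
    (simplified: atmospheric contributions are omitted)."""
    return tb / t_surface

def ice_concentration(e_obs, e_ice, e_water):
    """Invert the mixing formulation for the ice concentration."""
    return (e_obs - e_water) / (e_ice - e_water)

e = effective_emissivity(0.7, 0.92, 0.45)   # 70% ice cover, illustrative emissivities
c = ice_concentration(e, 0.92, 0.45)
```

Working in emissivity rather than brightness temperature is what removes the direct dependence on the (spatially varying) ice temperature.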
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
Efficacy of a diagnostic and therapeutic algorithm for Clostridium difficile infection.
Marukawa, Yohei; Komura, Takuya; Kagaya, Takashi; Ohta, Hajime; Unoura, Masashi
2016-08-01
In July 2012, metronidazole was approved for the treatment of Clostridium difficile infection (CDI). To clarify the selection criteria for the drug in terms of CDI severity, we established a diagnostic and therapeutic algorithm with reference to the SHEA-IDSA Clinical Practice Guidelines. We compared patients whose treatments were guided by the algorithm (29 cases, October 2012-September 2013) with patients treated prior to the development of the algorithm (37 cases, October 2011-September 2012). All cases treated with reference to the algorithm were diagnosed using enzyme immunoassay of C. difficile toxins A and B and glutamate dehydrogenase; an appropriate drug was prescribed in 93.1% of the cases. We found no significant between-group differences in the cure, recurrence, or complication rates. However, drug costs in cases wherein treatments were guided by the algorithm were markedly reduced. We have, thus, shown that algorithm-guided treatment is efficacious and cost-effective. PMID:27498935
Vehicle counting and classification algorithms for unattended ground sensors
NASA Astrophysics Data System (ADS)
Hohil, Myron E.; Heberley, Jeffrey R.; Chang, Jay; Rotolo, Anthony
2003-09-01
Unattended ground sensor technology used for battlefield awareness and other wide area surveillance applications requires state-of-the-art algorithms to address the unprecedented challenges faced in detecting, classifying and tracking military combat vehicles. The performance of traditional acoustic sensor systems often degrades unacceptably against the dynamic and highly mobile multiple target environments in which today's forces must operate. In the present work, a target counting algorithm has been developed to solve problems attributed to unstable tracking performance by resolving track loss deficiencies inherent to closely spaced target environments. The algorithm provides a way to discriminate between vehicles as they pass through an acoustic "trip-line" formed by a sensor in a predetermined field of view (FOV). The proposed approach is realized through an adaptive beamforming algorithm that achieves enhanced directivity in a principal look direction by significantly reducing the effects of interferers outside the precise bearing of the steering direction. The classification algorithm described herein facilitates a minimal representation for features extracted from harmonically related structures characteristic of acoustic emissions from ground vehicles found in battlefield environments. The reduced feature space representation exploits an ordering for the principal narrowband components found in a vehicle's engine noise and has proven effective in solving fundamental problems associated with discriminating between vehicles in variable SNR environments. The performance of the algorithms is demonstrated using signature data collected during various acoustic sensor field test experiments.
Sprecher, Christoph M; Schmidutz, Florian; Helfen, Tobias; Richards, R Geoff; Blauth, Michael; Milz, Stefan
2015-12-01
Osteoporosis is a systemic disorder predominantly affecting postmenopausal women but also men at an advanced age. Both genders may suffer from low-energy fractures of, for example, the proximal humerus when reduction of the bone stock and/or quality has occurred. The aim of the current study was to compare the amount of bone in typical fracture zones of the proximal humerus in osteoporotic and non-osteoporotic individuals. The amount of bone in the proximal humerus was determined histomorphometrically in frontal plane sections. The donor bones were allocated to normal and osteoporotic groups using the T-score from distal radius DXA measurements of the same extremities. The T-score evaluation was done according to WHO criteria. Regional thickness of the subchondral plate and the metaphyseal cortical bone were measured using interactive image analysis. At all measured locations the amount of cancellous bone was significantly lower in individuals from the osteoporotic group compared to the non-osteoporotic one. The osteoporotic group showed more significant differences between regions of the same bone than the non-osteoporotic group. In both groups the subchondral cancellous bone and the subchondral plate were least affected by bone loss. In contrast, the medial metaphyseal region in the osteoporotic group exhibited higher bone loss in comparison to the lateral side. This observation may explain prevailing fracture patterns, which frequently involve compression fractures, and certainly has an influence on the stability of implants placed in this medial region. It should be considered when planning the anchoring of osteosynthesis materials in osteoporotic patients with fractures of the proximal humerus. PMID:26705200
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Roy, Gourgopal; Fedorkin, Oleg; Fujiki, Masaaki; Skarjinskaia, Marina; Knapp, Elisabeth; Rabindran, Shailaja; Yusibov, Vidadi
2013-07-01
Alfalfa mosaic virus (AlMV) RNAs 1 and 2 with deletions in their 3' non‑translated regions (NTRs) have been previously shown to be encapsidated into virions by coat protein (CP) expressed from RNA3, indicating that the 3' NTRs of RNAs 1 and 2 are not required for virion assembly. Here, we constructed various mutants by deleting sequences within the 3' NTR of AlMV subgenomic (sg) RNA4 (same as of RNA3) and examined the effect of these deletions on replication and translation of chimeric Tobacco mosaic virus (TMV) expressing AlMV sgRNA4 from the TMV CP sg promoter (Av/A4) in tobacco protoplasts and Nicotiana benthamiana plants. While the Av/A4 mutants were as competent as the wild-type Av/A4 in RNA replication in protoplasts, their encapsidation, long-distance movement and virus accumulation varied significantly in N. benthamiana. These data suggest that the 3' NTR of AlMV sgRNA4 contains potential elements necessary for virus encapsidation. PMID:23867804
Pileri, Emanuela; Gibert, Elisa; Soldevila, Ferran; García-Saenz, Ariadna; Pujols, Joan; Diaz, Ivan; Darwich, Laila; Casal, Jordi; Martín, Marga; Mateu, Enric
2015-01-30
The present study assessed the efficacy of vaccination against genotype 1 porcine reproductive and respiratory syndrome virus (PRRSV) in terms of reduction of transmission. Ninety-eight 3-week-old piglets were divided into two groups: V (n=40) and NV (n=58), which were housed separately. V animals were vaccinated with a commercial genotype 1 PRRSV vaccine while NV were kept as controls. On day 35 post-vaccination, 14 NV pigs were separated and inoculated intranasally with 2 ml of a heterologous genotype 1 PRRSV isolate ("seeder" pigs, SP). The other V and NV animals were distributed in groups of 5 pigs each. Two days later, one SP was introduced into each pen to expose V and NV to PRRSV. Sentinel pigs were allocated in adjacent pens. Follow-up lasted 21 days. All NV (30/30) became viremic after contact with SP, while only 53% of V pigs did (21/40, p<0.05). Vaccination shortened viremia (12.2±4 versus 3.7±3.4 days in NV and V pigs, respectively, p<0.01). The 50% survival time for becoming infected (Kaplan-Meier) for V was 21 days (CI95%=14.1-27.9) compared to 7 days (CI95%=5.2-8.7) for NV animals (p<0.01). These differences were reflected in the R value as well: 2.78 (CI95%=2.13-3.43) for NV and 0.53 (CI95%=0.19-0.76) for V pigs (p<0.05). All sentinel pigs (10/10) in pens adjacent to NV+SP pens got infected compared to 1/4 sentinel pigs allocated contiguous to a V+SP pen. These data show that vaccination of piglets significantly decreases parameters related to PRRSV transmission. PMID:25439650
A graph spectrum based geometric biclustering algorithm.
Wang, Doris Z; Yan, Hong
2013-01-21
Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect the coherent patterns in the data for further analysis of function. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that the GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data. PMID:23079285
Noise filtering algorithm for the MFTF-B computer based control system
Minor, E.G.
1983-11-30
An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
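The reporting rule described above can be sketched as a minimal deadband-style filter: each channel keeps only its last reported value, and a new sample is forwarded only when it differs from that value by more than a threshold. This is an illustrative sketch under assumed interfaces, not the MFTF-B implementation; the threshold and the dictionary-based channel state are hypothetical.

```python
def make_filter(threshold):
    """Deadband-style noise filter sketch: suppress samples that stay within
    `threshold` of the last reported value; forward significant changes."""
    last_reported = {}  # channel -> last value sent to the database

    def process(channel, value):
        prev = last_reported.get(channel)
        if prev is None or abs(value - prev) > threshold:
            last_reported[channel] = value
            return value   # significant change: report to the database
        return None        # within the noise band: suppress the message

    return process

# Example usage with a hypothetical threshold of 0.5 units.
f = make_filter(threshold=0.5)
```

Only subtraction and comparison are needed per sample, matching the computational simplicity the abstract claims; the actual system additionally packs the per-channel state into four bytes.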
NASA Astrophysics Data System (ADS)
El-Guibaly, Fayez; Sabaa, A.
1996-10-01
In this paper, we introduce modifications on the classic CORDIC algorithm to reduce the number of iterations, and hence the rounding noise. The modified algorithm needs, at most, half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
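For reference, the classic circular CORDIC in rotation mode, the baseline whose iteration count the paper halves, can be sketched as follows. This is the textbook algorithm, not the authors' modified version; the iteration count and floating-point formulation are illustrative.

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Classic circular CORDIC in rotation mode: rotate (x, y) by `angle`
    (radians, within the ~±1.74 rad convergence range) using one
    shift-and-add style micro-rotation per iteration, then divide out the
    constant gain accumulated by the micro-rotations."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle  # residual angle driven toward zero
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x / gain, y / gain
```

In hardware, the multiplications by 2^-i are bit shifts, which is where rounding noise accumulates; reducing the iteration count, as the paper's modification does, directly reduces that noise.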
Algorithms for SU(n) boson realizations and D -functions
NASA Astrophysics Data System (ADS)
Dhand, Ish; Sanders, Barry C.; de Guise, Hubert
2015-11-01
Boson realizations map operators and states of groups to transformations and states of bosonic systems. We devise a graph-theoretic algorithm to construct the boson realizations of the canonical SU(n) basis states, which reduce the canonical subgroup chain, for arbitrary n. The boson realizations are employed to construct D -functions, which are the matrix elements of arbitrary irreducible representations, of SU(n) in the canonical basis. We demonstrate that our D -function algorithm offers significant advantage over the two competing procedures, namely, factorization and exponentiation.
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
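A minimal sketch of Strassen's seven-product recursion (textbook form for power-of-two sizes, without the arbitrary-shape handling or scratch-space optimizations of the CRAY implementation described above):

```python
import numpy as np

def strassen(a, b, leaf=64):
    """Strassen multiply for square matrices whose side is a power of two;
    falls back to conventional multiplication at or below the leaf size."""
    n = a.shape[0]
    if n <= leaf:
        return a @ b
    h = n // 2
    a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
    b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
    # Seven recursive products instead of the eight of the naive block method.
    m1 = strassen(a11 + a22, b11 + b22, leaf)
    m2 = strassen(a21 + a22, b11, leaf)
    m3 = strassen(a11, b12 - b22, leaf)
    m4 = strassen(a22, b21 - b11, leaf)
    m5 = strassen(a11 + a12, b22, leaf)
    m6 = strassen(a21 - a11, b11 + b12, leaf)
    m7 = strassen(a12 - a22, b21 + b22, leaf)
    c = np.empty_like(a)
    c[:h, :h] = m1 + m4 - m5 + m7
    c[:h, h:] = m3 + m5
    c[h:, :h] = m2 + m4
    c[h:, h:] = m1 - m2 + m3 + m6
    return c
```

The temporaries m1..m7 are the scratch space that a careful implementation, like the one in the abstract, works to minimize; the leaf size is tuned to the machine so the recursion stops where conventional multiplication is faster.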
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
Statistically significant relational data mining
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.
2014-02-01
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second are statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Climate warming could reduce runoff significantly in New England, USA
Huntington, T.G.
2003-01-01
The relation between mean annual temperature (MAT), mean annual precipitation (MAP) and evapotranspiration (ET) for 38 forested watersheds was determined to evaluate the potential increase in ET and resulting decrease in stream runoff that could occur following climate change and lengthening of the growing season. The watersheds were all predominantly forested and were located in eastern North America, along a gradient in MAT from 3.5°C in New Brunswick, Canada, to 19.8°C in northern Florida. Regression analysis for MAT versus ET indicated that along this gradient ET increased at a rate of 2.85 cm °C-1 increase in MAT (±0.96 cm °C-1, 95% confidence limits). General circulation models (GCM) using current mid-range emission scenarios project global MAT to increase by about 3°C during the 21st century. The inferred, potential, reduction in annual runoff associated with a 3°C increase in MAT for a representative small coastal basin and an inland mountainous basin in New England would be 11-13%. Percentage reductions in average daily runoff could be substantially larger during the months of lowest flows (July-September). The largest absolute reductions in runoff are likely to be during April and May with smaller reduction in the fall. This seasonal pattern of reduction in runoff is consistent with lengthening of the growing season and an increase in the ratio of rain to snow. Future increases in water use efficiency (WUE), precipitation, and cloudiness could mitigate part or all of this reduction in runoff but the full effects of changing climate on WUE remain quite uncertain as do future trends in precipitation and cloudiness.
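The quoted figures can be sanity-checked with simple water-balance arithmetic: if runoff = precipitation − ET with precipitation held fixed, a 3 °C warming raises ET, and hence lowers runoff, by about 2.85 × 3 ≈ 8.6 cm. The slope and warming below are from the abstract; the representative annual runoff of 70 cm is a hypothetical value chosen only to illustrate the stated 11-13% range.

```python
# Back-of-envelope check of the abstract's numbers.
et_slope_cm_per_degc = 2.85   # regression slope from the study
warming_degc = 3.0            # mid-range GCM projection cited in the study
annual_runoff_cm = 70.0       # assumed representative value, for illustration only

extra_et_cm = et_slope_cm_per_degc * warming_degc           # ~8.55 cm more ET
runoff_reduction_pct = 100.0 * extra_et_cm / annual_runoff_cm
```

With the assumed 70 cm of annual runoff this gives roughly a 12% reduction, consistent with the 11-13% reported for the two New England basins.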
Bacteriophage significantly reduces Listeria monocytogenes on raw salmon fillet tissue
Technology Transfer Automated Retrieval System (TEKTRAN)
We have demonstrated the antilisterial activity of generally recognized as safe (GRAS) bacteriophage LISTEX P100 (phage P100) on the surface of raw salmon fillet tissue against Listeria monocytogenes serotypes 1/2a and 4b. In a broth model system, phage P100 completely inhibited L. monocytogenes gro...
RDX-based nanocomposite microparticles for significantly reduced shock sensitivity.
Qiu, Hongwei; Stepanov, Victor; Di Stasio, Anthony R; Chou, Tsengming; Lee, Woo Y
2011-01-15
Cyclotrimethylenetrinitramine (RDX)-based nanocomposite microparticles were produced by a simple, yet novel spray drying method. The microparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and high performance liquid chromatography (HPLC), which shows that they consist of small RDX crystals (∼0.1-1 μm) uniformly and discretely dispersed in a binder. The microparticles were subsequently pressed to produce dense energetic materials which exhibited a markedly lower shock sensitivity. The low sensitivity was attributed to small crystal size as well as small void size (∼250 nm). The method developed in this work may be suitable for the preparation of a wide range of insensitive explosive compositions. PMID:20940087
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect, but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
CORDIC algorithms in four dimensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc; Hsiao, Shen-Fu
1990-11-01
CORDIC algorithms offer an attractive alternative to multiply-and-add based algorithms for the implementation of two-dimensional rotations preserving either norm: √(x² + y²) or √(x² − y²). Indeed, these norms, whose computation is a significant part of the evaluation of the two-dimensional rotations, are computed much more easily by the CORDIC algorithms. However, the part played by norm computations in the evaluation of rotations becomes quickly small as the dimension of the space increases. Thus, in spaces of dimension 5 or more there is no practical alternative to multiply-and-add based algorithms. In the intermediate region, dimensions 3 and 4, extensions of the CORDIC algorithms are an interesting option. The four-dimensional extensions are particularly elegant and are the main object of this paper.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
A Task-parallel Clustering Algorithm for Structured AMR
Gunney, B N; Wissink, A M
2004-11-02
A new parallel algorithm, based on the Berger-Rigoutsos algorithm for clustering grid points into logically rectangular regions, is presented. The clustering operation is frequently performed in the dynamic gridding steps of structured adaptive mesh refinement (SAMR) calculations. A previous study revealed that although the cost of clustering is generally insignificant for smaller problems run on relatively few processors, the algorithm scaled inefficiently in parallel and its cost grows with problem size. Hence, it can become significant for large scale problems run on very large parallel machines, such as the new BlueGene system (which has O(10^4) processors). We propose a new task-parallel algorithm designed to reduce communication wait times. Performance was assessed using dynamic SAMR re-gridding operations on up to 16K processors of currently available computers at Lawrence Livermore National Laboratory. The new algorithm was shown to be up to an order of magnitude faster than the baseline algorithm and had better scaling trends.
Vu, Michael M; Kim, John Y S
2015-06-01
Acellular dermal matrix (ADM) is widely used in primary prosthetic breast reconstruction. Many indications and contraindications to use ADM have been reported in the literature, and their use varies by institution and surgeon. Developing rational, tested algorithms to determine when ADM is appropriate can significantly improve surgical outcomes and reduce costs associated with ADM use. We review the important indications and contraindications, and discuss the algorithms that have been put forth so far. Further research into algorithmic decision-making for ADM use will allow optimized balancing of cost with risk and benefit. PMID:26161304
Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D
2012-01-01
Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160
Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees
Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng
2015-01-01
In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
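For contrast with the block-based approach, the conventional pixel-based two-pass labeling baseline, whose per-pixel neighborhood operations the proposed algorithm works to minimize, can be sketched as follows. This is the classic method with union-find for label equivalences, not the paper's block-based algorithm.

```python
def label(image):
    """Classic two-pass connected-component labeling (4-connectivity).
    First pass assigns provisional labels and records equivalences in a
    union-find structure; second pass resolves each pixel to its root label."""
    h, w = len(image), len(image[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    for y in range(h):                     # first pass
        for x in range(w):
            if not image[y][x]:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent[nxt] = nxt
                labels[y][x] = nxt
                nxt += 1
            else:
                labels[y][x] = min(l for l in (up, left) if l > 0)
                if up and left:
                    union(up, left)
    for y in range(h):                     # second pass
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Every foreground pixel triggers two neighbor reads here; the block-based scan mask and decision trees in the paper reduce exactly this memory-access cost.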
Eddy-current NDE inverse problem with sparse grid algorithm
NASA Astrophysics Data System (ADS)
Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric
2016-02-01
In model-based inverse problems, unknown parameters (such as length, width, and depth) need to be estimated. When the unknown parameters are few, conventional mathematical methods are suitable, but as the number of unknown parameters grows, the computation becomes heavy. To reduce the computational burden, we used the sparse grid algorithm in our work. As a result, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid.
A Fast Implementation of the ISOCLUS Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2003-01-01
Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo, et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O
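The iterative scheme underlying both k-means and ISOCLUS can be sketched as a minimal Lloyd's iteration: assign each point to its nearest center, then move each center to the mean of its assigned points. This is the naive nearest-center computation that the kd-tree filtering method accelerates; ISOCLUS adds merge/split heuristics on top of this loop, which are not shown. The 2-D point representation and `init` parameter are illustrative assumptions.

```python
import random

def kmeans(points, k, iters=20, init=None):
    """Minimal Lloyd's k-means in the plane. `init` optionally fixes the
    starting centers; otherwise k points are sampled at random."""
    centers = list(init) if init is not None else random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Naive O(k) nearest-center search per point (the costly step).
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                  + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep empty ones fixed).
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers
```

Each iteration costs O(n·k) distance computations here; the kd-tree approach of Kanungo et al. prunes whole subtrees of points per center, which is the acceleration the abstract adapts to ISOCLUS.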
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
NASA Astrophysics Data System (ADS)
Choi, Shinkook; Baek, Jongduk
2015-03-01
In a cone beam computed tomography (CBCT), the severity of the cone beam artifacts is increased as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed sensing based iterative algorithms have been proposed. In this paper, we used two pass algorithm and Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using structural similarity (SSIM) index. In two pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects, and therefore the algorithm reproduces the cone beam artifacts (i.e., error image) produced by ED objects, and then subtracts it from the original image. GPBB algorithm is a compressed sensing based iterative algorithm which minimizes an energy function for calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, therefore it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of two algorithms, we used testing objects consisting of 7 ellipsoids separated along the z direction and cone beam artifacts were generated using 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of ED objects. In contrast, GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
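The selection, recombination, and mutation loop described here can be illustrated on the toy OneMax problem (maximize the number of 1-bits in a string). Everything below is a generic sketch, not code from the Oak Ridge work; the tournament selection, one-point crossover, and per-bit mutation rate are common textbook choices.

```python
import random

def onemax_ga(n_bits=20, pop_size=30, generations=60, seed=1):
    """Toy genetic algorithm for OneMax: tournament selection, one-point
    crossover, and bit-flip mutation over a fixed-size population."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit string = number of 1-bits

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ga()
```

In applications like the pipeline-operation and set-covering work mentioned above, the bit string encodes a candidate solution and the fitness function incorporates the domain-specific reward and apportionment of credit.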
IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS
Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.
2010-06-11
The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items, assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object's image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, these algorithms can be used in combination to improve verification capability in inspection regimes.
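The first algorithm's one-way histogram reduction can be sketched as follows. The bin count, match tolerance, and random stand-in images are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a one-way image reduction: keep only a normalized pixel-intensity
# histogram, which supports template comparison but cannot be inverted back
# to the original pixels.
import numpy as np

def intensity_histogram(image, bins=16):
    """One-way transform: retain only the intensity distribution."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def matches_template(image, template_hist, tol=0.1):
    """Yes/no verification: L1 distance between histograms under a threshold."""
    return float(np.abs(intensity_histogram(image) - template_hist).sum()) < tol

rng = np.random.default_rng(0)
reference = rng.random((64, 64))            # stand-in "sensitive" image
template = intensity_histogram(reference)   # stored non-sensitive summary
different = rng.random((64, 64)) ** 3       # image with a different intensity mix
# matches_template(reference, template) is True; the skewed image fails
```

Only the 16-number summary ever crosses the barrier, which is the sense in which the transform discriminates structure without revealing specificity.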
Outline of a fast hardware implementation of Winograd's DFT algorithm
NASA Technical Reports Server (NTRS)
Zohar, S.
1980-01-01
The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity, which makes it difficult to translate the reduced number of multiplications into faster execution of the DFT in a software implementation. For this reason, a hardware implementation is considered in the current study, based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, with attention to a pipelining scheme in which 5 consecutive data batches are operated on simultaneously, each batch undergoing one of 5 processing phases.
Kamath, Jayesh; Wakai, Sara; Zhang, Wanli; Kesten, Karen; Shelton, Deborah; Trestman, Robert
2016-08-01
Use of medication algorithms in the correctional setting may facilitate clinical decision making, improve consistency of care, and reduce polypharmacy. The objective of the present study was to evaluate the effectiveness of algorithm-driven treatment (Texas Implementation of Medication Algorithm [TIMA]) of bipolar disorder (BD) compared with Treatment as Usual (TAU) in the correctional environment. A total of 61 women inmates with BD were randomized to TIMA (n = 30) or TAU (n = 31) and treated over a 12-week period. The outcome measures included measures of BD symptoms, comorbid symptomatology, quality of life, and psychotropic medication utilization. In comparison with TAU, TIMA-driven treatment reduced polypharmacy, decreased overall psychotropic medication utilization, and significantly decreased use of specific classes of psychotropic medication (antipsychotics and antidepressants). This pilot study confirmed the feasibility and benefits of algorithm-driven treatment of BD in the correctional setting, primarily by enhancing appropriate use of evidence-based treatment. PMID:25829456
Reduced order parameter estimation using quasilinearization and quadratic programming
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Putti, Mario; Yeh, William W.-G.
2012-06-01
The ability of a particular model to accurately predict how a system responds to forcing is predicated on various model parameters that must be appropriately identified. There are many algorithms whose purpose is to solve this inverse problem, which is often computationally intensive. In this study, we propose a new algorithm that significantly reduces the computational burden associated with parameter identification. The algorithm is an extension of the quasilinearization approach where the governing system of differential equations is linearized with respect to the parameters. The resulting inverse problem therefore becomes a linear regression or quadratic programming problem (QP) for minimizing the sum of squared residuals; the solution becomes an update on the parameter set. This process of linearization and regression is repeated until convergence takes place. This algorithm has not received much attention, as the QPs can become quite large, often infeasible for real-world systems. To alleviate this drawback, proper orthogonal decomposition is applied to reduce the size of the linearized model, thereby reducing the computational burden of solving each QP. In fact, this study shows that the snapshots need only be calculated once at the very beginning of the algorithm, after which no further calculations of the reduced-model subspace are required. The proposed algorithm therefore only requires one linearized full-model run per parameter at the first iteration followed by a series of reduced-order QPs. The method is applied to a groundwater model with about 30,000 computation nodes where as many as 15 zones of hydraulic conductivity are estimated.
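The linearize-then-regress loop described above can be sketched for a single parameter. This Gauss-Newton-style iteration on an assumed exponential-decay model illustrates the structure of quasilinearization (linearize in the parameter, solve a least-squares problem for the update, repeat); it omits the proper orthogonal decomposition reduction.

```python
# Sketch of quasilinearization for parameter estimation: at each iteration the
# model is linearized with respect to the parameter and a least-squares
# problem yields the parameter update.
import numpy as np

def estimate_decay_rate(t, y_obs, k0=0.1, iters=20):
    """Identify k in the model y = exp(-k*t) from observations y_obs."""
    k = k0
    for _ in range(iters):
        model = np.exp(-k * t)
        residual = y_obs - model
        J = (-t * model).reshape(-1, 1)          # sensitivity d(model)/dk
        dk, *_ = np.linalg.lstsq(J, residual, rcond=None)
        k += dk[0]                                # regression solution = update
        if abs(dk[0]) < 1e-12:                    # converged
            break
    return k

t = np.linspace(0.0, 5.0, 50)
k_hat = estimate_decay_rate(t, np.exp(-0.7 * t))
# k_hat converges to the true rate 0.7
```

For a full groundwater model the linearized system is what grows large; the paper's contribution is shrinking each such regression via a reduced-order subspace computed once at the start.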
A distributed Canny edge detector: algorithm and FPGA implementation.
Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J
2014-07-01
The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it more computationally intensive than other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block level leads to excessive edges in smooth regions and to loss of significant edges in high-detail regions, since the original Canny computes the high and low thresholds from frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD, since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than that of the original frame-based algorithm, especially when noise is present in the images. Finally, the algorithm is implemented using a 32-computing-engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
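The idea of deriving hysteresis thresholds from a block's own gradient statistics can be sketched as follows. The percentile rule, the low/high ratio, and the synthetic gradient blocks are assumptions for illustration, not the paper's exact formulas.

```python
# Sketch of block-adaptive hysteresis thresholds: each block's thresholds come
# from its own gradient-magnitude distribution rather than frame statistics.
import numpy as np

def block_thresholds(gradient_block, high_pct=80.0, ratio=0.4):
    """High threshold from a percentile of the block's gradients; low = ratio*high."""
    high = np.percentile(gradient_block, high_pct)
    return ratio * high, high

rng = np.random.default_rng(0)
smooth = rng.rayleigh(0.5, (32, 32))     # smooth block: weak gradients
detailed = rng.rayleigh(4.0, (32, 32))   # high-detail block: strong gradients
low_s, high_s = block_thresholds(smooth)
low_d, high_d = block_thresholds(detailed)
# the detailed block gets proportionally larger thresholds, so smooth regions
# do not produce excessive edges and detailed regions keep significant ones
```

Because each block is self-contained, blocks can be thresholded independently and in parallel, which is what removes the frame-level latency.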
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
A Synthesized Heuristic Task Scheduling Algorithm
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm chooses tasks at three levels of priority: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are scheduled. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244
Aligning parallel arrays to reduce communication
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.; Schreiber, Robert; Gilbert, John R.; Chatterjee, Siddhartha
1994-01-01
Axis and stride alignment is an important optimization in compiling data-parallel programs for distributed-memory machines. We previously developed an optimal algorithm for aligning array expressions. Here, we examine alignment for more general program graphs. We show that optimal alignment is NP-complete in this setting, so we study heuristic methods. This paper makes two contributions. First, we show how local graph transformations can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. Second, we give a heuristic that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. Our algorithms have been implemented; we present experimental results showing their effect on the performance of some example programs running on the CM-5.
ICESat-2 / ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.
2013-12-01
NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft, which is expected to launch in 2016 with a 3-year mission lifetime. The ICESat-2 orbital altitude will be 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6 spot pattern on the Earth's surface. Without some method of eliminating solar background noise in near real-time, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level, a set of onboard Receiver Algorithms has been developed. These algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allow the instrument to telemeter only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and an onboard relief map. Similar to what was flown on the ATLAS predecessor GLAS (Geoscience Laser Altimeter System), the DEM provides minimum and maximum heights for each 1 degree x 1 degree tile on the Earth. This information allows the onboard algorithm to limit its signal search to the region between minimum and maximum heights (plus some margin for errors). The understanding that the surface echoes will tend to clump while noise will be randomly distributed led us to histogram the received event times. The selection of the signal locations is based on those histogram bins with statistically significant counts. Once the signal location has been established, the onboard Digital Relief Map (DRM) is used to determine the vertical width of the telemetry band about the signal. The ATLAS Receiver Algorithms are nearing completion of the development phase and are currently being tested using a Monte Carlo Software Simulator that models the instrument, the orbit, and the environment.
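The histogramming step can be sketched as follows: surface echoes clump in a few height bins while solar background events spread uniformly, so bins with statistically significant counts mark the signal. The bin count, the mean-plus-five-sigma significance rule, and the simulated photon heights are illustrative assumptions, not the flight algorithm's parameters.

```python
# Sketch of histogram-based signal finding: clumped surface echoes stand out
# against uniformly distributed background noise events.
import numpy as np

def find_signal_bins(heights, n_bins=100, n_sigma=5.0):
    """Flag height bins whose counts are far above the background level."""
    counts, edges = np.histogram(heights, bins=n_bins)
    mean, std = counts.mean(), counts.std()
    significant = counts > mean + n_sigma * std   # statistically significant bins
    return edges, significant

rng = np.random.default_rng(42)
noise = rng.uniform(0.0, 1000.0, 5000)     # background spread over a 1 km window
surface = rng.normal(420.0, 2.0, 800)      # echoes clumped near 420 m
edges, sig = find_signal_bins(np.concatenate([noise, surface]))
# only the bins around 420 m exceed the significance threshold
```

Telemetering only a band around the flagged bins is what cuts the downlink volume; the DEM bounds and DRM width then refine that band.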
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Developing dataflow algorithms
Hiromoto, R.E.; Bohm, A.P.W. (Dept. of Computer Science)
1991-01-01
Our goal is to study the performance of a collection of numerical algorithms written in Id, which is available to users of Motorola's dataflow machine Monsoon. We will study the dataflow performance of these implementations first under the parallel profiling simulator Id World, and second in comparison with actual dataflow execution on the Motorola Monsoon. This approach will allow us to follow the computational and structural details of the parallel algorithms as implemented on dataflow systems. When running our programs on the Id World simulator we will examine the behaviour of algorithms at dataflow graph level, where each instruction takes one timestep and data becomes available at the next. This implies that important machine level phenomena, such as the effect that global communication time may have on the computation, are not addressed. These phenomena will be addressed when we run our programs on the Monsoon hardware. Potential ramifications for compilation techniques, functional programming style, and program efficiency are significant to this study. In a later stage of our research we will compare the efficiency of Id programs to programs written in other languages. This comparison will be of a rather qualitative nature as there are too many degrees of freedom in a language implementation for a quantitative comparison to be of interest. We begin our study by examining one routine with distinctive computational characteristics: the Fast Fourier Transform, which exhibits computational parallelism and data dependences between the butterfly shuffles.
Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.
2015-01-01
Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Premature-convergence remission and local-optimum avoidance techniques are proposed to improve the solution quality, and convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406
A new algorithm for speckle reduction of optical coherence tomography images
NASA Astrophysics Data System (ADS)
Avanaki, Mohammadreza R. N.; Marques, Manuel J.; Bradu, Adrian; Hojjatoleslami, Ali; Podoleanu, Adrian G.
2014-03-01
In this study, we present a new algorithm based on an artificial neural network (ANN) for reducing speckle noise from optical coherence tomography (OCT) images. The noise is modeled for different parts of the image using Rayleigh distribution with a noise parameter, sigma, estimated by the ANN. This is then used along with a numerical method to solve the inverse Rayleigh function to reduce the noise in the image. The algorithm is tested successfully on OCT images of retina, demonstrating a significant increase in the signal-to-noise ratio (SNR) and the contrast of the processed images.
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of a navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).
A New Aloha Anti-Collision Algorithm Based on CDMA
NASA Astrophysics Data System (ADS)
Bai, Enjian; Feng, Zhu
Tag collision is a common problem in RFID (radio frequency identification) systems. Collisions compromise the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access (CDMA) technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm reduces the reader recognition time and improves the overall system throughput.
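For intuition about why per-slot contention drives recognition time, here is a standard framed slotted Aloha throughput model; it is background for the collision problem, not the paper's CDMA-combined algorithm.

```python
# Framed slotted Aloha model: with n tags each replying in one of F random
# slots, only slots holding exactly one reply are readable.
def singleton_fraction(n_tags, n_slots):
    """Expected fraction of slots containing exactly one tag reply."""
    p = 1.0 / n_slots
    return n_tags * p * (1.0 - p) ** (n_tags - 1)

# per-slot read efficiency for 64 tags at several frame sizes
rates = {f: singleton_fraction(64, f) for f in (16, 64, 256)}
# efficiency peaks when the frame size matches the tag count (F = 64)
```

Small frames waste slots on collisions and large frames waste them on silence, which is why dynamic frame sizing and spreading codes (as in the proposed algorithm) both raise throughput.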
Fast motion prediction algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel
2011-06-01
Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard, developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras in a single video stream. The design therefore aims to exploit inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers (the first being the full- and sub-pixel motion search, the second the mode selection process, and the third the repetition of the first two for inter-view prediction), the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, compared with both the H.264/MVC reference software and recently reported work.
Squint mode SAR processing algorithms
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Jin, M.; Curlander, J. C.
1989-01-01
The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.
ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.
Claire, Robert W.
1984-01-01
An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
The design of flux-corrected transport (FCT) algorithms on structured grids
NASA Astrophysics Data System (ADS)
Zalesak, Steven T.
2005-12-01
A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow field; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order algorithms, in flux form, in the various regions of the flow field. In this dissertation, we describe a set of design principles that significantly enhance the accuracy and robustness of FCT algorithms by enhancing the accuracy and robustness of each of the three components individually. These principles include the use of very high order spatial operators in the design of the high order fluxes, the use of non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. We show via standard test problems the kind of algorithm performance one can expect if these design principles are adhered to. We give examples of applications of these design principles in several areas of physics. Finally, we compare the performance of these enhanced algorithms with that of other recent front-capturing methods.
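The three components can be illustrated in one dimension for linear advection. This sketch uses Lax-Wendroff as the high-order method, upwind as the low-order method, and a classic Boris-Book-style limiter rather than the enhanced non-clipping limiters and failsafe strategy described above.

```python
# 1D flux-corrected transport sketch for u_t + a*u_x = 0 (a > 0), periodic grid.
import numpy as np

def fct_advect_step(u, c):
    """One FCT step with Courant number c = a*dt/dx, 0 < c < 1."""
    um1, up1 = np.roll(u, 1), np.roll(u, -1)
    utd = u - c * (u - um1)                  # low-order (upwind) update
    A = 0.5 * c * (1.0 - c) * (up1 - u)      # antidiffusive flux: LW minus upwind
    s = np.sign(A)
    d = np.roll(utd, -1) - utd               # utd[i+1] - utd[i]
    # flux limiter: clip each antidiffusive flux so no new extrema are created
    Ac = s * np.maximum(0.0, np.minimum.reduce(
        [np.abs(A), s * np.roll(d, -1), s * np.roll(d, 1)]))
    return utd - (Ac - np.roll(Ac, 1))       # conservative corrected update

u = np.zeros(100)
u[20:40] = 1.0                               # square wave initial condition
for _ in range(100):
    u = fct_advect_step(u, 0.5)
# the advected profile stays within [0, 1]: no spurious oscillations
```

The corrected flux reduces to the high-order flux where the field is smooth and to the low-order flux near discontinuities, which is exactly the blending the three components above describe.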
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Song, Yang; Zhang, Bin; He, Anzhi
2006-11-01
A novel algebraic iterative algorithm based on deflection tomography is presented. This algorithm is derived from the essentials of deflection tomography with a linear expansion of the local basis functions. With this algorithm the tomographic problem is finally reduced to the solution of a set of linear equations. The algorithm is demonstrated by mapping a three-peak Gaussian simulated temperature field. Compared with the results obtained by traditional deflection algorithms, its reconstructions are significantly more accurate, especially when noisy data are added. In the density diagnosis of a hypersonic wind tunnel, this algorithm is adopted to reconstruct the density distribution of an axially symmetric flow field. One cross section of the reconstruction is compared with the inverse Abel transform algorithm. Results show that the novel algorithm can achieve an accuracy equivalent to that of the inverse Abel transform algorithm, yet is more versatile because it is applicable to arbitrary kinds of distributions. PMID:17068552
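Once the problem is reduced to a set of linear equations, an algebraic iterative scheme can solve it row by row. The Kaczmarz-style sweep below is a generic stand-in for this kind of iteration, not the authors' specific update; the small system is illustrative.

```python
# Kaczmarz iteration: cyclically project the iterate onto the hyperplane of
# each equation a_i . x = b_i until the system is satisfied.
import numpy as np

def kaczmarz(A, b, sweeps=50):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # orthogonal projection
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
x = kaczmarz(A, A @ x_true)
# x converges to the consistent solution [1, 2]
```

Row-action methods like this are popular in tomography because each projection datum touches only a few basis coefficients, so the sweeps stay cheap even for large, sparse systems.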
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.
A Modified NASA Team Sea Ice Algorithm for the Antarctic
NASA Technical Reports Server (NTRS)
Cavalieri, Donald J.; Markus, Thorsten
1998-01-01
A recent comparative study of the NASA Team and Bootstrap passive microwave sea ice algorithms revealed significantly different sea ice concentration retrievals in some parts of the Antarctic. The study identified potential reasons for the discrepancies, including the influence of sea ice temperature variability on the Bootstrap retrievals and the influence of ice surface reflectivity on the horizontally polarized emissivity in the NASA Team retrievals. In this study, we present a modified version of the NASA Team algorithm which reduces the error associated with the use of horizontally polarized radiance data, while retaining the relative insensitivity to ice temperature variations provided by radiance ratios. By retaining the 19 GHz polarization as an independent variable, we also maintain a relatively large dynamic range in sea ice concentration. The modified algorithm utilizes the 19 GHz polarization (PR19) and both gradient ratios, GRV and GRH, defined by (37V-19V)/(37V+19V) and (37H-19H)/(37H+19H), respectively, rather than just the GRV used in the current NASA Team algorithm. A plot of GRV versus GRH shows that the preponderance of points lie along a quadratic curve, whereas those points affected by surface reflectivity anomalies deviate from this curve. This serves as a method of identifying the problem points. The 19H brightness temperature of these problem points is increased so that they too fall along the quadratic curve. Sea ice concentrations derived from AVHRR imagery illustrate the extent to which this method reduces the error associated with surface layering.
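The ratios the modified algorithm is built on follow directly from the definitions above. The brightness temperatures in the example are made-up illustrative values, not calibrated data.

```python
# The three radiance ratios used by the modified NASA Team algorithm,
# computed from channel brightness temperatures (in kelvin).
def pr19(tb19v, tb19h):
    """19 GHz polarization ratio (19V-19H)/(19V+19H)."""
    return (tb19v - tb19h) / (tb19v + tb19h)

def grv(tb37v, tb19v):
    """Vertical-polarization gradient ratio (37V-19V)/(37V+19V)."""
    return (tb37v - tb19v) / (tb37v + tb19v)

def grh(tb37h, tb19h):
    """Horizontal-polarization gradient ratio (37H-19H)/(37H+19H)."""
    return (tb37h - tb19h) / (tb37h + tb19h)

# illustrative brightness temperatures for a high-concentration ice pixel
ratios = (pr19(250.0, 235.0), grv(245.0, 250.0), grh(232.0, 235.0))
```

Because each ratio divides a channel difference by the channel sum, a common multiplicative change in physical temperature largely cancels, which is the insensitivity to ice temperature the abstract refers to.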
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, the large size of the system and the high sampling rate requirement make the implementation of this control algorithm computationally challenging, since it demands a sustained computational throughput on the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
Singular Parameter Prediction Algorithm for Bistable Neural Systems.
Durand, Dominique M; Jahangiri, Anila
2010-04-01
An algorithm is presented to predict the intensity and timing of a singular single stimulus required to switch the state of a bistable system from repetitive activity to a stable point. The algorithm is first tested on a modified Hodgkin-Huxley model to predict the parameters of a stimulus capable of annihilating the spontaneously occurring repetitive action potentials. Elevation of the potassium equilibrium potential causes oscillations in the V, m, h and n parameters and generates periodic activity. Equations describing the time-varying behavior of these parameters can be used to predict the pulse width, coupling interval and intensity of a single anodic pulse applied between two consecutive action potentials to suppress the activity. The algorithm was then applied to predict the singular parameters of quasi-periodic epileptiform activity generated in the hippocampus slice preparation exposed to high potassium concentrations. The results indicate that a stimulus with the estimated parameters was able to either completely annihilate the action potentials in the HH model or predict the region of unpredictable latencies. Therefore, this algorithm is capable of predicting singular parameters accurately when the model is known. In the case of an experimental system where the equations of the system are not known, the algorithm predicted parameters in the range of those observed experimentally. Therefore, the algorithm could significantly reduce the amount of time required to find the singular parameters of experimental bistable systems, normally obtained by a systematic exploration of the parameter space. In particular, this algorithm could be useful to predict the singular parameters of quasi-periodic epileptiform activity, leading to the suppression of this activity if the system is bistable. PMID:21866209
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multilayer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
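The key feedforward trick above, solving for the output spike time in closed form instead of scanning all time points, can be sketched generically. Assuming the membrane potential near threshold is approximated by a quadratic a*t^2 + b*t + c (the paper derives this form from its spike response model; the coefficients here are purely illustrative), the spike time is the earliest positive root:

```python
import math

def earliest_spike_time(a, b, c):
    """Earliest positive root of a*t**2 + b*t + c = 0, or None if the
    (quadratically approximated) membrane potential never reaches threshold.
    The quadratic form of the spike response model is an assumption here."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None

# Example: potential v(t) = -t^2 + 3t - 2 crosses threshold (v = 0) first at t = 1.
t_spike = earliest_spike_time(-1.0, 3.0, -2.0)
```

The saving is that one quadratic formula replaces a dense sweep over candidate time points.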
An enhanced algorithm to estimate BDS satellite's differential code biases
NASA Astrophysics Data System (ADS)
Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei
2016-02-01
This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs: the mean day-to-day scatter is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm from the Institute of Geodesy and Geophysics, China (IGGDCB) was also used to process the same dataset. The DCB difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the corresponding difference for the IGGDCB algorithm. In addition, we find that the day-to-day scatter of the BDS IGSO satellites is clearly lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of the GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating the satellite DCBs of multiple GNSS systems.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations apply when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
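A minimal ordinary-kriging sketch shows why the linear system is symmetric but indefinite, which is exactly the setting where SYMMLQ-type solvers apply. The Gaussian covariance model and all numbers below are illustrative assumptions, and a dense solve stands in for the paper's fast iterative methods.

```python
import numpy as np

def ordinary_kriging(xy, z, query, length_scale=0.3):
    """Ordinary kriging with a Gaussian covariance model (model choice is an
    assumption). The augmented system is symmetric but indefinite because of
    the Lagrange-multiplier row, hence solvers such as SYMMLQ."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = np.exp(-(d / length_scale) ** 2)
    # Augment with the unbiasedness constraint (weights sum to one).
    A = np.block([[C, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    d0 = np.linalg.norm(xy - query, axis=-1)
    rhs = np.append(np.exp(-(d0 / length_scale) ** 2), 1.0)
    w = np.linalg.solve(A, rhs)[:n]
    return float(w @ z)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(20, 2))
vals = np.sin(pts[:, 0]) + pts[:, 1]
est = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
```

A useful sanity check is that kriging interpolates exactly: querying at a data point reproduces that point's value.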
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
Speech Enhancement based on Compressive Sensing Algorithm
NASA Astrophysics Data System (ADS)
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods for speech enhancement have been proposed over the years, with design focused mainly on quality and intelligibility. A novel speech enhancement method based on compressive sensing (CS) is proposed. CS is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well across a wide range of speech tests, giving good noise suppression relative to conventional approaches without obvious degradation of speech quality.
ALGORITHM FOR SORTING GROUPED DATA
NASA Technical Reports Server (NTRS)
Evans, J. D.
1994-01-01
It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
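The reduce-to-width-1 step described above can be sketched directly: pack each fixed-width record into a single integer key via a polynomial (radix) encoding in a base wide enough for any shifted element, sort the keys, and decode. This is modeled on the abstract's description; the exact encoding used by the original BASIC program is assumed.

```python
def sort_grouped(records):
    """Sort fixed-width integer records by packing each into a single key.
    The polynomial/base encoding follows the abstract's description; details
    of the original BASIC implementation are assumptions."""
    lo = min(x for rec in records for x in rec)
    hi = max(x for rec in records for x in rec)
    base = hi - lo + 1          # radix wide enough for any shifted element
    n = len(records[0])

    def encode(rec):            # polynomial in `base` with digits (x - lo)
        key = 0
        for x in rec:
            key = key * base + (x - lo)
        return key

    def decode(key):            # invert the polynomial to recover the record
        digits = []
        for _ in range(n):
            key, d = divmod(key, base)
            digits.append(d + lo)
        return tuple(reversed(digits))

    return [decode(k) for k in sorted(encode(r) for r in records)]

data = [(3, 1, 4), (1, 5, 9), (2, 6, 5), (1, 5, 8)]
ordered = sort_grouped(data)
```

Because the encoding is order-preserving for fixed-width records, sorting the single keys is equivalent to sorting the records lexicographically.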
GPU Accelerated Event Detection Algorithm
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
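The windowed-SVD reduction in step (a) can be sketched as follows: summarize each sliding window of a multivariate series by its leading singular value, then run a simple detector on the resulting univariate series. The leading-singular-value summary and the z-score detector are illustrative choices, not the GAEDA statistic itself.

```python
import numpy as np

def window_series(X, w):
    """Reduce a multivariate sequence X (time x dims) to a univariate series:
    the leading singular value of each length-w window. This is one of several
    possible SVD summaries; the paper's exact statistic is an assumption."""
    return np.array([np.linalg.svd(X[t:t + w], compute_uv=False)[0]
                     for t in range(len(X) - w + 1)])

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(300, 5))
X[200:210] *= 8.0            # injected anomaly

s = window_series(X, w=20)
z = (s - s.mean()) / s.std() # simple z-score detector on the univariate series
anomalous = np.flatnonzero(z > 3.0)
```

Any window overlapping the injected burst (starting indices 181 through 209) can be flagged; the incremental-SVD and tensor techniques in the abstract are what make this affordable on streaming data.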
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
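The contrast between conflict-aware and conflict-free scheduling can be sketched with a toy single-antenna model: every request is placed and its conflicts recorded, and a conflict-free view is derived afterwards by dropping lower-priority conflicting items. The request format and priority scheme are illustrative; the real DSN constraints are far richer.

```python
def conflict_aware_schedule(requests):
    """Schedule every request and record conflicts instead of dropping them.
    Each request: (name, priority, start, end); higher priority value wins.
    A toy single-antenna model, not the actual DSN scheduler."""
    placed = []
    for name, prio, start, end in sorted(requests, key=lambda r: -r[1]):
        conflicts = [p[0] for p in placed if start < p[3] and p[2] < end]
        placed.append((name, prio, start, end, conflicts))
    return placed

def conflict_free(schedule):
    """Derive a conflict-free schedule by removing items that conflict with an
    already-kept higher-priority item (the schedule is priority-ordered)."""
    kept = []
    for name, prio, start, end, _ in schedule:
        if all(not (start < e and s < end) for _, _, s, e, _ in kept):
            kept.append((name, prio, start, end, []))
    return [k[0] for k in kept]

reqs = [("voyager2", 9, 0, 4), ("mars_rel", 5, 3, 6), ("cassini", 7, 5, 8)]
aware = conflict_aware_schedule(reqs)
free = conflict_free(aware)
```

In the conflict-aware output every request appears, with `mars_rel` carrying its conflict list for negotiation; the conflict-free view simply drops it.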
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
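The nonlinear gain idea above, a third-order polynomial that maximizes motion cues while respecting the motion-system limit, can be sketched with one reasonable choice of constraints: the output reaches the limit with zero slope at the largest expected input. The constraint choice and all numbers are assumptions; the paper's actual coefficients are not reproduced.

```python
def cubic_gain(limit, x_max):
    """Third-order polynomial gain p(x) = a*x + b*x**3 chosen so that
    p(x_max) = limit and p'(x_max) = 0. This particular pair of constraints
    is an illustrative assumption, not the paper's design."""
    a = 3.0 * limit / (2.0 * x_max)
    b = -limit / (2.0 * x_max ** 3)
    return lambda x: a * x + b * x ** 3

# Map aircraft inputs up to 10 units onto a simulator limit of 1 unit.
g = cubic_gain(limit=1.0, x_max=10.0)
```

Small inputs pass through nearly linearly (preserving cues), while large inputs flatten out smoothly at the hardware limit instead of saturating abruptly.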
Threshold extended ID3 algorithm
NASA Astrophysics Data System (ADS)
Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.
2012-04-01
Providing authentication and confidentiality for information exchanged over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
Reduced discretization error in HZETRN
Slaba, Tony C.; Blattnig, Steve R.; Tweed, John
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm^2 exposed to both solar particle event and galactic cosmic ray environments.
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
Significant lexical relationships
Pedersen, T.; Kayaalp, M.; Bruce, R.
1996-12-31
Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
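An exact conditional test of the kind described above can be illustrated with Fisher's exact test on a 2x2 contingency table for a candidate bigram, computed from hypergeometric probabilities with the margins held fixed. The counts are hypothetical, and Fisher's test is used here as a representative exact conditional test, not necessarily the paper's exact procedure.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table [[a, b], [c, d]]:
    the sum of hypergeometric probabilities of all tables (with the same
    margins) no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)

    def prob(x):  # P(X = x) under the hypergeometric null of independence
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-12))

# Hypothetical counts for a bigram (w1, w2):
# rows = w1 present/absent, columns = w2 present/absent.
p_value = fisher_exact_p(30, 70, 20, 880)
significant = p_value < 0.05
```

Because the computation is exact, it remains valid for the sparse counts typical of lexical data, where chi-square approximations break down.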
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity with reduced computation cost, a significant attribute of a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off relationship between network throughput and computation complexity in the routing table update procedure through a simulation study. PMID:26480397
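A toy version of the hottest-request-first policy can be sketched as follows: sort requests by demand intensity, route each on a shortest path, and assign the first wavelength free on every link of that path (the wavelength-continuity constraint). This is a simplified reading of the policy described in the abstract, not the paper's exact algorithm, and the tiny network is invented for illustration.

```python
import heapq
from collections import defaultdict

def shortest_path(graph, src, dst):
    """Dijkstra over an undirected graph given as {node: {nbr: length}}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def rwa(graph, requests, n_wavelengths):
    """Hottest-request-first RWA sketch: process requests in order of demand,
    route on the shortest path, then first-fit a wavelength that is free on
    every link of the path (wavelength continuity)."""
    used = defaultdict(set)                       # link -> occupied wavelengths
    assignments, blocked = {}, []
    for req_id, src, dst, demand in sorted(requests, key=lambda r: -r[3]):
        path = shortest_path(graph, src, dst)
        links = [frozenset(e) for e in zip(path, path[1:])]
        free = set(range(n_wavelengths))
        for e in links:
            free -= used[e]
        if free:
            w = min(free)                         # first-fit
            for e in links:
                used[e].add(w)
            assignments[req_id] = (path, w)
        else:
            blocked.append(req_id)
    return assignments, blocked

net = {"A": {"B": 1}, "B": {"A": 1, "C": 1}, "C": {"B": 1}}
reqs = [("r1", "A", "C", 10), ("r2", "A", "B", 5), ("r3", "B", "C", 7)]
assignments, blocked = rwa(net, reqs, n_wavelengths=2)
```

Serving the hottest request first lets the long A-to-C path claim wavelength 0 end-to-end, while the shorter requests fit on wavelength 1 of their single links.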
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on, and significantly improved from, distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
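The BVC idea, shrinking the search graph by discarding boundary voxels before Dijkstra runs, can be illustrated on a toy 2-D "tube" standing in for the 3-D colon volume. The boundary test and the unit-cost Dijkstra below are deliberate simplifications of the paper's method.

```python
import heapq
import numpy as np

def boundary_mask(seg):
    """Cells of the segmentation that touch the background (4-neighborhood)."""
    pad = np.pad(seg, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:])
    return seg & ~interior

def dijkstra_path_length(mask, src, dst):
    """Unit-cost Dijkstra over True cells of a 2-D mask (a toy stand-in for
    the 3-D volume used in virtual colonoscopy)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist[(r, c)]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                    and mask[nr, nc] and d + 1 < dist.get((nr, nc), 1 << 30)):
                dist[(nr, nc)] = d + 1
                heapq.heappush(heap, (d + 1, (nr, nc)))
    return None

# Toy "colon": a thick horizontal tube. Cutting boundary voxels (the BVC step)
# shrinks the graph Dijkstra has to explore, which is where the speedup comes from.
seg = np.zeros((5, 12), dtype=bool)
seg[1:4, :] = True
core = seg & ~boundary_mask(seg)
n_removed = int(seg.sum() - core.sum())
length = dijkstra_path_length(core, (2, 1), (2, 10))
```

Here BVC removes 26 of the 36 segmented cells, leaving only the central row for the centerline search.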
Efficient algorithms for single-axis attitude estimation
NASA Astrophysics Data System (ADS)
Shuster, M. D.
1981-10-01
The computationally efficient algorithms determine attitude from measurements of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations has been reduced. Both single-time and batch estimators are presented, along with a covariance analysis of each algorithm.
Improved Algorithm For Finite-Field Normal-Basis Multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1989-01-01
Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.
Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W
2014-04-01
We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. PMID:24556079
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Lack of Statistical Significance
ERIC Educational Resources Information Center
Kehle, Thomas J.; Bray, Melissa A.; Chafouleas, Sandra M.; Kawano, Takuji
2007-01-01
Criticism has been leveled against the use of statistical significance testing (SST) in many disciplines. However, the field of school psychology has been largely devoid of critiques of SST. Inspection of the primary journals in school psychology indicated numerous examples of SST with nonrandom samples and/or samples of convenience. In this…
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
ERIC Educational Resources Information Center
Docking, R. A.; Docking, E.
1984-01-01
Reports on a case study of inservice training conducted to enhance the teacher/student relationship and reduce teacher anxiety. Found significant improvements in attitudes, classroom management activities, and lower anxiety among teachers. (MD)
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718
Fast prediction algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel
2013-03-01
The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
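To make the order-of-accuracy idea concrete, here is a minimal sketch comparing a second-order and a fourth-order central difference for f'(x), far below the eleventh order cited above but showing how raising the order shrinks the error at a fixed grid spacing. The test function and step size are illustrative assumptions.

```python
import math

def d2(f, x, h):
    # 2nd-order central difference: error ~ h^2
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):
    # 4th-order central difference: error ~ h^4
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x, h = 1.0, 0.1
err2 = abs(d2(math.sin, x, h) - math.cos(x))
err4 = abs(d4(math.sin, x, h) - math.cos(x))
```

With h = 0.1 the fourth-order stencil is already several orders of magnitude more accurate, which is why high-order schemes can afford as few as eight points per wavelength.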
NASA Astrophysics Data System (ADS)
Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.
2014-12-01
Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant tsunamis, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and the 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
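The serial building block that solvers such as PDD parallelize is the classic Thomas algorithm for a tridiagonal system Ax = d. The sketch below is a generic illustration of that baseline, not the PDD algorithm itself; the symmetric Toeplitz system used in the example echoes the class analyzed in the paper.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (len n-1),
    b = diagonal (len n), c = super-diagonal (len n-1), d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    # forward elimination
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# symmetric Toeplitz system with diagonal 2, off-diagonals -1,
# right-hand side chosen so the exact solution is [1, 2, 3, 4]
x = thomas([-1, -1, -1], [2, 2, 2, 2], [-1, -1, -1], [0, 0, 0, 5])
```

The forward sweep makes each unknown depend only on its successor, which is exactly the serial data dependence that PDD breaks by partitioning the matrix across processors.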
ERIC Educational Resources Information Center
Timpane, Michael; And Others
A group of three conference papers, all addressing the subject of effective programs to decrease the number of school dropouts, is presented in this document. The first paper, "Systemic Approaches to Reducing Dropouts" (Michael Timpane), asserts that dropping out is a symptom of failures in the social, economic, and educational systems. Dropping…
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
Genetic algorithms (GA) have a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands reaches the optimal solution more rapidly, effectively improves measurement accuracy, and reduces the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established to assess the feasibility of optimizing wave bands with a genetic algorithm, and model robustness was evaluated. The genetic algorithm selected 12 characteristic bands. Using the reflectance values of corn seedling component information at the wavelengths corresponding to these 12 bands as variables, a PLS model for the SPAD values of the corn leaves was established, with r = 0.7825. This model outperformed the PLS models established over the full spectrum and over the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing the corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
Protein disorder reduced in Saccharomyces cerevisiae to survive heat shock.
Vicedo, Esmeralda; Gasik, Zofia; Dong, Yu-An; Goldberg, Tatyana; Rost, Burkhard
2015-01-01
Recent experiments established that a culture of Saccharomyces cerevisiae (baker's yeast) survives sudden high temperatures by specifically duplicating the entire chromosome III and two chromosomal fragments (from IV and XII). Heat shock proteins (HSPs) are not significantly over-abundant in the duplication. In contrast, we suggest a simple algorithm to "postdict" the experimental results: Find a small enough chromosome with minimal protein disorder and duplicate this region. This algorithm largely explains all observed duplications. In particular, all regions duplicated in the experiment reduced the overall content of protein disorder. The differential analysis of the functional makeup of the duplication remained inconclusive. Gene Ontology (GO) enrichment suggested over-representation in processes related to reproduction and nutrient uptake. Analyzing the protein-protein interaction network (PPI) revealed that few network-central proteins were duplicated. The predictive hypothesis hinges upon the concept of reducing proteins with long regions of disorder in order to become less sensitive to heat shock attack. PMID:26673203
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be iterative, with verification activities interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
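The majority-voting (MV) combination described above reduces, for a single state, to counting the actions proposed by the ensemble members. The toy sketch below illustrates only that voting step under assumed integer action ids; it is not the paper's agent, and the deterministic tie-break is an assumption of this sketch.

```python
from collections import Counter

def majority_vote(actions):
    """Return the action chosen by the most ensemble members.

    actions: list of action ids, one per RL algorithm in the ensemble.
    Ties are broken deterministically toward the smallest action id.
    """
    counts = Counter(actions)
    top = max(counts.values())
    return min(a for a, c in counts.items() if c == top)

# five hypothetical algorithms (e.g. Q-learning, Sarsa, AC, QV, ACLA)
# each propose an action for the current state
choice = majority_vote([1, 0, 1, 2, 1])      # action 1 wins 3 votes
tied = majority_vote([0, 1, 0, 1, 2])        # 0 and 1 tie; 0 returned
```

In the paper the vote is taken over policies derived from each algorithm's value function; a Boltzmann-multiplication ensemble would instead multiply the members' action probabilities before normalizing.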
Significant biases affecting abundance determinations
NASA Astrophysics Data System (ADS)
Wesson, Roger
2015-08-01
I have developed two highly efficient codes to automate analyses of emission line nebulae. The tools place particular emphasis on the propagation of uncertainties. The first tool, ALFA, uses a genetic algorithm to rapidly optimise the parameters of gaussian fits to line profiles. It can fit emission line spectra of arbitrary resolution, wavelength range and depth, with no user input at all. It is well suited to highly multiplexed spectroscopy such as that now being carried out with instruments such as MUSE at the VLT. The second tool, NEAT, carries out a full analysis of emission line fluxes, robustly propagating uncertainties using a Monte Carlo technique.Using these tools, I have found that considerable biases can be introduced into abundance determinations if the uncertainty distribution of emission lines is not well characterised. For weak lines, normally distributed uncertainties are generally assumed, though it is incorrect to do so, and significant biases can result. I discuss observational evidence of these biases. The two new codes contain routines to correctly characterise the probability distributions, giving more reliable results in analyses of emission line nebulae.
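Monte Carlo propagation of the kind NEAT performs can itself expose the bias described above: pushing normally distributed line fluxes through a nonlinear quantity such as a line ratio skews the output distribution. The sketch below uses illustrative flux values, not data from ALFA or NEAT.

```python
import random

random.seed(42)
a_true, b_true, sigma = 10.0, 5.0, 1.0   # illustrative line fluxes

# draw noisy realizations of both lines and form the ratio each time
samples = [random.gauss(a_true, sigma) / random.gauss(b_true, sigma)
           for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)
naive = a_true / b_true   # ratio of the noise-free values: 2.0
```

Because 1/b is convex, the mean of the sampled ratios exceeds the naive ratio of the means; the weaker the denominator line relative to its uncertainty, the larger this bias grows, which is the effect the codes' uncertainty routines are designed to capture.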
Attitude Estimation Signal Processing: A First Report on Possible Algorithms and Their Utility
NASA Technical Reports Server (NTRS)
Riasati, Vahid R.
1998-01-01
In this brief effort, time has been of the essence. The data had to be acquired from APL/Lincoln Labs, stored, and sorted to obtain the pertinent streams. This has been a significant part of the effort, and the associated hardware and software problems have been addressed with appropriate solutions. Past this point, some basic and important algorithms are utilized to improve the performance of the attitude estimation systems. These algorithms are an essential part of the signal processing for the attitude estimation problem, as they are utilized to reduce the amount of additive/multiplicative noise, which in general may or may not change its structure and probability density function (pdf) in time. These algorithms are not currently utilized in the processing of the data; at least, we are not aware of their use in this attitude estimation problem. Some of these algorithms, like variable thresholding, are new conjectures, but one would expect that someone somewhere has utilized this kind of scheme before. The variable thresholding idea is a straightforward scheme to use in the case of a slowly varying pdf, or slowly varying statistical moments of the unwanted random process. The algorithms here are kept simple yet effective for processing the data and removing the unwanted noise. For the most part, these algorithms can be arranged so that their consecutive and orderly execution complements the preceding algorithm and improves the overall performance of the signal processing chain.
An efficient QoS-aware routing algorithm for LEO polar constellations
NASA Astrophysics Data System (ADS)
Tian, Xin; Pham, Khanh; Blasch, Erik; Tian, Zhi; Shen, Dan; Chen, Genshe
2013-05-01
In this work, a Quality of Service (QoS)-aware routing (QAR) algorithm is developed for Low-Earth Orbit (LEO) polar constellations. LEO polar orbits are the only type of satellite constellation in which inter-plane inter-satellite links (ISLs) have been implemented in the real world. The QAR algorithm exploits features of the topology of the LEO satellite constellation, which makes it more efficient than general shortest-path routing algorithms such as Dijkstra's or the extended Bellman-Ford algorithm. Traffic density, priority, and QoS requirements on communication delays and errors can be easily incorporated into the QAR algorithm through satellite distances. The QAR algorithm also supports efficient load balancing in the satellite network by utilizing the multiple paths from the source satellite to the destination satellite, effectively lowering the rate of network congestion. The QAR algorithm supports a novel robust routing scheme in LEO polar constellations, which is able to significantly reduce the impact of ISL congestion on QoS in terms of communication delay and jitter.
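For reference, the general-purpose baseline that QAR is compared against is Dijkstra's shortest-path algorithm. The sketch below is that generic baseline on a made-up four-satellite mesh, with link weights standing in for inter-satellite delays; it contains nothing of the QAR topology exploitation.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, delay), ...]}; returns (cost, path)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # reconstruct the path by walking predecessors back to the source
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

# hypothetical 4-node mesh with delay-weighted directed links
g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
     'C': [('D', 1)], 'D': []}
cost, path = dijkstra(g, 'A', 'D')
```

On a constellation graph this runs in O(E log V) per query; QAR's gain comes from replacing this generic search with routing decisions derived directly from the regular polar-orbit topology.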
Robustness of Tree Extraction Algorithms from LIDAR
NASA Astrophysics Data System (ADS)
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the cost of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed species pine-hardwoods). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with most of the parameter sets required by the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
On the scalability of parallel genetic algorithms.
Cantú-Paz, E; Goldberg, D E
1999-01-01
This paper examines the scalability of several types of parallel genetic algorithms (GAs). The objective is to determine the optimal number of processors that can be used by each type to minimize the execution time. The first part of the paper considers algorithms with a single population. The investigation focuses on an implementation where the population is distributed to several processors, but the results are applicable to more common master-slave implementations, where the population is entirely stored in a master processor and multiple slaves are used to evaluate the fitness. The second part of the paper deals with parallel GAs with multiple populations. It first considers a bounding case where the connectivity, the migration rate, and the frequency of migrations are set to their maximal values. Then, arbitrary regular topologies with lower migration rates are considered and the frequency of migrations is set to its lowest value. The investigation is mainly theoretical, but experimental evidence with an additively decomposable function is included to illustrate the accuracy of the theory. In all cases, the calculations show that the optimal number of processors that minimizes the execution time is directly proportional to the square root of the population size and the fitness evaluation time. Since these two factors usually increase as the domain becomes more difficult, the results of the paper suggest that parallel GAs can integrate large numbers of processors and significantly reduce the execution time of many practical applications. PMID:10578030
A semisimultaneous inversion algorithm for SAGE III
NASA Astrophysics Data System (ADS)
Ward, Dale M.
2002-12-01
The Stratospheric Aerosol and Gas Experiment (SAGE) III instrument was successfully launched into orbit on 10 December 2001. The planned operational species separation inversion algorithm will utilize a stepwise retrieval strategy. This paper presents an alternative, semisimultaneous species separation inversion that simultaneously retrieves all species over user-specified vertical intervals or blocks. By overlapping these vertical blocks, retrieved species profiles over the entire vertical range of the measurements are obtained. The semisimultaneous retrieval approach provides a more straightforward method for evaluating the error coupling that occurs among the retrieved profiles due to various types of input uncertainty. Simulation results are presented to show how the semisimultaneous inversion can enhance understanding of the SAGE III retrieval process. In the future, the semisimultaneous inversion algorithm will be used to help evaluate the results and performance of the operational inversion. Compared to SAGE II, SAGE III will provide expanded and more precise spectral measurements. This alone is shown to significantly reduce the uncertainties in the retrieved ozone, nitrogen dioxide, and aerosol extinction profiles for SAGE III. Additionally, the well-documented concern that SAGE II retrievals are biased by the level of volcanic aerosol is greatly alleviated for SAGE III.
Algorithms for Automatic Alignment of Arrays
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J.
1996-01-01
Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself or can be used to improve performance. We describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication in replication.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
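The simpler baseline mentioned above, Reverse Cuthill-McKee, is itself easy to sketch: breadth-first search from a low-degree vertex, visiting neighbors in order of increasing degree, then reversing the ordering. The pure-Python sketch below uses a tiny path graph with scrambled labels as an assumed example; it measures bandwidth rather than the envelope or wavefront metrics the paper optimizes.

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of a graph given as
    {vertex: [neighbors]} with vertices numbered 0..n-1."""
    n = len(adj)
    visited = [False] * n
    order = []
    # start each component from a minimum-degree vertex
    for start in sorted(range(n), key=lambda v: len(adj[v])):
        if visited[start]:
            continue
        visited[start] = True
        queue = deque([start])
        while queue:
            u = queue.popleft()
            order.append(u)
            # enqueue unvisited neighbors by increasing degree
            for v in sorted(adj[u], key=lambda w: len(adj[w])):
                if not visited[v]:
                    visited[v] = True
                    queue.append(v)
    return order[::-1]   # the "reverse" in RCM

def bandwidth(adj, order):
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])

# a 6-vertex path graph whose labels were scrambled: 3-0-5-1-4-2
adj = {3: [0], 0: [3, 5], 5: [0, 1], 1: [5, 4], 4: [1, 2], 2: [4]}
bw_natural = bandwidth(adj, list(range(6)))   # scrambled ordering
bw_rcm = bandwidth(adj, rcm_order(adj))       # RCM recovers the path
```

On this toy graph RCM recovers the underlying path and drops the bandwidth from 5 to 1; the Sloan algorithm improves on this by also weighting each vertex's distance to an end vertex.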
Anthropological significance of phenylketonuria.
Saugstad, L F
1975-01-01
The highest incidence rates of phenylketonuria (PKU) have been observed in Ireland and Scotland. Parents heterozygous for PKU in Norway differ significantly from the general population in the Rhesus, Kell and PGM systems. The parents investigated showed an excess of Rh negative, Kell plus and PGM type 1 individuals, which makes them similar to the present populations in Ireland and Scotland. It is postulated that the heterozygotes for PKU in Norway are descended from a completely assimilated sub-population of Celtic origin, who came or were brought here 1000 years ago. Bronze objects of Western European (Scottish, Irish) origin, found in Viking graves widely distributed in Norway, have been taken as evidence of Vikings returning with loot (including a number of Celts) from Western Viking settlements. The continuity of residence since the Viking age in most habitable parts of Norway, and what seems to be a nearly complete regional relationship between the sites where Viking graves contain western imported objects and the birthplaces of grandparents of PKUs identified in Norway, lend further support to the hypothesis that the heterozygotes for PKU in Norway are descended from a completely assimilated subpopulation. The remarkable resemblance between Iceland and Ireland, in respect of several genetic markers (including the Rhesus, PGM and Kell systems), is considered to be an expression of a similar proportion of people of Celtic origin in each of the two countries. Their identical, high incidence rates of PKU are regarded as further evidence of this. The significant decline in the incidence of PKU when one passes from Ireland, Scotland and Iceland, to Denmark and on to Norway and Sweden, is therefore explained as being related to a reduction in the proportion of inhabitants of Celtic extraction in the respective populations. PMID:803884
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
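The AM-GM majorization described above can be shown on a tiny posynomial. In this illustrative sketch (not the paper's implementation), we minimize f(x, y) = xy + 1/x + 1/y over x, y > 0, whose true minimum is f = 3 at x = y = 1. The bound xy <= (x_k y_k / 2) ((x/x_k)^2 + (y/y_k)^2), tight at the current iterate (x_k, y_k), separates the coupled term, so each MM update reduces to two one-dimensional problems with the closed-form solution below.

```python
def f(x, y):
    return x * y + 1 / x + 1 / y

# start away from the optimum; the MM surrogate's 1-D minimizers are
# x_new = (x/y)**(1/3) and y_new = (y/x)**(1/3) (derived by setting the
# derivative of the separated surrogate to zero)
x, y = 2.0, 0.5
values = [f(x, y)]
for _ in range(60):
    x, y = (x / y) ** (1 / 3), (y / x) ** (1 / 3)
    values.append(f(x, y))
```

The sequence of objective values is monotonically non-increasing, the MM descent guarantee, and convergence toward the interior minimum is linear, matching the rate stated in the abstract.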
A convergent hybrid decomposition algorithm model for SVM training.
Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco
2009-06-01
Training of support vector machines (SVMs) requires solving a linearly constrained convex quadratic problem. In real applications, the number of training data may be very large and the Hessian matrix cannot be stored. To take this issue into account, a common strategy consists in using decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potentiality of the approach. PMID:19435679
Optimized mean shift algorithm for color segmentation in image sequences
NASA Astrophysics Data System (ADS)
Bailer, Werner; Schallauer, Peter; Haraldsson, Harald B.; Rehatschek, Herwig
2005-03-01
The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply mean shift color segmentation to image sequences, as the first step of a moving object segmentation algorithm. Previous work has shown that it is well suited for this task, because it provides better temporal stability of the segmentation result than other approaches. The drawback is higher computational cost. To speed up processing on image sequences, we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations, we use the originally proposed CIE LUV color space to ensure high quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on the segmentation quality but results in significant speed up. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform objective evaluation of the segmentation results to compare the original algorithm with our modified version. We show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
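The core mean shift iteration is a kernel-weighted mean that climbs to a mode of the data density. The one-dimensional sketch below (toy data, Gaussian kernel, assumed bandwidth) illustrates why seeding with the previous frame's cluster centers speeds things up: a good initial `x` starts the climb near its mode.

```python
import math

def mean_shift(points, x, bandwidth=2.0, iters=100):
    """Shift x to a local density mode via Gaussian-kernel weighted means."""
    for _ in range(iters):
        w = [math.exp(-((p - x) / bandwidth) ** 2) for p in points]
        x = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return x

# two well-separated 1-D "color clusters"
points = [0.0, 0.5, 1.0, 9.5, 10.0, 10.5]
mode_left = mean_shift(points, x=2.0)    # seeded near the left cluster
mode_right = mean_shift(points, x=9.0)   # seeded near the right cluster
```

Each seed converges to the mode of its own cluster (0.5 and 10.0 here); in the segmentation setting, the seeds are the previous frame's cluster centers in CIE LUV space rather than scalars.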
Iterative minimization algorithm for efficient calculations of transition states
NASA Astrophysics Data System (ADS)
Gao, Weiguo; Leng, Jing; Zhou, Xiang
2016-03-01
This paper presents an efficient algorithmic implementation of the iterative minimization formulation (IMF) for fast local search of transition states on a potential energy surface. The IMF is a second order iterative scheme providing a general and rigorous description for the eigenvector-following (min-mode following) methodology. We offer a unified interpretation in numerics via the IMF for existing eigenvector-following methods, such as the gentlest ascent dynamics, the dimer method and many other variants. We then propose our new algorithm based on the IMF. The main feature of our algorithm is that the translation step is replaced by solving an optimization subproblem associated with an auxiliary objective function which is constructed from the min-mode information. We show that using an efficient scheme for the inexact solver and enforcing an adaptive stopping criterion for this subproblem, the overall computational cost will be effectively reduced and a super-linear rate between the accuracy and the computational cost can be achieved. A series of numerical tests demonstrate the significant improvement in the computational efficiency for the new algorithm.
CS based confocal microwave imaging algorithm for breast cancer detection.
Sun, Y P; Zhang, S; Cui, Z; Qu, L L
2016-04-29
Based on compressive sensing (CS) technology, a high-resolution confocal microwave imaging algorithm is proposed for breast cancer detection. By exploiting the spatial sparsity of the target space, the image reconstruction problem is cast within the framework of CS and solved by sparsity-constrained optimization. The effectiveness and validity of the proposed CS imaging method are verified with full-wave synthetic data from a numerical breast phantom generated using the finite-difference time-domain (FDTD) method. The imaging results show that the proposed scheme improves imaging quality while significantly reducing the number of measurements and the collection time compared to the traditional delay-and-sum imaging algorithm. PMID:27177106
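The sparse-recovery core can be illustrated with a generic compressive-sensing reconstruction. This sketch uses orthogonal matching pursuit rather than the paper's specific constrained-optimization solver, and a random Gaussian matrix stands in for the FDTD-derived measurement operator:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build the support of a k-sparse signal."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s           # re-fit and update the residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
m, n, k = 100, 256, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)      # random measurement matrix
x_true = np.zeros(n)                          # k-sparse "target space"
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true                                # compressed measurements (m << n)
x_hat = omp(A, y, k)
```

With m = 100 noiseless measurements of a 5-sparse, 256-dimensional signal, the greedy solver recovers the support and amplitudes exactly, mirroring the paper's point that far fewer measurements than pixels suffice.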
A hierarchical algorithm for molecular similarity (H-FORMS).
Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel
2015-07-15
A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel rotation-invariant local structural descriptors for each atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. Experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of required computational time and accuracy. PMID:26037060
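The atom-correspondence step reduces to an assignment problem on an inter-atomic distance matrix. The sketch below uses hypothetical coordinates and a brute-force solver; real use would substitute an O(n^3) Hungarian implementation for `best_assignment`, which is exactly the problem whose dimensionality the paper's local descriptors shrink:

```python
import numpy as np
from itertools import permutations

def best_assignment(cost):
    """Exact minimum-cost assignment by brute force (fine for a handful of atoms;
    the Hungarian algorithm replaces this for larger molecules)."""
    n = cost.shape[0]
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best)

rng = np.random.default_rng(2)
mol_a = rng.normal(size=(5, 3))                       # atom coordinates of molecule A
perm = rng.permutation(5)
mol_b = mol_a[perm] + rng.normal(0, 0.01, (5, 3))     # same structure, atoms reordered

# cost[i, j] = distance between atom i of A and atom j of B (after alignment)
cost = np.linalg.norm(mol_a[:, None, :] - mol_b[None, :, :], axis=2)
match = best_assignment(cost)
# match[i] gives the atom of B corresponding to atom i of A
```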
Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms
NASA Astrophysics Data System (ADS)
Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei
2016-01-01
In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several such algorithms for the linear damped oscillator and for the single pendulum with linear dissipation. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).
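The Hamiltonian building block the construction starts from can be seen in one dimension: a symplectic Euler step for the harmonic oscillator keeps the energy bounded, while explicit Euler does not. This is a sketch of the generic behavior, not the paper's Birkhoffian schemes:

```python
def integrate(h, steps, symplectic=True):
    """Harmonic oscillator H = (p^2 + q^2)/2; symplectic Euler uses the
    already-updated p in the q step."""
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            p = p - h * q
            q = q + h * p          # updated p -> preserves a modified Hamiltonian
        else:
            q_old = q
            q = q + h * p          # explicit Euler: both updates use old values,
            p = p - h * q_old      # so the energy grows by (1 + h^2) every step
    return 0.5 * (p**2 + q**2)

E_sym = integrate(0.01, 10000)                    # stays near the initial energy 0.5
E_exp = integrate(0.01, 10000, symplectic=False)  # drifts upward by a factor of ~e
```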
Fast imaging system and algorithm for monitoring microlymphatics
NASA Astrophysics Data System (ADS)
Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.
2010-02-01
The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that monitors lymph velocity, from which flow (the main determinant of transport) can be derived in near real time, can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was reduced to less than 3 minutes, compared with our previous system, in which readings were available 30 minutes after the vessels were imaged. The processing is based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition time significantly. In addition, a new data-filtering technique allowed us to obtain results from recordings that were irresolvable by the previous algorithm due to their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric microlymphatics. We recorded velocities of 0.25 mm/s on average in vessels with diameters ranging from 54 μm to 140 μm and phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase statistical power when dealing with chronic changes. Furthermore, we plan to expand its functionality to measure the propagation of contractile activity.
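The frequency-domain correlation at the heart of the speed-up can be sketched in one dimension: a displacement between two intensity profiles is recovered with a single FFT pair instead of O(N^2) correlation sums. The profiles and the pixel/frame-rate scaling are synthetic placeholders:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer displacement s with a ≈ roll(b, s), via
    frequency-domain cross-correlation (O(N log N) instead of O(N^2))."""
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    s = int(np.argmax(corr))
    return s if s <= len(a) // 2 else s - len(a)   # wrap negative shifts

rng = np.random.default_rng(3)
profile = rng.normal(size=512)        # intensity profile along the vessel axis
displaced = np.roll(profile, 7)       # same profile one frame later
est = estimate_shift(displaced, profile)
# velocity follows as est * pixel_size * frame_rate (e.g. at 500 fps)
```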
A Dynamic Framed Slotted ALOHA Algorithm Using Collision Factor for RFID Identification
NASA Astrophysics Data System (ADS)
Choi, Seung Sik; Kim, Sangkyung
In RFID systems, collision resolution is a significant issue for fast tag identification. This letter presents a dynamic framed-slotted ALOHA algorithm that uses a collision factor (DFSA-CF). This method enables fast tag identification by estimating the next frame size from the collision factor of the current frame. Simulation results show that the proposed method reduces the slot times required for RFID identification. When the number of tags is larger than the frame size, the efficiency of the proposed method is greater than that of conventional algorithms.
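The general loop can be sketched as follows; the letter's exact collision-factor rule is not reproduced here, so Vogt's classical estimate of ~2.39 tags per colliding slot stands in for the DFSA-CF frame-size update:

```python
import random

def identify_all(n_tags, frame0=16, max_rounds=200, seed=4):
    """Dynamic framed-slotted ALOHA: after each frame, resize using the observed
    collisions (Vogt's rule of thumb as a stand-in for the collision factor)."""
    rng = random.Random(seed)
    pending, frame, rounds = n_tags, frame0, 0
    while pending > 0 and rounds < max_rounds:
        slots = [0] * frame
        for _ in range(pending):            # each pending tag picks a random slot
            slots[rng.randrange(frame)] += 1
        singles = sum(1 for s in slots if s == 1)    # identified this frame
        collided = sum(1 for s in slots if s > 1)
        pending -= singles
        frame = max(4, round(2.39 * collided))       # next frame size estimate
        rounds += 1
    return pending, rounds

left, rounds = identify_all(100)
```

When the tag count dwarfs the frame (100 tags, 16 slots), nearly every slot collides at first; the adaptive resizing is what keeps the identified-per-slot efficiency near its optimum.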
Algorithme intelligent d'optimisation d'un design structurel de grande envergure (Intelligent optimization algorithm for a large-scale structural design)
NASA Astrophysics Data System (ADS)
Dominique, Stephane
The implementation of an automated decision support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the closest solutions to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates among known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of an island of the genetic algorithm. The algorithm optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem. Then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical device structural design. The algorithm is named GATE, and is essentially a real number
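A compact sketch of the CBR-seeded island idea: each island's initial population is drawn around one retrieved solution, and elitism keeps the best design monotone. The sphere objective and seed values below are placeholders, not the thesis's structural-design objective or GATE's territorial mechanics:

```python
import random

def sphere(x):                       # stand-in objective; the thesis optimizes structural designs
    return sum(v * v for v in x)

def island_ga(seeds, pop=20, gens=100, sigma=0.3, rng=random.Random(5)):
    """One island per CBR-retrieved seed; truncation selection + Gaussian mutation.
    Elitism guarantees the returned design is never worse than the best seed."""
    best = min(seeds, key=sphere)
    for seed in seeds:
        island = [[g + rng.gauss(0, sigma) for g in seed] for _ in range(pop)]
        for _ in range(gens):
            island.sort(key=sphere)
            best = min(best, island[0], key=sphere)
            parents = island[: pop // 2]
            island = parents + [
                [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                for _ in range(pop - len(parents))
            ]
    return best

cbr_seeds = [[1.0] * 5, [-2.0, 1.0, 0.5, 0.0, 1.5]]   # hypothetical retrieved solutions
best = island_ga(cbr_seeds)
```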
Significant Radionuclides Determination
Jo A. Ziegler
2001-07-31
The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, ''DOE SNF DBE Offsite Dose Calculations'' (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in ''DOE SNF DBE Offsite Dose Calculations'' and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, ''Calculations'', and is subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (DOE 2000) as determined by the activity evaluation contained in ''Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010'' (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities''.
Fungi producing significant mycotoxins.
2012-01-01
Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors
Mapping algorithms on regular parallel architectures
Lee, P.
1989-01-01
Many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
Iterative phase retrieval algorithms. I: optimization.
Guo, Changliang; Liu, Shi; Sheridan, John T
2015-05-20
Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
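The baseline the paper builds on can be sketched with Fienup's error-reduction (ER) iteration on synthetic data: alternating projections onto the Fourier-magnitude constraint and the object-domain support/positivity constraint, whose residual is non-increasing. The SPP GS and GS/HIO variants modify this loop precisely to escape the stagnation ER is prone to:

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=6):
    """Fienup ER: alternate the Fourier-magnitude and object-support projections;
    the Fourier-domain residual is monotonically non-increasing."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support
    errors = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errors.append(float(np.linalg.norm(np.abs(G) - mag)))
        G = mag * np.exp(1j * np.angle(G))        # impose measured magnitude
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)   # impose support and positivity
    return g, errors

rng = np.random.default_rng(7)
truth = np.zeros((32, 32))
truth[10:20, 12:22] = rng.random((10, 10))        # hidden object
support = truth > 0                                # known support region
mag = np.abs(np.fft.fft2(truth))                   # measured Fourier magnitude
g, errors = error_reduction(mag, support)
```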
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
Berry, K.; Dayton, S.
1996-10-28
Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers that was becoming dated relative to its time-to-market requirements and was in need of performance improvements. To compound the problems with the existing system, assuring the quality of the data-matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
Sharma, Ashok; Podolsky, Robert; Zhao, Jieping; McIndoe, Richard A.
2009-01-01
Motivation: As the number of publicly available microarray experiments increases, the ability to analyze extremely large datasets across multiple experiments becomes critical. There is a need for algorithms which are fast and can cluster extremely large datasets without sacrificing cluster quality. Clustering is an unsupervised exploratory technique applied to microarray data to find similar data structures or expression patterns. Because of the high input/output costs involved and the large distance matrices calculated, most agglomerative clustering algorithms fail on large datasets (30,000+ genes / 200+ arrays). In this article, we propose a new two-stage algorithm which partitions the high-dimensional space associated with microarray data using hyperplanes. The first stage is based on the Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm, with the second stage being a conventional k-means clustering technique. This algorithm has been implemented in a software tool (HPCluster) designed to cluster gene expression data. We compared the clustering results using the two-stage hyperplane algorithm with the conventional k-means algorithm from other available programs. Because the first stage traverses the data in a single scan, performance and speed increase substantially. The data reduction accomplished in the first stage reduces the memory requirements, allowing us to cluster 44,460 genes without failure, and significantly decreases the time to complete when compared with popular k-means programs. The software was written in C# (.NET 1.1). Availability: The program is freely available and can be downloaded from http://www.amdcc.org/bioinformatics/bioinformatics.aspx. Contact: rmcindoe@mail.mcg.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19261720
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Xiang, Bingren; Wu, Xiaohong; Liu, Dan
2014-01-01
Simultaneous determination of multiple weak chromatographic peaks via stochastic resonance algorithms has attracted much attention in recent years. However, the optimization of the parameters is complicated and time consuming, although the single-well potential stochastic resonance algorithm (SSRA) has already reduced the number of parameters to only one and simplified the process significantly. Even worse, it is often difficult to keep the amplified peaks well shaped. Therefore, a multiobjective genetic algorithm was employed to optimize the SSRA parameter for multiple optimization objectives (i.e., S/N and peak shape) and multiple chromatographic peaks. The applicability of the proposed method was evaluated with an experimental data set of Sudan dyes, and the results showed an excellent quantitative relationship between different concentrations and responses. PMID:24526920
Adaptive motion artifact reducing algorithm for wrist photoplethysmography application
NASA Astrophysics Data System (ADS)
Zhao, Jingwei; Wang, Guijin; Shi, Chenbo
2016-04-01
Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It can reveal potential risks in heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact, owing to the thicker tissue and the smaller number of capillaries at the wrist. Motion artifact is therefore the major factor impeding heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from the motion artifact by exploiting their mixing ratios in different optical paths. There are four major steps in our method: preprocessing, motion artifact estimation, adaptive filtering and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and each subject ran for 3-10 minutes while swinging the arms naturally. The final result was compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well during intense physical exercise and shows great robustness across individuals with different running styles and postures.
A Cross Unequal Clustering Routing Algorithm for Sensor Network
NASA Astrophysics Data System (ADS)
Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles
2013-08-01
In routing protocols for wireless sensor networks, the cluster size is generally fixed in clustering routing algorithms, which can easily lead to the "hot spot" problem. Furthermore, the majority of routing algorithms barely consider the problem of long-distance communication between adjacent cluster heads, which brings high energy consumption. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To address the defects of the EEUC algorithm, the calculation of the competition radius takes both a node's position and its remaining energy into account, making the load of cluster heads more balanced. At the same time, nodes adjacent to a cluster are used to relay data, reducing the energy loss of cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network, and improve the network lifetime.
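The load-balancing idea can be sketched as a competition-radius formula: EEUC shrinks the radius for nodes near the base station, and the paper's modification adds a remaining-energy term. The constants (`alpha`, `beta`, `r_max`) below are hypothetical, not taken from the paper:

```python
def competition_radius(d_to_bs, energy, d_min, d_max, r_max=90.0,
                       alpha=0.5, beta=0.3, e_max=1.0):
    """EEUC-style unequal radius with an assumed remaining-energy term:
    nodes near the base station (which relay more traffic) and low-energy
    nodes get smaller clusters, balancing cluster-head load."""
    dist_term = (d_max - d_to_bs) / (d_max - d_min)
    energy_term = 1.0 - energy / e_max
    return (1.0 - alpha * dist_term - beta * energy_term) * r_max

# nodes at increasing distance from the base station, all at full energy
radii = [competition_radius(d, 1.0, 50.0, 200.0) for d in (50.0, 120.0, 200.0)]
# closer to the BS -> smaller competition radius -> smaller clusters
```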
Exploring a new best information algorithm for Iliad.
Guo, D.; Lincoln, M. J.; Haug, P. J.; Turner, C. W.; Warner, H. R.
1991-01-01
Iliad is a diagnostic expert system for internal medicine. One important feature that Iliad offers is the ability to analyze a particular patient case and to determine the most cost-effective method for pursuing the work-up. Iliad's current "best information" algorithm has not been previously validated and compared to other potential algorithms. Therefore, this paper presents a comparison of four new algorithms to the current algorithm. The basis for this comparison was eighteen "vignette" cases derived from real patient cases from the University of Utah Medical Center. The results indicated that the current algorithm can be significantly improved. More promising algorithms are suggested for future investigation. PMID:1807677
Adaptive color image watermarking algorithm
NASA Astrophysics Data System (ADS)
Feng, Gui; Lin, Qiwei
2008-03-01
As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. But due to the problems of data volume and color shift, watermarking of color images has been less widely studied, even though color images are the principal content in multimedia applications. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transformation. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to significance, some watermark bits are selected and others deleted to form the actual embedded data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.
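A minimal sketch of DCT-domain embedding on one 8x8 block, using quantization-index modulation on a mid-frequency coefficient. The paper embeds in the intensity (I) channel of HSI with HVS-adaptive bit allocation; the coefficient position and quantization step here are illustrative choices:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis for transforming 8x8 blocks."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, pos=(2, 1), delta=8.0):
    """Quantization-index modulation: snap one DCT coefficient to a bin
    whose parity encodes the watermark bit."""
    C = dct_matrix()
    D = C @ block @ C.T
    q = int(np.floor(D[pos] / delta))
    if q % 2 != bit:
        q += 1
    D[pos] = (q + 0.5) * delta        # center of a bin with the right parity
    return C.T @ D @ C                # inverse DCT back to the pixel domain

def extract_bit(block, pos=(2, 1), delta=8.0):
    C = dct_matrix()
    D = C @ block @ C.T
    return int(np.floor(D[pos] / delta)) % 2

rng = np.random.default_rng(9)
block = rng.uniform(0, 255, (8, 8))
bits_out = [extract_bit(embed_bit(block, b)) for b in (0, 1)]
```

The half-bin offset gives a delta/2 margin, which is what buys robustness against mild distortion of the marked image.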
Bayesian Smoothing Algorithms in Partially Observed Markov Chains
NASA Astrophysics Data System (ADS)
Ait-el-Fquih, Boujemaa; Desbouvries, François
2006-11-01
Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N} with t_n = (x_n, r_n, y_{n-1}) is a (Triplet) Markov Chain (TMC). TMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMCs. We first propose twelve algorithms for general TMCs. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMCs of classical Kalman-like smoothing algorithms (originally designed for HMCs) such as the RTS algorithms, the two-filter algorithms or the Bryson and Frazier algorithm.
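For the Gaussian HMC special case mentioned above, the smoothers reduce to the classical Kalman/RTS recursions; a scalar sketch (the TMC versions generalize the state to the triplet t_n):

```python
import numpy as np

def kalman_rts(y, a=0.95, q=0.1, r=0.5):
    """Scalar Kalman filter + Rauch-Tung-Striebel smoother for
    x_n = a x_{n-1} + w_n,  y_n = x_n + v_n."""
    n = len(y)
    mf, Vf = np.zeros(n), np.zeros(n)          # filtered mean / variance
    m_pred, V_pred = 0.0, 1.0                  # prior on x_0
    for t in range(n):
        k = V_pred / (V_pred + r)              # Kalman gain
        mf[t] = m_pred + k * (y[t] - m_pred)
        Vf[t] = (1 - k) * V_pred
        m_pred, V_pred = a * mf[t], a * a * Vf[t] + q
    ms, Vs = mf.copy(), Vf.copy()              # backward RTS pass
    for t in range(n - 2, -1, -1):
        Vp = a * a * Vf[t] + q
        g = a * Vf[t] / Vp                     # smoother gain
        ms[t] = mf[t] + g * (ms[t + 1] - a * mf[t])
        Vs[t] = Vf[t] + g * g * (Vs[t + 1] - Vp)
    return mf, Vf, ms, Vs

rng = np.random.default_rng(10)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.95 * x[t - 1] + rng.normal(0, np.sqrt(0.1))
y = x + rng.normal(0, np.sqrt(0.5), 200)
mf, Vf, ms, Vs = kalman_rts(y)
# smoothing conditions on future observations too, so Vs[t] <= Vf[t] everywhere
```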
A Revision of the NASA Team Sea Ice Algorithm
NASA Technical Reports Server (NTRS)
Markus, T.; Cavalieri, Donald J.
1998-01-01
In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors have been compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ, by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels as well. Atmospheric effects particularly at 85 GHz are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithm show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
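A rough sketch of the shift-and-mask subalgorithm described above, assuming integer keys; the search strategy here is simplified and hypothetical. Scanning mask widths from narrow to wide means the first solution found is also the most compact, which approximates the stated goal of minimizing gaps in the generated sequence:

```python
def find_shift_mask(keys, max_shift=32, max_bits=16):
    """Search for (shift, mask) such that (k >> shift) & mask is unique for
    every key. Trying narrow masks first favors the most compact mapping."""
    n = len(keys)
    for bits in range(1, max_bits + 1):
        mask = (1 << bits) - 1
        for shift in range(max_shift):
            hashed = {(k >> shift) & mask for k in keys}
            if len(hashed) == n:        # all keys map to distinct values
                return shift, mask
    return None                          # no solution within the search bounds

keys = [0x10, 0x25, 0x3F, 0x42, 0x58]
result = find_shift_mask(keys)
```

Once such a pair is found, membership testing is a constant-time table lookup on the hashed value, with no secondary hashing or collision search.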
The energetic significance of cooking.
Carmody, Rachel N; Wrangham, Richard W
2009-10-01
While cooking has long been argued to improve the diet, the nature of the improvement has not been well defined. As a result, the evolutionary significance of cooking has variously been proposed as being substantial or relatively trivial. In this paper, we evaluate the hypothesis that an important and consistent effect of cooking food is a rise in its net energy value. The pathways by which cooking influences net energy value differ for starch, protein, and lipid, and we therefore consider plant and animal foods separately. Evidence of compromised physiological performance among individuals on raw diets supports the hypothesis that cooked diets tend to provide more energy. Mechanisms contributing to energy being gained from cooking include increased digestibility of starch and protein, reduced costs of digestion for cooked versus raw meat, and reduced energetic costs of detoxification and defence against pathogens. If cooking consistently improves the energetic value of foods through such mechanisms, its evolutionary impact depends partly on the relative energetic benefits of non-thermal processing methods used prior to cooking. We suggest that if non-thermal processing methods such as pounding were used by Lower Palaeolithic Homo, they likely provided an important increase in energy gain over unprocessed raw diets. However, cooking has critical effects not easily achievable by non-thermal processing, including the relatively complete gelatinisation of starch, efficient denaturing of proteins, and killing of food-borne pathogens. This means that however sophisticated the non-thermal processing methods were, cooking would have conferred incremental energetic benefits. While much remains to be discovered, we conclude that the adoption of cooking would have led to an important rise in energy availability. For this reason, we predict that cooking had substantial evolutionary significance. PMID:19732938
The functional significance of stereopsis.
O'Connor, Anna R; Birch, Eileen E; Anderson, Susan; Draper, Hayley
2010-04-01
Purpose. Development or restoration of binocular vision is one of the key goals of strabismus management; however, the functional impact of stereoacuity has largely been neglected. Methods. Subjects aged 10 to 30 years with normal, reduced, or nil stereoacuity performed three tasks: Purdue pegboard (measured how many pegs placed in 30 seconds), bead threading (with two sizes of bead, to increase the difficulty; measured time taken to thread a number of beads), and water pouring (measured both accuracy and time). All tests were undertaken both with and without occlusion of one eye. Results. One hundred forty-three subjects were recruited, 32.9% (n = 47) with a manifest deviation. Performances on the pegboard and bead tasks were significantly worse in the nil stereoacuity group when compared with that of the normal stereoacuity group. On the large and small bead tasks, those with reduced stereoacuity were better than those with nil stereoacuity (when the Preschool Randot Stereoacuity Test [Stereo Optical Co, Inc., Chicago, IL] results were used to determine stereoacuity levels). Comparison of the short-term monocular conditions (those with normal stereoacuity but occluded) with nil stereoacuity showed that, on all measures, the performance was best in the nil stereoacuity group and was statistically significant for the large and small beads task, irrespective of which test result was used to define the stereoacuity levels. Conclusions. Performance on motor skills tasks was related to stereoacuity, with subjects with normal stereoacuity performing best on all tests. This quantifiable degradation in performance on some motor skill tasks supports the need to implement management strategies to maximize development of high-grade stereoacuity. PMID:19933184
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
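The ADDC objective named above can be written as the mean, over clusters, of the average Euclidean distance from each document vector to its cluster centroid. A minimal sketch, using toy 2-D vectors in place of real document vectors:

```python
import math

def addc(docs, labels, k):
    """Average Distance to Document Centroid: mean over the k clusters of the
    average Euclidean distance from each member vector to its centroid."""
    dim = len(docs[0])
    total = 0.0
    for j in range(k):
        members = [d for d, lab in zip(docs, labels) if lab == j]
        if not members:
            continue
        centroid = [sum(v[i] for v in members) / len(members) for i in range(dim)]
        total += sum(math.dist(d, centroid) for d in members) / len(members)
    return total / k

# Toy example: two tight clusters, each point 1 unit from its centroid.
docs = [[0, 0], [0, 2], [10, 10], [10, 12]]
labels = [0, 0, 1, 1]
score = addc(docs, labels, 2)
```

A clustering search (firefly, k-means, or PSO) would minimize this score; lower ADDC means more compact clusters.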
Acceleration of iterative image restoration algorithms.
Biggs, D S; Andrews, M
1997-03-10
A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
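One common way to realize this kind of vector-extrapolation acceleration for Richardson-Lucy, following the general idea of Biggs and Andrews in a simplified 1-D form (the step details below are an assumption, not the paper's exact formulation), is to predict the next iterate from the previous two and adapt the extrapolation factor from successive update directions:

```python
import numpy as np

def rl_accelerated(image, psf, iters=20):
    """Richardson-Lucy deconvolution with a simple Biggs-Andrews-style
    vector extrapolation (1-D sketch)."""
    psf_m = psf[::-1]                       # mirrored PSF for the correlation step
    x = np.full_like(image, image.mean())   # flat nonnegative starting estimate
    x_prev = x.copy()
    g_prev = np.zeros_like(x)
    alpha = 0.0
    for _ in range(iters):
        y = x + alpha * (x - x_prev)        # extrapolated prediction
        y = np.clip(y, 1e-12, None)         # keep the estimate positive
        ratio = image / np.clip(np.convolve(y, psf, mode="same"), 1e-12, None)
        x_new = y * np.convolve(ratio, psf_m, mode="same")   # standard R-L update
        g = x_new - y                       # current update direction
        denom = np.dot(g_prev, g_prev)
        # Extrapolation factor from the correlation of successive updates,
        # clipped to [0, 1) for stability.
        alpha = float(np.clip(np.dot(g, g_prev) / denom, 0.0, 0.999)) if denom > 0 else 0.0
        x_prev, x, g_prev = x, x_new, g
    return x

# Example: recover a blurred spike.
truth = np.zeros(32)
truth[16] = 10.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
est = rl_accelerated(blurred, psf, iters=30)
```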
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.
Birefringent filter design by use of a modified genetic algorithm.
Wen, Mengtao; Yao, Jianping
2006-06-10
A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike the standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is realized. A 4-section birefringent filter designed with the algorithm is experimentally realized. PMID:16761031
Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster
NASA Astrophysics Data System (ADS)
Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.
2015-10-01
Mining frequent itemsets from massive datasets has always been one of the most important problems of data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology for processing big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design data mining algorithms for this new paradigm. In this paper, we implement three variations of the Apriori algorithm on the MapReduce paradigm using three data structures: hash tree, trie, and hash table trie (i.e., a trie with a hash technique). We emphasize and investigate the significance of these three data structures for the Apriori algorithm on a Hadoop cluster, which has not yet been given attention. Experiments carried out on both real-life and synthetic datasets show that the hash table trie data structure performs far better than the trie and hash tree in terms of execution time; the hash tree performs worst.
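To make the data-structure discussion concrete, here is a minimal single-machine Apriori in which candidate counts live in a hash table (a Python dict standing in for the hash-table-trie idea; the MapReduce distribution and the tree structures themselves are omitted):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori; candidate supports are kept in a hash table (dict)."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # Count frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    freq = {c: s for c, s in counts.items() if s / n >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        # Generate size-k candidates whose (k-1)-subsets are all frequent.
        items = sorted({i for c in freq for i in c})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(s) in freq for s in combinations(c, k - 1))]
        # One pass over the data, counting into the hash table.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        freq = {c: s for c, s in counts.items() if s / n >= min_support}
        result.update(freq)
        k += 1
    return result

transactions = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
result = apriori(transactions, 0.6)
```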
Social significance of community structure: Statistical view
NASA Astrophysics Data System (ADS)
Li, Hui-Jia; Daniels, Jasmine J.
2015-01-01
Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure in statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.
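The leader-similarity step described above can be sketched directly: with the graph as an adjacency-set dict, a node's similarity to a leader is the count of common neighbors, and membership goes to the most similar leader. The toy graph and leader choice below are invented for illustration.

```python
def common_neighbor_similarity(adj, node, leader):
    """Similarity of a node to a leader = number of neighbors they share."""
    return len(adj[node] & adj[leader])

def assign_to_leaders(adj, leaders):
    """Assign each node to the most similar leader (ties go to the first leader)."""
    return {v: max(leaders, key=lambda lead: common_neighbor_similarity(adj, v, lead))
            for v in adj}

# Toy graph: a triangle {0,1,2} with pendant node 3, and a triangle {4,5,6}.
adj = {
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0},
    4: {5, 6}, 5: {4, 6}, 6: {4, 5},
}
leaders = [0, 4]  # e.g. the highest-degree node of each community
membership = assign_to_leaders(adj, leaders)
```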
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard-to-find bugs. Java's garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
A novel hardware-friendly algorithm for hyperspectral linear unmixing
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Santos, Lucana; López, Sebastián.; Sarmiento, Roberto
2015-10-01
NASA Astrophysics Data System (ADS)
Weber, Bruce A.
2005-07-01
We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.
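The majority-vote fusion mentioned in the conclusion reduces false alarms because a spurious detection rarely recurs across observers. A minimal sketch, with detections reduced to hashable labels (a real system would instead match detections by image location):

```python
from collections import Counter

def majority_vote(detection_lists, min_votes):
    """Keep a detection only if at least min_votes observers reported it."""
    # set(dl) ensures each observer contributes at most one vote per detection.
    votes = Counter(d for dl in detection_lists for d in set(dl))
    return {d for d, v in votes.items() if v >= min_votes}

# Three observers; 't*' are true targets, 'f*' are individual false alarms.
observers = [['t1', 't2', 'f1'], ['t1', 't2'], ['t1', 'f2']]
fused = majority_vote(observers, min_votes=2)
```

The idiosyncratic false alarms ('f1', 'f2') fail to reach two votes and are dropped, while the shared targets survive.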
NASA Astrophysics Data System (ADS)
Yueh, Simon; Tang, Wenqing; Fore, Alexander; Hayashi, Akiko; Song, Yuhe T.; Lagerloef, Gary
2014-08-01
This paper describes the updated Combined Active-Passive (CAP) retrieval algorithm for simultaneous retrieval of surface salinity and wind from Aquarius' brightness temperature and radar backscatter. Unlike the algorithm developed by Remote Sensing Systems (RSS), implemented in the Aquarius Data Processing System (ADPS) to produce Aquarius standard products, the Jet Propulsion Laboratory's CAP algorithm does not require monthly climatology SSS maps for the salinity retrieval. Furthermore, the ADPS-RSS algorithm fully uses the National Center for Environmental Predictions (NCEP) wind for data correction, while the CAP algorithm uses the NCEP wind only as a constraint. The major updates to the CAP algorithm include the galactic reflection correction, Faraday rotation, Antenna Pattern Correction, and geophysical model functions of wind or wave impacts. Recognizing the limitation of geometric optics scattering, we improve the modeling of the reflection of galactic radiation; the results are better salinity accuracy and significantly reduced ascending-descending bias. We assess the accuracy of CAP's salinity by comparison with ARGO monthly gridded salinity products provided by the Asia-Pacific Data-Research Center (APDRC) and Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The RMS differences between Aquarius CAP and APDRC's or JAMSTEC's ARGO salinities are less than 0.2 psu for most parts of the ocean, except for the regions in the Intertropical Convergence Zone, near the outflow of major rivers and at high latitudes.
On the convergence of the phase gradient autofocus algorithm for synthetic aperture radar imaging
Hicks, M.J.
1996-01-01
Synthetic Aperture Radar (SAR) imaging is a class of coherent range and Doppler signal processing techniques applied to remote sensing. The aperture is synthesized by recording and processing coherent signals at known positions along the flight path. Demands for greater image resolution put an extreme burden on requirements for inertial measurement units that are used to maintain accurate pulse-to-pulse position information. The recently developed Phase Gradient Autofocus algorithm relieves this burden by taking a data-driven digital signal processing approach to estimating the range-invariant phase aberrations due to either uncompensated motions of the SAR platform or to atmospheric turbulence. Although the performance of this four-step algorithm has been demonstrated, its convergence has not been modeled mathematically. A new sensitivity study of algorithm performance is a necessary step towards this model. Insights that are significant to the application of this algorithm to both SAR and to other coherent imaging applications are developed. New details on algorithm implementation identify an easily avoided biased phase estimate. A new algorithm for defining support of the point spread function is proposed, which promises to reduce the number of iterations required even for rural scenes with low signal-to-clutter ratios.
Gietzelt, Matthias; Wolf, Klaus-Hendrik; Marschollek, Michael; Haux, Reinhold
2013-07-01
Calibration of accelerometers can be reduced to a 3D-ellipsoid fitting problem. Changing extrinsic factors such as temperature, pressure, or humidity, as well as intrinsic factors such as battery status, demand continual recalibration of the measurements. Thus, there is a need for fast calibration algorithms, e.g. for online analyses. The primary aim of this paper is to propose a non-iterative calibration algorithm for accelerometers with a focus on minimal execution time and low memory consumption. The secondary aim is to benchmark existing calibration algorithms based on 3D-ellipsoid fitting methods. We compared the algorithms regarding calibration quality and execution time as well as the number of quasi-static measurements needed for a stable calibration. As evaluation criteria for the calibration, both the norm of calibrated real-life measurements during inactivity and simulation data were used. The algorithms showed a high calibration quality, but the execution time differed significantly. The calibration method proposed in this paper showed the shortest execution time and a very good performance regarding the number of measurements needed to produce stable results. Furthermore, this algorithm was successfully implemented on a sensor node and calibrates the measured data on-the-fly while continuously storing the measured data to a microSD-card. PMID:23566707
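One standard non-iterative formulation (an axis-aligned special case, chosen here for brevity; the paper's own algorithm may differ) reduces calibration to a single linear least-squares solve: quasi-static samples should lie on an ellipsoid, so fitting a*x^2 + b*y^2 + c*z^2 + p*x + q*y + r*z = 1 and completing the square yields per-axis offsets and scale factors.

```python
import numpy as np

def calibrate(samples):
    """Non-iterative axis-aligned ellipsoid fit: solve
    a*x^2 + b*y^2 + c*z^2 + p*x + q*y + r*z = 1 by linear least squares,
    then recover per-axis offsets and scale factors."""
    M = np.asarray(samples, float)
    D = np.column_stack([M[:, 0]**2, M[:, 1]**2, M[:, 2]**2, M])
    coef, *_ = np.linalg.lstsq(D, np.ones(len(M)), rcond=None)
    a, b, c, p, q, r = coef
    offset = np.array([-p / (2 * a), -q / (2 * b), -r / (2 * c)])
    # Completing the square: a(x-x0)^2 + b(y-y0)^2 + c(z-z0)^2 = gain2.
    gain2 = 1 + a * offset[0]**2 + b * offset[1]**2 + c * offset[2]**2
    scale = np.sqrt(np.array([a, b, c]) / gain2)
    return offset, scale

def apply_calibration(m, offset, scale):
    """Map raw readings onto the unit sphere: (m - offset) * scale."""
    return (np.asarray(m, float) - offset) * scale

# Synthetic check: unit-norm truths distorted by known offset and scale.
dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1],
                 [0, 0, -1], [1, 1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, -1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_offset = np.array([0.1, -0.2, 0.05])
true_scale = np.array([1.1, 0.9, 1.05])
raw = dirs / true_scale + true_offset
offset, scale = calibrate(raw)
```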
Performance analysis of cone detection algorithms.
Mariotti, Letizia; Devaney, Nicholas
2015-04-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
Faster Algorithms on Branch and Clique Decompositions
NASA Astrophysics Data System (ADS)
Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin
We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.
Material design using surrogate optimization algorithm
NASA Astrophysics Data System (ADS)
Khadke, Kunal R.
Nanocomposite ceramics have been widely studied in order to tailor desired properties at high temperatures. Methodologies for material design are still under development. While finite element modeling (FEM) provides significant insight into material behavior, few design researchers have addressed the design paradox that accompanies this rapid design-space expansion. A surrogate optimization model management framework has been proposed to make this design process tractable. In the surrogate optimization material design tool, the analysis cost is reduced by performing simulations on the surrogate model instead of the high-density finite element model. The methodology is applied to find the optimal number of silicon carbide (SiC) particles in a silicon nitride (Si3N4) composite with maximum fracture energy [2]. Along with a deterministic optimization algorithm, model uncertainties have also been considered through a robust design optimization (RDO) method, ensuring a design of minimum sensitivity to changes in the parameters. Applied to nanocomposite design, these methodologies have a significant impact, reducing cost and design cycle time.
Oscillation Detection Algorithm Development Summary Report and Test Plan
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang
2009-10-03
Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, their practical feasibility is greatly limited, as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement
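For ringdown signals specifically, a single oscillation mode and its damping can be estimated from measurements alone by fitting a second-order autoregressive model (a Prony-style step). This simplified single-mode sketch is an illustration, not the report's ModeMeter algorithm:

```python
import numpy as np

def estimate_mode(y, dt):
    """Estimate frequency (Hz) and damping ratio of a single ringdown mode by
    fitting an AR(2) model y[n] = a1*y[n-1] + a2*y[n-2]."""
    Y = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
    roots = np.roots([1, -a1, -a2])          # discrete-time poles z = exp(s*dt)
    s = np.log(roots[np.argmax(np.abs(roots))]) / dt   # continuous-time pole
    freq = abs(s.imag) / (2 * np.pi)
    damping = -s.real / abs(s)               # damping ratio zeta = -sigma/|s|
    return freq, damping

# Synthetic ringdown: 0.5 Hz mode decaying at sigma = 0.2 1/s, sampled at 50 Hz.
t = np.arange(0, 10, 0.02)
y = np.exp(-0.2 * t) * np.cos(np.pi * t)
freq, zeta = estimate_mode(y, 0.02)
```

For a noiseless damped sinusoid the AR(2) fit is exact; with PMU noise, longer windows or higher-order models would be needed.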
Algorithmic Strategies in Combinatorial Chemistry
GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN
2000-08-01
Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at ``massively parallel'' screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of ``rational'' drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems-formulated as (chemical) graph reconstruction problems-related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as their computation kernel. PMID:24212035
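The paper's specific algorithm is not detailed in the abstract; the sketch below shows the generic greedy set-cover formulation that underlies many tag SNP selection methods, under a deliberately simplified equivalence rule (identical or complementary haplotype columns, a stand-in for an r-squared threshold). The function names and toy data are assumptions.

```python
def greedy_tag_snps(haplotypes):
    """Greedy set-cover heuristic for tag SNP selection.

    haplotypes: equal-length strings over {0, 1}; column j is SNP j.
    SNP a is taken to tag SNP b when their columns are identical or exact
    complements -- a simplified stand-in for an r^2 = 1
    linkage-disequilibrium criterion.
    """
    n_snps = len(haplotypes[0])
    cols = ["".join(h[j] for h in haplotypes) for j in range(n_snps)]
    comp = {"0": "1", "1": "0"}

    def tags(a, b):
        return cols[a] == cols[b] or cols[a] == "".join(comp[c] for c in cols[b])

    uncovered, selected = set(range(n_snps)), []
    while uncovered:
        # Pick the SNP that represents the most still-uncovered SNPs.
        best = max(uncovered, key=lambda s: sum(tags(s, u) for u in uncovered))
        selected.append(best)
        uncovered -= {u for u in uncovered if tags(best, u)}
    return selected

# Toy data: the columns of SNPs 0-3 are identical or complementary, so any
# one of them tags the others; SNP 4 is independent. Two tag SNPs suffice.
haps = ["01100", "01101", "10010", "10011"]
selected = greedy_tag_snps(haps)
```

The O(n^2) pairwise scan here is exactly the kind of cost the paper's algorithm is designed to avoid on real blocks.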
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
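As a rough illustration of the optimization step (not the authors' exact subgradient projection solver), the sketch below minimizes an l1-regularized least-squares objective with a plain subgradient method and a diminishing step size. All parameter choices are assumptions.

```python
import numpy as np

def l1_ls_subgradient(A, b, lam, iters=2000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by subgradient descent
    with diminishing steps 1/(L*sqrt(k))."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    best_x, best_f = x.copy(), np.inf
    for k in range(1, iters + 1):
        g = A.T @ (A @ x - b) + lam * np.sign(x)   # a subgradient of the objective
        x = x - g / (L * np.sqrt(k))
        f = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
        if f < best_f:                             # subgradient methods are not
            best_x, best_f = x.copy(), f           # monotone; keep the best iterate
    return best_x, best_f

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]                      # sparse ground truth
b = A @ x_true
x_hat, f_hat = l1_ls_subgradient(A, b, lam=0.1)
```

In the deblurring setting, A would be the (large, structured) blur operator of the eye-tracking-aware LCD model rather than a dense random matrix.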
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
Uccella, S; Cromi, A; Colombo, G F; Bogani, G; Casarin, J; Agosti, M; Ghezzi, F
2015-04-01
Our aim was to investigate the accuracy in predicting intrapartum fetal acidaemia and the interobserver reproducibility of a mathematical algorithm for the interpretation of electronic fetal heart rate (FHR) monitoring throughout labour. Eight physicians (blinded to the clinical outcomes of the deliveries) evaluated four randomly selected intrapartum FHR tracings by common visual interpretation, trying to predict umbilical artery base excess at birth. They were subsequently asked to re-evaluate the tracings using a mathematical algorithm for FHR tracing interpretation. Common visual interpretation allowed a correct estimation of the umbilical artery base excess in 34.4% of cases, with poor interobserver reproducibility (kappa = 0.24). After implementation of the algorithm, the proportion of correct estimates significantly increased to 90.6% (p < 0.001), with excellent inter-clinician agreement (kappa = 0.85). To conclude, incorporation of a standardised algorithm reduces the interobserver variability and allows a better estimation of fetal acidaemia at birth. PMID:25254299
Fast decoding algorithms for coded aperture systems
NASA Astrophysics Data System (ADS)
Byard, Kevin
2014-08-01
Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.
Algorithms for optimal dyadic decision trees
Hush, Don; Porter, Reid
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
Cell list algorithms for nonequilibrium molecular dynamics
NASA Astrophysics Data System (ADS)
Dobson, Matthew; Fox, Ian; Saracino, Alexandra
2016-06-01
We present two modifications of the standard cell list algorithm that handle molecular dynamics simulations with deforming periodic geometry. Such geometry naturally arises in the simulation of homogeneous, linear nonequilibrium flow modeled with periodic boundary conditions, and recent progress has been made developing boundary conditions suitable for general 3D flows of this type. Previous works focused on the planar flows handled by Lees-Edwards or Kraynik-Reinelt boundary conditions, while the new versions of the cell list algorithm presented here are formulated to handle the general 3D deforming simulation geometry. As in the equilibrium case, for short-ranged pairwise interactions, the cell list algorithm reduces the computational complexity of the force computation from O(N^2) to O(N), where N is the total number of particles in the simulation box. We include a comparison of the complexity and efficiency of the two proposed modifications of the standard algorithm.
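For a fixed cubic periodic box (the paper's deforming-geometry variants are beyond this sketch), a minimal cell list looks as follows; each particle is binned into a cell at least one cutoff wide, so only the 27 neighboring cells need to be scanned per particle. The helper names and box parameters are assumptions.

```python
import numpy as np

def cell_list_pairs(pos, box, rc):
    """All particle pairs within cutoff rc in a cubic periodic box of side
    `box`, via a cell list: O(N) for short-ranged interactions."""
    n_cells = max(1, int(box // rc))       # cells at least rc wide
    cell_len = box / n_cells
    cells = {}
    for i, p in enumerate(pos):
        key = tuple((p // cell_len).astype(int) % n_cells)
        cells.setdefault(key, []).append(i)

    pairs = set()
    for (cx, cy, cz), members in cells.items():
        # Scan the 27 neighboring cells, with periodic wrap via modulo.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % n_cells, (cy + dy) % n_cells,
                          (cz + dz) % n_cells)
                    for i in members:
                        for j in cells.get(nb, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)   # minimum image
                                if np.dot(d, d) < rc * rc:
                                    pairs.add((i, j))
    return pairs

rng = np.random.default_rng(1)
box, rc = 10.0, 1.5
pos = rng.uniform(0, box, size=(200, 3))
fast = cell_list_pairs(pos, box, rc)

# Brute-force O(N^2) reference for the same pair set.
brute = set()
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        d = pos[i] - pos[j]
        d -= box * np.round(d / box)
        if np.dot(d, d) < rc * rc:
            brute.add((i, j))
```

Under a deforming box, the cell decomposition itself must track the time-dependent lattice, which is precisely what the paper's two modifications address.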
A new reconstruction algorithm for Radon data
NASA Astrophysics Data System (ADS)
Xu, Y.; Tischenko, O.; Hoeschen, C.
2006-03-01
A new reconstruction algorithm for Radon data is introduced. We call the new algorithm OPED as it is based on Orthogonal Polynomial Expansion on the Disk. OPED is fundamentally different from the filtered back projection (FBP) method. It allows one to use fan beam geometry directly without any additional procedures such as interpolation or rebinning. It reconstructs high degree polynomials exactly and works for smooth functions without the assumption that functions are band-limited. Our initial tests indicate that the algorithm is stable, provides high resolution images, and has a small global error. Working with the geometry specified by the algorithm and a new mask, OPED could also lead to a reconstruction method that works with reduced x-ray dose (see the paper by Tischenko et al. in these proceedings).
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible, cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion-based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use Rayleigh and Maxwell mixture models for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures. PMID:27411128
Reducing the effect of pixel crosstalk in phase only spatial light modulators.
Persson, Martin; Engström, David; Goksör, Mattias
2012-09-24
A method for compensating for pixel crosstalk in liquid crystal based spatial light modulators is presented. By modifying a commonly used hologram generating algorithm to account for pixel crosstalk, the errors in the obtained diffraction spot intensities are significantly reduced. We also introduce a novel method for characterizing the pixel crosstalk in phase-only spatial light modulators, providing input for the hologram generating algorithm. The methods are experimentally evaluated and an improvement of the spot uniformity by more than 100% is demonstrated for an SLM with large pixel crosstalk. PMID:23037382
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature maps approach to clustering is often likened to the k-means or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
An Iterative Soft-Decision Decoding Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao
1996-01-01
This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically optimal (or near-optimal) error performance with a significant reduction in decoding computational complexity. The average number of search iterations is also small, even at low signal-to-noise ratios.
Efficient algorithms for proximity problems
Wee, Y.C.
1989-01-01
Computational geometry is currently a very active area of research in computer science because of its applications to VLSI design, database retrieval, robotics, pattern recognition, etc. The author studies a number of proximity problems which are fundamental in computational geometry. Optimal or improved sequential and parallel algorithms for these problems are presented. Along the way, some relations among the proximity problems are also established. Chapter 2 presents an O(N log^2 N) time divide-and-conquer algorithm for solving the all-pairs geographic nearest neighbors problem (GNN) for a set of N sites in the plane under any L_p metric. Chapter 3 presents an O(N log N) divide-and-conquer algorithm for computing the angle-restricted Voronoi diagram for a set of N sites in the plane. Chapter 4 introduces a new data structure for the dynamic version of GNN. Chapter 5 defines a new formalism called quasi-valid range aggregation. This formalism leads to a new and simple method for reducing non-range-query-like problems to range queries, and often to orthogonal range queries, with immediate applications to the attracted neighbor and the planar all-pairs nearest neighbors problems. Chapter 6 introduces a new approach for the construction of the Voronoi diagram. Using this approach, we design an O(log N) time, O(N) processor algorithm for constructing the Voronoi diagram with L_1 and L_infinity metrics on a CREW PRAM machine. Even though the GNN and the Delaunay triangulation (DT) do not have an inclusion relation, we show, using some range-type queries, how to efficiently construct DT from the GNN relations over a constant number of angular ranges.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
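As a sketch of the chaos-tuning idea (with the logistic map standing in for the twelve maps studied in the paper), the following minimizes a toy sphere function with a firefly algorithm whose attractiveness coefficient is driven by a chaotic map instead of a fixed constant. All parameter values are illustrative assumptions.

```python
import math
import random

def chaotic_firefly(f, dim, n=15, iters=100, bounds=(-5.0, 5.0), seed=2):
    """Minimize f with a firefly algorithm whose attractiveness
    coefficient beta0 follows the logistic chaotic map."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    F = [f(x) for x in X]
    gamma, alpha = 0.1, 0.2                   # absorption and randomization
    c = 0.7                                   # chaotic state in (0, 1)
    for _ in range(iters):
        c = 4.0 * c * (1.0 - c)               # logistic map: x_{k+1} = 4 x_k (1 - x_k)
        beta0 = c                             # chaos-tuned attractiveness
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:               # move firefly i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo,
                            a + beta * (b - a) + alpha * (rng.random() - 0.5)))
                            for a, b in zip(X[i], X[j])]
                    F[i] = f(X[i])
    return min(zip(F, X))

best_f, best_x = chaotic_firefly(lambda x: sum(v * v for v in x), dim=2)
```

Because the brightest firefly never moves, the best objective value found is non-increasing; the chaotic beta0 perturbs the attraction strength over iterations, which is the mechanism the paper exploits for global search mobility.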
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, for an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single-function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
A design study on complexity reduced multipath mitigation
NASA Astrophysics Data System (ADS)
Wasenmüller, U.; Brack, T.; Groh, I.; Staudinger, E.; Sand, S.; Wehn, N.
2012-09-01
Global navigation satellite systems, e.g. the current GPS and the future European Galileo system, are frequently used in car navigation systems or smart phones to determine the position of a user. The calculation of the mobile position is based on the signal propagation times between the satellites and the mobile terminal. At least four time-of-arrival (TOA) measurements from four different satellites are required to resolve the position uniquely. Further, the satellites need to be in line-of-sight of the receiver for exact position calculation. However, in an urban area, the direct path may be blocked, and the resulting multipath propagation causes errors on the order of tens of meters for each measurement, and in the case of non-line-of-sight (NLOS), positive errors on the order of hundreds of meters. In this paper an advanced algorithm for multipath mitigation known as CRMM is presented. CRMM features reduced algorithmic complexity and superior performance in comparison with other state-of-the-art multipath mitigation algorithms. Simulation results demonstrate the significant improvements in position calculation in environments with severe multipath propagation. Nevertheless, in comparison with traditional algorithms, an increased effort is required for real-time signal processing due to the large amount of data that has to be processed in parallel. Based on CRMM, we performed a comprehensive design study, including a design space exploration for the tracking unit hardware part and a prototype implementation for hardware complexity estimation.
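A hedged sketch of the underlying position fix that multipath corrupts (this is the standard least-squares TOA solution, not the CRMM algorithm): Gauss-Newton iteration recovering position and receiver clock bias from four pseudoranges. The satellite coordinates and receiver position below are hypothetical.

```python
import numpy as np

def toa_position(sats, ranges, iters=10):
    """Least-squares position fix from >= 4 time-of-arrival ranges.

    Solves for (x, y, z) and clock bias b from pseudoranges
    rho_i = ||p - s_i|| + b via Gauss-Newton iteration.
    """
    est = np.zeros(4)                      # [x, y, z, clock bias]
    for _ in range(iters):
        p, b = est[:3], est[3]
        d = np.linalg.norm(sats - p, axis=1)
        resid = ranges - (d + b)           # measured minus predicted
        # Jacobian rows: d(rho_i)/dp = -(s_i - p)/||s_i - p||, d(rho_i)/db = 1
        J = np.hstack([-(sats - p) / d[:, None], np.ones((len(sats), 1))])
        est = est + np.linalg.lstsq(J, resid, rcond=None)[0]
    return est

# Hypothetical satellite positions (km) and receiver state.
sats = np.array([[15600.0, 7540.0, 20140.0],
                 [18760.0, 2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0, 610.0, 18390.0]])
p_true = np.array([100.0, 200.0, 300.0])
bias_true = 5.0
ranges = np.linalg.norm(sats - p_true, axis=1) + bias_true
est = toa_position(sats, ranges)
```

A NLOS reflection adds a positive offset to one of the `ranges` entries, and this least-squares step then spreads that error into the position estimate, which is what multipath mitigation algorithms try to prevent.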
A novel algorithm for non-bonded-list updating in molecular simulations.
Maximova, Tatiana; Keasar, Chen
2006-06-01
Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, a wide space for optimization is still left. This article offers a heuristic extension to the previously suggested grid-based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available at
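One standard way to exploit slow particle movement, in the spirit of the heuristic described here (though not necessarily the authors' exact method), is a padded-cutoff "skin" list that is rebuilt only when a displacement criterion is violated. Class and parameter names below are assumptions.

```python
import numpy as np

class SkinNeighborList:
    """Non-bonded pair list with a 'skin' margin: the list is built with
    cutoff rc + skin and rebuilt only when some particle has moved more
    than skin/2 since the last build, so slow dynamics amortizes the
    rebuild cost over many steps.
    """
    def __init__(self, rc, skin):
        self.rc, self.skin = rc, skin
        self.ref = None                  # positions at the last rebuild
        self.pairs = []
        self.rebuilds = 0

    def _build(self, pos):
        n, r = len(pos), self.rc + self.skin
        self.pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                      if np.sum((pos[i] - pos[j]) ** 2) < r * r]
        self.ref, self.rebuilds = pos.copy(), self.rebuilds + 1

    def get_pairs(self, pos):
        if self.ref is None or \
           np.max(np.linalg.norm(pos - self.ref, axis=1)) > 0.5 * self.skin:
            self._build(pos)             # safety criterion violated: rebuild
        # Filter the padded list down to the true cutoff; the list is
        # guaranteed complete while displacements stay within skin/2.
        return [(i, j) for i, j in self.pairs
                if np.sum((pos[i] - pos[j]) ** 2) < self.rc ** 2]

rng = np.random.default_rng(3)
pos = rng.uniform(0, 5, size=(50, 3))
nl = SkinNeighborList(rc=1.0, skin=0.3)
for _ in range(20):
    pos = pos + rng.normal(scale=0.01, size=pos.shape)   # slow dynamics
    pairs = nl.get_pairs(pos)

# Brute-force reference at the final positions.
brute = [(i, j) for i in range(50) for j in range(i + 1, 50)
         if np.sum((pos[i] - pos[j]) ** 2) < 1.0]
```

Production codes combine this with the grid binning mentioned in the abstract so that each rebuild is itself O(N).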
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.
1976-01-01
The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
Improved delay-leaping simulation algorithm for biochemical reaction systems with delays
NASA Astrophysics Data System (ADS)
Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei
2012-04-01
In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation speed. Numerical simulation on two examples indicates that this method not only achieves significantly higher efficiency than existing methods, but can also be widely applied in biochemical reaction systems with delays.
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, l0 PAPA, etc., which makes it very appealing for real-time implementation.
Omura, Yoshiaki; Jones, Marilyn; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu
2013-01-01
Sterilizing the pre-cancer skin of malignant melanoma (M.M.) with 70% isopropyl alcohol intensified malignancy & the malignant response extended to surrounding normal-looking skin, while sterilizing with 80% (vodka) or 12% (plum wine) ethyl alcohol completely inhibited M.M. in the area (both effects lasted for about 90 minutes initially). Burnt food (bread, vegetables, meat, and fish), a variety of smoked & non-smoked fish skin, many animals' skins, pepper, Vitamin C over 75 mg, mango, pineapple, coconut, almond, sugars, saccharine & aspartame, garlic, onion, etc. & the electromagnetic field from cellular phones worsened M.M. & induced an abnormal M.M. response in the surrounding skin. We found the following factors inhibit the early stage of M.M. significantly: 1) Increasing normal cell telomere by taking 500 mg Haritaki often reached between 400-1150 ng & gradually diminished, but the M.M. response was completely inhibited until normal cell telomeres were reduced to 150 ng, which takes 6-8 hours. More than 70 mg Vitamin C, orange juice, & other substances containing high Vitamin C shouldn't be taken because they completely inhibit the effects of Haritaki. 2) We found Chrysotile asbestos & Tremolite asbestos (a % of the Chrysotile amount) coexist. A special Cilantro tablet was used to remove asbestos & some toxic metals. 3) Vitamin D3 400 I.U. has a maximum inhibiting effect on M.M., but 800 I.U. or higher promotes malignancy. 4) Nori, containing iodine, etc., was used. 5) EPA 180 mg with DHA 120 mg was most effectively used after metastasis to the surrounding skin was eliminated. When we combined 1 Cilantro tablet & Vitamin D3 400 I.U. with small Nori pieces & EPA with DHA, the effect of complete inhibition of M.M. lasted 9-11 hours. When these anti-M.M. substances (Haritaki, Vitamin D3, Cilantro, Nori, EPA with DHA) were taken together, the effect lasted 12-14 hours and M.M. involvement in surrounding normal-looking skin disappeared rapidly & original dark brown or black areas
Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank
2008-11-15
Colorectal cancer patients are treated at our hospital with {sup 192}Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm, based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates to healthy tissue and the tumor.
Spatial averaging algorithms for ultrasonic inspection of austenitic stainless steel welds
Horn, J. E.; Cooper, C.S.; Michaels, T.E.
1980-04-07
Interpretation of ultrasonic inspection data from stainless steel welds is difficult because the signal-to-noise ratio is very low. The three main reasons for this are the granular structure of the weld, the high attenuation of stainless steel, and electronic noise. Averaging in time at the same position in space reduces electronic noise, but does not reduce ultrasonic noise from grain boundary scattering. Averaging waveforms from different spatial positions helps reduce grain noise, but desired signals can destructively interfere if they shift in time. If the defect geometry is known, the ultrasonic waveforms can be shifted before averaging, ensuring signal reinforcement. The simplest geometry results in a linear time shift. An averaging algorithm has been developed which finds the optimum shift. This algorithm computes the averaged, or composite, waveform as a function of the time shift. The optimum occurs when signals from a reflector become aligned in time, producing a large-amplitude composite waveform. This algorithm works very well, but requires significant computer time and storage. This paper discusses this linear shift averaging algorithm, and considers an implementation using frequency domain techniques. Also, data from several weld defects are presented and analyzed.
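The linear-shift averaging idea can be sketched as a search over shift slopes, keeping the slope whose composite waveform has the largest peak. This is a simplified time-domain version (the paper also considers a frequency-domain implementation), and all names and numerical values are assumptions.

```python
import numpy as np

def best_linear_shift_average(waveforms, max_shift):
    """Spatially average waveforms after a linear time shift.

    Trace k is advanced by round(k * s) samples; the optimum slope s is
    the one whose composite (averaged) waveform has the largest peak,
    i.e. the shift that brings the defect echoes into alignment.
    """
    n_tr, n_s = waveforms.shape
    best = (None, -np.inf, None)
    for s in np.linspace(-max_shift, max_shift, 81):
        comp = np.zeros(n_s)
        for k in range(n_tr):
            comp += np.roll(waveforms[k], -int(round(k * s)))
        comp /= n_tr
        peak = np.max(np.abs(comp))
        if peak > best[1]:
            best = (s, peak, comp)
    return best

# Synthetic test: an echo arriving 3 samples later per trace, plus noise.
rng = np.random.default_rng(4)
n_tr, n_s = 16, 256
waves = rng.normal(scale=0.3, size=(n_tr, n_s))
pulse = np.exp(-0.5 * ((np.arange(n_s) - 60) / 3.0) ** 2)
for k in range(n_tr):
    waves[k] += np.roll(pulse, 3 * k)

s_opt, peak, comp = best_linear_shift_average(waves, max_shift=5)
```

At the optimum slope the echoes add coherently while the grain and electronic noise averages down, which is exactly the mechanism the paper exploits.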
Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.
2015-08-10
Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches that requires problem decompositions with less stringent data-passing constraints. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
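The SVM Game itself is not specified in this abstract; as a minimal illustration of the partition/recombine pattern described, the sketch below trains independent linear least-squares classifiers on data partitions (the map step) and averages their weights (the reduce step). All names and data are assumptions.

```python
import numpy as np

def map_train(chunk):
    """Map step: fit a local linear classifier (least squares on labels
    +/-1) on one data partition, independently of the others."""
    X, y = chunk
    Xb = np.hstack([X, np.ones((len(X), 1))])        # bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def reduce_combine(weights):
    """Reduce step: recombine the per-partition models by averaging."""
    return np.mean(weights, axis=0)

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)       # linearly separable labels

# Partition the problem so subsets can be computed independently.
chunks = [(X[i::4], y[i::4]) for i in range(4)]
w = reduce_combine([map_train(c) for c in chunks])

Xb = np.hstack([X, np.ones((n, 1))])
acc = np.mean(np.sign(Xb @ w) == y)
```

A game-theoretic classifier such as the authors' SVM Game would replace the naive weight averaging with iterated local interactions between the partitions.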
Comparing Coordinated Garbage Collection Algorithms for Arrays of Solid-state Drives
Lee, Junghee; Kim, Youngjae; Oral, H Sarp; Shipman, Galen M; Dillow, David A; Wang, Feiyi
2012-01-01
Solid-State Drives (SSDs) offer significant performance improvements over hard disk drives (HDD) on a number of workloads. The frequency of garbage collection (GC) activity is directly correlated with the pattern, frequency, and volume of write requests, and scheduling of GC is controlled by logic internal to the SSD. SSDs can exhibit significant performance degradations when garbage collection (GC) conflicts with an ongoing I/O request stream. When using SSDs in a RAID array, the lack of coordination of the local GC processes amplifies these performance degradations. No RAID controller or SSD available today has the technology to overcome this limitation. In our previous work, we presented a Global Garbage Collection (GGC) mechanism to improve response times and reduce performance variability for a RAID array of SSDs. A coordination method is employed so that GCs in the array can run at the same time. This coordination yields substantial performance improvements. In this paper, we explore various GC coordination algorithms. We develop reactive and proactive GC coordination algorithms and evaluate their I/O performance and block erase counts for various workloads. We show that a proactive GC coordination algorithm can improve the I/O response times by up to 9% further and increase the lifetime of SSDs by reducing the number of block erase counts by up to 79% compared to a reactive algorithm.
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical, efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
Improved bat algorithm applied to multilevel image thresholding.
Alihodzic, Adis; Tuba, Milan
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
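For orientation, a minimal sketch of the standard bat algorithm (the baseline the authors improve upon, not their modified variant) on a simple benchmark objective; the parameter values, fixed loudness, and fixed pulse rate here are illustrative assumptions:

```python
import random

def sphere(x):
    # Benchmark objective: global minimum 0 at the origin.
    return sum(v * v for v in x)

def bat_algorithm(f, dim=2, n_bats=15, iters=200, seed=1):
    rng = random.Random(seed)
    fmin_, fmax_ = 0.0, 2.0   # frequency range (assumed values)
    loud, rate = 0.9, 0.5     # loudness and pulse emission rate (held fixed here)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    vs = [[0.0] * dim for _ in range(n_bats)]
    best = min(xs, key=f)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin_ + (fmax_ - fmin_) * rng.random()
            # Velocity and position update driven by the global best.
            vs[i] = [v + (x - b) * freq for v, x, b in zip(vs[i], xs[i], best)]
            cand = [x + v for x, v in zip(xs[i], vs[i])]
            if rng.random() > rate:
                # Local random walk around the current best solution.
                cand = [b + 0.1 * rng.gauss(0, 1) for b in best]
            if rng.random() < loud and f(cand) < f(xs[i]):
                xs[i] = cand
            if f(xs[i]) < f(best):
                best = xs[i][:]
    return best

best = bat_algorithm(sphere)
print(sphere(best))  # small value near 0
```

The improved variant in the paper additionally borrows mutation ideas from differential evolution and the artificial bee colony algorithm; those modifications are not reproduced here.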
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
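The abstract does not reproduce the algorithm itself; one classical square-root-only iteration that fits the description (whether it is the exact algorithm demonstrated in the article is an assumption) uses the fixed point of the exponent map a -> (1 + a)/4:

```python
import math

def cube_root(n, iterations=30):
    # Fixed-point iteration using only multiplication and square roots:
    # if y = n**a, then sqrt(sqrt(n * y)) = n**((1 + a) / 4), and the
    # exponent map a -> (1 + a) / 4 converges to its fixed point 1/3.
    y = 1.0
    for _ in range(iterations):
        y = math.sqrt(math.sqrt(n * y))
    return y

print(cube_root(27.0))  # ≈ 3.0
```

The exponent error shrinks by a factor of 4 per iteration, so a handful of iterations already gives a good approximation, consistent with the abstract's claim.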
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
Omelyan, I P; Mryglod, I M; Folk, R
2002-08-01
A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and its gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing the order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better approximately in factors 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational costs, allow to reduce unphysical deviations in the total energy up in 100 000 times with respect to those of the standard fourth-order-based iteration approach. PMID:12241312
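As background for the composition schemes discussed above, the classic second-order velocity Verlet integrator is the simplest symplectic splitting of the exponential propagator (a kick-drift-kick composition); this sketch is illustrative only and does not reproduce the authors' higher-order force-gradient algorithms:

```python
def velocity_verlet(force, x, v, dt, steps):
    # Second-order symplectic splitting: kick-drift-kick composition
    # of the potential (kick) and kinetic (drift) propagators.
    a = force(x)
    for _ in range(steps):
        v += 0.5 * dt * a   # half kick
        x += dt * v         # full drift
        a = force(x)
        v += 0.5 * dt * a   # half kick
    return x, v

# Harmonic oscillator: force(x) = -x, exact energy E = (x^2 + v^2) / 2.
x, v = 1.0, 0.0
e0 = 0.5 * (x * x + v * v)
x, v = velocity_verlet(lambda q: -q, x, v, dt=0.05, steps=10000)
e1 = 0.5 * (x * x + v * v)
print(abs(e1 - e0))  # bounded energy error of order dt^2
```

Because the scheme is symplectic, the energy error stays bounded over long runs rather than drifting; the force-gradient algorithms in the paper push this to much higher orders in the time step.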
Coding scheme for wireless video transport with reduced frame skipping
NASA Astrophysics Data System (ADS)
Aramvith, Supavadee; Sun, Ming-Ting
2000-05-01
We investigate the scenario of using the Automatic Repeat reQuest (ARQ) retransmission scheme for two-way low bit-rate video communications over wireless Rayleigh fading channels. We show that during the retransmission of error packets, due to the reduced channel throughput, the video encoder buffer may fill up quickly and cause the TMN8 rate-control algorithm to significantly reduce the bits allocated to each video frame. This results in Peak Signal-to-Noise Ratio (PSNR) degradation and many skipped frames. To reduce the number of frames skipped, in this paper we propose a coding scheme which takes into consideration the effects of video buffer fill-up, an a priori channel model, the channel feedback information, and hybrid ARQ/FEC. The simulation results indicate that our proposed scheme encodes the video sequences with far fewer skipped frames and with higher PSNR compared to H.263 TMN8.
NASA Astrophysics Data System (ADS)
Cartes, David A.; Ray, Laura R.; Collier, Robert D.
2002-04-01
An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. [``Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection,'' Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
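A minimal sketch of the fixed-leakage NLMS baseline that the study compares against (the Lyapunov-tuned variable-leakage algorithms are not reproduced here; the step size, leakage factor, and filter length are assumed values), applied to a standard system-identification task:

```python
import random

def leaky_nlms(x, d, taps=4, mu=0.5, leak=1e-4, eps=1e-8):
    # Fixed-leakage normalized LMS update:
    #   w <- (1 - mu*leak) * w + mu * e * x / (||x||^2 + eps)
    # The leakage term bounds weight growth under poor excitation,
    # at the cost of a small steady-state bias.
    w = [0.0] * taps
    buf = [0.0] * taps
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]  # most-recent-first tap buffer
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = dn - y
        norm = sum(b * b for b in buf) + eps
        w = [(1.0 - mu * leak) * wi + mu * e * bi / norm
             for wi, bi in zip(w, buf)]
    return w

rng = random.Random(0)
h = [0.5, -0.3, 0.2, 0.1]   # unknown FIR system to identify
x = [rng.gauss(0, 1) for _ in range(5000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = leaky_nlms(x, d)
print(w)  # close to h, with a small bias from the leakage term
```

With white-noise excitation the weights converge to the unknown system; the tradeoff the paper studies is how the leakage term degrades this convergence in exchange for stability under insufficient excitation.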
A fast implementation of the incremental backprojection algorithms for parallel beam geometries
Chen, C.M.; Wang, C.Y.; Cho, Z.H.
1996-12-01
Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N^2) multiplications in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the Incremental algorithm. To optimize the implementation of the Incremental algorithm, an efficient scheme, namely, a coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as the a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45--2.0 times faster than the original searching flow scheme for most cases tested.
Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen
2009-01-15
A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Evaluation of clinical image processing algorithms used in digital mammography.
Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde
2009-03-01
Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processings have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the
A cross-layer optimization algorithm for wireless sensor network
NASA Astrophysics Data System (ADS)
Wang, Yan; Liu, Le Qing
2010-07-01
Energy is critical for typical wireless sensor networks (WSN), and how to reduce energy consumption and maximize network lifetime are major challenges; cross-layer algorithms are the main method to solve this problem. In this paper, we first analyze current layer-based optimization methods in wireless sensor networks and summarize the physical, link, and routing optimization techniques. Second, we compare strategies used in cross-layer optimization algorithms. Based on this analysis and summary of current lifetime algorithms in wireless sensor networks, a cross-layer optimization algorithm is proposed. This optimization algorithm is then adopted to improve the traditional LEACH routing protocol. Simulation results show that this algorithm is an effective cross-layer algorithm for reducing energy consumption.
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
IJA: an efficient algorithm for query processing in sensor networks.
Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa
2011-01-01
One of the main features of sensor networks is the function that processes real-time state information after gathering needed data from many domains. The component technologies of each sensor node, including physical sensors, processors, actuators, and power, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have a very limited ability to process information compared to conventional computer systems. Thus query processing over the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. For example, while simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, and to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. The simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375
An efficient Earth Mover's Distance algorithm for robust histogram comparison.
Ling, Haibin; Okada, Kazunori
2007-05-01
We propose EMD-L1: a fast and exact algorithm for computing the Earth Mover's Distance (EMD) between a pair of histograms. The efficiency of the new algorithm enables its application to problems that were previously prohibitive due to high time complexities. The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD. Exploiting the L1 metric structure, the number of unknown variables in EMD-L1 is reduced to O(N) from O(N^2) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function of the linear program is simplified. Formally, without any approximation, we prove that the EMD-L1 formulation is equivalent to the original EMD with an L1 ground distance. To perform the EMD-L1 computation, we propose an efficient tree-based algorithm, Tree-EMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex algorithm-based solver forms a spanning tree when we interpret EMD-L1 as a network flow optimization problem. We empirically show that this new algorithm has an average time complexity of O(N^2), which significantly improves the best reported supercubic complexity of the original EMD. The accuracy of the proposed methods is evaluated by experiments for two computation-intensive problems: shape recognition and interest point matching using multidimensional histogram-based local features. For shape recognition, EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape data set, as well as an articulated shape data set. For interest point matching, SIFT, shape context and spin image are tested on both synthetic and real image pairs with large geometrical deformation, illumination change, and heavy intensity noise. The results demonstrate that our EMD-L1-based solutions outperform previously reported state-of-the-art features and distance measures in solving the two tasks. PMID:17356203
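To make the L1-ground-distance structure concrete: in the special one-dimensional case (a simplified illustration, not the authors' EMD-L1 algorithm for multidimensional histograms), EMD between equal-mass histograms with |i - j| ground distance reduces to the L1 distance between cumulative histograms:

```python
from itertools import accumulate

def emd_1d(h1, h2):
    # For 1-D histograms of equal total mass, EMD with ground distance
    # |i - j| equals the L1 distance between the cumulative sums:
    # the mass crossing each bin boundary is exactly the CDF difference.
    assert abs(sum(h1) - sum(h2)) < 1e-12, "histograms must have equal mass"
    return sum(abs(a - b) for a, b in zip(accumulate(h1), accumulate(h2)))

print(emd_1d([1, 0, 0], [0, 0, 1]))  # 2: one unit of mass moved two bins
```

This closed form is O(N), which hints at why exploiting L1 structure (as EMD-L1 does in higher dimensions) avoids solving the general transportation linear program.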
Incremental refinement of a multi-user-detection algorithm (II)
NASA Astrophysics Data System (ADS)
Vollmer, M.; Götze, J.
2003-05-01
Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of the images being captured increases, there is a need for a robust algorithm for image compression which satiates the bandwidth limitation of the transmitted channels and preserves the image resolution without considerable loss in the image quality. Many conventional image compression algorithms use wavelet transform which can significantly reduce the number of bits needed to represent a pixel and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The evolved filter coefficients by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
Hybrid Evolutionary-Heuristic Algorithm for Capacitor Banks Allocation
NASA Astrophysics Data System (ADS)
Barukčić, Marinko; Nikolovski, Srete; Jović, Franjo
2010-11-01
The issue of optimal allocation of capacitor banks with respect to power loss minimization in distribution networks is considered in this paper. This optimization problem has recently been tackled by application of contemporary soft computing methods such as genetic algorithms, neural networks, fuzzy logic, simulated annealing, ant colony methods, and hybrid methods. An evolutionary-heuristic method is proposed for optimal capacitor allocation in radial distribution networks. An evolutionary stage based on a genetic algorithm is developed; the proposed method has a reduced number of parameters compared to the usual genetic algorithm. A heuristic stage is used for improving the optimal solution given by the evolutionary stage, and a new cost-voltage node index is used in the heuristic stage in order to improve the quality of the solution. The efficiency of the proposed two-stage method has been tested on different test networks. The quality of the solution has been verified by comparison tests with other methods on the same test networks. The proposed method has given significantly better solutions for time-dependent load in the 69-bus network than found in the references.
Faster unfolding of communities: speeding up the Louvain algorithm.
Traag, V A
2015-09-01
Many complex networks exhibit a modular structure of densely connected groups of nodes. Usually, such a modular structure is uncovered by the optimization of some quality function. Although flawed, modularity remains one of the most popular quality functions. The Louvain algorithm was originally developed for optimizing modularity, but has been applied to a variety of methods. As such, speeding up the Louvain algorithm enables the analysis of larger graphs in a shorter time for various methods. We here suggest to consider moving nodes to a random neighbor community, instead of the best neighbor community. Although incredibly simple, it reduces the theoretical runtime complexity from O(m) to O(nlog〈k〉) in networks with a clear community structure. In benchmark networks, it speeds up the algorithm roughly 2-3 times, while in some real networks it even reaches 10 times faster runtimes. This improvement is due to two factors: (1) a random neighbor is likely to be in a "good" community and (2) random neighbors are likely to be hubs, helping the convergence. Finally, the performance gain only slightly diminishes the quality, especially for modularity, thus providing a good quality-performance ratio. However, these gains are less pronounced, or even disappear, for some other measures such as significance or surprise. PMID:26465522
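The core proposal, moving a node to the community of a random neighbor instead of scanning all neighbor communities, can be sketched as follows. This is a simplified illustration: the acceptance rule below counts only internal edges and omits the null-model term of true modularity, so it is not the paper's exact gain computation:

```python
import random

def random_neighbor_move(adj, community, node, rng):
    # Louvain local-move variant: instead of scanning every neighbor
    # community for the best gain (O(k) candidate communities), propose
    # the community of one uniformly random neighbor and accept it only
    # if the (simplified) gain is positive.
    neighbor = rng.choice(adj[node])
    target = community[neighbor]
    if target == community[node]:
        return False
    # Simplified gain: edges into the target community minus edges into
    # the current community (modularity's null-model term omitted).
    links_to = sum(1 for v in adj[node] if community[v] == target)
    links_from = sum(1 for v in adj[node]
                     if community[v] == community[node] and v != node)
    if links_to > links_from:
        community[node] = target
        return True
    return False

# Two triangles joined by one bridge edge; start from singleton communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
community = {v: v for v in adj}
rng = random.Random(42)
for _ in range(200):
    for v in adj:
        random_neighbor_move(adj, community, v, rng)
print(community)
```

Each accepted move strictly increases the number of internal edges, so the sweep terminates; each proposal costs only one random draw plus a scan of the node's own neighbors, which is the source of the runtime reduction discussed above.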
Cosmic web reconstruction through density ridges: method and algorithm
NASA Astrophysics Data System (ADS)
Chen, Yen-Chi; Ho, Shirley; Freeman, Peter E.; Genovese, Christopher R.; Wasserman, Larry
2015-11-01
The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the subspace constrained mean shift (SCMS) algorithm (Ozertem & Erdogmus 2011; Genovese et al. 2014) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS first to the data set generated from the Voronoi model. The density ridges show strong agreement with the filaments from the Voronoi method. We then apply the SCMS method to data sets sampled from a P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data from the Baryon Oscillation Spectroscopic Survey (BOSS). To further assess the efficacy of SCMS, we compare the relative locations of BOSS filaments with galaxy clusters in the redMaPPer catalogue, and find that redMaPPer clusters are significantly closer (with p-values <10-9) to SCMS-detected filaments than to randomly selected galaxies.
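SCMS constrains the mean shift iteration to the subspace orthogonal to the ridge; the underlying gradient-ascent-to-a-density-mode idea is easiest to see in plain one-dimensional mean shift (a simplified illustration with an assumed bandwidth, not the SCMS algorithm itself):

```python
import math

def mean_shift_mode(points, start, bandwidth=1.0, iters=50):
    # Gaussian-kernel mean shift: repeatedly move x to the kernel-weighted
    # average of the data, ascending the kernel density estimate toward
    # a local mode.
    x = start
    for _ in range(iters):
        weights = [math.exp(-((x - p) / bandwidth) ** 2 / 2) for p in points]
        x = sum(w * p for w, p in zip(weights, points)) / sum(weights)
    return x

points = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # two clusters of points
print(mean_shift_mode(points, start=4.0))  # converges near 5.0
```

SCMS replaces the full shift by its projection onto the local Hessian eigendirections, so trajectories settle on one-dimensional ridges rather than isolated modes.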
Lunar Crescent Detection Based on Image Processing Algorithms
NASA Astrophysics Data System (ADS)
Fakhar, Mostafa; Moalem, Peyman; Badri, Mohamad Ali
2014-11-01
For many years, lunar crescent visibility has been studied by astronomers. Different criteria have been used to predict and evaluate the visibility status of new Moon crescents. Powerful equipment such as telescopes and binoculars has changed the capability of observations. Most conventional statistical criteria made wrong predictions when new observations (based on modern equipment) were reported. In order to verify such reports and modify the criteria, not only should previous statistical parameters be considered, but also new and effective parameters such as high magnification, the contour effect, low signal-to-noise ratio, eyestrain, and weather conditions. In this paper a new method is presented for lunar crescent detection based on the processing of lunar crescent images. The method includes two main steps: first, an image processing algorithm that improves the signal-to-noise ratio and detects lunar crescents based on the circular Hough transform (CHT); second, an algorithm based on image histogram processing that detects the crescent visually. The final decision is made by comparing the results of the visual and CHT algorithms. In order to evaluate the proposed method, a database including 31 images was tested. The illustrated method can distinguish and extract crescents that even the eye cannot recognize. The proposed method significantly reduces artifacts, increases SNR, and can be used easily both by astronomers and by those who want to develop a new criterion as a reliable method to verify empirical observations.
Evaluation of hybrids algorithms for mass detection in digitalized mammograms
NASA Astrophysics Data System (ADS)
Cordero, José; Garzón Reyes, Johnson
2011-01-01
Breast cancer remains a significant public health problem; early detection of lesions can increase the success of medical treatments. Mammography is an effective imaging modality for early diagnosis of abnormalities, in which a medical image of the mammary gland is obtained with low-dose X-rays. It allows detection of a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in breast cancer mortality. In this paper, three hybrid algorithms for circumscribed mass detection in digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Afterwards, shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, where the characteristics vector for the classifier was constructed from a few measurements. Later, the implemented algorithms were evaluated by ROC curves, with 40 images taken for the test: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in the correct detection of a lesion are discussed.
Dynamic Congestion Control using MDB-Routing Algorithm
NASA Astrophysics Data System (ADS)
Anuradha, S.; Raghu Ram, G.
2014-01-01
This paper presents a high-throughput routing algorithm. The Modified Depth-Breadth (MDB) routing algorithm decides which node a packet should visit next on the way to its final destination. Load balancing improves the performance of the distributed system by using the processing power of the entire system to smooth out periods of very high congestion at individual nodes, transferring some of the load of heavily loaded nodes to other nodes for processing. The proposed MDB algorithm achieves an average time per packet of 306.53, compared with 316.13 for DB routing. Results also show that MDB achieves an average time of 348 for 3500 packets on a 5 × 5 grid, compared to 548 for DB routing, and that the number of dead packets is significantly reduced in the case of MDB. The approach centers on the routing network and its routing tables, which contain the information used by the routing algorithm to decide which node a packet should visit next on the way to its final destination.
Improving CMD Areal Density Analysis: Algorithms and Strategies
NASA Astrophysics Data System (ADS)
Wilson, R. E.
2014-06-01
Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
Faster unfolding of communities: Speeding up the Louvain algorithm
NASA Astrophysics Data System (ADS)
Traag, V. A.
2015-09-01
Many complex networks exhibit a modular structure of densely connected groups of nodes. Usually, such a modular structure is uncovered by the optimization of some quality function. Although flawed, modularity remains one of the most popular quality functions. The Louvain algorithm was originally developed for optimizing modularity, but has since been applied to a variety of methods. As such, speeding up the Louvain algorithm enables the analysis of larger graphs in a shorter time for various methods. We suggest moving nodes to a random neighbor community instead of the best neighbor community. Although incredibly simple, this reduces the theoretical runtime complexity from O(m) to O(n log〈k〉) in networks with a clear community structure.
Decomposition of Large Scale Semantic Graphs via an Efficient Communities Algorithm
Yao, Y
2008-02-08
's decomposition algorithm, much more efficiently, leading to significantly reduced computation time. Test runs on a desktop computer have shown reductions of up to 89%. Our focus this year has been on the implementation of parallel graph clustering on one of LLNL's supercomputers. In order to achieve efficiency in parallel computing, we have exploited the fact that large semantic graphs tend to be sparse, comprising loosely connected dense node clusters. When implemented on distributed memory computers, our approach performed well on several large graphs with up to one billion nodes, as shown in Table 2. The rightmost column of Table 2 contains the associated Newman's modularity [1], a metric that is widely used to assess the quality of community structure. Existing algorithms produce results that merely approximate the optimal solution, i.e., maximum modularity. We have developed a verification tool for decomposition algorithms, based upon a novel integer linear programming (ILP) approach, that computes an exact solution. We have used this ILP methodology to find the maximum modularity and corresponding optimal community structure for several well-studied graphs in the literature (e.g., Figure 1) [3]. The above approaches assume that modularity is the best measure of quality for community structure. In an effort to enhance this quality metric, we have also generalized Newman's modularity based upon an insightful random walk interpretation that allows us to vary the scope of the metric. Generalized modularity has enabled us to develop new, more flexible versions of our algorithms. In developing these methodologies, we have made several contributions to both graph theoretic algorithms and software engineering. We have written two research papers for refereed publication [3-4] and are working on another one [5]. In addition, we have presented our research findings at three academic and professional conferences.
CHROMagar Orientation Medium Reduces Urine Culture Workload
Manickam, Kanchana; Karlowsky, James A.; Adam, Heather; Lagacé-Wiens, Philippe R. S.; Rendina, Assunta; Pang, Paulette; Murray, Brenda-Lee
2013-01-01
Microbiology laboratories continually strive to streamline and improve their urine culture algorithms because of the high volumes of urine specimens they receive and the modest numbers of those specimens that are ultimately considered clinically significant. In the current study, we quantitatively measured the impact of the introduction of CHROMagar Orientation (CO) medium into routine use in two hospital laboratories and compared it to conventional culture on blood and MacConkey agars. Based on data extracted from our Laboratory Information System from 2006 to 2011, the use of CO medium resulted in a 28% reduction in workload for additional procedures such as Gram stains, subcultures, identification panels, agglutination tests, and biochemical tests. The average number of workload units (one workload unit equals 1 min of hands-on labor) per urine specimen was significantly reduced (P < 0.0001; 95% confidence interval [CI], 0.5326 to 1.047) from 2.67 in 2006 (preimplementation of CO medium) to 1.88 in 2011 (postimplementation of CO medium). We conclude that the use of CO medium streamlined the urine culture process and increased bench throughput by reducing both workload and turnaround time in our laboratories. PMID:23363839
Global and Local Optimization Algorithms for Optimal Signal Set Design
Kearsley, Anthony J.
2001-01-01
The problem of choosing an optimal signal set for non-Gaussian detection was reduced to a smooth inequality constrained mini-max nonlinear programming problem by Gockenbach and Kearsley. Here we consider the application of several optimization algorithms, both global and local, to this problem. The most promising results are obtained when special-purpose sequential quadratic programming (SQP) algorithms are embedded into stochastic global algorithms.
A Test Scheduling Algorithm Based on Two-Stage GA
NASA Astrophysics Data System (ADS)
Yu, Y.; Peng, X. Y.; Peng, Y.
2006-10-01
In this paper, we present a new algorithm to co-optimize the core wrapper design and the SOC test scheduling. The SOC test scheduling problem is first formulated as a two-dimensional floorplan problem, and a sequence pair architecture is used to represent it. Then we propose a two-stage GA (Genetic Algorithm) to solve the SOC test scheduling problem. Experiments on the ITC'02 benchmark show that our algorithm can effectively reduce test time so as to decrease SOC test cost.
Advanced optimization of permanent magnet wigglers using a genetic algorithm
Hajima, Ryoichi
1995-12-31
In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in proper order. We showed that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The result shows that the genetic algorithm is superior to other algorithms.
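A hedged sketch of such a sorting scheme: a small permutation GA that orders magnet pieces so as to minimize the peak of the cumulative field error (a simple stand-in for the integrated trajectory error in a wiggler). The cost function and GA parameters are illustrative, not those of the paper:

```python
import random

def field_error(order, errors):
    """Cost: peak magnitude of the running sum of per-piece field errors."""
    s, worst = 0.0, 0.0
    for i in order:
        s += errors[i]
        worst = max(worst, abs(s))
    return worst

def sort_magnets(errors, pop_size=40, generations=200, seed=1):
    """Permutation GA: elitist selection, one-point order crossover,
    swap mutation. Returns the best piece ordering found."""
    rng = random.Random(seed)
    n = len(errors)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: field_error(o, errors))
        survivors = pop[: pop_size // 2]          # elitism keeps the best
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # order crossover (OX-style)
            head = a[:cut]
            child = head + [g for g in b if g not in head]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: field_error(o, errors))

# Four pieces with +1 error and four with -1: alternating orders are best.
errs = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]
best = sort_magnets(errs)
```

A good ordering interleaves positive and negative errors so the cumulative deviation never grows large, which is exactly what the GA discovers.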
Algorithm to extract the spanning clusters and calculate conductivity in strip geometries
NASA Astrophysics Data System (ADS)
Babalievski, F.
1995-06-01
I present an improved algorithm to solve the random resistor problem using a transfer-matrix technique. Preconditioning by spanning cluster extraction both reduces the size of the matrix and yields faster execution times when compared to previous algorithms.
Meteorological Data Analysis Using MapReduce
Fang, Wei; Sheng, V. S.; Wen, XueZhi; Pan, Wubin
2014-01-01
In atmospheric science, the scale of meteorological data is massive and growing rapidly. K-means is a fast and widely used clustering algorithm that has been applied in many fields. However, for large-scale meteorological data, the traditional K-means algorithm is not capable enough to satisfy actual application needs efficiently. This paper proposes an improved K-means algorithm (MK-means) based on MapReduce, according to the characteristics of large meteorological datasets. The experimental results show that MK-means has greater computing capability and scalability. PMID:24790576
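The general idea can be sketched as one MapReduce round per k-means iteration (a single-process simulation for illustration; the paper's MK-means runs the phases on an actual MapReduce cluster and adds further optimizations):

```python
from collections import defaultdict

def kmeans_mapreduce(points, centroids, iterations=5):
    """One MapReduce round per k-means iteration: mappers emit
    (nearest-centroid-id, (point, 1)); reducers sum the partial
    (vector-sum, count) pairs and produce the new centroids."""
    for _ in range(iterations):
        # --- map phase: assign each 2D point to its nearest centroid ---
        emitted = []
        for p in points:
            cid = min(range(len(centroids)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[i])))
            emitted.append((cid, (p, 1)))
        # --- shuffle + reduce phase: combine partial sums per centroid ---
        acc = defaultdict(lambda: ([0.0, 0.0], 0))
        for cid, (p, c) in emitted:
            s, n = acc[cid]
            acc[cid] = ([s[0] + p[0], s[1] + p[1]], n + c)
        for cid, (s, n) in acc.items():
            centroids[cid] = (s[0] / n, s[1] / n)
    return centroids

# Two well-separated clusters; centroids converge to the cluster means.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
          (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
cents = kmeans_mapreduce(points, [(0.5, 0.5), (10.5, 10.5)])
```

Because only (sum, count) pairs cross the shuffle boundary, the reduce traffic is proportional to the number of clusters, not the number of points, which is what makes the approach scale to massive datasets.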
Versatility of the CFR algorithm for limited angle reconstruction
Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.
1990-04-01
The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
Neural algorithms on VLSI concurrent architectures
Caviglia, D.D.; Bisio, G.M.; Parodi, G.
1988-09-01
The research concerns the study of neural algorithms for developing CAD tools with A.I. features in VLSI design activities. In this paper the focus is on optimization problems such as partitioning, placement and routing. These problems require massive computational power to be solved (NP-complete problems) and the standard approach is usually based on heuristic techniques. Neural algorithms can be represented by a circuital model. This kind of representation can be easily mapped onto a real circuit, which, however, features limited flexibility with respect to the variety of problems. In this sense the simulation of the neural circuit, by mapping it onto a digital VLSI concurrent architecture, seems to be preferable; in addition this solution offers a wider choice with regard to algorithm characteristics (e.g. transfer curve of neural elements, reconfigurability of interconnections, etc.). The implementation with programmable components, such as transputers, allows an indirect mapping of the algorithm (one transputer for N neurons) according to the dimension and the characteristics of the problem. In this way the neural algorithm described by the circuit is reduced to the algorithm that simulates the network behavior. The convergence properties of that formulation are studied with respect to the characteristics of the neural element transfer curve.
Parallel LU-factorization algorithms for dense matrices
Oppe, T.C.; Kincaid, D.R.
1987-05-01
Several serial and parallel algorithms for computing the LU-factorization of a dense matrix are investigated. Numerical experiments and programming considerations to reduce bank conflicts on the Cray X-MP4 parallel computer are presented. Speedup factors are given for the parallel algorithms. 15 refs., 6 tabs.
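For reference, the serial Doolittle factorization that such parallel algorithms reorganize (a textbook sketch, without the pivoting or memory-bank considerations discussed in the report):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting: returns (L, U) with
    unit lower-triangular L and upper-triangular U such that A = L * U."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i]
                       - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_factor(A)
```

The inner sums over `k` are the dependencies a parallel implementation must schedule around; the row/column updates within one step `i` are independent and can proceed concurrently.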
Image change detection algorithms: a systematic survey.
Radke, Richard J; Andra, Srinivas; Al-Kofahi, Omar; Roysam, Badrinath
2005-03-01
Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer. PMID:15762326
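The simplest decision rule in this family can be sketched as a per-pixel significance test on the image difference, assuming i.i.d. Gaussian sensor noise with known standard deviation (an illustration of the category, not any specific algorithm from the survey):

```python
def change_mask(img_a, img_b, sigma=2.0, k=3.0):
    """Declare change wherever the absolute intensity difference exceeds
    k standard deviations of the assumed sensor noise; images are nested
    lists of gray levels of identical shape."""
    return [[1 if abs(a - b) > k * sigma else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# One pixel changes by 20 gray levels; the 3-sigma test flags only it.
before = [[10, 10, 10], [10, 10, 10]]
after  = [[10, 10, 10], [10, 30, 10]]
mask = change_mask(before, after)
```

Real detectors layer the survey's other components on top of this: predictive or background models replace the raw difference, and mask-consistency steps remove isolated false positives.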
A comprehensive review of swarm optimization algorithms.
Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham
2015-01-01
Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
Optimal configuration algorithm of a satellite transponder
NASA Astrophysics Data System (ADS)
Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.
2016-04-01
This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is shown as a weighted oriented graph that is represented as a plexus in the program view. This paper considers an algorithm example for application with a typical transparent repeater scheme. In addition, the complexity of the current algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment, and input-output ports ranked in accordance with their priority. All described limitations allow a significant decrease in the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.
A Dynamic Navigation Algorithm Considering Network Disruptions
NASA Astrophysics Data System (ADS)
Jiang, J.; Wu, L.
2014-04-01
In a traffic network, link disruptions or recoveries caused by sudden accidents, bad weather, and traffic congestion lead to significant increases or decreases in travel times on some network links. A similar situation occurs in real-time emergency evacuation plans for indoor areas. As the dynamic nature of real-time network information generates better navigation solutions than static information, a real-time dynamic navigation algorithm for emergency evacuation with stochastic disruptions or recoveries in the network is presented in this paper. Compared with traditional existing algorithms, this new algorithm adjusts the pre-existing path to a new optimal one according to the changing link travel times. With real-time network information, it can quickly provide an updated optimal path to adapt to rapidly changing network properties. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves high time efficiency in obtaining exact solutions, and indirect information can be calculated in spare time.
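As a baseline illustration of re-planning under a disruption (the paper's algorithm adjusts the pre-existing path incrementally rather than recomputing from scratch), a plain Dijkstra re-run after a link's travel time changes:

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra; `graph` maps node -> {neighbor: travel_time}.
    Returns (path, total_travel_time)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to src
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

graph = {"A": {"B": 1, "C": 4}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
path1, t1 = shortest_path(graph, "A", "D")   # initially routed via B
graph["B"]["D"] = 10                          # disruption on link B->D
path2, t2 = shortest_path(graph, "A", "D")   # re-planned via C
```

An incremental algorithm would reuse the unaffected portion of the first search instead of restarting, which is where the paper's time savings come from.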
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising human perception error between the real and simulator driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case scenario tuning method. This is based on trial and error, and is affected by driver and programmer experience, making it the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, production of false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues therefore remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching the physical limitations of the simulator.
A survey of DNA motif finding algorithms
Das, Modan K; Dai, Ho-Kwok
2007-01-01
Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of
Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm
NASA Astrophysics Data System (ADS)
Hasal, Martin; Pospisil, Lukas; Nowakova, Jana
2016-06-01
An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to produce aesthetically pleasing, crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. A disadvantage of the Kamada-Kawai embedding algorithm is its computational requirements, caused by searching for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
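A generic sketch of the Barzilai-Borwein iteration for an arbitrary gradient function (the quadratic test function below is illustrative; the paper applies the step to the Kamada-Kawai energy):

```python
def barzilai_borwein_minimize(grad, x0, iters=50, alpha=0.1):
    """BB gradient descent: the step length alpha_k = (s.s)/(s.y), with
    s = x_k - x_{k-1} and y = g_k - g_{k-1}, approximates a Newton step
    using only gradients -- no Hessian, unlike Newton-Raphson."""
    x = list(x0)
    g = grad(x)
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if abs(sy) < 1e-12:              # converged (or degenerate step)
            return x_new
        alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Minimize f(x) = (x0 - 3)^2 + 2*(x1 + 1)^2 from its gradient alone.
grad = lambda x: [2 * (x[0] - 3), 4 * (x[1] + 1)]
xmin = barzilai_borwein_minimize(grad, [0.0, 0.0])
```

The secant pair (s, y) gives a scalar approximation of the curvature, which is why the method can replace the Hessian-based inner loop while keeping fast convergence.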
Pennington, A; Selvaraj, R; Kirkpatrick, S; Oliveira, S; Leventouri, T
2014-06-01
Purpose: The latest publications indicate that the Ray Tracing (RT) algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry and number of beams for a given plan, as well as the number of monitor units, is constant for the calculations for both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the Prescription Isodose Volume to the PTV Volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses for 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the doses delivered, confirming previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller size PTVs, indicating that the MC algorithm improvement versus the RT algorithm depends on the size of the PTV.
Advances in Significance Testing for Cluster Detection
NASA Astrophysics Data System (ADS)
Coleman, Deidra Andrea
Over the past two decades, much attention has been given to data driven project goals such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial as it can lead to the identification of specified sequences of DNA nucleotides that are related to important biological functions or the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free throw shots by National Basketball Association players during the 2009-2010 regular season. In Chapter 4, we give a procedure to detect outbreaks using syndromic surveillance data.
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In the process of raisin production, there are a variety of color impurities that need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristic of the raisin image was found. In order to obtain the chromatic aberration image and reduce disturbance, we performed image subtraction, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors and external features such as mildew and spots, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; accordingly, the image data were divided into different categories, making the categories of abnormal colors distinct. Using this algorithm, raisins of abnormal colors and ones with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
Cheney, M.C.
1997-12-31
The cost of energy for renewables has gained greater significance in recent years due to the drop in price of some competing energy sources, particularly natural gas. In pursuit of lower manufacturing costs for wind turbine systems, work was conducted to explore an innovative rotor designed to reduce weight and cost compared to conventional rotor systems. Trade-off studies were conducted to measure the influence of the number of blades, stiffness, and manufacturing method on the cost of energy (COE). The study showed that increasing the number of blades at constant solidity significantly reduced rotor weight and that manufacturing the blades using pultrusion technology produced the lowest cost per pound. Under contracts with the National Renewable Energy Laboratory and the California Energy Commission, a 400 kW (33 m diameter) turbine was designed employing this technology. The project included tests of an 80 kW (15.5 m diameter) dynamically scaled rotor, which demonstrated the viability of the design.
Reduced-Complexity Reed-Solomon Decoders Based on Cyclotomic FFTs
NASA Astrophysics Data System (ADS)
Chen, Ning; Yan, Zhiyuan
2009-04-01
In this paper, we reduce the computational complexities of partial and dual partial cyclotomic FFTs (CFFTs), which are discrete Fourier transforms where spectral and temporal components are constrained, based on their properties as well as a common subexpression elimination algorithm. Our partial CFFTs achieve smaller computational complexities than previously proposed partial CFFTs. Utilizing our CFFTs in both transform- and time-domain Reed-Solomon decoders, we achieve significant complexity reductions.
Genetic Algorithms for Digital Quantum Simulations
NASA Astrophysics Data System (ADS)
Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.
2016-06-01
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
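A generic genetic algorithm of the kind described above can be sketched as follows. The tournament selection, one-point crossover, and Gaussian mutation operators are conventional choices, and the quadratic "fidelity" landscape is a toy surrogate, not an actual quantum gate fidelity:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=30, generations=60,
                     mutation=0.1, seed=0):
    """Generic GA: tournament selection, one-point crossover, Gaussian
    mutation.  Returns the fittest individual of the final population."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)              # tournament of two
            parent1 = max(a, b, key=fitness)
            a, b = rng.sample(pop, 2)
            parent2 = max(a, b, key=fitness)
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0
            child = parent1[:cut] + parent2[cut:]  # one-point crossover
            child = [g + rng.gauss(0, mutation) for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy "fidelity" surface peaking at gene vector (0.5, -0.25).
fidelity = lambda g: 1.0 - (g[0] - 0.5) ** 2 - (g[1] + 0.25) ** 2
best = genetic_optimize(fidelity, n_genes=2)
```

In the paper's setting, the genes would encode gate-sequence parameters and the fitness would be the simulation fidelity under experimental constraints.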
On Dijkstra's Algorithm for Deadlock Detection
NASA Astrophysics Data System (ADS)
Li, Youming; Greca, Ardian; Harris, James
We study a classical problem in operating systems concerning deadlock detection for systems with reusable resources. Dijkstra's elegant algorithm utilizes simple data structures, but its cost grows quadratically with the number of processes. Our goal is to reduce this cost in an optimal way without losing the simplicity of the data structures. More specifically, we present a graph-free and almost optimal algorithm whose cost depends linearly on the number of processes, when the number of resources is fixed and the units of requests for resources are bounded by constants.
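For reference, the classical quadratic detection sweep (the baseline the paper improves upon, not the authors' linear algorithm) repeatedly grants any process whose outstanding request fits in the currently available resource vector:

```python
def detect_deadlock(available, allocation, request):
    """Classical deadlock-detection sweep for reusable resources:
    repeatedly let any process whose outstanding request fits in the
    available vector run to completion and release its allocation.
    Returns the set of deadlocked process indices (empty if none)."""
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(r <= w for r, w in zip(request[i], work)):
                # process i can finish and release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return {i for i in range(n) if not finished[i]}
```

With two processes each holding one resource unit and requesting the other's, the sweep reports both as deadlocked; the outer loop over processes repeated up to n times is the quadratic cost the paper targets.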
Project resource reallocation algorithm
NASA Technical Reports Server (NTRS)
Myers, J. E.
1981-01-01
A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
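The proportional redistribution step can be sketched as follows. This is a minimal reading of the linear expansion/contraction described above, with the PACE card format left out; each old period's cost is spread over the new periods it overlaps when the schedule is rescaled:

```python
def rescale_profile(costs, new_len):
    """Linearly stretch or compress a per-period cost profile to a new
    schedule length, conserving total cost.  Each old period is spread
    over the new periods it overlaps under the time scaling
    new_len / len(costs)."""
    old_len = len(costs)
    out = [0.0] * new_len
    scale = new_len / old_len
    for i, c in enumerate(costs):
        start, end = i * scale, (i + 1) * scale   # interval in new time units
        j = int(start)
        while j < end and j < new_len:
            overlap = min(end, j + 1) - max(start, j)
            out[j] += c * (overlap / scale)       # proportional share
            j += 1
    return out
```

For example, doubling a three-period schedule halves each period's cost into two periods, while the total estimate is unchanged.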
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on the Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Sidky, Emil Y.
2015-03-01
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., 'one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional 'two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The fTV = 0.8 constraint provided a small reduction in noise (~1%) compared to the unconstrained reconstruction. Images reconstructed with the fTV = 0.5 constraint demonstrated 77% to 94% standard deviation reduction compared to the two-step reconstruction, however with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (fTV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
Programming parallel vision algorithms
Shapiro, L.G.
1988-01-01
Computer vision requires the processing of large volumes of data, and parallel architectures and algorithms are needed for it to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new, simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
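For context, the textbook greedy 1/2-approximation for weighted matching looks like this. The paper's algorithm achieves the same guarantee without the global sort (which is what makes it fast and parallelizable); this is only the classical baseline:

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weighted matching: scan
    edges in nonincreasing weight order and keep any edge whose two
    endpoints are both still unmatched.  edges: list of (weight, u, v)."""
    matched = set()
    matching, weight = [], 0.0
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
            weight += w
    return matching, weight
```

Each kept edge can block at most two optimal edges of no greater weight, which is where the 1/2 guarantee comes from; the O(m log m) sort is the serial bottleneck that locally dominant or suitor-style schemes avoid.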
Large-scale sequential quadratic programming algorithms
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are:
1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed.
2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained.
3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven.
4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem.
An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
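The paper's first feature, replacing pitch and yaw angles by components of the flow-direction unit vector, can be illustrated with the conversion below. The angle convention (pitch about the y-axis, yaw about the z-axis, x along the probe axis) is an assumption here, not necessarily the one the paper uses:

```python
import math

def flow_unit_vector(pitch_deg, yaw_deg):
    """Convert pitch and yaw angles (degrees) to a unit vector in the
    flow direction, under an assumed aerospace-style convention.  Two
    of these components then serve as the flow-direction variables."""
    a = math.radians(pitch_deg)
    b = math.radians(yaw_deg)
    x = math.cos(a) * math.cos(b)
    y = math.cos(a) * math.sin(b)
    z = math.sin(a)
    return (x, y, z)
```

Because the components are bounded and vary smoothly, they are better behaved than angles in Taylor-series fits of calibration data.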
Hierarchical tree algorithm for collisional N-body simulations on GRAPE
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Kawai, Atsushi
2016-06-01
We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N² summation in the original Hermite scheme. For example, we achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10⁶ system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.
Tsoi, Y H; Xie, S Q
2011-02-01
The kinematics of the human ankle is commonly modeled as a biaxial hinge joint model. However, significant variations in axis orientations have been found between different individuals and also between different foot configurations. For ankle rehabilitation robots, information regarding the ankle kinematic parameters can be used to estimate the ankle and subtalar joint displacements. This can in turn be used as auxiliary variables in adaptive control schemes to allow modification of the robot stiffness and damping parameters to reduce the forces applied at stiffer foot configurations. Due to the large variations observed in the ankle kinematic parameters, an online identification algorithm is required to provide estimates of the model parameters. An online parameter estimation routine based on the recursive least-squares (RLS) algorithm was therefore developed in this research. An extension of the conventional biaxial ankle kinematic model, which allows variation in axis orientations with different foot configurations had also been developed and utilized in the estimation algorithm. Simulation results showed that use of the extended model in the online algorithm is effective in capturing the foot orientation of a biaxial ankle model with variable joint axis orientations. Experimental results had also shown that a modified RLS algorithm that penalizes a deviation of model parameters from their nominal values can be used to obtain more realistic parameter estimates while maintaining a level of estimation accuracy comparable to that of the conventional RLS routine. PMID:21280877
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends, than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
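The flavor of the speedup can be seen in zeroth-order DMA, where the cumulative-sum recurrence already makes the moving average O(N). This sketch covers only the simple-moving-average case, not the paper's higher-order recurrences:

```python
import numpy as np

def moving_average(y, n):
    """Centered simple moving average of odd window length n, computed
    in O(N) from a cumulative sum (edge samples trimmed)."""
    c = np.cumsum(np.concatenate(([0.0], y)))
    return (c[n:] - c[:-n]) / n

def dma_fluctuation(x, n):
    """Zeroth-order centered DMA fluctuation F(n): RMS deviation of the
    integrated series from its moving-average trend.  Higher-order DMA
    replaces the simple average with a polynomial (Savitzky-Golay-type)
    moving fit, which the paper's recurrences also reduce to O(N)."""
    y = np.cumsum(x - np.mean(x))
    trend = moving_average(y, n)
    half = n // 2
    return np.sqrt(np.mean((y[half:len(y) - half] - trend) ** 2))
```

Plotting log F(n) against log n over a range of window lengths then yields the scaling exponent.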
Distortion correction algorithm for UAV remote sensing image based on CUDA
NASA Astrophysics Data System (ADS)
Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu
2014-03-01
In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. UAV platforms are equipped with non-metric digital cameras with lens distortion, resulting in larger geometric deformation of the acquired images and affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and excluding image loading and saving times, the maximum acceleration ratio of our proposed algorithm reaches 58 times that of the traditional algorithm. Thus, data processing time can be reduced by one to two hours, thereby considerably improving disaster emergency response capability.
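The per-pixel computation being corrected can be sketched with a two-term radial (Brown) lens model on the CPU. The model and coefficient values are illustrative (the paper does not state its distortion model here); the point is that every output pixel is independent, which is exactly what maps well to CUDA threads:

```python
import numpy as np

def distort_map(pts, k1, k2, center):
    """For each ideal (corrected) pixel coordinate, return the location
    to sample in the distorted source image under the radial model
    r' = r * (1 + k1*r^2 + k2*r^4).  pts: (N, 2) array; every point is
    computed independently, hence the easy GPU parallelization."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return center + d * factor
```

A full correction would evaluate this map for the whole output grid and resample the source image (e.g., bilinearly) at the returned coordinates.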
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm, called dual-byte-marker, is proposed. Based on experiments done on images from a hard disk, physical memory, and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with faster execution time for header detection, than the single-byte-marker.
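A minimal sketch of marker-pair scanning for JFIF headers follows. The marker layout (SOI FF D8 immediately followed by APP0 FF E0 carrying the "JFIF\0" identifier) comes from the JFIF specification; the implementation is an illustration, not the paper's algorithm:

```python
def find_jfif_headers(data):
    """Scan a byte buffer for JFIF headers by matching two two-byte
    markers at once -- SOI (FF D8) immediately followed by APP0 (FF E0)
    -- then checking the 'JFIF' identifier inside the APP0 segment.
    Returns the offsets of candidate headers."""
    offsets = []
    i = data.find(b"\xff\xd8\xff\xe0")
    while i != -1:
        # APP0 payload: 2-byte segment length, then identifier "JFIF\x00"
        if data[i + 6:i + 11] == b"JFIF\x00":
            offsets.append(i)
        i = data.find(b"\xff\xd8\xff\xe0", i + 1)
    return offsets
```

Matching four marker bytes at once rejects non-JPEG data earlier than checking single marker bytes sequentially, which is the intuition behind the dual-byte approach.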
Identification of Traceability Barcode Based on Phase Correlation Algorithm
NASA Astrophysics Data System (ADS)
Lang, Liying; Zhang, Xiaofang
In this paper, the phase correlation algorithm based on the Fourier transform, a widely used method of image registration, is applied to traceability barcode identification. A rotation-invariant phase correlation algorithm, which combines a polar coordinate transform with phase correlation, can recognize barcodes that are partly damaged or rotated. The paper provides analysis and simulation of the algorithm using Matlab; the results show that the algorithm offers good real-time performance and high accuracy. It improves matching precision and reduces computation by optimizing the rotation-invariant phase correlation.
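The translation-only core of phase correlation can be sketched with NumPy FFTs; the polar-transform extension for rotation invariance mentioned above is not included:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image a onto image b:
    the inverse FFT of the normalized cross-power spectrum peaks at
    the shift."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                 # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Because only the spectral phase is kept, the correlation surface is a sharp peak even under uniform brightness changes, which is what makes the method robust for partly degraded barcodes.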
Algorithmic cooling in liquid-state nuclear magnetic resonance
NASA Astrophysics Data System (ADS)
Atia, Yosi; Elias, Yuval; Mor, Tal; Weinstein, Yossi
2016-01-01
Algorithmic cooling is a method that employs thermalization to increase qubit purification level; namely, it reduces the qubit system's entropy. We utilized gradient ascent pulse engineering, an optimal control algorithm, to implement algorithmic cooling in liquid-state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of ¹³C₂-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic-resonance spectroscopy.
Automatic registration and segmentation algorithm for multiple electrophoresis images
NASA Astrophysics Data System (ADS)
Baker, Matthew S.; Busse, Harald; Vogt, Martin
2000-06-01
We present an algorithm for registering, segmenting, and quantifying multiple scanned electrophoresis images. Two-dimensional (2D) gel electrophoresis is a technique for separating proteins or other macromolecules in organic material according to net charge and molecular mass; it results in scanned grayscale images with dark spots against a light background marking the presence of such macromolecules. The algorithm begins by registering each of the images using a non-rigid registration algorithm. The registered images are then jointly segmented using a Markov random field approach to obtain a single segmentation. By using multiple images, the effect of noise is greatly reduced. We demonstrate the algorithm on several sets of real data.
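The noise-reduction benefit of using multiple registered images can be shown in its crudest form: averaging m registered images shrinks independent noise by roughly 1/sqrt(m), after which even a plain threshold segments well. This stand-in uses a threshold where the paper uses a Markov random field:

```python
import numpy as np

def joint_segment(images, threshold):
    """Segment dark spots jointly from several registered images by
    thresholding their pixelwise mean.  Averaging m images reduces the
    noise standard deviation by about 1/sqrt(m); the paper's MRF
    approach replaces the plain threshold used here."""
    mean = np.mean(np.stack(images), axis=0)
    return mean < threshold   # spots are dark against a light background
```

On noisy copies of a synthetic gel with one dark spot, the joint segmentation recovers the spot cleanly.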