Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-08-01
To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is completely blind, and (2) when extracting the secret qubits, it needs no quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
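The classical LSB technique that the LSQb algorithm generalizes can be sketched as follows. This is an illustrative Python sketch of ordinary LSB embedding on integer pixel values, not the paper's method: the quantum algorithm operates on NCQI states with comparator and controlled-swap circuits, and the function names here are invented for the example.

```python
def embed_lsb(cover_pixels, secret_bits):
    """Replace the least significant bit of each pixel with a secret bit."""
    assert len(secret_bits) <= len(cover_pixels)
    stego = list(cover_pixels)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the secret bit
    return stego

def extract_lsb(stego_pixels, n_bits):
    """Recover the secret bits by reading each pixel's LSB (blind extraction)."""
    return [p & 1 for p in stego_pixels[:n_bits]]

cover = [200, 13, 97, 254]      # 8-bit gray values of a cover image
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, 4) == secret
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # change is imperceptible
```

As in the quantum version, extraction is blind: it reads only the stego image, with no reference to the original cover.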
Algorithm for Detecting Significant Locations from Raw GPS Data
NASA Astrophysics Data System (ADS)
Kami, Nobuharu; Enomoto, Nobuyuki; Baba, Teruyuki; Yoshikawa, Takashi
We present a fast algorithm for probabilistically extracting significant locations from raw GPS data based on data point density. Extracting significant locations from raw GPS data is the first essential step of algorithms designed for location-aware applications. Assuming that a location is significant if users spend a certain time around that area, most current algorithms compare spatial/temporal variables, such as stay duration and roaming diameter, with given fixed thresholds to extract significant locations. However, the appropriate threshold values are not clearly known a priori, and algorithms with fixed thresholds are inherently error-prone, especially under high noise levels. Moreover, for N data points, they are generally O(N^2) algorithms, since distance computation is required. We developed a fast algorithm for selective data point sampling around significant locations based on density information by constructing random histograms using locality-sensitive hashing. Evaluations show competitive performance in detecting significant locations even under high noise levels.
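The density idea can be illustrated with a single quantized grid standing in for one of the paper's randomized histograms. Everything below is an assumption for illustration: the cell size, the count threshold, and the use of one fixed grid rather than the LSH-based random histograms the algorithm actually builds. The key property carries over: one pass over the N points, no pairwise distances.

```python
from collections import Counter

def dense_cells(points, cell=0.001, min_count=5):
    """Bucket each (lat, lon) fix into a grid cell in O(N); keep dense cells,
    which approximate areas where the user lingered."""
    counts = Counter((round(lat / cell), round(lon / cell)) for lat, lon in points)
    return {c for c, n in counts.items() if n >= min_count}

# Five fixes clustered near one location, plus scattered noise fixes.
home = [(35.6800 + i * 1e-5, 139.7000) for i in range(5)]
noise = [(35.1, 139.1), (35.9, 139.9), (34.5, 140.2)]
significant = dense_cells(home + noise, min_count=5)
assert significant == {(round(35.6800 / 0.001), round(139.7000 / 0.001))}
```

Using several randomly shifted grids and intersecting their dense cells, as the paper's random histograms do, reduces the sensitivity to where cell boundaries happen to fall.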
An algorithmic method for reducing conductance-based neuron models.
Sorensen, Michael E; DeWeerth, Stephen P
2006-08-01
Although conductance-based neural models provide a realistic depiction of neuronal activity, their complexity often limits effective implementation and analysis. Neuronal model reduction methods provide a means to reduce model complexity while retaining the original model's realism and relevance. Such methods, however, typically include ad hoc components that require that the modeler already be intimately familiar with the dynamics of the original model. We present an automated, algorithmic method for reducing conductance-based neuron models using the method of equivalent potentials (Kepler et al., Biol Cybern 66(5):381-387, 1992). Our results demonstrate that this algorithm is able to reduce the complexity of the original model with minimal performance loss, and requires minimal prior knowledge of the model's dynamics. Furthermore, by utilizing a cost function based on the contribution of each state variable to the total conductance of the model, the performance of the algorithm can be significantly improved.
Pyrolysis of wastewater biosolids significantly reduces estrogenicity.
Hoffman, T C; Zitomer, D H; McNamara, P J
2016-11-01
Most wastewater treatment processes are not specifically designed to remove micropollutants. Many micropollutants are hydrophobic so they remain in the biosolids and are discharged to the environment through land-application of biosolids. Micropollutants encompass a broad range of organic chemicals, including estrogenic compounds (natural and synthetic) that reside in the environment, a.k.a. environmental estrogens. Public concern over the land application of biosolids, stemming from the occurrence of micropollutants, diminishes the value of biosolids, an important by-product for wastewater treatment plants. This research evaluated pyrolysis, the partial decomposition of organic material in an oxygen-deprived system under high temperatures, as a biosolids treatment process that could remove estrogenic compounds from solids while producing a less hormonally active biochar for soil amendment. The estrogenicity, measured in estradiol equivalents (EEQ) by the yeast estrogen screen (YES) assay, of pyrolyzed biosolids was compared to primary and anaerobically digested biosolids. The estrogenic responses from primary solids and anaerobically digested solids were not statistically significantly different, but pyrolysis of anaerobically digested solids resulted in a significant reduction in EEQ; increasing pyrolysis temperature from 100°C to 500°C increased the removal of EEQ, with greater than 95% removal occurring at or above 400°C. This research demonstrates that biosolids treatment with pyrolysis would substantially decrease (removal>95%) the estrogens associated with this biosolids product. Thus, pyrolysis of biosolids can be used to produce a valuable soil amendment product, biochar, that minimizes discharge of estrogens to the environment. PMID:27344259
Discovering sequence similarity by the algorithmic significance method
Milosavljevic, A.
1993-02-01
The minimal-length encoding approach is applied to define a concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. Minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm that is based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
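The core inequality behind the algorithmic significance method is that if an encoding that exploits the keywords spends d fewer bits on a sequence than a null-model encoding, the probability of that happening by chance is at most 2^-d. The toy encoder below is an invented stand-in (a greedy parse with made-up bit costs) chosen only because its bit counts can be checked by hand; the paper's encoder is a linear-time dynamic-programming compressor over a directed acyclic word graph.

```python
import math

def null_bits(seq):
    """Null model: 2 bits per DNA base."""
    return 2 * len(seq)

def keyword_bits(seq, keywords):
    """Greedy left-to-right parse: each literal base costs 1 flag bit + 2 bits;
    each keyword hit costs 1 flag bit + a keyword index of ceil(log2(K)) bits
    (at least 1)."""
    index_bits = max(1, math.ceil(math.log2(len(keywords))))
    bits, i = 0, 0
    while i < len(seq):
        for kw in keywords:
            if seq.startswith(kw, i):
                bits += 1 + index_bits
                i += len(kw)
                break
        else:  # no keyword matched at position i: encode one literal base
            bits += 1 + 2
            i += 1
    return bits

seq, kws = "GATTACA" * 4, ["GATTACA"]
d = null_bits(seq) - keyword_bits(seq, kws)  # bits saved thanks to the keywords
p_bound = 2.0 ** -d                          # chance probability is at most 2^-d
assert d == 56 - 8
assert p_bound < 1e-14   # the similarity is overwhelmingly non-random
```

The significance statement needs no similarity threshold: d itself converts directly into a p-value bound.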
Algorithms for Detecting Significantly Mutated Pathways in Cancer
NASA Astrophysics Data System (ADS)
Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.
Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks.
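The diffusion step can be sketched in matrix form. A common formulation of such a random-walk-with-restart diffusion (assumed here; the paper may parameterize it differently) is F = b(I - (1-b)W)^-1, where W is the column-normalized adjacency matrix and b the restart probability; column j of F is then the "influence" that mutated gene j exerts on every other gene.

```python
import numpy as np

def influence_matrix(adj, restart=0.4):
    """Random-walk-with-restart diffusion over an interaction network.
    Column j gives the influence of gene j on all genes."""
    W = adj / adj.sum(axis=0, keepdims=True)  # column-normalize the adjacency matrix
    n = adj.shape[0]
    return restart * np.linalg.inv(np.eye(n) - (1 - restart) * W)

# Toy 3-gene path network: gene 0 -- gene 1 -- gene 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
F = influence_matrix(A)
# A mutated gene influences its direct neighbor more than a two-hop gene,
# which is what lets the method define local neighborhoods of influence.
assert F[1, 0] > F[2, 0] > 0
```

Thresholding these influence values yields, for each mutated gene, the local subnetwork over which the second-stage FDR test is run.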
A genetic algorithm to reduce stream channel cross section data
Berenbrock, C.
2006-01-01
A genetic algorithm (GA) was used to reduce cross section data for a hypothetical example consisting of 41 data points and for 10 cross sections on the Kootenai River. The number of data points for the Kootenai River cross sections ranged from about 500 to more than 2,500. The GA was applied to reduce the number of data points to a manageable dataset because most models and other software require fewer than 100 data points for management, manipulation, and analysis. Results indicated that the program successfully reduced the data. Fitness values from the genetic algorithm were lower (better) than those in a previous study that used standard procedures of reducing the cross section data. On average, fitnesses were 29 percent lower, and several were about 50 percent lower. Results also showed that cross sections produced by the genetic algorithm were representative of the original section and that near-optimal results could be obtained in a single run, even for large problems. Other data also can be reduced in a method similar to that for cross section data.
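A minimal GA for the point-reduction idea can be sketched as follows. The fitness function (maximum vertical deviation of the reduced section from the original), the elitism-plus-mutation scheme, and all parameters are assumptions for illustration, not the study's actual formulation.

```python
import random

def interp(xs, ys, x):
    """Piecewise-linear value of the polyline (xs, ys) at station x."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    return ys[-1]

def fitness(subset, xs, ys):
    """Max vertical deviation of the reduced section from the original
    (lower is better); the two end points are always retained."""
    keep = [0] + sorted(subset) + [len(xs) - 1]
    kx = [xs[i] for i in keep]
    ky = [ys[i] for i in keep]
    return max(abs(interp(kx, ky, x) - y) for x, y in zip(xs, ys))

def mutate(subset, interior, rng):
    """Swap one kept interior point for a random one."""
    child = set(subset)
    child.discard(rng.choice(sorted(child)))
    while len(child) < len(subset):
        child.add(rng.choice(interior))
    return sorted(child)

def reduce_section(xs, ys, k, pop_size=30, gens=60, seed=1):
    """Evolve k-point subsets of the interior stations by elitism + mutation."""
    rng = random.Random(seed)
    interior = list(range(1, len(xs) - 1))
    pop = [sorted(rng.sample(interior, k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: fitness(s, xs, ys))
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(s, interior, rng) for s in elite]
    return min(pop, key=lambda s: fitness(s, xs, ys))

# A 9-point section with one sharp thalweg-like feature at station 4:
# a good 3-point reduction must keep the points around that feature.
xs = [float(i) for i in range(9)]
ys = [0.0, 0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0]
best = reduce_section(xs, ys, k=3)
assert len(best) == 3
assert fitness(best, xs, ys) <= fitness([1, 2, 6], xs, ys)  # beats a subset missing the feature
```

Scaling the same loop from 9 points to the Kootenai River's 500-2,500 points per section only changes the chromosome length, which is what makes a GA attractive for this reduction task.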
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
NASA Astrophysics Data System (ADS)
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
The significance of sensory appeal for reduced meat consumption.
Tucker, Corrina A
2014-10-01
Reducing meat (over-)consumption as a way to help address environmental deterioration will require a range of strategies, and any such strategies will benefit from understanding how individuals might respond to various meat consumption practices. To investigate how New Zealanders perceive such a range of practices, in this instance in vitro meat, eating nose-to-tail, entomophagy and reducing meat consumption, focus groups involving a total of 69 participants were held around the country. While it is the damaging environmental implications of intensive farming practices and the projected continuation of increasing global consumer demand for meat products that has propelled this research, when asked to consider variations on the conventional meat-centric diet common to many New Zealanders, it was the sensory appeal of the areas considered that was deemed most problematic. While an ecological rationale for considering these 'meat' alternatives was recognised and considered important by most, transforming this value into action looks far less promising given the recurrent sensory objections to consuming different protein-based foods or of reducing meat consumption. This article considers the responses of focus group participants in relation to each of the dietary practices outlined, and offers suggestions on ways to encourage a more environmentally viable diet.
The significance of sensory appeal for reduced meat consumption.
Tucker, Corrina A
2014-10-01
Reducing meat (over-)consumption as a way to help address environmental deterioration will require a range of strategies, and any such strategies will benefit from understanding how individuals might respond to various meat consumption practices. To investigate how New Zealanders perceive such a range of practices, in this instance in vitro meat, eating nose-to-tail, entomophagy and reducing meat consumption, focus groups involving a total of 69 participants were held around the country. While it is the damaging environmental implications of intensive farming practices and the projected continuation of increasing global consumer demand for meat products that has propelled this research, when asked to consider variations on the conventional meat-centric diet common to many New Zealanders, it was the sensory appeal of the areas considered that was deemed most problematic. While an ecological rationale for considering these 'meat' alternatives was recognised and considered important by most, transforming this value into action looks far less promising given the recurrent sensory objections to consuming different protein-based foods or of reducing meat consumption. This article considers the responses of focus group participants in relation to each of the dietary practices outlined, and offers suggestions on ways to encourage a more environmentally viable diet. PMID:24953197
Reducing facet nucleation during algorithmic self-assembly.
Chen, Ho-Lin; Schulman, Rebecca; Goel, Ashish; Winfree, Erik
2007-09-01
Algorithmic self-assembly, a generalization of crystal growth, has been proposed as a mechanism for bottom-up fabrication of complex nanostructures and autonomous DNA computation. In principle, growth can be programmed by designing a set of molecular tiles with binding interactions that enforce assembly rules. In practice, however, errors during assembly cause undesired products, drastically reducing yields. Here we provide experimental evidence that assembly can be made more robust to errors by adding redundant tiles that "proofread" assembly. We construct DNA tile sets for two methods, uniform and snaked proofreading. While both tile sets are predicted to reduce errors during growth, the snaked proofreading tile set is also designed to reduce nucleation errors on crystal facets. Using atomic force microscopy to image growth of proofreading tiles on ribbon-like crystals presenting long facets, we show that under the physical conditions we studied the rate of facet nucleation is 4-fold smaller for snaked proofreading tile sets than for uniform proofreading tile sets.
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building an inexpensive and efficient computer cluster system that parallelizes the mean shift segmentation algorithm based on the MapReduce model. This not only ensures the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm therefore has clear significance and practical value.
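The mode-seeking iteration at the heart of mean shift is what the MapReduce scheme parallelizes, since each point's shift is independent and fits naturally into a map task. Below is a serial 1-D sketch with a flat kernel on scalar intensities; the bandwidth, kernel, and data are illustrative assumptions, and a real image version would shift vectors in the joint spatial-range domain.

```python
def mean_shift_point(x, data, bandwidth):
    """Iterate x toward the local density mode using a flat kernel:
    repeatedly move x to the mean of its neighbors within the bandwidth."""
    for _ in range(100):
        neighbors = [p for p in data if abs(p - x) <= bandwidth]
        new_x = sum(neighbors) / len(neighbors)
        if abs(new_x - x) < 1e-6:  # converged to a mode
            break
        x = new_x
    return x

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]   # pixel intensities in two clusters
m1 = mean_shift_point(1.0, data, 1.0)
m2 = mean_shift_point(5.0, data, 1.0)
# Points converge to the mode of their own cluster; pixels sharing a mode
# form one segment.
assert abs(m1 - 1.0) < 0.2 and abs(m2 - 5.0) < 0.3
```

In the MapReduce setting, each mapper runs this loop for its share of the pixels, and a reducer groups pixels by the mode they converged to.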
Tadalafil significantly reduces ischemia reperfusion injury in skin island flaps
Kayiran, Oguz; Cuzdan, Suat S.; Uysal, Afsin; Kocer, Ugur
2013-01-01
Introduction: Numerous pharmacological agents have been used to enhance the viability of flaps. Ischemia reperfusion (I/R) injury is an unwanted, sometimes devastating complication in reconstructive microsurgery. Tadalafil, a specific inhibitor of phosphodiesterase type 5, is mainly used for erectile dysfunction and acts on vascular smooth muscle, platelets and leukocytes. Herein, the protective and therapeutic effect of tadalafil against I/R injury in a rat skin flap model is evaluated. Materials and Methods: Sixty epigastric island flaps were used to create the I/R model in 60 Wistar rats (non-ischemic group, ischemic group, medication group). Biochemical markers including total nitrite, malondialdehyde (MDA) and myeloperoxidase (MPO) were analysed. Necrosis rates were calculated and histopathologic evaluation was carried out. Results: MDA, MPO and total nitrite values were elevated in the ischemic group, whereas there was an evident drop in the medication group. Histological results revealed that early inflammatory findings (oedema, neutrophil infiltration, necrosis rate) were lower with tadalafil administration, and the differences were statistically significant (P < 0.05). Conclusions: We conclude that tadalafil has beneficial effects on epigastric island flaps against I/R injury. PMID:23960309
Colchicine Significantly Reduces Incident Cancer in Gout Male Patients
Kuo, Ming-Chun; Chang, Shun-Jen; Hsieh, Ming-Chia
2015-01-01
Abstract Patients with gout are more likely to develop most cancers than subjects without gout. Colchicine has been used for the treatment and prevention of gouty arthritis and has been reported to have an anticancer effect in vitro. However, to date no study has evaluated the relationship between colchicine use and incident cancers in patients with gout. This study enrolled male patients with gout identified in Taiwan's National Health Insurance Database for the years 1998 to 2011. Each gout patient was matched with 4 male controls by age and by month and year of first diagnosis, and was followed up until 2011. The study excluded those who were diagnosed with diabetes or any type of cancer within the year following enrollment. We calculated hazard ratios (HR), age-adjusted standardized incidence ratios, and incidence per 1000 person-years to evaluate cancer risk. A total of 24,050 male patients with gout and 76,129 male nongout controls were included. Patients with gout had a higher rate of incident all-cause cancers than controls (6.68% vs 6.43%, P = 0.006). A total of 13,679 patients with gout were defined as having been ever-users of colchicine and 10,371 patients with gout were defined as being never-users of colchicine. Ever-users of colchicine had a significantly lower HR of incident all-cause cancers than never-users of colchicine after adjustment for age (HR = 0.85, 95% CI = 0.77–0.94; P = 0.001). In conclusion, colchicine use was associated with a decreased risk of incident all-cause cancers in male Taiwanese patients with gout. PMID:26683907
The POP learning algorithms: reducing work in identifying fuzzy rules.
Quek, C; Zhou, R W
2001-12-01
A novel fuzzy neural network, the Pseudo Outer-Product based Fuzzy Neural Network (POPFNN), and its two fuzzy-rule-identification algorithms are proposed in this paper. They are the Pseudo Outer-Product (POP) learning and the Lazy Pseudo Outer-Product (LazyPOP) learning algorithms. These two learning algorithms are used in POPFNN to identify relevant fuzzy rules. In contrast with other rule-learning algorithms, the proposed algorithms have many advantages, such as being fast, reliable, efficient, and easy to understand. POP learning is a simple one-pass learning algorithm. It essentially performs rule-selection, and hence suffers from the shortcoming of having to consider all the possible rules. The second algorithm, the LazyPOP learning algorithm, truly identifies the fuzzy rules which are relevant, rather than using a rule-selection method whereby irrelevant fuzzy rules are eliminated from an initial rule set. In addition, it is able to adjust the structure of the fuzzy neural network. The proposed LazyPOP learning algorithm is able to delete invalid feature inputs according to the fuzzy rules that have been identified. Extensive experimental results and discussions are presented for a detailed analysis of the proposed algorithms.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite grid, FAC, algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
NASA Astrophysics Data System (ADS)
Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.
2015-03-01
For many years, the scientific community has been concerned with increasing the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises further challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.
Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung
2015-01-01
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W. W. G.
2014-12-01
An algorithm is developed to reduce the computational burden of constructing the reduced stiffness and mass matrices for a reduced order groundwater model. A reduced order groundwater model can be developed by projecting the full groundwater model onto a subspace whose range spans the range of the full model space. This is done through the use of a projection matrix. Although reduced order groundwater models have been shown to be able to make accurate estimates of the full model solution at a greatly reduced dimension, the computational cost of projecting the stiffness and mass matrices onto the subspace of the reduced model can be very demanding. To alleviate this difficulty, an algorithm is developed that is able to reduce the effort and cost of constructing the reduced stiffness and mass matrices of the reduced model. The algorithm is based on the concept of approximating the value of a function at some point by use of the first-order Taylor series approximation. The developed algorithm is applied to both a 1-D test case and a 2-D test case. In both cases the algorithm is able to reduce the effort and cost of constructing the reduced order model by several orders of magnitude while losing little to no accuracy.
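The projection being accelerated can be written as K_r = V^T K V and M_r = V^T M V, where the columns of V span the reduced subspace. The direct triple product below is only the expensive baseline whose cost the paper's Taylor-series-based algorithm avoids; the matrix sizes and random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 5                                     # full and reduced dimensions

K = rng.standard_normal((n, n))
K = K + K.T                                       # full stiffness matrix (symmetric)
M = np.eye(n)                                     # full mass matrix (identity for the sketch)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal basis of the reduced subspace

K_r = V.T @ K @ V                                 # r x r reduced stiffness matrix
M_r = V.T @ M @ V                                 # r x r reduced mass matrix

assert K_r.shape == (r, r)
assert np.allclose(K_r, K_r.T)                    # projection preserves symmetry
assert np.allclose(M_r, np.eye(r))                # orthonormal V keeps the mass matrix identity
```

Forming K_r this way costs O(n^2 r) per assembly, which is exactly why, when the full matrices change with model parameters at every step, a cheaper approximate update is worth several orders of magnitude.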
ALGORITHM FOR THE EVALUATION OF REDUCED WIGNER MATRICES
Prezeau, G.; Reinecke, M.
2010-10-15
Algorithms for the fast and exact computation of Wigner matrices are described, and their application to a fast and massively parallel 4π convolution code between a beam and a sky is also presented.
A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.
Steensland, Johan; Ray, Jaideep
2003-07-01
This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaptation, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaptation causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.
Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.
Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S
2013-01-01
The use of Geographic Information Systems has increased considerably since the eighties and nineties. Among their most demanding applications is shortest-path search. Several studies of shortest-path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest-path search algorithms, but it is not well suited for shortest-path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest-path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest-path search algorithm that operates on reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path with the proposed algorithm, Dijkstra's algorithm, and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
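For reference, the classic Dijkstra search the article builds on can be sketched as follows (the reduced-graph construction itself is the article's contribution and is not reproduced here; the toy graph is illustrative).

```python
import heapq

def dijkstra(graph, source, target):
    # graph: {node: [(neighbor, edge_weight), ...]}
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]                       # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back to the source.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return dist[target], [source] + path[::-1]

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
cost, path = dijkstra(g, "A", "D")           # optimal route A-B-C-D, cost 4
```

Running the same routine on a reduced graph shrinks the priority-queue workload, which is exactly where the article's speedup comes from.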
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
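A minimal sketch of the PSO half of the approach, with a generic quadratic error function standing in for the BP network's training error (all names and parameter values here are illustrative assumptions, not the paper's configuration): particles explore weight space, and the best position found would seed the network's initial weights.

```python
import numpy as np

def pso(error_fn, dim, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # candidate weight vectors
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_err = np.array([error_fn(p) for p in x])
    g = pbest[pbest_err.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Standard PSO update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        err = np.array([error_fn(p) for p in x])
        improved = err < pbest_err
        pbest[improved] = x[improved]
        pbest_err[improved] = err[improved]
        g = pbest[pbest_err.argmin()].copy()
    return g, float(pbest_err.min())

# Stand-in for the BP network's error surface: minimum at weights = 0.3.
weights, best = pso(lambda w: float(np.sum((w - 0.3) ** 2)), dim=4)
```

In the paper's setting, `error_fn` would evaluate the BP network's classification error for a given initial weight/threshold vector, and the map tasks would parallelize those evaluations.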
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.
Amudha, P; Karthik, S; Sivakumari, S
2015-01-01
Intrusion detection has become a main part of network security due to the huge number of attacks affecting computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) for the intrusion detection problem. The algorithms are combined to find better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with the other machine learning algorithms and found to be significantly different. PMID:26221625
Reducing aerodynamic vibration with piezoelectric actuators: a genetic algorithm optimization
NASA Astrophysics Data System (ADS)
Hu, Zhenning; Jakiela, Mark; Pitt, Dale M.; Burnham, Jay K.
2004-07-01
Modern high-performance aircraft fly at high speeds and high angles of attack. This can result in "buffet" aerodynamics, an unsteady turbulent flow that causes vibrations of the wings, tails, and body of the aircraft. These vibrations can degrade performance and ride quality and lead to fatigue failures. We are experimenting with controlling these vibrations by using piezoceramic actuators attached to the inner and outer skin of the aircraft. In this project, a tail or wing is investigated. A "generic" tail finite element model is studied in which individual actuators are assumed to exactly cover individual finite elements. Various optimizations of the orientations of and power consumed by these actuators are then performed. Real-coded genetic algorithms are used to perform the optimizations, and a design space approximation technique is used to minimize costly finite element runs. An important result is the identification of a power consumption threshold for the entire system. Below the threshold, vibration control performance of optimized systems decreases with decreasing values of power supplied to the entire system.
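A hedged sketch of a real-coded GA like the one described: chromosomes are vectors of real numbers (here, hypothetical actuator orientations in degrees), and the cheap `fitness` call below stands in for a costly finite element run, which is exactly what the paper's design-space approximation is meant to avoid.

```python
import random

def fitness(chrom):
    # Stand-in objective (assumed, not the paper's): best orientation 45 deg.
    return sum((g - 45.0) ** 2 for g in chrom)

def evolve(pop_size=30, genes=4, gens=300, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 90) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]           # elitist selection: keep top half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            # Blend (arithmetic) crossover, natural for real-coded chromosomes.
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:             # occasional Gaussian mutation
                i = rng.randrange(genes)
                child[i] += rng.gauss(0, 2.0)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
```

Elitism makes the best fitness monotonically non-increasing, so the loop steadily homes in on the optimum even with a modest population.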
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.
A fast algorithm to reduce gibbs ringing artifact in MRI.
Huang, Xin; Chen, Wufan
2005-01-01
In magnetic resonance imaging, a finite number of k-space data are often collected in order to decrease the acquisition time. The partial k-space data lead to the famous Gibbs ringing artifact when reconstructed with the Fourier transform method. The Gegenbauer reconstruction method has been shown to effectively eliminate the Gibbs ringing artifact and restore high resolution. However, the disadvantages of the Gegenbauer method are its greater computational time and the complicated choice of parameters. In this paper, we improve the Gegenbauer method by introducing the inverse polynomial reconstruction method and replacing the Gegenbauer polynomials with Chebyshev polynomials. The new method effectively reduces the reconstruction error and computational cost without any requirement for parameter selection. Additionally, we present an improved edge detection method which can achieve more accurate edges and make our new reconstruction method more efficient. The proposed method is verified with an artifact-removal experiment. PMID:17282451
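The paper's Chebyshev-based inverse polynomial method is involved; as an illustration of the underlying Gibbs problem only, this sketch reconstructs a step from truncated Fourier data and then applies the classical Lanczos sigma factors, a simpler, different smoothing technique that also damps the ringing (this is not the authors' method).

```python
import numpy as np

N = 32
x = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
f = np.sign(x)                              # discontinuous "edge", jump of 2

k = np.arange(1, N + 1)
# Exact Fourier sine coefficients of sign(x) on [-pi, pi]: 4/(pi*k) for odd k.
bk = 2.0 * (1 - (-1.0) ** k) / (np.pi * k)

# Truncated Fourier reconstruction: exhibits the Gibbs overshoot near x = 0.
partial = (bk[:, None] * np.sin(np.outer(k, x))).sum(axis=0)

# Lanczos sigma factors damp high modes and suppress most of the ringing.
sigma = np.sinc(k / (N + 1))                # np.sinc(t) = sin(pi t)/(pi t)
smoothed = ((sigma * bk)[:, None] * np.sin(np.outer(k, x))).sum(axis=0)

overshoot_raw = partial.max() - 1.0         # classic ~9% of the jump (~0.18)
overshoot_smoothed = smoothed.max() - 1.0
```

The trade-off (sigma smoothing blurs the edge, whereas polynomial reprojection methods recover it sharply) is precisely why the Gegenbauer/Chebyshev approaches above are attractive.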
D'Azevedo, E.F.; Romine, C.H.
1992-09-01
The standard formulation of the conjugate gradient algorithm involves two inner product computations. The results of these two inner products are needed to update the search direction and the computed solution. In a distributed memory parallel environment, the computation and subsequent distribution of these two values requires two separate communication and synchronization phases. In this paper, we present a mathematically equivalent rearrangement of the standard algorithm that reduces the number of communication phases. We give a second derivation of the modified conjugate gradient algorithm in terms of the natural relationship with the underlying Lanczos process. We also present empirical evidence of the stability of this modified algorithm.
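For context, the textbook conjugate gradient iteration is below; note the two inner products per iteration (for `alpha` and `beta`) whose separate global reductions the paper's rearrangement merges into a single communication phase on distributed-memory machines.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                   # initial residual
    p = r.copy()                    # initial search direction
    rr = r @ r                      # inner product #1: residual norm squared
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)       # inner product #2: step length
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p   # update search direction (beta = rr_new/rr)
        rr = rr_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In a distributed run, `p @ Ap` and `r @ r` each require an all-reduce; reordering the recurrences so both values are combined in one reduction is the communication saving the abstract describes.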
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier.
[Parallel PLS algorithm using MapReduce and its application in spectral modeling].
Yang, Hui-Hua; Du, Ling-Ling; Li, Ling-Qiao; Tang, Tian-Biao; Guo, Tuo; Liang, Qiong-Lin; Wang, Yi-Ming; Luo, Guo-An
2012-09-01
Partial least squares (PLS) has been widely used in spectral analysis and modeling, but it is computation-intensive and time-demanding when dealing with massive data. To solve this problem effectively, a novel parallel PLS using MapReduce is proposed, which consists of two procedures: the parallelization of data standardization and the parallelization of principal component computation. Using NIR spectral modeling as an example, experiments were conducted on a Hadoop cluster, which is a collection of ordinary computers. The experimental results demonstrate that the proposed parallel PLS algorithm can handle massive spectra, can significantly cut down the modeling time, achieves an essentially linear speedup, and can be easily scaled up. PMID:23240405
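The parallel standardization step can be sketched as a map/reduce pair (a single-process simulation under assumed function names, not the authors' code): each map task emits partial sums for its chunk of spectra, and the reduce step combines them into global column means and standard deviations.

```python
import numpy as np

def map_partial(chunk):
    # Each "map task" emits (count, sum, sum of squares) for its rows.
    return len(chunk), chunk.sum(axis=0), (chunk ** 2).sum(axis=0)

def reduce_stats(partials):
    # Combine partial sums into global column statistics.
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    s2 = sum(p[2] for p in partials)
    mean = s / n
    std = np.sqrt(s2 / n - mean ** 2)   # Var[X] = E[X^2] - E[X]^2
    return mean, std

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4)) * 2.0 + 5.0   # toy "spectra" matrix
chunks = np.array_split(X, 3)                   # three simulated map tasks
mean, std = reduce_stats([map_partial(c) for c in chunks])
X_std = (X - mean) / std                        # standardized spectra
```

Because only three small vectors per chunk cross the network, the shuffle cost is independent of the number of spectra, which is what makes the step scale.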
NASA Astrophysics Data System (ADS)
Williams, Arnold C.; Pachowicz, Peter W.
2004-09-01
Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision Laboratory and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high-resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
Utilization of UV Curing Technology to Significantly Reduce the Manufacturing Cost of LIB Electrodes
Voelker, Gary; Arnold, John
2015-11-30
Previously identified novel binders and associated UV curing technology have been shown to reduce the time required to apply and finish electrode coatings from tens of minutes to less than one second. This revolutionary approach can result in dramatic increases in process speeds, significantly reduced capital (a factor of 10 to 20) and operating costs, reduced energy requirements, and reduced environmental concerns and costs due to the virtual elimination of harmful volatile organic solvents and associated solvent dryers and recovery systems. The accumulated advantages of higher speed, lower capital and operating costs, reduced footprint, no VOC recovery, and reduced energy cost add up to a 90% reduction in the manufacturing cost of cathodes. When commercialized, the resulting cost reduction in lithium batteries will allow storage device manufacturers to expand their sales in the market and thereby accrue the energy savings of broader utilization of HEVs, PHEVs and EVs in the U.S.; a broad technology export market is also envisioned.
New Classification Method Based on Support-Significant Association Rules Algorithm
NASA Astrophysics Data System (ADS)
Li, Guoxin; Shi, Wen
One of the most well-studied problems in data mining is mining for association rules. Research has also introduced association rule mining methods to conduct classification tasks. These classification methods, based on association rule mining, can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence structure, where rules satisfying both minimum support and minimum confidence are returned as strong association rules to the analyst. However, this type of association rule mining lacks a rigorous statistical guarantee and can even be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on the support-significant association rule mining method, in which the confidence measure for an association rule is replaced by its significance, a better evaluation standard for association rules. A customer segmentation experiment on UCI data indicates the effectiveness of the new model.
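The abstract does not specify its exact significance measure, so as one concrete possibility, the sketch below scores a rule A -> B with a chi-square test of independence on the 2x2 contingency table of A and B over the transactions (an illustrative choice, not necessarily the authors'):

```python
def chi_square_rule(transactions, a, b):
    # Chi-square statistic for independence of items a and b (df = 1).
    n = len(transactions)
    n_a = sum(a in t for t in transactions)
    n_b = sum(b in t for t in transactions)
    n_ab = sum(a in t and b in t for t in transactions)
    observed = [n_ab, n_a - n_ab, n_b - n_ab, n - n_a - n_b + n_ab]
    pa, pb = n_a / n, n_b / n
    expected = [n * pa * pb, n * pa * (1 - pb),
                n * (1 - pa) * pb, n * (1 - pa) * (1 - pb)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Toy transactions: "milk" and "bread" co-occur far more than chance.
txns = ([{"milk", "bread"}] * 40 + [{"milk"}] * 10
        + [{"bread"}] * 10 + [{"eggs"}] * 40)
stat = chi_square_rule(txns, "milk", "bread")
# 3.841 is the 95% critical value of chi-square with 1 degree of freedom.
significant = stat > 3.841
```

Unlike a bare confidence threshold, this test rejects rules whose apparent strength is explainable by the items' marginal frequencies alone.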
Reliable design of H-2 optimal reduced-order controllers via a homotopy algorithm
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.; Richter, Stephen; Davis, Larry D.
1992-01-01
Due to control processor limitations, the design of reduced-order controllers is an active area of research. Suboptimal methods based on truncating the order of the corresponding linear-quadratic-Gaussian (LQG) compensator tend to fail if the requested controller dimension is sufficiently small and/or the requested controller authority is sufficiently high. Also, traditional parameter optimization approaches have only local convergence properties. This paper discusses a homotopy algorithm for optimal reduced-order control that has global convergence properties. The exposition is for discrete-time systems. The algorithm has been implemented in MATLAB and is applied to a benchmark problem.
An explicit algebraic reduced order algorithm for lithium ion cell voltage prediction
NASA Astrophysics Data System (ADS)
Senthil Kumar, V.; Gambhire, Priya; Hariharan, Krishnan S.; Khandelwal, Ashish; Kolake, Subramanya Mayya; Oh, Dukjin; Doo, Seokgwang
2014-02-01
The detailed isothermal electrochemical model for a lithium ion cell has ten coupled partial differential equations to describe the cell behavior. In an earlier publication [Journal of Power Sources, 222, 426 (2013)], a reduced order model (ROM) was developed by reducing the detailed model to a set of five linear ordinary differential equations and nonlinear algebraic expressions, using uniform reaction rate, volume averaging and profile based approximations. An arbitrary current profile, involving charge, rest and discharge, is broken down into constant current and linearly varying current periods. The linearly varying current period results are generic, since it includes the constant current period results as well. Hence, the linear ordinary differential equations in ROM are solved for a linearly varying current period and an explicit algebraic algorithm is developed for lithium ion cell voltage prediction. While the existing battery management system (BMS) algorithms are equivalent circuit based and ordinary differential equations, the proposed algorithm is an explicit algebraic algorithm. These results are useful to develop a BMS algorithm for on-board applications in electric or hybrid vehicles, smart phones etc. This algorithm is simple enough for a spread-sheet implementation and is useful for rapid analysis of laboratory data.
NASA Astrophysics Data System (ADS)
Bamber, D.; Goodman, I. R.; Torrez, William C.; Nguyen, H. T.
2001-08-01
Conditional probability logics (CPL's), such as Adams', while producing many satisfactory results, do not agree with commonsense reasoning for a number of key entailment schemes, including transitivity and contraposition. Also, CPL's and Bayesian techniques often: (1) use restrictive independence/simplification assumptions; (2) lack a rationale behind the choice of prior distribution; (3) require highly complex implementation calculations; (4) introduce ad hoc techniques. To address the above difficulties, a new CPL is being developed: CRANOF (Complexity Reducing Algorithm for Near Optimal Fusion), based upon three factors: (i) second-order probability logic (SOPL), i.e., probability of probabilities within a Bayesian framework; (ii) justified use of Dirichlet family priors, based on an extension of Lukacs' characterization theorem; and (iii) replacement of the theoretical optimal solution by a near-optimal one where the complexity of computations is reduced significantly. A fundamental application of CRANOF to correlation and tracking is provided here through a generic example in a form similar to transitivity: two track histories are to be merged or left alone, based upon observed kinematic and non-kinematic attribute information and conditional probabilities connecting the observed data to the degrees of matching of attributes, as well as relating the matching of prescribed groups of attributes from each track history to the correlation level between the histories.
Reduced sensitivity algorithm for optical processors using constraints and ridge regression.
Casasent, D; Ghosh, A
1988-04-15
Optical linear algebra processors that involve solutions of linear algebraic equations have significant potential in adaptive and inference machines. We present an algorithm that includes constraints on the accuracy of the processor and improves the accuracy of the results obtained from such analog processors. The constraint algorithm matches the problem to the accuracy of the processor. Calculation of the adaptive weights in a phased array radar is used as a case study. Simulation results confirm the claimed benefits. The desensitization of the calculated weights to computational errors in the processor is quantified. Ridge regression is used to determine the parameter needed in the algorithm.
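Ridge regression in its closed form is the core numerical mechanism here: the penalty term shrinks the solved weights, making them less sensitive to errors in the (analog) computation. A generic numpy sketch, with a toy data set standing in for the radar weight problem:

```python
import numpy as np

def ridge(A, b, lam):
    # Closed-form ridge solution: (A^T A + lam I)^{-1} A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ w_true + 0.01 * rng.standard_normal(50)   # noisy observations

w_ls = ridge(A, b, 0.0)        # ordinary least squares (lam = 0)
w_ridge = ridge(A, b, 1.0)     # shrunk, less error-sensitive solution
```

Increasing `lam` trades a small bias for reduced sensitivity, which is the desensitization effect the abstract quantifies for the optical processor.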
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
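A hedged sketch of the extreme learning machine (ELM) regressor the hybrid wraps: the hidden-layer weights are random and fixed, so "training" reduces to a single least-squares solve for the output weights. The smooth target function below is an assumed stand-in for Hs at the broken buoy, not real buoy data.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random, fixed hidden layer: weights W and biases b never trained.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # Single linear solve for the output weights (the whole "training").
        self.beta = np.linalg.lstsq(H, y, rcond=None)[0]
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))             # toy "nearby buoy" parameters
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # assumed stand-in for Hs
model = ELM().fit(X[:150], y[:150])
rmse = float(np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
```

Because each fit is just one `lstsq`, an ELM is cheap enough to sit inside a genetic algorithm's fitness evaluation, which is what makes the hybrid feature selection practical.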
Road Traffic Control Based on Genetic Algorithm for Reducing Traffic Congestion
NASA Astrophysics Data System (ADS)
Shigehiro, Yuji; Miyakawa, Takuya; Masuda, Tatsuya
In this paper, we propose a road traffic control method for reducing traffic congestion with a genetic algorithm. In the not-too-distant future, systems that control the routes of all vehicles in a certain area will have to be realized. Such a system should optimize the routes of all vehicles; however, the solution space of this problem is enormous. We therefore apply the genetic algorithm to this problem by encoding the routes of all vehicles into a fixed-length chromosome. To improve the search performance, a new genetic operator called “path shortening” is also designed. The effectiveness of the proposed method is shown by experiment.
Reducing the variability in random-phase initialized Gerchberg-Saxton Algorithm
NASA Astrophysics Data System (ADS)
Salgado-Remacha, Francisco Javier
2016-11-01
The Gerchberg-Saxton algorithm is a common tool for designing computer-generated holograms. There exist standard functions for evaluating the quality of the final results. However, the use of a randomized initial guess leads to different results, increasing the variability of the evaluation function values. This fact is especially detrimental when the computing time is long. In this work, a new tool is presented that describes the fidelity of the results with notably reduced variability over multiple runs of the Gerchberg-Saxton algorithm. This new tool proves very helpful for topical fields such as 3D digital holography.
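A minimal Gerchberg-Saxton loop for reference (target pattern, grid size and iteration count are illustrative assumptions): iterate between the hologram and image planes, enforcing the known amplitude constraint in each while keeping the phase. The random initial phase on the first line is exactly the source of the run-to-run variability discussed above.

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Random initial phase: different seeds give different final holograms.
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iters):
        img = np.fft.fft2(field)
        img = target_amp * np.exp(1j * np.angle(img))   # impose target amplitude
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))            # phase-only hologram constraint
    return np.angle(field)

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0          # toy target: bright square
target /= np.linalg.norm(target)
phase = gerchberg_saxton(target)
achieved = np.abs(np.fft.fft2(np.exp(1j * phase)))
# Normalized correlation between achieved and target amplitudes (1 = perfect).
corr = float((achieved / np.linalg.norm(achieved) * target).sum())
```

Averaging an evaluation metric like `corr` over many seeds is the naive way to handle the variability; the paper's contribution is a fidelity measure whose value fluctuates far less across such runs.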
Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J.
1996-12-31
Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm that gives a numerically stable procedure for finding Pade approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete Proofs of our results will be given in the final paper.
[SKLOF: a new algorithm to reduce the range of supernova candidates].
Tu, Liang-ping; Wei, Hui-ming; Wei, Peng; Pan, Jing-chang; Luo, A-li; Zhao, Yong-heng
2015-01-01
Supernovae (SNe) are called the "standard candles" of cosmology; the probability of an outburst in any given galaxy is very low, making them a special, rare kind of astronomical object. Only by surveying a large number of galaxies do we have a chance to find a supernova. A supernova in the midst of its explosion will illuminate the entire galaxy, so the galaxy spectra we obtain show obvious supernova features. But the number of supernovae found so far is very small relative to the huge number of astronomical objects. The computation time of the supernova search is key to whether follow-up observations can be made, so an efficient method is needed. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. By improving the LOF algorithm, a new algorithm, named SKLOF, is introduced that reduces the search range of supernova candidates in a flood of galaxy spectra. Firstly, the spectral datasets are pruned to discard most objects that cannot be outliers. Secondly, the improved LOF algorithm calculates the local outlier factors (LOFs) of the remaining spectra and arranges them in descending order. Finally, we obtain a smaller search range of supernova candidates for subsequent identification. The experimental results show that the algorithm is very effective: it not only improves accuracy but also reduces the running time compared with the LOF algorithm while guaranteeing detection accuracy.
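For reference, the plain LOF baseline that SKLOF prunes can be written in a few lines (the 2-D toy data stands in for spectral feature vectors; the O(N^2) pairwise-distance matrix below is exactly the cost SKLOF's pruning attacks):

```python
import numpy as np

def lof_scores(X, k=3):
    n = len(X)
    # O(N^2) all-pairs distance matrix: the bottleneck for large datasets.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]            # k nearest neighbors
    k_dist = D[np.arange(n), knn[:, -1]]          # distance to k-th neighbor
    lrd = np.empty(n)                             # local reachability density
    for i in range(n):
        # reach-dist(i, j) = max(k_dist(j), d(i, j)) for each neighbor j
        reach = np.maximum(k_dist[knn[i]], D[i, knn[i]])
        lrd[i] = k / reach.sum()
    # LOF(i): neighbors' mean density over i's density; >> 1 means outlier.
    return np.array([lrd[knn[i]].mean() / lrd[i] for i in range(n)])

rng = np.random.default_rng(0)
cluster = rng.normal(0, 0.1, (30, 2))             # "ordinary galaxy" spectra
outlier = np.array([[3.0, 3.0]])                  # lone supernova-like object
scores = lof_scores(np.vstack([cluster, outlier]))
```

Points whose density is far below their neighbors' (like the isolated point above, index 30) get a large LOF and survive as candidate outliers for follow-up inspection.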
PMID:25993860
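The density-based LOF score that SKLOF builds on can be sketched in pure NumPy. This is a minimal textbook LOF, with toy data, not the SKLOF implementation itself; points whose LOF is much greater than 1 sit in regions sparser than their neighbours' regions:

```python
import numpy as np

def lof_scores(X, k=3):
    """Minimal local outlier factor: LOF >> 1 marks points in regions
    much sparser than the regions of their k nearest neighbours."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]           # k nearest neighbours
    k_dist = D[np.arange(n), knn[:, -1]]         # distance to the k-th neighbour
    # reachability distance of p from o: max(k_dist(o), d(p, o))
    reach = np.maximum(k_dist[knn], D[np.arange(n)[:, None], knn])
    lrd = k / reach.sum(axis=1)                  # local reachability density
    return lrd[knn].mean(axis=1) / lrd           # LOF = avg neighbour lrd / own lrd

# A tight cluster plus one isolated point: the isolated point gets the top LOF
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5], [10, 10]], float)
scores = lof_scores(X, k=3)
top = int(np.argmax(scores))
```

Sorting `scores` in descending order and keeping only the head of the list mirrors how SKLOF shrinks the candidate search range.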
NASA Astrophysics Data System (ADS)
Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.
2016-02-01
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
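Direct matrix inversion, the baseline the abstract compares against, amounts to a per-pixel 2x2 linear solve. The attenuation matrix and images below are made-up toy values, not calibrated DECT data:

```python
import numpy as np

# Each pixel's (low-kVp, high-kVp) pair is modelled as a linear mix of two
# basis materials; inverting that mixing matrix recovers the material maps.
A = np.array([[0.8, 1.9],    # [water, iodine] attenuation at low kVp (assumed)
              [0.7, 1.1]])   # [water, iodine] attenuation at high kVp (assumed)
low  = np.array([[1.00, 1.35], [0.95, 2.10]])   # toy low-kVp image
high = np.array([[0.85, 0.95], [0.82, 1.30]])   # toy high-kVp image

pix = np.stack([low.ravel(), high.ravel()])     # 2 x N measurement matrix
materials = np.linalg.solve(A, pix)             # 2 x N material maps
water, iodine = (m.reshape(low.shape) for m in materials)
```

Because the two spectral channels are physically similar, A is poorly conditioned and this inversion amplifies noise, which is precisely the problem HYPR-LR, Iter-DECT and HYPR-NLM are designed to suppress.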
Tiwari, P; Xie, Y; Chen, Y; Deasy, J
2014-06-01
Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on a pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
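The cluster-then-sample step can be sketched as follows. The abstract does not name the clustering method, so plain k-means on influence-matrix rows stands in here; the function name and toy influence matrix are illustrative:

```python
import numpy as np

def sample_voxels(influence, n_clusters=4, rate=0.1, seed=0):
    """Cluster voxels by influence-matrix signature (plain k-means sketch)
    and sample a fixed fraction from each cluster."""
    rng = np.random.default_rng(seed)
    n = len(influence)
    centers = influence[rng.choice(n, n_clusters, replace=False)]
    for _ in range(20):                              # Lloyd iterations
        d = np.linalg.norm(influence[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = influence[labels == c].mean(axis=0)
    picked = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        m = max(1, int(rate * members.size))         # pre-set sampling rate
        picked.extend(rng.choice(members, m, replace=False))
    return np.array(sorted(picked))

# Toy problem: 200 interior voxels, 8 beamlet influence values each
influence = np.random.default_rng(1).random((200, 8))
idx = sample_voxels(influence, rate=0.1)
```

In the actual planning workflow, the optimizer would then keep all boundary voxels plus only the voxels in `idx`, shrinking the constraint set by roughly the sampling rate.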
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
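The Mean Monthly Standard deviation surrogate can be computed without any CO2 "truth" values, which is the point of the metric. The sketch below uses a hypothetical quality feature and a simple threshold in place of the learned filter; all names and numbers are illustrative:

```python
import numpy as np

def mean_monthly_stdev(co2, month):
    """MMS as the abstract describes it: average the per-month standard
    deviation of retrieved CO2 over all months present in the data."""
    months = np.unique(month)
    return float(np.mean([co2[month == m].std() for m in months]))

rng = np.random.default_rng(2)
month = rng.integers(0, 12, 500)
quality = rng.random(500)                       # stand-in input feature
# a low-quality subset of soundings retrieves CO2 with extra scatter
co2 = 400 + rng.normal(0, 1 + 3 * (quality < 0.2), 500)

baseline = mean_monthly_stdev(co2, month)
keep = quality >= 0.2                           # stand-in for the learned filter
filtered = mean_monthly_stdev(co2[keep], month[keep])
```

A filter that lowers `filtered` relative to `baseline` reduces retrieved scatter directly, sidestepping the harder problem of estimating per-sounding CO2 error.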
Zhou, Qi; Hong, Dan; Lu, Jun; Zheng, Defei; Ashwani, Neetica; Hu, Shaoyan
2015-04-01
In this study, we analyzed both administrative and clinical data from our hospital during 2002 to 2012 to evaluate the influence of government medical policies on reducing treatment abandonment in pediatric patients with acute lymphoblastic leukemia. Two policies, funding for catastrophic diseases and the new rural cooperative medical care system (NRCMS), were initiated in 2005 and 2011, respectively. In total, 1151 children diagnosed with acute lymphoblastic leukemia were enrolled in our study during this period, and 316 of them abandoned treatment. Statistical differences in sex, age, number of children in the family, and family financial status were observed. Most importantly, medical insurance coverage was critical for reducing treatment abandonment. However, the 92 cases that abandoned treatment after relapse showed no significant difference in either medical insurance coverage or duration from first complete remission. In conclusion, financial crisis was the main reason for abandoning treatment. Government-funded health care expenditure programs reduced families' economic burden and thereby reduced the abandonment rate, with a resultant increase in overall survival.
PMID:25393454
Heimbauer, Lisa A.; Beran, Michael J.; Owren, Michael J.
2011-01-01
A long-standing debate concerns whether humans are specialized for speech perception [1–7], which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content [2–4,7]. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words [8,9], asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuo-graphic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users [10]. Experiment 2 tested “impossibly unspeechlike” [3] sine-wave (SW) synthesis, which reduces speech to just three moving tones [11]. Although receiving only intermittent and non-contingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate, but improved in Experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human [12–14]. PMID:21723125
Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications
NASA Astrophysics Data System (ADS)
Song, Junehwa; Yeo, Boon-Lock
1997-12-01
The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
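One concrete instance of DCT-domain reduced-image extraction: for intra-coded blocks, the DC coefficient of an orthonormal 8x8 DCT equals one-eighth of the block sum, so a block-averaged thumbnail falls out with no inverse DCT at all. The sketch below operates on a raw toy frame rather than an actual MPEG-2 bitstream, and ignores the inter-coded and interlaced cases the paper handles:

```python
import numpy as np

def dc_image(frame):
    """Spatially reduced image from 8x8 DCT blocks: the DC term of the
    orthonormal 2-D DCT is (1/8) * block sum, i.e. 8 * block mean."""
    h, w = frame.shape
    blocks = frame.reshape(h // 8, 8, w // 8, 8)
    dc = blocks.sum(axis=(1, 3)) / 8.0   # the DCT DC coefficient per block
    return dc / 8.0                      # rescale back to the pixel range

frame = np.tile(np.arange(64, dtype=float).reshape(8, 8), (2, 2))  # 16x16 toy frame
small = dc_image(frame)                  # 2x2 reduced image of block means
```

In a real decoder one would read the DC coefficients straight from the entropy-decoded bitstream; the extra machinery the paper develops handles motion-compensated and interlaced macroblocks where the DC term is not directly available.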
Arumugam, Sudha; Lau, Christine SM; Chamberlain, Ronald S
2016-01-01
Objectives Effective postoperative pain management is crucial in the care of surgical patients. Opioids, which are commonly used in managing postoperative pain, have a potential for tolerance and addiction, along with sedating side effects. Gabapentin's use in multimodal analgesic regimens to treat neuropathic pain has been documented as having a favorable side-effect profile. This meta-analysis examined the use of preoperative gabapentin and its impact on postoperative opioid consumption. Materials and methods A comprehensive literature search was conducted to identify randomized controlled trials that evaluated the effect of preoperative gabapentin on postoperative opioid consumption. The outcomes of interest were cumulative opioid consumption following the surgery and the incidence of vomiting, somnolence, and nausea. Results A total of 1,793 patients involved in 17 randomized controlled trials formed the final analysis for this study. Postoperative opioid consumption was reduced when using gabapentin within the initial 24 hours following surgery (standard mean difference −1.35, 95% confidence interval [CI]: −1.96 to −0.73; P<0.001). There was a significant reduction in morphine, fentanyl, and tramadol consumption (P<0.05). While a significant increase in postoperative somnolence incidence was observed (relative risk 1.30, 95% CI: 1.10–1.54, P<0.05), there were no significant effects on postoperative vomiting and nausea. Conclusion The administration of preoperative gabapentin reduced the consumption of opioids during the initial 24 hours following surgery. The reduction in postoperative opioids with preoperative gabapentin increased postoperative somnolence, but no significant differences were observed in nausea and vomiting incidences. The results from this study demonstrate that gabapentin is more beneficial in mastectomy and spinal, abdominal, and thyroid surgeries. Gabapentin is an effective analgesic adjunct, and clinicians should consider its use in multimodal treatment
Marroni, A; Bennett, P B; Cronje, F J; Cali-Corleo, R; Germonpre, P; Pieri, M; Bonuccelli, C; Balestra, C
2004-01-01
In spite of many modifications to decompression algorithms, the incidence of decompression sickness (DCS) in scuba divers has changed very little. The success of staged, compared to linear, ascents is well described, yet theoretical changes in decompression ratios have diminished the importance of fast tissue gas tensions as critical for bubble generation. The most serious signs and symptoms of DCS involve the spinal cord, with a tissue half time of only 12.5 minutes. It is proposed that present decompression schedules do not permit sufficient gas elimination from such fast tissues, resulting in bubble formation. Further, it is hypothesized that introduction of a deep stop will significantly reduce fast tissue bubble formation and neurological DCS risk. A total of 181 dives were made to 82 fsw (25 m) by 22 volunteers. Two dives of 25 min and 20 min were made, with a 3 hr 30 min surface interval and according to 8 different ascent protocols. Ascent rates of 10, 33 or 60 fsw/min (3, 10, 18 m/min) were combined with no stops or a shallow stop at 20 fsw (6 m) or a deep stop at 50 fsw (15 m) and a shallow one at 20 fsw (6 m). The highest bubble scores (8.78/9.97), using the Spencer Scale (SS) and Extended Spencer Scale (ESS) respectively, were with the slowest ascent rate. This also showed the highest 5 min and 10 min tissue loads of 48% and 75%. The lowest bubble scores (1.79/2.50) were with an ascent rate of 33 fsw/min (10 m/min) and stops for 5 min at 50 fsw (15 m) and 20 fsw (6 m). This also showed the lowest 5 and 10 min tissue loads at 25% and 52% respectively. Thus, introduction of a deep stop significantly reduced Doppler-detected bubbles together with tissue gas tensions in the 5 and 10 min tissues, which has implications for reducing the incidence of neurological DCS in divers.
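The fast-tissue loading that motivates the deep stop follows standard Haldanean exponential kinetics. The inspired-pressure numbers below are illustrative stand-ins, not the study's protocol values:

```python
from math import exp, log

def tissue_tension(p0, p_inspired, t_min, half_time_min):
    """Haldanean gas loading: the tissue tension approaches the inspired
    inert-gas pressure exponentially with the given tissue half-time."""
    k = log(2) / half_time_min
    return p_inspired + (p0 - p_inspired) * exp(-k * t_min)

# A 5-min tissue is essentially saturated after a 25-min exposure (5 half-times),
# so it must off-gas during the ascent; a deep stop gives it time to do so
# while the ambient pressure is still high enough to keep bubbles from forming.
loaded = tissue_tension(p0=0.79, p_inspired=2.0, t_min=25, half_time_min=5)
```

With 5 half-times elapsed, the tissue has closed all but 1/32 of the gap to the inspired pressure, which is why the 5 and 10 min tissue loads in the abstract respond so strongly to the ascent profile.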
Neurotrophin-3 significantly reduces sodium channel expression linked to neuropathic pain states.
Wilson-Gerwing, Tracy D; Stucky, Cheryl L; McComb, Geoffrey W; Verge, Valerie M K
2008-10-01
Neuropathic pain resulting from chronic constriction injury (CCI) is critically linked to sensitization of peripheral nociceptors. Voltage-gated sodium channels are major contributors to this state and their expression can be upregulated by nerve growth factor (NGF). We have previously demonstrated that neurotrophin-3 (NT-3) acts antagonistically to NGF in modulating aspects of CCI-induced changes in trkA-associated nociceptor phenotype and thermal hyperalgesia. Thus, we hypothesized that exposure of neurons to increased levels of NT-3 would reduce expression of Na(v)1.8 and Na(v)1.9 in DRG neurons subject to CCI. In adult male rats, Na(v)1.8 and Na(v)1.9 mRNAs are expressed at high levels in predominantly small to medium size neurons. One week following CCI, there is a reduced incidence of neurons expressing detectable Na(v)1.8 and Na(v)1.9 mRNA, but without a significant decline in the mean level of neuronal expression; similar findings were observed immunohistochemically. There is also increased accumulation/redistribution of channel protein in the nerve, most apparent proximal to the first constriction site. Intrathecal infusion of NT-3 significantly attenuates neuronal expression of Na(v)1.8 and Na(v)1.9 mRNA contralateral and, most notably, ipsilateral to CCI, with a similar impact on relative protein expression at the level of the neuron and constricted nerve. We also observe reduced expression of the common neurotrophin receptor p75 in response to CCI that is not reversed by NT-3 in small to medium sized neurons and may confer an enhanced ability of NT-3 to signal via trkA, as has been previously shown in other cell types. These findings are consistent with an analgesic role for NT-3. PMID:18601922
Increasing Redundancy Exponentially Reduces Error Rates during Algorithmic Self-Assembly.
Schulman, Rebecca; Wright, Christina; Winfree, Erik
2015-06-23
While biology demonstrates that molecules can reliably transfer information and compute, design principles for implementing complex molecular computations in vitro are still being developed. In electronic computers, large-scale computation is made possible by redundancy, which allows errors to be detected and corrected. Increasing the amount of redundancy can exponentially reduce errors. Here, we use algorithmic self-assembly, a generalization of crystal growth in which the self-assembly process executes a program for growing an object, to examine experimentally whether redundancy can analogously reduce the rate at which errors occur during molecular self-assembly. We designed DNA double-crossover molecules to algorithmically self-assemble ribbon crystals that repeatedly copy a short bitstring, and we measured the error rate when each bit is encoded by 1 molecule, or redundantly encoded by 2, 3, or 4 molecules. Under our experimental conditions, each additional level of redundancy decreases the bitwise error rate by a factor of roughly 3, with the 4-redundant encoding yielding an error rate less than 0.1%. While theory and simulation predict that larger improvements in error rates are possible, our results already suggest that by using sufficient redundancy it may be possible to algorithmically self-assemble micrometer-sized objects with programmable, nanometer-scale features.
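As a toy check on the exponential claim, a majority-vote redundancy model shows the error rate dropping roughly geometrically with each added copy. This is not the paper's proofreading-tile mechanism, just the analogous calculation for independent copies:

```python
from math import comb

def bitwise_error(p, r):
    """P(a majority of r independent copies are wrong), given a
    per-copy error rate p - a toy model of r-fold redundancy."""
    return sum(comb(r, k) * p**k * (1 - p) ** (r - k)
               for k in range((r // 2) + 1, r + 1))

# per-copy error rate of 3%: each extra pair of copies cuts the
# effective bitwise error rate by roughly an order of magnitude
rates = {r: bitwise_error(0.03, r) for r in (1, 3, 5)}
```

In the self-assembly experiment the copies are not independent (errors propagate along the growing crystal), which is why the measured improvement per level of redundancy (roughly 3x) is smaller than this idealized model predicts.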
A Genetic Algorithm for Learning Significant Phrase Patterns in Radiology Reports
Patton, Robert M; Potok, Thomas E; Beckerman, Barbara G; Treadwell, Jim N
2009-01-01
Radiologists disagree with each other over the characteristics and features of what constitutes a normal mammogram and the terminology to use in the associated radiology report. Recently, the focus has been on classifying abnormal or suspicious reports, but even this process needs further layers of clustering and gradation, so that individual lesions can be more effectively classified. Using a genetic algorithm, the approach described here successfully learns phrase patterns for two distinct classes of radiology reports (normal and abnormal). These patterns can then be used as a basis for automatically analyzing, categorizing, clustering, or retrieving relevant radiology reports for the user.
Code of Federal Regulations, 2010 CFR
2010-04-01
... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... significantly reducing the rate of future benefit accrual. The following questions and answers concern the... a plan amendment of an applicable pension plan that significantly reduces the rate of future...
Code of Federal Regulations, 2014 CFR
2014-04-01
... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... significantly reducing the rate of future benefit accrual. The following questions and answers concern the... a plan amendment of an applicable pension plan that significantly reduces the rate of future...
Long-term stable polymer solar cells with significantly reduced burn-in loss.
Kong, Jaemin; Song, Suhee; Yoo, Minji; Lee, Ga Young; Kwon, Obum; Park, Jin Kuen; Back, Hyungcheol; Kim, Geunjin; Lee, Seoung Ho; Suh, Hongsuk; Lee, Kwanghee
2014-01-01
The inferior long-term stability of polymer-based solar cells needs to be overcome for their commercialization to be viable. In particular, an abrupt decrease in performance during initial device operation, the so-called 'burn-in' loss, has been a major contributor to the short lifetime of polymer solar cells, fundamentally impeding polymer-based photovoltaic technology. In this study, we demonstrate polymer solar cells with significantly improved lifetime, in which an initial burn-in loss is substantially reduced. By isolating trap-embedded components from pristine photoactive polymers based on the unimodality of molecular weight distributions, we are able to selectively extract a trap-free, high-molecular-weight component. The resulting polymer component exhibits enhanced power conversion efficiency and long-term stability without abrupt initial burn-in degradation. Our discovery suggests a promising possibility for commercial viability of polymer-based photovoltaics towards real solar cell applications.
Huffman, Gerald P.
2012-11-13
A new method of producing liquid transportation fuels from coal and other hydrocarbons that significantly reduces carbon dioxide emissions by combining Fischer-Tropsch synthesis with catalytic dehydrogenation is claimed. Catalytic dehydrogenation (CDH) of the gaseous products (C1-C4) of Fischer-Tropsch synthesis (FTS) can produce large quantities of hydrogen while converting the carbon to multi-walled carbon nanotubes (MWCNT). Incorporation of CDH into a FTS-CDH plant converting coal to liquid fuels can eliminate all or most of the CO2 emissions from the water-gas shift (WGS) reaction that is currently used to elevate the H2 level of coal-derived syngas for FTS. Additionally, the FTS-CDH process saves large amounts of water used by the WGS reaction and produces a valuable by-product, MWCNT.
Lichter, David I.; Di Bacco, Alessandra; Blakemore, Stephen J.; Berger, Allison; Koenig, Erik; Bernard, Hugues; Trepicchio, William; Li, Bin; Neuwirth, Rachel; Chattopadhyay, Nibedita; Bolen, Joseph B.; Dorner, Andrew J.; van de Velde, Helgi; Ricci, Deborah; Jagannath, Sundar; Berenson, James R.; Richardson, Paul G.; Stadtmauer, Edward A.; Orlowski, Robert Z.; Lonial, Sagar; Anderson, Kenneth C.; Sonneveld, Pieter; San Miguel, Jesús F.; Esseltine, Dixie-Lee; Schu, Matthew
2014-01-01
Various translocations and mutations have been identified in myeloma, and certain aberrations, such as t(4;14) and del17, are linked with disease prognosis. To investigate mutational prevalence in myeloma and associations between mutations and patient outcomes, we tested a panel of 41 known oncogenes and tumor suppressor genes in tumor samples from 133 relapsed myeloma patients participating in phase 2 or 3 clinical trials of bortezomib. DNA mutations were identified in 14 genes. BRAF as well as RAS genes were mutated in a large proportion of cases (45.9%) and these mutations were mutually exclusive. New recurrent mutations were also identified, including in the PDGFRA and JAK3 genes. NRAS mutations were associated with a significantly lower response rate to single-agent bortezomib (7% vs 53% in patients with mutant vs wild-type NRAS, P = .00116, Bonferroni-corrected P = .016), as well as shorter time to progression in bortezomib-treated patients (P = .0058, Bonferroni-corrected P = .012). However, NRAS mutation did not impact outcome in patients treated with high-dose dexamethasone. KRAS mutation did not reduce sensitivity to bortezomib or dexamethasone. These findings identify a significant clinical impact of NRAS mutation in myeloma and demonstrate a clear example of functional differences between the KRAS and NRAS oncogenes. PMID:24335104
Nitrogen and phosphorous limitations significantly reduce future allowable CO2 emissions
NASA Astrophysics Data System (ADS)
Zhang, Qian; Wang, Ying-Ping; Matear, Richard; Pitman, Andy; Dai, Yongjiu
2014-05-01
Earth System Models (ESMs) can be used to diagnose the emissions of CO2 allowed in order to follow the representative concentration pathways (RCPs) that are consistent with different climate scenarios. By mass balance, the allowable emission is calculated as the sum of the changes in atmospheric CO2, land and ocean carbon pools. Only two ESMs used in the fifth assessment (AR5) of the Intergovernmental Panel on Climate Change (IPCC) include nitrogen (N) limitation, and none include phosphorus (P) limitation. In this study we quantified the effects of N and P limitations on the allowable emissions using an ESM simulating land and ocean CO2 exchanges with the atmosphere in RCPs used for IPCC AR5. The model can run with the carbon cycle alone (C only), carbon and nitrogen (CN) or carbon, nitrogen and phosphorus (CNP) cycles as its land configurations. We used the simulated land and ocean carbon accumulation rates from 1850 to 2100 to diagnose the allowable emissions for each of three simulations (C only, CN or CNP). These were then compared with the emissions estimated by the Integrated Assessment Models (IAMs) used to generate RCP2.6 and RCP8.5. N and P limitations on land in our ESM led to systematically lower land carbon uptake, and thus reduced allowable emissions by 69 Pg C (21%) for RCP2.6, and by 250 Pg C (13%) for RCP8.5 from 2006 to 2100. Our results demonstrate that including N and P limitations requires a greater reduction in human CO2 emissions than assumed in the IAMs used to generate the RCPs. Reference: Zhang, Q., Y. P. Wang, R. J. Matear, A. J. Pitman, and Y. J. Dai (2014), Nitrogen and phosphorous limitations significantly reduce future allowable CO2 emissions, Geophys. Res. Lett., 41, doi:10.1002/2013GL058352.
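The mass balance used to diagnose allowable emissions is just a sum of carbon-pool changes. The pool values below are placeholders, not model output; they are chosen only so the C-only versus CNP gap matches the roughly 250 Pg C reduction the abstract reports for RCP8.5 (units: Pg C over 2006-2100):

```python
# Allowable emissions = change in atmospheric CO2 + land uptake + ocean uptake.
def allowable_emissions(d_atmos, d_land, d_ocean):
    return d_atmos + d_land + d_ocean

# Illustrative pool changes: nutrient limitation lowers land uptake only,
# since the atmospheric pathway is prescribed by the RCP.
e_c_only = allowable_emissions(d_atmos=1200.0, d_land=400.0, d_ocean=330.0)
e_cnp    = allowable_emissions(d_atmos=1200.0, d_land=150.0, d_ocean=330.0)
reduction = e_c_only - e_cnp
```

Because the atmospheric and ocean terms are unchanged between configurations, the entire reduction in allowable emissions traces back to the weaker land sink under N and P limitation.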
Sulakvelidze, Alexander
2013-10-01
Bacteriophages (also called 'phages') are viruses that kill bacteria. They are arguably the oldest (3 billion years old, by some estimates) and most ubiquitous (total number estimated at 10^30 to 10^32) known organisms on Earth. Phages play a key role in maintaining microbial balance in every ecosystem where bacteria exist, and they are part of the normal microflora of all fresh, unprocessed foods. Interest in various practical applications of bacteriophages has been gaining momentum recently, with perhaps the most attention focused on using them to improve food safety. That approach, called 'phage biocontrol', typically includes three main types of applications: (i) using phages to treat domesticated livestock in order to reduce their intestinal colonization with, and shedding of, specific bacterial pathogens; (ii) treatments for decontaminating inanimate surfaces in food-processing facilities and other food establishments, so that foods processed on those surfaces are not cross-contaminated with the targeted pathogens; and (iii) post-harvest treatments involving direct applications of phages onto the harvested foods. This mini-review primarily focuses on the last type of intervention, which has been gaining the most momentum recently. Indeed, the results of recent studies dealing with improving food safety, and several recent regulatory approvals of various commercial phage preparations developed for post-harvest food safety applications, strongly support the idea that lytic phages may provide a safe, environmentally-friendly, and effective approach for significantly reducing contamination of various foods with foodborne bacterial pathogens. However, some important technical and nontechnical problems may need to be addressed before phage biocontrol protocols can become an integral part of routine food safety intervention strategies implemented by food industries in the USA.
Rae, Caroline D.; Davidson, Joanne E.; Maher, Anthony D.; Rowlands, Benjamin D.; Kashem, Mohammed A.; Nasrallah, Fatima A.; Rallapalli, Sundari K.; Cook, James M.; Balcar, Vladimir J.
2014-01-01
Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we used an indirect approach, measuring the effect of alcohol on metabolism of [3-13C]pyruvate in adult guinea pig brain cortical tissue slices and comparing the outcomes to those from a library of ligands active in the GABAergic system, as well as studying the metabolic fate of [1,2-13C]ethanol. Ethanol (10, 30 and 60 mM) significantly reduced metabolic flux into all measured isotopomers and reduced all metabolic pool sizes. The metabolic profiles of these three concentrations of ethanol were similar and clustered with that of the α4β3δ positive allosteric modulator DS2 (4-Chloro-N-[2-(2-thienyl)imidazo[1,2a]-pyridin-3-yl]benzamide). Ethanol at a very low concentration (0.1 mM) produced a metabolic profile which clustered with those from inhibitors of GABA uptake and ligands showing affinity for α5-containing and, to a lesser extent, α1-containing GABA(A)R. There was no measurable metabolism of [1,2-13C]ethanol, with no significant incorporation of 13C from [1,2-13C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite pool sizes similar to those seen with unlabeled ethanol. The reduction in metabolism seen in the presence of ethanol is therefore likely to be due to its actions at neurotransmitter receptors, particularly α4β3δ receptors, and not because ethanol is substituting as a substrate or because of the effects of the ethanol catabolites acetaldehyde or acetate. We suggest that the stimulatory effects of very low concentrations of ethanol are due to release of GABA via GAT1 and the subsequent interaction of this GABA with local α5-containing and, to a lesser extent, α1-containing GABA(A)R. PMID:24313287
Guo, Li; Xu, Yan; Xu, Zhengfu; Jiang, Jingfeng
2015-10-01
Obtaining accurate ultrasonically estimated displacements along both the axial (parallel to the acoustic beam) and lateral (perpendicular to the beam) directions is an important task for various clinical elastography applications (e.g., modulus reconstruction and temperature imaging). In this study, a partial differential equation (PDE)-based regularization algorithm was proposed to enhance motion tracking accuracy. More specifically, the proposed PDE-based algorithm, utilizing two-dimensional (2D) displacement estimates from a conventional elastography system, iteratively reduces the noise contained in the original displacement estimates by mathematical regularization. In this study, tissue incompressibility was the physical constraint used by this mathematical regularization. The proposed algorithm was tested using computer-simulated data, a tissue-mimicking phantom, and in vivo breast lesion data. Computer simulation results demonstrated that the method significantly improved the accuracy of lateral tracking (e.g., by a factor of 17 at 0.5% compression). From the in vivo breast lesion data investigated, we found that higher-quality axial and lateral strain images were obtained compared with the conventional method (e.g., at least 78% improvement in the estimated contrast-to-noise ratios of lateral strain images). Our initial results demonstrate that this conceptually and computationally simple method could be useful for improving the image quality of ultrasound elastography with current clinical equipment as a post-processing tool.
Hira, Zena M; Trigeorgis, George; Gillies, Duncan F
2014-01-01
Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise, and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA), which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher-dimensional space onto a lower-dimensional one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed, the raw microarray data are projected onto it and clustering and classification can take place. In contrast to earlier fusion-based methods, the prior knowledge from the KEGG database is not used in, and does not bias, the classification process; it merely acts as an aid to find the best space in which to search the data. In our experiments we found that using our new manifold method gives better classification results than using either PCA or conventional Isomap. PMID:24595155
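For reference, the PCA baseline the manifold methods are compared against can be written in a few lines of NumPy; the toy "expression matrix" below (20 samples, 100 genes, two classes) is invented for illustration:

```python
import numpy as np

def pca_project(X, k=2):
    """Center the rows of X and project them onto the top-k principal
    components, computed via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 10)                       # two hidden classes
X = rng.standard_normal((20, 100)) + 3.0 * labels[:, None]
Z = pca_project(X, k=2)
print(Z.shape)  # (20, 2)
```

With a class shift this strong, the first principal component separates the two groups cleanly; noisier, non-linear structure is where Isomap-style methods are argued to help.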
Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1991-01-01
The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to the stream of arriving aircraft: first-come-first-served (FCFS) sequencing establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain, containing a mix of heavy and large aircraft. Because spacing requirements differ for different types of aircraft trailing each other, traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
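A toy version of the FCFS scheduling rule described above; the separation values are illustrative placeholders, not real wake-separation standards:

```python
SEP = {  # seconds required when the second aircraft trails the first
    ('heavy', 'heavy'): 90, ('heavy', 'large'): 120,
    ('large', 'heavy'): 60, ('large', 'large'): 70,
}

def fcfs_schedule(arrivals):
    """arrivals: (eta_seconds, weight_class) pairs; returns landing times in
    FCFS order: each aircraft lands at its ETA or, if that violates the
    required separation behind its predecessor, as soon as separation allows."""
    order = sorted(arrivals)                    # first come, first served
    times, leader = [], None
    for eta, wclass in order:
        t = eta if leader is None else max(eta, times[-1] + SEP[(leader, wclass)])
        times.append(t)
        leader = wclass
    return times

times = fcfs_schedule([(0, 'heavy'), (30, 'large'), (40, 'large')])
print(times)  # [0, 120, 190]
```

Note how the large aircraft trailing the heavy absorbs a long separation; swapping aircraft types within tightly spaced groups is exactly the lever the reordering heuristic exploits.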
Magnitude and significance of the higher-order reduced density matrix cumulants
NASA Astrophysics Data System (ADS)
Herbert, John M.
Using full configuration interaction wave functions for Be and LiH, in both minimal and extended basis sets, we examine the absolute magnitude and energetic significance of various contributions to the three-electron reduced density matrix (3-RDM) and its connected (size-consistent) component, the 3-RDM cumulant (3-RDMC). Minimal basis sets are shown to suppress the magnitude of the 3-RDMC in an artificial manner, whereas in extended basis sets, 3-RDMC matrix elements are often comparable in magnitude to the corresponding 3-RDM elements, even in cases where this result is not required by spin angular momentum coupling. Formal considerations suggest that these observations should generalize to higher-order p-RDMs and p-RDMCs (p > 3). This result is discussed within the context of electronic structure methods based on the contracted Schrödinger equation (CSE), as solution of the CSE relies on 3- and 4-RDM "reconstruction functionals" that neglect the 3-RDMC, the 4-RDMC, or both. Although the 3-RDMC is responsible for at most 0.2% of the total electronic energy in Be and LiH, it accounts for up to 70% of the correlation energy, raising questions regarding whether (and how) the CSE can offer a useful computational methodology.
Adolescents under rocket fire: when are coping resources significant in reducing emotional distress?
Sagy, Shifra; Braun-Lewensohn, Orna
2009-12-01
Stress reactions and coping resources of adolescents in chronic and acute situations evoked by missile fire were examined. Data were gathered during August 2006 (Second Lebanon War) on a sample of 303 Israeli adolescents living in Northern Israel (acute state) and 114 youths from Sderot and the Negev, an area which had been exposed to frequent rocket attacks over the preceding seven years (chronic state). State anxiety and psychological distress were measured as stress reactions. Sense of coherence, family sense of coherence, sense of community and level of exposure were investigated as potential explanatory factors in reducing emotional distress. The overall magnitude of variance explained differed between the two states: a relatively high proportion of the variance in stress reactions was explained in the chronic stress situation, but not in the acute state. These data support the value of developing a model that differentiates between stress situations, with the aim of understanding the patterns of resources that are significant in moderating stress reactions in each state.
Upton, L. M.; Brock, P. M.; Churcher, T. S.; Ghani, A. C.; Gething, P. W.; Delves, M. J.; Sala, K. A.; Leroy, D.; Sinden, R. E.
2014-01-01
To achieve malaria elimination, we must employ interventions that reduce the exposure of human populations to infectious mosquitoes. To this end, numerous antimalarial drugs are under assessment in a variety of transmission-blocking assays, which fail to measure the single crucial criterion of a successful intervention, namely its impact on case incidence within a vertebrate population (reduction in reproductive number/effect size). Consequently, any reduction in new infections due to drug treatment (and how this may be influenced by differing transmission settings) is not currently examined, limiting the translation of any findings. We describe the use of a laboratory population model to assess how individual antimalarial drugs can impact the number of secondary Plasmodium berghei infections over a cycle of transmission. We examine the impact of multiple clinical and preclinical drugs on both insect and vertebrate populations at multiple transmission settings. Both primaquine (>6 mg/kg of body weight) and NITD609 (8.1 mg/kg) have significant impacts across multiple transmission settings, whereas artemether and lumefantrine (57 and 11.8 mg/kg), OZ439 (6.5 mg/kg), and primaquine (<1.25 mg/kg) demonstrated potent efficacy only at lower-transmission settings. While directly demonstrating the impact of antimalarial drug treatment on vertebrate populations, we additionally calculate the effect size for each treatment, allowing head-to-head comparison of the potential impact of individual drugs within epidemiologically relevant settings, supporting their usage within elimination campaigns. PMID:25385107
Sulfide-driven autotrophic denitrification significantly reduces N2O emissions.
Yang, Weiming; Zhao, Qing; Lu, Hui; Ding, Zhi; Meng, Liao; Chen, Guang-Hao
2016-03-01
The Sulfate reduction-Autotrophic denitrification-Nitrification Integrated (SANI) process builds on anaerobic carbon conversion through biological sulfate reduction and on autotrophic denitrification driven by the sulfide byproduct of that reaction. This study confirmed additional decreases in N2O emissions from sulfide-driven autotrophic denitrification by investigating N2O reduction, accumulation, and emission at different sulfide/nitrate (S/N) mass ratios at pH 7 in a long-term laboratory-scale granular sludge autotrophic denitrification reactor. The N2O reduction rate was linearly proportional to the sulfide concentration, which confirmed that no sulfide inhibition of N2O reductase occurred. At S/N = 5.0 g-S/g-N, the rate achieved by sulfide-driven autotrophic denitrifying granular sludge (average granule size = 701 μm) was 27.7 mg-N/g-VSS/h (i.e., 2 and 4 times greater than the rates at 2.5 and 0.8 g-S/g-N, respectively). Sulfide thus stimulates rather than inhibits N2O reduction, regardless of the granule size of the sulfide-driven autotrophic denitrifying sludge. The accumulations of N2O, nitrite and free nitrous acid (FNA) with the 701 μm granular sludge at S/N = 5.0 g-S/g-N were 4.7%, 11.4% and 4.2% relative to those at 3.0 g-S/g-N, respectively. The accumulation of FNA can inhibit N2O reduction and increase N2O accumulation during sulfide-driven autotrophic denitrification. In addition, the N2O gas emission level from the reactor increased significantly, from 14.1 ± 0.5 ppmv (0.002% of the N load) to 3707.4 ± 36.7 ppmv (0.405% of the N load), as the S/N mass ratio in the influent decreased from 2.1 to 1.4 g-S/g-N over the course of the 120-day continuous monitoring period. Sulfide-driven autotrophic denitrification may therefore significantly reduce greenhouse gas emissions from biological nutrient removal when sulfur conversion processes are applied. PMID
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
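A minimal sketch of the POD step, assuming the standard snapshot-SVD construction (the state dimension and snapshot data below are invented for illustration):

```python
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_states, n_snapshots) matrix; returns an (n_states, r)
    orthonormal basis spanning the dominant snapshot directions."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(2)
modes = rng.standard_normal((500, 3))          # 3 underlying spatial patterns
snaps = modes @ rng.standard_normal((3, 40))   # 40 rank-3 model snapshots
Phi = pod_basis(snaps, r=3)

x = snaps[:, 0]
x_red = Phi.T @ x        # 3 reduced coordinates instead of 500 states
x_back = Phi @ x_red     # lift back to the full space
print(np.allclose(x, x_back))  # True: rank-3 snapshots are captured exactly
```

Every GA fitness evaluation can then work with the r-dimensional coordinates instead of the full model, which is the source of the speedup reported above.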
Collaborative localization algorithms for wireless sensor networks with reduced localization error.
Sahoo, Prasan Kumar; Hwang, I-Shyan
2011-01-01
Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find the possible location information of normal nodes in a collaborative manner for an outdoor environment, with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region.
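A sketch of how three collaborating beacons can resolve a node's 2D position from range estimates, assuming ideal distances; the paper's actual algorithms also involve anchor nodes and an error analysis that are omitted here:

```python
def trilaterate(beacons, dists):
    """2D position from three beacons: subtracting the first range equation
    from the other two turns the circle intersection into a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21      # nonzero iff the beacons are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A node at (3, 4) ranged from beacons at the corners of a 10 m grid:
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))  # ≈ (3.0, 4.0)
```

With noisy ranges the system is solved in a least-squares sense instead, which is where the probabilistic error analysis comes in.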
Montgomery, Christopher J.; Yang, Chongguan; Parkinson, Alan R.; Chen, J.-Y.
2006-01-01
A genetic optimization algorithm has been applied to the selection of quasi-steady-state (QSS) species in reduced chemical kinetic mechanisms. The algorithm seeks to minimize the error between reduced and detailed chemistry for simple reactor calculations approximating conditions of interest for a computational fluid dynamics simulation. The genetic algorithm does not guarantee that the global optimum will be found, but much greater accuracy can be obtained than by choosing QSS species through a simple kinetic criterion or by human trial and error. The algorithm is demonstrated for methane-air combustion over a range of temperatures and stoichiometries and for homogeneous charge compression ignition engine combustion. The results are in excellent agreement with those predicted by the baseline mechanism. A factor of two reduction in the number of species was obtained for a skeletal mechanism that had already been greatly reduced from the parent detailed mechanism.
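A toy genetic algorithm in the same spirit: each bitstring marks which species are treated as quasi-steady-state, and fitness is a stand-in error function (the real algorithm scores candidates by comparing reduced against detailed chemistry in reactor calculations):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical "best" QSS selection

def error(bits):
    """Stand-in fitness: mismatches against the known-good selection."""
    return sum(b != t for b, t in zip(bits, TARGET))

def ga(pop_size=30, generations=60, p_mut=0.05, seed=3):
    rng = random.Random(seed)
    n = len(TARGET)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)                    # rank by fitness
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)        # two elite parents
            cut = rng.randrange(1, n)          # one-point crossover
            children.append([bit ^ (rng.random() < p_mut)   # then mutate
                             for bit in a[:cut] + b[cut:]])
        pop = elite + children
    return min(pop, key=error)

best = ga()
print(error(best))   # typically 0 on this easy 10-bit problem
```

As the abstract notes, such a search carries no global-optimality guarantee, but selection pressure plus mutation usually beats single-criterion or trial-and-error QSS choices.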
Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M.; Daniell, Tim J.
2012-01-01
The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of the nitrate reducing organisms responsible for the processes. Understanding of many of these controls on flux through the nitrogen cycle in soil systems is increasing. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and to the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small, sub-centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N2O production from soils. PMID:23264770
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse, spatially and temporally scattered biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the USA (Upper Illinois River Basin and the Central Colorado Assessment Project Area) and in New Zealand (Southland region).
Stevens, Andrew J.; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D.
2014-02-11
The use of high resolution imaging methods in the scanning transmission electron microscope (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example in the study of organic systems, in tomography and during in-situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high resolution STEM images. These experiments successively reduce the number of pixels in the image (thereby reducing the overall dose while maintaining the high resolution information) and show promising results for reconstructing images from this reduced set of randomly collected measurements. We show that this approach is valid for both atomic resolution images and nanometer resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these post acquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or alignment of the microscope itself.
ERIC Educational Resources Information Center
Mabel, Sanford
The program was set up to involve, on a continuing basis, the significant other of frequently-readmitted hospitalized psychiatric VA patients. The couples identified their characteristic strengths, and their maladaptive ways of functioning, and were expected to make use of alternative ways of behaving which were recommended by the staff. A…
Kinsman, J.D.
1998-12-31
Electric utility SO2 and NOx emissions have been reduced tremendously, beginning before the first deadlines (1995 for SO2 and 1996 for NOx) of the 1990 Clean Air Act Amendments. For the Acid Rain Program, EPA reports that: (1) all 445 affected facilities demonstrated 100 percent compliance for both pollutants and even exceeded the compliance targets; (2) the Acid Rain Program has been very successful; and (3) due to these and other controls, air quality has improved in the United States. Furthermore, the new 8-hour ozone standard, the new PM2.5 standards, the EPA's 22-state regional NOx program, the Northeast state petitions for upwind NOx reductions and EPA's regional haze proposal will likely lead to substantially greater reductions of utility SO2 and NOx.
Does maintaining a bottle of adhesive without the lid significantly reduce the solvent content?
Santana, Márcia Luciana Carregosa; Sousa Júnior, José Aginaldo de; Leal, Pollyana Caldeira; Faria-e-Silva, André Luis
2014-01-01
This study aimed to evaluate the effect of maintaining a bottle of adhesive without its lid on the solvent loss of etch-and-rinse adhesive systems. Three 2-step etch-and-rinse adhesives with different solvents (acetone, ethanol or butanol) were used in this study. Drops of each adhesive were placed on an analytical balance and the adhesive mass was recorded until equilibrium was achieved (no significant mass alteration over time). The solvent content of each adhesive and the evaporation rate of the solvents were measured (n=3). Two bottles of each adhesive were weighed. The bottles were kept without their lids for 8 h in an oven at 37 °C, after which the mass loss was measured. Based on the mass alteration of the drops, the acetone-based adhesive showed the highest solvent content (46.5%, CI 95%: 35.8-54.7) and evaporation rate (1.11 %/s, CI 95%: 0.63-1.60), whereas the ethanol-based adhesive had the lowest values (10.1%, CI 95%: 4.3-16.0; 0.03 %/s, CI 95%: 0.01-0.05). However, none of the adhesive bottles exhibited significant mass loss after sitting for 8 h without their lids (% of initial content: acetone - 96.5, CI 95%: 91.8-101.5; ethanol - 99.4, CI 95%: 98.4-100.4; and butanol - 99.3, CI 95%: 98.1-100.5). In conclusion, maintaining an adhesive bottle without its lid did not induce significant solvent loss, irrespective of the solvent concentration and evaporation rate. PMID:25590203
Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu; Tian, Suyan
2016-01-01
Among non-small cell lung cancers (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is generally regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The small overlap between the two resulting gene signatures further illustrates that AC and SCC are distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945
Senska, Götz; Schröder, Hilal; Pütter, Carolin; Dost, Philipp
2012-01-01
Background: Tonsillectomy is one of the most frequently performed surgical procedures. Given the comparatively frequent postsurgical bleeding associated with this procedure, particular attention has been paid to reducing the postoperative bleeding rate. In 2006, we introduced routine suturing of the faucial pillars at our clinic to reduce postoperative haemorrhage. Methods: Two groups from the years 2003-2005 (n = 1000) and 2007-2009 (n = 1000) were compared. We included all patients who had an elective tonsillectomy due to a benign, non-acute inflammatory tonsil illness. In the years 2007-2009, we additionally sutured the faucial pillars after completing haemostasis. For primary haemostasis we used suture ligation and bipolar diathermy. Results: The rate of bleeding requiring a second surgery for haemostasis was 3.6% in 2003-2005 but only 2.0% in 2007-2009 (absolute risk reduction 1.6%, 95% CI 0.22%-2.45%, p = 0.04). The median surgery time, including adenoidectomy and paracentesis, increased from 25 to 31 minutes (p<0.01). Conclusions: We have been able to substantiate that suturing of the faucial pillars nearly halves the rate of postoperative haemorrhage. Surgery takes 8 minutes longer on average. Bleeding occurs later, mostly after 24 h. The limitations of this study relate to its retrospective character and all the potential biases related to observational studies. PMID:23118902
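The headline numbers can be checked arithmetically, assuming the reported percentages correspond to 36/1000 and 20/1000 events. The sketch below uses a Wald interval for the risk difference, which is symmetric and so differs somewhat from the (likely score- or exact-method) confidence interval reported above:

```python
from math import sqrt

def arr_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Absolute risk reduction with a Wald 95% confidence interval."""
    p_a, p_b = events_a / n_a, events_b / n_b
    arr = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return arr, arr - z * se, arr + z * se

# 3.6% vs 2.0% rebleeding in two groups of n = 1000 each:
arr, lo, hi = arr_ci(36, 1000, 20, 1000)
print(round(arr * 100, 1))  # 1.6
```

The lower bound stays above zero, consistent with the reported p = 0.04 significance at the 5% level.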
Stevens, Andrew; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D
2014-02-01
The use of high-resolution imaging methods in scanning transmission electron microscopy (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example, in the study of organic systems, in tomography and during in situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high-resolution STEM images. These computational algorithms have been applied to a set of images with a reduced number of sampled pixels in the image. For a reduction in the number of pixels down to 5% of the original image, the algorithms can recover the original image from the reduced data set. We show that this approach is valid for both atomic-resolution images and nanometer-resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these postacquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or the alignment of the microscope itself.
Nale, Janet Y.; Spencer, Janice; Hargreaves, Katherine R.; Buckley, Anthony M.; Trzepiński, Przemysław
2015-01-01
The microbiome dysbiosis caused by antibiotic treatment has been associated with both susceptibility to and relapse of Clostridium difficile infection (CDI). Bacteriophage (phage) therapy offers target specificity and dose amplification in situ, but few studies have focused on its use in CDI treatment. This mainly reflects the lack of strictly virulent phages that target this pathogen. While it is widely accepted that temperate phages are unsuitable for therapeutic purposes due to their transduction potential, analysis of seven C. difficile phages confirmed that this impact could be curtailed by the application of multiple phage types. Here, host range analysis of six myoviruses and one siphovirus was conducted on 80 strains representing 21 major epidemic and clinically severe ribotypes. The phages had complementary coverage, lysing 18 and 62 of the ribotypes and strains tested, respectively. Single-phage treatments of ribotype 076, 014/020, and 027 strains showed an initial reduction in the bacterial load followed by the emergence of phage-resistant colonies. However, these colonies remained susceptible to infection with an unrelated phage. In contrast, specific phage combinations caused the complete lysis of C. difficile in vitro and prevented the appearance of resistant/lysogenic clones. Using a hamster model, the oral delivery of optimized phage combinations resulted in reduced C. difficile colonization at 36 h postinfection. Interestingly, free phages were recovered from the bowel at this time. In a challenge model of the disease, phage treatment delayed the onset of symptoms by 33 h compared to the time of onset of symptoms in untreated animals. These data demonstrate the therapeutic potential of phage combinations to treat CDI. PMID:26643348
Cleanroom Maintenance Significantly Reduces Abundance but Not Diversity of Indoor Microbiomes
Mahnert, Alexander; Vaishampayan, Parag; Probst, Alexander J.; Auerbach, Anna; Moissl-Eichinger, Christine; Venkateswaran, Kasthuri; Berg, Gabriele
2015-01-01
Cleanrooms have been considered microbially reduced environments and are used to protect human health and industrial product assembly. However, recent analyses have deciphered a rather broad diversity of microbes in cleanrooms, whose origin as well as physiological status has not been fully understood. Here, we examined the input of intact microbial cells from a surrounding built environment into a spacecraft assembly cleanroom by applying a molecular viability assay based on propidium monoazide (PMA). The controlled cleanroom (CCR) was characterized by ~6.2 × 10³ 16S rRNA gene copies of intact bacterial cells per m² floor surface, which represented only 1% of the total community that could be captured via molecular assays without the viability marker. This was in contrast to the uncontrolled adjoining facility (UAF) that had 12 times more living bacteria. Regarding diversity measures retrieved from 16S rRNA Illumina-tag analyses, we observed, however, only a minor drop in the cleanroom facility, allowing the conclusion that the number but not the diversity of microbes is strongly affected by cleaning procedures. Network analyses allowed us to track a substantial input of living microbes to the cleanroom and a potential enrichment of survival specialists like bacterial spore formers and archaeal halophiles and mesophiles. Moreover, the cleanroom harbored a unique community including 11 exclusive genera, e.g., Haloferax and Sporosarcina, which are herein suggested as indicators of cleanroom environments. In sum, our findings provide evidence that archaea are alive in cleanrooms and that cleaning efforts and cleanroom maintenance substantially decrease the number but not the diversity of indoor microbiomes. PMID:26273838
Rifampicin and rifapentine significantly reduce concentrations of bedaquiline, a new anti-TB drug
Svensson, Elin M.; Murray, Stephen; Karlsson, Mats O.; Dooley, Kelly E.
2015-01-01
Objectives Bedaquiline is the first drug of a new class approved for the treatment of TB in decades. Bedaquiline is metabolized by cytochrome P450 (CYP) 3A4 to a less-active M2 metabolite. Its terminal half-life is extremely long (5–6 months), complicating evaluations of drug–drug interactions. Rifampicin and rifapentine, two anti-TB drugs now being optimized to shorten TB treatment duration, are potent inducers of CYP3A4. This analysis aimed to predict the effect of repeated doses of rifampicin or rifapentine on the steady-state pharmacokinetics of bedaquiline and its M2 metabolite from single-dose data using a model-based approach. Methods Pharmacokinetic data for bedaquiline and M2 were obtained from a Phase I study involving 32 individuals each receiving two doses of bedaquiline, alone or together with multiple-dose rifampicin or rifapentine. Sampling was performed over 14 days following each bedaquiline dose. Pharmacokinetic analyses were performed using non-linear mixed-effects modelling. Models were used to simulate potential dose adjustments. Results Rifamycin co-administration increased bedaquiline clearance substantially: 4.78-fold [relative standard error (RSE) 9.10%] with rifampicin and 3.96-fold (RSE 5.00%) with rifapentine. Induction of M2 clearance was equally strong. Average steady-state concentrations of bedaquiline and M2 are predicted to decrease by 79% and 75% when given with rifampicin or rifapentine, respectively. Simulations indicated that increasing the bedaquiline dosage to mitigate the interaction would yield elevated M2 concentrations during the first treatment weeks. Conclusions Rifamycin antibiotics reduce bedaquiline concentrations substantially. In line with current treatment guidelines for drug-susceptible TB, concomitant use is not recommended, even with dose adjustment. PMID:25535219
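The predicted steady-state decreases follow directly from the fitted clearance fold-changes, since average steady-state concentration scales inversely with clearance at a fixed dose (a back-of-the-envelope check, not the authors' full mixed-effects model):

```python
# Css,avg = F * Dose / (CL * tau), so a fold-increase f in clearance
# reduces the average steady-state concentration by a factor 1 - 1/f.
def css_drop_percent(fold_cl):
    return round((1 - 1 / fold_cl) * 100)

print(css_drop_percent(4.78))  # 79 (% decrease with rifampicin, as reported)
print(css_drop_percent(3.96))  # 75 (% decrease with rifapentine, as reported)
```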
Significantly reduced thermal diffusivity of free-standing two-layer graphene in graphene foam.
Lin, Huan; Xu, Shen; Wang, Xinwei; Mei, Ning
2013-10-18
We report on a thermal diffusivity study of suspended graphene foam (GF) using the transient electro-thermal technique. Our Raman study confirms the GF is composed of two-layer graphene. By measuring GF of different lengths, we are able to exclude the radiation effect. Using Schuetz's model, the intrinsic thermal diffusivity of the free-standing two-layer graphene is determined with a high accuracy without using knowledge of the porosity of the GF. The intrinsic thermal diffusivity of the two-layer graphene is determined at 1.16–2.22 × 10⁻⁴ m² s⁻¹. The corresponding intrinsic thermal conductivity is 182–349 W m⁻¹ K⁻¹, about one order of magnitude lower than those reported for single-layer graphene. Extensive surface impurity defects, wrinkles and rough edges are observed under a scanning electron microscope for the studied GF. These structural defects induce substantial phonon scattering and explain the observed significant thermal conductivity reduction. Our thermal diffusivity characterization of GF provides an advanced way to look into the thermal transport capacity of free-standing graphene with high accuracy and ease of experimental implementation. PMID:24060813
Significantly reduced thermal diffusivity of free-standing two-layer graphene in graphene foam
NASA Astrophysics Data System (ADS)
Lin, Huan; Xu, Shen; Wang, Xinwei; Mei, Ning
2013-10-01
We report on a thermal diffusivity study of suspended graphene foam (GF) using the transient electro-thermal technique. Our Raman study confirms the GF is composed of two-layer graphene. By measuring GF of different lengths, we are able to exclude the radiation effect. Using Schuetz's model, the intrinsic thermal diffusivity of the free-standing two-layer graphene is determined with a high accuracy without using knowledge of the porosity of the GF. The intrinsic thermal diffusivity of the two-layer graphene is determined at 1.16–2.22 × 10⁻⁴ m² s⁻¹. The corresponding intrinsic thermal conductivity is 182–349 W m⁻¹ K⁻¹, about one order of magnitude lower than those reported for single-layer graphene. Extensive surface impurity defects, wrinkles and rough edges are observed under a scanning electron microscope for the studied GF. These structural defects induce substantial phonon scattering and explain the observed significant thermal conductivity reduction. Our thermal diffusivity characterization of GF provides an advanced way to look into the thermal transport capacity of free-standing graphene with high accuracy and ease of experimental implementation.
Wang, Yongbo; Gao, Xiang; Pedram, Pardis; Shahidi, Mariam; Du, Jianling; Yi, Yanqing; Gulliver, Wayne; Zhang, Hongwei; Sun, Guang
2016-01-01
Selenium (Se) is a trace element which plays an important role in adipocyte hypertrophy and adipogenesis. Some studies suggest that variations in serum Se may be associated with obesity. However, there are few studies examining the relationship between dietary Se and obesity, and findings are inconsistent. We aimed to investigate the association between dietary Se intake and a panel of obesity measurements with systematic control of major confounding factors. A total of 3214 subjects participated in the study. Dietary Se intake was determined from the Willett food frequency questionnaire. Body composition was measured using dual-energy X-ray absorptiometry. Obese men and women had the lowest dietary Se intake, being 24% to 31% lower than corresponding normal weight men and women, classified by both BMI and body fat percentage. Moreover, subjects with the highest dietary Se intake had the lowest BMI, waist circumference, and trunk, android, gynoid and total body fat percentages, with a clear dose-dependent inverse relationship observed in both gender groups. Furthermore, significant negative associations discovered between dietary Se intake and obesity measurements were independent of age, total dietary calorie intake, physical activity, smoking, alcohol, medication, and menopausal status. Dietary Se intake alone may account for 9%–27% of the observed variations in body fat percentage. The findings from this study strongly suggest that high dietary Se intake is associated with a beneficial body composition profile. PMID:26742059
Ashrafian, Hutan; Toma, Tania; Harling, Leanne; Kerr, Karen; Athanasiou, Thanos; Darzi, Ara
2014-09-01
The global epidemic of obesity continues to escalate. Obesity accounts for an increasing proportion of the international socioeconomic burden of noncommunicable disease. Online social networking services provide an effective medium through which information may be exchanged between obese and overweight patients and their health care providers, potentially contributing to superior weight-loss outcomes. We performed a systematic review and meta-analysis to assess the role of these services in modifying body mass index (BMI). Our analysis of twelve studies found that interventions using social networking services produced a modest but significant 0.64 percent reduction in BMI from baseline for the 941 people who participated in the studies' interventions. We recommend that social networking services that target obesity should be the subject of further clinical trials. Additionally, we recommend that policy makers adopt reforms that promote the use of anti-obesity social networking services, facilitate multistakeholder partnerships in such services, and create a supportive environment to confront obesity and its associated noncommunicable diseases.
Colchicine Significantly Reduces Incident Cancer in Gout Male Patients: A 12-Year Cohort Study.
Kuo, Ming-Chun; Chang, Shun-Jen; Hsieh, Ming-Chia
2015-12-01
Patients with gout are more likely to develop most cancers than subjects without gout. Colchicine has been used for the treatment and prevention of gouty arthritis and has been reported to have an anticancer effect in vitro. However, to date no study has evaluated the relationship between colchicine use and incident cancers in patients with gout. This study enrolled male patients with gout identified in Taiwan's National Health Insurance Database for the years 1998 to 2011. Each gout patient was matched with 4 male controls by age and by month and year of first diagnosis, and was followed up until 2011. The study excluded those who were diagnosed with diabetes or any type of cancer within the year following enrollment. We calculated the hazard ratio (HR), age-adjusted standardized incidence ratio, and incidence per 1000 person-years to evaluate cancer risk. A total of 24,050 male patients with gout and 76,129 male nongout controls were included. Patients with gout had a higher rate of incident all-cause cancers than controls (6.68% vs 6.43%, P = 0.006). A total of 13,679 patients with gout were defined as having been ever-users of colchicine and 10,371 patients with gout were defined as being never-users of colchicine. Ever-users of colchicine had a significantly lower HR of incident all-cause cancers than never-users of colchicine after adjustment for age (HR = 0.85, 95% CI = 0.77-0.94; P = 0.001). In conclusion, colchicine use was associated with a decreased risk of incident all-cause cancers in male Taiwanese patients with gout.
Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David
2013-08-01
Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r⁴), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T). PMID:23927246
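The tensor hypercontraction format can be illustrated with a toy numpy sketch (sizes and factors here are arbitrary random data, not chemistry): contracting through the factors reproduces contraction with the full rank-4 tensor without ever building it, which is where the storage and scaling savings come from.

```python
import numpy as np

rng = np.random.default_rng(1)
r, naux = 6, 10                     # basis functions, auxiliary grid points (toy sizes)
X = rng.normal(size=(r, naux))      # collocation factors
Z = rng.normal(size=(naux, naux))
Z = 0.5 * (Z + Z.T)                 # symmetric coupling matrix

# THC form: V[p,q,r,s] = sum_{P,Q} X[p,P] X[q,P] Z[P,Q] X[r,Q] X[s,Q]
V_full = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

# Contract V with a matrix w two ways: via the full rank-4 tensor, and
# entirely through the low-rank factors (never forming V_full).
w = rng.normal(size=(r, r))
lhs = np.einsum('pqrs,rs->pq', V_full, w)       # O(r^4) object required
t = np.einsum('rQ,rs,sQ->Q', X, w, X)           # factored path, small intermediates
rhs = np.einsum('pP,qP,P->pq', X, X, Z @ t)
```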
Thyroid function appears to be significantly reduced in Space-borne MDS mice
NASA Astrophysics Data System (ADS)
Saverio Ambesi-Impiombato, Francesco; Curcio, Francesco; Fontanini, Elisabetta; Perrella, Giuseppina; Spelat, Renza; Zambito, Anna Maria; Damaskopoulou, Eleni; Peverini, Manola; Albi, Elisabetta
It is known that prolonged space flights induce changes in human cardiovascular, musculoskeletal and nervous systems whose function is regulated by the thyroid gland but, until now, no data were reported about thyroid damage during space missions. We have demonstrated in vitro that, during space missions (Italian Soyuz Mission "ENEIDE" in 2005, Shuttle STS-120 "ESPERIA" in 2007), thyroid cells cultured in vitro did not respond to thyroid stimulating hormone (TSH) treatment; they appeared healthy and alive, despite being in a pro-apoptotic state characterised by a variation of sphingomyelin metabolism and a consequent increase in ceramide content. The insensitivity to TSH was largely due to a rearrangement of specific cell membrane microdomains, acting as platforms for the TSH receptor (TEXUS-44 mission in 2008). To study whether these effects were also present in vivo, as part of the Mouse Drawer System (MDS) Tissue Sharing Program, we performed experiments in mice maintained onboard the International Space Station during the long-duration (90 days) exploration mission STS-129. After return to Earth, the thyroids isolated from the 3 animals were in part immediately frozen to study the morphological modification in space and in part immediately used to study the effect of TSH treatment. For this purpose small fragments of tissue were treated with 10⁻⁷ or 10⁻⁸ M TSH for 1 hour, using untreated fragments as controls. Then the fragments were fixed with absolute ethanol for 10 min at room temperature and centrifuged for 20 min at 3000 × g. The supernatants were used for cAMP analysis whereas the pellets were used for protein determination and for immunoblotting analysis of the TSH receptor, sphingomyelinase and sphingomyelin synthase. The results showed a modification of the thyroid structure, and the values of cAMP production after treatment with 10⁻⁷ M TSH for 1 hour were significantly lower than those obtained in Earth's gravity. The treatment with TSH
Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine
2016-01-01
Background Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ) but its influence on post-processing tools including maximal intensity projection (MIP) and minimal intensity projection (mIP) remains unknown. Purpose To evaluate the influence on IQ of MBIR on native, mIP, and MIP axial and coronal reformats of reduced dose computed tomography (RD-CT) chest acquisition. Material and Methods Raw data of 50 patients, who underwent a standard dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare different reconstructions of RD-CT with reference SD-CT. Results The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal view regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest rate on axial reformats, mainly distortions and stair-step artifacts. Conclusion The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique. PMID:27635253
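Slab MIP/mIP reformats of the kind evaluated here reduce, computationally, to a max or min over consecutive slices (the slab thickness in slices depends on slice spacing; this generic sketch is not the scanner vendor's implementation):

```python
import numpy as np

def slab_projection(volume, thickness, kind="max"):
    """Non-overlapping slab MIP ('max') or mIP ('min') along the slice axis."""
    op = np.max if kind == "max" else np.min
    n_slabs = volume.shape[0] // thickness
    slabs = volume[: n_slabs * thickness].reshape(
        n_slabs, thickness, *volume.shape[1:]
    )
    return op(slabs, axis=1)

vol = np.arange(32, dtype=float).reshape(8, 2, 2)    # 8 slices of 2x2 pixels
mip = slab_projection(vol, thickness=4, kind="max")  # 4-slice MIP slabs
mnp = slab_projection(vol, thickness=4, kind="min")  # 4-slice mIP slabs
```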
2014-01-01
Background IL-17A is a pro-inflammatory cytokine that is normally associated with autoimmune arthritis and other pro-inflammatory conditions. Recently, IL-17A has emerged as a critical factor in enhancing breast cancer (BC)-associated metastases. We generated immune competent arthritic mouse models that develop spontaneous BC-associated bone and lung metastasis. Using these models, we have previously shown that neutralization of IL-17A resulted in significant reduction in metastasis. However, the underlying mechanism/s remains unknown. Methods We have utilized two previously published mouse models for this study: 1) the pro-arthritic mouse model (designated SKG) injected with metastatic BC cell line (4T1) in the mammary fat pad, and 2) the PyV MT mice that develop spontaneous mammary gland tumors injected with type II collagen to induce autoimmune arthritis. Mice were treated with anti-IL-17A neutralizing antibody and monitored for metastasis and assessed for pro-inflammatory cytokines and chemokines associated with BC-associated metastasis. Results We first corroborate our previous finding that in vivo neutralization of IL-17A significantly reduced metastasis to the bones and lungs in both models. Next, we report that treatment with anti-IL17A antibody significantly reduced the expression of a key chemokine, CXCL12 (also known as stromal derived factor-1 (SDF - 1)) in the bones and lungs of treated mice. CXCL12 is a ligand for CXCR4 (expressed on BC cells) and their interaction is known to be critical for metastasis. Interestingly, levels of CXCR4 in the tumor remained unchanged with treatment. Consequently, protein lysates derived from the bones and lungs of treated mice were significantly less chemotactic for the BC cells than lysates from untreated mice; and addition of exogenous SDF-1 to the lysates from treated mice completely restored BC cell migration. In addition, cytokines such as IL-6 and M-CSF were significantly reduced in the lung and bone lysates
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
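The key point, that the boundary-element setting lets one evaluate a true error measure on the boundary where values are prescribed, can be caricatured in a few lines (toy harmonic test function on the unit circle with a hypothetical trial solution; this is not the paper's complex-variable formulation):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # boundary nodes
exact = np.cos(theta)          # prescribed boundary values of Re(z), harmonic
# hypothetical boundary-element trial solution evaluated on the boundary:
approx = np.cos(theta) + 0.05 * np.sin(3 * theta)

err = np.abs(approx - exact)
integrated = err.mean()         # integrated (mean) boundary error measure
worst = int(np.argmax(err))     # candidate node location for refinement
```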
Violante-Carvalho, Nelson
2005-12-01
Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch in the early 1990s of the first European Remote Sensing Satellite ERS-1 and its successors ERS-2 and ENVISAT, which has opened up many possibilities, especially for wave data assimilation purposes. The main aim of data assimilation is to improve forecasting by introducing available observations into the modeling procedures in order to minimize the differences between model estimates and measurements. However, there are limitations in the retrieval of the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck Institut (MPI) scheme, the first proposed and most widely used algorithm to retrieve directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI scheme, despite the fact that the model is used as first guess to the MPI method; that is, the retrieval degrades the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as that of the WAM model.
Angus, Simon D; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better-performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average 9.4% (maximum 16.5%) and 7.1% (13.3%) reduction in tumor cell count compared to the two benchmarks, respectively. Noticing that the top-performing protocols converged on temporally synchronized schedules, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means
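A genetic-algorithm search over inter-fraction timings can be sketched generically. The fitness below is a toy stand-in that rewards gaps near a hypothetical 17.5 h period, echoing the paper's finding; it is not the authors' tumor simulator.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 17.5                          # hypothetical resonant inter-fraction period (h)
def fitness(gaps):
    """Toy surrogate: higher is better; peaks when all gaps equal T."""
    return -np.sum((gaps - T) ** 2)

pop = rng.uniform(10, 23, size=(40, 5))   # 40 candidate protocols, 5 gaps each
for _ in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]            # elitist truncation
    kids = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.5, (20, 5))
    pop = np.vstack([parents, np.clip(kids, 10, 23)])       # keep gaps in 10-23 h

best = pop[np.argmax([fitness(g) for g in pop])]   # converges toward ~T-periodic gaps
```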
Sen, Soman; Johnston, Charles; Greenhalgh, David; Palmieri, Tina
2016-01-01
Ventilator-associated pneumonia (VAP) is a common cause of morbidity and mortality for critically ill burn patients. Prevention of VAP through bundled preventative measures may reduce the risk and incidence of VAP in burn patients. A retrospective chart review was performed of all mechanically ventilated adult (age ≥ 18 years) burn patients before and after VAP prevention bundle implementation. Data collected included age, TBSA, gender, diagnosis of inhalation injury, mechanism of injury, comorbid illnesses, length of mechanical ventilation, length of hospital stay, development of VAP, discharge disposition, and mortality. Burn patients with VAP had larger burn injuries (47.6 ± 22.2 vs 23.9 ± 23.01), more inhalation injuries (44.6% vs 27%), prolonged mechanical ventilation, and longer intensive care unit (ICU) and hospital stays. Mortality was also higher in burn patients who developed VAP (34% vs 19%). On multivariate regression analysis, TBSA and ventilator days were independent risk factors for VAP. In 2010, a VAP prevention bundle was implemented in the burn ICU and overseen by a nurse champion. Compliance with bundle implementation was more than 95%. By 2012, independent of age, TBSA, inhalation injury, ventilator days, ICU and hospital length of stay, VAP prevention bundles resulted in a significantly reduced risk of developing VAP (odds ratio of 0.15). Burn patients with an inhalation injury and a large burn injury are at increased risk of developing VAP. The incidence and risk of VAP can be significantly reduced in burn patients with VAP prevention bundles.
A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems
NASA Astrophysics Data System (ADS)
Chan, Tsun-Shan; Kuo, Jen-Chih; Wu, An-Yeu (Andy)
2002-12-01
The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique in the application of asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although DMT can achieve a higher data rate compared with other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems and even larger sizes in VDSL systems. The large block size results in a heavy computational load when running on programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm avoids complex-domain operations that are inevitable in conventional IFFT/FFT computation, and the resulting software function has lower computational complexity. We show that it requires only 17% of the multiplications of the Cooley-Tukey algorithm to compute the IFFT and FFT. Hence, the proposed fast algorithm is very suitable for firmware development in reducing the MIPS count in programmable DSPs.
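The kind of restructuring involved can be illustrated with the classic identity for real-input transforms: a length-n real FFT computed from one length-n/2 complex FFT, which roughly halves the complex-domain work. This is a generic textbook trick shown in plain Python, not the paper's specific derivation.

```python
import cmath

def fft(x):
    # Radix-2 Cooley-Tukey FFT (len(x) must be a power of two).
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def rfft_via_half(x):
    # Real-input FFT of length n via one complex FFT of length n/2:
    # pack consecutive real samples into complex pairs, then unpack.
    n = len(x)
    z = [complex(x[2 * k], x[2 * k + 1]) for k in range(n // 2)]
    Z = fft(z)
    Z.append(Z[0])                         # Z[n/2] = Z[0] by periodicity
    out = []
    for k in range(n // 2 + 1):
        zk, znk = Z[k], Z[n // 2 - k].conjugate()
        xe = 0.5 * (zk + znk)              # spectrum of even-indexed samples
        xo = -0.5j * (zk - znk)            # spectrum of odd-indexed samples
        out.append(xe + cmath.exp(-2j * cmath.pi * k / n) * xo)
    return out                             # bins 0..n/2 of the length-n DFT
```

For a DMT symbol, the IFFT side admits the mirror-image trick, since the frequency-domain input has Hermitian symmetry to produce a real time-domain signal.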
NASA Astrophysics Data System (ADS)
Inaniwa, Taku; Kanematsu, Nobuyuki; Furukawa, Takuji; Noda, Koji
2011-03-01
A 'patch-field' strategy is often used for tumors with large volumes exceeding the available field size in passive irradiations with ion beams. Range and setup errors can cause hot and cold spots at the field junction within the target. Such errors will also displace the field to miss the target periphery. With scanned ion beams with fluence modulation, the two junctional fields can be overlapped rather than patched, which may potentially reduce the sensitivity to these uncertainties. In this study, we have developed such a robust optimization algorithm. This algorithm is composed of the following two steps: (1) expanding the target volume with margins against the uncertainties, and (2) solving the inverse problem where the terms suppressing the dose gradient of individual fields are added into the objective function. The validity of this algorithm is demonstrated through simulation studies for two extreme cases of two fields with unidirectional and opposing geometries and for a prostate-cancer case. With the proposed algorithm, we can obtain a more robust plan with minimized influence of range and setup uncertainties than the conventional plan. Compared to conventional optimization, the calculation time for the robust optimization increased by a factor of approximately 3.
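A toy 1-D example illustrates why step (2)'s dose-gradient penalty buys robustness: with two junctional fields summing to a uniform target dose, a penalized objective prefers overlapped linear ramps over an abrupt patch junction, and the ramped plan also degrades less under a one-voxel setup shift. All geometry and numbers below are illustrative, not from the paper's planning system.

```python
# Toy 1-D target of 10 voxels; prescription: total dose 1.0 everywhere,
# delivered by two junctional fields f1 (from one side) and f2 (the other).
def objective(f1, f2, grad_weight=1.0):
    # Dose-error term plus a penalty on each INDIVIDUAL field's gradient,
    # mirroring step (2) of the algorithm described above.
    dose_term = sum((a + b - 1.0) ** 2 for a, b in zip(f1, f2))
    grad_term = sum(sum((f[i + 1] - f[i]) ** 2 for i in range(len(f) - 1))
                    for f in (f1, f2))
    return dose_term + grad_weight * grad_term

def shifted_dose_error(f1, f2, shift=1):
    # Setup error: field 2 slips by `shift` voxels relative to field 1.
    n = len(f1)
    f2s = [0.0] * shift + list(f2[:n - shift])
    return sum((a + b - 1.0) ** 2 for a, b in zip(f1, f2s))

n = 10
patch1 = [1.0] * 5 + [0.0] * 5          # abrupt junction ("patch")
patch2 = [0.0] * 5 + [1.0] * 5
ramp = [i / (n - 1) for i in range(n)]  # overlapped linear ramps
over1 = [1.0 - r for r in ramp]
over2 = ramp
```

Both plans deliver a perfectly uniform dose when alignment is exact; the penalty term is what distinguishes them, and it is precisely the smooth plan that stays closest to prescription once a setup shift is introduced.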
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2015-12-01
We develop an experimental design algorithm to select locations for a network of observation wells that provide the maximum robust information about unknown hydraulic conductivity in a confined, anisotropic aquifer. Since the information that a design provides is dependent on an aquifer's hydraulic conductivity, a robust design is one that provides the maximum information in the worst-case scenario. The design can be formulated as a max-min optimization problem. The problem is generally non-convex, non-differentiable, and contains integer variables. We use a Genetic Algorithm (GA) to perform the combinatorial search. We employ proper orthogonal decomposition (POD) to reduce the dimension of the groundwater model, thereby reducing the computational burden posed by employing a GA. The GA searches for the robust design across a set of hydraulic conductivities and finds an approximate design (called the High Frequency Observation Well Design) through a Monte Carlo-type search. The results from a small-scale 1-D test case validate the proposed methodology. We then apply the methodology to a realistically scaled 2-D test case.
Cardenas, Erick; Leigh, Mary Beth; Marsh, Terence; Tiedje, James M.; Wu, Wei-min; Luo, Jian; Ginder-Vogel, Matthew; Kitanidis, Peter K.; Criddle, Craig; Carley, Jack M; Carroll, Sue L; Gentry, Terry J; Watson, David B; Gu, Baohua; Jardine, Philip M; Zhou, Jizhong
2010-10-01
Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test if bacteria specifically responded to ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remediation was achieved with a hydraulic flow control consisting of an inner loop, where ethanol was injected, and an outer loop for flow-field protection. This strategy reduced uranium concentrations in groundwater to levels below 0.126 μM and created geochemical gradients in electron donors from the inner-loop injection well toward the outer loop and downgradient flow path. Our analysis with 15 sediment samples from the entire test area found significant indicator species that showed a high degree of adaptation to the three different hydrochemically created conditions. Castellaniella and Rhodanobacter characterized areas with low pH, heavy metals, and low bioactivity, while sulfate-, Fe(III)-, and U(VI)-reducing bacteria (Desulfovibrio, Anaeromyxobacter, and Desulfosporosinus) were indicators of areas where U(VI) reduction occurred. The abundance of these bacteria, as well as the Fe(III) and U(VI) reducer Geobacter, correlated with the hydraulic connectivity to the substrate injection site, suggesting that the selected populations were a direct response to electron donor addition along the groundwater flow path. A false-discovery-rate approach was implemented to discard false-positive results arising by chance, given the large number of comparisons.
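The final step, discarding chance positives, is a false-discovery-rate control. The abstract does not name the exact procedure used; the Benjamini-Hochberg step-up rule sketched below is the standard choice for this situation and is shown only as an illustration.

```python
# Benjamini-Hochberg step-up FDR control (illustrative; the study's exact
# procedure is not specified in the abstract).
def benjamini_hochberg(pvalues, alpha=0.05):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k_max = rank
    # ... and reject every hypothesis up to that rank.
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
flags = benjamini_hochberg(pvals)
```

With many taxa tested per site, controlling the FDR rather than the family-wise error rate keeps power while bounding the expected fraction of spurious indicator species.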
Perera, Meenu N; Abuladze, Tamar; Li, Manrong; Woolston, Joelle; Sulakvelidze, Alexander
2015-12-01
ListShield™, a commercially available bacteriophage cocktail that specifically targets Listeria monocytogenes, was evaluated as a bio-control agent for L. monocytogenes in various Ready-To-Eat foods. ListShield™ treatment of experimentally contaminated lettuce, cheese, smoked salmon, and frozen entrées significantly reduced (p < 0.05) L. monocytogenes contamination by 91% (1.1 log), 82% (0.7 log), 90% (1.0 log), and 99% (2.2 log), respectively. ListShield™ application, alone or combined with an antioxidant/anti-browning solution, resulted in a statistically significant (p < 0.001) 93% (1.1 log) reduction of L. monocytogenes contamination on apple slices after 24 h at 4 °C. Treatment of smoked salmon from a commercial processing facility with ListShield™ eliminated L. monocytogenes (no detectable L. monocytogenes) in both the naturally contaminated and experimentally contaminated salmon fillets. The organoleptic quality of the foods was not affected by application of ListShield™, as no differences in color, taste, or appearance were detectable. Bio-control of L. monocytogenes with lytic bacteriophage preparations such as ListShield™ can offer an environmentally friendly, green approach for reducing the risk of listeriosis associated with the consumption of various foods that may be contaminated with L. monocytogenes.
Zeng, Yi; Chen, Huashuai; Ni, Ting; Ruan, Rongping; Nie, Chao; Liu, Xiaomin; Feng, Lei; Zhang, Fengyu; Lu, Jiehua; Li, Jianxin; Li, Yang; Tao, Wei; Gregory, Simon G; Gottschalk, William; Lutz, Michael W; Land, Kenneth C; Yashin, Anatoli; Tan, Qihua; Yang, Ze; Bolund, Lars; Ming, Qi; Yang, Huanming; Min, Junxia; Willcox, D Craig; Willcox, Bradley J; Gu, Jun; Hauser, Elizabeth; Tian, Xiao-Li; Vaupel, James W
2016-06-01
On the basis of genotypic/phenotypic data from the Chinese Longitudinal Healthy Longevity Survey (CLHLS) and a Cox proportional hazards model, the present study demonstrates that interactions between carrying FOXO1A-209 genotypes and tea drinking are significantly associated with lower risk of mortality at advanced ages. This significant association is replicated in two independent Han Chinese CLHLS cohorts (p = 0.028-0.048 in the discovery and replication cohorts, and p = 0.003-0.016 in the combined dataset). We found that the associations between tea drinking and reduced mortality are much stronger among carriers of the FOXO1A-209 genotype than among non-carriers, and that drinking tea is associated with a reversal of the negative effects of carrying FOXO1A-209 minor alleles, that is, from a substantially increased mortality risk to a substantially reduced mortality risk at advanced ages. The impacts are considerably stronger among those who carry two copies of the FOXO1A minor allele than among those who carry one copy. On the basis of previously reported experiments on human cell models concerning FOXO1A-by-tea-compound interactions, we speculate that the present results indicate that tea drinking may inhibit FOXO1A-209 gene expression and its biological functions, which reduces the negative impact of the FOXO1A-209 gene on longevity (as reported in the literature) and offers protection against mortality risk at the oldest-old ages. Our empirical findings imply that the health outcomes of particular nutritional interventions, including tea drinking, may depend in part upon individual genetic profiles, and research on the effects of nutrigenomic interactions could potentially be useful for rejuvenation therapies in the clinic or associated healthy-aging intervention programs.
NASA Astrophysics Data System (ADS)
Gu, Xuejun; Kim, Hyun K.; Masciotti, James; Hielscher, Andreas H.
2009-02-01
Computational speed and available memory size on a single processor are two limiting factors when using the frequency-domain equation of radiative transport (FD-ERT) as a forward and inverse model to reconstruct three-dimensional (3D) tomographic images. In this work, we report on a parallel, multiprocessor reduced-space sequential quadratic programming (RSQP) approach to improve computational speed and reduce memory requirements. To evaluate and quantify the performance of the code, we performed simulation studies employing a 3D numerical mouse model. Furthermore, we tested the algorithm with experimental data obtained from tumor-bearing mice.
NASA Astrophysics Data System (ADS)
Andò, Bruno; Carbone, Daniele
2004-05-01
Gravity measurements are utilized at active volcanoes to detect mass changes linked to magma transfer processes and thus to recognize forerunners to paroxysmal volcanic events. Continuous gravity measurements are now increasingly performed at sites very close to active craters, where there is the greatest chance of detecting meaningful gravity changes. Unfortunately, especially under the adverse environmental conditions usually encountered at such places, gravimeters have proved to be affected by meteorological parameters, mainly by changes in atmospheric temperature. The pseudo-signal generated by these perturbations is often stronger than the signal generated by actual changes in the gravity field. Thus, the implementation of well-performing algorithms for correcting the gravity signal for the effect of meteorological parameters is vital to obtain sequences useful from the volcano-surveillance standpoint. In the present paper, a Neuro-Fuzzy algorithm, which had already been shown to accomplish the required task satisfactorily, is tested over a data set from three gravimeters that worked continuously for about 50 days at a site far away from active zones, where changes due to actual fluctuations of the gravity field are expected to be within a few microgal. After the reduction of the gravity series, residuals are within about 15 μGal peak-to-peak, thus confirming the capability of the Neuro-Fuzzy algorithm under test to perform the required task.
Liu, Shengyan; Dozois, Matthew D; Chang, Chu Ning; Ahmad, Aaminah; Ng, Deborah L T; Hileeto, Denise; Liang, Huiyuan; Reyad, Matthew-Mina; Boyd, Shelley; Jones, Lyndon W; Gu, Frank X
2016-09-01
Eye diseases, such as dry eye syndrome, are commonly treated with eye drop formulations. However, eye drop formulations require frequent dosing with high drug concentrations due to poor ocular surface retention, which leads to poor patient compliance and high risks of side effects. We developed a mucoadhesive nanoparticle eye drop delivery platform to prolong the ocular retention of topical drugs, thus enabling treatment of eye diseases using reduced dosage. Using fluorescent imaging on rabbit eyes, we showed ocular retention of the fluorescent dye delivered through these nanoparticles beyond 24 h, while free dyes were mostly cleared from the ocular surface within 3 h after administration. Utilizing the prolonged retention of the nanoparticles, we demonstrated effective treatment of experimentally induced dry eye in mice by delivering cyclosporin A (CsA) bound to this delivery system. Once-a-week dosing of 0.005 to 0.01% CsA in the nanoparticle eye drop formulation demonstrated both the elimination of inflammation signs and the recovery of ocular surface goblet cells after a month, whereas thrice-daily administration of RESTASIS to mice eliminated the inflammation signs without recovering the ocular surface goblet cells. The mucoadhesive nanoparticle eye drop platform demonstrated prolonged ocular surface retention and effective treatment of dry eye conditions with up to a 50- to 100-fold reduction in the overall dosage of CsA compared to RESTASIS, which may significantly reduce side effects and, by extending the interdosing interval, improve patient compliance. PMID:27482595
Papoiu, Alexandru D P; Valdes-Rodriguez, Rodrigo; Nattkemper, Leigh A; Chan, Yiong-Huak; Hahn, Gary S; Yosipovitch, Gil
2013-09-01
The aim of this double-blinded, vehicle-controlled study was to test the antipruritic efficacy of topical strontium to relieve a nonhistaminergic form of itch that would be clinically relevant for chronic pruritic diseases. Itch induced with cowhage is mediated by PAR2 receptors which are considered to play a major role in itch of atopic dermatitis and possibly other acute and chronic pruritic conditions. The topical strontium hydrogel formulation (TriCalm®) was tested in a head-to-head comparison with 2 common topical formulations marketed as antipruritics: hydrocortisone and diphenhydramine, for their ability to relieve cowhage-induced itch. Topically-applied strontium salts were previously found to be effective for reducing histamine-induced and IgE-mediated itch in humans. However, histamine is not considered the critical mediator in the majority of skin diseases presenting with chronic pruritus. The current study enrolled 32 healthy subjects in which itch was induced with cowhage before and after skin treatment with a gel containing 4% SrCl2, control vehicle, topical 1% hydrocortisone and topical 2% diphenhydramine. Strontium significantly reduced the peak intensity and duration of cowhage-induced itch when compared to the control itch curve, and was significantly superior to the other two over-the-counter antipruritic agents and its own vehicle in antipruritic effect. We hereby show that a 4% topical strontium formulation has a robust antipruritic effect, not only against histamine-mediated itch, but also for non-histaminergic pruritus induced via the PAR2 pathway, using cowhage. PMID:23474847
NASA Astrophysics Data System (ADS)
Zhang, Chao; Zhang, Jing; Su, Yanjie; Xu, Minghan; Yang, Zhi; Zhang, Yafei
2014-02-01
We have demonstrated a facile and low-cost approach to synthesize ZnO nanowire (NW)/reduced graphene oxide (RGO) nanocomposites, in which ZnO NWs and graphene oxide (GO) were produced separately in large scale and then hybridized into ZnO NW/RGO nanocomposites by mechanical mixing and low-temperature thermal reduction. Rhodamine 6G (Rh6G) was used as a model dye to evaluate the photocatalytic properties of the ZnO NW/RGO nanocomposites. The obtained nanocomposites show significantly enhanced photocatalytic performance, taking only 10 min to decompose over 98% of the Rh6G. Finally, the mechanism of this large enhancement in photocatalytic activity is studied. It is mainly attributed to the RGO nanosheets accepting electrons from ZnO NWs excited by ultraviolet (UV) irradiation, which increases electron migration efficiency and prolongs the lifetime of the holes in the ZnO NWs. The high charge-separation efficiency of photo-generated electron-hole pairs leads directly to a lower recombination rate in the ZnO NW/RGO nanocomposites and allows more electrons and holes to participate in the radical reactions with Rh6G, thus significantly improving the photocatalytic properties. The high degradation efficiency makes the ZnO NW/RGO nanocomposites promising candidates for the treatment of environmental pollutants and wastewater.
NASA Astrophysics Data System (ADS)
Frolov, Sergey; Baptista, António M.; Zhang, Yinglong; Seaton, Charles
2009-02-01
A data assimilation method was used to estimate the variability of three ecologically significant features of the Columbia River estuary and plume: the size of the plume, the orientation of the plume, and the length of the salinity intrusion in the estuary. Our data assimilation method was based on a reduced-dimension Kalman filter that enables fast data assimilation of nonlinear dynamics in the estuary and plume. The assimilated data included measurements of salinity, temperature, and water levels at 13 stations in the estuary and at five moorings in the plume. Our experimental results showed that data assimilation played a significant role in controlling the magnitude and timing of dynamic events in the Columbia River estuary and plume, such as events of extreme salinity intrusion and events of regime transitions in the plume. Data assimilation also changed the response of the salinity intrusion length to variations in the Columbia River discharge, hence imposing a new dynamic on the simulated estuary. The validation of the assimilated solution with independent data showed that these corrections were likely realistic, because the assimilated model was closer to the true ocean than the original, non-assimilated model.
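The correction step at the heart of any Kalman-filter assimilation, reduced-dimension or not, is a variance-weighted blend of the model forecast with the observation. A scalar caricature (one state, one observation, all numbers illustrative, nothing like the full 3-D circulation model used in the study):

```python
# Minimal scalar Kalman update -- illustrative only. Treat the state as a
# single "salinity intrusion length"; each assimilation cycle blends the
# model forecast with an observation according to their variances.
def kalman_update(x_fcst, P_fcst, y_obs, R_obs):
    K = P_fcst / (P_fcst + R_obs)           # Kalman gain
    x_anal = x_fcst + K * (y_obs - x_fcst)  # analysis state
    P_anal = (1.0 - K) * P_fcst             # analysis variance
    return x_anal, P_anal

x, P = 20.0, 4.0   # hypothetical forecast: 20 km intrusion, variance 4
y, R = 26.0, 1.0   # hypothetical observation: 26 km, variance 1
x2, P2 = kalman_update(x, P, y, R)
```

Because the observation is trusted more here (R < P), the analysis lands much closer to the observation than to the forecast, which is the mechanism by which assimilation "controls the magnitude and timing" of simulated events; the reduced-dimension filter of the paper performs this same blend in a low-dimensional subspace of the model state.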
Hassan, Femeena; Geethalakshmi, V; Jeeva, J Charles; Babu, M Remya
2013-02-01
The combined effect of lime and drying on bacteria of public health significance in the edible oyster (Crassostrea madrasensis) from the Munambam coastal belt (Kerala, India) was studied (without depuration). Samples were examined for Total Plate Count (TPC), Staphylococcus aureus (hygiene indicator), total coliforms, faecal coliforms, Escherichia coli (faecal indicator), faecal streptococci (faecal indicator), Salmonella, Vibrio cholerae and Listeria monocytogenes. Although the fresh oyster meat did not conform to the specifications laid down by the National Shellfish Sanitation Programme (NSSP), after treatment with lime, with and without drying, it showed a significant reduction in counts and met the required standards. The prevalence of faecal indicators in the fresh sample indicated faecal pollution in the area. The isolation of the potentially pathogenic bacterium V. parahaemolyticus from the fresh sample indicates a high risk to people consuming and handling oysters in raw and semi-processed form, and may also lead to cross-contamination. The present study indicates that treatment with a natural organic product like lime and a simple preservation technique, drying, can effectively reduce the bacterial load. The study also revealed that the TPC of water and soil collected from the site where the oysters were harvested was lower than that of the meat. PMID:24425910
Hashimoto, Takeshi; Yokokawa, Takumi; Endo, Yuriko; Iwanaka, Nobumasa; Higashida, Kazuhiko; Taguchi, Sadayoshi
2013-10-11
Highlights: •Long-term hypoxia decreased the size of LDs and lipid storage in 3T3-L1 adipocytes. •Long-term hypoxia increased basal lipolysis in 3T3-L1 adipocytes. •Hypoxia decreased lipid-associated proteins in 3T3-L1 adipocytes. •Hypoxia decreased basal glucose uptake and lipogenic proteins in 3T3-L1 adipocytes. •Hypoxia-mediated lipogenesis may be an attractive therapeutic target against obesity. -- Abstract: Background: A previous study has demonstrated that endurance training under hypoxia results in a greater reduction in body fat mass compared to exercise under normoxia. However, the cellular and molecular mechanisms that underlie this hypoxia-mediated reduction in fat mass remain uncertain. Here, we examine the effects of modest hypoxia on adipocyte function. Methods: Differentiated 3T3-L1 adipocytes were incubated at 5% O₂ for 1 week (long-term hypoxia, HL) or one day (short-term hypoxia, HS) and compared with a normoxia control (NC). Results: HL, but not HS, resulted in a significant reduction in lipid droplet size and triglyceride content (by 50%) compared to NC (p < 0.01). As estimated by glycerol release, isoproterenol-induced lipolysis was significantly lowered by hypoxia, whereas the release of free fatty acids under the basal condition was prominently enhanced with HL compared to NC or HS (p < 0.01). Lipolysis-associated proteins, such as perilipin 1 and hormone-sensitive lipase, were unchanged, whereas adipose triglyceride lipase and its activator protein CGI-58 were decreased with HL in comparison to NC. Interestingly, such lipogenic proteins as fatty acid synthase, lipin-1, and peroxisome proliferator-activated receptor gamma were decreased. Furthermore, the uptake of glucose, the major precursor of glycerol 3-phosphate for triglyceride synthesis, was significantly reduced in HL compared to NC or HS (p < 0.01). Conclusion: We conclude that hypoxia has a direct impact on reducing the triglyceride content and lipid droplet size via
Documentation for subroutine REDUC3, an algorithm for the linear filtering of gridded magnetic data
Blakely, Richard J.
1977-01-01
Subroutine REDUC3 transforms a total field anomaly h1(x,y), measured on a horizontal and rectangular grid, into a new anomaly h2(x,y). This new anomaly is produced by the same source as h1(x,y), but (1) is observed at a different elevation, (2) has a source with a different direction of magnetization, and/or (3) has a different direction of residual field. Case 1 is tantamount to upward or downward continuation. Cases 2 and 3 are 'reduction to the pole' if the new inclinations of both the magnetization and regional field are 90 degrees. REDUC3 is a filtering operation applied in the wave-number domain. It first Fourier transforms h1(x,y), multiplies by the appropriate filter, and inverse Fourier transforms the result to obtain h2(x,y). No assumptions are required about the shape of the source or how the intensity of magnetization varies within it.
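The wavenumber-domain workflow described above (forward FFT, multiply by a filter, inverse FFT) can be sketched for the simplest case, upward continuation (Case 1), whose filter is exp(-|k|·Δz). This is a minimal Python/NumPy sketch under that standard filter, not code from the REDUC3 documentation; the grid spacings and continuation distance are illustrative.

```python
import numpy as np

def upward_continue(h1, dx, dy, dz):
    """Upward-continue a gridded total-field anomaly h1(x, y) by dz > 0.

    Sketch of the REDUC3-style wavenumber-domain workflow: forward FFT,
    multiply by the continuation filter exp(-|k| * dz), inverse FFT.
    (Reduction to the pole, Cases 2 and 3, would use a different filter.)
    """
    ny, nx = h1.shape
    # Angular wavenumbers for each grid axis.
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)  # radial wavenumber |k|
    H1 = np.fft.fft2(h1)
    # Attenuate short wavelengths; longer continuation distances smooth more.
    return np.fft.ifft2(H1 * np.exp(-k * dz)).real
```

The routine is easy to verify: a single Fourier mode of wavelength λ is attenuated by exactly exp(-2πΔz/λ).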
Intelligent speckle reducing anisotropic diffusion algorithm for automated 3-D ultrasound images.
Wu, Jun; Wang, Yuanyuan; Yu, Jinhua; Shi, Xinling; Zhang, Junhua; Chen, Yue; Pang, Yun
2015-02-01
A novel 3-D filtering method is presented for speckle reduction and detail preservation in automated 3-D ultrasound images. First, texture features of an image are analyzed by using the improved quadtree (QT) decomposition. Then, the optimal homogeneous and the obvious heterogeneous regions are selected from QT decomposition results. Finally, diffusion parameters and the diffusion process are automatically decided based on the properties of these two selected regions. While 2-D speckle reduction is computationally cheap, 3-D speckle reduction often requires hundreds of times more computing time, which may limit its application in practice. Because this new filter can adaptively adjust the time step of iteration, the computation time is reduced effectively. Both synthetic and real 3-D ultrasound images are used to evaluate the proposed filter. It is shown that this filter is superior to other methods in both practicality and efficiency. PMID:26366596
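The diffusion process that such filters build on can be illustrated with a single classical Perona-Malik iteration. This generic 2-D sketch omits the paper's QT-based region analysis and adaptive time step, and the kappa/dt values are arbitrary, not taken from the paper.

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, dt=0.2):
    """One Perona-Malik anisotropic-diffusion iteration on a 2-D image.

    Illustrative only: the paper's filter adds quadtree-based region
    analysis and an adaptive time step, neither reproduced here.
    Borders are treated as periodic (np.roll) for brevity.
    """
    # Differences to the four nearest neighbours.
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img

    def g(d):
        # Edge-stopping conductance: ~1 in flat regions, ~0 across edges,
        # so noise is smoothed while strong boundaries are preserved.
        return np.exp(-(d / kappa) ** 2)

    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

With dt ≤ 0.25 the explicit scheme is stable; each step lowers the variance of speckle-like noise while leaving flat regions untouched.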
Rose, N; Andraud, M; Bigault, L; Jestin, A; Grasland, B
2016-07-19
Transmission characteristics of PCV2 have been compared between vaccinated and non-vaccinated pigs under experimental conditions. Twenty-four Specific Pathogen Free (SPF) piglets, vaccinated against PCV2 at 3 weeks of age (PCV2a recombinant CAP protein-based vaccine), were inoculated at 15 days post-vaccination with a PCV2b inoculum (6×10^5 TCID50) and put in contact with 24 vaccinated SPF piglets for 42 days post-inoculation. These piglets were divided into six replicates of a contact trial involving 4 inoculated piglets mingled with 4 susceptible SPF piglets. Two replicates of a similar contact trial were made with non-vaccinated pigs. Non-vaccinated animals received a placebo at vaccination time and were inoculated in the same way and at the same time as the vaccinated group. All the animals were monitored twice weekly using quantitative real-time PCR and ELISA serology until 42 days post-inoculation. The frequency of infection and the PCV2 genome load in sera of the vaccinated pigs were significantly reduced compared to the non-vaccinated animals. The duration of infectiousness differed significantly between the vaccinated and non-vaccinated groups (16.6 days [14.7; 18.4] and 26.6 days [22.9; 30.4], respectively). The transmission rate was also considerably decreased in vaccinated pigs (β = 0.09 [0.05-0.14] compared to β = 0.19 [0.11-0.32] in non-vaccinated pigs). This led to an estimated reproduction ratio of 1.5 [95% CI 0.8-2.2] in vaccinated animals versus 5.1 [95% CI 2.5-8.2] in non-vaccinated pigs when merging data from this experiment with previous trials carried out under the same conditions. PMID:27318416
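The reported reproduction ratios are consistent with the standard SIR-type relation R = β × T (transmission rate times mean infectious period). A quick consistency check, assuming that simple relation rather than the authors' exact estimator:

```python
def reproduction_ratio(beta, infectious_days):
    """R = beta * T for an SIR-type model: transmission rate (per day)
    times mean infectious period (days). A consistency check on the
    abstract's point estimates, not the authors' full estimator,
    which pooled data across several trials."""
    return beta * infectious_days

r_vaccinated = reproduction_ratio(0.09, 16.6)    # ~1.5, as reported
r_unvaccinated = reproduction_ratio(0.19, 26.6)  # ~5.1, as reported
```

The vaccinated group's point estimate sits near the epidemic threshold R = 1, which is what makes the reduction practically meaningful.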
Radenahmad, Nisaudah; Saleh, Farid; Sawangjaroen, Kitja; Vongvatcharanon, Uraporn; Subhadhirasakul, Patchara; Rundorn, Wilart; Withyachumnarnkul, Boonsirm; Connor, James R
2011-03-01
Brains from ovariectomised (ovx) rats can display features similar to those observed in menopausal women with Alzheimer's disease (AD), and oestrogen seems to play a key role. Preliminary studies on young coconut juice (YCJ) have reported the presence of oestrogen-like components in it. The aim of the study was to investigate the effects of YCJ on the AD pathological changes in the brains of ovx rats. Rat groups included sham-operated, ovx, ovx+oestradiol benzoate (EB) and ovx+YCJ. Brain sections (4 μm) were taken and were immunostained with β-amyloid (Aβ) 1-42, glial fibrillary acidic protein (GFAP) (an intermediate neurofilament of astrocytes) and Tau-1 antibodies. Aβ 1-42, GFAP and Tau-1 are considered as reliable biomarkers of amyloidosis, astrogliosis and tauopathy (neurofibrillary tangles), respectively, which in turn are characteristic features associated with AD. The serum oestradiol (E2) level was measured using a chemiluminescent immunoassay technique. YCJ restored the serum E2 to levels significantly (P < 0·001) higher than that of the ovx group, and even that of the sham group. Aβ deposition was significantly (P < 0·0001) reduced in the cerebral cortex of the YCJ group, as compared with the ovx group and with the sham and ovx+EB groups (P < 0·01). A similar trend was observed in relation to GFAP expression in the cerebral cortex and to Tau-1 expression in the hippocampus. This is a novel study demonstrating that YCJ could have positive future implications in the prevention and treatment of AD in menopausal women.
Lintas, C; Sacco, R; Garbett, K; Mirnics, K; Militerni, R; Bravaccio, C; Curatolo, P; Manzi, B; Schneider, C; Melmed, R; Elia, M; Pascucci, T; Puglisi-Allegra, S; Reichelt, K-L; Persico, A M
2009-07-01
Protein kinase C enzymes play an important role in signal transduction, regulation of gene expression and control of cell division and differentiation. The fsI and betaII isoenzymes result from the alternative splicing of the PKCbeta gene (PRKCB1), previously found to be associated with autism. We performed a family-based association study in 229 simplex and 5 multiplex families, and a postmortem study of PRKCB1 gene expression in temporocortical gray matter (BA41/42) of 11 autistic patients and controls. PRKCB1 gene haplotypes are significantly associated with autism (P<0.05) and with the autistic endophenotype of enhanced oligopeptiduria (P<0.05). Temporocortical PRKCB1 gene expression was reduced on average by 35% and 31% for the PRKCB1-1 and PRKCB1-2 isoforms (P<0.01 and P<0.05, respectively) according to qPCR. Protein amounts measured for the PKCbetaII isoform were similarly decreased by 35% (P=0.05). Decreased gene expression characterized patients carrying the 'normal' PRKCB1 alleles, whereas patients homozygous for the autism-associated alleles displayed mRNA levels comparable to those of controls. Whole genome expression analysis unveiled a partial disruption in the coordinated expression of PKCbeta-driven genes, including several cytokines. These results confirm the association between autism and PRKCB1 gene variants, point toward PKCbeta roles in altered epithelial permeability, demonstrate a significant downregulation of brain PRKCB1 gene expression in autism and suggest that it could represent a compensatory adjustment aimed at limiting an ongoing dysreactive immune process. Altogether, these data underscore potential PKCbeta roles in autism pathogenesis and spur interest in the identification and functional characterization of PRKCB1 gene variants conferring autism vulnerability.
Aihara, Hiroyuki; Ryou, Marvin; Kumar, Nitin; Ryan, Michele B.; Thompson, Christopher C.
2016-01-01
Background and study aims In endoscopic submucosal dissection (ESD), effective countertraction may overcome the current drawbacks of longer procedure times and increased technical demands. The objective of this study was to compare the efficacy of ESD using a novel magnetic countertraction device with that of the traditional technique. Methods Each ESD was performed on simulated gastric lesions of 30 mm diameter created at five different locations. In total, 10 ESDs were performed using this novel device and 10 were performed by the standard technique. Results The magnetic countertraction device allowed directional tissue manipulation and exposure of the submucosal space. The total procedure time was 605 ± 303.7 seconds in the countertraction group vs. 1082 ± 515.9 seconds in the control group (P = 0.021). Conclusions This study demonstrated that using a novel magnetic countertraction device during ESD is technically feasible and enables the operator to dynamically manipulate countertraction such that the submucosal layer is visualized directly. Use of this device significantly reduced procedure time compared with conventional ESD techniques. PMID:24573770
NASA Astrophysics Data System (ADS)
Zhao, Chenglong; LeBrun, Thomas W.
2015-08-01
Gold nanoparticles (GNP) have wide applications ranging from nanoscale heating to cancer therapy and biological sensing. Optical trapping of GNPs as small as 18 nm has been successfully achieved with laser power as high as 855 mW, but such high powers can damage trapped particles (particularly biological systems) as well as heat the fluid, thereby destabilizing the trap. In this article, we show that counter-propagating beams (CPB) can successfully trap GNP with laser powers reduced by a factor of 50 compared to that with a single beam. The trapping position of a GNP inside a counter-propagating trap can be easily modulated by either changing the relative power or position of the two beams. Furthermore, we find that under our conditions, while a single beam most stably traps a single particle, the counter-propagating beam can more easily trap multiple particles. This CPB trap is compatible with the feedback control system we recently demonstrated to increase the trapping lifetimes of nanoparticles by more than an order of magnitude. Thus, we believe that the future development of advanced trapping techniques combining counter-propagating traps together with control systems should significantly extend the capabilities of optical manipulation of nanoparticles for prototyping and testing 3D nanodevices and bio-sensing.
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
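The POD reduction step can be sketched generically as an SVD of a snapshot matrix: keep the leading left singular vectors that capture a chosen fraction of the snapshot energy. This is a standard-textbook sketch, not the authors' groundwater-specific implementation; the energy threshold is an illustrative assumption.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper Orthogonal Decomposition of a snapshot matrix
    (n_dof x n_snapshots). Returns the leading left singular vectors
    capturing the requested fraction of snapshot energy -- a generic
    sketch of the model-reduction step, not the authors' code.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative energy fraction
    r = int(np.searchsorted(frac, energy)) + 1  # smallest rank meeting it
    return U[:, :r]  # reduced state: q = basis.T @ full_state
```

A full model state of dimension n_dof is then replaced by r coefficients, which is what makes the GA's many model calls affordable.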
Verma, Pankaj Kumar; Verma, Shikha; Meher, Alok Kumar; Pande, Veena; Mallick, Shekhar; Bansiwal, Amit Kumar; Tripathi, Rudra Deo; Dhankher, Om Parkash; Chakrabarty, Debasis
2016-09-01
Arsenic (As), an acute poison and class I carcinogen, poses a serious health risk. Staple crops like rice are the primary source of As contamination in human food. Rice grown in As-contaminated areas accumulates higher As in its edible parts. Based on our previous transcriptome data, two rice glutaredoxins (OsGrx_C7 and OsGrx_C2.1) were identified that showed up-regulated expression during As stress. Here, we report that OsGrx_C7 and OsGrx_C2.1 from rice are involved in the regulation of intracellular arsenite (AsIII). To elucidate the mechanism of OsGrx-mediated As tolerance, both OsGrxs were cloned and expressed in Escherichia coli (Δars) and Saccharomyces cerevisiae mutant strains (Δycf1, Δacr3). The expression of OsGrxs increased As tolerance in the E. coli (Δars) mutant strain (up to 4 mM AsV and up to 0.6 mM AsIII). During AsIII exposure, S. cerevisiae (Δacr3) harboring OsGrx_C7 and OsGrx_C2.1 showed lower intracellular AsIII accumulation (up to 30.43% and 24.90% lower, respectively) compared to the vector control. Arsenic accumulation in the As-sensitive S. cerevisiae mutant (Δycf1) was also reduced significantly on exposure to inorganic As. The expression of OsGrxs in yeast maintained the intracellular GSH pool and increased the extracellular GSH concentration. Purified OsGrxs display in vitro GSH-disulfide oxidoreductase, glutathione reductase and arsenate reductase activities. Also, both OsGrxs are involved in AsIII extrusion by altering Fps1 transcripts in yeast and protect the cell by maintaining the cellular GSH pool. Thus, our results strongly suggest that OsGrxs play a crucial role in the maintenance of the intracellular GSH pool and redox status of the cell during both AsV and AsIII stress and might be involved in regulating intracellular AsIII levels by modulation of aquaporin expression and functions. PMID:27174139
NASA Astrophysics Data System (ADS)
Davis, J. A.; Smith, R. L.; Bohlke, J. K.; Jemison, N.; Xiang, H.; Repert, D. A.; Yuan, X.; Williams, K. H.
2015-12-01
The occurrence of naturally reduced zones is common in alluvial aquifers in the western U.S.A. due to the burial of woody debris in flood plains. Such reduced zones are usually heterogeneously dispersed in these aquifers and characterized by high concentrations of organic carbon, reduced mineral phases, and reduced forms of metals, including uranium(IV). The persistence of high concentrations of dissolved uranium(VI) at uranium-contaminated aquifers on the Colorado Plateau has been attributed to slow oxidation of insoluble uranium(IV) mineral phases found in association with these reducing zones, although there is little understanding of the relative importance of various potential oxidants. Four field experiments were conducted within an alluvial aquifer adjacent to the Colorado River near Rifle, CO, wherein groundwater associated with the naturally reduced zones was pumped into a gas-impermeable tank, mixed with a conservative tracer (Br-), bubbled with a gas phase composed of 97% O2 and 3% CO2, and then returned to the subsurface in the same well from which it was withdrawn. Within minutes of re-injection of the oxygenated groundwater, dissolved uranium(VI) concentrations increased from less than 1 μM to greater than 2.5 μM, demonstrating that oxygen can be an important oxidant for uranium in such field systems if supplied to the naturally reduced zones. Dissolved Fe(II) concentrations decreased to the detection limit, but increases in sulfate could not be detected due to high background concentrations. Changes in nitrogen species concentrations were variable. The results contrast with other laboratory and field results in which oxygen was introduced to systems containing high concentrations of mackinawite (FeS), rather than the more crystalline iron sulfides found in aged, naturally reduced zones. The flux of oxygen to the naturally reduced zones in the alluvial aquifers occurs mainly through interactions between groundwater and gas phases at the water table
Code of Federal Regulations, 2011 CFR
2011-04-01
... any participant who separates from service after December 31, 2009, and before January 1, 2015, will..., 2015), the amendment does not result in a reduction that is significant because the amount of...
Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh
2015-09-01
The objective of this study was to quantify the impact of a new technology to communicate the results of an infant HIV diagnostic test on test turnaround time and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time fell from 68.13 to 41.05 days after introduction of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays.
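The reported figures hang together arithmetically: applying the odds ratio of 0.67 to the 0.44 baseline probability lands close to the 0.36 adjusted probability (covariate adjustment in the actual regression accounts for the small remaining gap). A sketch of that odds-to-probability conversion:

```python
def apply_odds_ratio(p_base, odds_ratio):
    """Adjusted probability from a baseline probability and an odds ratio:
    convert the probability to odds, scale by the odds ratio, and convert
    back. A consistency check on the abstract's figures, not the paper's
    full multivariate model."""
    odds = p_base / (1.0 - p_base) * odds_ratio
    return odds / (1.0 + odds)

# Late delivery (OR = 0.67) applied to the 0.44 baseline collection
# probability gives roughly 0.34, near the reported adjusted 0.36.
p_late = apply_odds_ratio(0.44, 0.67)
```

Note that odds ratios only approximate relative risks when the baseline probability is small; at 0.44 the distinction matters, which is why the conversion is worth doing explicitly.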
Tirosh, Nitzan; Nevo, Uri
2013-08-01
Changes in the diffusion weighted MRI (DWI) signal were observed to be correlated with neuronal activity during chemically induced brain activity, epileptic seizures, or visual stimulation. These changes suggest a possible reduction in water displacement that accompanies neuronal activity, but were possibly affected by other physiological mechanisms such as blood oxygenation level and blood flow. We developed an imaging experiment of an excised and vital newborn rat spinal cord to examine the effect of neuronal function on the displacement of water molecules as measured by DWI signal. This approach provides a DWI experiment of a vital mammalian CNS tissue in the absence of some of the systemic sources of noise. We detected a significant and reproducible drop with an average value of 19.5 ± 1.6% (mean ± SE) upon activation. The drop repeated itself in three orthogonal directions. ADC values corresponded to an oblate anisotropy. This result was validated by high resolution DWI of a fixed tissue, imaged with an ultra-high field MRI. The results support our working hypothesis that water displacement is affected by neuronal activation. These results further imply that water displacement might serve as a potential marker for brain function, and that, although commonly viewed as wholly electrochemical, neuronal activity includes a significant mechanical dimension that affects water displacement.
Gupta, Deepak; Srirajakalidindi, Arvind; Wang, Hong
2012-07-01
EndoSheath bronchoscopy (Vision Sciences, Inc.) uses a sterile, disposable microbial barrier that may meet the growing need for safe, efficient, and cost-effective flexible bronchoscopy. The purpose of this open-label comparative study was to compare and calculate the cost per airway procedure of the reusable fiberscope when used with and without EndoSheath® Technology, and to record the turnover time from the completion of the use of each scope until its readiness for the next use. Seventy-five new patients' airways requiring airway maneuvers and manipulations with the Vision Sciences, Inc., reusable fiberscope with EndoSheath® Technology were evaluated for cost comparison against reassessed historical cost data for Olympus scope-assisted tracheal intubations. Compared to the cost of an intubation ($158.50) with the Olympus scope at our institute, the intubation cost with the Vision Sciences, Inc., reusable fiberscope with EndoSheath technology was $81.50 (P < 0.001). The mean turnover time was 5.44 min with EndoSheath technology, compared to the previously reported 30 min with the Olympus fiberscope (P < 0.001). Based on our institutional experience, the Vision Sciences, Inc., reusable fiberscope with EndoSheath technology is significantly more cost effective than the Olympus scope, with significantly improved turnover times.
Tozer, G. M.; Prise, V. E.; Bell, K. M.; Dennis, M. F.; Stratford, M. R.; Chaplin, D. J.
1996-01-01
The effect of nitric oxide-dependent vasodilators on vascular resistance of tumours and normal tissue was determined with the aim of modifying tumour blood flow for therapeutic benefit. Isolated preparations of the rat P22 tumour and normal rat hindlimb were perfused ex vivo. The effects on tissue vascular resistance of administration of sodium nitroprusside (SNP) and the diazeniumdiolate (or NONO-ate) NOC-7, vasodilators which act via direct release of nitric oxide (NO), were compared with the effects of acetylcholine (ACh), a vasodilator which acts primarily via receptor stimulation of endothelial cells to release NO in the form of endothelium-derived relaxing factor (EDRF). SNP and NOC-7 effectively dilated tumour blood vessels after preconstriction with phenylephrine (PE) or potassium chloride (KCl) as indicated by a decrease in vascular resistance. SNP also effectively dilated normal rat hindlimb vessels after PE/KCl constriction. Vasodilatation in the tumour preparations was accompanied by a significant rise in nitrite levels measured in the tumour effluent. ACh induced a significant vasodilation in the normal hindlimb but an anomalous vasoconstriction in the tumour. This result suggests that tumours, unlike normal tissues are incapable of releasing NO (EDRF) in response to ACh. Capacity for EDRF production may represent a difference between tumour and normal tissue blood vessels, which could be exploited for selective pharmacological manipulation of tumour blood flow. PMID:8980396
Kanamori, Keiko; Ross, Brian D.
2013-01-01
Rats were given unilateral kainate injection into the hippocampal CA3 region, and the effect of chronic electrographic seizures on extracellular glutamine (GLNECF) was examined in those with low and steady levels of extracellular glutamate (GLUECF). GLNECF, collected by microdialysis in awake rats for 5 h, decreased to 62 ± 4.4% of the initial concentration (n = 6). This change correlated with the frequency and magnitude of seizure activity, and occurred in the ipsilateral but not in the contralateral hippocampus, nor in kainate-injected rats that did not undergo seizure (n = 6). Hippocampal intracellular GLN did not differ between the Seizure and No-Seizure Groups. These results suggested an intriguing possibility that seizure-induced decrease of GLNECF reflects not decreased GLN efflux into the extracellular fluid, but increased uptake into neurons. To examine this possibility, neuronal uptake of GLNECF was inhibited in vivo by intrahippocampal perfusion of 2-(methylamino)isobutyrate, a competitive and reversible inhibitor of the sodium-coupled neutral amino acid transporter (SNAT) subtypes 1 and 2, as demonstrated by a 1.8 ± 0.17-fold elevation of GLNECF (n = 7). The frequency of electrographic seizures during uptake inhibition was reduced to 35 ± 7% (n = 7) of the frequency in the pre-perfusion period, and returned to 88 ± 9% in the post-perfusion period. These novel in vivo results strongly suggest that, in this well-established animal model of temporal-lobe epilepsy, the observed seizure-induced decrease of GLNECF reflects its increased uptake into neurons to sustain enhanced glutamatergic epileptiform activity, thereby demonstrating a possible new target for anti-seizure therapies. PMID:24070846
Andretta, I; Pomar, C; Rivest, J; Pomar, J; Radünz, J
2016-07-01
This study was developed to assess the impact on performance, nutrient balance, serum parameters and feeding costs resulting from switching from conventional to precision-feeding programs for growing-finishing pigs. A total of 70 pigs (30.4±2.2 kg BW) were used in a performance trial (84 days). The five treatments were a three-phase group-feeding program (control), obtained with fixed blending proportions of feeds A (high nutrient density) and B (low nutrient density), compared against four individual daily-phase feeding programs in which the blending proportions of feeds A and B were updated daily to meet 110%, 100%, 90% or 80% of the lysine requirements estimated using a mathematical model. Feed intake was recorded automatically by a computerized device in the feeders, and the pigs were weighed weekly during the project. Body composition traits were estimated by scanning with an ultrasound device and densitometer every 28 days. Nitrogen and phosphorus excretions were calculated by the difference between retention (obtained from densitometer measurements) and intake. Feeding costs were assessed using 2013 ingredient cost data. Feed intake, feed efficiency, back fat thickness, body fat mass and serum contents of total protein and phosphorus were similar among treatments. Feeding pigs in a daily-basis program providing 110%, 100% or 90% of the estimated individual lysine requirements also did not influence BW, body protein mass, weight gain and nitrogen retention in comparison with the animals in the group-feeding program. However, feeding pigs individually with diets tailored to match 100% of nutrient requirements made it possible to reduce (P<0.05) digestible lysine intake by 26%, estimated nitrogen excretion by 30% and feeding costs by US$7.60/pig (-10%) relative to group feeding. Precision feeding is an effective approach to make pig production more sustainable without compromising growth performance.
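Updating the blend of feeds A and B daily to meet a nutrient target reduces to solving one linear mixing equation for the blend fraction. A minimal sketch of that step; the lysine densities used here are illustrative assumptions, not values from the paper:

```python
def blend_fraction(target, dense, dilute):
    """Fraction x of feed A (high nutrient density) to blend with feed B
    (low density) so the mix meets a nutrient target:
        x * dense + (1 - x) * dilute = target.
    Nutrient densities are illustrative, not taken from the paper."""
    x = (target - dilute) / (dense - dilute)
    return min(max(x, 0.0), 1.0)  # clamp to a feasible blend

# e.g. a daily lysine target of 9 g/kg with feeds at 11 and 6 g/kg
# calls for a 60% share of feed A:
share_a = blend_fraction(9.0, 11.0, 6.0)  # 0.6
```

Recomputing this fraction per pig per day, as requirements fall with age, is what lets the precision program track the requirement curve instead of overshooting it in early phases.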
Berenbrock, Charles E.
2015-01-01
The effects of reduced cross-sectional data points on steady-flow profiles were also determined. Thirty-five cross sections of the original steady-flow model of the Kootenai River were used. These two methods were tested for all cross sections with each cross section resolution reduced to 10, 20 and 30 data points, that is, six tests were completed for each of the thirty-five cross sections. Generally, differences from the original water-surface elevation were smaller as the number of data points in reduced cross sections increased, but this was not always the case, especially in the braided reach. Differences were smaller for reduced cross sections developed by the genetic algorithm method than the standard algorithm method.
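This excerpt does not specify the two point-reduction methods it compares. A Douglas-Peucker-style greedy reduction is one common approach for thinning channel cross-sections and can serve as a hedged sketch (it assumes strictly increasing station x-coordinates; the actual "standard" and genetic algorithms may differ):

```python
def reduce_cross_section(points, n_keep):
    """Greedy point reduction for a channel cross-section profile of
    (station, elevation) pairs with strictly increasing stations:
    keep the endpoints, then repeatedly add the point farthest
    (vertically) from the current piecewise-linear profile until
    n_keep points remain. A Douglas-Peucker-style sketch only; the
    report's two algorithms are not described in this excerpt."""
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    kept = {0, len(points) - 1}
    while len(kept) < n_keep:
        idx = sorted(kept)
        best, best_err = None, -1.0
        for a, b in zip(idx, idx[1:]):
            for i in range(a + 1, b):
                # Vertical distance from point i to the chord a-b.
                t = (xs[i] - xs[a]) / (xs[b] - xs[a])
                err = abs(zs[i] - (zs[a] + t * (zs[b] - zs[a])))
                if err > best_err:
                    best, best_err = i, err
        if best is None:
            break  # no interior points left to add
        kept.add(best)
    return [points[i] for i in sorted(kept)]
```

Because the point with the largest deviation is retained first, features like the thalweg survive even aggressive thinning, which is consistent with the report's finding that well-chosen reduced sections reproduce water-surface elevations closely.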
Perez-Martin, Eva; Weiss, Marcelo; Diaz-San Segundo, Fayna; Pacheco, Juan M.; Arzt, Jonathan; Grubman, Marvin J.
2012-01-01
Interferons (IFNs) are the first line of defense against viral infections. Although type I and II IFNs have proven effective in inhibiting foot-and-mouth disease virus (FMDV) replication in swine, a similar approach had only limited efficacy in cattle. Recently, a new family of IFNs, type III IFN or IFN-λ, has been identified in human, mouse, chicken, and swine. We have identified bovine IFN-λ3 (boIFN-λ3), also known as interleukin 28B (IL-28B), and demonstrated that expression of this molecule using a recombinant replication-defective human adenovirus type 5 (Ad5) vector, Ad5-boIFN-λ3, exhibited antiviral activity against FMDV in bovine cell culture. Furthermore, inoculation of cattle with Ad5-boIFN-λ3 induced systemic antiviral activity and upregulation of IFN-stimulated gene expression in the upper respiratory airways and skin. In the present study, we demonstrated that disease could be delayed for at least 6 days when cattle were inoculated with Ad5-boIFN-λ3 and challenged 24 h later by intradermolingual inoculation with FMDV. Furthermore, the delay in the appearance of disease was significantly prolonged when treated cattle were challenged by aerosolization of FMDV, using a method that resembles the natural route of infection. No clinical signs of FMD, viremia, or viral shedding in nasal swabs were found in the Ad5-boIFN-λ3-treated animals for at least 9 days postchallenge. Our results indicate that boIFN-λ3 plays a critical role in the innate immune response of cattle against FMDV. To this end, this work represents the most successful biotherapeutic strategy so far tested to control FMDV in cattle. PMID:22301155
Fattal, Ittai; Shental, Noam; Ben-Dor, Shifra; Molad, Yair; Gabrielli, Armando; Pokroy-Shapira, Elisheva; Oren, Shirly; Livneh, Avi; Langevitz, Pnina; Zandman-Goddard, Gisele; Sarig, Ofer; Margalit, Raanan; Gafter, Uzi; Domany, Eytan; Cohen, Irun R
2015-11-01
In the course of investigating anti-DNA autoantibodies, we examined IgM and IgG antibodies to poly-G and other oligonucleotides in the sera of healthy persons and those diagnosed with systemic lupus erythematosus (SLE), scleroderma (SSc), or pemphigus vulgaris (PV); we used an antigen microarray and informatic analysis. We now report that all of the 135 humans studied, irrespective of health or autoimmune disease, manifested relatively high amounts of IgG antibodies binding to the 20-mer G oligonucleotide (G20); no participants entirely lacked this reactivity. IgG antibodies to homo-nucleotides A20, C20 or T20 were present only in the sera of SLE patients who were positive for antibodies to dsDNA. The prevalence of anti-G20 antibodies led us to survey human, mouse and Drosophila melanogaster (fruit fly) genomes for runs of T20 and G20 or more: runs of T20 appear > 170,000 times compared with only 93 runs of G20 or more in the human genome; of these runs, 40 were close to brain-associated genes. Mouse and fruit fly genomes showed significantly lower T20/G20 ratios than did human genomes. Moreover, sera from both healthy and SLE mice contained relatively little or no anti-G20 antibodies; so natural anti-G20 antibodies appear to be characteristic of humans. These unexpected observations invite investigation of the immune functions of anti-G20 antibodies in human health and disease and of runs of G20 in the human genome.
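The genome survey described above amounts to counting maximal homo-nucleotide runs of a minimum length. A minimal sketch of that kind of scan (illustrative only; the toy sequence below is made up and this is not the authors' pipeline) can be written with a greedy regular expression, since each non-overlapping greedy match is a maximal run:

```python
import re

def count_runs(seq, base, min_len=20):
    """Count maximal runs of `base` at least min_len long in a sequence.
    Greedy, non-overlapping matching means each match is a full run."""
    return len(re.findall(f"{base}{{{min_len},}}", seq))

seq = "A" * 5 + "G" * 25 + "T" * 30 + "G" * 19
count_runs(seq, "G")  # → 1 (only the 25-mer G run reaches 20; the 19-mer does not)
count_runs(seq, "T")  # → 1
```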
NASA Technical Reports Server (NTRS)
Herman, G. C.
1986-01-01
A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise, which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm does reduce the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbances, and thick atmospheres. The results for many different operating conditions are presented.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithm performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V presenting integration, test status, and state analysis.
Finally, section VI
NASA Astrophysics Data System (ADS)
Bechet, P.; Mitran, R.; Munteanu, M.
2013-08-01
Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of heart rate was measured using a classic measurement system through direct contact.
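A minimal sketch of MUSIC for a single tone may clarify the "dimensioning the signal subspace" point: the autocorrelation matrix is split into a signal subspace (dimension p, with p = 2 for one real sinusoid) and a noise subspace, and the pseudospectrum peaks where a candidate steering vector is orthogonal to the noise subspace. The parameter values below (m, p, sampling rate, grid) are assumptions for illustration, not the paper's tuned settings:

```python
import numpy as np

def music_spectrum(x, freqs, fs, p=2, m=40):
    """MUSIC pseudospectrum sketch. p = signal-subspace dimension
    (2 for one real sinusoid); m = correlation-matrix order (assumed)."""
    snap = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snap.T @ snap / len(snap)                 # sample autocorrelation matrix
    _, V = np.linalg.eigh(R)                      # eigenvalues ascending
    En = V[:, :m - p]                             # noise-subspace eigenvectors
    t = np.arange(m) / fs
    A = np.exp(2j * np.pi * np.outer(freqs, t))   # candidate steering vectors
    # Peaks occur where a steering vector is orthogonal to the noise subspace.
    return 1.0 / np.linalg.norm(A.conj() @ En, axis=1) ** 2

# A 72 beats/min "cardiac" tone (1.2 Hz) in noise, 20 s at 25 Hz sampling.
fs, f0 = 25.0, 1.2
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
freqs = np.arange(0.5, 3.0, 0.01)
f_est = freqs[np.argmax(music_spectrum(x, freqs, fs))]   # close to 1.2 Hz
```

Oversizing p admits noise eigenvectors into the signal subspace; undersizing it suppresses real components, which is why the subspace dimension matters for short observation windows.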
Technology Transfer Automated Retrieval System (TEKTRAN)
This pilot study tested whether varying protein source and quantity in a reduced energy diet would result in significant differences in weight, body composition, and renin angiotensin aldosterone system activity in midlife adults. Eighteen subjects enrolled in a 5 month weight reduction study, invol...
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser [Intro. Theory Computation(1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
NASA Astrophysics Data System (ADS)
Tamascelli, D.; Rosenbach, R.; Plenio, M. B.
2015-06-01
When the amount of entanglement in a quantum system is limited, the relevant dynamics of the system is restricted to a very small part of the state space. When restricted to this subspace, the description of the system becomes efficient in the system size. A class of algorithms, exemplified by the time-evolving block-decimation (TEBD) algorithm, makes use of this observation by selecting the relevant subspace through a decimation technique relying on the singular value decomposition (SVD). In these algorithms, the complexity of each time-evolution step is dominated by the SVD. Here we show that, by applying a randomized version of the SVD routine (RRSVD), the power law governing the computational complexity of TEBD is lowered by one degree, resulting in a considerable speed-up. We exemplify the potential gains in efficiency using some real-world examples to which TEBD can be successfully applied and demonstrate that for those systems RRSVD delivers results as accurate as state-of-the-art deterministic SVD routines.
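The abstract names RRSVD without detailing it; a generic randomized truncated SVD in the Halko-Martinsson-Tropp style (a sketch of the idea, not the authors' exact routine) illustrates where the complexity reduction comes from: the expensive full SVD is replaced by a random range projection followed by an SVD of a much smaller matrix:

```python
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Randomized truncated SVD sketch (Halko-Martinsson-Tropp style).
    Cost is dominated by A @ Omega, O(m*n*(k+oversample)), instead of the
    O(m*n*min(m, n)) of a full deterministic SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A)
    B = Q.T @ A                             # small (k+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# On an exactly rank-5 matrix the leading singular values match LAPACK's.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = rsvd(A, k=5)
np.allclose(s, np.linalg.svd(A, compute_uv=False)[:5])  # → True
```

In TEBD the matrices being decomposed have rapidly decaying singular values, which is exactly the regime where such randomized range finders are accurate.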
Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong
2014-01-01
Purpose To prospectively study CT dose reduction using the “prior image constrained compressed sensing” (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images with FBP served as reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy*cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3) compared with 493.7 mGy*cm (median 345.8), 12.9 mGy (median 7.9), and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). RD-PICCS image quality score was 2.8±0.5, improved over RD-FBP (1.7±0.7) and RD-ASIR (1.9±0.8) (p<0.001), but lower than SD (3.5±0.5) (p<0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136
De Backer, Charlotte J S; Hudders, Liselot
2014-01-01
This study explores vegetarians' and semi-vegetarians' motives for reducing their meat intake. Participants are categorized as vegetarians (remove all meat from their diet); semi-vegetarians (significantly reduce meat intake: at least three days a week); or light semi-vegetarians (mildly reduce meat intake: once or twice a week). Most differences appear between vegetarians and both groups of semi-vegetarians. Animal-rights and ecological concerns, together with taste preferences, predict vegetarianism, while an increase in health motives increases the odds of being semi-vegetarian. Even within each group, subgroups with different motives appear, and it is recommended that future researchers pay more attention to these differences. PMID:25357269
NASA Technical Reports Server (NTRS)
Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David
2015-01-01
) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure
Yarde, Danielle N; Lorenzo-Arteaga, Kristina; Corley, Kevin P; Cabrera, Monina; Sarvetnick, Nora E
2014-10-01
Type 1 diabetes (T1D) is a chronic disease caused by autoimmune destruction of insulin-producing pancreatic β-cells. T1D is typically diagnosed in children, but information regarding immune cell subsets in juveniles with T1D is scarce. Therefore, we studied various lymphocytic populations found in the peripheral blood of juveniles with T1D compared to age-matched controls (ages 2-17). One population of interest is the CD28(-) CD8(+) T cell subset, which are late-differentiated cells also described as suppressors. These cells are altered in a number of disease states and have been shown to be reduced in adults with T1D. We found that the proportion of CD28(-) cells within the CD8(+) T cell population is significantly reduced in juvenile type 1 diabetics. Furthermore, this reduction is not correlated with age in T1D juveniles, although a significant negative correlation between proportion CD28(-) CD8(+) T cells and age was observed in the healthy controls. Finally, correlation analysis revealed a significant and negative correlation between the proportion of CD28(-) CD8(+) T cells and T1D disease duration. These findings show that the CD28(-) CD8(+) T cell population is perturbed following onset of disease and may prove to be a valuable marker for monitoring the progression of T1D.
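The correlation analyses reported above rest on the Pearson correlation coefficient; a minimal self-contained sketch (the sample data below are hypothetical, invented purely to illustrate a negative trend like the one described):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient: covariance of x and y divided by
    the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

# Hypothetical data: %CD28- CD8+ T cells falling with disease duration (years).
duration = [0.5, 1, 2, 4, 8]
pct_cd28neg = [22, 20, 17, 12, 5]
pearson_r(duration, pct_cd28neg)  # strongly negative (near -1)
```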
Almeida, Jorge R. C.; Akkal, Dalila; Hassel, Stefanie; Travis, Michael J.; Banihashemi, Layla; Kerr, Natalie; Kupfer, David J.; Phillips, Mary L.
2009-01-01
Neuroimaging studies in bipolar disorder report gray matter volume (GMV) abnormalities in neural regions implicated in emotion regulation, including ventral/orbital medial prefrontal cortex (OMPFC) GMV decreases and, more inconsistently, amygdala GMV increases. We aimed to examine OMPFC and amygdala GMV in bipolar disorder, type 1 patients (BPI) versus healthy control participants (HC), and examine potential confounding effects of gender, clinical and illness history variables and psychotropic medication upon any group differences that were demonstrated in OMPFC and amygdala GMV. Images were acquired from 27 BPI (17 euthymic, 10 depressed) and 28 age- and gender-matched HC in a 3T Siemens scanner. Data were analyzed with SPM5 using voxel-based morphometry to first examine main effects of diagnostic group and gender upon whole brain (WB) GMV. Post hoc analyses were subsequently performed to examine the extent to which clinical and illness history variables and psychotropic medication contributed to GMV abnormalities in BPI in a priori and non-a priori regions demonstrated by the above VBM analyses. Here, SPSS was used to examine the effects of these variables on magnitude of GMV in these a priori and non-a priori regions in BPI versus HC. BPI showed reduced GMV in two regions established a priori: bilateral posteromedial rectal gyrus (PMRG), but no amygdala GMV abnormalities. BPI also showed reduced GMV in two non-a priori regions: left parahippocampal gyrus and left putamen. For left PMRG GMV, there was a significant group by gender by trait anxiety interaction. GMV was significantly reduced in male low trait anxiety BPI versus male low trait anxiety HC, and in high versus low trait anxiety male BPI. Our findings show, in BPI, significant effects of male gender and high trait anxiety on GMV reduction in left PMRG, part of the OMPFC medial prefrontal network implicated in visceromotor and emotion regulation. PMID:19101126
Wright, H F; Hall, S; Hames, A; Hardiman, J; Mills, R; Mills, D S
2015-08-01
This study describes the impact of pet dogs on stress of primary carers of children with Autism Spectrum Disorder (ASD). Stress levels of 38 primary carers acquiring a dog and 24 controls not acquiring a dog were sampled at: Pre-intervention (17 weeks before acquiring a dog), post-intervention (3-10 weeks after acquisition) and follow-up (25-40 weeks after acquisition), using the Parenting Stress Index. Analysis revealed significant improvements in the intervention compared to the control group for Total Stress, Parental Distress and Difficult Child. A significant number of parents in the intervention group moved from clinically high to normal levels of Parental Distress. The results highlight the potential of pet dogs to reduce stress in primary carers of children with an ASD. PMID:25832799
Kirabo, Annet; Park, Sung O.; Wamsley, Heather L.; Gali, Meghanath; Baskin, Rebekah; Reinhard, Mary K.; Zhao, Zhizhuang J.; Bisht, Kirpal S.; Keserű, György M.; Cogle, Christopher R.; Sayeski, Peter P.
2013-01-01
Philadelphia chromosome–negative myeloproliferative neoplasms, including polycythemia vera, essential thrombocytosis, and myelofibrosis, are disorders characterized by abnormal hematopoiesis. Among these myeloproliferative neoplasms, myelofibrosis has the most unfavorable prognosis. Furthermore, currently available therapies for myelofibrosis have little to no efficacy in the bone marrow and hence, are palliative. We recently developed a Janus kinase 2 (Jak2) small molecule inhibitor called G6 and found that it exhibits marked efficacy in a xenograft model of Jak2-V617F–mediated hyperplasia and a transgenic mouse model of Jak2-V617F–mediated polycythemia vera/essential thrombocytosis. However, its efficacy in Jak2-mediated myelofibrosis has not previously been examined. Here, we hypothesized that G6 would be efficacious in Jak2-V617F–mediated myelofibrosis. To test this, mice expressing the human Jak2-V617F cDNA under the control of the vav promoter were administered G6 or vehicle control solution, and efficacy was determined by measuring parameters within the peripheral blood, liver, spleen, and bone marrow. We found that G6 significantly reduced extramedullary hematopoiesis in the liver and splenomegaly. In the bone marrow, G6 significantly reduced pathogenic Jak/STAT signaling by 53%, megakaryocytic hyperplasia by 70%, and the Jak2 mutant burden by 68%. Furthermore, G6 significantly improved the myeloid to erythroid ratio and significantly reversed the myelofibrosis. Collectively, these results indicate that G6 is efficacious in Jak2-V617F–mediated myelofibrosis, and given its bone marrow efficacy, it may alter the natural history of this disease. PMID:22796437
NASA Technical Reports Server (NTRS)
Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen
2015-01-01
integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software, compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems.
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW
Lee, Bor-Jen; Yen, Chi-Hua; Hsu, Hui-Chen; Lin, Jui-Yuan; Hsia, Simon; Lin, Ping-Ting
2012-10-01
Coronary artery disease (CAD) is the leading cause of death worldwide. The purpose of this study was to investigate the relationship between plasma levels of coenzyme Q10 and vitamin B-6 and the risk of CAD. Patients with at least 50% stenosis of one major coronary artery identified by cardiac catheterization were assigned to the case group (n = 45). The control group (n = 89) comprised healthy individuals with normal blood biochemistry. The plasma concentrations of coenzyme Q10 and vitamin B-6 (pyridoxal 5'-phosphate) and the lipid profiles of the participants were measured. Subjects with CAD had significantly lower plasma levels of coenzyme Q10 and vitamin B-6 compared to the control group. The plasma coenzyme Q10 concentration (β = 1.06, P = .02) and the ratio of coenzyme Q10 to total cholesterol (β = .28, P = .01) were positively correlated with vitamin B-6 status. Subjects with higher coenzyme Q10 concentration (≥516.0 nmol/L) had a significantly lower risk of CAD, even after adjusting for the risk factors for CAD. Subjects with higher pyridoxal 5'-phosphate concentration (≥59.7 nmol/L) also had a significantly lower risk of CAD, but the relationship lost its statistical significance after adjusting for the risk factors of CAD. There was a significant correlation between the plasma levels of coenzyme Q10 and vitamin B-6 and a reduced risk of CAD. Further study is needed to examine the benefits of administering coenzyme Q10 in combination with vitamin B-6 to CAD patients, especially those with low coenzyme Q10 level.
Hoogsteen, Ilse J.; Pop, Lucas A.M.; Marres, Henri A.M.; Hoogen, Franciscus J.A. van den; Kaanders, Johannes H.A.M.
2006-01-01
Purpose: To evaluate the prognostic significance of hemoglobin (Hb) levels measured before and during treatment with accelerated radiotherapy with carbogen and nicotinamide (ARCON). Methods and Materials: Two hundred fifteen patients with locally advanced tumors of the head and neck were included in a phase II trial of ARCON. This treatment regimen combines accelerated radiotherapy for reduction of repopulation with carbogen breathing and nicotinamide to reduce hypoxia. In these patients, Hb levels were measured before, during, and after radiotherapy. Results: Preirradiation and postirradiation Hb levels were available for 206 and 195 patients respectively. Hb levels below normal were most frequently seen among patients with T4 (p < 0.001) and N2 (p < 0.01) disease. Patients with a larynx tumor had significantly higher Hb levels (p < 0.01) than other tumor sites. During radiotherapy, 69 patients experienced a decrease in Hb level. In a multivariate analysis there was no prognostic impact of Hb level on locoregional control, disease-free survival, and overall survival. Primary tumor site was independently prognostic for locoregional control (p = 0.018), and gender was the only prognostic factor for disease-free and overall survival (p < 0.05). High locoregional control rates were obtained for tumors of the larynx (77%) and oropharynx (72%). Conclusion: Hemoglobin level was not found to be of prognostic significance for outcome in patients with squamous cell carcinoma of the head and neck after oxygen-modifying treatment with ARCON.
Musavian, Hanieh S; Krebs, Niels H; Nonboe, Ulf; Corry, Janet E L; Purnell, Graham
2014-04-17
Steam or hot water decontamination treatment of broiler carcasses is hampered by process limitations due to prolonged treatment times and adverse changes to the epidermis. In this study, a combination of steam with ultrasound (SonoSteam®) was investigated on naturally contaminated broilers that were processed at conventional slaughter speeds of 8,500 birds per hour in a Danish broiler plant. Industrial-scale SonoSteam equipment was installed in the evisceration room, before the inside/outside carcass washer. The SonoSteam treatment was evaluated in two separate trials performed on two different dates. Numbers of naturally occurring Campylobacter spp. and TVC were determined from paired samples of skin excised from opposite sides of the breast of the same carcass, before and after treatments. Sampling was performed at two different points on the line: i) before and after the SonoSteam treatment and ii) before the SonoSteam treatment and after 80 min of air chilling. A total of 44 carcasses were examined in the two trials. Results from the first trial showed that the mean initial Campylobacter contamination level of 2.35 log₁₀ CFU was significantly reduced (n=12, p<0.001) to 1.40 log₁₀ CFU after treatment. A significant reduction (n=11, p<0.001) was also observed with samples analyzed before SonoSteam treatment (2.64 log₁₀ CFU) and after air chilling (1.44 log₁₀ CFU). In the second trial, significant reductions (n=10, p<0.05) were obtained for carcasses analyzed before (mean level of 2.23 log₁₀ CFU) and after the treatment (mean level of 1.36 log₁₀ CFU). Significant reductions (n=11, p<0.01) were also found for Campylobacter numbers analyzed before the SonoSteam treatment (2.02 log₁₀ CFU) and after the air chilling treatment (1.37 log₁₀ CFU). The effect of air chilling without SonoSteam treatment was determined using 12 carcasses pre- and postchill. Results showed insignificant reductions of 0.09 log₁₀ from a mean initial level of
Oguntibeju, Oluwafemi O; Meyer, Samantha; Aboua, Yapo G; Goboza, Mediline
2016-01-01
Background. Hypoxis hemerocallidea is a native plant that grows in the Southern African regions and is well known for its beneficial medicinal effects in the treatment of diabetes, cancer, and high blood pressure. Aim. This study evaluated the effects of Hypoxis hemerocallidea on oxidative stress biomarkers, hepatic injury, and other selected biomarkers in the liver and kidneys of healthy nondiabetic and streptozotocin- (STZ-) induced diabetic male Wistar rats. Materials and Methods. Rats were injected intraperitoneally with 50 mg/kg of STZ to induce diabetes. An aqueous solution of the plant extract, Hypoxis hemerocallidea (200 mg/kg or 800 mg/kg), was administered orally daily for 6 weeks. Antioxidant activities were analysed using a Multiskan Spectrum plate reader while other serum biomarkers were measured using the RANDOX chemistry analyser. Results. Both dosages (200 mg/kg and 800 mg/kg) of Hypoxis hemerocallidea significantly reduced the blood glucose levels in STZ-induced diabetic groups. Activities of liver enzymes were increased in the diabetic control and in the diabetic group treated with 800 mg/kg, whereas the 200 mg/kg dosage ameliorated hepatic injury. In the hepatic tissue, the oxygen radical absorbance capacity (ORAC), ferric reducing antioxidant power (FRAP), catalase, and total glutathione were reduced in the diabetic control group. However, treatment with both doses improved the antioxidant status. The FRAP and the catalase activities in the kidney were elevated in the STZ-induced diabetic group treated with 800 mg/kg of the extract, possibly due to compensatory responses. Conclusion. Hypoxis hemerocallidea demonstrated antihyperglycemic and antioxidant effects especially in the liver tissue. PMID:27403200
Papoiu, Alexandru DP; Chaudhry, Hunza; Hayes, Erin C; Chan, Yiong-Huak; Herbst, Kenneth D
2015-01-01
Background Itch is one of the most frequent skin complaints and its treatment is challenging. From a neurophysiological perspective, two distinct peripheral and spinothalamic pathways have been described for itch transmission: a histaminergic pathway and a nonhistaminergic pathway mediated by protease-activated receptors (PARs) 2 and 4. The nonhistaminergic itch pathway can be activated exogenously by spicules of cowhage, a tropical plant that releases a cysteine protease named mucunain that binds to and activates PAR2 and PAR4. Purpose This study was conducted to assess the antipruritic effect of a novel over-the-counter (OTC) steroid-free topical hydrogel formulation, TriCalm®, in reducing itch intensity and duration when itch was induced with cowhage, and to compare it with two other commonly used OTC anti-itch drugs. Study participants and methods This double-blinded, vehicle-controlled, randomized, crossover study recorded itch intensity and duration in 48 healthy subjects before and after skin treatment with TriCalm hydrogel, 2% diphenhydramine, 1% hydrocortisone, and hydrogel vehicle, used as a vehicle control. Results TriCalm hydrogel significantly reduced the peak intensity and duration of cowhage-induced itch when compared to the control itch curve, and was significantly superior to the two other OTC antipruritic agents and its own vehicle in antipruritic effect. TriCalm hydrogel was eight times more effective than 1% hydrocortisone and almost six times more effective than 2% diphenhydramine in antipruritic action, as evaluated by the reduction of area under the curve. Conclusion TriCalm hydrogel has a robust antipruritic effect against nonhistaminergic pruritus induced via the PAR2 pathway, and therefore it could represent a promising treatment option for itch. PMID:25941445
Liu, Gangjun; Tan, Ou; Gao, Simon S.; Pechauer, Alex D.; Lee, ByungKun; Lu, Chen D.; Fujimoto, James G.; Huang, David
2015-01-01
We propose methods to align interferograms affected by trigger jitter to a reference interferogram based on the information (amplitude/phase) at a fixed-pattern noise location to reduce residual fixed-pattern noise and improve the phase stability of swept source optical coherence tomography (SS-OCT) systems. One proposed method achieved this by introducing a wavenumber shift (k-shift) in the interferograms of interest and searching for the k-shift that minimized the fixed-pattern noise amplitude. The other method calculated the relative k-shift using the phase information at the residual fixed-pattern noise location. Repeating this wavenumber alignment procedure for all A-lines of interest produced fixed-pattern noise free and phase stable OCT images. A system incorporating these correction routines was used for human retina OCT and Doppler OCT imaging. The results from the two methods were compared, and it was found that the intensity-based method provided better results. PMID:25969023
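The intensity-based alignment described above can be sketched as a brute-force search over integer wavenumber (k) shifts, keeping the shift that minimizes the residual fixed-pattern noise amplitude after reference subtraction. This is an illustrative reconstruction under stated assumptions, not the authors' code: the function name, the reference-subtraction step, and the search range are all assumptions.

```python
import numpy as np

def kshift_align(interferogram, reference, fpn_index, max_shift=10):
    """Find the integer k-shift that best aligns one A-line interferogram
    to a reference, judged by the residual fixed-pattern noise amplitude
    at a known depth index after subtracting the reference (sketch only)."""
    best_shift, best_amp = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        residual = np.roll(interferogram, shift) - reference  # candidate alignment
        amp = np.abs(np.fft.fft(residual))[fpn_index]         # FPN peak amplitude
        if amp < best_amp:
            best_amp, best_shift = amp, shift
    return best_shift
```

Repeating this search per A-line (as the abstract describes) would yield a jitter-corrected image; the phase-based variant would instead compute the shift directly from the phase at the fixed-pattern noise location.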
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Rivinius, Rasmus; Helmschrott, Matthias; Ruhparwar, Arjang; Schmack, Bastian; Erbel, Christian; Gleissner, Christian A; Akhavanpoor, Mohammadreza; Frankenstein, Lutz; Darche, Fabrice F; Schweizer, Patrick A; Thomas, Dierk; Ehlermann, Philipp; Bruckner, Tom; Katus, Hugo A; Doesch, Andreas O
2016-01-01
Background Amiodarone is a frequently used antiarrhythmic drug in patients with end-stage heart failure. Given its long half-life, pre-transplant use of amiodarone has been controversially discussed, with divergent results regarding morbidity and mortality after heart transplantation (HTX). Aim The aim of this study was to investigate the effects of long-term use of amiodarone before HTX on early post-transplant atrial fibrillation (AF) and mortality after HTX. Methods Five hundred and thirty patients (age ≥18 years) receiving HTX between June 1989 and December 2012 were included in this retrospective single-center study. Patients with long-term use of amiodarone before HTX (≥1 year) were compared to those without long-term use (none or <1 year of amiodarone). Primary outcomes were early post-transplant AF and mortality after HTX. The Kaplan–Meier estimator using log-rank tests was applied for freedom from early post-transplant AF and survival. Results Of the 530 patients, 74 (14.0%) received long-term amiodarone therapy, with a mean duration of 32.3±26.3 months. Mean daily dose was 223.0±75.0 mg. Indications included AF, Wolff–Parkinson–White syndrome, ventricular tachycardia, and ventricular fibrillation. Patients with long-term use of amiodarone before HTX had significantly lower rates of early post-transplant AF (P=0.0105). Further, Kaplan–Meier analysis of freedom from early post-transplant AF showed significantly lower rates of AF in this group (P=0.0123). There was no statistically significant difference between patients with and without long-term use of amiodarone prior to HTX in 1-year (P=0.8596), 2-year (P=0.8620), 5-year (P=0.2737), or overall follow-up mortality after HTX (P=0.1049). Moreover, Kaplan–Meier survival analysis showed no statistically significant difference in overall survival (P=0.1786). Conclusion Long-term use of amiodarone in patients before HTX significantly reduces early post-transplant AF and is not associated with increased mortality after HTX.
Anselmi, Mariella; Buonfrate, Dora; Guevara Espinoza, Angel; Prandi, Rosanna; Marquez, Monica; Gobbo, Maria; Montresor, Antonio; Albonico, Marco; Racines Orbe, Marcia; Bisoffi, Zeno
2015-01-01
Objectives To evaluate the effect of ivermectin mass drug administration on strongyloidiasis and other soil-transmitted helminthiases. Methods We conducted a retrospective analysis of data collected in Esmeraldas (Ecuador) during surveys conducted in areas where ivermectin was annually administered to the entire population for the control of onchocerciasis. Data from 5 surveys, conducted between 1990 (before the start of the distribution of ivermectin) and 2013 (six years after the interruption of the intervention), were analyzed. The surveys also comprised areas where ivermectin was not distributed because onchocerciasis was not endemic. Different laboratory techniques were used in the different surveys (direct fecal smear, formol-ether concentration, IFAT and IVD ELISA for Strongyloides stercoralis). Results In the areas where ivermectin was distributed, the strongyloidiasis prevalence fell from 6.8% in 1990 to zero in 1996 and 1999. In 2013, prevalence in children was zero with stool examination and 1.3% with serology; in adults it was 0.7% and 2.7%, respectively. In areas not covered by ivermectin distribution the prevalence was 23.5% and 16.1% in 1996 and 1999, respectively. In 2013, the prevalence was 0.6% with fecal exam and 9.3% with serology in children, and 2.3% and 17.9% in adults. Regarding other soil-transmitted helminthiases: in areas where ivermectin was distributed the prevalence of T. trichiura was significantly reduced, while A. lumbricoides and hookworms were seemingly unaffected. Conclusions Periodic mass distribution of ivermectin had a significant impact on the prevalence of strongyloidiasis, less on trichuriasis and apparently no effect on ascariasis and hookworm infections. PMID:26540412
Mai, Volker; Ukhanova, Maria; Reinhard, Mary K; Li, Manrong; Sulakvelidze, Alexander
2015-01-01
We used a mouse model to establish safety and efficacy of a bacteriophage cocktail, ShigActive™, in reducing fecal Shigella counts after oral challenge with a susceptible strain. Groups of inbred C57BL/6J mice challenged with Shigella sonnei strain S43-NalAcR were treated with a phage cocktail (ShigActive™) composed of 5 lytic Shigella bacteriophages and ampicillin. The treatments were administered (i) 1 h after, (ii) 3 h after, (iii) 1 h before and after, and (iv) 1 h before bacterial challenge. The treatment regimens elicited a 10- to 100-fold reduction in the CFUs of the challenge strain in fecal and cecum specimens compared to untreated control mice (P < 0.05). ShigActive™ treatment was at least as effective as treatment with ampicillin but had a significantly smaller impact on the gut microbiota. Long-term safety studies did not identify any side effects or distortions in overall gut microbiota associated with bacteriophage administration. Shigella phages may be therapeutically effective in a “classical phage therapy” approach, at least during the early stages after Shigella ingestion. Oral prophylactic “phagebiotic” administration of lytic bacteriophages may help to maintain a healthy gut microbiota by killing specifically targeted bacterial pathogens in the GI tract, without deleterious side effects and without altering the normal gut microbiota. PMID:26909243
Marzano, Shin-Yi Lee; Hobbs, Houston A.; Nelson, Berlin D.; Hartman, Glen L.; Eastburn, Darin M.; McCoppin, Nancy K.
2015-01-01
ABSTRACT A recombinant strain of Sclerotinia sclerotiorum hypovirus 2 (SsHV2) was identified from a North American Sclerotinia sclerotiorum isolate (328) from lettuce (Lactuca sativa L.) by high-throughput sequencing of total RNA. The 5′- and 3′-terminal regions of the genome were determined by rapid amplification of cDNA ends. The assembled nucleotide sequence was up to 92% identical to two recently reported SsHV2 strains but contained a deletion near its 5′ terminus of more than 1.2 kb relative to the other SsHV2 strains and an insertion of 524 nucleotides (nt) that was distantly related to Valsa ceratosperma hypovirus 1. This suggests that the new isolate is a heterologous recombinant of SsHV2 with a yet-uncharacterized hypovirus. We named the new strain Sclerotinia sclerotiorum hypovirus 2 Lactuca (SsHV2L) and deposited the sequence in GenBank with accession number KF898354. Sclerotinia sclerotiorum isolate 328 was coinfected with a strain of Sclerotinia sclerotiorum endornavirus 1 and was debilitated compared to cultures of the same isolate that had been cured of virus infection by cycloheximide treatment and hyphal tipping. To determine whether SsHV2L alone could induce hypovirulence in S. sclerotiorum, a full-length cDNA of the 14,538-nt viral genome was cloned. Transcripts corresponding to the viral RNA were synthesized in vitro and transfected into a virus-free isolate of S. sclerotiorum, DK3. Isolate DK3 transfected with SsHV2L was hypovirulent on soybean and lettuce and exhibited delayed maturation of sclerotia relative to virus-free DK3, completing Koch's postulates for the association of hypovirulence with SsHV2L. IMPORTANCE A cosmopolitan fungus, Sclerotinia sclerotiorum infects more than 400 plant species and causes a plant disease known as white mold that produces significant yield losses in major crops annually. Mycoviruses have been used successfully to reduce losses caused by fungal plant pathogens, but definitive relationships between
Dise, J; Liang, X; Lin, L; Teo, B
2014-06-15
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR Brachytherapy of Gynecologic Cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted in the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatic and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 minutes to 43 minutes with a mean digitization time of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization method reduces treatment planning time and provides a means for adaptive re-planning.
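The region-growing step at the core of this tool can be illustrated as a minimal 3D flood fill from a seed voxel, collecting connected voxels above an intensity threshold. This is a generic sketch only: the spline catheter constraint, contrast preprocessing, and interactive seed selection of the authors' tool are omitted, and the function name and fixed threshold are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, threshold):
    """Grow a connected region of voxels with intensity >= threshold,
    starting from `seed` (z, y, x), using 6-connectivity in 3D."""
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < image.shape[0] and 0 <= ny < image.shape[1]
                    and 0 <= nx < image.shape[2]
                    and not grown[nz, ny, nx]
                    and image[nz, ny, nx] >= threshold):
                grown[nz, ny, nx] = True  # accept bright connected voxel
                queue.append((nz, ny, nx))
    return grown
```

In the catheter-digitization setting, a spline fitted to the partially grown region would additionally reject candidate voxels far from the predicted catheter path, which is what prevents inter-catheter cross-over at artifacts.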
Vasily, David B; Bradle, Jeanna; Rudio, Catharine; Calderhead, R Glen
2016-01-01
Background and Aims: For any committed athlete, getting back to conditioning and participation post-injury (return to play [RTP]) needs to be as swift as possible. The effects of near-infrared light-emitting diode (LED) therapy on pain control, blood flow enhancement and relaxation of muscle spasm (all aspects in the treatment of musculoskeletal injury) have attracted attention. The present pilot study was undertaken to assess the role of 830 nm LED phototherapy in safely accelerating RTP in injured university athletes. Subjects and Methods: Over a 15-month period, a total of 395 injuries including sprains, strains, ligament damage, tendonitis and contusions were treated with 1,669 sessions of 830 nm LED phototherapy (mean of 4.3 treatments per injury, range 2–6). Efficacy was measured with pain attenuation on a visual analog scale (VAS), and the RTP period was compared with the historically anticipated RTP with conventional therapeutic intervention. Results: A full set of treatment sessions and follow-up data was recorded in 65 informed and consenting subjects, who achieved pain relief on the VAS of up to 6 points in 2–6 sessions. The average LED-mediated RTP in the 65 subjects was significantly shorter at 9.6 days, compared with the mean anticipated RTP of 19.23 days (p = 0.0066, paired two-tailed Student's t-test). A subjective satisfaction survey was carried out among the 112 students with injuries incurred from January to May, 2015. Eighty-eight (78.5%) were either very satisfied or satisfied, and only 8 (7.2%) were dissatisfied. Conclusions: For any motivated athlete, RTP may be the most important factor postinjury based on the resolution of pain and inflammation and repair to tissue trauma. 830 nm LED phototherapy significantly and safely reduced the RTP in dedicated university athletes over a wide range of injuries with no adverse events. One limitation of the present study was the subjective nature of the assessments, and the lack of any
Courtin, Fabrice; Camara, Mamadou; Rayaisse, Jean-Baptiste; Kagbadouno, Moise; Dama, Emilie; Camara, Oumou; Traoré, Ibrahima S.; Rouamba, Jérémi; Peylhard, Moana; Somda, Martin B.; Leno, Mamadou; Lehane, Mike J.; Torr, Steve J.; Solano, Philippe; Jamonneau, Vincent; Bucheton, Bruno
2015-01-01
Background Control of gambiense sleeping sickness, a neglected tropical disease targeted for elimination by 2020, relies mainly on mass screening of populations at risk and treatment of cases. This strategy is however challenged by the existence of undetected reservoirs of parasites that contribute to the maintenance of transmission. In this study, performed in the Boffa disease focus of Guinea, we evaluated the value of adding vector control to medical surveys and measured its impact on disease burden. Methods The focus was divided into two parts (screen and treat in the western part; screen and treat plus vector control in the eastern part) separated by the Rio Pongo river. Population census and baseline entomological data were collected from the entire focus at the beginning of the study and insecticide impregnated targets were deployed on the eastern bank only. Medical surveys were performed in both areas in 2012 and 2013. Findings In the vector control area, there was an 80% decrease in tsetse density, resulting in a significant decrease of human tsetse contacts, and a decrease of disease prevalence (from 0.3% to 0.1%; p=0.01), and an almost nil incidence of new infections (<0.1%). In contrast, incidence was 10 times higher in the area without vector control (>1%, p<0.0001) with a disease prevalence increasing slightly (from 0.5 to 0.7%, p=0.34). Interpretation Combining medical and vector control was decisive in reducing T. b. gambiense transmission and in speeding up progress towards elimination. Similar strategies could be applied in other foci. PMID:26267667
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
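A study like this can be sketched as a hysteresis start/stop controller integrated over a wind-speed time series: the turbine starts above one threshold and stops below a lower one, and total energy depends on how those thresholds are chosen. Everything below is an illustrative placeholder under stated assumptions (names, thresholds, time step, and the linear power curve), not the Sandia 17-m VAWT model.

```python
def simulate_energy(wind, power_curve, start_v, stop_v, dt_hours=1.0):
    """Integrate energy produced by a hysteresis start/stop controller.
    The turbine starts when wind >= start_v and stops when wind < stop_v
    (start_v > stop_v to avoid rapid cycling). Sketch only."""
    running, energy = False, 0.0
    for v in wind:
        if running and v < stop_v:
            running = False          # wind dropped below the stop threshold
        elif not running and v >= start_v:
            running = True           # wind reached the start threshold
        if running:
            energy += power_curve(v) * dt_hours  # power x time step
    return energy
```

Sweeping `start_v` over a long wind record shows the trade-off the study quantified: setting the start threshold too high keeps the turbine offline during usable winds and can significantly reduce total energy production.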
Geisler, Matt; Kleczkowski, Leszek A; Karpinski, Stanislaw
2006-02-01
Short motifs of many cis-regulatory elements (CREs) can be found in the promoters of most Arabidopsis genes, and this raises the question of how their presence can confer specific regulation. We developed a universal algorithm to test the biological significance of CREs by first identifying every Arabidopsis gene with a CRE and then statistically correlating the presence or absence of the element with the gene expression profile on multiple DNA microarrays. This algorithm was successfully verified for previously characterized abscisic acid, ethylene, sucrose and drought responsive CREs in Arabidopsis, showing that the presence of these elements indeed correlates with treatment-specific gene induction. Later, we used standard motif sampling methods to identify 128 putative motifs induced by excess light, reactive oxygen species and sucrose. Our algorithm was able to filter 20 out of 128 novel CREs which significantly correlated with gene induction by either heat, reactive oxygen species and/or sucrose. The position, orientation and sequence specificity of CREs was tested in silico by analyzing the expression of genes with naturally occurring sequence variations. In three novel CREs the forward orientation correlated with sucrose induction and the reverse orientation with sucrose suppression. The functionality of the predicted novel CREs was experimentally confirmed using Arabidopsis cell-suspension cultures transformed with short promoter fragments or artificial promoters fused with the GUS reporter gene. Our genome-wide analysis opens up new possibilities for in silico verification of the biological significance of newly discovered CREs, and allows for subsequent selection of such CREs for experimental studies.
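The statistical core of such an approach (correlating motif presence or absence with expression across arrays) can be sketched as a permutation test on the difference in mean induction between genes with and without the motif. The statistic, function name, and parameters below are illustrative assumptions, not the published algorithm's exact test.

```python
import numpy as np

def motif_induction_pvalue(expr, has_motif, n_perm=2000, seed=0):
    """One-sided permutation p-value for the hypothesis that genes carrying
    a CRE show higher mean induction (e.g. log2 treated/control ratios)
    than genes without it. Illustrative sketch only."""
    expr = np.asarray(expr, dtype=float)
    has_motif = np.asarray(has_motif, dtype=bool)
    observed = expr[has_motif].mean() - expr[~has_motif].mean()
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(has_motif)            # shuffle motif labels
        diff = expr[perm].mean() - expr[~perm].mean()
        if diff >= observed:                          # as extreme as observed
            count += 1
    return (count + 1) / (n_perm + 1)                 # add-one correction
```

Running this per motif and per treatment, with multiple-testing correction across the candidate set, is one plausible way to filter candidate motifs down to those whose presence genuinely tracks treatment-specific induction.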
Aldrich, Noel D; Reicks, Marla M; Sibley, Shalamar D; Redmon, J Bruce; Thomas, William; Raatz, Susan K
2011-01-01
We hypothesized that a whey protein diet would result in greater weight loss and improved body composition compared to standard weight loss diets. Weight change, body composition, and renin angiotensin aldosterone system (RAAS) activity in midlife adults were compared between diet groups. Eighteen subjects enrolled in a 5-month study of 8 weeks of controlled food intake followed by 12 weeks of ad libitum intake. Subjects were randomized to one of three treatment groups: control diet (CD) (55% carbohydrate: 15% protein: 30% fat), mixed protein (MP) (40% carbohydrate: 30% protein: 30% fat), or whey protein (WP) (40% carbohydrate: 15% mixed protein: 15% whey protein: 30% fat). Measurements included weight, metabolic measures, body composition by dual energy x-ray absorptiometry (DXA), and resting energy expenditure. No statistically significant differences in total weight loss or total fat loss were observed between treatments; however, a trend toward greater total weight loss (p = 0.08) and total fat loss (p = 0.09) was observed in the WP group compared to the CD group. Fat loss in the leg and gynoid regions was greater (p < 0.05) in the WP group than the CD group. No RAAS-mediated response was observed, but a decrease in systolic blood pressure was significantly greater (p < 0.05) in the WP group compared to the CD group. In summary, increased whey protein intake did not result in statistically significant differences in weight loss or in total fat loss, but significant differences in regional fat loss and in decreased blood pressure were observed in the WP group. PMID:21419314
Fabbro, Shay; Schaller, Kristin; Seeds, Nicholas W
2011-09-01
Amyloid-beta (Aβ) plaques are a hallmark of Alzheimer's disease. Several proteases including plasmin are thought to promote proteolytic cleavage and clearance of Aβ from brain. The activity of both plasmin and tissue plasminogen activator are reduced in Alzheimer's disease brain, while the tissue plasminogen activator inhibitor neuroserpin is up-regulated. Here, the relationship of tissue plasminogen activator and neuroserpin to Aβ levels is explored in mouse models. Aβ(1-42) peptide injected into the frontal cortex of tissue plasminogen activator knockout mice is slow to disappear compared to wildtype mice, whereas neuroserpin knockout mice show a rapid clearance of Aβ(1-42). The relationship of neuroserpin and tissue plasminogen activator to Aβ plaque formation was studied further by knocking out neuroserpin in the human amyloid precursor protein-J20 transgenic mouse. Compared to the J20-transgenic mouse, the neuroserpin-deficient J20-transgenic mice have a dramatic reduction of Aβ peptides, fewer and smaller plaques, and more active tissue plasminogen activator associated with plaques. Furthermore, neuroserpin-deficient J20-transgenic mice show near-normal performance in the Morris water maze, in contrast to the spatial memory defects seen in J20-transgenic mice. These results support the concept that neuroserpin inhibition of tissue plasminogen activator plays an important role both in the accumulation of brain amyloid plaques and loss of cognitive abilities.
Rae, Caroline D; Davidson, Joanne E; Maher, Anthony D; Rowlands, Benjamin D; Kashem, Mohammed A; Nasrallah, Fatima A; Rallapalli, Sundari K; Cook, James M; Balcar, Vladimir J
2014-04-01
Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we measured the effect of alcohol on metabolism of [3-¹³C]pyruvate in the adult guinea pig brain cortical tissue slice and compared the outcomes to those from a library of ligands active in the GABAergic system as well as studying the metabolic fate of [1,2-¹³C]ethanol. Analyses of metabolic profile clusters suggest that the significant reductions in metabolism induced by ethanol (10, 30 and 60 mM) are via action at neurotransmitter receptors, particularly α4β3δ receptors, whereas very low concentrations of ethanol may produce metabolic responses owing to release of GABA via GABA transporter 1 (GAT1) and the subsequent interaction of this GABA with local α5- or α1-containing GABA(A)R. There was no measurable metabolism of [1,2-¹³C]ethanol, with no significant incorporation of ¹³C from [1,2-¹³C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite sizes similar to those seen with unlabelled ethanol.
Sinn, Brandon T; Kelly, Lawrence M; Freudenstein, John V
2015-08-01
The drivers of angiosperm diversity have long been sought and the flower-arthropod association has often been invoked as the most powerful driver of the angiosperm radiation. We now know that features that influence arthropod interactions cannot only affect the diversification of lineages, but also expedite or constrain their rate of extinction, which can equally influence the observed asymmetric richness of extant angiosperm lineages. The genus Asarum (Aristolochiaceae; ∼100 species) is widely distributed in north temperate forests, with substantial vegetative and floral divergence between its three major clades, Euasarum, Geotaenium, and Heterotropa. We used Binary-State Speciation and Extinction Model (BiSSE) Net Diversification tests of character state distributions on a Maximum Likelihood phylogram and a Coalescent Bayesian species tree, inferred from seven chloroplast markers and nuclear rDNA, to test for signal of asymmetric diversification, character state transition, and extinction rates of floral and vegetative characters. We found that reduction in vegetative growth, loss of autonomous self-pollination, and the presence of putative fungal-mimicking floral structures are significantly correlated with increased diversification in Asarum. No significant difference in model likelihood was identified between symmetric and asymmetric rates of character state transitions or extinction. We conclude that the flowers of the Heterotropa clade may have converged on some aspects of basidiomycete sporocarp morphology and that brood-site mimicry, coupled with a reduction in vegetative growth and the loss of autonomous self-pollination, may have driven diversification within Asarum.
Peron, Jean Pierre Schatzmann; de Brito, Auriléia Aparecida; Pelatti, Mayra; Brandão, Wesley Nogueira; Vitoretti, Luana Beatriz; Greiffo, Flávia Regina; da Silveira, Elaine Cristina; Oliveira-Junior, Manuel Carneiro; Maluf, Mariangela; Evangelista, Lucila; Halpern, Silvio; Nisenbaum, Marcelo Gil; Perin, Paulo; Czeresnia, Carlos Eduardo; Câmara, Niels Olsen Saraiva; Aimbire, Flávio; Vieira, Rodolfo de Paula; Zatz, Mayana; de Oliveira, Ana Paula Ligeiro
2015-01-01
Cigarette smoke-induced chronic obstructive pulmonary disease is a very debilitating disease, with a very high prevalence worldwide, which results in a substantial economic and social burden. Therefore, new therapeutic approaches to treat these patients are of unquestionable relevance. The use of mesenchymal stromal cells (MSCs) is an innovative and yet accessible approach for pulmonary acute and chronic diseases, mainly due to their important immunoregulatory, anti-fibrogenic, anti-apoptotic and pro-angiogenic properties. In addition, adjuvant therapies, whose aim is to boost or synergize with MSC function, should be tested. Low level laser (LLL) therapy is a relatively new and promising approach, with very low cost, no invasiveness and no side effects. Here, we aimed to study the effectiveness of human tube derived MSCs (htMSCs) cell therapy associated with a 30 mW/3 J-660 nm LLL irradiation in experimental cigarette smoke-induced chronic obstructive pulmonary disease. Thus, C57BL/6 mice were exposed to cigarette smoke for 75 days (twice a day) and all experiments were performed on day 76. Experimental groups received htMSCs either intraperitoneally or intranasally and/or LLL irradiation, either alone or in association. We show that co-therapy greatly reduces lung inflammation, lowering the cellular infiltrate and pro-inflammatory cytokine secretion (IL-1β, IL-6, TNF-α and KC), which were followed by decreased mucus production, collagen accumulation and tissue damage. These findings seemed to be secondary to the reduction of both NF-κB and NF-AT activation in lung tissues with a concomitant increase in IL-10. In summary, our data suggest that the concomitant use of MSCs + LLLT may be a promising therapeutic approach for lung inflammatory diseases such as COPD. PMID:26322981
Junka, Adam F; Szymczyk, Patrycja; Secewicz, Anna; Pawlak, Andrzej; Smutnicka, Danuta; Ziółkowski, Grzegorz; Bartoszewicz, Marzenna; Chlebus, Edward
2016-01-01
In our previous work we reported the impact of the hydrofluoric and nitric acid used for chemical polishing of Ti-6Al-7Nb scaffolds on reducing the number of Staphylococcus aureus biofilm-forming cells. Herein, we tested the impact of the aforementioned substances on biofilm of a Gram-negative microorganism, Pseudomonas aeruginosa, a dangerous pathogen responsible for a plethora of implant-related infections. The Ti-6Al-7Nb scaffolds were manufactured using the Selective Laser Melting method. Scaffolds were subjected to chemical polishing using a mixture of nitric acid and fluoride or left intact (control group). Pseudomonal biofilm was allowed to form on the scaffolds for 24 hours and was removed by mechanical vortex shaking. The number of pseudomonal cells was estimated by means of quantitative culture and scanning electron microscopy. The presence of nitric acid and fluoride on scaffold surfaces was assessed by means of IR and X-ray spectroscopy. Quantitative data were analysed using the Mann-Whitney test (P ≤ 0.05). Our results indicate that chemical polishing correlates with a significant drop in the number of biofilm-forming pseudomonal cells on the manufactured Ti-6Al-7Nb scaffolds (p = 0.0133, Mann-Whitney test) compared to the number of biofilm-forming cells on non-polished scaffolds. As X-ray photoelectron spectroscopy revealed the presence of fluoride and nitrogen on the scaffold surface, we speculate that the drop in biofilm-forming cells may be caused by the biofilm-suppressing activity of these two elements. PMID:27150429
O'Neill, Edward; Richardson-Weber, Leslie; McCormack, Gina; Uhl, Lynne; Haspel, Richard L
2009-08-01
Phlebotomy errors leading to incompatible transfusions are a leading cause of transfusion-related morbidity and mortality. Our institution's specimen-labeling policy requires the collection date, 2 unique patient identifiers, and the ability to identify the phlebotomist. This policy, however, was initially strictly enforced only by the blood bank. In fiscal year 2005, following an educational campaign on proper specimen labeling, all clinical laboratories began strictly adhering to the specimen-labeling policy. Compared with the preceding 4 years, in the 3 years following policy implementation, the incidence of wrong blood in tube (WBIT) and mislabeled specimens detected by the blood bank decreased by 73.5% (0.034% to 0.009%; P ≤ .0001) and by 84.6% (0.026% to 0.004%; P ≤ .0001), respectively. During a short period, a simple, low-cost educational initiative and policy change can lead to statistically significant decreases in WBIT and mislabeled specimens received by the blood bank. PMID:19605809
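The reported percentage reductions follow directly from the before/after rates; a quick check, using the values quoted in the abstract:

```python
# Recompute the percentage decreases reported for wrong blood in tube (WBIT)
# and mislabeled specimens after the policy change.
wbit_before, wbit_after = 0.034, 0.009            # % of specimens
mislabel_before, mislabel_after = 0.026, 0.004    # % of specimens

wbit_drop = (wbit_before - wbit_after) / wbit_before * 100
mislabel_drop = (mislabel_before - mislabel_after) / mislabel_before * 100
print(round(wbit_drop, 1), round(mislabel_drop, 1))  # 73.5 84.6
```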
Uclés Moreno, Ana; Herrera López, Sonia; Reichert, Barbara; Lozano Fernández, Ana; Hernando Guil, María Dolores; Fernández-Alba, Amadeo Rodríguez
2015-01-20
This manuscript reports a new pesticide residue analysis method employing a microflow-liquid chromatography system coupled to a triple quadrupole mass spectrometer (microflow-LC-ESI-QqQ-MS). This uses an electrospray ionization source with a narrow tip emitter to generate smaller droplets. A validation study was undertaken to establish performance characteristics for this new approach on 90 pesticide residues, including their degradation products, in three commodities (tomato, pepper, and orange). The significant benefits of the microflow-LC-MS/MS-based method were a high sensitivity gain and a notable reduction in matrix effects delivered by a dilution of the sample (up to 30-fold); this is as a result of competition reduction between the matrix compounds and analytes for charge during ionization. Overall robustness and a capability to withstand long analytical runs using the microflow-LC-MS system have been demonstrated (for 100 consecutive injections without any maintenance being required). Quality controls based on the results of internal standards added at the samples' extraction, dilution, and injection steps were also satisfactory. The LOQ values were mostly 5 μg kg(-1) for almost all pesticide residues. Other benefits were a substantial reduction in solvent usage and waste disposal as well as a decrease in the run-time. The method was successfully applied in the routine analysis of 50 fruit and vegetable samples labeled as organically produced. PMID:25495653
Gabriel, Anne F; Marcus, Marco A E; Honig, Wiel M M; Joosten, Elbert A J
2010-01-22
The influence of the environment on clinical post-operative pain has recently received more attention in humans. A very common paradigm in experimental pain research for modelling the effect of housing conditions is the enriched environment (EE). During EE housing, rats are housed in a large cage (i.e. social stimulation), usually containing additional tools such as running wheels (i.e. physical stimulation). Interestingly, clinical and experimental studies have addressed only the effect of post-surgical housing on post-operative pain, while little is known about the influence of preoperative housing. In this study, our aim was to investigate the influence of housing conditions prior to an operation on the development of post-operative pain, using a rat model of carrageenan-induced inflammatory pain. Four housing conditions were used: a 3-week pre-housing in standard conditions (S-) followed by post-operative housing in an EE; a 3-week pre-housing in EE followed by post-operative S-housing; pre- and post-operative housing in EE; and pre- and post-operative S-housing. The development of mechanical allodynia was assessed by means of the von Frey test, preoperatively and at post-operative days (DPO) 1, 3, 7, 10, 14, 17, 21, 24 and 28. Our results show that a 3-week preoperative exposure to EE leads to a significant reduction in the duration of the carrageenan-induced mechanical allodynia, comparable with a post-operative exposure to EE. Strikingly, when rats were housed in EE both prior to and after the carrageenan injection into the knee, mechanical allodynia lasted only 2 weeks, as compared to 4 weeks in S-housed rats.
Algorithms for improved performance in cryptographic protocols.
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume considerable bandwidth. Accordingly, there are many environments (e.g. wireless, ad-hoc, and remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
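The bandwidth savings of elliptic-curve cryptography come from working in a small group where the only expensive primitive is scalar multiplication. As an illustrative sketch (not the report's algorithms), here is double-and-add scalar multiplication on the toy curve y² = x³ + 2x + 2 over F₁₇, far too small for real security:

```python
# Toy elliptic-curve arithmetic over F_17 (illustrative only).
# Points are (x, y) tuples; None is the point at infinity.
MOD, A = 17, 2

def _inv(x):
    return pow(x % MOD, MOD - 2, MOD)  # Fermat inverse in F_17

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % MOD == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * _inv(2 * y1) % MOD    # tangent slope
    else:
        s = (y2 - y1) * _inv(x2 - x1) % MOD           # chord slope
    x3 = (s * s - x1 - x2) % MOD
    return (x3, (s * (x1 - x3) - y1) % MOD)

def ec_mul(k, P):
    # double-and-add: O(log k) group operations
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

With generator G = (5, 1), `ec_mul(2, G)` gives (6, 3), and the group has order 19, so `ec_mul(19, G)` returns the point at infinity.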
Sampling Within k-Means Algorithm to Cluster Large Datasets
Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George
2011-08-01
Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm, with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets; this is especially important considering that this preliminary study was performed on rather tame datasets. Such studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes; we would like to analyze this further to see how accurate the algorithm is for even lower sample sizes, and to find the lowest sample sizes, obtained by manipulating width and confidence level, for which the algorithm remains acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
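The core idea can be sketched in a few lines: run Lloyd's algorithm on a random subsample, then label the full dataset with the learned centroids. This is a hedged sketch under our own simplifying assumptions (deterministic farthest-point initialization, Euclidean distance), not the authors' exact procedure:

```python
# Sampling-based k-means sketch: cluster a subsample, assign everything.
import random

def _d2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    # deterministic farthest-point initialization, then Lloyd iterations
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(_d2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: _d2(p, centers[j]))].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def sampled_kmeans(points, k, sample_frac=0.1, seed=0):
    rng = random.Random(seed)
    n = max(k, int(len(points) * sample_frac))
    centers = kmeans(rng.sample(points, n), k)   # cluster only the sample
    labels = [min(range(k), key=lambda j: _d2(p, centers[j])) for p in points]
    return centers, labels
```

The runtime saving is that the O(N·k) Lloyd iterations run over only the sample; the full dataset is touched once, for the final assignment.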
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
Chen, Zhenhua; Chen, Xun; Wu, Wei
2013-04-28
In this paper, by applying the reduced density matrix (RDM) approach for nonorthogonal orbitals developed in the first paper of this series, efficient algorithms for matrix elements between VB structures and energy gradients in the valence bond self-consistent field (VBSCF) method are presented. Both algorithms scale only as nm^4 for integral transformation and d^2 n_β^2 for VB matrix elements and 3-RDM evaluation, while the computational costs of other procedures are negligible, where n, m, d, and n_β are the numbers of variable occupied active orbitals, basis functions, determinants, and active β electrons, respectively. Using tensor properties of the energy gradients with respect to the orbital coefficients presented in the first paper of this series, a partially orthogonal auxiliary orbital set was introduced to reduce the computational cost of VBSCF calculations in which orbitals are flexibly defined. Test calculations on the Diels-Alder reaction of butadiene and ethylene have shown that the novel algorithm is very efficient for VBSCF calculations. PMID:23635124
Improved local linearization algorithm for solving the quaternion equations
NASA Technical Reports Server (NTRS)
Yen, K.; Cook, G.
1980-01-01
The objective of this paper is to develop a new and more accurate local linearization algorithm for numerically solving sets of linear time-varying differential equations. Of special interest is the application of this algorithm to the quaternion rate equations. The results are compared, both analytically and experimentally, with previous results using local linearization methods. The new algorithm requires approximately one-third more calculations per step than the previously developed local linearization algorithm; however, this disadvantage could be reduced by using parallel implementation. For some cases the new algorithm yields significant improvement in accuracy, even with an enlarged sampling interval. The reverse is true in other cases. The errors depend on the values of angular velocity, angular acceleration, and integration step size. One important result is that for the worst case the new algorithm can guarantee eigenvalues nearer the region of stability than can the previously developed algorithm.
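For context, the quaternion rate equation being integrated is q̇ = ½ q ⊗ (0, ω). The following is a hedged sketch, not the paper's algorithm: it contrasts a plain Euler step with the closed-form update that is exact when ω is constant over the step, the kind of structure local-linearization schemes exploit:

```python
# Quaternion kinematics: Euler step vs. closed-form constant-rate step.
# Quaternions are [w, x, y, z] lists; w_body is the angular velocity vector.
import math

def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return [aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw]

def qdot(q, w_body):
    # quaternion rate equation: q' = 0.5 * q (x) (0, w)
    return [0.5 * c for c in qmul(q, [0.0, *w_body])]

def euler_step(q, w_body, h):
    # first-order step; does not preserve unit norm
    return [qi + h * di for qi, di in zip(q, qdot(q, w_body))]

def exact_step(q, w_body, h):
    # closed-form update, exact (and norm-preserving) for constant w
    n = math.sqrt(sum(c * c for c in w_body))
    if n == 0.0:
        return q[:]
    half = 0.5 * n * h
    dq = [math.cos(half)] + [math.sin(half) * c / n for c in w_body]
    return qmul(q, dq)
```

Rotating at ω = (0, 0, π) rad/s for one second carries the identity quaternion to [0, 0, 0, 1] (a 180° yaw) under the exact step, while the single Euler step visibly inflates the norm, which is why the paper's error behavior depends on angular velocity and step size.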
NASA Astrophysics Data System (ADS)
Yu, Ke; Niu, Yujuan; Bai, Yuanyuan; Zhou, Yongcun; Wang, Hong
2013-03-01
Homogeneous ceramics-polymer nanocomposites comprising core-shell structured BaTiO3/SiO2 nanoparticles and a poly(vinylidene fluoride) polymer matrix have been prepared. The nanocomposite of 2 vol. % BaTiO3/SiO2 nanoparticles exhibits 46% reduced energy loss compared to that of BaTiO3 nanoparticles, and an energy density of 6.28 J/cm3, under an applied electric field of 340 MV/m. Coating SiO2 layers on the surface of BaTiO3 nanoparticles significantly reduces the energy loss of the nanocomposites under high applied electric field via reducing the Maxwell-Wagner-Sillars interfacial polarization and space charge polarization.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
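A minimal genetic algorithm can make the introduced concepts concrete. This sketch is not the NASA software tool: it evolves bit strings toward the all-ones optimum ("one-max") with tournament selection, one-point crossover, and bit-flip mutation, all parameter values being illustrative:

```python
# Minimal genetic algorithm on the one-max toy problem.
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    fitness = sum  # one-max objective: number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Selection pressure ("survival of the fittest") plus crossover rapidly concentrates the population near the optimum, while mutation maintains diversity.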
Development and Evaluation of Algorithms for Breath Alcohol Screening
Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael
2016-01-01
Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone. PMID:27043576
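The CO2-compensation principle rests on end-expiratory air carrying a roughly known CO2 fraction, so the measured CO2 reveals the dilution factor. A hedged sketch of that idea combined with signal averaging (the 5% alveolar CO2 value and the function are our illustrative assumptions, not the paper's calibration):

```python
# Dilution compensation: scale each alcohol reading by the ratio of the
# expected alveolar CO2 to the measured CO2, then average the estimates.
ALVEOLAR_CO2 = 5.0  # assumed end-expiratory CO2 concentration, percent

def compensated_brac(alcohol_readings, co2_readings):
    """Undiluted breath-alcohol estimate from paired diluted readings."""
    estimates = [a * ALVEOLAR_CO2 / c
                 for a, c in zip(alcohol_readings, co2_readings)]
    return sum(estimates) / len(estimates)  # signal averaging
```

For example, three readings all diluted by about half (CO2 near 2.5%) recover the same undiluted concentration after scaling, and averaging them suppresses the remaining random error.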
Adaptive computation algorithm for RBF neural network.
Han, Hong-Gui; Qiao, Jun-Fei
2012-02-01
A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Secondly, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system, exhibiting the effectiveness of the proposed algorithm. Computational complexity analysis and simulation results demonstrate its effectiveness.
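For readers unfamiliar with the underlying model, a basic RBF network is a weighted sum of Gaussian bumps. This simplified stand-in fixes the hidden centers and trains only the output weights by gradient descent; the paper's ACA additionally adapts the network structure during training:

```python
# Minimal RBF network: fixed Gaussian centers, LMS-trained output weights.
import math

class RBFNet:
    def __init__(self, centers, width=1.0):
        self.centers = centers
        self.width = width
        self.weights = [0.0] * len(centers)

    def _phi(self, x):
        # Gaussian basis activations for scalar input x
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.weights, self._phi(x)))

    def train(self, xs, ys, lr=0.1, epochs=200):
        # least-mean-squares updates on the linear output layer
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                phi = self._phi(x)
                err = y - self.predict(x)
                self.weights = [w + lr * err * p
                                for w, p in zip(self.weights, phi)]
```

Because the output layer is linear in the weights, this reduced problem converges reliably; the hard part, which the ACA addresses, is choosing how many hidden units to keep.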
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques and the OSGLR algorithm in particular is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Kushner, Steven; Han, David; Oscar-Berman, Marlene; William Downs, B; Madigan, Margaret A; Giordano, John; Beley, Thomas; Jones, Scott; Barh, Debmayla; Simpatico, Thomas; Dushaj, Kristina; Lohmann, Raquel; Braverman, Eric R; Schoenthaler, Stephen; Ellison, David; Blum, Kenneth
2013-01-01
It is well established that inherited human aldehyde dehydrogenase 2 (ALDH-2) deficiency reduces the risk for alcoholism. Kudzu plants and extracts have been used for 1,000 years in traditional Chinese medicine to treat alcoholism. Kudzu contains daidzin, which inhibits ALDH-2 and suppresses heavy drinking in rodents. Decreased drinking due to ALDH-2 inhibition is attributed to aversive properties of acetaldehyde accumulated during alcohol consumption. However, not all of the anti-alcohol properties of daidzin are due to inhibition of ALDH-2. This is in agreement with our earlier work showing significant interaction effects of both pyrazole (an ALDH-2 inhibitor) and methyl-pyrazole (a non-inhibitor) with ethanol’s depressant effects. Moreover, it has been suggested that selective ALDH-2 inhibitors reduce craving for alcohol by increasing dopamine in the nucleus accumbens (NAc). In addition, there is significant evidence for the role of the genetics of bitter receptors (TAS2R) and their stimulation as an aversive mechanism against alcohol intake. The inclusion of bitters such as Gentian & Tangerine Peel in Declinol provides stimulation of gut TAS2R receptors, which is potentially synergistic with the effects of Kudzu. Finally, the addition of Radix Bupleuri to the Declinol formula may have some protective benefits not only in terms of ethanol-induced liver toxicity but also in neurochemical actions involving endorphins, dopamine and epinephrine. With this information as a rationale, we report herein that this combination significantly reduced Alcohol Use Disorders Identification Test (AUDIT) scores administered to ten heavy drinkers (M=8, F=2; 43.2 ± 14.6 years) attending a recovery program. Specifically, from the pre-post comparison of the AUDIT scores, it was found that the score of every participant decreased after the intervention, with decreases ranging from 1 to 31. The decrease in the scores was found to be statistically significant with a p-value of 0.00298 (two-sided paired
Christensen, Eidi; Mørk, Cato; Foss, Olav Andreas
2011-01-01
Topical photodynamic therapy (PDT) has limitations in the treatment of thick skin tumours. The aim of the study was to evaluate the effect of pre-PDT deep curettage on tumour thickness in thick (≥2 mm) basal cell carcinoma (BCC). Additionally, 3-month treatment outcome and the change in tumour thickness from diagnosis to treatment were investigated. At diagnosis, mean tumour thickness was 2.3 mm (range 2.0–4.0). Pre- and post-curettage biopsies were taken from each tumour prior to PDT. Of 32 verified BCCs, tumour thickness was reduced by 50% after deep curettage (P ≤ 0.001). Mean tumour thickness was also reduced from diagnosis to treatment. At 3-month follow-up, complete tumour response was found in 93% and the cosmetic outcome was rated excellent or good in 100% of cases. In conclusion, deep curettage significantly reduces BCC thickness and, combined with topical PDT, may provide a favourable clinical and cosmetic short-term outcome. PMID:22191035
Myers, Janet J; Shade, Starley B; Rose, Carol Dawson; Koester, Kimberly; Maiorana, Andre; Malitz, Faye E; Bie, Jennifer; Kang-Dufour, Mi-Suk; Morin, Stephen F
2010-06-01
To support expanded prevention services for people living with HIV, the US Health Resources and Services Administration (HRSA) sponsored a 5-year initiative to test whether interventions delivered in clinical settings were effective in reducing HIV transmission risk among HIV-infected patients. Across 13 demonstration sites, patients were randomized to one of four conditions. All interventions were associated with reduced unprotected vaginal and/or anal intercourse with persons of HIV-uninfected or unknown status among the 3,556 participating patients. Compared to the standard of care, patients assigned to receive interventions from medical care providers reported a significant decrease in risk after 12 months of participation. Patients receiving prevention services from health educators, social workers or paraprofessional HIV-infected peers reported significant reduction in risk at 6 months, but not at 12 months. While clinics have a choice of effective models for implementing prevention programs for their HIV-infected patients, medical provider-delivered methods are comparatively robust.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
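The continuation idea itself is easy to show on a scalar toy problem. This hedged sketch (not the thesis's monolithic scheme, which folds the parameter update into the Newton update) traces the convex homotopy H(x, λ) = λF(x) + (1-λ)(x - x0) from the trivial problem at λ = 0 to the target F(x) = 0 at λ = 1, correcting with Newton at each step:

```python
# Scalar convex-homotopy continuation with Newton correction.
def continuation_solve(F, dF, x0, steps=50, newton_iters=5):
    """Trace H(x, lam) = lam*F(x) + (1-lam)*(x - x0) from lam=0 to lam=1."""
    x = x0
    for k in range(1, steps + 1):
        lam = k / steps
        for _ in range(newton_iters):
            h = lam * F(x) + (1.0 - lam) * (x - x0)      # homotopy residual
            dh = lam * dF(x) + (1.0 - lam)               # its derivative
            x -= h / dh                                  # Newton correction
    return x
```

Applied to F(x) = x³ - 2x - 5 from x0 = 1, the traced path ends at the real root near 2.09455; each λ step inherits a good starting guess from the previous one, which is the robustness continuation buys over plain Newton.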
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
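The basic (single-channel) veto algorithm underlying these variants is short enough to state directly. A hedged sketch assuming a constant overestimate c ≥ f(t): proposals are drawn from the analytically invertible c-Sudakov and vetoed with probability 1 - f(t)/c, yielding t distributed as f(t)·exp(-∫_t^{t_max} f(s) ds):

```python
# Basic Sudakov veto algorithm with a constant overestimate c >= f(t).
import math
import random

def veto_sample(f, c, t_max, rng, t_min=0.0):
    """Sample t below t_max with density f(t) * exp(-int_t^{t_max} f)."""
    t = t_max
    while t > t_min:
        t += math.log(rng.random()) / c        # propose from the overestimate
        if t > t_min and rng.random() < f(t) / c:
            return t                           # accepted emission scale
    return None                                # evolved below cutoff: no emission
```

As a sanity check, with f(t) = c = 1 every proposal is accepted and t_max - t is exponentially distributed with mean 1, matching the known Sudakov form.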
A Distributed Polygon Retrieval Algorithm Using MapReduce
NASA Astrophysics Data System (ADS)
Guo, Q.; Palanisamy, B.; Karimi, H. A.
2015-07-01
The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices like 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation which is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger sizes of terrain data. Motivated by the MapReduce programming model, a well-adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed with the objective of reducing the IO and CPU loads of spatial data processing. By indexing the data based on a quad-tree approach, a significant amount of unneeded data is filtered out in the filtering stage, reducing the IO overhead. The indexed data also facilitate querying the relationship between the terrain data and the query area in a shorter time. The results of the experiments performed on our Hadoop cluster demonstrate that our algorithm performs significantly better than existing distributed algorithms.
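The filtering stage can be illustrated without a Hadoop cluster. This toy stand-in (our own simplification, not the paper's implementation) indexes polygon bounding boxes by fixed-depth quad-tree cells in a dict-based map/reduce pair, so a query only inspects the candidates in its cell:

```python
# Quad-tree cell indexing sketch, mimicking MapReduce with local phases.
from collections import defaultdict

def cell_of(x, y, depth=4, extent=16.0):
    # fixed-depth quad-tree: depth d partitions the extent into 2^d x 2^d cells
    size = extent / (2 ** depth)
    return (int(x // size), int(y // size))

def map_phase(polygons, depth=4):
    # emit (cell, polygon_id) for every cell covered by the bounding box
    pairs = []
    for pid, (xmin, ymin, xmax, ymax) in polygons.items():
        cx0, cy0 = cell_of(xmin, ymin, depth)
        cx1, cy1 = cell_of(xmax, ymax, depth)
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                pairs.append(((cx, cy), pid))
    return pairs

def reduce_phase(pairs):
    # group polygon ids by cell key
    index = defaultdict(set)
    for cell, pid in pairs:
        index[cell].add(pid)
    return index

def query(index, x, y, depth=4):
    return index.get(cell_of(x, y, depth), set())
```

A point query then touches one cell instead of every polygon, which is the IO reduction the filtering stage provides; a real refinement stage would still test exact polygon containment on the surviving candidates.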
Reduced Basis Method for Nanodevices Simulation
Pau, George Shu Heng
2008-05-23
Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting a posteriori error estimation procedure and greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.
Semioptimal practicable algorithmic cooling
NASA Astrophysics Data System (ADS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
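The gain per recursive level can be illustrated with the idealized 3-spin compression step that practicable cooling algorithms build on (the textbook majority-vote formula for three equally polarized spins; the paper's SOPAC schedules are more elaborate than this):

```python
def basic_compression(eps):
    """Ideal 3-spin compression: polarization of the target spin after one
    majority-vote style compression of three spins with equal polarization eps."""
    return (3 * eps - eps ** 3) / 2

# a small polarization is boosted by roughly 3/2 per ideal recursive level
eps = 0.01
history = [eps]
for _ in range(5):
    eps = basic_compression(eps)
    history.append(eps)
```

Starting from 1% polarization, five ideal levels yield about 7.6%, which makes the multiplicative nature of the recursion concrete.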
Cholette, Jill M; Powers, Karen S; Alfieris, George M; Angona, Ronald; Henrichs, Kelly F; Masel, Debra; Swartz, Michael F; Daugherty, L. Eugene; Belmont, Kevin; Blumberg, Neil
2013-01-01
Objective To evaluate whether transfusion of cell saver salvaged blood, stored at the bedside for up to 24 hours, would decrease the number of post-operative allogeneic RBC transfusions and donor exposures, and possibly improve clinical outcomes. Design Prospective, randomized, controlled, clinical trial. Setting Pediatric cardiac intensive care unit. Patients Infants <20 kg (n = 106) presenting for cardiac surgery with cardiopulmonary bypass. Interventions Subjects were randomized to a cell saver transfusion group, in which cell saver blood was available for transfusion up to 24 hours post-collection, or to a control group. Cell saver subjects received cell saver blood for volume replacement and/or RBC transfusions. Control subjects received crystalloid or albumin for volume replacement and RBCs for anemia. Blood product transfusions, donor exposures, and clinical outcomes were compared between groups. Measurements and Main Results Children randomized to the cell saver group had significantly fewer RBC transfusions (cell saver: 0.19 ± 0.44 v. control: 0.75 ± 1.2; p = 0.003) and coagulant product transfusions in the first 48 hours post-op (cell saver: 0.09 ± 0.45 v. control: 0.62 ± 1.4; p = 0.013), and significantly fewer donor exposures (cell saver: 0.60 ± 1.4 v. control: 2.3 ± 4.8; p = 0.019). This difference persisted over the first week post-op, but did not reach statistical significance (cell saver: 0.64 ± 1.24 v. control: 1.1 ± 1.4; p = 0.07). There were no significant clinical outcome differences. Conclusion Cell saver blood can be safely stored at the bedside for immediate transfusion for 24 hours post-collection. Administration of cell saver blood significantly reduces the number of RBC and coagulant product transfusions and donor exposures in the immediate post-operative period. Reduction of blood product transfusions has the potential to reduce transfusion-associated complications and decrease post-operative morbidity. Larger studies are needed to determine
Efficient implementation of the adaptive scale pixel decomposition algorithm
NASA Astrophysics Data System (ADS)
Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.
2016-08-01
Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus performs significantly better when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Kumakech, Edward; Berggren, Vanja; Wabinga, Henry; Lillsunde-Larsson, Gabriella; Helenius, Gisela; Kaliff, Malin; Karlsson, Mats; Kirimunda, Samuel; Musubika, Caroline; Andersson, Sören
2016-01-01
The objective of this study was to determine the prevalence and some predictors of vaccine and non-vaccine types of HPV infection among bivalent-HPV-vaccinated and non-vaccinated young women in Uganda. This was a comparative cross-sectional study 5.5 years after a bivalent HPV 16/18 vaccination (Cervarix®, GlaxoSmithKline, Belgium) pilot project in western Uganda. Cervical swabs were collected between July and August 2014 and analyzed with an HPV genotyping test, the CLART® HPV2 assay (Genomica, Madrid, Spain), which is based on PCR followed by microarray for determination of genotype. Blood samples were also tested for HIV and syphilis infections as well as CD4 and CD8 lymphocyte levels. The age range of the participants was 15-24 years and the mean age was 18.6 (SD 1.4). Vaccine-type HPV-16/18 strains were significantly less prevalent among vaccinated women compared to non-vaccinated women (0.5% vs 5.6%; p = 0.006; OR 0.08, 95% CI 0.01-0.64). At the type-specific level, a significant difference was observed for HPV16 only. Other STIs (HIV/syphilis) were important risk factors for HPV infections, including both vaccine and non-vaccine types. In addition, for non-vaccine HPV types, living in an urban area, having a low BMI, a low CD4 count, and a high number of lifetime sexual partners were also significant risk factors. Our data concur with the existing literature from other parts of the world regarding the effectiveness of the bivalent HPV-16/18 vaccine in reducing the prevalence of HPV infections, particularly vaccine HPV-16/18 strains, among vaccinated women. This study reinforces the recommendation to vaccinate young girls before sexual debut and to integrate other STI interventions, particularly for HIV and syphilis, into HPV vaccination packages.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
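The key combinatorial idea, solving all right-hand sides that share the same passive (unconstrained) variable set together, can be sketched as follows (a simplified single-pass nonnegativity illustration with a two-column design matrix; the actual algorithm iterates over active sets and reuses one factorization per group):

```python
from collections import defaultdict

def normal_solve(A, b, cols):
    """Least squares over the selected columns via normal equations (1-2 columns)."""
    m = len(A)
    ata = [[sum(A[r][i] * A[r][j] for r in range(m)) for j in cols] for i in cols]
    atb = [sum(A[r][i] * b[r] for r in range(m)) for i in cols]
    if len(cols) == 1:
        return [atb[0] / ata[0][0]]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

def grouped_nnls(A, B):
    """One combinatorial pass: group right-hand sides by passive set, then solve
    each group's restricted problem (simplified; the real algorithm iterates and
    shares a single factorization of A[:, p] per distinct passive set p)."""
    passive = {}
    for k, b in enumerate(B):
        x = normal_solve(A, b, [0, 1])               # unconstrained solution
        passive[k] = tuple(i for i in (0, 1) if x[i] > 0)
    groups = defaultdict(list)
    for k, p in passive.items():
        groups[p].append(k)
    X = {}
    for p, ks in groups.items():
        for k in ks:
            xs = normal_solve(A, B[k], list(p)) if p else []
            x = [0.0, 0.0]
            for i, c in enumerate(p):
                x[c] = max(xs[i], 0.0)
            X[k] = x
    return X, groups
```

With many observation vectors, the number of distinct passive sets is usually far smaller than the number of vectors, which is where the reported speedup comes from.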
Wang, Dian; Zhang, Qiang; Eisenberg, Burton L.; Kane, John M.; Li, X. Allen; Lucas, David; Petersen, Ivy A.; DeLaney, Thomas F.; Freeman, Carolyn R.; Finkelstein, Steven E.; Hitchcock, Ying J.; Bedi, Manpreet; Singh, Anurag K.; Dundas, George; Kirsch, David G.
2015-01-01
Purpose We performed a multi-institutional prospective phase II trial to assess late toxicities in patients with extremity soft tissue sarcoma (STS) treated with preoperative image-guided radiation therapy (IGRT) to a reduced target volume. Patients and Methods Patients with extremity STS received IGRT with (cohort A) or without (cohort B) chemotherapy followed by limb-sparing resection. Daily pretreatment images were coregistered with digitally reconstructed radiographs so that the patient position could be adjusted before each treatment. All patients received IGRT to reduced tumor volumes according to strict protocol guidelines. Late toxicities were assessed at 2 years. Results In all, 98 patients were accrued (cohort A, 12; cohort B, 86). Cohort A was closed prematurely because of poor accrual and is not reported. Seventy-nine eligible patients from cohort B form the basis of this report. At a median follow-up of 3.6 years, five patients did not have surgery because of disease progression. There were five local treatment failures, all of which were in field. Of the 57 patients assessed for late toxicities at 2 years, 10.5% experienced at least one grade ≥ 2 toxicity as compared with 37% of patients in the National Cancer Institute of Canada SR2 (CAN-NCIC-SR2: Phase III Randomized Study of Pre- vs Postoperative Radiotherapy in Curable Extremity Soft Tissue Sarcoma) trial receiving preoperative radiation therapy without IGRT (P < .001). Conclusion The significant reduction of late toxicities in patients with extremity STS who were treated with preoperative IGRT and absence of marginal-field recurrences suggest that the target volumes used in the Radiation Therapy Oncology Group RTOG-0630 (A Phase II Trial of Image-Guided Preoperative Radiotherapy for Primary Soft Tissue Sarcomas of the Extremity) study are appropriate for preoperative IGRT for extremity STS. PMID:25667281
Juneja, B; Gilland, D; Hintenlang, D; Doxsee, K; Bova, F
2014-06-15
Purpose: In Compton Backscatter Imaging (CBI), the source and detector reside on the same side of the patient. We previously demonstrated the applicability of CBI systems for medical purposes using an industrial system. To assist in post-processing images from a CBI system, a forward model based on radiation absorption and scatter principles has been developed. Methods: The forward model was developed in C++ using raytracing to track particles. The algorithm accepts phantoms of any size and resolution to calculate the fraction of incident photons scattered back to the detector, and can perform these calculations for any detector geometry and source specification. To validate the model, results were compared to MCNP-X, a Monte Carlo based simulation code, for various combinations of source specifications, detector geometries, and phantom compositions. Results: The model verified that the backscatter signal to the detector was based on three interaction probabilities: a) attenuation of photons going into the phantom, b) Compton scatter of photons toward the detector, and c) attenuation of photons coming out of the phantom. The results from the MCNP-X simulations and the forward model differed by 1 to 5%. This difference was less than 1% for energies higher than 30 keV, but was up to 4% for lower energies. At 50 keV, the difference was less than 1% for multiple detector widths and for both homogeneous and heterogeneous phantoms. Conclusion: As part of the optimization of a medical CBI system, an efficient and accurate forward model was constructed in C++ to estimate the output of a CBI system. The model characterized individual components contributing to CBI output and increased computational efficiency over Monte Carlo simulations. It is now used in the development of novel post-processing algorithms that reduce image blur by reversing undesired contributions from outside the region of interest.
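The three interaction probabilities can be sketched as a single-scatter slab calculation (a hypothetical homogeneous phantom with assumed attenuation and scatter coefficients; the real model ray-traces arbitrary phantoms and detector geometries):

```python
import math

def backscatter_fraction(mu_in, mu_sc, mu_out, depth, n_steps=1000):
    """Fraction of incident photons scattered back to a same-side detector:
    (a) attenuation going in, (b) Compton scatter toward the detector,
    (c) attenuation coming back out (single-scatter, homogeneous slab)."""
    dz = depth / n_steps
    total = 0.0
    for k in range(n_steps):
        z = (k + 0.5) * dz                # midpoint of the depth slice
        p_in = math.exp(-mu_in * z)       # (a) survive the path in to depth z
        p_scatter = mu_sc * dz            # (b) scatter toward detector in [z, z+dz]
        p_out = math.exp(-mu_out * z)     # (c) survive the path back out
        total += p_in * p_scatter * p_out
    return total
```

The sum has the closed form mu_sc/(mu_in+mu_out) * (1 - exp(-(mu_in+mu_out) * depth)), which makes the numerical sketch easy to verify.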
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in the worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms.
PCB drill path optimization by combinatorial cuckoo search algorithm.
Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G
2014-01-01
Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198
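A simplified combinatorial cuckoo search over drill orders might look like this (random swap moves with a heavy-tailed step count as the Lévy-flight analogue, and abandonment of the worst nests each generation; the paper's exact discrete operators may differ):

```python
import math
import random

def tour_length(points, order):
    """Closed drill-path length for a given visiting order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def cuckoo_drill_path(points, n_nests=15, iters=300, pa=0.2, seed=0):
    """Illustrative combinatorial cuckoo search: each nest is a permutation of
    hole indices; new tours come from 1-4 random swaps drawn with a heavy tail,
    and a fraction pa of the worst nests is abandoned every generation."""
    rng = random.Random(seed)
    n = len(points)
    nests = [rng.sample(range(n), n) for _ in range(n_nests)]

    def swap_move(order, k):
        order = order[:]
        for _ in range(k):
            i, j = rng.randrange(n), rng.randrange(n)
            order[i], order[j] = order[j], order[i]
        return order

    for _ in range(iters):
        nests.sort(key=lambda o: tour_length(points, o))
        k = min(4, int(rng.paretovariate(1.5)))           # heavy-tailed step size
        cuckoo = swap_move(nests[rng.randrange(n_nests)], k)
        j = rng.randrange(n_nests)
        if tour_length(points, cuckoo) < tour_length(points, nests[j]):
            nests[j] = cuckoo                             # cuckoo displaces an egg
        nests.sort(key=lambda o: tour_length(points, o))
        keep = n_nests - max(1, int(pa * n_nests))        # abandon the worst nests
        nests[keep:] = [rng.sample(range(n), n) for _ in range(n_nests - keep)]
    return min(nests, key=lambda o: tour_length(points, o))
```

Sorting before abandonment keeps the best nests, so the search is elitist: the best tour found so far is never lost.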
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R.
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
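The compression step can be illustrated with a 1-D Haar transform that keeps only the largest-magnitude coefficients (shown in one dimension for brevity; for images the transform is applied along rows and columns, and the method additionally processes the result in blocks):

```python
def haar_1d(v):
    """Full 1-D Haar wavelet transform (input length must be a power of two)."""
    v = list(v)
    n = len(v)
    while n > 1:
        half = n // 2
        avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
        dif = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)]
        v[:n] = avg + dif
        n = half
    return v

def inverse_haar_1d(v):
    """Invert haar_1d, reconstructing the signal level by level."""
    v = list(v)
    n = 1
    while n < len(v):
        avg, dif = v[:n], v[n:2 * n]
        out = []
        for a, d in zip(avg, dif):
            out += [a + d, a - d]
        v[:2 * n] = out
        n *= 2
    return v

def compress(v, keep):
    """Spatial compression sketch: zero all but the `keep` largest coefficients."""
    w = haar_1d(v)
    thresh = sorted(map(abs, w), reverse=True)[keep - 1]
    return [c if abs(c) >= thresh else 0.0 for c in w]
```

Analysis then runs on the small set of retained coefficients instead of the full pixel grid; for smooth data the reconstruction from a few coefficients is exact or nearly so.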
Convergence behavior of a new DSMC algorithm.
Gallis, Michail A.; Rader, Daniel John; Torczynski, John Robert; Bird, Graeme A.
2008-10-01
The convergence rate of a new direct simulation Monte Carlo (DSMC) method, termed 'sophisticated DSMC', is investigated for one-dimensional Fourier flow. An argon-like hard-sphere gas at 273.15 K and 266.644 Pa is confined between two parallel, fully accommodating walls 1 mm apart that have unequal temperatures. The simulations are performed using a one-dimensional implementation of the sophisticated DSMC algorithm. In harmony with previous work, the primary convergence metric studied is the ratio of the DSMC-calculated thermal conductivity to its corresponding infinite-approximation Chapman-Enskog theoretical value. As discretization errors are reduced, the sophisticated DSMC algorithm is shown to approach the theoretical values to high precision. The convergence behavior of sophisticated DSMC is compared to that of original DSMC. The convergence of the new algorithm in a three-dimensional implementation is also characterized. Implementations using transient adaptive sub-cells and virtual sub-cells are compared. The new algorithm is shown to significantly reduce the computational resources required for a DSMC simulation to achieve a particular level of accuracy, thus improving the efficiency of the method by a factor of 2.
Active Control of Automotive Intake Noise under Rapid Acceleration using the Co-FXLMS Algorithm
NASA Astrophysics Data System (ADS)
Lee, Hae-Jin; Lee, Gyeong-Tae; Oh, Jae-Eung
Methods of reducing automotive intake noise can be classified into passive and active control techniques. Passive control, however, has a limited noise-reduction effect in the low frequency range (below 500 Hz) and is constrained by the space available in the engine room. Active control can overcome these limitations. The active control technique mostly uses the Least-Mean-Square (LMS) algorithm, because the LMS algorithm can easily obtain the complex transfer function in real-time, particularly when the Filtered-X LMS (FXLMS) algorithm is applied to an active noise control (ANC) system. However, the convergence performance of the LMS algorithm decreases significantly when the FXLMS algorithm is applied to the active control of intake noise under rapidly accelerating driving conditions. Therefore, in this study, the Co-FXLMS algorithm was proposed to improve the control performance of the FXLMS algorithm during rapid acceleration. The Co-FXLMS algorithm is realized by using an estimate of the cross correlation between the adaptation error and the filtered input signal to control the step size. The performance of the Co-FXLMS algorithm is presented in comparison with that of the FXLMS algorithm. Experimental results show that active noise control using Co-FXLMS is effective in reducing automotive intake noise during rapid acceleration.
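A fixed-step FXLMS loop can be sketched as follows (single channel; the Co-FXLMS step-size adaptation is omitted, and the identity secondary path in the demo is a simplifying assumption):

```python
import math

def fxlms(x_sig, d_sig, s_path, n_taps=16, mu=0.05):
    """Fixed-step FXLMS sketch. The reference is filtered through a model of the
    secondary path before the weight update; Co-FXLMS would additionally adapt
    mu from an error/filtered-input cross-correlation estimate (not shown)."""
    w = [0.0] * n_taps
    xbuf = [0.0] * (n_taps + len(s_path))  # reference history
    ybuf = [0.0] * len(s_path)             # anti-noise history (secondary-path input)
    fxbuf = [0.0] * n_taps                 # filtered-reference history
    errors = []
    for x, d in zip(x_sig, d_sig):
        xbuf = [x] + xbuf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, xbuf))         # anti-noise output
        ybuf = [y] + ybuf[:-1]
        y_sec = sum(si * yi for si, yi in zip(s_path, ybuf))
        e = d + y_sec                                       # residual at the error mic
        fx = sum(si * xi for si, xi in zip(s_path, xbuf))   # filtered-x sample
        fxbuf = [fx] + fxbuf[:-1]
        for k in range(n_taps):                             # LMS weight update
            w[k] -= mu * e * fxbuf[k]
        errors.append(e)
    return errors

# tonal disturbance, identity secondary path for brevity
N, w0 = 2000, 0.2 * math.pi
x = [math.sin(w0 * n) for n in range(N)]
d = [0.8 * math.sin(w0 * n + 0.4) for n in range(N)]
err = fxlms(x, d, s_path=[1.0])
```

For a stationary tone the residual decays toward zero; the abstract's point is that under rapid acceleration the reference statistics change quickly, which is why a fixed mu converges poorly and a correlation-driven step size helps.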
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Hash based parallel algorithms for mining association rules
Shintani, Takahiko; Kitsuregawa, Masaru
1996-12-31
In this paper, we propose four parallel algorithms (NPA, SPA, HPA, and HPA-ELD) for mining association rules on shared-nothing parallel machines to improve mining performance. In NPA, candidate itemsets are simply copied to all the processors, which can lead to memory overflow for large transaction databases. The remaining three algorithms partition the candidate itemsets over the processors. If the itemsets are partitioned simply (SPA), transaction data has to be broadcast to all processors. HPA partitions the candidate itemsets using a hash function to eliminate this broadcasting, which also reduces the comparison workload significantly. HPA-ELD fully utilizes the available memory space by detecting the extremely large itemsets and copying them, which is also very effective at flattening the load over the processors. We implemented these algorithms in a shared-nothing environment. Performance evaluations show that the best algorithm, HPA-ELD, attains good speedup linearity and is effective for handling skew.
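The hash-partitioning idea behind HPA can be sketched in a few lines (a sequential simulation of the per-processor count tables; real HPA runs on a shared-nothing cluster with message passing):

```python
from collections import defaultdict
from itertools import combinations

def hpa_count(transactions, candidates, n_procs):
    """HPA sketch: each candidate itemset is owned by hash(itemset) % n_procs,
    so a transaction's k-subsets are routed to exactly one processor instead of
    broadcasting the whole transaction to all of them."""
    owner = lambda itemset: hash(itemset) % n_procs
    counts = [defaultdict(int) for _ in range(n_procs)]  # per-processor tables
    cand = set(candidates)
    k = len(candidates[0])
    for t in transactions:
        for sub in combinations(sorted(t), k):
            if sub in cand:
                counts[owner(sub)][sub] += 1             # routed, not broadcast
    merged = {}
    for table in counts:
        merged.update(table)
    return merged
```

Because each candidate lives on exactly one processor, the per-processor candidate tables are smaller and each subset is compared against only one table, which is the comparison-workload reduction the abstract refers to.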
Basic cluster compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.
1980-01-01
Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
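The washout principle underlying such cueing algorithms can be illustrated with a first-order high-pass filter (a generic textbook washout stage, far simpler than the optimal or coordinated adaptive algorithms discussed above):

```python
def washout_highpass(signal, dt=0.02, tau=2.0):
    """First-order high-pass washout filter: passes transient (onset) cues and
    washes out sustained input so the motion platform drifts back to neutral
    within its travel limits. dt and tau are illustrative values."""
    a = tau / (tau + dt)              # discrete high-pass coefficient
    y = [0.0] * len(signal)
    for n in range(1, len(signal)):
        y[n] = a * (y[n - 1] + signal[n] - signal[n - 1])
    return y
```

Applied to a step input, the filter reproduces the onset almost fully and then decays toward zero, which is exactly the "cue the transient, wash out the sustained motion" behavior that the more sophisticated algorithms refine with vestibular models.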
Statistical or biological significance?
Saxon, Emma
2015-01-01
Oat plants grown at an agricultural research facility produce higher yields in Field 1 than in Field 2, under well fertilised conditions and with similar weather exposure; all oat plants in both fields are healthy and show no sign of disease. In this study, the authors hypothesised that the soil microbial community might be different in each field, and these differences might explain the difference in oat plant growth. They carried out a metagenomic analysis of the 16S ribosomal 'signature' sequences from bacteria in 50 randomly located soil samples in each field to determine the composition of the bacterial community. The study identified >1000 species, most of which were present in both fields. The authors identified two plant growth-promoting species that were significantly reduced in soil from Field 2 (Student's t-test, P < 0.05), and concluded that these species might have contributed to the reduced yield. PMID:26541972
[Algorithm for treating preoperative anemia].
Bisbe Vives, E; Basora Macaya, M
2015-06-01
Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the transfusion rate, improves hemoglobin levels at discharge, and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or whether we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameters and glomerular filtration rate, we can decide whether to start treatment with intravenous iron alone or with erythropoietin, with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin target will depend on the type of surgery and the patient's characteristics.
Duan, Lin; Wang, Zhongyuan; Hou, Yan; Wang, Zepeng; Gao, Guandao; Chen, Wei; Alvarez, Pedro J J
2016-10-15
Metal oxides are often anchored to graphene materials to achieve greater contaminant removal efficiency. To date, the enhanced performance has mainly been attributed to the role of graphene materials as a conductor for electron transfer. Herein, we report a new mechanism via which graphene materials enhance oxidation of organic contaminants by metal oxides. Specifically, Mn3O4-rGO nanocomposites (Mn3O4 nanoparticles anchored to reduced graphene oxide (rGO) nanosheets) enhanced oxidation of 1-naphthylamine (used here as a reaction probe) compared to bare Mn3O4. Spectroscopic analyses (X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy) show that the rGO component of Mn3O4-rGO was further reduced during the oxidation of 1-naphthylamine, although rGO reduction was not the result of direct interaction with 1-naphthylamine. We postulate that rGO improved the oxidation efficiency of anchored Mn3O4 by re-oxidizing Mn(II) formed from the reaction between Mn3O4 and 1-naphthylamine, thereby regenerating the surface-associated oxidant Mn(III). The proposed role of rGO was verified by separate experiments demonstrating its ability to oxidize dissolved Mn(II) to Mn(III), which subsequently can oxidize 1-naphthylamine. The role of dissolved oxygen in re-oxidizing Mn(II) was ruled out by anoxic (N2-purged) control experiments showing similar results as O2-sparged tests. Opposite pH effects on the oxidation efficiency of Mn3O4-rGO versus bare Mn3O4 were also observed, corroborating the proposed mechanism because higher pH facilitates oxidation of surface-associated Mn(II) even though it lowers the oxidation potential of Mn3O4. Overall, these findings may guide the development of novel metal oxide-graphene nanocomposites for contaminant removal. PMID:27448035
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
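The mixing step can be sketched as below; the linear mixing form and the sample emissivity values are assumptions for illustration, not values from the paper:

```python
def effective_emissivity(ice_concentration, e_ice, e_water):
    """Linearly mix ice and open-water emissivities, weighted by the
    ice concentration derived in the first Bootstrap pass."""
    return ice_concentration * e_ice + (1.0 - ice_concentration) * e_water

def brightness_to_emissivity(tb, surface_temp):
    """Convert a brightness temperature to an emissivity using the
    Rayleigh-Jeans approximation e = Tb / Ts (a simplification)."""
    return tb / surface_temp

e_eff = effective_emissivity(0.5, 0.92, 0.50)  # 50% ice cover (assumed values)
```

Once the emissivity is known, the same division can be run in reverse at 18 and 37 GHz to trade brightness temperatures for emissivities, removing the temperature dependence that biases cold regions.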
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
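The fixed 2-bits-per-base baseline that such DNA compressors improve on can be sketched as follows; this is a generic illustration, not the paper's variable-length bit-code assignment for repeat fragments:

```python
BASE_CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack a DNA string into bytes at 2 bits per base (4 bases per byte).
    The final byte is zero-padded on the right if len(seq) % 4 != 0."""
    packed = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | BASE_CODE[base]
        byte <<= 2 * (4 - len(chunk))  # pad a short final chunk
        packed.append(byte)
    return bytes(packed)
```

At 2 bits/base this already beats the 8 bits/base of plain ASCII storage; schemes like DNABIT Compress push below it (toward the quoted 1.58 bits/base) by giving shorter codes to repeated fragments.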
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research we realized that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can dynamically adapt its strategy to the properties of the specific problem instance. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Noise filtering algorithm for the MFTF-B computer based control system
Minor, E.G.
1983-11-30
An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
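A minimal sketch of such a subtract-and-compare filter follows; the function name and the exact threshold rule are assumptions, since the abstract describes only the general scheme (one stored value per channel, subtraction, comparison):

```python
def deadband_filter(samples, threshold):
    """Report a new value only when it differs from the last reported
    value by more than `threshold`; smaller changes are treated as noise
    and suppressed, keeping message traffic to the data base minimal."""
    reported = []
    last_reported = None
    for sample in samples:
        if last_reported is None or abs(sample - last_reported) > threshold:
            reported.append(sample)
            last_reported = sample  # the small per-channel state
    return reported
```

Only a subtraction and a comparison are needed per sample, and the per-channel state is a single stored value, matching the memory and compute frugality described above.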
Fast diffraction computation algorithms based on FFT
NASA Astrophysics Data System (ADS)
Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan
2010-11-01
The discovery of the Fast Fourier transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formulae so as to identify and isolate the Fourier transforms they contain. In this way, the fast scaled transformation, the fast Fresnel transformation and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allowed freedom to choose the dimensions of the output window for the Fraunhofer-Fourier and Fresnel diffraction, the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT, and last but not least the linearized discrete scaled convolution, a new concept of which we claim priority.
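The core idea, a diffraction pattern computed through a Fourier transform, can be sketched with a textbook radix-2 FFT; this generic illustration omits the output-grid scaling that the scaled-DFT variants described above address:

```python
import cmath

def fft(signal):
    """Radix-2 Cooley-Tukey FFT; len(signal) must be a power of two."""
    n = len(signal)
    if n == 1:
        return list(signal)
    even = fft(signal[0::2])
    odd = fft(signal[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def fraunhofer_intensity(aperture):
    """Far-field (Fraunhofer) intensity of a 1-D aperture: |FFT(aperture)|^2."""
    return [abs(v) ** 2 for v in fft(aperture)]

# a fully open window concentrates all energy in the zero-frequency bin
pattern = fraunhofer_intensity([1.0] * 64)
```

The O(N log N) cost of the FFT, versus O(N^2) for a direct DFT, is where the orders-of-magnitude speedup for diffraction computation comes from.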
Efficient Nonnegative Tucker Decompositions: Algorithms and Uniqueness.
Zhou, Guoxu; Cichocki, Andrzej; Zhao, Qibin; Xie, Shengli
2015-12-01
Nonnegative Tucker decomposition (NTD) is a powerful tool for the extraction of nonnegative parts-based and physically meaningful latent components from high-dimensional tensor data while preserving the natural multilinear structure of data. However, as the data tensor often has multiple modes and is large scale, the existing NTD algorithms suffer from a very high computational complexity in terms of both storage and computation time, which has been one major obstacle for practical applications of NTD. To overcome these disadvantages, we show how low (multilinear) rank approximation (LRA) of tensors is able to significantly simplify the computation of the gradients of the cost function, upon which a family of efficient first-order NTD algorithms are developed. Besides dramatically reducing the storage complexity and running time, the new algorithms are quite flexible and robust to noise, because any well-established LRA approaches can be applied. We also show how nonnegativity incorporating sparsity substantially improves the uniqueness property and partially alleviates the curse of dimensionality of the Tucker decompositions. Simulation results on synthetic and real-world data justify the validity and high efficiency of the proposed NTD algorithms.
Sprecher, Christoph M; Schmidutz, Florian; Helfen, Tobias; Richards, R Geoff; Blauth, Michael; Milz, Stefan
2015-12-01
Osteoporosis is a systemic disorder predominantly affecting postmenopausal women but also men at an advanced age. Both genders may suffer from low-energy fractures of, for example, the proximal humerus when reduction of the bone stock and/or quality has occurred. The aim of the current study was to compare the amount of bone in typical fracture zones of the proximal humerus in osteoporotic and non-osteoporotic individuals. The amount of bone in the proximal humerus was determined histomorphometrically in frontal plane sections. The donor bones were allocated to normal and osteoporotic groups using the T-score from distal radius DXA measurements of the same extremities. The T-score evaluation was done according to WHO criteria. Regional thickness of the subchondral plate and the metaphyseal cortical bone was measured using interactive image analysis. At all measured locations the amount of cancellous bone was significantly lower in individuals from the osteoporotic group than in the non-osteoporotic one. The osteoporotic group showed more significant differences between regions of the same bone than the non-osteoporotic group. In both groups the subchondral cancellous bone and the subchondral plate were least affected by bone loss. In contrast, the medial metaphyseal region in the osteoporotic group exhibited higher bone loss than the lateral side. This observation may explain prevailing fracture patterns, which frequently involve compression fractures, and certainly has an influence on the stability of implants placed in this medial region. It should be considered when planning the anchoring of osteosynthesis materials in osteoporotic patients with fractures of the proximal humerus.
Pileri, Emanuela; Gibert, Elisa; Soldevila, Ferran; García-Saenz, Ariadna; Pujols, Joan; Diaz, Ivan; Darwich, Laila; Casal, Jordi; Martín, Marga; Mateu, Enric
2015-01-30
The present study assessed the efficacy of vaccination against genotype 1 porcine reproductive and respiratory syndrome virus (PRRSV) in terms of reduction of transmission. Ninety-eight 3-week-old piglets were divided into two groups, V (n=40) and NV (n=58), that were housed separately. V animals were vaccinated with a commercial genotype 1 PRRSV vaccine while NV were kept as controls. On day 35 post-vaccination, 14 NV pigs were separated and inoculated intranasally with 2 ml of a heterologous genotype 1 PRRSV isolate ("seeder" pigs, SP). The other V and NV animals were distributed in groups of 5 pigs each. Two days later, one SP was introduced into each pen to expose V and NV to PRRSV. Sentinel pigs were allocated in adjacent pens. Follow-up lasted 21 days. All NV (30/30) became viremic after contact with SP while only 53% of V pigs did so (21/40, p<0.05). Vaccination shortened viremia (12.2±4 versus 3.7±3.4 days in NV and V pigs, respectively, p<0.01). The 50% survival time for becoming infected (Kaplan-Meier) for V was 21 days (CI95%=14.1-27.9) compared to 7 days (CI95%=5.2-8.7) for NV animals (p<0.01). These differences were reflected in the R value as well: 2.78 (CI95%=2.13-3.43) for NV and 0.53 (CI95%=0.19-0.76) for V pigs (p<0.05). All sentinel pigs (10/10) in pens adjacent to NV+SP pens got infected compared to 1/4 sentinel pigs allocated contiguous to a V+SP pen. These data show that vaccination of piglets significantly decreases parameters related to PRRSV transmission. PMID:25439650
MLEM algorithm adaptation for improved SPECT scintimammography
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Feiglin, David H.; Lee, Wei; Kunniyur, Vikram R.; Gangal, Kedar R.; Coman, Ioana L.; Lipson, Edward D.; Karczewski, Deborah A.; Thomas, F. Deaver
2005-04-01
Standard MLEM and OSEM algorithms used in SPECT Tc-99m sestamibi scintimammography produce hot-spot artifacts (HSA) at the image support peripheries. We investigated a suitable adaptation of the MLEM and OSEM algorithms needed to reduce HSA. Patients with suspicious breast lesions were administered 10 mCi of Tc-99m sestamibi, and SPECT scans were acquired with patients in the prone position with uncompressed breasts. In addition, to simulate breast lesions, some patients were imaged with a number of breast skin markers each containing 1 mCi of Tc-99m. In order to reduce HSA in reconstruction, we removed from the backprojection step the rays that traverse the periphery of the support region on the way to a detector bin when their path length through this region was shorter than some critical length. Such very short paths result in very low projection counts contributed to the detector bin, and consequently in overestimation of the activity in the peripheral voxels in the backprojection step, thus creating HSA. We analyzed the breast-lesion contrast and suppression of HSA in images reconstructed using the standard and modified MLEM and OSEM algorithms versus the critical path length (CPL). For CPL >= 0.01 pixel size, we observed improved breast-lesion contrast and lower noise in the reconstructed images, and a very significant reduction of HSA in the maximum intensity projection (MIP) images.
Control algorithms for dynamic attenuators
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current
NASA Astrophysics Data System (ADS)
Evertz, Hans Gerd
1998-03-01
Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993); for a recent review see H.G. Evertz, cond-mat/9707221) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.
Spaceborne SAR Imaging Algorithm for Coherence Optimized.
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that maximizes coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using optimal imaging parameters. A traditional imaging algorithm achieves the best focusing effect but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed here, the SAR echoes adopt consistent imaging parameters during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446
PVT Analysis With A Deconvolution Algorithm
Kouzes, Richard T.
2011-02-01
Polyvinyl Toluene (PVT) plastic scintillator is the most common gamma ray detector material used for large systems when only gross counting is needed because of its low cost, robustness, and relative sensitivity. PVT does provide some energy information about the incident photons, as has been demonstrated through the development of Energy Windowing analysis. There is a more sophisticated energy analysis algorithm developed by Symetrica, Inc., and they have demonstrated the application of their deconvolution algorithm to PVT with very promising results. The thrust of such a deconvolution algorithm used with PVT is to allow for identification and rejection of naturally occurring radioactive material, reducing alarm rates, rather than the complete identification of all radionuclides, which is the goal of spectroscopic portal monitors. Under this condition, there could be a significant increase in sensitivity to threat materials. The advantage of this approach is an enhancement to the low cost, robust detection capability of PVT-based radiation portal monitor systems. The success of this method could provide an inexpensive upgrade path for a large number of deployed PVT-based systems to provide significantly improved capability at a much lower cost than deployment of NaI(Tl)-based systems of comparable sensitivity.
Effective FCM noise clustering algorithms in medical images.
Kannan, S R; Devi, R; Ramathilagam, S; Takezawa, K
2013-02-01
The main motivation of this paper is to introduce a class of robust non-Euclidean distance measures for the original data space, used to derive new objective functions for clustering the non-Euclidean structures in data and thus enhance the robustness of the original clustering algorithms to noise and outliers. The new objective functions of the proposed algorithms are realized by incorporating the noise clustering concept into the entropy-based fuzzy C-means algorithm with a suitable noise distance, which is employed to account for noisy data in the clustering process. This paper presents initial cluster prototypes using a prototype initialization method, so that the final result is obtained in fewer iterations. To evaluate the performance of the proposed methods in reducing the noise level, experimental work has been carried out with a synthetic image corrupted by Gaussian noise. The superiority of the proposed methods has been examined through an experimental study on medical images. The experimental results show that the proposed algorithms perform significantly better than the standard existing algorithms. The accurate classification percentage of the proposed fuzzy C-means segmentation method is obtained using the silhouette validity index.
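For orientation, the standard fuzzy C-means membership update that such noise-clustering variants build on can be sketched as below; the paper's robust non-Euclidean distances are replaced here by plain distances, an assumption for illustration:

```python
def fcm_memberships(distances, m=2.0):
    """Membership of one data point in each cluster, given its distance to
    every cluster centre; the fuzzifier m > 1 controls how soft the split
    is. Assumes all distances are strictly positive."""
    inv = [d ** (-2.0 / (m - 1.0)) for d in distances]
    total = sum(inv)
    return [v / total for v in inv]

u = fcm_memberships([1.0, 3.0])  # the point is closer to cluster 0
```

A noise-clustering variant adds one extra "noise cluster" at a fixed noise distance, so outliers far from every real centre end up assigned mostly to it instead of distorting the real prototypes.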
Compression algorithm for multideterminant wave functions.
Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J
2014-02-01
A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.
A Task-parallel Clustering Algorithm for Structured AMR
Gunney, B N; Wissink, A M
2004-11-02
A new parallel algorithm, based on the Berger-Rigoutsos algorithm for clustering grid points into logically rectangular regions, is presented. The clustering operation is frequently performed in the dynamic gridding steps of structured adaptive mesh refinement (SAMR) calculations. A previous study revealed that although the cost of clustering is generally insignificant for smaller problems run on relatively few processors, the algorithm scales inefficiently in parallel and its cost grows with problem size. Hence, it can become significant for large-scale problems run on very large parallel machines, such as the new BlueGene system (which has O(10^4) processors). We propose a new task-parallel algorithm designed to reduce communication wait times. Performance was assessed using dynamic SAMR re-gridding operations on up to 16K processors of currently available computers at Lawrence Livermore National Laboratory. The new algorithm was shown to be up to an order of magnitude faster than the baseline algorithm and had better scaling trends.
Climate warming could reduce runoff significantly in New England, USA
Huntington, T.G.
2003-01-01
The relation between mean annual temperature (MAT), mean annual precipitation (MAP) and evapotranspiration (ET) for 38 forested watersheds was determined to evaluate the potential increase in ET and resulting decrease in stream runoff that could occur following climate change and lengthening of the growing season. The watersheds were all predominantly forested and were located in eastern North America, along a gradient in MAT from 3.5 °C in New Brunswick, Canada, to 19.8 °C in northern Florida. Regression analysis for MAT versus ET indicated that along this gradient ET increased at a rate of 2.85 cm per °C increase in MAT (±0.96 cm °C⁻¹, 95% confidence limits). General circulation models (GCM) using current mid-range emission scenarios project global MAT to increase by about 3 °C during the 21st century. The inferred potential reduction in annual runoff associated with a 3 °C increase in MAT for a representative small coastal basin and an inland mountainous basin in New England would be 11-13%. Percentage reductions in average daily runoff could be substantially larger during the months of lowest flows (July-September). The largest absolute reductions in runoff are likely to be during April and May, with smaller reductions in the fall. This seasonal pattern of reduction in runoff is consistent with lengthening of the growing season and an increase in the ratio of rain to snow. Future increases in water use efficiency (WUE), precipitation, and cloudiness could mitigate part or all of this reduction in runoff, but the full effects of changing climate on WUE remain quite uncertain, as do future trends in precipitation and cloudiness.
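The headline numbers in this abstract can be checked with simple arithmetic. A hedged sketch follows; the annual runoff value used for the example basin is an assumed illustration, not a figure from the study:

```python
# Back-of-envelope check of the runoff-reduction estimate in the abstract.
# The 2.85 cm/degC sensitivity and the 3 degC warming are from the text;
# the example basin runoff is an assumed value for illustration only.

ET_SENSITIVITY_CM_PER_C = 2.85   # ET increase per degC of MAT (from regression)
WARMING_C = 3.0                  # mid-range GCM projection cited in the abstract

def runoff_reduction_pct(annual_runoff_cm: float) -> float:
    """Percent runoff loss if the ET increase comes entirely out of runoff."""
    delta_et = ET_SENSITIVITY_CM_PER_C * WARMING_C   # about 8.55 cm
    return 100.0 * delta_et / annual_runoff_cm

# A hypothetical New England basin with ~70 cm/yr runoff (assumed value):
print(round(runoff_reduction_pct(70.0), 1))  # ~12.2, inside the 11-13% range
```

With runoff anywhere between roughly 66 and 78 cm/yr, this simple ratio reproduces the 11-13% range reported for the two example basins.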
Statistically significant relational data mining :
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.
2014-02-01
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees.
Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng
2015-09-18
In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods.
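For orientation, the classic two-pass labeling baseline that block-based methods such as this one accelerate can be sketched as follows. This is a minimal 4-connectivity union-find version, not the paper's block-based decision-tree algorithm:

```python
# Minimal classic two-pass connected-component labeling (4-connectivity,
# union-find). Block-based algorithms reduce the per-pixel neighborhood
# reads this textbook version performs.

def label_components(img):
    """img: list of rows of 0/1. Returns a same-shape matrix of labels."""
    h, w = len(img), len(img[0])
    parent = {}

    def find(x):                      # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    labels = [[0] * w for _ in range(h)]
    next_label = 1
    # Pass 1: provisional labels, recording label equivalences.
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = up
                ru, rl = find(up), find(left)
                if ru != rl:
                    parent[rl] = ru   # merge the two equivalence classes
            elif up or left:
                labels[y][x] = up or left
            else:
                parent[next_label] = next_label
                labels[y][x] = next_label
                next_label += 1
    # Pass 2: replace provisional labels by their class representatives.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label_components(img)
print(len({v for row in out for v in row if v}))  # 2 connected components
```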
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
NASA Astrophysics Data System (ADS)
Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta
2016-10-01
We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for calculating the magnetostatic field, which occupies a significant portion of a large-scale micromagnetics simulation. This fast Fourier transform algorithm reduces the frequency of all-to-all communications from six to two times. Simulation times with our simulator show high scalability in parallelization, even when we perform the micromagnetics simulation using 32,768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables micromagnetics simulations of the world's largest class, with over one billion calculation cells, to be carried out.
Vu, Michael M; Kim, John Y S
2015-06-01
Acellular dermal matrix (ADM) is widely used in primary prosthetic breast reconstruction. Many indications and contraindications to use ADM have been reported in the literature, and their use varies by institution and surgeon. Developing rational, tested algorithms to determine when ADM is appropriate can significantly improve surgical outcomes and reduce costs associated with ADM use. We review the important indications and contraindications, and discuss the algorithms that have been put forth so far. Further research into algorithmic decision-making for ADM use will allow optimized balancing of cost with risk and benefit. PMID:26161304
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
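Metric (i) in this abstract can be sketched in a few lines. This is an illustrative reading of "centered" RMSE: the means are removed before differencing, so a constant offset between the homogenized and true series does not count as error:

```python
# Sketch of a centered root-mean-square error between a homogenized series
# and the true homogeneous series (means removed before differencing).
# An illustrative reading of the benchmark metric, not the study's exact code.

def centered_rmse(homogenized, truth):
    n = len(truth)
    mh = sum(homogenized) / n
    mt = sum(truth) / n
    # Difference of the mean-centered series, then RMS.
    return (sum(((h - mh) - (t - mt)) ** 2
                for h, t in zip(homogenized, truth)) / n) ** 0.5

print(centered_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect match: 0.0
```

Note that `centered_rmse([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])` is also zero: a pure level shift is invisible to this metric, which is exactly why trend error (metric ii) is reported separately.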
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
NASA Astrophysics Data System (ADS)
Choi, Shinkook; Baek, Jongduk
2015-03-01
In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., an error image) produced by the ED objects and subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm that minimizes an energy function, calculating the gradient projection with a step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used a test object consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Although the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two-pass algorithm reduced the cone beam artifacts, with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
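The subtraction step of the two-pass correction can be shown schematically. Plain arrays stand in for reconstructed images here; a real pipeline would segment the ED objects, reproject them, and reconstruct the artifact-only error image with an FDK reconstructor:

```python
# Schematic of the two-pass correction: subtract the artifact-only "error
# image", reconstructed from the segmented extreme-density (ED) objects,
# from the first-pass reconstruction. Values are illustrative.

def two_pass_correct(first_pass, ed_error_image):
    """Element-wise subtraction of the ED-artifact image from the recon."""
    return [[a - e for a, e in zip(row_a, row_e)]
            for row_a, row_e in zip(first_pass, ed_error_image)]

recon = [[1.0, 1.5], [0.75, 5.0]]    # first-pass image; 1.5 contains artifact
err   = [[0.0, 0.5], [0.0,  0.0]]    # artifact image from ED-only simulation
print(two_pass_correct(recon, err))  # -> [[1.0, 1.0], [0.75, 5.0]]
```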
On computational algorithms for real-valued continuous functions of several variables.
Sprecher, David
2014-11-01
The subject of this paper is algorithms for computing superpositions of real-valued continuous functions of several variables based on space-filling curves. The prototypes of these algorithms were based on Kolmogorov's dimension-reducing superpositions (Kolmogorov, 1957). Interest in these grew significantly with Hecht-Nielsen's discovery that a version of Kolmogorov's formula has an interpretation as a feedforward neural network (Hecht-Nielsen, 1987). These superpositions were constructed with devil's-staircase-type functions to answer a question in functional complexity, rather than to serve as computational algorithms, and their utility as an efficient computational tool turned out to be limited by the characteristics of the space-filling curves that they determined. After discussing the link between the algorithms and these curves, this paper presents two algorithms for the case of two variables: one based on space-filling curves with fully worked-out coding, and one based on the Hilbert curve (Hilbert, 1891).
Fast impedance measurements at very low frequencies using curve fitting algorithms
NASA Astrophysics Data System (ADS)
Piasecki, Tomasz
2015-06-01
The method for reducing the time of impedance measurements at very low frequencies was proposed and implemented. The reduction was achieved by using impedance estimation algorithms that do not require the acquisition of the momentary voltage and current values for at least one whole period of the excitation signal. The algorithms were based on direct least squares ellipse and sine fitting to recorded waveforms. The performance of the algorithms was evaluated based on the sampling time, signal-to-noise (S/N) ratio and sampling frequency using a series of Monte Carlo experiments. An improved algorithm for the detection of the ellipse direction was implemented and compared to a voting algorithm. The sine fitting algorithm provided significantly better results. It was less sensitive to the sampling start point and measured impedance argument and did not exhibit any systematic error of impedance estimation. It allowed a significant reduction of the measurement time. A 1% standard deviation of impedance estimation was achieved using a sine fitting algorithm with a measurement time reduced to 11% of the excitation signal period.
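The key point of this approach, that fitting a sine of known frequency is a *linear* least-squares problem solvable from a fraction of a period, can be sketched as follows. The signals are synthetic and noiseless, and the function names are illustrative, not the paper's implementation:

```python
# Sketch of sine fitting for fast low-frequency impedance measurement:
# with the excitation frequency known, y(t) = A sin(wt) + B cos(wt) + C
# is linear in (A, B, C), so amplitudes and phases of voltage and current,
# hence impedance, can be estimated from a fraction of one period.

import numpy as np

def fit_sine(t, y, omega):
    """Least-squares fit of A*sin(wt)+B*cos(wt)+C; returns (amplitude, phase)."""
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

omega = 2 * np.pi * 0.01              # 0.01 Hz excitation (100 s period)
t = np.linspace(0.0, 11.0, 200)       # only 11% of one period is sampled
v = 2.0 * np.sin(omega * t + 0.3)     # synthetic voltage waveform
i = 0.5 * np.sin(omega * t)           # synthetic current waveform

v_amp, v_ph = fit_sine(t, v, omega)
i_amp, i_ph = fit_sine(t, i, omega)
print(round(v_amp / i_amp, 3), round(v_ph - i_ph, 3))  # |Z| ~ 4.0, arg(Z) ~ 0.3
```

On noiseless data the fit is exact even over 11% of the period; with noise, accuracy degrades with shorter windows, which is the trade-off the Monte Carlo experiments in the paper quantify.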
A Fast Implementation of the ISOCLUS Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2003-01-01
Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo, et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O
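The inner loop that k-means and ISOCLUS share, one Lloyd iteration, can be sketched as follows. This is a pure-Python illustration, not the authors' implementation; the kd-tree acceleration discussed above replaces the naive nearest-center search inside it:

```python
# One Lloyd (k-means) iteration: assign every point to its nearest center,
# then recompute each center as the mean of its assigned points. ISOCLUS
# wraps iterations like this with merge/split heuristics.

def kmeans_step(points, centers):
    """points, centers: sequences of equal-dimension tuples/lists."""
    k, dim = len(centers), len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p in points:
        # Naive O(k) nearest-center search per point -- the part a kd-tree
        # "filtering" step accelerates in the cited method of Kanungo et al.
        j = min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        counts[j] += 1
        for d, a in enumerate(p):
            sums[j][d] += a
    new_centers = []
    for j in range(k):
        if counts[j]:
            new_centers.append([s / counts[j] for s in sums[j]])
        else:
            new_centers.append(list(centers[j]))  # empty cluster: keep center
    return new_centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = [(1.0, 1.0), (9.0, 9.0)]
print(kmeans_step(pts, centers))  # -> [[0.0, 0.5], [10.0, 10.5]]
```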
Relevant and significant supervised gene clusters for microarray cancer classification.
Maji, Pradipta; Das, Chandra
2012-06-01
An important application of microarray data in functional genomics is to classify samples according to their gene expression profiles such as to classify cancer versus normal samples or to classify different types or subtypes of cancer. One of the major tasks with gene expression data is to find co-regulated gene groups whose collective expression is strongly associated with sample categories. In this regard, a gene clustering algorithm is proposed to group genes from microarray data. It directly incorporates the information of sample categories in the grouping process for finding groups of co-regulated genes with strong association to the sample categories, yielding a supervised gene clustering algorithm. The average expression of the genes from each cluster acts as its representative. Some significant representatives are taken to form the reduced feature set to build the classifiers for cancer classification. The mutual information is used to compute both gene-gene redundancy and gene-class relevance. The performance of the proposed method, along with a comparison with existing methods, is studied on six cancer microarray data sets using the predictive accuracy of naive Bayes classifier, K-nearest neighbor rule, and support vector machine. An important finding is that the proposed algorithm is shown to be effective for identifying biologically significant gene clusters with excellent predictive capability. PMID:22552589
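The gene-class relevance measure used here can be sketched as plain discrete mutual information. The discretization into expression levels is an illustrative choice; the paper's exact estimator may differ:

```python
# Sketch of mutual information between a discretized gene-expression variable
# and the class label, the relevance measure named in the abstract. The same
# function applied to two genes' discretized profiles gives gene-gene redundancy.

from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired samples of two discrete variables."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A gene whose (discretized) expression perfectly tracks a binary class label
# carries exactly 1 bit of class information:
print(round(mutual_information(["lo", "hi", "lo", "hi"], [0, 1, 0, 1]), 3))  # 1.0
```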
IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS
Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.
2010-06-11
The level of detail discernable in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items, assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated both on their technical performance in image analysis and on their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, these algorithms can be used in combination to improve verification capability in inspection regimes and improve
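The first algorithm's one-way histogram transform can be sketched as follows. The bin count and the L1 comparison are illustrative choices, not the paper's exact parameters:

```python
# Sketch of the one-way transform in the first algorithm: reduce an image to a
# non-invertible pixel-intensity histogram, then compare it against a template
# histogram. Only summary statistics cross the information barrier.

def intensity_histogram(pixels, bins=8, max_val=256):
    """Coarse intensity histogram; the original image cannot be recovered."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    return hist

def histogram_distance(h1, h2):
    """L1 distance between two histograms, the single reportable statistic."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

img = [0, 10, 40, 200, 250, 250]        # flattened pixel intensities
template = [0, 12, 45, 198, 251, 249]   # reference item, slightly perturbed
print(histogram_distance(intensity_histogram(img),
                         intensity_histogram(template)))  # 0: same structure
```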
A New Positioning Algorithm for Position-Sensitive Avalanche Photodiodes.
Zhang, Jin; Olcott, Peter D; Levin, Craig S
2007-06-01
We are using a novel position sensitive avalanche photodiode (PSAPD) for the construction of a high resolution positron emission tomography (PET) camera. Up to now most researchers working with PSAPDs have been using an Anger-like positioning algorithm involving the four corner readout signals of the PSAPD. This algorithm yields a significant non-linear spatial "pin-cushion" distortion in raw crystal positioning histograms. In this paper, we report an improved positioning algorithm, which combines two diagonal corner signals of the PSAPD followed by a 45° rotation to determine the X or Y position of the interaction. We present flood positioning histogram data generated with the old and new positioning algorithms using a 3 × 4 array of 2 × 2 × 3 mm³ and a 3 × 8 array of 1 × 1 × 3 mm³ LSO crystals coupled to 8 × 8 mm² PSAPDs. This new algorithm significantly reduces the pin-cushion distortion in the raw flood histogram image. PMID:24307743
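A plausible reading of the diagonal-corner positioning can be sketched as follows. The normalization and sign conventions here are assumptions for illustration, not the paper's exact formulas:

```python
# Illustrative sketch of diagonal-corner PSAPD positioning: each diagonal pair
# of corner signals gives one ratio coordinate along a diagonal axis, and a
# 45-degree rotation maps the diagonal frame onto the X/Y axes.

import math

def psapd_position(a, b, c, d):
    """a..d: corner signals in clockwise order (a/c and b/d are diagonal pairs)."""
    u = (a - c) / (a + c)          # coordinate along one diagonal
    v = (b - d) / (b + d)          # coordinate along the other diagonal
    theta = math.radians(45)       # rotate the diagonal frame onto X/Y
    x = u * math.cos(theta) - v * math.sin(theta)
    y = u * math.sin(theta) + v * math.cos(theta)
    return x, y

# An event depositing equal charge on all four corners maps to the center:
print(psapd_position(1.0, 1.0, 1.0, 1.0))  # -> (0.0, 0.0)
```

Because each coordinate uses only one diagonal pair rather than a sum over all four corners, this construction is one way the pin-cushion distortion of the Anger-like formula can be reduced.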
Outline of a fast hardware implementation of Winograd's DFT algorithm
NASA Technical Reports Server (NTRS)
Zohar, S.
1980-01-01
The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are operated on simultaneously, each batch undergoing one of 5 processing phases.
Automated algorithm for breast tissue differentiation in optical coherence tomography
Mujat, Mircea; Ferguson, R. Daniel; Hammer, Daniel X.; Gittins, Christopher; Iftimia, Nicusor
2010-01-01
An automated algorithm for differentiating breast tissue types based on optical coherence tomography (OCT) data is presented. Eight parameters are derived from the OCT reflectivity profiles and their means and covariance matrices are calculated for each tissue type from a training set (48 samples) selected based on histological examination. A quadratic discrimination score is then used to assess the samples from a validation set. The algorithm results for a set of 89 breast tissue samples were correlated with the histological findings, yielding specificity and sensitivity of 0.88. If further perfected to work in real time and yield even higher sensitivity and specificity, this algorithm would be a valuable tool for biopsy guidance and could significantly increase procedure reliability by reducing both the number of nondiagnostic aspirates and the number of false negatives. PMID:19566332
Genetic algorithms as discovery programs
Hilliard, M.R.; Liepins, G.
1986-01-01
Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.
A distributed Canny edge detector: algorithm and FPGA implementation.
Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J
2014-07-01
The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
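The per-block adaptive thresholding idea can be sketched as follows. The smooth-block test and the percentile values are illustrative assumptions, not the paper's block classifier:

```python
# Sketch of per-block adaptive hysteresis thresholds for a distributed Canny:
# instead of one frame-level statistic, each block derives its high/low
# thresholds from its local gradient-magnitude distribution, and smooth blocks
# suppress edges entirely rather than amplify noise.

def block_thresholds(grad_mags, smooth_cutoff=5.0, hi_pct=0.8, low_ratio=0.4):
    """grad_mags: flat list of gradient magnitudes for one image block."""
    mags = sorted(grad_mags)
    # Smooth/uniform block: no gradients strong enough to be real edges.
    if mags[-1] < smooth_cutoff:
        return float("inf"), float("inf")
    hi = mags[int(hi_pct * (len(mags) - 1))]    # local high threshold
    return hi, low_ratio * hi                   # low threshold tied to high

print(block_thresholds([0.1, 0.2, 0.3]))        # smooth block -> no edges kept
print(block_thresholds([1, 2, 3, 40, 50, 60]))  # textured block -> (50, 20.0)
```

Making the thresholds a function of the block rather than the frame is what removes the frame-level latency: a block can be thresholded as soon as its own gradient histogram is available.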
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
Aligning parallel arrays to reduce communication
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.; Schreiber, Robert; Gilbert, John R.; Chatterjee, Siddhartha
1994-01-01
Axis and stride alignment is an important optimization in compiling data-parallel programs for distributed-memory machines. We previously developed an optimal algorithm for aligning array expressions. Here, we examine alignment for more general program graphs. We show that optimal alignment is NP-complete in this setting, so we study heuristic methods. This paper makes two contributions. First, we show how local graph transformations can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. Second, we give a heuristic that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. Our algorithms have been implemented; we present experimental results showing their effect on the performance of some example programs running on the CM-5.
A synthesized heuristic task scheduling algorithm.
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are chosen for scheduling. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance.
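The three-level priority rule described above can be sketched as a single lexicographic key function; the data structures (a critical-task set, path lengths to the exit task, predecessor counts) are hypothetical stand-ins for the quantities the abstract names, not the paper's implementation.

```python
def pick_next_task(ready, critical, path_len, n_preds):
    """Three-level task priority, as in HCPPEFT's prioritizing phase:
    1. critical-path tasks first;
    2. then the longest path to the exit task;
    3. ties broken by fewer predecessors.
    `ready` is the set of schedulable tasks; the other arguments map
    task name -> criticality / path length / predecessor count."""
    return min(ready, key=lambda t: (t not in critical, -path_len[t], n_preds[t]))
```

A lexicographic tuple key makes the priority ordering explicit and easy to audit: Python compares the criticality flag first, then the negated path length, then the predecessor count.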
ICESat-2 / ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.
2013-12-01
NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft, which is expected to launch in 2016 with a 3-year mission lifetime. The ICESat-2 orbital altitude will be 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. Without some method of eliminating solar background noise in near real-time, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level, a set of onboard Receiver Algorithms has been developed. These algorithms limit the daily data volume by distinguishing surface echoes from the background noise, allowing the instrument to telemeter only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and an onboard relief map. Similar to what was flown on the ATLAS predecessor GLAS (Geoscience Laser Altimeter System), the DEM provides minimum and maximum heights for each 1 degree x 1 degree tile on the Earth. This information allows the onboard algorithm to limit its signal search to the region between the minimum and maximum heights (plus some margin for errors). The understanding that surface echoes will tend to clump while noise will be randomly distributed led us to histogram the received event times. The selection of signal locations is based on those histogram bins with statistically significant counts. Once the signal location has been established, the onboard Digital Relief Map (DRM) is used to determine the vertical width of the telemetry band about the signal. The ATLAS Receiver Algorithms are nearing completion of the development phase and are currently being tested using a Monte Carlo software simulator that models the instrument, the orbit and the environment
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
A New Aloha Anti-Collision Algorithm Based on CDMA
NASA Astrophysics Data System (ADS)
Bai, Enjian; Feng, Zhu
Tag collision is a common problem in RFID (radio frequency identification) systems, and it affects the integrity of data transmission during communication in the RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the grouped dynamic frame slotted Aloha algorithm with code division multiple access technology and can effectively reduce the collision probability between tags. For the same number of tags, the algorithm reduces the reader recognition time and improves the overall system throughput rate.
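For readers unfamiliar with frame slotted Aloha, one read round can be sketched as below. The backlog estimate and frame-sizing rule are simple illustrative choices, and the CDMA layer (which lets some collided slots still be resolved) is omitted.

```python
import random

def aloha_round(n_tags, frame_size, rng=random):
    """One frame of slotted Aloha: every unidentified tag picks one of
    frame_size slots at random.  Slots holding exactly one tag are read
    successfully; collided slots (two or more tags) feed a backlog
    estimate used to size the next frame."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    successes = sum(1 for s in slots if s == 1)
    collided = sum(1 for s in slots if s > 1)
    next_frame = max(1, 2 * collided)   # each collided slot hides >= 2 tags
    return successes, collided, next_frame
```

The reader repeats rounds with the adapted frame size until no collisions remain; adding CDMA codes on top, as the paper does, lowers the effective collision probability per slot.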
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
NASA Astrophysics Data System (ADS)
Demirciler, Kemal; Ortega, Antonio
2005-12-01
This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade the local minima and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
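The baseline the paper speeds up is the standard DA association step, which a minimal scalar sketch makes concrete (the paper's contribution, cheaper surrogates for this Gibbs distribution, is not reproduced here; scalar data and squared-error distortion are simplifying assumptions):

```python
import math

def da_step(data, codebook, beta):
    """One deterministic-annealing update for scalar VQ design.

    Soft (Gibbs) association of each sample x to codevector c_j:
        p(j|x) = exp(-beta * (x - c_j)**2) / Z(x)
    followed by the weighted centroid update
        c_j = sum_x p(j|x) * x / sum_x p(j|x).
    As beta -> infinity this hardens into Lloyd's nearest-neighbour rule."""
    k = len(codebook)
    num = [0.0] * k
    den = [0.0] * k
    for x in data:
        w = [math.exp(-beta * (x - c) ** 2) for c in codebook]
        z = sum(w)
        for j in range(k):
            p = w[j] / z
            num[j] += p * x
            den[j] += p
    return [num[j] / den[j] if den[j] > 0 else codebook[j] for j in range(k)]
```

Annealing repeats this step while gradually increasing beta; the cost per step is dominated by the exponentials, which is exactly what the paper's simplified assignment measures avoid.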
Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.
2015-01-01
Reducing energy consumption is becoming very important in order to keep battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called Shuffled Frog Leaping Algorithm (SFLA) is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid the precocity and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA) algorithm. Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than ACO and GA, respectively, for finding the optimal solution. PMID:26110406
NASA Astrophysics Data System (ADS)
Jayaraj, V.; Ebenezer, D.
2010-12-01
A new switching-based median filtering scheme for restoration of images that are highly corrupted by salt and pepper noise is proposed. An algorithm based on the scheme is developed. The new scheme introduces the concept of substitution of noisy pixels by linear prediction prior to estimation. A novel simplified linear predictor is developed for this purpose. The objective of the scheme and algorithm is the removal of high-density salt and pepper noise in images. The new algorithm shows significantly better image quality with good PSNR, reduced MSE, good edge preservation, and reduced streaking. The good performance is achieved with reduced computational complexity. A comparison of the performance is made with several existing algorithms in terms of visual and quantitative results. The performance of the proposed scheme and algorithm is demonstrated.
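The switching idea, filtering only pixels at the extreme salt/pepper values and passing clean pixels through, can be sketched as below. As a simplification, the paper's linear-prediction substitution is replaced here by the median of the uncorrupted 3x3 neighbours (with a plain neighbourhood median as a fallback), so this is the general scheme rather than the authors' predictor.

```python
def desalt(img, lo=0, hi=255):
    """Switching median filter for salt-and-pepper noise.

    Only pixels equal to the extreme values (salt = hi, pepper = lo) are
    replaced; all other pixels pass through untouched, which is what
    preserves edges.  Each noisy pixel becomes the median of the
    uncorrupted pixels in its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if lo < img[y][x] < hi:
                continue                      # noise-free pixel: keep as-is
            nbrs = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            clean = [v for v in nbrs if lo < v < hi]
            pool = sorted(clean or nbrs)      # fall back if all neighbours noisy
            out[y][x] = pool[len(pool) // 2]
    return out
```

At high noise densities the fallback branch matters, which is where a predictive substitution like the paper's gives better estimates than a raw median.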
Advancing-Front Algorithm For Delaunay Triangulation
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1993-01-01
Efficient algorithm performs Delaunay triangulation to generate unstructured grids for use in computing two-dimensional flows. Once the grid is generated, one can optionally call an additional subalgorithm that removes diagonal lines from quadrilateral cells that are nearly rectangular. The resulting approximately rectangular grid reduces the cost per iteration of the flow-computing algorithm.
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).
Cognitive radio resource allocation based on coupled chaotic genetic algorithm
NASA Astrophysics Data System (ADS)
Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang
2010-11-01
A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on the genetic algorithm and the coupled Logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system, and has a faster convergence speed.
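The coupled Logistic map that drives the chaotic component can be sketched as below; the symmetric coupling form and the parameter values (mu = 4, eps = 0.1) are common textbook choices, not necessarily the ones used in the paper.

```python
def coupled_logistic(x0, y0, n, mu=4.0, eps=0.1):
    """Generate n steps of two symmetrically coupled logistic maps:
        x' = (1 - eps) * f(x) + eps * f(y)
        y' = (1 - eps) * f(y) + eps * f(x)
    with f(z) = mu * z * (1 - z).  For mu = 4 the orbits are chaotic yet
    stay inside [0, 1], which is what makes them useful for spreading a
    GA population over a normalized search space."""
    f = lambda z: mu * z * (1.0 - z)
    xs, ys = [x0], [y0]
    for _ in range(n):
        fx, fy = f(xs[-1]), f(ys[-1])
        xs.append((1 - eps) * fx + eps * fy)
        ys.append((1 - eps) * fy + eps * fx)
    return xs, ys
```

In a chaotic GA, such sequences typically seed the initial population or perturb individuals in place of uniform random draws, improving coverage of the search space.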
Parallel algorithm development
Adams, T.F.
1996-06-01
Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
Song, Yang; Zhang, Bin; He, Anzhi
2006-11-01
A novel algebraic iterative algorithm based on deflection tomography is presented. This algorithm is derived from the essentials of deflection tomography with a linear expansion of the local basis functions. By use of this algorithm the tomographic problem is finally reduced to the solution of a set of linear equations. The algorithm is demonstrated by mapping a three-peak Gaussian simulative temperature field. Compared with reconstruction results obtained by other traditional deflection algorithms, its reconstruction results provide a significant improvement in reconstruction accuracy, especially in cases with noisy data added. In the density diagnosis of a hypersonic wind tunnel, this algorithm is adopted to reconstruct density distributions of an axial symmetry flow field. One cross section of the reconstruction results is selected to be compared with the inverse Abel transform algorithm. Results show that the novel algorithm can achieve an accuracy equivalent to the inverse Abel transform algorithm. However, the novel algorithm is more versatile because it is applicable to arbitrary kinds of distribution.
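Since the abstract reduces the tomographic problem to a set of linear equations solved iteratively, a minimal algebraic-reconstruction sketch makes the idea concrete. This is the classical Kaczmarz (ART) row-projection iteration, used here as a generic stand-in; it is not the authors' specific deflection-tomography scheme.

```python
def kaczmarz(A, b, iters=200, relax=1.0):
    """Algebraic reconstruction (Kaczmarz): sweep the rows of A x = b,
    projecting the current estimate onto each row's hyperplane:
        x <- x + relax * (b_i - a_i . x) / ||a_i||^2 * a_i
    Converges to a solution for consistent systems."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_i)
            if norm2 == 0.0:
                continue                      # skip empty rows
            r = (b_i - sum(a * v for a, v in zip(a_i, x))) / norm2
            x = [v + relax * r * a for v, a in zip(x, a_i)]
    return x
```

In tomography, each row of A encodes one ray (here, one deflection measurement) and x holds the basis-function coefficients of the reconstructed field.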
A symbol-map wavelet zero-tree image coding algorithm
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Liu, Wenyao; Peng, Xiang; Liu, Xiaoli
2008-03-01
An improved SPIHT image compression algorithm, called the symbol-map zero-tree coding algorithm (SMZTC), is proposed in this paper based on the wavelet transform. The SPIHT algorithm is a highly efficient wavelet coefficient coding method with good image compression performance, but it is complex and needs too much memory. The algorithm presented in this paper utilizes two small symbol maps, Mark and FC, to store the status of coefficients and zero-tree sets during the coding procedure so as to reduce the memory requirement. With this strategy, the memory cost is reduced distinctly and the scanning speed of coefficients is improved. Comparison experiments on 512 × 512 images are done with other zero-tree coding algorithms, such as SPIHT and the NLS method. In the experiments, the biorthogonal 9/7 lifting wavelet transform is used for the image transform. The coding results show that the codec speed of this algorithm is improved significantly, while the compression ratio is almost the same as that of the SPIHT algorithm.
ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.
Claire, Robert W.
1984-01-01
An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
A novel algorithm for notch detection
NASA Astrophysics Data System (ADS)
Acosta, C.; Salazar, D.; Morales, D.
2013-06-01
It is common knowledge that DFM guidelines require revisions to design data. These guidelines impose the need for corrections inserted into areas within the design data flow. At times, this requires rather drastic modifications to the data, both during the layer derivation or DRC phase, and especially within the RET phase (for example, OPC). During such data transformations, several polygon geometry changes are introduced, which can substantially increase shot count and geometry complexity, and eventually complicate conversion to mask writer machine formats. In the resulting complex data, it may happen that notches are found that do not significantly contribute to the final manufacturing results, but do in fact contribute to the complexity of the surrounding geometry, and are therefore undesirable. Additionally, there are cases in which the overall figure count can be reduced with minimum impact on the quality of the corrected data, if notches are detected and corrected. Likewise, there are cases where data quality could be improved if specific valley notches are filled in, or peak notches are cut out. Such cases generally satisfy specific geometrical restrictions in order to be valid candidates for notch correction. Traditional notch detection has been done for rectilinear (Manhattan-style) data and only in axis-parallel directions. The traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. These approaches are in general adaptations, and therefore ill-fitted for generalized detection of notches with unusual shapes and rotations. This paper covers a novel algorithm developed for the CATS MRCC tool that finds both valley and peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so that it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances
NASA Technical Reports Server (NTRS)
Rogers, David
1991-01-01
G/SPLINES are a hybrid of Friedman's Multivariable Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.
Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?
NASA Astrophysics Data System (ADS)
Petković, Dušan
The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to solve the ordering of join operations for large join queries, i.e. joins with more than a dozen join operations. A property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), as a data mining technique, have been shown to be promising for solving the ordering of join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is a better solution for optimization of large join queries, i.e., such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.
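The dynamic programming approach being compared is, in essence, Selinger-style enumeration over relation subsets. The sketch below uses a deliberately simple cost model (total size of intermediate results, with pairwise join selectivities) to show why the subset enumeration explodes for large join queries; real optimizers use richer cost models.

```python
from itertools import combinations

def best_join_order(card, sel):
    """Dynamic programming over relation subsets (Selinger-style).

    card : {relation: row count};  sel[(a, b)] : join selectivity (default 1).
    Cost model (illustrative): cost of a join = size of its result;
    total cost = sum of all intermediate result sizes.
    Returns (best_cost, best_join_tree) for joining every relation."""
    rels = sorted(card)
    def rows(subset):
        r = 1.0
        for x in subset:
            r *= card[x]
        for a, b in combinations(sorted(subset), 2):
            r *= sel.get((a, b), sel.get((b, a), 1.0))
        return r
    best = {frozenset([r]): (0.0, r) for r in rels}
    for size in range(2, len(rels) + 1):
        for subset in combinations(rels, size):
            s = frozenset(subset)
            out = rows(s)
            for k in range(1, size // 2 + 1):     # all splits into two halves
                for left in combinations(subset, k):
                    l, rgt = frozenset(left), s - frozenset(left)
                    c = best[l][0] + best[rgt][0] + out
                    if s not in best or c < best[s][0]:
                        best[s] = (c, (best[l][1], best[rgt][1]))
    return best[frozenset(rels)]
```

The table `best` has an entry for every subset of relations, so memory and time grow exponentially with the number of joins, which is exactly the regime where the article finds GAs competitive.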
NASA Astrophysics Data System (ADS)
Aziz, S.; Matott, L.
2012-12-01
The uncertain parameters of a given environmental model are often inferred from an optimization procedure that seeks to minimize discrepancies between simulated output and observed data. However, optimization search procedures can potentially yield different results across multiple calibration trials. For example, global search procedures like the genetic algorithm and simulated annealing are driven by inherent randomness that can result in variable inter-trial behavior. Despite this potential for variability in search algorithm performance, practitioners are reluctant to run multiple trials of an algorithm because of the added computational burden. As a result, estimated parameters are subject to an unrecognized source of uncertainty that could potentially bias or contaminate subsequent predictive analyses. In this study, a series of numerical experiments were performed to explore the influence of search algorithm uncertainty on parameter estimates. The experiments applied multiple trials of the simulated annealing algorithm to a suite of calibration problems involving watershed rainfall-runoff, groundwater flow, and subsurface contaminant transport. Results suggest that linking the simulated annealing algorithm with an adaptive range-reduction technique can significantly improve algorithm effectiveness while simultaneously reducing inter-trial variability. Therefore these range-reduction procedures appear to be a suitable mechanism for minimizing algorithm variance and improving the consistency of parameter estimates.
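The coupling of simulated annealing with adaptive range reduction can be sketched in one dimension. The contraction schedule (shrink the interval around the incumbent every 300 evaluations, keeping 70% of the width) and the other parameters are illustrative assumptions, not the study's configuration.

```python
import math
import random

def sa_range_reduction(f, lo, hi, iters=3000, t0=1.0, rng=None):
    """Simulated annealing on [lo, hi] with adaptive range reduction:
    every 300 evaluations the search interval contracts around the best
    point found so far.  Shrinking the range trades some global
    exploration for much lower trial-to-trial variability of the
    final parameter estimate."""
    rng = rng or random.Random(0)
    best = x = rng.uniform(lo, hi)
    fbest = fx = f(x)
    t = t0
    for i in range(iters):
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.3 * (hi - lo))))
        fc = f(cand)
        # Metropolis rule: always accept downhill, sometimes uphill
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= 0.999
        if (i + 1) % 300 == 0:            # range reduction around the incumbent
            half = 0.35 * (hi - lo)       # keep 70% of the current width
            lo, hi = best - half, best + half
            x, fx = best, fbest           # restart the walk from the best point
    return best, fbest
```

Because late-stage proposals are confined near the incumbent, repeated trials with different seeds end up in much tighter agreement, which is the inter-trial variance reduction the abstract reports.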
Parallel expectation-maximization algorithms for PET image reconstruction
NASA Astrophysics Data System (ADS)
Jeng, Wei-Min
1999-10-01
Image reconstruction using Positron Emission Tomography (PET) involves estimating an unknown number of photon pairs emitted from the radiopharmaceuticals within the tissues of the patient's body. The generation of the photons can be described as a Poisson process, and the difficulty of image reconstruction lies in approximating the parameters of the tissue density distribution function. A significant amount of artifactual noise exists in images reconstructed with the convolution back-projection method. Using the Maximum Likelihood (ML) formulation, a better estimate can be made of the unknown image information. Despite the better quality of images, the Expectation Maximization (EM) iterative algorithm is not used in practice due to the tremendous processing time. This research proposes new techniques for designing parallel algorithms in order to speed up the reconstruction process. Using the EM algorithm as an example, several general parallel techniques were studied for the distributed-memory architecture and the message-passing programming paradigm. Both intra- and inter-iteration latency-hiding schemes were designed to effectively reduce the communication time. Dependencies that exist within and between iterations were rearranged to overlap communication and computation with MPI's non-blocking collective reduction operation. A performance model was established to estimate the processing time of the algorithms and was found to agree with the experimental results. A second strategy, a sparse matrix compaction technique, was developed to reduce the computational time of the computation-bound EM algorithm by making better use of the PET system geometry. The proposed techniques are generally applicable to many scientific computation problems that involve sparse matrix operations as well as iterative algorithms.
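The serial EM iteration being parallelized is the classical MLEM multiplicative update for Poisson data, sketched below with dense lists for clarity (real system matrices are sparse, which is what the compaction strategy exploits).

```python
def mlem(A, y, iters=50):
    """Maximum-likelihood EM for emission tomography (MLEM).

    A : system matrix as a list of rows, A[i][j] = P(detect in bin i | emit in voxel j)
    y : measured counts per detector bin.
    Multiplicative update, starting from a uniform positive image:
        x_j <- x_j / s_j * sum_i A[i][j] * y_i / (A x)_i ,   s_j = sum_i A[i][j]
    The update preserves non-negativity of the image estimate."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivities s_j
    x = [1.0] * n
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # A x
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(m)) / sens[j]
             for j in range(n)]
    return x
```

Each iteration is a forward projection, an element-wise ratio, and a back projection; the ratio and back-projection sums are the global reductions that the dissertation overlaps with communication via MPI's non-blocking collectives.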
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from space-borne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We
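The piecewise spatially varying predictor can be sketched in one dimension: split a scan line into segments and fit a separate linear predictor of the target band from the reference band in each segment. The segmentation into equal pieces and the least-squares fit are illustrative assumptions; only the residuals would be passed to the entropy coder.

```python
def spectral_predict(ref, tgt, pieces=4):
    """Piecewise spatially varying linear spectral predictor.

    Splits the scan line into `pieces` segments and fits tgt ~ a*ref + b
    per segment by least squares.  Returns the residuals
    (tgt - prediction), which is what an entropy coder would compress;
    decompression reverses this exactly given ref and the (a, b) pairs."""
    n = len(ref)
    resid = [0.0] * n
    step = (n + pieces - 1) // pieces
    for s in range(0, n, step):
        xs, ys = ref[s:s + step], tgt[s:s + step]
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        var = sum((x - mx) ** 2 for x in xs)
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
        b = my - a * mx
        for k in range(m):
            resid[s + k] = ys[k] - (a * xs[k] + b)
    return resid
```

Where the inter-band relation drifts across the scene, per-segment coefficients track it and shrink the residuals, which is the mechanism behind the reported compression gains.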
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper. PMID:27044001
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which significantly reduces training efficiency. Most existing methods for training hierarchical SNNs are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are obtained by solving a quadratic function in the spike response model, instead of checking postsynaptic voltage states at all time points as traditional algorithms do. In the feedback weight modification, the computational error is propagated to previous layers by presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm exploits the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in both learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
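The closed-form spike-time idea can be illustrated with a toy model. Assume (as a simplification, not the paper's exact spike response model) that the membrane potential over a short interval is a quadratic in time, V(t) = a t^2 + b t + c; the firing time is then the earliest non-negative root of V(t) = theta, found directly instead of by scanning every time point:

```python
import math

def earliest_spike_time(a, b, c, theta):
    """Earliest non-negative t with a*t^2 + b*t + c = theta.

    A toy stand-in for NSEBP's feedforward step: the threshold crossing
    is obtained in closed form from a quadratic model of the membrane
    potential, rather than by checking voltage states at all time points.
    Returns None if the threshold is never reached.
    """
    A, B, C = a, b, c - theta
    if A == 0:                      # potential is linear in t
        if B == 0:
            return None
        t = -C / B
        return t if t >= 0 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None                 # no real crossing
    r = math.sqrt(disc)
    for t in sorted(((-B - r) / (2 * A), (-B + r) / (2 * A))):
        if t >= 0:
            return t
    return None

# The potential 2t^2 - 3t + 1 first reaches theta = 3 at t = 2
print(earliest_spike_time(2.0, -3.0, 1.0, 3.0))  # → 2.0
```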
An enhanced algorithm to estimate BDS satellite's differential code biases
NASA Astrophysics Data System (ADS)
Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei
2016-02-01
This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay, which is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs: the mean day-to-day scatter is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm developed at the Institute of Geodesy and Geophysics (IGG), China, known as IGGDCB, was used to process the same dataset. The DCB difference between the results of the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the difference between the IGGDCB results and the same products. In addition, we find that the day-to-day scatter of BDS IGSO satellites is clearly lower than that of the GEO and MEO satellites, and that a significant bias exists in the daily DCB values of GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimating satellite DCBs for multiple GNSS systems.
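The weighted least-squares core of such an estimator can be sketched in a few lines. This is a generic toy (normal equations with per-observation weights standing in for the precision of the ionospheric observables), not the enhanced DCB estimator itself:

```python
def wls(A, y, w):
    """Weighted least squares: x = argmin sum_i w_i * (y_i - A_i . x)^2.

    Solves the normal equations (A^T W A) x = A^T W y by Gaussian
    elimination with partial pivoting. A is a list of observation rows,
    y the observations, w the per-observation weights.
    """
    m, n = len(A), len(A[0])
    N = [[sum(w[i] * A[i][r] * A[i][c] for i in range(m)) for c in range(n)]
         for r in range(n)]
    b = [sum(w[i] * A[i][r] * y[i] for i in range(m)) for r in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(N[r][col]))
        N[col], N[piv] = N[piv], N[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = N[r][col] / N[col][col]
            for c in range(col, n):
                N[r][c] -= f * N[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(N[r][c] * x[c] for c in range(r + 1, n))) / N[r][r]
    return x

# Recover intercept 1 and slope 2 from three exact observations of y = 1 + 2x
x = wls([[1, 0], [1, 1], [1, 2]], [1, 3, 5], [1.0, 1.0, 1.0])
```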
Some nonlinear space decomposition algorithms
Tai, Xue-Cheng; Espedal, M.
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
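For the linear case, the additive Schwarz method the algorithms reduce to can be sketched directly: split the index set into (possibly overlapping) blocks, solve each local problem on the current residual, and sum the damped corrections. A minimal sketch on a small symmetric positive definite system (the damping factor and block choice here are illustrative assumptions):

```python
def solve_dense(M, b):
    """Small dense solve by Gaussian elimination (the local subdomain solves)."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p], b[c], b[p] = M[p], M[c], b[p], b[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def additive_schwarz(A, b, blocks, iters=400, damping=0.5):
    """Additive Schwarz iteration for A x = b.

    Each sweep solves the problem restricted to every block on the
    current residual; the local corrections are summed with a damping
    factor (damping < 1 keeps overlapping corrections stable).
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = [0.0] * n
        for blk in blocks:
            sub = [[A[i][j] for j in blk] for i in blk]
            loc = solve_dense(sub, [r[i] for i in blk])
            for k, i in enumerate(blk):
                dx[i] += loc[k]
        for i in range(n):
            x[i] += damping * dx[i]
    return x
```

For a 1D Laplacian with two overlapping blocks, the iterates converge to the exact solution, matching the standard convergence theory for the linear additive method.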
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations apply when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
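The kriging solve these accelerations target can be sketched in its simplest form. This is textbook simple kriging with an assumed exponential covariance and zero mean, not the article's implementation (tapering, FMM, and the iterative solver are omitted):

```python
import math

def _solve(M, b):
    """Small dense solve by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p], b[c], b[p] = M[p], M[c], b[p], b[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def simple_krige(pts, vals, query, cov=lambda d: math.exp(-d)):
    """Simple-kriging estimate at `query` from scattered samples.

    The kriging weights solve C w = c0, where C holds covariances between
    sample points and c0 the covariances between each sample and the
    query. The estimate is the weighted sum of sample values -- the best
    linear unbiased estimator under the assumed covariance model.
    """
    C = [[cov(math.dist(p, q)) for q in pts] for p in pts]
    c0 = [cov(math.dist(p, query)) for p in pts]
    w = _solve(C, c0)
    return sum(wi * zi for wi, zi in zip(w, vals))
```

Because the weights at a sample location reduce to a unit vector, kriging interpolates the data exactly there, which makes a convenient sanity check.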
Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)
2000-01-01
In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
Speech Enhancement based on Compressive Sensing Algorithm
NASA Astrophysics Data System (ADS)
Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel
2013-12-01
Various methods of speech enhancement have been proposed over the years; accurate designs focus mainly on quality and intelligibility. This paper proposes a novel speech enhancement method based on compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, which is often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The overall algorithm is evaluated on speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well on a wide range of speech tests, giving good noise suppression compared with conventional approaches without obvious degradation of speech quality.
GPU Accelerated Event Detection Algorithm
2011-05-25
Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. State-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms must scale with the size of the data; (ii) algorithms must not only handle the multi-dimensional nature of the data, but also model spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms must operate in an online fashion on streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
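Step (a) above, reducing a multi-dimensional sequence to a univariate change series, can be sketched with a toy score: the spectral norm (largest singular value, computed by power iteration) of the difference between successive windows. This is an illustrative simplification, not GAEDA's actual SVD construction:

```python
import math

def spectral_norm(M, iters=50):
    """Largest singular value of matrix M via power iteration on M^T M."""
    if not M:
        return 0.0
    d = len(M[0])
    v = [1.0 / math.sqrt(d)] * d
    sigma = 0.0
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(d)) for row in M]       # u = M v
        w = [sum(M[i][j] * u[i] for i in range(len(M))) for j in range(d)]
        nrm = math.sqrt(sum(x * x for x in w))
        if nrm == 0.0:
            return 0.0                         # zero matrix: no change at all
        v = [x / nrm for x in w]
        sigma = math.sqrt(nrm)                 # ||M^T M v|| -> sigma^2 at convergence
    return sigma

def change_series(seq, win):
    """Univariate change score between successive sliding windows of a
    multi-dimensional sequence (rows = time steps, columns = dimensions).
    Spikes in the returned series flag windows whose content shifts; any
    univariate anomaly detector can then be applied to it."""
    scores, prev = [], None
    for s in range(len(seq) - win + 1):
        window = seq[s:s + win]
        if prev is not None:
            diff = [[window[i][j] - prev[i][j] for j in range(len(window[0]))]
                    for i in range(win)]
            scores.append(spectral_norm(diff))
        prev = window
    return scores
```

On a sequence whose direction flips halfway, the score is zero inside each regime and spikes at the boundary, which is exactly the signal a univariate detector needs.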
Algorithms for automated DNA assembly
Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher
2010-01-01
Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162
Reduced discretization error in HZETRN
NASA Astrophysics Data System (ADS)
Slaba, Tony C.; Blattnig, Steve R.; Tweed, John
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.
Threshold extended ID3 algorithm
NASA Astrophysics Data System (ADS)
Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.
2012-04-01
Information exchange over insecure networks needs to provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
Some Practical Payments Clearance Algorithms
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of their settlement, known as payment clearance, can produce significant savings in the costs associated with those transfers and their handling. The paper reviews some common practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on, and significantly improved from, distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method, designing and realizing a fast Euclidean distance transform algorithm, and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
A split finite element algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1979-01-01
An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.
A hybrid heuristic algorithm to improve known-plaintext attack on Fourier plane encryption.
Liu, Wensi; Yang, Guanglin; Xie, Haiyan
2009-08-01
A hybrid heuristic attack scheme that combines the hill climbing algorithm and the simulated annealing algorithm is proposed to speed up the search procedure and to obtain a more accurate solution to the original key in the Fourier plane encryption algorithm, and a unit cycle is adopted to analyze the value space of the random phase. Experimental results show that our scheme obtains a more accurate solution to the key, achieving better decryption results both for the selected encrypted image and for another unseen ciphertext image. The search time is significantly reduced, with no exceptional cases arising in the search procedure. For an image of 64x64 pixels, our algorithm retrieves an approximate key with a normalized root mean squared error of 0.1 in a comparatively short computing time of about 1 minute; our scheme therefore makes the known-plaintext attack on Fourier plane image encryption more practical, stable, and effective.
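The hybrid strategy itself is generic and can be sketched on any objective: a simulated-annealing phase for coarse exploration (accepting worse moves with probability exp(-delta/T)), followed by a hill-climbing pass that polishes the best point found. This is a minimal illustration of the combination, not the paper's key-search over the random phase:

```python
import math
import random

def hybrid_minimize(f, x0, t0=5.0, cooling=0.95, sa_steps=200, seed=1):
    """Hybrid heuristic search: simulated annealing, then hill climbing.

    Phase 1 (SA) escapes local minima by sometimes accepting worse moves;
    phase 2 (hill climbing) greedily refines the best SA point with a
    shrinking step. Returns (x, f(x)).
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(sa_steps):                    # phase 1: simulated annealing
        cand = x + rng.uniform(-1.0, 1.0)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    x, fx = best, fbest                          # phase 2: hill climbing
    step = 1.0
    while step > 1e-6:
        moved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, moved = cand, fc, True
        if not moved:
            step /= 2
    return x, fx
```

The hill-climbing pass guarantees local refinement regardless of where annealing ends, which is the point of running the two in sequence.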
Efficient algorithm for training interpolation RBF networks with equally spaced nodes.
Huan, Hoang Xuan; Hien, Dang Thi Thu; Tue, Huynh Huu
2011-06-01
This brief paper proposes a new algorithm to train interpolation Gaussian radial basis function (RBF) networks in order to solve the problem of interpolating multivariate functions with equally spaced nodes. Based on an efficient two-phase algorithm recently proposed by the authors, the Euclidean norm associated with the Gaussian RBF is replaced by a conveniently chosen Mahalanobis norm, which allows the width parameters of the Gaussian radial basis functions to be computed directly. The weighting parameters are then determined by a simple iterative method, so the original two-phase algorithm becomes a one-phase one. Simulation results show that the generalization of networks trained by this new algorithm is noticeably improved and the running time significantly reduced, especially when the number of nodes is large.
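The structure of the problem can be sketched in 1D. Here the width is set directly from the node spacing (width_factor * h is an illustrative assumption, not the paper's Mahalanobis construction), which makes the interpolation matrix diagonally dominant, so a simple iterative sweep (Gauss-Seidel here) finds the weights:

```python
import math

def rbf_interpolant(xs, ys, width_factor=1.0, sweeps=200):
    """Gaussian RBF interpolation on equally spaced 1D nodes.

    With width s = width_factor * h (h = node spacing), off-diagonal
    entries of the interpolation matrix decay like exp(-(k*h/s)^2), so
    the matrix is strictly diagonally dominant and Gauss-Seidel sweeps
    converge to the interpolation weights. Returns the interpolant f(x).
    """
    n = len(xs)
    h = xs[1] - xs[0]
    s = width_factor * h
    phi = lambda r: math.exp(-(r / s) ** 2)
    G = [[phi(xs[i] - xs[j]) for j in range(n)] for i in range(n)]
    w = [0.0] * n
    for _ in range(sweeps):                      # Gauss-Seidel sweeps
        for i in range(n):
            w[i] = (ys[i] - sum(G[i][j] * w[j]
                                for j in range(n) if j != i)) / G[i][i]
    def f(x):
        return sum(w[j] * phi(x - xs[j]) for j in range(n))
    return f
```

At the nodes the interpolant reproduces the data exactly (up to the iteration tolerance), which is the defining property of the interpolation problem the paper addresses.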
Improved Algorithm For Finite-Field Normal-Basis Multipliers
NASA Technical Reports Server (NTRS)
Wang, C. C.
1989-01-01
Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
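The flavor of these mapping problems can be sketched concretely: assign a pipeline of m module weights to n processors in contiguous blocks so that the bottleneck (largest per-processor load) is minimized. The sketch below uses binary search over the bottleneck with a greedy feasibility check on integer weights; it illustrates the problem, not the paper's exact O(nm log m) algorithm:

```python
def min_bottleneck_mapping(weights, n):
    """Contiguous mapping of pipeline modules onto n processors that
    minimizes the bottleneck (maximum per-processor load).

    Binary-searches the bottleneck value; feasible(cap) checks greedily
    whether the modules fit into at most n contiguous groups of load
    <= cap. Assumes integer weights. Returns (bottleneck, partition).
    """
    def feasible(cap):
        groups, load = 1, 0
        for w in weights:
            if w > cap:
                return False
            if load + w > cap:
                groups += 1
                load = w
            else:
                load += w
        return groups <= n

    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    parts, cur, load = [], [], 0        # rebuild one optimal partition
    for w in weights:
        if load + w > lo:
            parts.append(cur)
            cur, load = [w], w
        else:
            cur.append(w)
            load += w
    parts.append(cur)
    return lo, parts
```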
REDUCING INDOOR HUMIDITY SIGNIFICANTLY REDUCES DUST MITES AND ALLERGEN IN HOMES. (R825250)
An integral conservative gridding-algorithm using Hermitian curve interpolation
NASA Astrophysics Data System (ADS)
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J.; Fix, Michael K.
2008-11-01
significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Petri nets SM-cover-based on heuristic coloring algorithm
NASA Astrophysics Data System (ADS)
Tkacz, Jacek; Doligalski, Michał
2015-09-01
In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the state machine (SM) subnets. The algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible state machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be optimal, but it is sufficient for use in rapid prototyping of logic controllers. The SM-cover found will also be used in the development of algorithms for decomposition, and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.
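The coloring step at the core of such an approach can be sketched with a standard greedy heuristic (highest-degree-first). Here vertices stand for Petri-net places, edges join places that can be concurrently marked, and each color class induces one state-machine subnet; being a heuristic, the cover found may not be minimal, which matches the paper's "sufficient for rapid prototyping" stance:

```python
def greedy_coloring(n, edges):
    """Greedy heuristic coloring, highest-degree-first.

    Vertices are colored in order of decreasing degree; each vertex takes
    the smallest color unused by its already-colored neighbors. Returns a
    list mapping vertex -> color; color classes are the candidate SM
    subnets.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    order = sorted(range(n), key=lambda v: -len(adj[v]))
    color = [-1] * n
    for v in order:
        used = {color[u] for u in adj[v] if color[u] != -1}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```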
ERIC Educational Resources Information Center
Docking, R. A.; Docking, E.
1984-01-01
Reports on a case study of inservice training conducted to enhance the teacher/student relationship and reduce teacher anxiety. Found significant improvements in attitudes, classroom management activities, and lower anxiety among teachers. (MD)
FBP Algorithms for Attenuated Fan-Beam Projections
You, Jiangsheng; Zeng, Gengsheng L.; Liang, Zhengrong
2005-01-01
A filtered backprojection (FBP) reconstruction algorithm for attenuated fan-beam projections has been derived based on Novikov’s inversion formula. The derivation uses a common transformation between parallel-beam and fan-beam coordinates. The filtering is shift-invariant. Numerical evaluation of the FBP algorithm is presented as well. As a special application, we also present a shift-invariant FBP algorithm for fan-beam SPECT reconstruction with uniform attenuation compensation. Several other fan-beam reconstruction algorithms are also discussed. In the attenuation-free case, our algorithm reduces to the conventional fan-beam FBP reconstruction algorithm. PMID:16570111
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
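Inside PDD, each processor solves its local tridiagonal block directly; that local building block is the classical Thomas algorithm, sketched serially below (the partitioning and the small reduced interface system that make PDD parallel are not shown):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Forward elimination followed by
    back substitution; O(n) time. Returns the solution vector x.
    """
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                                # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

PDD's contribution is to let p processors each run this elimination on a block and then couple the blocks through a small reduced system, which is what makes the solver ideally scalable.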
ERIC Educational Resources Information Center
Timpane, Michael; And Others
A group of three conference papers, all addressing the subject of effective programs to decrease the number of school dropouts, is presented in this document. The first paper, "Systemic Approaches to Reducing Dropouts" (Michael Timpane), asserts that dropping out is a symptom of failures in the social, economic, and educational systems. Dropping…
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
Protein disorder reduced in Saccharomyces cerevisiae to survive heat shock.
Vicedo, Esmeralda; Gasik, Zofia; Dong, Yu-An; Goldberg, Tatyana; Rost, Burkhard
2015-01-01
Recent experiments established that a culture of Saccharomyces cerevisiae (baker's yeast) survives sudden high temperatures by specifically duplicating the entire chromosome III and two chromosomal fragments (from IV and XII). Heat shock proteins (HSPs) are not significantly over-abundant in the duplication. In contrast, we suggest a simple algorithm to "postdict" the experimental results: find a small enough chromosome with minimal protein disorder and duplicate this region. This algorithm largely explains all observed duplications. In particular, all regions duplicated in the experiment reduced the overall content of protein disorder. The differential analysis of the functional makeup of the duplication remained inconclusive. Gene Ontology (GO) enrichment suggested over-representation in processes related to reproduction and nutrient uptake. Analyzing the protein-protein interaction network (PPI) revealed that few network-central proteins were duplicated. The predictive hypothesis hinges upon the concept of reducing proteins with long regions of disorder in order to become less sensitive to heat shock attack. PMID:26673203
NASA Astrophysics Data System (ADS)
Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu
2016-09-01
Genetic algorithms (GA) have a significant effect in band selection for partial least squares (PLS) correction models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy, and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of genetic-algorithm band optimization, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values at the wavelengths corresponding to these 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, giving r = 0.7825. These results were better than those of the PLS models built on the full spectrum and on the experience-based bands. The results suggest that a genetic algorithm can be used for data optimization and screening before establishing the corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
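The GA band-selection module can be sketched generically: individuals are band masks (one bit per wavelength) evolved by tournament selection, one-point crossover, and bit-flip mutation. The fitness function is supplied by the caller, e.g. the cross-validated r of a PLS model restricted to the selected bands; the PLS part is not shown here, and the toy fitness below is purely illustrative:

```python
import random

def ga_select(n_bands, fitness, pop_size=30, gens=40, p_mut=0.05, seed=7):
    """Genetic-algorithm band selection over bitmasks of length n_bands.

    Tournament selection, one-point crossover, bit-flip mutation; the
    best mask seen across all generations is tracked and returned.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bands)] for _ in range(pop_size)]
    best = max(pop, key=fitness)

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bands)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            for i in range(n_bands):
                if rng.random() < p_mut:         # bit-flip mutation
                    child[i] ^= 1
            new.append(child)
        pop = new
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best

# toy check: fitness rewards masks close to selecting exactly bands {0, 3}
target = {0, 3}
fit = lambda m: -sum((i in target) != bool(b) for i, b in enumerate(m))
best = ga_select(8, fit)
```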
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2011-11-01
In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in their time bounds because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge limits read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-way merge and show that it can significantly reduce the number of parallel I/Os needed to sort on the PDM.
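A plain R-way merge of sorted runs — the baseline whose read behavior the dirty-sequence idea improves — can be sketched with a heap. This is illustrative only; the PDM-specific block scheduling across disks is not shown:

```python
import heapq

def r_way_merge(runs):
    """Merge R sorted runs with a min-heap.

    On the Parallel Disks Model, each pop below corresponds to consuming an
    element (in practice, a block) from one run; the irregular consumption
    across runs is exactly what breaks read parallelism in practice.
    """
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        value, run_id, idx = heapq.heappop(heap)
        out.append(value)
        if idx + 1 < len(runs[run_id]):
            heapq.heappush(heap, (runs[run_id][idx + 1], run_id, idx + 1))
    return out

runs = [[1, 4, 9], [2, 3, 10], [5, 6, 7, 8]]
print(r_way_merge(runs))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```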
NASA Astrophysics Data System (ADS)
2014-02-01
When promoting the value of their research or procuring funding, researchers often need to explain the significance of their work to the community -- something that can be just as tricky as the research itself.
Visualizing output for a data learning algorithm
NASA Astrophysics Data System (ADS)
Carson, Daniel; Graham, James; Ternovskiy, Igor
2016-05-01
This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical, self-structuring learning algorithm based on the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and because a significant amount of existing data and related research material is available to work with. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm; flexibility is the end goal, but we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for the algorithm, the features we included, and the initial results we obtained from running the algorithm on a few of the traffic-based scenarios we designed.
An efficient QoS-aware routing algorithm for LEO polar constellations
NASA Astrophysics Data System (ADS)
Tian, Xin; Pham, Khanh; Blasch, Erik; Tian, Zhi; Shen, Dan; Chen, Genshe
2013-05-01
In this work, a Quality of Service (QoS)-aware routing (QAR) algorithm is developed for Low-Earth Orbit (LEO) polar constellations. LEO polar orbits are the only type of satellite constellation in which inter-plane inter-satellite links (ISLs) have been implemented in the real world. The QAR algorithm exploits features of the topology of the LEO satellite constellation, which makes it more efficient than general shortest-path routing algorithms such as Dijkstra's or the extended Bellman-Ford algorithm. QoS requirements on traffic density, priority, and communication delay can be easily incorporated into the QAR algorithm through the satellite distances. The QAR algorithm also supports efficient load balancing in the satellite network by utilizing the multiple paths from the source satellite to the destination satellite, effectively lowering the rate of network congestion. Finally, the QAR algorithm supports a novel robust routing scheme for LEO polar constellations that significantly reduces the impact of ISL congestion on QoS in terms of communication delay and jitter.
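As a point of reference for the comparison above, a generic Dijkstra shortest-path router over a toy plane-by-slot constellation graph might look like the sketch below. The link delays are hypothetical; the QAR algorithm itself avoids this generic search by exploiting the constellation topology:

```python
import heapq

def dijkstra(adj, src, dst):
    # Baseline shortest-path routing that topology-aware schemes improve on.
    # adj: {node: [(neighbor, delay), ...]}
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy 3-plane x 4-slot polar constellation: intra-plane ISLs wrap around each
# orbit, inter-plane ISLs connect the same slot in adjacent planes
# (hypothetical delays of 1.0 and 1.5 units).
P, S = 3, 4
adj = {}
for p in range(P):
    for s in range(S):
        n = (p, s)
        adj.setdefault(n, [])
        adj[n].append(((p, (s + 1) % S), 1.0))     # intra-plane ISL
        adj[n].append(((p, (s - 1) % S), 1.0))
        if p + 1 < P:
            adj[n].append(((p + 1, s), 1.5))       # inter-plane ISL
        if p - 1 >= 0:
            adj[n].append(((p - 1, s), 1.5))

path, delay = dijkstra(adj, (0, 0), (2, 2))
print(path, delay)
```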
An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm
Lu, Guangquan; Xiong, Ying; Wang, Yunpeng
2016-01-01
The scheduling of urban road network recovery after rainstorms, snow, and other bad weather, traffic incidents, and other daily events is essential. However, limited studies have investigated this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Following the basic concept of the greedy algorithm, critical links are given priority in repair. In this study, the critical link for the current network is defined as the link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst network. We re-evaluate the importance of the damaged links after each repair is completed; that is, the critical-link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can quickly obtain an optimal schedule even when the road network is large, because the greedy strategy reduces the computational complexity. We prove in theory that the problem can be solved optimally by the greedy algorithm, and we demonstrate the algorithm on the Sioux Falls network. The problem discussed in this paper is highly significant for urban road network restoration. PMID:27768732
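The re-evaluate-then-repair loop can be sketched as follows. The travel-time function here is a toy stand-in for a real traffic assignment model, and the link names, penalties, and interaction term are hypothetical:

```python
def system_travel_time(damaged):
    """Hypothetical network model: system-wide travel time as a function of
    which links are still damaged (a real study would run traffic assignment)."""
    base = 100.0
    penalty = {"a": 40.0, "b": 25.0, "c": 10.0}
    t = base + sum(penalty[link] for link in damaged)
    if "a" in damaged and "b" in damaged:
        t += 15.0          # interaction: two damaged links compound congestion
    return t

def greedy_schedule(links):
    damaged = set(links)
    order = []
    while damaged:
        # Critical link: the one whose repair yields the lowest travel time
        # for the CURRENT network state, re-evaluated after every repair.
        best = min(damaged, key=lambda l: system_travel_time(damaged - {l}))
        order.append(best)
        damaged.remove(best)
    return order

print(greedy_schedule(["a", "b", "c"]))   # ['a', 'b', 'c']
```

Because the ranking is recomputed each round, link interactions (like the a/b term above) can reorder the schedule relative to a one-shot ranking.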
Attitude Estimation Signal Processing: A First Report on Possible Algorithms and Their Utility
NASA Technical Reports Server (NTRS)
Riasati, Vahid R.
1998-01-01
In this brief effort, time has been of the essence. The data had to be acquired from APL/Lincoln Labs, stored, and sorted to obtain the pertinent streams. This has been a significant part of the effort, and hardware and software problems have been addressed with appropriate solutions to accomplish this part of the task. Past this, some basic and important algorithms are utilized to improve the performance of attitude estimation systems. These algorithms are an essential part of the signal processing for the attitude estimation problem, as they are utilized to reduce the additive/multiplicative noise whose structure and probability density function (pdf) may or may not change in time. These algorithms are not currently utilized in the processing of the data; at least, we are not aware of their use in this attitude estimation problem. Some of these algorithms, like variable thresholding, are new conjectures, though one would expect that someone somewhere has utilized this kind of scheme before. The variable thresholding idea is a straightforward scheme to use in the case of a slowly varying pdf, or slowly varying statistical moments, of the unwanted random process. The algorithms here are kept simple yet effective for processing the data and removing the unwanted noise. For the most part, these algorithms can be arranged so that their consecutive, orderly execution complements the preceding algorithm and improves the overall performance of the signal processing chain.
Speckle reducing anisotropic diffusion.
Yu, Yongjian; Acton, Scott T
2002-01-01
This paper provides the derivation of speckle reducing anisotropic diffusion (SRAD), a diffusion method tailored to ultrasonic and radar imaging applications. SRAD is the edge-sensitive diffusion for speckled images, in the same way that conventional anisotropic diffusion is the edge-sensitive diffusion for images corrupted with additive noise. We first show that the Lee and Frost filters can be cast as partial differential equations, and then we derive SRAD by allowing edge-sensitive anisotropic diffusion within this context. Just as the Lee and Frost filters utilize the coefficient of variation in adaptive filtering, SRAD exploits the instantaneous coefficient of variation, which is shown to be a function of the local gradient magnitude and Laplacian operators. We validate the new algorithm using both synthetic and real linear scan ultrasonic imagery of the carotid artery. We also demonstrate the algorithm performance with real SAR data. The performance measures obtained by means of computer simulation of carotid artery images are compared with three existing speckle reduction schemes. In the presence of speckle noise, speckle reducing anisotropic diffusion excels over the traditional speckle removal filters and over the conventional anisotropic diffusion method in terms of mean preservation, variance reduction, and edge localization.
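A minimal sketch of one SRAD iteration follows, assuming the Yu-Acton form of the instantaneous coefficient of variation and a fixed speckle scale q0. Real implementations update q0 over time and treat image borders more carefully than the index clamping used here:

```python
import random

def srad_step(img, q0, dt=0.1):
    """One simplified SRAD iteration (sketch of the Yu-Acton scheme)."""
    h, w = len(img), len(img[0])
    def at(i, j):
        return img[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]
    c = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            I = at(i, j)
            gx, gy = at(i, j + 1) - I, at(i + 1, j) - I
            lap = at(i + 1, j) + at(i - 1, j) + at(i, j + 1) + at(i, j - 1) - 4 * I
            g2, l = (gx * gx + gy * gy) / (I * I), lap / I
            # instantaneous coefficient of variation (squared), a function of
            # the local gradient magnitude and Laplacian, as in the paper
            q2 = max((0.5 * g2 - (l * l) / 16.0) / (1.0 + l / 4.0) ** 2, 0.0)
            cij = 1.0 / (1.0 + (q2 - q0 * q0) / (q0 * q0 * (1.0 + q0 * q0)))
            c[i][j] = min(max(cij, 0.0), 1.0)   # smooth speckle, preserve edges
    def cat(i, j):
        return c[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            I = at(i, j)
            div = (cat(i, j) * (at(i, j + 1) - I) + cat(i, j - 1) * (at(i, j - 1) - I)
                   + cat(i, j) * (at(i + 1, j) - I) + cat(i - 1, j) * (at(i - 1, j) - I))
            out[i][j] = I + (dt / 4.0) * div    # divergence of c * gradient
    return out

def var(im):
    flat = [v for row in im for v in row]
    m = sum(flat) / len(flat)
    return sum((v - m) ** 2 for v in flat) / len(flat)

# Toy speckled image: constant 100 with ~6% multiplicative-style noise.
random.seed(1)
orig = [[100.0 * (1 + 0.2 * (random.random() - 0.5)) for _ in range(16)]
        for _ in range(16)]
img = orig
for _ in range(20):
    img = srad_step(img, q0=0.06)
print(round(var(orig), 2), round(var(img), 2))   # variance drops after diffusion
```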
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
PMID:18632380
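The majority-voting ensemble can be illustrated with two independently trained Q-learners on a toy chain MDP. The two learners here are stand-ins for the five RL algorithms combined in the paper, and the MDP is hypothetical:

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)        # toy chain MDP; action 1 moves right
def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward for reaching the far end
    return s2, r

def train_q(alpha, episodes=500, gamma=0.9, eps=0.3):
    # Plain epsilon-greedy Q-learning (one member of the ensemble).
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[s][act])
            s2, r = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Two independently trained learners (stand-ins for Q-learning, Sarsa, AC, ...)
ensemble = [train_q(0.1), train_q(0.5)]

def majority_vote(s):
    # MV ensemble: each learner casts a vote for its greedy action.
    votes = [max(ACTIONS, key=lambda a: Q[s][a]) for Q in ensemble]
    return max(set(votes), key=votes.count)

print([majority_vote(s) for s in range(N_STATES)])
```

The paper's point is that the vote combines policies, not value functions: each member keeps its own value estimates and only the chosen actions are merged.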
Robustness of Tree Extraction Algorithms from LIDAR
NASA Astrophysics Data System (ADS)
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and cost of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms take as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed-based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown-class complexities: one homogeneous (a mature loblolly pine plantation) and one heterogeneous (an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is thus a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
Practical Algorithm For Computing The 2-D Arithmetic Fourier Transform
NASA Astrophysics Data System (ADS)
Reed, Irving S.; Choi, Y. Y.; Yu, Xiaoli
1989-05-01
Recently, Tufts and Sadasiv [10] exposed a method for computing the coefficients of a Fourier series of a periodic function using the Mobius inversion of series. They called this method of analysis the Arithmetic Fourier Transform (AFT). The advantage of the AFT over the FFT is that this method of Fourier analysis needs only addition operations, except for multiplications by scale factors at one stage of the computation. The disadvantage of the AFT as originally expressed is that it could be used effectively only to compute a finite number of Fourier coefficients of a real even function. To remedy this, the AFT developed in [10] was extended in [11] to compute the Fourier coefficients of both the even and odd components of a periodic function. In this paper, the improved AFT [11] is extended to a two-dimensional (2-D) Arithmetic Fourier Transform for calculating the Fourier transform of two-dimensional discrete signals. This new algorithm is based on both the number-theoretic method of Mobius inversion of double series and the complex-conjugate property of Fourier coefficients. The advantage of this algorithm over the conventional 2-D FFT is that the corner-turning operation needed in a conventional 2-D Discrete Fourier Transform (DFT) can be avoided. Therefore, this new 2-D algorithm is readily suitable for VLSI implementation as a parallel architecture. Comparing the operations of the 2-D AFT of an M x M data array with the conventional 2-D FFT, the number of multiplications is significantly reduced, from (2 log2 M)M^2 to (9/4)M^2. Hence, this new algorithm is faster than the FFT algorithm. Finally, two simulation results of this new 2-D AFT algorithm, for 2-D artificial and real images, are given in this paper.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
Significant lexical relationships
Pedersen, T.; Kayaalp, M.; Bruce, R.
1996-12-31
Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
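For a 2x2 contingency table of word co-occurrence counts, the exact conditional test the authors describe is Fisher's exact test. A pure-stdlib sketch using the hypergeometric distribution (the counts below are toy values, not the paper's data):

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 contingency table
    [[a, b], [c, d]] -- e.g. counts of a word pair occurring together,
    each word occurring alone, and neither occurring, in a corpus.
    Returns the two-sided p-value (sum of all tables at most as probable
    as the observed one, conditioning on the margins)."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)
    def p(x):                    # hypergeometric P(X = x) given fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / denom
    p_obs = p(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# Toy bigram counts: "strong tea" together 8 times, "strong" alone 2,
# "tea" alone 1, 20 windows with neither word.
print(round(fisher_exact(8, 2, 1, 20), 6))
```

Unlike asymptotic chi-square or likelihood-ratio tests, nothing here assumes large expected counts, which is the point for the rare events that dominate NLP data.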
Algorithms for Automatic Alignment of Arrays
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J.
1996-01-01
Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution, which allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself or can be used to improve performance. We describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication in replication.
A convergent hybrid decomposition algorithm model for SVM training.
Lucidi, Stefano; Palagi, Laura; Risi, Arnaldo; Sciandrone, Marco
2009-06-01
Training a support vector machine (SVM) requires solving a linearly constrained convex quadratic problem. In real applications the number of training data may be very large and the Hessian matrix cannot be stored. To take this issue into account, a common strategy is to use decomposition algorithms, which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, but this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model that combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information derived from a caching technique. We report the results of computational experiments performed with simple implementations of this algorithm. The numerical results point out the potential of the approach.
Iterative minimization algorithm for efficient calculations of transition states
NASA Astrophysics Data System (ADS)
Gao, Weiguo; Leng, Jing; Zhou, Xiang
2016-03-01
This paper presents an efficient algorithmic implementation of the iterative minimization formulation (IMF) for fast local searches of transition states on a potential energy surface. The IMF is a second-order iterative scheme providing a general and rigorous description of the eigenvector-following (min-mode following) methodology. We offer a unified numerical interpretation, via the IMF, of existing eigenvector-following methods such as the gentlest ascent dynamics, the dimer method, and many other variants. We then propose our new algorithm based on the IMF. The main feature of our algorithm is that the translation step is replaced by solving an optimization subproblem associated with an auxiliary objective function constructed from the min-mode information. We show that by using an efficient scheme for the inexact solver and enforcing an adaptive stopping criterion for this subproblem, the overall computational cost is effectively reduced and a super-linear rate between accuracy and computational cost can be achieved. A series of numerical tests demonstrates the significant improvement in computational efficiency for the new algorithm.
A consolidation algorithm for genomes fractionated after higher order polyploidization
2012-01-01
Background It has recently been shown that fractionation, the random loss of excess gene copies after a whole genome duplication event, is a major cause of gene order disruption. When estimating evolutionary distances between genomes based on chromosomal rearrangement, fractionation inevitably leads to significant overestimation of classic rearrangement distances. This bias can be largely avoided when genomes are preprocessed by "consolidation", a procedure that identifies and accounts for regions of fractionation. Results In this paper, we present a new consolidation algorithm that extends and improves previous work in several directions. We extend the notion of the fractionation region to use information provided by regions where this process is still ongoing. The new algorithm can optionally work with this new definition of fractionation region and is able to process not only tetraploids but also genomes that have undergone hexaploidization and polyploidization events of higher order. Finally, this algorithm reduces the asymptotic time complexity of consolidation from quadratic to linear dependence on the genome size. The new algorithm is applied both to plant genomes and to simulated data to study the effect of fractionation in ancient hexaploids. PMID:23282012
NASA Technical Reports Server (NTRS)
Black, D. C.
1986-01-01
The significance of brown dwarfs for resolving some major problems in astronomy is discussed. The importance of brown dwarfs for models of star formation by fragmentation of molecular clouds and for obtaining independent measurements of the ages of stars in binary systems is addressed. The relationship of brown dwarfs to planets is considered.
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Alignment algorithms for planar optical waveguides
NASA Astrophysics Data System (ADS)
Zheng, Yu; Duan, Ji-an
2012-10-01
Planar optical waveguides are key elements in modern high-speed optical networks. An important problem facing optical fiber communication systems is optical-axis alignment and coupling between waveguide chips and transmission fibers. The advantages and disadvantages of the various algorithms used for optical-axis alignment, namely hill climbing, pattern search, and the genetic algorithm, are analyzed. A new optical-axis alignment method for planar optical waveguides is presented that is a composite of a genetic algorithm and a pattern search algorithm. Experiments have proved the proposed alignment method's feasibility; compared with hill climbing, the search process reduces the number of movements by 88% and the search time by 83%. Moreover, the search success rate in the experiment reached 100%.
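The hybrid search can be sketched as a GA coarse stage followed by pattern-search (compass-search) refinement. The coupling function below is a hypothetical Gaussian model of the measured power versus misalignment, not real hardware feedback, and the peak location and search ranges are made up:

```python
import random, math

random.seed(2)
PEAK = (12.3, -4.7)          # hypothetical optimal fiber offset (um)

def coupling(x, y):
    # Stand-in for measured coupled power: Gaussian in the misalignment.
    return math.exp(-((x - PEAK[0]) ** 2 + (y - PEAK[1]) ** 2) / 50.0)

def ga_coarse(pop_size=30, generations=40, span=50.0):
    # Genetic algorithm: global, coarse search over the alignment range.
    pop = [(random.uniform(-span, span), random.uniform(-span, span))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: coupling(*p), reverse=True)
        elite = pop[:10]
        pop = elite + [(x + random.gauss(0, 2), y + random.gauss(0, 2))
                       for x, y in random.choices(elite, k=pop_size - 10)]
    return max(pop, key=lambda p: coupling(*p))

def pattern_search(x, y, step=1.0, tol=1e-3):
    # Pattern search: local refinement, shrinking the pattern on failure.
    while step > tol:
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if coupling(x + dx, y + dy) > coupling(x, y):
                x, y, moved = x + dx, y + dy, True
                break
        if not moved:
            step /= 2.0
    return x, y

x, y = pattern_search(*ga_coarse())
print(round(x, 2), round(y, 2))
```

The division of labor mirrors the paper's composite: the GA avoids getting trapped far from the coupling peak, and the pattern search supplies the fast, fine-grained convergence that pure GA lacks.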
Image compression using a novel edge-based coding algorithm
NASA Astrophysics Data System (ADS)
Keissarian, Farhad; Daemi, Mohammad F.
2001-08-01
In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is a predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images at bit rates competitive with other block-based coding techniques have been obtained.
A hierarchical algorithm for molecular similarity (H-FORMS).
Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel
2015-07-15
A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local, rotation-invariant structural descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. Experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of required computational time and accuracy.
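The atom-assignment step can be illustrated by brute force on a toy molecule. The paper prunes the problem with local descriptors and then solves the assignment with the Hungarian algorithm; exhaustive search over permutations, as below, only works for a handful of atoms:

```python
from itertools import permutations
from math import dist

def best_matching(mol_a, mol_b):
    """Find the atom correspondence minimizing total inter-atom distance.

    Exhaustive stand-in for the Hungarian assignment step: mol_a and mol_b
    are lists of 3-D coordinates of two already-aligned molecules."""
    n = len(mol_a)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(dist(mol_a[i], mol_b[perm[i]]) for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

# Toy molecule and a relabeled copy of itself (already aligned).
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 1.2, 0.0)]
b = [a[2], a[0], a[1]]          # same atoms, shuffled order
perm, cost = best_matching(a, b)
print(perm, cost)               # perfect match: cost 0.0
```

In the full method, the local rotation-invariant descriptors restrict which atom pairs are even considered, which is what shrinks the assignment problem the abstract refers to.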
A fast approximate nearest neighbor search algorithm in the Hamming space.
Esmaeili, Mani Malek; Ward, Rabab Kreidieh; Fatourechi, Mehrdad
2012-12-01
A fast approximate nearest neighbor search algorithm for the (binary) Hamming space is proposed. The proposed Error Weighted Hashing (EWH) algorithm is up to 20 times faster than the popular locality sensitive hashing (LSH) algorithm and works well even for large nearest neighbor distances where LSH fails. EWH significantly reduces the number of candidate nearest neighbors by weighing them based on the difference between their hash vectors. EWH can be used for multimedia retrieval and copy detection systems that are based on binary fingerprinting. On a fingerprint database with more than 1,000 videos, for a specific detection accuracy, we demonstrate that EWH is more than 10 times faster than LSH. For the same retrieval time, we show that EWH has a significantly better detection accuracy with a 15 times lower error rate.
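The core idea — weighting candidates by how many hash bands differ from the query's and examining the low-weight candidates first — can be sketched as follows. The real EWH weighting is more refined than the plain mismatch count used here, and the fingerprints are random toy data:

```python
import random

random.seed(3)

def hash_bands(fp, band_bits=8):
    # Split a 64-bit fingerprint into eight 8-bit hash bands.
    mask = (1 << band_bits) - 1
    return [(fp >> (i * band_bits)) & mask for i in range(64 // band_bits)]

def rank_candidates(query, database):
    """EWH-flavored ranking: candidates whose hash bands disagree with the
    query's in fewer places are checked first."""
    q_bands = hash_bands(query)
    weights = []
    for idx, fp in enumerate(database):
        mismatches = sum(qb != fb for qb, fb in zip(q_bands, hash_bands(fp)))
        weights.append((mismatches, idx))
    return [idx for _, idx in sorted(weights)]

def hamming(a, b):
    return bin(a ^ b).count("1")

db = [random.getrandbits(64) for _ in range(1000)]
query = db[42] ^ 0b1011          # near-duplicate of entry 42 (3 bits flipped)
order = rank_candidates(query, db)
print(order[0], hamming(query, db[order[0]]))
```

A near-duplicate with a few flipped bits can only corrupt a few bands, while an unrelated fingerprint matches a band only by chance, which is why the true neighbor floats to the front even at Hamming distances where single-band LSH lookups would miss it.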
NASA Astrophysics Data System (ADS)
Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.
2014-12-01
Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and their effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate their effects. The most significant tsunamis are, not surprisingly, often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus on several significant ones. There are various criteria for determining the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, whether the event had a major impact on tsunami science or policy, etc. As a result, the descriptions include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "orphan" tsunami, and as a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found on the NGDC website: www.ngdc.noaa.gov/hazard/
Performance analysis of approximate Affine Projection Algorithm in acoustic feedback cancellation.
Nikjoo S, Mohammad; Seyedi, Amir; Tehrani, Arash Saber
2008-01-01
Acoustic feedback is an annoying problem in several audio applications and especially in hearing aids. Adaptive feedback cancellation techniques have attracted recent attention and show great promise in reducing the deleterious effects of feedback. In this paper, we investigated the performance of a class of adaptive feedback cancellation algorithms, viz. the approximated Affine Projection Algorithm (APA). Mixed results were obtained with the natural speech and music data collected from five different commercial hearing aids in a variety of sub-oscillatory and oscillatory feedback conditions. The performance of the approximated APA was significantly better with music stimuli than with natural speech stimuli. PMID:19162642
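The order-1 special case of the affine projection algorithm reduces to normalized LMS (NLMS). A minimal feedback-path identification sketch follows, with a hypothetical 3-tap feedback path and white-noise excitation rather than hearing-aid signals:

```python
import random

random.seed(4)

def nlms_cancel(mic, ref, taps=8, mu=0.5, delta=1e-6):
    """Normalized LMS -- the order-1 special case of the affine projection
    algorithm -- adapting a feedback-path estimate (minimal sketch).
    mic: microphone signal containing the feedback; ref: loudspeaker signal."""
    w = [0.0] * taps
    buf = [0.0] * taps
    out = []
    for d, x in zip(mic, ref):
        buf = [x] + buf[:-1]                 # most recent reference samples
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d - y                            # mic minus feedback estimate
        norm = sum(xi * xi for xi in buf) + delta
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out, w

# Toy feedback path: 3-tap FIR applied to the reference signal.
true_path = [0.5, -0.3, 0.2]
ref = [random.gauss(0, 1) for _ in range(4000)]
feedback = [sum(true_path[k] * ref[n - k] for k in range(3) if n - k >= 0)
            for n in range(len(ref))]
err, w = nlms_cancel(feedback, ref)
print([round(v, 3) for v in w[:3]])          # approaches true_path
```

Higher-order APA replaces the scalar normalization with a small matrix solve over the last few input vectors, buying faster convergence for correlated inputs like speech at extra cost, which is the trade-off the approximated APA variants address.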
Fast imaging system and algorithm for monitoring microlymphatics
NASA Astrophysics Data System (ADS)
Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.
2010-02-01
The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that monitors lymph velocity, from which flow (the main determinant of transport) can be derived in near real time, can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter, and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was reduced to less than 3 minutes, compared with our previous system, in which readings were available 30 minutes after the vessels were imaged. The processing was based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition time significantly. In addition, the use of a new data-filtering technique allowed us to acquire results from recordings that were irresolvable by the previous algorithm due to their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric micro-lymphatics. We recorded velocities of 0.25 mm/s on average in vessels with diameters ranging from 54 µm to 140 µm and phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase the statistical power when dealing with chronic changes. Furthermore, we plan on expanding its functionality to measure the propagation of the contractile activity.
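The displacement step of such a correlation algorithm can be sketched with a circular cross-correlation evaluated in the frequency domain; velocity then follows from the pixel pitch and frame rate. This is a generic reconstruction, not the authors' code: `px_mm` is an assumed pixel size, while 500 fps matches the paper.

```python
import numpy as np

def shift_via_fft_xcorr(a, b):
    """Integer displacement s such that b ~ a circularly shifted by s,
    found from the peak of the circular cross-correlation, which is
    computed in the frequency domain for speed."""
    corr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    lag = int(np.argmax(corr))
    n = len(a)
    return lag if lag <= n // 2 else lag - n  # wrap to a signed shift

def velocity_mm_per_s(profile1, profile2, px_mm=0.005, fps=500):
    """Convert the inter-frame pixel shift of a 1-D intensity profile
    into a velocity in mm/s (px_mm is an assumed pixel pitch)."""
    return shift_via_fft_xcorr(profile1, profile2) * px_mm * fps
```

Sub-pixel accuracy (needed for slow lymph flow) would add peak interpolation around the correlation maximum, omitted here.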
Mapping algorithms on regular parallel architectures
Lee, P.
1989-01-01
Many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if they occurred in wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for acquisition of the signals. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and with the patient in different positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay (Savitzky-Golay) algorithms, visually as well as numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
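The central PRASMMA idea, averaging everywhere except around QRS complexes, can be sketched as below. The threshold-based peak detector, window length, and guard band are our simplifications, not the published parameters:

```python
import numpy as np

def peak_rejection_moving_average(sig, win=5, thresh=0.5, guard=3):
    """Moving average applied everywhere except within a guard band
    around detected peaks, so QRS-like complexes are not flattened.
    Peaks are taken as samples exceeding thresh * max amplitude
    (a simplification of the PRASMMA peak-rejection step)."""
    sig = np.asarray(sig, dtype=float)
    smoothed = np.convolve(sig, np.ones(win) / win, mode="same")
    out = smoothed.copy()
    peaks = np.flatnonzero(np.abs(sig) >= thresh * np.max(np.abs(sig)))
    for p in peaks:
        lo, hi = max(0, p - guard), min(len(sig), p + guard + 1)
        out[lo:hi] = sig[lo:hi]  # keep raw samples near each peak
    return out
```

A real implementation would detect QRS complexes properly (and, per the paper, scale `win` and `guard` with the sampling rate) rather than using a global amplitude threshold.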
Bell, Graham
2016-01-01
In this experiment, the authors were interested in testing the effect of a small molecule inhibitor on the ratio of males and females in the offspring of their model Dipteran species. The authors report that in a wild-type population, ~50% of offspring are male. They then test the effect of treating females with the chemical, which they think might affect the male:female ratio compared with the untreated group. They claim that there is a statistically significant increase in the percentage of males produced and conclude that the drug affects sex ratios. PMID:27338560
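A claim of this kind is typically checked with an exact binomial test of the 50:50 null; a minimal sketch (our illustration of the statistics, not the authors' analysis):

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value: under Binomial(n, p),
    sum the probabilities of all outcomes no more likely than the
    observed count k (the 'small p-values' two-sided convention)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    obs = pmf[k]
    return min(1.0, sum(q for q in pmf if q <= obs + 1e-12))
```

For example, 9 males in 10 offspring gives a two-sided p-value of 22/1024 ≈ 0.021, whereas 5 in 10 gives p = 1.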
Exact significance test for Markov order
NASA Astrophysics Data System (ADS)
Pethel, S. D.; Hahs, D. W.
2014-02-01
We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.
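A drastically simplified version of the surrogate idea, reduced to testing the order-0 (i.i.d.) null with shuffle surrogates instead of the paper's exact per-shot nth-order sampler, can be sketched as:

```python
import random
from collections import Counter
from math import log

def cond_entropy1(seq):
    """Empirical conditional entropy H(x_{t+1} | x_t); lower values
    indicate first-order temporal structure."""
    pairs = Counter(zip(seq, seq[1:]))
    ctx = Counter(a for a, _ in pairs.elements())  # context counts
    n = len(seq) - 1
    return -sum(m / n * log(m / ctx[a]) for (a, _), m in pairs.items())

def markov_order0_test(seq, n_surr=200, seed=1):
    """Surrogate test of the null 'seq is order 0 (i.i.d.)'. Each
    surrogate is a shuffle of seq, which preserves symbol counts
    exactly while destroying temporal structure; the p-value is the
    fraction of surrogates at least as structured as the data."""
    rng = random.Random(seed)
    obs = cond_entropy1(seq)
    seq = list(seq)
    hits = 0
    for _ in range(n_surr):
        rng.shuffle(seq)
        if cond_entropy1(seq) <= obs:
            hits += 1
    return (hits + 1) / (n_surr + 1)
```

The paper's contribution is the harder general case: surrogates that exactly match nth-order transition counts, sampled uniformly per shot, which plain shuffling does not provide for n >= 1.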
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
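The time-to-collision quantity itself follows from image geometry: an object of fixed physical size at distance Z has image size s proportional to 1/Z, so tau = Z/v = s/(ds/dt), with no absolute distance required. A minimal finite-difference sketch (the frame values below are illustrative, not flight data):

```python
def time_to_collision(sizes, dt):
    """Time-to-collision from the growth of a tracked image feature:
    tau = s / (ds/dt), estimated from the last two frames. sizes is a
    sequence of image sizes (e.g. patch widths in pixels), dt the
    frame interval in seconds."""
    ds = (sizes[-1] - sizes[-2]) / dt  # backward-difference growth rate
    return sizes[-1] / ds
```

For a camera approaching at 2 m/s from 10 m (image size 100/Z), the estimate from frames at Z = 8.2 m and 8.0 m is 4.1 s versus the true 4.0 s; the gap is discretization error, which shrinks with the frame interval.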
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Adaptive motion artifact reducing algorithm for wrist photoplethysmography application
NASA Astrophysics Data System (ADS)
Zhao, Jingwei; Wang, Guijin; Shi, Chenbo
2016-04-01
Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It can reveal potential risks to heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact, owing to the thicker tissues and the smaller number of capillaries at the wrist. Motion artifact is therefore the major factor that impedes heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from motion artifact by exploiting their mixing ratios in different optical paths. There are four major steps in our method: preprocessing, motion artifact estimation, adaptive filtering, and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and each subject ran for 3-10 minutes while swinging the arms naturally. The final result is compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well during intense physical exercise and shows great robustness across individuals with different running styles and postures.
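The adaptive-filtering step can be illustrated with a single-reference normalized LMS canceller driven by the accelerometer. This is a hedged stand-in, not the paper's multi-wavelength mixing-ratio method, and all parameter values are our own:

```python
import numpy as np

def nlms_cancel(ppg, accel, order=8, mu=0.2, eps=1e-6):
    """Adaptive motion-artifact cancellation sketch: an NLMS filter
    predicts the artifact component of the PPG channel from the
    accelerometer reference; the residual e is the cleaned pulse."""
    w = np.zeros(order)
    out = np.zeros(len(ppg))
    for n in range(len(ppg)):
        x = accel[max(0, n - order + 1):n + 1][::-1]   # newest first
        x = np.pad(x, (0, order - len(x)))             # zero-pad start-up
        e = ppg[n] - w @ x                 # residual = pulse estimate
        w += mu * e * x / (eps + x @ x)    # normalized LMS update
        out[n] = e
    return out
```

This works when the artifact is (approximately) a linear filtering of the accelerometer signal and uncorrelated with the pulse; the paper's mixing-ratio approach additionally exploits the three optical wavelengths.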
Bayesian Smoothing Algorithms in Partially Observed Markov Chains
NASA Astrophysics Data System (ADS)
Ait-el-Fquih, Boujemaa; Desbouvries, François
2006-11-01
Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process, and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N}, with t_n = (x_n, r_n, y_{n-1}), is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms, or the Bryson and Frazier algorithm.
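In the Gaussian HMC special case, these smoothers reduce to familiar Kalman forms. A scalar RTS (Rauch-Tung-Striebel) smoother, the simplest member of the family the paper generalizes, can be sketched as:

```python
import numpy as np

def rts_smooth(y, a=0.95, q=0.1, r=1.0, m0=0.0, p0=1.0):
    """Scalar RTS smoother for the HMC special case
    x_{n+1} = a x_n + u_n, y_n = x_n + v_n, u_n ~ N(0, q),
    v_n ~ N(0, r): a forward Kalman pass, then a backward pass."""
    n = len(y)
    mf, pf = np.zeros(n), np.zeros(n)    # filtered means / variances
    m, p = m0, p0
    for t in range(n):
        k = p / (p + r)                  # Kalman gain
        m, p = m + k * (y[t] - m), (1 - k) * p
        mf[t], pf[t] = m, p
        m, p = a * m, a * a * p + q      # one-step prediction
    ms = mf.copy()
    for t in range(n - 2, -1, -1):       # backward (smoothing) pass
        g = pf[t] * a / (a * a * pf[t] + q)   # smoother gain
        ms[t] = mf[t] + g * (ms[t + 1] - a * mf[t])
    return ms
```

The TMC setting replaces the scalar state with the triplet t_n and yields matrix-valued analogues of the same two-pass structure.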
A Revision of the NASA Team Sea Ice Algorithm
NASA Technical Reports Server (NTRS)
Markus, T.; Cavalieri, Donald J.
1998-01-01
In a recent paper, two operational algorithms to derive ice concentration from satellite multichannel passive microwave sensors have been compared. Although the results of these, known as the NASA Team algorithm and the Bootstrap algorithm, have been validated and are generally in good agreement, there are areas where the ice concentrations differ by up to 30%. These differences can be explained by shortcomings in one or the other algorithm. Here, we present an algorithm which, in addition to the 19 and 37 GHz channels used by both the Bootstrap and NASA Team algorithms, makes use of the 85 GHz channels as well. Atmospheric effects, particularly at 85 GHz, are reduced by using a forward atmospheric radiative transfer model. Comparisons with the NASA Team and Bootstrap algorithms show that the individual shortcomings of these algorithms are not apparent in this new approach. The results further show better quantitative agreement with ice concentrations derived from NOAA AVHRR infrared data.
Anthropological significance of phenylketonuria.
Saugstad, L F
1975-01-01
The highest incidence rates of phenylketonuria (PKU) have been observed in Ireland and Scotland. Parents heterozygous for PKU in Norway differ significantly from the general population in the Rhesus, Kell and PGM systems. The parents investigated showed an excess of Rh negative, Kell plus and PGM type 1 individuals, which makes them similar to the present populations in Ireland and Scotland. It is postulated that the heterozygotes for PKU in Norway are descended from a completely assimilated sub-population of Celtic origin, who came or were brought here 1000 years ago. Bronze objects of Western European (Scottish, Irish) origin, found in Viking graves widely distributed in Norway, have been taken as evidence of Vikings returning with loot (including a number of Celts) from Western Viking settlements. The continuity of residence since the Viking age in most habitable parts of Norway, and what seems to be a nearly complete regional relationship between the sites where Viking graves contain western imported objects and the birthplaces of grandparents of PKUs identified in Norway, lend further support to the hypothesis that the heterozygotes for PKU in Norway are descended from a completely assimilated subpopulation. The remarkable resemblance between Iceland and Ireland, in respect of several genetic markers (including the Rhesus, PGM and Kell systems), is considered to be an expression of a similar proportion of people of Celtic origin in each of the two countries. Their identical, high incidence rates of PKU are regarded as further evidence of this. The significant decline in the incidence of PKU when one passes from Ireland, Scotland and Iceland, to Denmark and on to Norway and Sweden, is therefore explained as being related to a reduction in the proportion of inhabitants of Celtic extraction in the respective populations.
A parallel algorithm for the eigenvalues and eigenvectors of a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
Indexing and Automatic Significance Analysis
ERIC Educational Resources Information Center
Steinacker, Ivo
1974-01-01
An algorithm is proposed to solve the problem of sequential indexing which does not use any grammatical or semantic analysis, but follows the principle of emulating human judgement by evaluation of machine-recognizable attributes of structured word assemblies. (Author)
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Basic firefly algorithm for document clustering
NASA Astrophysics Data System (ADS)
Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza
2015-12-01
Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
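A minimal Basic FA loop for this task: each firefly encodes k centroids, brightness is the (negated) ADDC objective, and dimmer fireflies move toward brighter ones with attractiveness beta0*exp(-gamma*r^2) plus a decaying random step. All parameter values here are illustrative, not the paper's settings:

```python
import numpy as np

def addc(centroids, docs):
    """Average Distance to Document Centroid: mean distance from each
    document (row of docs) to its nearest centroid."""
    d = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

def firefly_cluster(docs, k=2, n_fireflies=10, iters=60,
                    beta0=1.0, gamma=0.01, alpha=0.1, seed=0):
    """Basic Firefly clustering sketch minimizing ADDC."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(docs), (n_fireflies, k), replace=True)
    pop = docs[idx] + 0.01 * rng.standard_normal((n_fireflies, k, docs.shape[1]))
    light = np.array([addc(c, docs) for c in pop])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:          # j is brighter: attract i
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * (
                        rng.random(pop[i].shape) - 0.5)
                    light[i] = addc(pop[i], docs)
        alpha *= 0.97                            # cool the random walk
    best = int(np.argmin(light))
    return pop[best], float(light[best])
```

Real document vectors would be high-dimensional TF-IDF rows (typically with cosine distance); the Euclidean toy setting here only illustrates the search dynamics.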
Grooming of arbitrary traffic using improved genetic algorithms
NASA Astrophysics Data System (ADS)
Jiao, Yueguang; Xu, Zhengchun; Zhang, Hanyi
2004-04-01
A genetic algorithm with permutation-based chromosome representation and roulette wheel selection is proposed to solve traffic grooming problems in WDM ring networks. The parameters of the algorithm are evaluated by calculation over a large number of traffic patterns under different conditions. Four methods were developed to improve the algorithm, and they can be used in combination with each other. Their effects on the algorithm are studied via computer simulations. The results show that they all make the algorithm more powerful at reducing the number of add-drop multiplexers or wavelengths required in a network.
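The two named ingredients, permutation chromosomes and roulette wheel selection, can be sketched generically; here the grooming objective is replaced by an arbitrary cost function over permutations, and the improvement methods are reduced to elitism plus swap mutation:

```python
import random

def ga_permutation(cost, n, pop_size=40, gens=200, pm=0.3, seed=7):
    """Toy GA with permutation chromosomes: roulette wheel
    (fitness-proportional) selection over fitness = 1/cost, one elite
    kept per generation, swap mutation with probability pm."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        costs = [cost(p) for p in pop]
        best = min(range(pop_size), key=costs.__getitem__)
        fit = [1.0 / (1e-9 + c) for c in costs]   # lower cost -> fitter
        total = sum(fit)
        def roulette():
            r, acc = rng.random() * total, 0.0
            for p, f in zip(pop, fit):
                acc += f
                if acc >= r:
                    return p
            return pop[-1]
        nxt = [pop[best][:]]                      # elitism
        while len(nxt) < pop_size:
            child = roulette()[:]
            if rng.random() < pm:                 # swap mutation keeps
                i, j = rng.sample(range(n), 2)    # the chromosome a
                child[i], child[j] = child[j], child[i]  # permutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

In the grooming setting the permutation would order traffic demands for assignment, and the cost would count add-drop multiplexers or wavelengths.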
A VLSI architecture for simplified arithmetic Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.
1992-01-01
The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of lower complexity and improved performance relative to certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent relative to the direct method.
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
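The first family of subalgorithms can be sketched as a brute-force search over shift/mask combinations until every key hashes uniquely; the search ranges and the membership-test wrapper are illustrative choices of ours:

```python
def find_shift_mask(keys, max_shift=32, mask_bits=8):
    """Search shift/mask combinations until (k >> s) & m is unique for
    every key, i.e. a collision-free hash exists for this key set.
    Returns (shift, mask), or None if the search ranges are exhausted."""
    for s in range(max_shift):
        for b in range(1, mask_bits + 1):
            m = (1 << b) - 1
            hashed = [(k >> s) & m for k in keys]
            if len(set(hashed)) == len(keys):
                return s, m
    return None

def make_member_test(keys):
    """Constant-time membership predicate built from the solution:
    one hash, one table lookup, one equality check -- no collision
    resolution or secondary hashing is ever needed."""
    s, m = find_shift_mask(keys)
    table = {(k >> s) & m: k for k in keys}
    return lambda x: table.get((x >> s) & m) == x
```

Storing the full key in the table and comparing on lookup is what lets non-members that happen to hash to an occupied slot be rejected without any search.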
Fungi producing significant mycotoxins.
2012-01-01
Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors
An improved localization algorithm based on genetic algorithm in wireless sensor networks.
Peng, Bo; Li, Lei
2015-04-01
Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a wireless, decentralized network comprised of nodes, which autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization solutions are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves localization accuracy compared with previous algorithms.
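Baseline DV-Hop, the algorithm the paper improves with a genetic search, proceeds in three steps: flood hop counts from each anchor, convert hop counts to distances via an average per-hop size, and multilaterate. A minimal sketch (the GA refinement is omitted):

```python
import numpy as np
from collections import deque

def bfs_hops(adj, src):
    """Hop counts from src over the connectivity graph (the flooding
    step of DV-Hop, done here with BFS)."""
    hops, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def dv_hop(anchor_pos, adj, node):
    """DV-Hop sketch: anchor_pos maps anchor node ids to coordinates;
    the unknown node multilaterates from hop-count * hop-size
    distances via linearized least squares."""
    anchors = list(anchor_pos)
    tables = {a: bfs_hops(adj, a) for a in anchors}
    dists = []
    for a in anchors:
        pa = np.array(anchor_pos[a])
        num = sum(np.linalg.norm(pa - np.array(anchor_pos[b]))
                  for b in anchors if b != a)
        den = sum(tables[a][b] for b in anchors if b != a)
        dists.append((num / den) * tables[a][node])  # hop size * hops
    # Linearize ||x - p_i||^2 = d_i^2 by subtracting the last equation.
    ps = np.array([anchor_pos[a] for a in anchors], float)
    d = np.array(dists)
    A = 2 * (ps[:-1] - ps[-1])
    b = (np.sum(ps[:-1] ** 2, 1) - np.sum(ps[-1] ** 2)
         - d[:-1] ** 2 + d[-1] ** 2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est
```

The residual error visible in the test below (roughly one hop length) is the known coarseness of hop-count distances that the paper's genetic refinement targets.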
Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster
NASA Astrophysics Data System (ADS)
Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.
2015-10-01
Mining frequent itemsets from massive datasets has always been one of the most important problems in data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed, addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology for processing big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design data mining algorithms for this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the data structures hash tree, trie, and hash table trie (i.e., trie with hash technique) on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for Apriori on a Hadoop cluster, which has not yet been given attention. Experiments are carried out on both real-life and synthetic datasets, and they show that the hash table trie data structure performs far better than trie and hash tree in terms of execution time. Moreover, the hash tree performs worst of the three.
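A single-machine Apriori core using Python dictionaries (hash tables) for candidate counting can be sketched as below; the hash tree/trie variants and the MapReduce decomposition studied in the paper are omitted:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: level-wise candidate generation with subset
    pruning, candidate counts kept in a dict (hash table). Returns a
    map from frequent itemset (frozenset) to its support count."""
    transactions = [frozenset(t) for t in transactions]
    counts = {}
    for t in transactions:                    # L1: frequent single items
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    freq = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        items = sorted({i for s in freq for i in s})
        # Candidate pruning: every (k-1)-subset must itself be frequent.
        cands = [frozenset(c) for c in combinations(items, k)
                 if all(frozenset(s) in freq for s in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t) for c in cands}
        freq = {s: c for s, c in counts.items() if c >= min_support}
        result.update(freq)
        k += 1
    return result
```

In the MapReduce re-design, each level's counting pass becomes a map (emit candidate occurrences per transaction split) and reduce (sum and threshold) job, with the candidate set broadcast to mappers in one of the three data structures compared in the paper.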
NASA Astrophysics Data System (ADS)
Weber, Bruce A.
2005-07-01
We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.
On the convergence of the phase gradient autofocus algorithm for synthetic aperture radar imaging
Hicks, M.J.
1996-01-01
Synthetic Aperture Radar (SAR) imaging is a class of coherent range and Doppler signal processing techniques applied to remote sensing. The aperture is synthesized by recording and processing coherent signals at known positions along the flight path. Demands for greater image resolution put an extreme burden on requirements for inertial measurement units that are used to maintain accurate pulse-to-pulse position information. The recently developed Phase Gradient Autofocus algorithm relieves this burden by taking a data-driven digital signal processing approach to estimating the range-invariant phase aberrations due to either uncompensated motions of the SAR platform or to atmospheric turbulence. Although the performance of this four-step algorithm has been demonstrated, its convergence has not been modeled mathematically. A new sensitivity study of algorithm performance is a necessary step towards this model. Insights that are significant to the application of this algorithm to both SAR and to other coherent imaging applications are developed. New details on algorithm implementation identify an easily avoided biased phase estimate. A new algorithm for defining support of the point spread function is proposed, which promises to reduce the number of iterations required even for rural scenes with low signal-to-clutter ratios.
A novel hardware-friendly algorithm for hyperspectral linear unmixing
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Santos, Lucana; López, Sebastián; Sarmiento, Roberto
2015-10-01
significantly reduced.
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard-to-find bugs. Java's garbage-collected memory management, lack of pointer arithmetic, and array-bounds checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Fast proximity algorithm for MAP ECT reconstruction
NASA Astrophysics Data System (ADS)
Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng
2012-03-01
We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and we proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to a unique solution. Because the obtained algorithm exhibits slow convergence speed, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.
Performance analysis of cone detection algorithms.
Mariotti, Letizia; Devaney, Nicholas
2015-04-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
Oscillation Detection Algorithm Development Summary Report and Test Plan
Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang
2009-10-03
Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement
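As a toy illustration of measurement-based mode analysis, the decay rate of a single ringdown mode can be read off the logarithm of its envelope peaks. This is a deliberately simplified stand-in for ModeMeter-class estimators (which handle multiple modes, noise, and ambient data); the single-mode signal model and sampling choices below are invented for the example.

```python
import math

def ringdown_decay_rate(x, dt):
    """Estimate sigma for x(t) ~ A*exp(-sigma*t)*cos(w*t) by fitting a
    least-squares line to (t_k, log p_k) over the positive envelope
    peaks p_k; the slope of that line is -sigma."""
    pk = [(i * dt, x[i]) for i in range(1, len(x) - 1)
          if x[i] > 0 and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    n = len(pk)
    st = sum(t for t, _ in pk)
    sy = sum(math.log(p) for _, p in pk)
    stt = sum(t * t for t, _ in pk)
    sty = sum(t * math.log(p) for t, p in pk)
    return -(n * sty - st * sy) / (n * stt - st * st)
```

A negative estimated sigma (growing envelope) would correspond to the undamped, alarm-worthy condition the abstract describes.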
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
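The l1-regularized least-squares core of the deblurring step can be illustrated with a plain subgradient-descent sketch on a tiny separable problem. A diminishing step size stands in for the paper's subgradient projection scheme, and the 2x2 data in the test is invented; a real deblurring solve would use the LCD blur operator as A.

```python
import math

def objective(A, b, lam, x):
    """F(x) = ||Ax - b||^2 + lam * ||x||_1."""
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
         for i in range(len(A))]
    return sum(v * v for v in r) + lam * sum(abs(v) for v in x)

def subgradient_step(A, b, lam, x, step):
    """One descent step using 2*A^T(Ax - b) plus the sign subgradient
    of the l1 term."""
    m, n = len(A), len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    g = [2 * sum(A[i][j] * r[i] for i in range(m)) +
         lam * ((x[j] > 0) - (x[j] < 0))
         for j in range(n)]
    return [x[j] - step * g[j] for j in range(n)]
```

For A = I the minimizer is the soft-thresholded observation, which makes the toy problem easy to check against.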
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, have also been developed using the proposed algorithm as their computation kernel. PMID:24212035
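At its core, tag SNP selection is a set-cover problem: pick a few SNPs whose linkage groups cover all the rest. The textbook greedy heuristic sketches the idea (this is not the paper's algorithm, and the toy linkage sets are invented):

```python
def greedy_tag_snps(cover):
    """Greedy set cover: cover[s] is the set of SNPs that SNP s can
    represent (itself included). Repeatedly pick the SNP covering the
    most still-uncovered SNPs."""
    remaining = set().union(*cover.values())
    tags = []
    while remaining:
        best = max(cover, key=lambda s: len(cover[s] & remaining))
        tags.append(best)
        remaining -= cover[best]
    return tags
```

Greedy gives a logarithmic approximation to the optimal cover; the abstract's speedups come from a more efficient exact/heuristic scheme, not from this sketch.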
Algorithmic Strategies in Combinatorial Chemistry
GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN
2000-08-01
Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
Reducing the effect of pixel crosstalk in phase only spatial light modulators.
Persson, Martin; Engström, David; Goksör, Mattias
2012-09-24
A method for compensating for pixel crosstalk in liquid crystal based spatial light modulators is presented. By modifying a commonly used hologram-generating algorithm to account for pixel crosstalk, the errors in the obtained diffraction spot intensities are significantly reduced. We also introduce a novel method for characterizing the pixel crosstalk in phase-only spatial light modulators, providing input for the hologram-generating algorithm. The methods are experimentally evaluated and an improvement of the spot uniformity by more than 100% is demonstrated for an SLM with large pixel crosstalk. PMID:23037382
A design study on complexity reduced multipath mitigation
NASA Astrophysics Data System (ADS)
Wasenmüller, U.; Brack, T.; Groh, I.; Staudinger, E.; Sand, S.; Wehn, N.
2012-09-01
Global navigation satellite systems, e.g. the current GPS and the future European Galileo system, are frequently used in car navigation systems or smart phones to determine the position of a user. The calculation of the mobile position is based on the signal propagation times between the satellites and the mobile terminal. At least four time of arrival (TOA) measurements from four different satellites are required to resolve the position uniquely. Further, the satellites need to be in line-of-sight of the receiver for exact position calculation. However, in an urban area, the direct path may be blocked and the resulting multipath propagation causes errors on the order of tens of meters for each measurement, and in the case of non-line-of-sight (NLOS), positive errors on the order of hundreds of meters. In this paper an advanced algorithm for multipath mitigation, known as CRMM, is presented. CRMM features reduced algorithmic complexity and superior performance in comparison with other state-of-the-art multipath mitigation algorithms. Simulation results demonstrate the significant improvements in position calculation in environments with severe multipath propagation. Nevertheless, in relation to traditional algorithms an increased effort is required for real-time signal processing due to the large amount of data, which has to be processed in parallel. Based on CRMM, we performed a comprehensive design study including a design space exploration for the tracking unit hardware part, and prototype implementation for hardware complexity estimation.
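The TOA positioning step that multipath corrupts can be sketched as a 2-D Gauss-Newton fix from range measurements. Receiver clock bias and the CRMM mitigation itself are omitted for brevity, and the anchor geometry in the test is invented; the point is only to show why biased ranges translate directly into a biased position.

```python
import math

def toa_fix(anchors, ranges, x0, y0, iters=20):
    """2-D Gauss-Newton position fix from range (TOA * c) measurements."""
    x, y = x0, y0
    for _ in range(iters):
        J, r = [], []
        for (sx, sy), d in zip(anchors, ranges):
            est = math.hypot(x - sx, y - sy)
            # Jacobian row: unit vector from anchor to current estimate
            J.append(((x - sx) / est, (y - sy) / est))
            r.append(d - est)
        # normal equations (J^T J) delta = J^T r, solved in closed form
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        x += (c * gx - b * gy) / det
        y += (a * gy - b * gx) / det
    return x, y
```

Adding a few tens of meters to one of the ranges shifts the returned fix accordingly, which is exactly the error budget multipath mitigation algorithms such as CRMM attack.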
Algorithms for optimal dyadic decision trees
Hush, Don; Porter, Reid
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.
2015-08-10
Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches, and requires less stringent data passing constrained problem decompositions. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
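The shard-train-combine pattern the abstract describes can be sketched with a trivial 1-D threshold learner standing in for the SVM Game classifier; the sharding scheme, learner, and data are illustrative assumptions, not the paper's method.

```python
def train_stump(points):
    """Tiny 1-D 'learner': threshold halfway between the class means
    (a stand-in for the per-shard SVM Game classifier)."""
    pos = [v for v, y in points if y == 1]
    neg = [v for v, y in points if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def mapreduce_classify(data, n_parts, query):
    """Map: train one learner per data shard. Reduce: majority vote."""
    shards = [data[i::n_parts] for i in range(n_parts)]
    thresholds = [train_stump(s) for s in shards]            # map step
    votes = sum(1 for t in thresholds if query > t)          # reduce step
    return 1 if 2 * votes > len(thresholds) else 0
```

The key MapReduce property is visible even at this scale: each shard's training needs no communication with the others, and only the tiny per-shard models meet in the reduce step.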
A novel image-domain-based cone-beam computed tomography enhancement algorithm
NASA Astrophysics Data System (ADS)
Li, Xiang; Li, Tianfang; Yang, Yong; Heron, Dwight E.; Saiful Huq, M.
2011-05-01
Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.
Reducing the Dimensionality of the Inverse Problem in IMRT
NASA Astrophysics Data System (ADS)
Cabal, Gonzalo
2007-11-01
The inverse problem in IMRT (Intensity Modulated Radiation Therapy) consists in finding a set of radiation parameters based on the conditions given by the radiation therapist. The dimensionality of this problem usually depends on the number of bixels into which each radiation field is divided. Recently, efforts have focused on finding arrangements of small segments (subfields) irradiated uniformly. In this paper a deterministic algorithm is proposed which finds solutions given a maximal number of segments. The procedure consists of two parts. In the first part the segments are chosen based on the irradiation geometry defined by the therapist. In the second part, the radiation intensity of the segments is optimized using standard optimization algorithms. Results are presented. Computational times were reduced and the final fluence maps were less complex without significantly sacrificing clinical value.
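The second part, optimizing segment intensities once the shapes are fixed, can be sketched as a projected-gradient nonnegative least-squares fit. Projected gradient here is a generic stand-in for the "standard optimization algorithms" mentioned, and the two-segment phantom in the test is invented.

```python
def optimize_segment_weights(segments, target, iters=300, step=0.01):
    """With segment shapes S_k fixed (0/1 aperture masks over bixels),
    find nonnegative intensities w minimising
    ||sum_k w_k * S_k - target||^2 by projected gradient descent."""
    n, m = len(segments), len(target)
    w = [0.0] * n
    for _ in range(iters):
        dose = [sum(w[k] * segments[k][j] for k in range(n))
                for j in range(m)]
        grad = [2 * sum((dose[j] - target[j]) * segments[k][j]
                        for j in range(m)) for k in range(n)]
        # gradient step, then project back onto w_k >= 0
        w = [max(0.0, w[k] - step * grad[k]) for k in range(n)]
    return w
```

Because the number of unknowns equals the number of segments rather than the number of bixels, capping the segment count directly caps the dimensionality of the optimization, which is the point of the two-step approach.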
Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain
2016-05-20
Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures. PMID:27411128
NASA Technical Reports Server (NTRS)
Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.
1976-01-01
The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
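The decision rule attributed to LIMMIX, choose a pure signature when the data point is close enough to a signature mean and otherwise the best mixture of a pair of signatures, can be sketched in one dimension. The class means, tolerance, and scalar distance measure below are illustrative assumptions; the actual classifier works with multivariate LANDSAT signatures.

```python
def limmix_like(x, means, pure_tol):
    """LIMMIX-flavoured rule (1-D sketch): if pixel value x is within
    pure_tol of a class mean, call it pure; otherwise return the best
    two-class mixture and its proportions."""
    best = min(means, key=lambda c: abs(x - means[c]))
    if abs(x - means[best]) <= pure_tol:
        return {best: 1.0}
    classes = list(means)
    options = []
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            m1, m2 = means[classes[i]], means[classes[j]]
            if m1 == m2:
                continue
            # proportion a minimising |a*m1 + (1-a)*m2 - x|, clamped
            a = max(0.0, min(1.0, (x - m2) / (m1 - m2)))
            err = abs(a * m1 + (1 - a) * m2 - x)
            options.append((err, classes[i], classes[j], a))
    err, c1, c2, a = min(options)
    return {c1: a, c2: 1.0 - a}
```

Summing the per-pixel proportions for the crop class over a scene yields the area estimate whose absolute error the abstract reports.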
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.
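For context on the adaptive-filter family this builds on, the simplest member, normalized LMS, fits in a few lines. This is NLMS, not the proposed RPAPA MRAP (which adds affine projections, proportionate reweighting, and memory); the test channel and deterministic input generator are invented.

```python
def nlms_identify(x, d, taps, mu=0.5, eps=1e-6):
    """Normalised LMS system identification: adapt FIR weights w so
    that w . u_n tracks the desired signal d_n, with the step size
    normalised by the input energy."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1: n + 1][::-1]      # newest sample first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        norm = sum(ui * ui for ui in u) + eps
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w
```

Proportionate variants such as PAPA replace the uniform step with per-coefficient gains so that the few large taps of a sparse echo channel adapt faster, which is the behaviour RPAPA's reweighted sparseness measures tune.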
Sinha, S K; Karray, F
2002-01-01
Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried underground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. Automatic inspection systems using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification scheme for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of the pipe are analyzed and crack features are extracted. In the classification step a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values and the backpropagation network, with its learning ability, will show good classification efficiency.
Algorithm for in-flight gyroscope calibration
NASA Technical Reports Server (NTRS)
Davenport, P. B.; Welter, G. L.
1988-01-01
An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.
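The weighting idea, trusting pitch/yaw observations more than roll, reduces to a diagonal weight matrix in a least-squares solve. A 2-parameter sketch of weighted least squares via the normal equations follows; the observation model and weight values are invented, not the gyroscope calibration model itself.

```python
def weighted_lsq2(H, z, w):
    """Solve min_x (Hx - z)^T W (Hx - z) for two parameters and
    diagonal W = diag(w) via the normal equations
    (H^T W H) x = H^T W z, using a closed-form 2x2 solve."""
    a = sum(wi * r[0] * r[0] for wi, r in zip(w, H))
    b = sum(wi * r[0] * r[1] for wi, r in zip(w, H))
    c = sum(wi * r[1] * r[1] for wi, r in zip(w, H))
    p = sum(wi * r[0] * zi for wi, r, zi in zip(w, H, z))
    q = sum(wi * r[1] * zi for wi, r, zi in zip(w, H, z))
    det = a * c - b * b
    return ((c * p - b * q) / det, (a * q - b * p) / det)
```

With consistent (noise-free) data any positive weights recover the true parameters; with noisy data, heavier weights make the accurate pitch/yaw observations dominate the fit, which is the loss-function design choice discussed above.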
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
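The hard/fuzzy c-means family that Bezdek compares with Kohonen's maps alternates membership and centroid updates; a minimal fuzzy c-means sketch follows (fuzzifier m = 2 and the toy data are illustrative, not from the paper).

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: alternate fuzzy membership updates and
    membership-weighted centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return centers, U

# Two well-separated 2-D blobs
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X)
```

Replacing the fuzzy membership row with a hard argmin assignment recovers ordinary (hard) c-means, which is the HCM end of the comparison in the abstract.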
Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank
2008-11-15
Colorectal cancer patients are treated at our hospital with {sup 192}Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm, based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates to healthy tissue and the tumor.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
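The chaos-tuning idea can be sketched by driving one firefly-algorithm parameter with a logistic map; this is a generic illustration on a toy sphere function, not Gandomi et al.'s exact variants, maps, or constants.

```python
import numpy as np

def chaotic_firefly(f, dim=2, n=15, iters=100, seed=0):
    """Firefly algorithm sketch in which the randomization weight alpha
    is driven by a logistic chaotic map instead of a fixed constant."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n, dim))
    beta0, gamma = 1.0, 1.0
    ch = 0.7                                   # chaotic state
    for _ in range(iters):
        ch = 4.0 * ch * (1.0 - ch)             # logistic map x <- 4x(1-x)
        alpha = 0.2 * ch                       # chaos-tuned random step weight
        F = np.array([f(x) for x in X])
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                # move i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
    return X[np.argmin([f(x) for x in X])]

best = chaotic_firefly(lambda x: np.sum(x ** 2))
```

The paper's variants instead tune parameters such as the attractiveness coefficient with 12 different maps; swapping the logistic line for another map is a one-line change.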
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
NASA Astrophysics Data System (ADS)
Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo
1999-05-01
This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent hierarchical and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.
[An Algorithm for Correcting Fetal Heart Rate Baseline].
Li, Xiaodong; Lu, Yaosheng
2015-10-01
Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, an FHR baseline correction algorithm was presented to make an existing baseline more accurate and better fitted to the tracings. Firstly, the deviation of the existing FHR baseline was identified and corrected. A new baseline was then obtained after treatment with some smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and they also demonstrated the effectiveness of the FHR baseline correction algorithm.
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
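The standard bat algorithm that the paper builds on can be sketched as follows; the objective here is a toy quadratic, whereas in multilevel thresholding the search variables would be the k thresholds and the objective a histogram criterion such as between-class variance (all constants are illustrative).

```python
import numpy as np

def bat_algorithm(f, dim, lo, hi, n=20, iters=200, seed=0):
    """Standard bat algorithm sketch: frequency-tuned velocities plus a
    local random walk around the current best; loudness A and pulse
    rate r control acceptance and local search (illustrative values)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros((n, dim))
    A, r = 0.9, 0.5                     # loudness, pulse emission rate
    fit = np.array([f(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n):
            freq = rng.uniform(0, 2)                    # random frequency
            V[i] = V[i] + (X[i] - best) * freq
            cand = np.clip(X[i] + V[i], lo, hi)
            if rng.random() > r:                        # local walk near best
                cand = np.clip(best + 0.05 * rng.normal(size=dim), lo, hi)
            fc = f(cand)
            if fc < fit[i] and rng.random() < A:        # accept if better and loud
                X[i], fit[i] = cand.copy(), fc
            if fc < f(best):
                best = cand.copy()
    return best

# Toy objective; for k-level thresholding, x would hold the k thresholds
best = bat_algorithm(lambda x: np.sum((x - 3) ** 2), dim=3, lo=0, hi=10)
```

The paper's improvement replaces parts of this update with differential-evolution and artificial-bee-colony moves; the skeleton above is the unmodified baseline.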
Social significance of community structure: Statistical view
NASA Astrophysics Data System (ADS)
Li, Hui-Jia; Daniels, Jasmine J.
2015-01-01
Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.
Annealed Importance Sampling Reversible Jump MCMC algorithms
Karagiannis, Georgios; Andrieu, Christophe
2013-03-20
It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms were proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise of routinely tackling transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see, the algorithm can be understood as an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
Comparing Coordinated Garbage Collection Algorithms for Arrays of Solid-state Drives
Lee, Junghee; Kim, Youngjae; Oral, H Sarp; Shipman, Galen M; Dillow, David A; Wang, Feiyi
2012-01-01
Solid-State Drives (SSDs) offer significant performance improvements over hard disk drives (HDD) on a number of workloads. The frequency of garbage collection (GC) activity is directly correlated with the pattern, frequency, and volume of write requests, and scheduling of GC is controlled by logic internal to the SSD. SSDs can exhibit significant performance degradations when garbage collection (GC) conflicts with an ongoing I/O request stream. When using SSDs in a RAID array, the lack of coordination of the local GC processes amplifies these performance degradations. No RAID controller or SSD available today has the technology to overcome this limitation. In our previous work, we presented a Global Garbage Collection (GGC) mechanism to improve response times and reduce performance variability for a RAID array of SSDs. A coordination method is employed so that GCs in the array can run at the same time. The coordination can exhibit substantial performance improvement. In this paper, we explore various GC coordination algorithms. We develop reactive and proactive GC coordination algorithms and evaluate their I/O performance and block erase counts for various workloads. We show that a proactive GC coordination algorithm can improve the I/O response times by up to 9% further and increase the lifetime of SSDs by reducing the number of block erase counts by up to 79% compared to a reactive algorithm.
Omelyan, I P; Mryglod, I M; Folk, R
2002-08-01
A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational costs, allow one to reduce unphysical deviations in the total energy by up to a factor of 100 000 with respect to those of the standard fourth-order-based iteration approach. PMID:12241312
NASA Astrophysics Data System (ADS)
Cartes, David A.; Ray, Laura R.; Collier, Robert D.
2002-04-01
An adaptive leaky normalized least-mean-square (NLMS) algorithm has been developed to optimize stability and performance of active noise cancellation systems. The research addresses LMS filter performance issues related to insufficient excitation, nonstationary noise fields, and time-varying signal-to-noise ratio. The adaptive leaky NLMS algorithm is based on a Lyapunov tuning approach in which three candidate algorithms, each of which is a function of the instantaneous measured reference input, measurement noise variance, and filter length, are shown to provide varying degrees of tradeoff between stability and noise reduction performance. Each algorithm is evaluated experimentally for reduction of low frequency noise in communication headsets, and stability and noise reduction performance are compared with that of traditional NLMS and fixed-leakage NLMS algorithms. Acoustic measurements are made in a specially designed acoustic test cell which is based on the original work of Ryan et al. ["Enclosure for low frequency assessment of active noise reducing circumaural headsets and hearing protection," Can. Acoust. 21, 19-20 (1993)] and which provides a highly controlled and uniform acoustic environment. The stability and performance of the active noise reduction system, including a prototype communication headset, are investigated for a variety of noise sources ranging from stationary tonal noise to highly nonstationary measured F-16 aircraft noise over a 20 dB dynamic range. Results demonstrate significant improvements in stability of Lyapunov-tuned LMS algorithms over traditional leaky or nonleaky normalized algorithms, while providing noise reduction performance equivalent to that of the NLMS algorithm for idealized noise fields.
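The leakage mechanism discussed above can be sketched with a fixed-leakage NLMS update; in the Lyapunov-tuned variant the leakage would instead vary with the measured reference power and noise variance, so the constant `leak` below is purely illustrative.

```python
import numpy as np

def leaky_nlms_step(w, x_vec, d, mu=0.5, leak=1e-4, eps=1e-8):
    """Fixed-leakage NLMS: the (1 - mu*leak) factor bleeds energy out of
    the weights each step, bounding weight drift when the input is
    poorly exciting, at the cost of a small steady-state bias."""
    e = d - w @ x_vec
    w_new = (1 - mu * leak) * w + mu * e * x_vec / (x_vec @ x_vec + eps)
    return w_new, e

# Identify a 16-tap system; error power should fall by orders of magnitude
rng = np.random.default_rng(0)
h = rng.standard_normal(16)
x = rng.standard_normal(5000)
w = np.zeros(16)
errs = []
for n in range(16, 5000):
    xv = x[n-16:n][::-1]
    w, e = leaky_nlms_step(w, xv, h @ xv)
    errs.append(e * e)
print("early vs late error power:", np.mean(errs[:200]), np.mean(errs[-200:]))
```

Setting `leak=0` recovers plain NLMS; the paper's contribution is choosing the leakage adaptively so the bias/stability tradeoff tracks the noise environment.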
Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng
2015-02-01
Three feature extraction algorithms, principal component analysis (PCA), the discrete cosine transform (DCT), and non-negative matrix factorization (NMF), were used to extract the main information of the spectral data in order to weaken the influence of spectral fluctuation on the subsequent quantitative analysis results, based on the SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm, partial least squares regression (PLSR), and a non-linear regression algorithm, support vector machine regression (SVR), to develop the quantitative analysis models. Finally, the effect of the different feature extraction algorithms on the two kinds of regression algorithms was evaluated using 5-fold cross-validation. The experiments demonstrate that the analysis results of SVR are better than those of PLSR because of the non-linear relationship between the intensity of the SERS spectrum and the concentration of the analyte. Further, the feature extraction algorithms can significantly improve the analysis results regardless of the regression algorithm, mainly because they extract the main information of the source spectral data and eliminate the fluctuation. Additionally, PCA performs best with the linear regression model and NMF is best with the non-linear model, and the predictive error can be reduced nearly three times in the best case. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 micromol x L(-1) (10(-6) mol x L(-1)), which attains the national detection limit for thiram, so the method in this study provides a novel approach for the fast detection of thiram. In conclusion, the study provides experimental references for selecting feature extraction algorithms in the analysis of SERS spectra, and some common findings on feature extraction can also help in processing other kinds of spectroscopy.
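The feature-extraction-then-regression pipeline can be sketched with PCA scores feeding a linear calibration; PLSR and SVR are replaced here by plain least squares, and the synthetic "spectra" are invented for illustration, so nothing below reproduces the paper's data or models.

```python
import numpy as np

def pca_scores(X, k):
    """Project mean-centered spectra onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Synthetic 'spectra': concentration scales one broad Gaussian band plus noise
rng = np.random.default_rng(0)
conc = rng.uniform(0, 1, 60)
band = np.exp(-0.5 * ((np.arange(100) - 50) / 8) ** 2)
X = conc[:, None] * band[None, :] + 0.01 * rng.standard_normal((60, 100))

T = pca_scores(X, k=3)
# Linear calibration on the PCA scores (stand-in for PLSR/SVR)
A = np.hstack([T, np.ones((60, 1))])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - conc) ** 2))
print("calibration RMSE:", rmse)
```

The point of the projection step mirrors the abstract's argument: regression on a few denoised components is far less sensitive to channel-level spectral fluctuation than regression on the raw spectrum.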
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
OPC recipe optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Asthana, Abhishek; Wilkinson, Bill; Power, Dave
2016-03-01
Optimization of OPC recipes is not trivial due to multiple parameters that need tuning and their correlation. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which make it more complex. The present work demonstrates the application of the Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, the GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and (b) improving printing and via coverage at line end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b), line ends for various features showed significant printing and filling improvement.
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
Advanced optimization of permanent magnet wigglers using a genetic algorithm
Hajima, Ryoichi
1995-12-31
In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in proper order. We showed that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the direction of the field. The result shows the genetic algorithm is superior to other algorithms.
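A permutation GA for magnet sorting can be sketched as follows; the cost function here (peak cumulative strength error along the array) is an illustrative stand-in for the real wiggler field-error figure of merit, and the GA operators are deliberately minimal.

```python
import random

def peak_cumulative(errors, perm):
    """Peak |cumulative error| along the magnet array for an ordering
    (illustrative stand-in for the wiggler field-error merit function)."""
    s, worst = 0.0, 0.0
    for i in perm:
        s += errors[i]
        worst = max(worst, abs(s))
    return worst

def ga_sort(errors, pop=40, gens=150, seed=0):
    """Permutation GA sketch: truncation selection plus pairwise-swap
    mutation evolves magnet-piece orderings toward low field error."""
    rng = random.Random(seed)
    n = len(errors)

    def mutate(perm):
        p = perm[:]
        i, j = rng.sample(range(n), 2)   # swap two magnet pieces
        p[i], p[j] = p[j], p[i]
        return p

    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: peak_cumulative(errors, p))
        survivors = population[: pop // 2]          # keep the best half
        population = survivors + [mutate(rng.choice(survivors))
                                  for _ in range(pop // 2)]
    return min(population, key=lambda p: peak_cumulative(errors, p))

r = random.Random(1)
raw = [r.gauss(0, 1) for _ in range(20)]
mean = sum(raw) / len(raw)
errs = [e - mean for e in raw]          # zero-mean piece strength errors
best_perm = ga_sort(errs)
```

A good ordering interleaves positive and negative errors so partial sums stay small; the GA finds such interleavings without enumerating the n! permutations.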
An image-data-compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1981-01-01
Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.
Meteorological data analysis using MapReduce.
Fang, Wei; Sheng, V S; Wen, XueZhi; Pan, Wubin
2014-01-01
In atmospheric science, the scale of meteorological data is massive and growing rapidly. K-means is a fast and widely used clustering algorithm that has been applied in many fields. However, for large-scale meteorological data, the traditional K-means algorithm is not capable of satisfying actual application needs efficiently. This paper proposes an improved K-means algorithm (MK-means) based on MapReduce, designed according to the characteristics of large meteorological datasets. The experimental results show that MK-means has greater computing ability and scalability.
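The map/reduce split of one K-means iteration can be sketched as below. This is a single-process simulation: in a Hadoop-style deployment each `kmeans_map` call would run on a separate split of the data, and none of this is the paper's MK-means code.

```python
from collections import defaultdict
import numpy as np

def kmeans_map(points, centers):
    """Map phase: each split emits (nearest-center id, (point, 1)) pairs."""
    pairs = []
    for p in points:
        k = int(np.argmin([np.sum((p - c) ** 2) for c in centers]))
        pairs.append((k, (p, 1)))
    return pairs

def kmeans_reduce(pairs):
    """Reduce phase: sum points and counts per center id, then average."""
    acc = defaultdict(lambda: [np.zeros(2), 0])
    for k, (p, n) in pairs:
        acc[k][0] = acc[k][0] + p
        acc[k][1] += n
    return {k: s / n for k, (s, n) in acc.items()}

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(5, 0.2, (50, 2))])
centers = [data[0], data[60]]             # naive seeding, one per cluster
for _ in range(10):
    # each slice stands in for a data split processed on its own node
    pairs = kmeans_map(data[:50], centers) + kmeans_map(data[50:], centers)
    new = kmeans_reduce(pairs)
    centers = [new.get(i, centers[i]) for i in range(2)]
```

Because each mapper only needs the current centers and its own split, the per-iteration work parallelizes cleanly, which is what makes K-means a natural fit for MapReduce on massive datasets.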
Bai, Mei; Chen, Jiuhong; Raupach, Rainer; Suess, Christoph; Tao, Ying; Peng, Mingchen
2009-01-01
A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P > 0.05), whereas noise was reduced (P < 0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P > 0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.
Decomposition of Large Scale Semantic Graphsvia an Efficient Communities Algorithm
Yao, Y
2008-02-08
's decomposition algorithm, much more efficiently, leading to significantly reduced computation time. Test runs on a desktop computer have shown reductions of up to 89%. Our focus this year has been on the implementation of parallel graph clustering on one of LLNL's supercomputers. In order to achieve efficiency in parallel computing, we have exploited the fact that large semantic graphs tend to be sparse, comprising loosely connected dense node clusters. When implemented on distributed memory computers, our approach performed well on several large graphs with up to one billion nodes, as shown in Table 2. The rightmost column of Table 2 contains the associated Newman's modularity [1], a metric that is widely used to assess the quality of community structure. Existing algorithms produce results that merely approximate the optimal solution, i.e., maximum modularity. We have developed a verification tool for decomposition algorithms, based upon a novel integer linear programming (ILP) approach, that computes an exact solution. We have used this ILP methodology to find the maximum modularity and corresponding optimal community structure for several well-studied graphs in the literature (e.g., Figure 1) [3]. The above approaches assume that modularity is the best measure of quality for community structure. In an effort to enhance this quality metric, we have also generalized Newman's modularity based upon an insightful random walk interpretation that allows us to vary the scope of the metric. Generalized modularity has enabled us to develop new, more flexible versions of our algorithms. In developing these methodologies, we have made several contributions to both graph theoretic algorithms and software engineering. We have written two research papers for refereed publication [3-4] and are working on another one [5]. In addition, we have presented our research findings at three academic and professional conferences.
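Newman's modularity, the quality metric discussed above, can be computed directly from the adjacency matrix; a minimal sketch on a toy graph of two triangles joined by one edge (the generalized, random-walk-scoped variant mentioned in the text is not shown).

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q for an undirected graph and a community
    assignment: fraction of edges inside communities minus the expected
    fraction under a degree-preserving random model."""
    k = A.sum(axis=1)                       # node degrees
    two_m = A.sum()                         # 2 * number of edges
    same = np.equal.outer(labels, labels)   # same-community indicator
    return float(np.sum((A - np.outer(k, k) / two_m) * same) / two_m)

# Two 3-cliques joined by a single bridge edge
A = np.zeros((6, 6))
for block in (range(3), range(3, 6)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1
A[2, 3] = A[3, 2] = 1
q = modularity(A, np.array([0, 0, 0, 1, 1, 1]))
```

Maximizing Q over all assignments is the NP-hard problem the ILP verification tool solves exactly; heuristic decomposition algorithms only approximate that maximum.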
CHROMagar Orientation Medium Reduces Urine Culture Workload
Manickam, Kanchana; Karlowsky, James A.; Adam, Heather; Lagacé-Wiens, Philippe R. S.; Rendina, Assunta; Pang, Paulette; Murray, Brenda-Lee
2013-01-01
Microbiology laboratories continually strive to streamline and improve their urine culture algorithms because of the high volumes of urine specimens they receive and the modest numbers of those specimens that are ultimately considered clinically significant. In the current study, we quantitatively measured the impact of the introduction of CHROMagar Orientation (CO) medium into routine use in two hospital laboratories and compared it to conventional culture on blood and MacConkey agars. Based on data extracted from our Laboratory Information System from 2006 to 2011, the use of CO medium resulted in a 28% reduction in workload for additional procedures such as Gram stains, subcultures, identification panels, agglutination tests, and biochemical tests. The average number of workload units (one workload unit equals 1 min of hands-on labor) per urine specimen was significantly reduced (P < 0.0001; 95% confidence interval [CI], 0.5326 to 1.047) from 2.67 in 2006 (preimplementation of CO medium) to 1.88 in 2011 (postimplementation of CO medium). We conclude that the use of CO medium streamlined the urine culture process and increased bench throughput by reducing both workload and turnaround time in our laboratories. PMID:23363839
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Omura, Yoshiaki; Jones, Marilyn; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu
2013-01-01
Sterilizing the pre-cancer skin of malignant melanoma (M.M.) with 70% Isopropyl alcohol intensified malignancy & the malignant response extended to surrounding normal looking skin, while sterilizing with 80% (vodka) or 12% (plum wine) ethyl alcohol completely inhibited M.M. in the area (both effects lasted for about 90 minutes initially). Burnt food (bread, vegetables, meat, and fish), a variety of smoked & non-smoked fish-skin, many animal's skin, pepper, Vitamin C over 75 mg, mango, pineapple, coconut, almond, sugars, Saccharine & Aspartame, garlic, onion, etc & Electromagnetic field from cellular phones worsened M.M. & induced abnormal M.M. response of surrounding skin. We found the following factors inhibit early stage of M.M. significantly: 1) Increasing normal cell telomere, by taking 500 mg Haritaki, often reached between 400-1150 ng & gradually diminished, but the M.M. response was completely inhibited until normal cell telomeres are reduced to 150 ng, which takes 6-8 hours. More than 70 mg Vitamin C, Orange Juice, & other high Vitamin C containing substances shouldn't be taken because they completely inhibit the effects of Haritaki. 2) We found Chrysotile asbestos & Tremolite asbestos (% of the Chrysotile amount) coexist. A special Cilantro tablet was used to remove asbestos & some toxic metals. 3) Vitamin D3 400 I.U. has a maximum inhibiting effect on M.M. but 800 I.U. or higher promotes malignancy. 4) Nori containing Iodine, etc., was used. 5) EPA 180 mg with DHA 120 mg was most effectively used after metastasis to the surrounding skin was eliminated. When we combined 1 Cilantro tablet & Vitamin D3 400 I.U. with small Nori pieces & EPA with DHA, the effect of complete inhibition of M.M. lasted 9-11 hours. When these anti-M.M. substances (Haritaki, Vitamin D3, Cilantro, Nori, EPA with DHA) were taken together, the effect lasted 12-14 hours and M.M. involvement in surrounding normal-looking skin disappeared rapidly & original dark brown or black areas
Improving CMD Areal Density Analysis: Algorithms and Strategies
NASA Astrophysics Data System (ADS)
Wilson, R. E.
2014-06-01
Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD generation program computes theoretical datasets with simulated observational error, and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as the need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to the influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of the images being captured increases, there is a need for a robust algorithm for image compression which satiates the bandwidth limitation of the transmitted channels and preserves the image resolution without considerable loss in the image quality. Many conventional image compression algorithms use wavelet transform which can significantly reduce the number of bits needed to represent a pixel and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The evolved filter coefficients by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
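The fitness criterion above is PSNR. As a quick reference (illustrative only, not code from the paper), a minimal PSNR computation in plain Python, assuming an 8-bit pixel range, might look like:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    assert len(original) == len(reconstructed)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images: no noise
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr([10, 20, 30], [10, 20, 30]))           # inf
print(round(psnr([10, 20, 30], [11, 19, 31]), 2)) # 48.13
```

In a GA setting like the one described, this value (averaged over the compressed test images) would serve directly as the fitness of a candidate filter-coefficient set.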
Multitasking the Davidson algorithm for the large, sparse eigenvalue problem
Umar, V.M.; Fischer, C.F.
1989-01-01
The authors report how the Davidson algorithm, developed for handling the eigenvalue problem for large and sparse matrices arising in quantum chemistry, was modified for use in atomic structure calculations. To date these calculations have used traditional eigenvalue methods, which limit the range of feasible calculations because of their excessive memory requirements and unsatisfactory performance attributed to time-consuming and costly processing of zero valued elements. The replacement of a traditional matrix eigenvalue method by the Davidson algorithm reduced these limitations. Significant speedup was found, which varied with the size of the underlying problem and its sparsity. Furthermore, the range of matrix sizes that can be manipulated efficiently was expanded by more than one order of magnitude. On the CRAY X-MP the code was vectorized and the importance of gather/scatter analyzed. A parallelized version of the algorithm obtained an additional 35% reduction in execution time. Speedup due to vectorization and concurrency was also measured on the Alliant FX/8.
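The abstract does not include the authors' CRAY implementation; purely as an illustrative sketch, a minimal dense-matrix Davidson iteration in Python/NumPy (subspace expansion plus the classic diagonal preconditioner, which suits the diagonally dominant matrices typical of atomic-structure work) could look like:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=200):
    """Davidson iteration for the lowest eigenpair of a symmetric matrix A."""
    n = A.shape[0]
    diag = np.diag(A).copy()
    V = np.zeros((n, 0))
    t = np.zeros(n)
    t[np.argmin(diag)] = 1.0                 # start from the lowest diagonal entry
    lam, y = diag.min(), t
    for _ in range(max_iter):
        for _ in range(2):                   # re-orthogonalize twice for stability
            t = t - V @ (V.T @ t)
        norm = np.linalg.norm(t)
        if norm < 1e-12:                     # search subspace exhausted
            break
        V = np.hstack([V, (t / norm)[:, None]])
        theta, s = np.linalg.eigh(V.T @ A @ V)   # Rayleigh-Ritz in the subspace
        lam, y = theta[0], V @ s[:, 0]
        r = A @ y - lam * y                  # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        denom = diag - lam                   # diagonal (Davidson) preconditioner
        denom[np.abs(denom) < 1e-10] = 1e-10
        t = -r / denom
    return lam, y

# diagonally dominant test matrix with small symmetric off-diagonal noise
rng = np.random.default_rng(0)
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
A = (A + A.T) / 2
lam, _ = davidson_lowest(A)
exact = np.linalg.eigh(A)[0][0]
print(lam, exact)
```

A production version would store the matrix in sparse form and restart the subspace when it grows too large; the payoff described in the abstract comes precisely from never touching the zero-valued elements.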
Evaluation of hybrids algorithms for mass detection in digitalized mammograms
NASA Astrophysics Data System (ADS)
Cordero, José; Garzón Reyes, Johnson
2011-01-01
Breast cancer remains a significant public health problem; early detection of lesions can increase the success of medical treatments. Mammography is an effective imaging modality for early diagnosis of abnormalities, in which an image of the mammary gland is obtained with low-radiation X-rays. It can detect a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in mortality from breast cancer. In this paper, three hybrid algorithms for circumscribed mass detection on digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Afterwards, a shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, where the characteristics vector for the classifier was constructed with few measurements. Later, the implemented algorithms were evaluated by ROC curves, where 40 images were taken for the test: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of each algorithm in the correct detection of a lesion are discussed.
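As background for the ROC evaluation mentioned above (an illustrative sketch, not the authors' code), the area under the ROC curve can be computed directly from classifier scores via the rank-statistic formulation:

```python
def roc_auc(scores_neg, scores_pos):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half), i.e. the Wilcoxon-Mann-Whitney statistic."""
    total = 0.0
    for p in scores_pos:
        for n in scores_neg:
            total += 1.0 if p > n else (0.5 if p == n else 0.0)
    return total / (len(scores_pos) * len(scores_neg))

# perfectly separated scores give AUC = 1; indistinguishable scores give 0.5
print(roc_auc([0.1, 0.2, 0.3], [0.4, 0.6, 0.8]))   # 1.0
print(roc_auc([0.5, 0.5], [0.5, 0.5]))             # 0.5
```

With 20 normal and 20 lesion images as in the study, `scores_neg` and `scores_pos` would hold the detector's scores for each group.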
Faster unfolding of communities: Speeding up the Louvain algorithm
NASA Astrophysics Data System (ADS)
Traag, V. A.
2015-09-01
Many complex networks exhibit a modular structure of densely connected groups of nodes. Usually, such a modular structure is uncovered by the optimization of some quality function. Although flawed, modularity remains one of the most popular quality functions. The Louvain algorithm was originally developed for optimizing modularity, but has since been applied to a variety of methods. As such, speeding up the Louvain algorithm enables the analysis of larger graphs in a shorter time for various methods. Here we suggest moving nodes to a random neighbor community, instead of the best neighbor community. Although incredibly simple, it reduces the theoretical runtime complexity from O(m) to O(n log
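A minimal sketch of the proposed random-neighbour local move (illustrative Python, not the authors' implementation; the brute-force modularity recomputation here is O(n^2) per move and only suits toy graphs, whereas the paper's speedup comes from an incremental gain formula):

```python
import random

def modularity(adj, comm, m):
    """Newman modularity of an undirected graph given as an adjacency dict of sets."""
    deg = {u: len(vs) for u, vs in adj.items()}
    q = 0.0
    for u in adj:
        for v in adj:
            if comm[u] == comm[v]:
                a = 1.0 if v in adj[u] else 0.0
                q += a - deg[u] * deg[v] / (2.0 * m)
    return q / (2.0 * m)

def random_neighbour_move(adj, comm, m, rng):
    """One pass of the randomized local move: each node tries the community of
    one randomly chosen neighbour and keeps the move only if modularity rises."""
    improved = False
    for u in adj:
        target = comm[rng.choice(sorted(adj[u]))]
        if target == comm[u]:
            continue
        before = modularity(adj, comm, m)
        old = comm[u]
        comm[u] = target
        if modularity(adj, comm, m) <= before:
            comm[u] = old                      # no gain: revert
        else:
            improved = True
    return improved

# two triangles joined by a single bridge edge; 7 edges in total
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
m = 7
comm = {u: u for u in adj}                     # start from singleton communities
rng = random.Random(42)
q0 = modularity(adj, comm, m)
for _ in range(10):
    random_neighbour_move(adj, comm, m, rng)
q1 = modularity(adj, comm, m)
print(q0, q1)                                  # modularity strictly improves
```

The point of the randomized variant is that choosing one neighbour costs O(1) per node, whereas evaluating the best neighbour community costs time proportional to the node's degree.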
Efficiency of a POD-based reduced second-order adjoint model in 4D-Var data assimilation
NASA Astrophysics Data System (ADS)
Daescu, D. N.; Navon, I. M.
2007-02-01
Order reduction strategies aim to alleviate the computational burden of the four-dimensional variational data assimilation by performing the optimization in a low-order control space. The proper orthogonal decomposition (POD) approach to model reduction is used to identify a reduced-order control space for a two-dimensional global shallow water model. A reduced second-order adjoint (SOA) model is developed and used to facilitate the implementation of a Hessian-free truncated-Newton (HFTN) minimization algorithm in the POD-based space. The efficiency of the SOA/HFTN implementation is analysed by comparison with the quasi-Newton BFGS and a nonlinear conjugate gradient algorithm. Several data assimilation experiments that differ only in the optimization algorithm employed are performed in the reduced control space. Numerical results indicate that first-order derivative methods are effective during the initial stages of the assimilation; in the later stages, the use of second-order derivative information is of benefit and HFTN provided significant CPU time savings when compared to the BFGS and CG algorithms. A comparison with data assimilation experiments in the full model space shows that with an appropriate selection of the basis functions the optimization in the POD space is able to provide accurate results at a reduced computational cost. The HFTN algorithm benefited most from the order reduction since computational savings were achieved both in the outer and inner iterations of the method. Further experiments are required to validate the approach for comprehensive global circulation models.
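For readers unfamiliar with POD, the reduced control space is typically obtained from a thin SVD of a snapshot matrix. A minimal generic illustration in Python/NumPy (an assumed toy dataset, not the shallow-water model from the paper):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Rank-r POD basis of a snapshot matrix (one state per column) via thin SVD."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

rng = np.random.default_rng(1)
# synthetic snapshots lying (almost) in a 3-dimensional subspace of R^100
modes = rng.standard_normal((100, 3))
X = modes @ rng.standard_normal((3, 40)) + 1e-8 * rng.standard_normal((100, 40))

Phi, s = pod_basis(X, 3)
a = Phi.T @ X              # reduced coordinates: 3 x 40 instead of 100 x 40
X_rec = Phi @ a            # lift back to the full space
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(rel_err)             # tiny: three modes capture essentially all the energy
```

In 4D-Var, the optimization (BFGS, CG, or HFTN as compared in the abstract) then runs over the low-dimensional coefficients `a` rather than the full model state.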
Evaluation of clinical image processing algorithms used in digital mammography.
Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde
2009-03-01
Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processings have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the
A parallel unmixing algorithm for hyperspectral images
NASA Astrophysics Data System (ADS)
Robila, Stefan A.; Maciak, Lukasz G.
2006-10-01
We present a new algorithm for feature extraction in hyperspectral images based on source separation and parallel computing. In source separation, given a linear mixture of sources, the goal is to recover the components by producing an unmixing matrix. In hyperspectral imagery, the mixing transform and the separated components can be associated with endmembers and their abundances. Source separation based methods have been employed for target detection and classification of hyperspectral images. However, these methods usually impose restrictive conditions on the nature of the results, such as orthogonality of the endmembers (in Principal Component Analysis - PCA and Orthogonal Subspace Projection - OSP) or statistical independence of the abundances (in Independent Component Analysis - ICA), and they do not fully satisfy all the conditions included in the Linear Mixing Model. Compared to this, our approach is based on Nonnegative Matrix Factorization (NMF), a less constraining unmixing method. NMF has the advantage of producing nonnegative data and, with several modifications that we introduce, also ensures that the abundances sum to one. The endmember vectors and the abundances are obtained through a gradient based optimization approach. The algorithm is further modified to run in a parallel environment. The parallel NMF (P-NMF) significantly reduces the time complexity and is shown to also easily port to a distributed environment. Experiments with in-house and Hydice data suggest that NMF outperforms ICA, PCA and OSP for unsupervised endmember extraction. Coupled with its parallel implementation, the new method provides an efficient way for unsupervised unmixing, further supporting our efforts in the development of a real time hyperspectral sensing environment with applications to industry and life sciences.
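A minimal illustration of NMF by the standard Lee-Seung multiplicative updates (a generic sketch on assumed toy data, not the authors' gradient-based, sum-to-one-constrained or parallel P-NMF variant):

```python
import numpy as np

def nmf(X, r, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W H||_F^2.

    Nonnegativity of W (endmembers) and H (abundances) is preserved
    automatically, which is what makes NMF attractive for unmixing.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12                       # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# exact nonnegative rank-2 data: NMF should reconstruct it closely
rng = np.random.default_rng(3)
X = rng.random((10, 2)) @ rng.random((2, 8))
W, H = nmf(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel_err)
```

The sum-to-one abundance condition mentioned in the abstract is commonly enforced by normalizing the columns of H (or augmenting X with a row of ones); the sketch above omits that modification.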
IJA: an efficient algorithm for query processing in sensor networks.
Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa
2011-01-01
One of the main features of sensor networks is processing real-time state information after gathering the needed data from many domains. The component technologies making up each sensor node, including physical sensors, processors, actuators, and power, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: with their limited energy and memory resources, they have very little capacity to process information compared to conventional computer systems, so query processing over the nodes is constrained as well. Because of these limitations, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, i.e., to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375
Inference from matrix products: a heuristic spin glass algorithm
Hastings, Matthew B
2008-01-01
We present an algorithm for finding ground states of two-dimensional spin-glass systems based on ideas from matrix product states in quantum information theory. The algorithm works directly at zero temperature and defines an approximation to the energy whose accuracy depends on a parameter k. We test the algorithm against exact methods on random field and random bond Ising models, and we find that accurate results require a k which scales roughly polynomially with the system size. The algorithm also performs well when tested on small systems with arbitrary interactions, where no fast, exact algorithms exist. The time required is significantly less than Monte Carlo schemes.
A Re-Usable Algorithm for Teaching Procedural Skills.
ERIC Educational Resources Information Center
Jones, Mark K.; And Others
The design of a re-usable instructional algorithm for computer based instruction (CBI) is described. The prototype is implemented on IBM PC compatibles running the Windows(TM) graphical environment, using the prototyping tool ToolBook(TM). The algorithm is designed to reduce development and life cycle costs for CBI by providing an authoring…
Neural algorithms on VLSI concurrent architectures
Caviglia, D.D.; Bisio, G.M.; Parodi, G.
1988-09-01
The research concerns the study of neural algorithms for developing CAD tools with A.I. features in VLSI design activities. In this paper the focus is on optimization problems such as partitioning, placement and routing. These problems require massive computational power to be solved (NP-complete problems) and the standard approach is usually based on heuristic techniques. Neural algorithms can be represented by a circuital model. This kind of representation can be easily mapped into a real circuit, which, however, features limited flexibility with respect to the variety of problems. In this sense the simulation of the neural circuit, by mapping it onto a digital VLSI concurrent architecture, seems preferable; in addition this solution offers a wider choice with regard to algorithm characteristics (e.g. transfer curve of neural elements, reconfigurability of interconnections, etc.). The implementation with programmable components, such as transputers, allows an indirect mapping of the algorithm (one transputer for N neurons) according to the dimension and the characteristics of the problem. In this way the neural algorithm described by the circuit is reduced to the algorithm that simulates the network behavior. The convergence properties of that formulation are studied with respect to the characteristics of the neural element transfer curve.
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackled via a corrected hyperreduction method. It is shown that the relevance of the reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
New algorithms for the "minimal form" problem
Oliveira, J.S.; Cook, G.O. Jr.; Purtill, M.R.
1991-12-20
It is widely appreciated that large-scale algebraic computation (performing computer algebra operations on large symbolic expressions) places very significant demands upon existing computer algebra systems. Because of this, parallel versions of many important algorithms have been successfully sought, and clever techniques have been found for improving the speed of the algebraic simplification process. In addition, some attention has been given to the issue of restructuring large expressions, or transforming them into "minimal forms." By "minimal form," we mean that form of an expression that involves a minimum number of operations in the sense that no simple transformation on the expression leads to a form involving fewer operations. Unfortunately, the progress that has been achieved to date on this very hard problem is not adequate for the very significant demands of large computer algebra problems. In response to this situation, we have developed some efficient algorithms for constructing "minimal forms." In this paper, the multi-stage algorithm in which these new algorithms operate is defined and the features of these algorithms are developed. In a companion paper, we introduce the core algebra engine of a new tool that provides the algebraic framework required for the implementation of these new algorithms.
NASA Astrophysics Data System (ADS)
Zu, Yun-Xiao; Zhou, Jie
2012-01-01
Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate.
A segmentation algorithm for noisy images
Xu, Y.; Olman, V.; Uberbacher, E.C.
1996-12-31
This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into sub-trees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized under the constraints that each subtree has at least a specified number of pixels and two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
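The tree-partitioning idea can be sketched compactly: accepting only MST edges below a gray-level cut threshold during Kruskal's algorithm leaves union-find components that correspond to the homogeneous sub-trees. This toy Python version (an illustrative sketch with an assumed fixed threshold, not the authors' constrained minimization) uses 4-connected pixels:

```python
def segment(image, cut_threshold):
    """MST-based segmentation sketch: 4-connected pixels weighted by gray-level
    difference; MST edges above cut_threshold are dropped, and the remaining
    connected components are the homogeneous regions."""
    h, w = len(image), len(image[0])
    idx = lambda y, x: y * w + x
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]), idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]), idx(y, x), idx(y + 1, x)))
    parent = list(range(h * w))
    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    # Kruskal: accepting only edges at or below the cut threshold is equivalent
    # to building the MST first and then deleting its over-threshold edges
    for wgt, a, b in sorted(edges):
        if wgt <= cut_threshold and find(a) != find(b):
            parent[find(a)] = find(b)
    return [find(i) for i in range(h * w)]

img = [[10, 11, 90],
       [10, 12, 91],
       [11, 12, 92]]
labels = segment(img, 5)
# a dark 3x2 block and a bright right column come out as two regions
print(len(set(labels)))  # 2
```

The paper's additional constraints (minimum region size, significantly different region means) would be enforced as extra tests before accepting or cutting an edge.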
A comprehensive review of swarm optimization algorithms.
Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham
2015-01-01
Many swarm optimization algorithms have been introduced since the early 60's, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches. PMID:25992655
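Since DE comes out on top in this survey, a minimal DE/rand/1/bin minimizer in plain Python may be useful for orientation (a generic textbook sketch on the sphere function, not the benchmark code used in the paper):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # guarantees at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])   # rand/1 mutation
                    v = min(max(v, bounds[d][0]), bounds[d][1])   # clip to bounds
                else:
                    v = pop[i][d]               # binomial crossover keeps parent gene
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = differential_evolution(sphere, [(-5, 5)] * 3)
print(fx)  # very close to 0
```

Benchmark comparisons like the one in the paper run this kind of loop over many functions and seeds, then apply statistical tests to the final costs.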
Optimal configuration algorithm of a satellite transponder
NASA Astrophysics Data System (ADS)
Sukhodoev, M. S.; Savenko, I. I.; Martynov, Y. A.; Savina, N. I.; Asmolovskiy, V. V.
2016-04-01
This paper describes an algorithm for determining the optimal transponder configuration of a communication satellite while in service. The method uses a mathematical model of the payload scheme based on a finite-state machine. The repeater scheme is shown as a weighted oriented graph, represented as a plexus in the program view. This paper considers an example application of the algorithm to a typical transparent repeater scheme. In addition, the complexity of the algorithm has been calculated. The main peculiarity of this algorithm is that it takes into account the functionality and state of devices, reserved equipment and input-output ports ranked in accordance with their priority. All described limitations allow a significant decrease in the number of possible payload commutation variants and enable a satellite operator to make reconfiguration decisions promptly.
Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm
NASA Astrophysics Data System (ADS)
Hasal, Martin; Pospisil, Lukas; Nowakova, Jana
2016-06-01
An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend toward aesthetically pleasing, crossing-free layouts for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which needs the Hessian matrix of second derivatives at the minimized node. The disadvantage of the Kamada-Kawai embedder is its computational requirements, caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient for minimum searching, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
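The appeal of Barzilai-Borwein is visible in a few lines: the step length is recovered from a secant condition using only successive gradients, with no Hessian. A generic sketch on an assumed small quadratic (illustrative, not the graph-drawing code from the paper; the BB1 step is shown):

```python
def bb_minimize(grad, x0, iters=50, alpha0=1e-3):
    """Barzilai-Borwein gradient descent: gradients only, no second derivatives."""
    x = list(x0)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]     # step difference
        y = [a - b for a, b in zip(g_new, g)]     # gradient difference
        sy = sum(si * yi for si, yi in zip(s, y))
        ss = sum(si * si for si in s)
        if abs(sy) > 1e-16:
            alpha = ss / sy                       # BB1 step from the secant condition
        x, g = x_new, g_new
    return x

# gradient of f(x) = 0.5*(x0^2 + 10*x1^2), a mildly ill-conditioned quadratic
grad = lambda x: [x[0], 10.0 * x[1]]
x = bb_minimize(grad, [4.0, -3.0])
print(x)  # near [0, 0]
```

In the layout setting, `grad` would be the gradient of the Kamada-Kawai potential energy with respect to the coordinates of the node being minimized.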
Pennington, A; Selvaraj, R; Kirkpatrick, S; Oliveira, S; Leventouri, T
2014-06-01
Purpose: The latest publications indicate that the Ray Tracing (RT) algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry and number of beams for a given plan, as well as the number of monitor units, are constant for the calculations for both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the Prescription Isodose Volume to the PTV Volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses for 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the dose delivered, confirming previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller PTVs, indicating that the improvement of the MC algorithm over the RT algorithm depends on the size of the PTV.
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to simulator drivers the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case scenario method, which is based on trial and error and affected by the driver's and programmer's experience, making this the most significant obstacle to full motion platform utilisation. This leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. The production of motion cues, and the impact of different classical washout filter parameters on motion cues, therefore remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
Cheney, M.C.
1997-12-31
The cost of energy for renewables has gained greater significance in recent years due to the drop in price of some competing energy sources, particularly natural gas. In pursuit of lower manufacturing costs for wind turbine systems, work was conducted to explore an innovative rotor designed to reduce weight and cost over conventional rotor systems. Trade-off studies were conducted to measure the influence of the number of blades, stiffness, and manufacturing method on the cost of energy (COE). The study showed that increasing the number of blades at constant solidity significantly reduced rotor weight and that manufacturing the blades using pultrusion technology produced the lowest cost per pound. Under contracts with the National Renewable Energy Laboratory and the California Energy Commission, a 400 kW (33m diameter) turbine was designed employing this technology. The project included tests of an 80 kW (15.5m diameter) dynamically scaled rotor which demonstrated the viability of the design.
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Sidky, Emil Y.
2015-03-01
Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., `one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional `two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The fTV = 0.8 constraint provided a small reduction in noise (~1%) compared to the unconstrained reconstruction. Images reconstructed with the fTV = 0.5 constraint demonstrated a 77% to 94% standard deviation reduction compared to the two-step reconstruction, though with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (fTV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.
Genetic Algorithms for Digital Quantum Simulations
NASA Astrophysics Data System (ADS)
Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.
2016-06-01
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
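As context for readers unfamiliar with the technique, a genetic algorithm can be sketched in a few lines. This toy example is not the paper's gate-sequence optimizer: it evolves bit strings toward a simple fitness function, and the population size, selection scheme, and mutation rate are arbitrary illustrative choices.

```python
import random

def genetic_search(fitness, length, pop_size=30, generations=60,
                   mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: count of ones ("one-max"); a GA solves this easily.
best = genetic_search(sum, length=20)
```

In the paper's setting, the bit string would instead encode a candidate gate sequence and the fitness would be the simulation fidelity.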
On Dijkstra's Algorithm for Deadlock Detection
NASA Astrophysics Data System (ADS)
Li, Youming; Greca, Ardian; Harris, James
We study a classical problem in operating systems concerning deadlock detection for systems with reusable resources. The elegant Dijkstra's algorithm utilizes simple data structures, but it has the cost of quadratic dependence on the number of the processes. Our goal is to reduce the cost in an optimal way without losing the simplicity of the data structures. More specifically, we present a graph-free and almost optimal algorithm with the cost of linear dependence on the number of the processes, when the number of resources is fixed and when the units of requests for resources are bounded by constants.
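The baseline the paper improves on, which is quadratic in the number of processes, is the textbook detection loop for reusable resources; a hedged sketch follows (the variable names and data layout are mine, not the paper's, and this is not the linear-time algorithm the abstract describes).

```python
def detect_deadlock(available, allocation, request):
    """Classic deadlock detection for reusable resources.

    available:  free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    request:    request[i][j]    = outstanding request of process i
    Returns the set of deadlocked process indices.
    """
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:                      # worst case: n passes over n processes
        progress = False
        for i in range(n):
            if not finished[i] and all(r <= w for r, w in zip(request[i], work)):
                # Process i can run to completion; release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return {i for i in range(n) if not finished[i]}
```

With two processes each holding one resource and requesting the other's, the loop makes no progress and reports both as deadlocked.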
Paradigms for Realizing Machine Learning Algorithms.
Agneeswaran, Vijay Srinivas; Tonpay, Pranay; Tiwary, Jayati
2013-12-01
The article explains the three generations of machine learning algorithms, with all three trying to operate on big data. The first generation tools are SAS, SPSS, etc., while second generation realizations include Mahout and RapidMiner (that work over Hadoop), and the third generation paradigms include Spark and GraphLab, among others. The essence of the article is that for a number of machine learning algorithms, it is important to look beyond Hadoop's Map-Reduce paradigm in order to make them work on big data. A number of promising contenders have emerged in the third generation that can be exploited to realize deep analytics on big data.
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In the process of raisin production, a variety of color impurities arise that need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. To obtain the chromatic aberration image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to external features such as differences in color, mildew and spots, image characteristics were calculated so as to fully reflect the quality differences between raisins of different types. After this processing, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; on this basis the image data were divided into different categories, making the categories of abnormal color distinct. Using this algorithm, raisins of abnormal color and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins among the sorted-out grains was less than one eighth.
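The clustering step can be illustrated with a plain k-means sketch over RGB pixel values; the deterministic seeding and the toy data below are illustrative assumptions, not details from the paper.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on RGB tuples; returns (centroids, labels).

    Deterministic seeding (evenly spaced input points) is an illustrative choice.
    """
    centroids = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (squared RGB distance).
        labels = [
            min(range(k),
                key=lambda j: sum((p - c) ** 2 for p, c in zip(pt, centroids[j])))
            for pt in points
        ]
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(ch) / len(members) for ch in zip(*members))
    return centroids, labels
```

On pixels drawn from two well-separated color groups (e.g. light "normal" raisins versus dark mottled ones), two clusters recover the grouping.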
Multilevel and motion model-based ultrasonic speckle tracking algorithms.
Yeung, F; Levinson, S F; Parker, K J
1998-03-01
A multilevel motion model-based approach to ultrasonic speckle tracking has been developed that addresses the inherent trade-offs associated with traditional single-level block matching (SLBM) methods. The multilevel block matching (MLBM) algorithm uses variable matching block and search window sizes in a coarse-to-fine scheme, preserving the relative immunity to noise associated with the use of a large matching block while preserving the motion field detail associated with the use of a small matching block. To decrease further the sensitivity of the multilevel approach to noise, speckle decorrelation and false matches, a smooth motion model-based block matching (SMBM) algorithm has been implemented that takes into account the spatial inertia of soft tissue elements. The new algorithms were compared to SLBM through a series of experiments involving manual translation of soft tissue phantoms, motion field computer simulations of rotation, compression and shear deformation, and an experiment involving contraction of human forearm muscles. Measures of tracking accuracy included mean squared tracking error, peak signal-to-noise ratio (PSNR) and blinded observations of optical flow. Measures of tracking efficiency included the number of sum squared difference calculations and the computation time. In the phantom translation experiments, the SMBM algorithm successfully matched the accuracy of SLBM using both large and small matching blocks while significantly reducing the number of computations and computation time when a large matching block was used. For the computer simulations, SMBM yielded better tracking accuracies and spatial resolution when compared with SLBM using a large matching block. For the muscle experiment, SMBM outperformed SLBM both in terms of PSNR and observations of optical flow. We believe that the smooth motion model-based MLBM approach represents a meaningful development in ultrasonic soft tissue motion measurement. PMID:9587997
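Single-level block matching, the baseline the multilevel scheme refines, can be sketched as an exhaustive sum-of-squared-differences search; the block size and search range below are illustrative assumptions.

```python
def ssd(block_a, block_b):
    """Sum of squared differences between two equal-sized blocks."""
    return sum((a - b) ** 2
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block_match(ref, cur, top, left, bsize, search):
    """Displacement (dy, dx) of the bsize x bsize block at (top, left) in
    `ref`, found by exhaustive search in a +/- `search` window of `cur`."""
    block = [row[left:left + bsize] for row in ref[top:top + bsize]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > len(cur) or x + bsize > len(cur[0]):
                continue                      # candidate falls outside the frame
            cand = [row[x:x + bsize] for row in cur[y:y + bsize]]
            cost = ssd(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

The multilevel approach repeats this search coarse-to-fine with shrinking block and window sizes, and the motion-model variant additionally penalizes displacements far from those of neighboring blocks.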
Deblurring algorithms accounting for the finite detector size in photoacoustic tomography.
Roitner, Heinz; Haltmeier, Markus; Nuster, Robert; O'Leary, Dianne P; Berer, Thomas; Paltauf, Guenther; Grün, Hubert; Burgholzer, Peter
2014-05-01
Most reconstruction algorithms for photoacoustic tomography, like back projection or time reversal, work ideally for point-like detectors. For real detectors, which integrate the pressure over their finite size, images reconstructed by these algorithms show some blurring. Iterative reconstruction algorithms using an imaging matrix can take the finite size of real detectors directly into account, but the numerical effort is significantly higher compared to the use of direct algorithms. For spherical or cylindrical detection surfaces, the blurring caused by a finite detector size is proportional to the distance from the rotation center (spin blur) and is equal to the detector size at the detection surface. In this work, we apply deconvolution algorithms to reduce this type of blurring on simulated and on experimental data. Two particular deconvolution methods are compared, which both utilize the fact that a representation of the blurred image in polar coordinates decouples pixels at different radii from the rotation center. Experimental data have been obtained with a flat, rectangular piezoelectric detector measuring signals around a plastisol cylinder containing various small photoacoustic sources with variable distance from the center. Both simulated and experimental results demonstrate a nearly complete elimination of spin blur. PMID:24853146
Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants
Parlos, A.G.; Atiya, Amir; Chong, K.T.
1991-11-01
A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process.
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capability than the original DMA, removing higher-order polynomial trends. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
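The running-sum recurrence behind technique (2) can be illustrated for the simplest case, a first-order (simple) centered moving average; the higher-order DMA in the paper uses analogous recurrences for local polynomial fits.

```python
def moving_average_recurrence(x, window):
    """O(N) centered moving average (odd `window`) using the recurrence
    S_next = S + x[i + half + 1] - x[i - half], instead of re-summing
    the full window at every position."""
    half = window // 2
    out = []
    s = sum(x[:window])                   # only the first window is summed fully
    for i in range(half, len(x) - half):
        out.append(s / window)
        if i + half + 1 < len(x):
            s += x[i + half + 1] - x[i - half]
    return out
```

Each output costs one addition and one subtraction, which is what makes the overall computation time roughly linear in the data length.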
Hierarchical tree algorithm for collisional N-body simulations on GRAPE
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Kawai, Atsushi
2016-06-01
We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system mainly because its memory addressing scheme was limited only to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which is constructed on the host computer, and, according to the interaction lists, force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N² summation in the original Hermite scheme. For example, we can achieve about a factor of 30 speedup (equivalent to about 17 teraflops) against the Hermite scheme for a simulation of an N = 10⁶ system, using hardware of a peak speed of 0.6 teraflops for the Hermite scheme.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
NASA Technical Reports Server (NTRS)
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
Algorithmic cooling in liquid-state nuclear magnetic resonance
NASA Astrophysics Data System (ADS)
Atia, Yosi; Elias, Yuval; Mor, Tal; Weinstein, Yossi
2016-01-01
Algorithmic cooling is a method that employs thermalization to increase qubit purification level; namely, it reduces the qubit system's entropy. We utilized gradient ascent pulse engineering, an optimal control algorithm, to implement algorithmic cooling in liquid-state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of ¹³C₂-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic-resonance spectroscopy.
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm, called dual-byte-marker, is proposed. Based on experiments on images from a hard disk, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with faster execution time for header detection, than the single-byte-marker algorithm.
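The abstract does not spell out the paper's exact marker logic, but the JFIF file layout itself is fixed: a file begins with the SOI marker (FF D8) immediately followed by an APP0 segment (FF E0) carrying the "JFIF\0" identifier. A sketch of a two-byte-marker scan over that structure:

```python
def find_jfif_headers(data):
    """Return byte offsets of JFIF headers in `data`: SOI (FF D8)
    immediately followed by an APP0 segment (FF E0) whose payload
    starts with the 'JFIF\\x00' identifier."""
    offsets = []
    i = data.find(b"\xff\xd8")
    while i != -1:
        # APP0 layout: FF E0, 2-byte segment length, then "JFIF\x00".
        if data[i + 2:i + 4] == b"\xff\xe0" and data[i + 6:i + 11] == b"JFIF\x00":
            offsets.append(i)
        i = data.find(b"\xff\xd8", i + 1)
    return offsets
```

Checking the second marker pair (and the identifier) is what filters out the false positives that a scan for the single FF D8 marker would report.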
Power spectral estimation algorithms
NASA Technical Reports Server (NTRS)
Bhatia, Manjit S.
1989-01-01
Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
Optical rate sensor algorithms
NASA Astrophysics Data System (ADS)
Uhde-Lacovara, Jo A.
1989-12-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
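The recursive differentiator is not specified further in the abstract; one plausible minimal form is a first-difference rate estimate smoothed by a first-order recursive filter, with the gain chosen arbitrarily for illustration.

```python
def recursive_rate(samples, dt, alpha=0.2):
    """First-difference rate estimate smoothed by a first-order recursive
    filter: rate <- (1 - alpha) * rate + alpha * (x_k - x_{k-1}) / dt.
    Smaller alpha lowers the output variance at the cost of a longer rise time,
    mirroring the VRF-versus-rise-time trade-off described in the abstract."""
    rate = 0.0
    rates = []
    for prev, cur in zip(samples, samples[1:]):
        rate = (1.0 - alpha) * rate + alpha * (cur - prev) / dt
        rates.append(rate)
    return rates
```

On a noise-free linear ramp the estimate converges to the true slope.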
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
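For contrast, the standard sequential greedy 1/2-approximation for weighted matching (scan edges in non-increasing weight order, taking an edge when both endpoints are free) can be sketched as below; the paper's contribution is a faster and more parallel-friendly alternative, since the global sort makes this baseline inherently sequential.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.

    `edges` is a list of (weight, u, v) tuples; returns the matching as
    a list of (u, v, weight) tuples."""
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```

On the path a-b (3), b-c (4), c-d (3), greedy takes only b-c for weight 4, while the optimum takes a-b and c-d for weight 6, illustrating the 1/2 guarantee.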
Large-scale sequential quadratic programming algorithms
Eldersveld, S.K.
1992-09-01
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
An adaptive multi-level simulation algorithm for stochastic biological systems
Lester, C.; Giles, M. B.; Baker, R. E.; Yates, C. A.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the
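The exact SSA referred to above (the Gillespie algorithm) can be sketched for a toy birth-death process; the specific reactions and rate constants here are illustrative assumptions, not the paper's test systems.

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Exact Gillespie SSA for a birth-death process: X -> X+1 at rate
    k_birth, and X -> X-1 at rate k_death * X. Returns (times, states)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x      # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(rng.random()) / a0  # exponential waiting time
        if rng.random() * a0 < a1:         # choose a reaction proportionally
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return times, states
```

Because every single reaction is simulated, the cost grows with the reaction activity; the tau-leap and multi-level methods discussed above trade this exactness for speed.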
CARVE--a constructive algorithm for real-valued examples.
Young, S; Downs, T
1998-01-01
A constructive neural-network algorithm is presented. For any consistent classification task on real-valued training vectors, the algorithm constructs a feedforward network with a single hidden layer of threshold units which implements the task. The algorithm, which we call CARVE, extends the "sequential learning" algorithm of Marchand et al. from Boolean inputs to the real-valued input case, and uses convex hull methods for the determination of the network weights. The algorithm is an efficient training scheme for producing near-minimal network solutions for arbitrary classification tasks. The algorithm is applied to a number of benchmark problems including Gorman and Sejnowski's sonar data, the Monks problems and Fisher's iris data. A significant application of the constructive algorithm is in providing an initial network topology and initial weights for other neural-network training schemes and this is demonstrated by application to backpropagation.
SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm
Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M
2014-06-01
Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB Dose Calculation Algorithm, and subsequently evaluate its clinical impacts by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6cm × 6cm to 40cm × 40cm. Central axis and off-axis points with different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes for a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time is reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also improved on accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.
A New ADPCM Image Compression Algorithm and the Effect of Fixed-Pattern Sensor Noise
NASA Astrophysics Data System (ADS)
Sullivan, James R.
1989-04-01
High speed image compression algorithms that achieve visually lossless quality at low bit-rates are essential elements of many digital imaging systems. In examples such as remote sensing, there is often the additional requirement that the compression hardware be compact and consume minimal power. To meet these requirements a new adaptive differential pulse code modulation (ADPCM) algorithm was developed that significantly reduces edge errors by including quantizers that adapt to the local bias of the differential signal. In addition, to reduce the average bit-rate in certain applications a variable rate version of the algorithm called run adaptive differential coding (RADC) was developed that combines run-length and predictive coding and a variable number of levels in each quantizer to produce bit-rates comparable with adaptive discrete cosine transform (ADCT) at a visually lossless level of image quality. It will also be shown that this
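A toy ADPCM loop (previous-sample predictor plus one adaptive uniform quantizer) illustrates the general scheme; the bias-adaptive quantizers of the paper's algorithm are not reproduced here, and all constants are illustrative.

```python
def adpcm_encode(samples, levels=8, step0=4.0):
    """Toy ADPCM: predict each sample as the previous reconstruction,
    quantize the prediction error, and adapt the step size to the
    magnitude of the emitted code."""
    step, pred = step0, 0.0
    codes, recon = [], []
    half = levels // 2
    for s in samples:
        err = s - pred
        code = max(-half, min(half - 1, int(round(err / step))))
        pred += code * step                # update the decoder-reproducible state
        recon.append(pred)
        # Adapt: grow the step on near-saturating codes, shrink it otherwise.
        step = max(1.0, step * (1.3 if abs(code) >= half - 1 else 0.9))
        codes.append(code)
    return codes, recon
```

Because the decoder sees the same codes, it can rebuild `pred` and `step` exactly, so only the small integer codes need to be transmitted.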