Science.gov

Sample records for algorithm significantly reduces

  1. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical least significant bit (LSB) image information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). First, by designing a three-qubit comparator and the required unitary operators, the rationale and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and its circuit, built from controlled-swap gates, are illustrated. The two merits of our algorithm are that (1) it is fully blind and (2) when extracting the secret binary qubits, it needs neither quantum measurement operations nor any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
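
    The quantum circuits themselves are not reproduced here, but the embedding rule is the familiar classical LSB substitution. A minimal classical sketch (illustrative pixel values, not the authors' quantum implementation) of the embed/extract pair that the LSQb scheme mirrors qubit-for-qubit, including the blind extraction property:

      # Classical LSB embedding/extraction -- the classical analogue of the
      # quantum LSQb scheme described above (illustrative values only).

      def embed_lsb(pixels, secret_bits):
          """Hide one secret bit in the least significant bit of each 8-bit channel value."""
          stego = list(pixels)
          for i, bit in enumerate(secret_bits):
              stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the secret bit
          return stego

      def extract_lsb(stego, n_bits):
          """Recover the secret bits; no cover image or measurement side-information is needed."""
          return [value & 1 for value in stego[:n_bits]]

      cover = [203, 118, 47, 90, 255, 12]        # flattened R, G, B channel values
      secret = [1, 0, 1, 1]
      stego = embed_lsb(cover, secret)
      assert extract_lsb(stego, len(secret)) == secret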

  2. Pyrolysis of wastewater biosolids significantly reduces estrogenicity.

    PubMed

    Hoffman, T C; Zitomer, D H; McNamara, P J

    2016-11-05

    Most wastewater treatment processes are not specifically designed to remove micropollutants. Many micropollutants are hydrophobic, so they remain in the biosolids and are discharged to the environment through land-application of biosolids. Micropollutants encompass a broad range of organic chemicals, including estrogenic compounds (natural and synthetic) that reside in the environment, a.k.a. environmental estrogens. Public concern over the land application of biosolids, stemming from the occurrence of micropollutants, diminishes the value of biosolids as an important by-product for wastewater treatment plants. This research evaluated pyrolysis, the partial decomposition of organic material in an oxygen-deprived system under high temperatures, as a biosolids treatment process that could remove estrogenic compounds from solids while producing a less hormonally active biochar for soil amendment. The estrogenicity, measured in estradiol equivalents (EEQ) by the yeast estrogen screen (YES) assay, of pyrolyzed biosolids was compared to primary and anaerobically digested biosolids. The estrogenic responses from primary solids and anaerobically digested solids were not statistically significantly different, but pyrolysis of anaerobically digested solids resulted in a significant reduction in EEQ; increasing pyrolysis temperature from 100°C to 500°C increased the removal of EEQ, with greater than 95% removal occurring at or above 400°C. This research demonstrates that biosolids treatment with pyrolysis would substantially decrease (removal >95%) the estrogens associated with this biosolids product. Thus, pyrolysis of biosolids can be used to produce a valuable soil amendment product, biochar, that minimizes discharge of estrogens to the environment.

  3. Discovering sequence similarity by the algorithmic significance method

    SciTech Connect

    Milosavljevic, A.

    1993-02-01

    The minimal-length encoding approach is applied to define a concept of sequence similarity. A sequence is defined to be similar to another sequence or to a set of keywords if it can be encoded in a small number of bits by taking advantage of common subwords. Minimal-length encoding of a sequence is computed in linear time, using a data compression algorithm that is based on a dynamic programming strategy and the directed acyclic word graph data structure. No assumptions about common word ("k-tuple") length are made in advance, and common words of any length are considered. The newly proposed algorithmic significance method provides an exact upper bound on the probability that sequence similarity has occurred by chance, thus eliminating the need for any arbitrary choice of similarity thresholds. Preliminary experiments indicate that a small number of keywords can positively identify a DNA sequence, which is extremely relevant in the context of partial sequencing by hybridization.
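
    The central quantity in the method is the number of bits saved relative to a null-model encoding: if a target sequence can be encoded in d fewer bits by exploiting the keywords, the probability of obtaining that much compression by chance is bounded by 2^-d. A minimal sketch of the bound (the encoding lengths are made-up numbers; the paper's DAWG-based encoder is not reproduced here):

      def algorithmic_significance(null_bits, conditional_bits):
          """Return (d, p_bound): bits saved by the conditional encoding and the
          2**-d upper bound on the probability of that compression arising by chance."""
          d = null_bits - conditional_bits
          return d, 2.0 ** (-d) if d > 0 else 1.0

      # Toy numbers: a 1000-base target needs 2000 bits under an i.i.d. null model
      # (2 bits per base) but only 1700 bits when common subwords shared with the
      # keywords are exploited, so d = 300 bits and p <= 2**-300.
      d, p_bound = algorithmic_significance(null_bits=2000, conditional_bits=1700)
      print(f"bits saved d = {d}, chance probability <= 2**-{d} ({p_bound:.3g})")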

  5. Novel fracturing algorithm to reduce shot count for curvy shape

    NASA Astrophysics Data System (ADS)

    Tao, Takuya; Takahashi, Nobuyasu; Hamaji, Masakazu

    2013-09-01

    The increasing complexity of RET solutions has increased the shot count for advanced photomasks. In particular, the introduction of the inverse lithography technique (ILT) brings a significant increase in mask complexity, and conventional fracturing algorithms generate many more shots because they are not optimized for curvilinear shapes. Several methods have been proposed to reduce shot count for ILT photomasks. One of the stronger approaches is model-based fracturing, which utilizes precise dose control, shot overlaps and many other techniques. However, it requires much more computational resources and upgrades to the EB mask writer to support user-level dose modulation and shot overlaps. The algorithm proposed here is not model-based but based on geometry processing: a combination of shape extraction and direct Manhattanization. Because it is not based on physical simulation, its processing speed is as fast as a conventional fracturing algorithm. It can generate both non-overlapping shots and overlapping shots and does not require user-level dose modulation. As a result, it can be used with current standard VSB mask writers.

  6. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well-characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks. We

  7. A genetic algorithm to reduce stream channel cross section data

    USGS Publications Warehouse

    Berenbrock, C.

    2006-01-01

    A genetic algorithm (GA) was used to reduce cross section data for a hypothetical example consisting of 41 data points and for 10 cross sections on the Kootenai River. The number of data points for the Kootenai River cross sections ranged from about 500 to more than 2,500. The GA was applied to reduce the number of data points to a manageable dataset because most models and other software require fewer than 100 data points for management, manipulation, and analysis. Results indicated that the program successfully reduced the data. Fitness values from the genetic algorithm were lower (better) than those in a previous study that used standard procedures for reducing cross section data. On average, fitnesses were 29 percent lower, and several were about 50 percent lower. Results also showed that cross sections produced by the genetic algorithm were representative of the original cross sections and that near-optimal results could be obtained in a single run, even for large problems. Other types of data can also be reduced by a method similar to that used for cross section data.

  8. Mitochondrial Polymorphisms Significantly Reduce the Risk of Parkinson Disease

    PubMed Central

    van der Walt, Joelle M.; Nicodemus, Kristin K.; Martin, Eden R.; Scott, William K.; Nance, Martha A.; Watts, Ray L.; Hubble, Jean P.; Haines, Jonathan L.; Koller, William C.; Lyons, Kelly; Pahwa, Rajesh; Stern, Matthew B.; Colcher, Amy; Hiner, Bradley C.; Jankovic, Joseph; Ondo, William G.; Allen Jr., Fred H.; Goetz, Christopher G.; Small, Gary W.; Mastaglia, Frank; Stajich, Jeffrey M.; McLaurin, Adam C.; Middleton, Lefkos T.; Scott, Burton L.; Schmechel, Donald E.; Pericak-Vance, Margaret A.; Vance, Jeffery M.

    2003-01-01

    Mitochondrial (mt) impairment, particularly within complex I of the electron transport system, has been implicated in the pathogenesis of Parkinson disease (PD). More than half of mitochondrially encoded polypeptides form part of the reduced nicotinamide adenine dinucleotide dehydrogenase (NADH) complex I enzyme. To test the hypothesis that mtDNA variation contributes to PD expression, we genotyped 10 single-nucleotide polymorphisms (SNPs) that define the European mtDNA haplogroups in 609 white patients with PD and 340 unaffected white control subjects. Overall, individuals classified as haplogroup J (odds ratio [OR] 0.55; 95% confidence interval [CI] 0.34–0.91; P=.02) or K (OR 0.52; 95% CI 0.30–0.90; P=.02) demonstrated a significant decrease in risk of PD versus individuals carrying the most common haplogroup, H. Furthermore, a specific SNP that defines these two haplogroups, 10398G, is strongly associated with this protective effect (OR 0.53; 95% CI 0.39–0.73; P=.0001). SNP 10398G causes a nonconservative amino acid change from threonine to alanine within the NADH dehydrogenase 3 (ND3) of complex I. After stratification by sex, this decrease in risk appeared stronger in women than in men (OR 0.43; 95% CI 0.27–0.71; P=.0009). In addition, SNP 9055A of ATP6 demonstrated a protective effect for women (OR 0.45; 95% CI 0.22–0.93; P=.03). Our results suggest that ND3 is an important factor in PD susceptibility among white individuals and could help explain the role of complex I in PD expression. PMID:12618962

  9. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. Computing the inverse directly would require an iterative method; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3 with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and by more than a factor of 4 with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
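
    A hedged sketch of the two modifications (fixed-point arithmetic, and division replaced by multiplication with a linearly approximated inverse); the Q32 scale factor, nominal range z0 and coordinate values are illustrative choices, not the paper's:

      SCALE = 1 << 32                      # Q32 fixed-point scale factor (the paper
                                           # studies 64- and 128-bit integer variants)

      def to_fixed(x):
          return int(round(x * SCALE))

      def linear_inverse(z_fx, z0_fx):
          """Approximate 1/z around z0 using 1/z ~ 2/z0 - z/z0**2 (first-order Taylor)."""
          inv_z0 = (SCALE * SCALE) // z0_fx                       # 1/z0 in Q32
          return 2 * inv_z0 - (z_fx * inv_z0 * inv_z0) // (SCALE * SCALE)

      x, z, z0 = 1250.0, 4980.0, 5000.0                           # made-up projection geometry
      x_fx, z_fx, z0_fx = (to_fixed(v) for v in (x, z, z0))
      proj_fixed = (x_fx * linear_inverse(z_fx, z0_fx) // SCALE) / SCALE
      proj_float = x / z                                          # reference floating-point divide
      print(proj_fixed, proj_float)   # close near z0; the error grows with |z - z0|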

  10. New algorithms for reducing cross-dispersed echelle spectra

    NASA Astrophysics Data System (ADS)

    Piskunov, N. E.; Valenti, J. A.

    2002-04-01

    We describe advanced image processing algorithms, implemented in a data analysis package for conventional and cross-dispersed echelle spectra. Comparisons with results from other packages illustrate the outstanding quality of the new REDUCE package, particularly in terms of resulting noise level and treatment of CCD defects and cosmic ray spikes. REDUCE can be adapted relatively easily to handle a variety of instrument types, including spectrographs with prism or grating cross-dispersers, possibly fed by a fiber or image slicer, etc. In addition to reduced spectra, an accurate spatial profile is recovered, providing valuable information about the spectrograph PSF and simplifying scattered light corrections. Based on data obtained with the VLT UVES and SAAO Giraffe spectrometers.

  11. The significance of sensory appeal for reduced meat consumption.

    PubMed

    Tucker, Corrina A

    2014-10-01

    Reducing meat (over-)consumption as a way to help address environmental deterioration will require a range of strategies, and any such strategies will benefit from understanding how individuals might respond to various meat consumption practices. To investigate how New Zealanders perceive such a range of practices, in this instance in vitro meat, eating nose-to-tail, entomophagy and reducing meat consumption, focus groups involving a total of 69 participants were held around the country. Although this research was propelled by the damaging environmental implications of intensive farming practices and the projected continuation of increasing global consumer demand for meat products, when participants were asked to consider variations on the conventional meat-centric diet common to many New Zealanders, it was the sensory appeal of the options considered that was deemed most problematic. While an ecological rationale for considering these 'meat' alternatives was recognised and considered important by most, transforming this value into action looks far less promising given the recurrent sensory objections to consuming different protein-based foods or to reducing meat consumption. This article considers the responses of focus group participants in relation to each of the dietary practices outlined, and offers suggestions on ways to encourage a more environmentally viable diet.

  12. Bacteriophage significantly reduces Listeria monocytogenes on raw salmon fillet tissue.

    PubMed

    Soni, Kamlesh A; Nannapaneni, Ramakrishna

    2010-01-01

    We have demonstrated the antilisterial activity of generally recognized as safe (GRAS) bacteriophage LISTEX P100 (phage P100) on the surface of raw salmon fillet tissue against Listeria monocytogenes serotypes 1/2a and 4b. In a broth model system, phage P100 completely inhibited L. monocytogenes growth at 4°C for 12 days, at 10°C for 8 days, and at 30°C for 4 days, at all three phage concentrations of 10^4, 10^6, and 10^8 PFU/ml. On raw salmon fillet tissue, a higher phage concentration of 10^8 PFU/g was required to yield 1.8-, 2.5-, and 3.5-log CFU/g reductions of L. monocytogenes from its initial loads of 2, 3, and 4.5 log CFU/g at 4 or 22°C. Over the 10 days of storage at 4°C, L. monocytogenes growth was inhibited by phage P100 on the raw salmon fillet tissue to as low as 0.3 log CFU/g versus normal growth of 2.6 log CFU/g in the absence of phage. Phage P100 remained stable on the raw salmon fillet tissue over a 10-day storage period, with only a marginal loss of 0.6 log PFU/g from an initial phage treatment of 8 log PFU/g. These findings illustrate that the GRAS bacteriophage LISTEX P100 is listericidal on raw salmon fillets and is useful in quantitatively reducing L. monocytogenes.

  13. Protein knotting through concatenation significantly reduces folding stability

    PubMed Central

    Hsu, Shang-Te Danny

    2016-01-01

    Concatenation by covalent linkage of two protomers of an intertwined all-helical HP0242 homodimer from Helicobacter pylori results in the first example of an engineered knotted protein. While concatenation does not affect the native structure according to X-ray crystallography, the folding kinetics is substantially slower compared to the parent homodimer. Using NMR hydrogen-deuterium exchange analysis, we showed here that concatenation significantly destabilises the knotted structure in solution, with some regions close to the covalent linkage being destabilised by as much as 5 kcal mol−1. Structural mapping of chemical shift perturbations induced by concatenation revealed a pattern that is similar to the effect induced by a concentrated chaotropic agent. Our results suggested that the design strategy of protein knotting by concatenation may be thermodynamically unfavourable due to covalent constraints imposed on the flexible fraying ends of the template structure, leading to a rugged free energy landscape with increased propensity to form off-pathway folding intermediates. PMID:27982106

  14. Tadalafil significantly reduces ischemia reperfusion injury in skin island flaps

    PubMed Central

    Kayiran, Oguz; Cuzdan, Suat S.; Uysal, Afsin; Kocer, Ugur

    2013-01-01

    Introduction: Numerous pharmacological agents have been used to enhance the viability of flaps. Ischemia reperfusion (I/R) injury is an unwanted, sometimes devastating complication in reconstructive microsurgery. Tadalafil, a specific inhibitor of phosphodiesterase type 5, is mainly used for erectile dysfunction and acts on vascular smooth muscle, platelets and leukocytes. Herein, the protective and therapeutic effect of tadalafil on I/R injury in a rat skin flap model is evaluated. Materials and Methods: Sixty epigastric island flaps were used to create an I/R model in 60 Wistar rats (non-ischemic group, ischemic group, medication group). Biochemical markers including total nitrite, malondialdehyde (MDA) and myeloperoxidase (MPO) were analysed. Necrosis rates were calculated and histopathologic evaluation was carried out. Results: MDA, MPO and total nitrite values were elevated in the ischemic group; however, there was an evident drop in the medication group. Histological results revealed that early inflammatory findings (oedema, neutrophil infiltration, necrosis rate) were lower with tadalafil administration. These differences were statistically significant (P < 0.05). Conclusions: We conclude that tadalafil has beneficial effects against I/R injury in epigastric island flaps. PMID:23960309

  15. Colchicine Significantly Reduces Incident Cancer in Gout Male Patients

    PubMed Central

    Kuo, Ming-Chun; Chang, Shun-Jen; Hsieh, Ming-Chia

    2015-01-01

    Patients with gout are more likely to develop most cancers than subjects without gout. Colchicine has been used for the treatment and prevention of gouty arthritis and has been reported to have an anticancer effect in vitro. However, to date no study has evaluated the relationship between colchicine use and incident cancers in patients with gout. This study enrolled male patients with gout identified in Taiwan's National Health Insurance Database for the years 1998 to 2011. Each gout patient was matched with 4 male controls by age and by month and year of first diagnosis, and was followed up until 2011. The study excluded those who were diagnosed with diabetes or any type of cancer within the year following enrollment. We calculated hazard ratios (HR), age-adjusted standardized incidence ratios, and incidence per 1000 person-years to evaluate cancer risk. A total of 24,050 male patients with gout and 76,129 male nongout controls were included. Patients with gout had a higher rate of incident all-cause cancers than controls (6.68% vs 6.43%, P = 0.006). A total of 13,679 patients with gout were defined as having been ever-users of colchicine and 10,371 patients with gout were defined as being never-users of colchicine. Ever-users of colchicine had a significantly lower HR of incident all-cause cancers than never-users of colchicine after adjustment for age (HR = 0.85, 95% CI = 0.77–0.94; P = 0.001). In conclusion, colchicine use was associated with a decreased risk of incident all-cause cancers in male Taiwanese patients with gout. PMID:26683907

  16. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds an inexpensive and efficient computer cluster that uses parallel processing to implement the MeanShift segmentation algorithm for remote sensing images based on the MapReduce model. This not only ensures the quality of the remote sensing image segmentation but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm for remote sensing images is therefore of practical significance and value.

  17. A proposed Fast algorithm to construct the system matrices for a reduced-order groundwater model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2017-04-01

    Past research has demonstrated that a reduced-order model (ROM) can be two-to-three orders of magnitude smaller than the original model and run considerably faster with acceptable error. A standard method to construct the system matrices for a ROM is Proper Orthogonal Decomposition (POD), which projects the system matrices from the full model space onto a subspace whose range spans the full model space but has a much smaller dimension than the full model space. This projection can be prohibitively expensive to compute if it must be done repeatedly, as with a Monte Carlo simulation. We propose a Fast Algorithm to reduce the computational burden of constructing the system matrices for a parameterized, reduced-order groundwater model (i.e. one whose parameters are represented by zones or interpolation functions). The proposed algorithm decomposes the expensive system matrix projection into a set of simple scalar-matrix multiplications. This allows the algorithm to efficiently construct the system matrices of a POD reduced-order model at a significantly reduced computational cost compared with the standard projection-based method. The developed algorithm is applied to three test cases for demonstration purposes. The first test case is a small, two-dimensional, zoned-parameter, finite-difference model; the second test case is a small, two-dimensional, interpolated-parameter, finite-difference model; and the third test case is a realistically-scaled, two-dimensional, zoned-parameter, finite-element model. In each case, the algorithm is able to accurately and efficiently construct the system matrices of the reduced-order model.
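
    The decomposition at the heart of the Fast Algorithm can be sketched as follows, assuming (as is typical for zoned parameters) that the full system matrix depends linearly on the zone parameters, A(theta) = sum_k theta_k * A_k; the expensive projections are then precomputed once per zone, and the online step reduces to scalar-matrix multiplications on small r x r matrices. All sizes and matrices below are synthetic:

      import numpy as np

      rng = np.random.default_rng(0)
      n, r, K = 500, 20, 5                                   # full dim, reduced dim, zones
      A_k = [rng.standard_normal((n, n)) for _ in range(K)]  # per-zone pieces of A(theta)
      Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]     # POD basis (orthonormal columns)

      # Offline, done once: project each parameter-independent piece.
      A_k_reduced = [Phi.T @ A @ Phi for A in A_k]           # each r x r

      def reduced_matrix(theta):
          """Online step: only scalar-matrix multiplications on r x r matrices."""
          return sum(t * Ar for t, Ar in zip(theta, A_k_reduced))

      theta = rng.uniform(0.5, 2.0, size=K)                  # e.g. zone conductivities
      A_r_fast = reduced_matrix(theta)
      A_r_standard = Phi.T @ sum(t * A for t, A in zip(theta, A_k)) @ Phi
      assert np.allclose(A_r_fast, A_r_standard)             # same matrix, far cheaper online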

  18. Classification Algorithms for Big Data Analysis, a Map Reduce Approach

    NASA Astrophysics Data System (ADS)

    Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.

    2015-03-01

    For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.

  19. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  20. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  1. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously degrades the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripe noise, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
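
    The THP&GM design itself is not reproduced here, but the backbone of any temporal high-pass correction is a per-pixel running low-pass estimate of the fixed pattern that is subtracted from each frame, with updates gated by a threshold to limit ghosting. A minimal sketch under those assumptions (the update rate and threshold are illustrative):

      import numpy as np

      class TemporalHighPassNUC:
          """Recursive temporal high-pass correction: a running low-pass filter tracks
          each pixel's slowly varying component (the FPN estimate) and subtracts it;
          the update is frozen where the frame departs strongly from the estimate,
          the usual gate used to limit ghosting from moving scene content."""

          def __init__(self, alpha=0.02, threshold=30.0):
              self.fpn = None
              self.alpha = alpha
              self.thr = threshold

          def correct(self, frame):
              frame = frame.astype(np.float64)
              if self.fpn is None:
                  self.fpn = frame.copy()                      # initialise the estimate
              corrected = frame - self.fpn + self.fpn.mean()   # keep the mean scene level
              static = np.abs(frame - self.fpn) < self.thr     # adaptive gate
              self.fpn[static] += self.alpha * (frame[static] - self.fpn[static])
              return corrected

      rng = np.random.default_rng(1)
      ripple = rng.normal(0.0, 5.0, size=(4, 4))               # synthetic fixed-pattern offsets
      nuc = TemporalHighPassNUC()
      for _ in range(500):                                     # uniform sky-like scene
          out = nuc.correct(100.0 + ripple + rng.normal(0.0, 1.0, size=(4, 4)))
      print(out.std())   # residual nonuniformity approaches the temporal noise level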

  2. A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.

    SciTech Connect

    Steensland, Johan; Ray, Jaideep

    2003-07-01

    This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a metapartitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-) partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.

  3. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. Shortest path search is one of their most demanding applications. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms. This algorithm is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors using heuristics to reduce the run time of shortest path search. One of the most widely used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path, applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm, are compared. This comparison shows that, by applying the approach proposed, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
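
    For reference, the baseline that the proposed method modifies is the classic Dijkstra search; a minimal heap-based version over a weighted adjacency dictionary (the toy road network and distances are made up) looks like this:

      import heapq

      def dijkstra(graph, source, target):
          """Classic Dijkstra with a binary heap; returns (cost, path)."""
          dist = {source: 0.0}
          prev = {}
          heap = [(0.0, source)]
          visited = set()
          while heap:
              d, u = heapq.heappop(heap)
              if u in visited:
                  continue
              visited.add(u)
              if u == target:                      # reconstruct the path
                  path = [u]
                  while u in prev:
                      u = prev[u]
                      path.append(u)
                  return d, path[::-1]
              for v, w in graph.get(u, {}).items():
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      prev[v] = u
                      heapq.heappush(heap, (nd, v))
          return float("inf"), []

      roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
      print(dijkstra(roads, "A", "D"))             # (8.0, ['A', 'C', 'B', 'D'])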

  4. Deconvolution algorithms for photoacoustic tomography to reduce blurring caused by finite sized detectors

    NASA Astrophysics Data System (ADS)

    Burgholzer, Peter; Roitner, Heinz; Berer, Thomas; Grün, Hubert; O'Leary, D. P.; Nuster, R.; Paltauf, G.; Haltmeier, M.

    2013-03-01

    Most reconstruction algorithms for photoacoustic tomography, like back-projection or time-reversal, work ideally for point-like detectors. For real detectors, which integrate the pressure over their finite size, it was shown that images reconstructed by back-projection or time-reversal show some blurring. Iterative reconstruction algorithms using an imaging matrix can take the finite size of real detectors directly into account, but the numerical effort is significantly higher compared to the use of direct algorithms. For spherical or cylindrical detection surfaces the blurring caused by a finite detector size is proportional to the distance from the rotation center ("spin blur") and is equal to the detector size at the detection surface. In this work we use deconvolution algorithms to reduce this type of blurring on simulated and on experimental data. Experimental data were obtained on a plastisol cylinder with 6 thin holes filled with an absorbing liquid (OrangeG). The holes were located on a spiral emanating from the center of the cylinder. Data acquisition was done by utilization of a piezoelectric detector which was rotated around the plastisol cylinder.

  5. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.

  6. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987

  7. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    PubMed Central

    Amudha, P.; Karthik, S.; Sivakumari, S.

    2015-01-01

    Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers. This is due to the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, in this paper a hybrid algorithm is proposed that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different. PMID:26221625

  8. Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula

    2012-01-01

    AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 Retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5, which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper will describe these advances found in the AIRS Version-6 retrieval algorithm and demonstrate the improvement of AIRS Version-6 products compared to those obtained using Version-5.

  9. Reducing communication costs in the conjugate gradient algorithm on distributed memory multiprocessors

    SciTech Connect

    D'Azevedo, E.F.; Romine, C.H.

    1992-09-01

    The standard formulation of the conjugate gradient algorithm involves two inner product computations. The results of these two inner products are needed to update the search direction and the computed solution. In a distributed memory parallel environment, the computation and subsequent distribution of these two values requires two separate communication and synchronization phases. In this paper, we present a mathematically equivalent rearrangement of the standard algorithm that reduces the number of communication phases. We give a second derivation of the modified conjugate gradient algorithm in terms of the natural relationship with the underlying Lanczos process. We also present empirical evidence of the stability of this modified algorithm.
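
    The two inner products in question are visible in the textbook formulation below; each np.dot is a global reduction (a separate communication and synchronisation phase on a distributed-memory machine), and the paper's rearrangement, not reproduced here, lets both be computed in a single phase:

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
          """Standard CG; the two marked inner products each require a global reduction."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = np.dot(r, r)                  # inner product 1
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / np.dot(p, Ap)     # inner product 2
              x += alpha * p
              r -= alpha * Ap
              rs_new = np.dot(r, r)              # inner product 1 of the next iteration
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      rng = np.random.default_rng(0)
      M = rng.standard_normal((50, 50))
      A = M @ M.T + 50.0 * np.eye(50)            # symmetric positive definite test matrix
      b = rng.standard_normal(50)
      print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))   # small residual: converged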

  10. Use of computer algorithms to reduce viral quasispecies sequence space.

    PubMed

    Epperson, E S; Tyrer, H W

    1995-01-01

    A virus may express multiple simple mutations producing a set of viral subspecies called quasispecies: the quasispecies are considered the same species as the original virus. We are interested in reducing the point mutation space to enumerate that sequence space. We form a point mutation by applying a single base substitution to a strand of viral DNA. How many differing viruses are possible if we allow any of the base pairs to change along the strand? For a strand of arbitrary length n we see that there is a possible sequence space of 4^(n-1) x 4 = 4^n combinations. We can further remove identical sequences due to redundant amino acid codon encoding. This requires the use of a computer, but this time the complexity is a product: the number of possible amino acids times the number of codons. This substantial reduction from an exponential complexity O(4^n) to a product O(n x number of amino acids) gives us the complete list of mutant viral entities which are one mutation away from the original. Further reduction is possible, but requires biological insight regarding the viability of the mutation. By recognizing the possible sequence space, predictions can be made toward identifying future viral strains of HIV and influenza (to name two important viral particles), and perhaps toward developing a predictive intervention.
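
    A small sketch of that reduction, assuming a short made-up open reading frame and using Biopython only for codon translation: all single-substitution mutants (3n of them) are enumerated and then collapsed to the distinct proteins they encode, which is exactly the synonymous-codon redundancy the abstract exploits.

      from Bio.Seq import Seq   # Biopython, used here only for codon translation

      def point_mutants(dna):
          """All sequences exactly one base substitution away from the input (3n of them)."""
          for i, original in enumerate(dna):
              for base in "ACGT":
                  if base != original:
                      yield dna[:i] + base + dna[i + 1:]

      dna = "ATGGCTGGTCGTTAA"                      # made-up 5-codon open reading frame
      mutants = list(point_mutants(dna))           # 3 * 15 = 45 nucleotide variants
      proteins = {str(Seq(m).translate()) for m in mutants}
      print(len(mutants), "single-point mutants encode", len(proteins), "distinct proteins")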

  11. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events has doubled and all of them become exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  12. [Parallel PLS algorithm using MapReduce and its application in spectral modeling].

    PubMed

    Yang, Hui-Hua; Du, Ling-Ling; Li, Ling-Qiao; Tang, Tian-Biao; Guo, Tuo; Liang, Qiong-Lin; Wang, Yi-Ming; Luo, Guo-An

    2012-09-01

    Partial least squares (PLS) has been widely used in spectral analysis and modeling, and it is computation-intensive and time-demanding when dealing with massive data. To solve this problem effectively, a novel parallel PLS using MapReduce is proposed, which consists of two procedures: the parallelization of data standardization and the parallelization of principal component computation. Using NIR spectral modeling as an example, experiments were conducted on a Hadoop cluster, which is a collection of ordinary computers. The experimental results demonstrate that the proposed parallel PLS algorithm can handle massive spectra, significantly cuts down the modeling time, achieves an essentially linear speedup, and can be easily scaled up.
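
    The first of the two procedures, parallel data standardisation, reduces to combining per-block sums and sums of squares; a pure-Python map/reduce illustration (toy spectra, not the authors' Hadoop implementation) is:

      from functools import reduce

      def map_block(block):
          """Mapper: per-block sample count, per-wavelength sums and sums of squares."""
          n = len(block)
          cols = list(zip(*block))
          return n, [sum(c) for c in cols], [sum(x * x for x in c) for c in cols]

      def reduce_stats(a, b):
          """Reducer: combine two partial results."""
          na, sa, qa = a
          nb, sb, qb = b
          return na + nb, [x + y for x, y in zip(sa, sb)], [x + y for x, y in zip(qa, qb)]

      # Toy NIR "spectra": six samples x three wavelengths, split across two mappers.
      blocks = [[[1.0, 2.0, 0.5], [1.2, 1.8, 0.7], [0.9, 2.2, 0.6]],
                [[1.1, 2.1, 0.4], [1.3, 1.9, 0.8], [1.0, 2.0, 0.5]]]
      n, s, q = reduce(reduce_stats, map(map_block, blocks))
      means = [si / n for si in s]
      stds = [((qi - si * si / n) / (n - 1)) ** 0.5 for si, qi in zip(s, q)]
      print(means, stds)     # global statistics used to standardise every block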

  13. Utilization of UV Curing Technology to Significantly Reduce the Manufacturing Cost of LIB Electrodes

    SciTech Connect

    Voelker, Gary; Arnold, John

    2015-11-30

    Previously identified novel binders and associated UV curing technology have been shown to reduce the time required to apply and finish electrode coatings from tens of minutes to less than one second. This revolutionary approach can result in dramatic increases in process speeds, significantly reduced capital (a factor of 10 to 20) and operating costs, reduced energy requirements, and reduced environmental concerns and costs due to the virtual elimination of harmful volatile organic solvents and associated solvent dryers and recovery systems. The accumulated advantages of higher speed, lower capital and operating costs, reduced footprint, elimination of VOC recovery, and reduced energy cost amount to a 90% reduction in the manufacturing cost of cathodes. When commercialized, the resulting cost reduction in lithium batteries will allow storage device manufacturers to expand their sales in the market and thereby accrue the energy savings of broader utilization of HEVs, PHEVs and EVs in the U.S., and a broad technology export market is also envisioned.

  14. Practical aspects of variable reduction formulations and reduced basis algorithms in multidisciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1995-01-01

    This paper discusses certain connections between nonlinear programming algorithms and the formulation of optimization problems for systems governed by state constraints. The major points of this paper are the detailed calculation of the sensitivities associated with different formulations of optimization problems and the identification of some useful relationships between different formulations. These relationships have practical consequences; if one uses a reduced basis nonlinear programming algorithm, then the implementations for the different formulations need only differ in a single step.

  15. New Classification Method Based on Support-Significant Association Rules Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Guoxin; Shi, Wen

    One of the most well-studied problems in data mining is mining for association rules. Previous research has also introduced association rule mining methods to conduct classification tasks, and such classification methods can be applied to customer segmentation. Currently, most association rule mining methods are based on a support-confidence framework, in which rules satisfying both minimum support and minimum confidence are returned to the analyst as strong association rules. However, this type of association rule mining lacks a rigorous statistical guarantee and can sometimes be misleading. A new classification model for customer segmentation, based on an association rule mining algorithm, is proposed in this paper. The new model is based on the support-significant association rule mining method, in which the confidence measure of an association rule is replaced by its statistical significance, a better evaluation standard for association rules. Experiments on customer segmentation data from the UCI repository indicated the effectiveness of the new model.
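
    One common way to realise such a significance measure is a chi-square test of independence between the antecedent and consequent of a rule, reported alongside support; the sketch below uses scipy on synthetic customer flags and illustrates the idea rather than the paper's exact test:

      import numpy as np
      from scipy.stats import chi2_contingency

      def rule_support_and_significance(antecedent, consequent):
          """Support of the rule A -> C plus the chi-square p-value of the association
          between A and C, playing the role confidence plays in support-confidence mining."""
          antecedent = np.asarray(antecedent, dtype=bool)
          consequent = np.asarray(consequent, dtype=bool)
          support = np.mean(antecedent & consequent)
          table = np.array([[np.sum(antecedent & consequent), np.sum(antecedent & ~consequent)],
                            [np.sum(~antecedent & consequent), np.sum(~antecedent & ~consequent)]])
          chi2, p_value, _, _ = chi2_contingency(table)
          return support, p_value

      rng = np.random.default_rng(0)
      high_spender = rng.random(1000) < 0.3                    # synthetic customer segment flag
      repeat_buyer = np.where(high_spender, rng.random(1000) < 0.7, rng.random(1000) < 0.3)
      print(rule_support_and_significance(high_spender, repeat_buyer))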

  16. An explicit algebraic reduced order algorithm for lithium ion cell voltage prediction

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, V.; Gambhire, Priya; Hariharan, Krishnan S.; Khandelwal, Ashish; Kolake, Subramanya Mayya; Oh, Dukjin; Doo, Seokgwang

    2014-02-01

    The detailed isothermal electrochemical model for a lithium ion cell has ten coupled partial differential equations to describe the cell behavior. In an earlier publication [Journal of Power Sources, 222, 426 (2013)], a reduced order model (ROM) was developed by reducing the detailed model to a set of five linear ordinary differential equations and nonlinear algebraic expressions, using uniform reaction rate, volume averaging and profile based approximations. An arbitrary current profile, involving charge, rest and discharge, is broken down into constant current and linearly varying current periods. The linearly varying current period results are generic, since they include the constant current period results as a special case. Hence, the linear ordinary differential equations in the ROM are solved for a linearly varying current period and an explicit algebraic algorithm is developed for lithium ion cell voltage prediction. While existing battery management system (BMS) algorithms are equivalent-circuit based and rely on ordinary differential equations, the proposed algorithm is explicit and algebraic. These results are useful for developing a BMS algorithm for on-board applications in electric or hybrid vehicles, smart phones, etc. This algorithm is simple enough for a spreadsheet implementation and is useful for rapid analysis of laboratory data.
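
    The enabling fact is that a linear ordinary differential equation driven by a linearly varying input has a closed-form solution, so each constant-current or ramped-current segment can be advanced algebraically rather than by numerical integration. A scalar toy version (not the five coupled equations of the ROM; all coefficients are made up), with a numerical cross-check:

      import math

      def advance_segment(x0, a, c0, c1, t):
          """Closed-form solution of dx/dt = a*x + c0 + c1*t over one segment (a != 0)."""
          p = -(c0 / a) - (c1 / a ** 2)          # particular-solution intercept
          q = -(c1 / a)                          # particular-solution slope
          return (x0 - p) * math.exp(a * t) + p + q * t

      # Cross-check against a fine forward-Euler integration of the same segment.
      x0, a, c0, c1, T = 0.2, -0.5, 1.0, 0.1, 4.0
      x, dt = x0, 1e-4
      for k in range(int(T / dt)):
          x += dt * (a * x + c0 + c1 * (k * dt))
      print(advance_segment(x0, a, c0, c1, T), x)   # the two values agree closely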

  17. An efficient algorithm for DNA fragment assembly in MapReduce.

    PubMed

    Xu, Baomin; Gao, Jin; Li, Chunyan

    2012-09-28

    Fragment assembly is one of the most important problems of sequence assembly. Algorithms for DNA fragment assembly using the de Bruijn graph have been widely used. These algorithms require a large amount of memory and running time to build the de Bruijn graph. Another drawback of the conventional de Bruijn approach is the loss of information. To overcome these shortcomings, this paper proposes a parallel strategy to construct the de Bruijn graph. Its main characteristic is that it avoids splitting the de Bruijn graph. A novel fragment assembly algorithm based on our parallel strategy is implemented in the MapReduce framework. The experimental results show that the parallel strategy can effectively improve the computational efficiency and remove the memory limitations of the assembly algorithm based on the Euler superpath. This paper provides a useful step toward the assembly of large-scale genome sequences using cloud computing.
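
    The serial construction step that the parallel strategy distributes can be sketched in a few lines: each k-mer of every read contributes an edge from its (k-1)-mer prefix to its (k-1)-mer suffix (reads and k are made up; the MapReduce partitioning itself is not shown):

      from collections import defaultdict

      def build_de_bruijn(reads, k):
          """Map each (k-1)-mer prefix to the (k-1)-mer suffixes of the k-mers in the reads."""
          graph = defaultdict(list)
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])
          return graph

      reads = ["ATGGCGTGCA", "GGCGTGCAAT"]          # made-up overlapping reads
      for node, successors in sorted(build_de_bruijn(reads, k=4).items()):
          print(node, "->", successors)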

  18. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).

  19. The photon dose calculation algorithm used in breast radiotherapy has significant impact on the parameters of radiobiological models.

    PubMed

    Petillion, Saskia; Swinnen, Ans; Defraene, Gilles; Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2014-07-08

    The comparison of the pencil beam dose calculation algorithm with modified Batho heterogeneity correction (PBC-MB) and the analytical anisotropic algorithm (AAA) and the mutual comparison of advanced dose calculation algorithms used in breast radiotherapy have focused on the differences between the physical dose distributions. Studies on the radiobiological impact of the algorithm (both on the tumor control and the moderate breast fibrosis prediction) are lacking. We, therefore, investigated the radiobiological impact of the dose calculation algorithm in whole breast radiotherapy. The clinical dose distributions of 30 breast cancer patients, calculated with PBC-MB, were recalculated with fixed monitor units using more advanced algorithms: AAA and Acuros XB. For the latter, both dose reporting modes were used (i.e., dose-to-medium and dose-to-water). Next, the tumor control probability (TCP) and the normal tissue complication probability (NTCP) of each dose distribution were calculated with the Poisson model and with the relative seriality model, respectively. The endpoint for the NTCP calculation was moderate breast fibrosis five years post-treatment. The differences were checked for significance with the paired t-test. The more advanced algorithms predicted a significantly lower TCP and NTCP of moderate breast fibrosis than found during the corresponding clinical follow-up study based on PBC calculations. The differences varied between 1% and 2.1% for the TCP and between 2.9% and 5.5% for the NTCP of moderate breast fibrosis. The significant differences were eliminated by determination of algorithm-specific model parameters using least square fitting. Application of the new parameters on a second group of 30 breast cancer patients proved their appropriateness. In this study, we assessed the impact of the dose calculation algorithms used in whole breast radiotherapy on the parameters of the radiobiological models. The radiobiological impact was eliminated by

  20. Fast maximum intensity projection algorithm using shear warp factorization and reduced resampling.

    PubMed

    Fang, Laifa; Wang, Yi; Qiu, Bensheng; Qian, Yuancheng

    2002-04-01

    Maximal intensity projection (MIP) is routinely used to view MRA and other volumetric angiographic data. The straightforward implementation of MIP is ray casting that traces a volumetric data set in a computationally expensive manner. This article reports a fast MIP algorithm using shear warp factorization and reduced resampling that drastically reduced the redundancy in the computations for projection, thereby speeding up MIP by more than 10 times.
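
    For context on what is being accelerated, the minimal sketch below shows the naive orthographic MIP in NumPy, i.e. a maximum over each ray parallel to one axis; the shear-warp factorization and reduced resampling that give the reported greater-than-tenfold speedup for arbitrary viewing angles are not reproduced here, and the array shape is an arbitrary placeholder.

      # Minimal orthographic MIP sketch (NumPy); shear-warp factorization for
      # arbitrary view angles is not reproduced here.
      import numpy as np

      def mip_axial(volume):
          # Keep the brightest voxel along each ray parallel to the z-axis.
          return volume.max(axis=0)

      vol = np.random.rand(64, 128, 128)   # placeholder volumetric angiogram
      image = mip_axial(vol)               # 128 x 128 projection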

  1. Road Traffic Control Based on Genetic Algorithm for Reducing Traffic Congestion

    NASA Astrophysics Data System (ADS)

    Shigehiro, Yuji; Miyakawa, Takuya; Masuda, Tatsuya

    In this paper, we propose a road traffic control method for reducing traffic congestion with a genetic algorithm. In the not-too-distant future, systems that control the routes of all vehicles in a given area will have to be realized. Such a system should optimize the routes of all vehicles; however, the solution space of this problem is enormous. We therefore apply a genetic algorithm to this problem by encoding the routes of all vehicles into a fixed-length chromosome. To improve the search performance, a new genetic operator called “path shortening” is also designed. The effectiveness of the proposed method is shown by experiment.

  2. Aerodynamic Improvements of an Empty Timber Truck can Have the Potential of Significantly Reducing Fuel Consumption

    NASA Astrophysics Data System (ADS)

    Andersson, Magnus; Marashi, Seyedeh Sepideh; Karlsson, Matts

    2012-11-01

    In the present study, aerodynamic drag (AD) has been estimated for an empty and a fully loaded conceptual timber truck (TT) using Computational Fluid Dynamics (CFD). Increasing fuel prices have challenged heavy-duty vehicle (HDV) manufacturers to strive for better fuel economy, e.g. by utilizing drag-reducing external devices. Despite this knowledge, TT fleets seem to have been left in the dark. As with HDV aerodynamics, a large low-pressure wake forms behind the tractor (unloaded) and downstream of the trailer (full load), thus generating AD. As TTs travel half the time without any cargo, a focus on drag reduction is important. The full-scale TTs were simulated using the realizable k-epsilon model with grid adaption techniques for mesh independence. Our results indicate that a loaded TT reduces the AD significantly, as both wake size and turbulence kinetic energy are lowered. In contrast to HDVs, unloaded TTs have a much larger design space available for possible drag-reducing devices, e.g. plastic wrapping and/or flaps. This conceptual CFD study has given an indication of the large AD difference between the unloaded and fully loaded TT, showing the potential for significant AD improvements.

  3. New noise reduction method for reducing CT scan dose: Combining Wiener filtering and edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Anam, Choirul; Haryanto, Freddy; Widita, Rena; Arif, Idam

    2015-09-01

    A new noise reduction method for reducing the dose of CT scans has been proposed. The new method is expected to address the major problem of noise reduction algorithms, i.e., the decrease in the spatial resolution of the image. The proposed method was developed by combining adaptive Wiener filtering and edge detection algorithms. In the first step, the image was filtered with a Wiener filter. Separately, an edge detection operation was performed on the original image using the Prewitt method. In the next step, a new image was generated based on the edge detection operation: at edge areas, the image was taken from the original image, while at non-edge areas, the image was taken from the image that had been filtered with the Wiener filter. The new method was tested on a CT image of a spatial resolution phantom, which was scanned at different current-time products, namely 80, 130 and 200 mAs, while other exposure factors were kept constant. The spatial resolution phantom consists of six sets of bar patterns made of plexiglass and separated from one another by water. The new image quality was assessed from the amount of noise and the spatial resolution. Noise was calculated as the standard deviation of homogeneous regions, while the spatial resolution was assessed by observation of the bar pattern sets. In addition, to evaluate its performance, the new method was also tested on patient CT images. From the measurements, the new method reduces the noise by an average of 64.85%, while the spatial resolution does not decrease significantly. Visually, the third bar set in the phantom image (bar spacing 1.0 mm) can still be distinguished, as in the original image. Meanwhile, if the image is processed using only the Wiener filter, only the second bar set (bar spacing 1.3 mm) is distinguishable. Testing this new method on patient images gives relatively the same results. Thus, using this
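
    A hedged sketch of the described combination is given below, using SciPy's adaptive Wiener filter and scikit-image's Prewitt operator: the image is Wiener-filtered, a binary edge map is computed from the original image, and original pixels are kept at edges while filtered pixels are used elsewhere. The edge threshold and window size are illustrative assumptions, not values from the paper.

      # Sketch of the edge-preserving combination: Wiener filter + Prewitt edges.
      import numpy as np
      from scipy.signal import wiener
      from skimage.filters import prewitt

      def edge_preserving_denoise(img, edge_thresh=0.05, win=5):
          filtered = wiener(img, mysize=win)        # adaptive Wiener filtering
          edges = prewitt(img) > edge_thresh        # binary edge map (Prewitt, assumed threshold)
          return np.where(edges, img, filtered)     # original at edges, filtered elsewhere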

  4. Pediatric Medical Care System in China Has Significantly Reduced Abandonment of Acute Lymphoblastic Leukemia Treatment

    PubMed Central

    Zhou, Qi; Hong, Dan; Lu, Jun; Zheng, Defei; Ashwani, Neetica

    2015-01-01

    In this study, we analyzed both administrative and clinical data from our hospital during 2002 to 2012 to evaluate the influence of government medical policies on reducing treatment abandonment in pediatric patients with acute lymphoblastic leukemia. Two policies, funding for catastrophic diseases and the new rural cooperative medical care system (NRCMS), were initiated in 2005 and 2011, respectively. A total of 1151 children diagnosed with acute lymphoblastic leukemia were enrolled in our study during this period, and 316 abandoned treatment. Statistical differences in sex, age, number of children in the family, and family financial status were observed. Most importantly, medical insurance coverage was critical for reducing treatment abandonment. However, the 92 cases that abandoned treatment after relapse did not show a significant difference either in medical insurance coverage or in duration from first complete remission. In conclusion, financial crisis was the main reason for abandoning treatment. Government-funded health care expenditure programs reduced families’ economic burden and thereby reduced the abandonment rate, with a resultant increase in overall survival. PMID:25393454

  5. Multiwalled Carbon Nanotube Functionalization with High Molecular Weight Hyaluronan Significantly Reduces Pulmonary Injury.

    PubMed

    Hussain, Salik; Ji, Zhaoxia; Taylor, Alexia J; DeGraff, Laura M; George, Margaret; Tucker, Charles J; Chang, Chong Hyun; Li, Ruibin; Bonner, James C; Garantziotis, Stavros

    2016-08-23

    Commercialization of multiwalled carbon nanotubes (MWCNT)-based applications has been hampered by concerns regarding their lung toxicity potential. Hyaluronic acid (HA) is a ubiquitously found polysaccharide, which is anti-inflammatory in its native high molecular weight form. HA-functionalized smart MWCNTs have shown promise as tumor-targeting drug delivery agents and can enhance bone repair and regeneration. However, it is unclear whether HA functionalization could reduce the pulmonary toxicity potential of MWCNTs. Using in vivo and in vitro approaches, we investigated the effectiveness of MWCNT functionalization with HA in increasing nanotube biocompatibility and reducing lung inflammatory and fibrotic effects. We utilized three-dimensional cultures of differentiated primary human bronchial epithelia to translate findings from rodent assays to humans. We found that HA functionalization increased stability and dispersion of MWCNTs and reduced postexposure lung inflammation, fibrosis, and mucus cell metaplasia compared with nonfunctionalized MWCNTs. Cocultures of fully differentiated bronchial epithelial cells (cultivated at air-liquid interface) and human lung fibroblasts (submerged) displayed significant reduction in injury, oxidative stress, as well as pro-inflammatory gene and protein expression after exposure to HA-functionalized MWCNTs compared with MWCNTs alone. In contrast, neither type of nanotubes stimulated cytokine production in primary human alveolar macrophages. In aggregate, our results demonstrate the effectiveness of HA functionalization as a safer design approach to eliminate MWCNT-induced lung injury and suggest that HA functionalization works by reducing MWCNT-induced epithelial injury.

  6. Stable reduced-order models of generalized dynamical systems using coordinate-transformed Arnoldi algorithms

    SciTech Connect

    Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J.

    1996-12-31

    Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm that gives a numerically stable procedure for finding Pade approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.
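
    For orientation, the sketch below shows plain Arnoldi-based projection of a single-input single-output state-space model onto a Krylov subspace; it is a generic minimal example under the assumption of no breakdown, and the coordinate transformation that guarantees stability of the reduced model in this paper is not reproduced.

      # Generic Arnoldi model order reduction sketch for x' = A x + b u, y = c^T x.
      import numpy as np

      def arnoldi(A, b, m):
          n = len(b)
          V = np.zeros((n, m + 1))
          H = np.zeros((m + 1, m))
          V[:, 0] = b / np.linalg.norm(b)
          for j in range(m):
              w = A @ V[:, j]
              for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
                  H[i, j] = V[:, i] @ w
                  w -= H[i, j] * V[:, i]
              H[j + 1, j] = np.linalg.norm(w)     # assumes no breakdown (norm > 0)
              V[:, j + 1] = w / H[j + 1, j]
          return V[:, :m], H[:m, :m]

      def reduce_system(A, b, c, m):
          V, _ = arnoldi(A, b, m)
          return V.T @ A @ V, V.T @ b, V.T @ c    # reduced (A_r, b_r, c_r)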

  7. Pretreatment with a novel aquaporin 4 inhibitor, TGN-020, significantly reduces ischemic cerebral edema.

    PubMed

    Igarashi, Hironaka; Huber, Vincent J; Tsujita, Mika; Nakada, Tsutomu

    2011-02-01

    We investigated the in vivo effects of a novel aquaporin 4 (AQP4) inhibitor 2-(nicotinamide)-1,3,4-thiadiazole, TGN-020, in a mouse model of focal cerebral ischemia using 7.0-T magnetic resonance imaging (MRI). Pretreatment with TGN-020 significantly reduced brain edema associated with brain ischemia, as reflected by percentage of brain swelling volume (%BSV), 12.1 ± 6.3% in the treated group, compared to (20.8 ± 5.9%) in the control group (p < 0.05), and in the size of cortical infarction as reflected by the percentage of hemispheric lesion volume (%HLV), 20.0 ± 7.6% in the treated group, compared to 30.0 ± 9.1% in the control group (p < 0.05). The study indicated the potential pharmacological use of AQP4 inhibition in reducing brain edema associated with focal ischemia.

  8. [SKLOF: a new algorithm to reduce the range of supernova candidates].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Wei, Peng; Pan, Jing-chang; Luo, A-li; Zhao, Yong-heng

    2015-01-01

    Supernovae (SNe) are called "standard candles" in cosmology; the probability of an outbreak in any given galaxy is very low, making them a special, rare kind of astronomical object. Only by surveying a large number of galaxies do we have a chance to find a supernova. A supernova in the midst of its explosion will illuminate the entire galaxy, so the spectra of such galaxies show obvious supernova features. However, the number of supernovae found so far is very small relative to the large number of astronomical objects. The computation time needed to search for supernovae is key to whether follow-up observations can be made, so an efficient method is required. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. By improving the LOF algorithm, a new algorithm, named SKLOF, is introduced that reduces the search range for supernova candidates in a flood of galaxy spectra. Firstly, the spectral datasets are pruned to remove most objects that cannot be outliers. Secondly, the improved LOF algorithm is used to calculate the local outlier factors (LOFs) of the remaining spectra, which are then arranged in descending order. Finally, we obtain a smaller search range of supernova candidates for subsequent identification. The experimental results show that the algorithm is very effective: it not only improves accuracy but also reduces computation time compared with the LOF algorithm while guaranteeing detection accuracy.
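
    The two-stage idea can be sketched as follows with an assumed, simplified interface built on scikit-learn's LocalOutlierFactor: cheaply prune spectra that are clearly ordinary, then rank the remaining spectra by their local outlier factor and keep the top candidates for follow-up. The pruning rule and all parameter values are illustrative assumptions, not the SKLOF implementation.

      # Sketch of prune-then-LOF candidate selection (assumed interface).
      import numpy as np
      from sklearn.neighbors import LocalOutlierFactor

      def sklof_candidates(X, keep_frac=0.2, top_k=100, n_neighbors=20):
          # Stage 1: cheap pruning -- keep only spectra far from the mean spectrum.
          dists = np.linalg.norm(X - X.mean(axis=0), axis=1)
          pruned_idx = np.argsort(dists)[-int(len(X) * keep_frac):]
          # Stage 2: LOF on the pruned subset, sorted by descending outlier factor.
          lof = LocalOutlierFactor(n_neighbors=n_neighbors)
          lof.fit(X[pruned_idx])
          scores = -lof.negative_outlier_factor_
          return pruned_idx[np.argsort(scores)[::-1][:top_k]]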

  9. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K.

    2016-02-01

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentration numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentration. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from 11 × 11 to 19 × 19, there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
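
    As an illustration only, the sketch below performs two-material image-domain decomposition by direct matrix inversion and then applies a non-local-mean filter (scikit-image) to the basis images; the HYPR-LR iteration that defines the actual HYPR-NLM algorithm is not reproduced, and the decomposition matrix and filter parameters are placeholders.

      # Illustrative sketch: direct-inversion decomposition + non-local-mean denoising.
      import numpy as np
      from skimage.restoration import denoise_nl_means

      def decompose_and_denoise(low_kev, high_kev, mix_matrix, patch=5, search=11, h=0.1):
          inv = np.linalg.inv(mix_matrix)                    # 2x2 decomposition matrix
          basis1 = inv[0, 0] * low_kev + inv[0, 1] * high_kev
          basis2 = inv[1, 0] * low_kev + inv[1, 1] * high_kev
          # Edge-preserving smoothing of each basis image (parameters are placeholders).
          smooth = lambda img: denoise_nl_means(img, patch_size=patch,
                                                patch_distance=search, h=h)
          return smooth(basis1), smooth(basis2)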

  10. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict the error in the CO2 value. By using Mean Monthly STDEV as a surrogate objective, the aim is to reduce the scatter in the retrieved CO2 rather than solve the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) metric provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
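
    A tiny sketch of the surrogate objective is shown below under an assumed data layout (a pandas table with a sounding timestamp and a retrieved XCO2 column): the score of a candidate filter is the mean of the per-month standard deviations of the retained soundings.

      # Assumed layout: df has columns 'time' (datetime64) and 'xco2';
      # keep_mask marks the soundings a candidate filter would retain.
      import numpy as np
      import pandas as pd

      def mean_monthly_stdev(df, keep_mask):
          kept = df[keep_mask]
          monthly_std = kept.groupby(kept["time"].dt.to_period("M"))["xco2"].std()
          return monthly_std.mean()

      # Toy usage with synthetic soundings (values are placeholders).
      df = pd.DataFrame({"time": pd.date_range("2015-01-01", periods=1000, freq="D"),
                         "xco2": 400 + np.random.randn(1000)})
      score = mean_monthly_stdev(df, df["xco2"] < 402)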

  11. Vitamin A dietary supplementation reduces the mortality of velogenic Newcastle disease significantly in cockerels.

    PubMed

    Okpe, Godwin Chinedu; Ezema, Wilfred Sunday; Shoyinka, Shodeinde Vincent Olumuyiwa; Okoye, John Osita Arinze

    2015-10-01

    This project was undertaken to find ways of reducing mortalities and economic losses due to velogenic Newcastle disease (VND) in areas where the disease is enzootic. Four groups of cockerels of 44 birds each were used for this experiment. The birds in groups 1 and 2 received no dietary vitamin A supplementation, whereas groups 3 and 4 received 300 iu and 600 iu of vitamin A per kilogram of commercial feed, respectively, from 1 week of age till the end of the experiment. At 6 weeks of age, the birds in groups 2, 3 and 4 were inoculated intraocularly with a VND virus (duck/Nigeria/Plateau/Kuru/113/1991). The birds in Group 1 were given phosphate-buffered saline intraocularly. Clinical signs appeared in Group 2 birds on day 3 PI and in groups 3 and 4 on day 5 PI. The clinical signs included a drop in feed and water consumption, depression, diarrhoea, torticollis and paralysis in all the infected groups. The average body weights of all groups were significantly different from one another on day 14 PI with Group 2 birds having the lowest body weight. Mortalities were highest in Group 2 birds (0%, 93.18%, 72.73% and 56.82% in groups 1, 2, 3 and 4 respectively). The antibody response in all the groups was significantly different from one another on days 14 and 21 PI. Group 2 birds had the lowest titres on those 2 days and showed more severe atrophy of the bursa, spleen, thymus and fibrin deposition in the spleen and thymus than the birds in groups 3 and 4. The above observations show that vitamin A dietary supplementation delayed the onset of clinical signs and significantly reduced body weight loss, atrophy of the bursa, spleen and thymus, and mortalities by 36%. It also significantly potentiated haemagglutination inhibition antibody response.

  12. Use of preoperative gabapentin significantly reduces postoperative opioid consumption: a meta-analysis

    PubMed Central

    Arumugam, Sudha; Lau, Christine SM; Chamberlain, Ronald S

    2016-01-01

    Objectives Effective postoperative pain management is crucial in the care of surgical patients. Opioids, which are commonly used in managing postoperative pain, have a potential for tolerance and addiction, along with sedating side effects. Gabapentin’s use as a multimodal analgesic regimen to treat neuropathic pain has been documented as having favorable side effects. This meta-analysis examined the use of preoperative gabapentin and its impact on postoperative opioid consumption. Materials and methods A comprehensive literature search was conducted to identify randomized control trials that evaluated preoperative gabapentin on postoperative opioid consumption. The outcomes of interest were cumulative opioid consumption following the surgery and the incidence of vomiting, somnolence, and nausea. Results A total of 1,793 patients involved in 17 randomized control trials formed the final analysis for this study. Postoperative opioid consumption was reduced when using gabapentin within the initial 24 hours following surgery (standard mean difference −1.35, 95% confidence interval [CI]: −1.96 to −0.73; P<0.001). There was a significant reduction in morphine, fentanyl, and tramadol consumption (P<0.05). While a significant increase in postoperative somnolence incidence was observed (relative risk 1.30, 95% CI: 1.10–1.54, P<0.05), there were no significant effects on postoperative vomiting and nausea. Conclusion The administration of preoperative gabapentin reduced the consumption of opioids during the initial 24 hours following surgery. The reduction in postoperative opioids with preoperative gabapentin increased postoperative somnolence, but no significant differences were observed in nausea and vomiting incidences. The results from this study demonstrate that gabapentin is more beneficial in mastectomy and spinal, abdominal, and thyroid surgeries. Gabapentin is an effective analgesic adjunct, and clinicians should consider its use in multimodal treatment

  13. Male Circumcision Significantly Reduces Prevalence and Load of Genital Anaerobic Bacteria

    PubMed Central

    Liu, Cindy M.; Hungate, Bruce A.; Tobian, Aaron A. R.; Serwadda, David; Ravel, Jacques; Lester, Richard; Kigozi, Godfrey; Aziz, Maliha; Galiwango, Ronald M.; Nalugoda, Fred; Contente-Cuomo, Tania L.; Wawer, Maria J.; Keim, Paul; Gray, Ronald H.; Price, Lance B.

    2013-01-01

    Male circumcision reduces female-to-male HIV transmission. Hypothesized mechanisms for this protective effect include decreased HIV target cell recruitment and activation due to changes in the penis microbiome. We compared the coronal sulcus microbiota of men from a group of uncircumcised controls (n = 77) and from a circumcised intervention group (n = 79) at enrollment and year 1 follow-up in a randomized circumcision trial in Rakai, Uganda. We characterized the microbiota using 16S rRNA gene-based quantitative PCR (qPCR) and pyrosequencing, log response ratio (LRR), Bayesian classification, nonmetric multidimensional scaling (nMDS), and permutational multivariate analysis of variance (PerMANOVA). At baseline, men in both study arms had comparable coronal sulcus microbiota; however, by year 1, circumcision decreased the total bacterial load and reduced microbiota biodiversity. Specifically, the prevalence and absolute abundance of 12 anaerobic bacterial taxa decreased significantly in the circumcised men. While some aerobic bacterial taxa increased postcircumcision, these gains were minor. The reduction in anaerobes may partly account for the effects of circumcision on reduced HIV acquisition. PMID:23592260

  14. A deep stop during decompression from 82 fsw (25 m) significantly reduces bubbles and fast tissue gas tensions.

    PubMed

    Marroni, A; Bennett, P B; Cronje, F J; Cali-Corleo, R; Germonpre, P; Pieri, M; Bonuccelli, C; Balestra, C

    2004-01-01

    In spite of many modifications to decompression algorithms, the incidence of decompression sickness (DCS) in scuba divers has changed very little. The success of staged, compared to linear, ascents is well described, yet theoretical changes in decompression ratios have diminished the importance of fast tissue gas tensions as critical for bubble generation. The most serious signs and symptoms of DCS involve the spinal cord, with a tissue half time of only 12.5 minutes. It is proposed that present decompression schedules do not permit sufficient gas elimination from such fast tissues, resulting in bubble formation. Further, it is hypothesized that introduction of a deep stop will significantly reduce fast tissue bubble formation and neurological DCS risk. A total of 181 dives were made to 82 fsw (25 m) by 22 volunteers. Two dives of 25 min and 20 min were made, with a 3 hr 30 min surface interval and according to 8 different ascent protocols. Ascent rates of 10, 33 or 60 fsw/min (3, 10, 18 m/min) were combined with no stops, a shallow stop at 20 fsw (6 m), or a deep stop at 50 fsw (15 m) plus a shallow stop at 20 fsw (6 m). The highest bubble scores (8.78/9.97), using the Spencer Scale (SS) and Extended Spencer Scale (ESS) respectively, were with the slowest ascent rate. This also showed the highest 5 min and 10 min tissue loads of 48% and 75%. The lowest bubble scores (1.79/2.50) were with an ascent rate of 33 fsw/min (10 m/min) and stops for 5 min at 50 fsw (15 m) and 20 fsw (6 m). This also showed the lowest 5 and 10 min tissue loads at 25% and 52% respectively. Thus, introduction of a deep stop significantly reduced Doppler-detected bubbles together with tissue gas tensions in the 5 and 10 min tissues, which has implications for reducing the incidence of neurological DCS in divers.

  15. MrsRF: an efficient MapReduce algorithm for analyzing large collections of evolutionary trees

    PubMed Central

    2010-01-01

    Background MapReduce is a parallel framework that has been used effectively to design large-scale parallel applications for large computing clusters. In this paper, we evaluate the viability of the MapReduce framework for designing phylogenetic applications. The problem of interest is generating the all-to-all Robinson-Foulds distance matrix, which has many applications for visualizing and clustering large collections of evolutionary trees. We introduce MrsRF (MapReduce Speeds up RF), a multi-core algorithm to generate a t × t Robinson-Foulds distance matrix between t trees using the MapReduce paradigm. Results We studied the performance of our MrsRF algorithm on two large biological tree sets consisting of 20,000 trees of 150 taxa each and 33,306 trees of 567 taxa each. Our experiments show that MrsRF is a scalable approach reaching a speedup of over 18 on 32 total cores. Our results also show that achieving top speedup on a multi-core cluster requires different cluster configurations. Finally, we show how to use an RF matrix to summarize collections of phylogenetic trees visually. Conclusion Our results show that MapReduce is a promising paradigm for developing multi-core phylogenetic applications. The results also demonstrate that different multi-core configurations must be tested in order to obtain optimum performance. We conclude that RF matrices play a critical role in developing techniques to summarize large collections of trees. PMID:20122186
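
    The parallelization pattern (not the MrsRF implementation itself) can be sketched as a map step that computes RF distances for pairs of trees on separate cores and a reduce step that assembles the t × t matrix. Here rf_distance is assumed to be supplied by a phylogenetics library and must be a picklable module-level function; shipping the whole tree list with each task is acceptable only for a small sketch like this one.

      # Map/reduce-style all-pairs Robinson-Foulds matrix (illustrative only).
      from multiprocessing import Pool
      from itertools import combinations
      import numpy as np

      def _map_task(args):
          (i, j), trees, rf_distance = args
          return i, j, rf_distance(trees[i], trees[j])       # "map" step

      def all_pairs_rf(trees, rf_distance, n_workers=4):
          t = len(trees)
          tasks = [((i, j), trees, rf_distance) for i, j in combinations(range(t), 2)]
          with Pool(n_workers) as pool:
              results = pool.map(_map_task, tasks)
          D = np.zeros((t, t))
          for i, j, d in results:                            # "reduce" step
              D[i, j] = D[j, i] = d
          return D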

  16. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    SciTech Connect

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
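
    To make the Jacobian-free Newton-Krylov idea concrete, the hedged sketch below takes one implicit Euler step for a generic stiff system du/dt = F(u) using SciPy's matrix-free newton_krylov solver; it does not implement the two-field extended MHD equations or the physics-based multigrid preconditioner described here, and the right-hand side is a placeholder.

      # Generic JFNK implicit step with SciPy (illustrative stiff system only).
      import numpy as np
      from scipy.optimize import newton_krylov

      def implicit_euler_step(F, u_old, dt):
          # Solve the nonlinear residual R(u) = u - u_old - dt * F(u) = 0
          # without ever forming the Jacobian explicitly.
          residual = lambda u: u - u_old - dt * F(u)
          return newton_krylov(residual, u_old, method="gmres")

      F = lambda u: -50.0 * u + np.sin(u)      # placeholder stiff right-hand side
      u_new = implicit_euler_step(F, np.ones(128), dt=0.1)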

  17. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Stanier, A.

    2016-12-01

    We demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton-Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ∼6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.

  18. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.

  19. 26 CFR 54.4980F-1 - Notice requirements for certain pension plan amendments significantly reducing the rate of future...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... significantly reducing the rate of future benefit accrual. The following questions and answers concern the... a plan amendment of an applicable pension plan that significantly reduces the rate of future...

  20. Incorporation of catalytic dehydrogenation into fischer-tropsch synthesis to significantly reduce carbon dioxide emissions

    DOEpatents

    Huffman, Gerald P.

    2012-11-13

    A new method of producing liquid transportation fuels from coal and other hydrocarbons that significantly reduces carbon dioxide emissions by combining Fischer-Tropsch synthesis with catalytic dehydrogenation is claimed. Catalytic dehydrogenation (CDH) of the gaseous products (C1-C4) of Fischer-Tropsch synthesis (FTS) can produce large quantities of hydrogen while converting the carbon to multi-walled carbon nanotubes (MWCNT). Incorporation of CDH into a FTS-CDH plant converting coal to liquid fuels can eliminate all or most of the CO2 emissions from the water-gas shift (WGS) reaction that is currently used to elevate the H2 level of coal-derived syngas for FTS. Additionally, the FTS-CDH process saves large amounts of water used by the WGS reaction and produces a valuable by-product, MWCNT.

  1. Pretreatment with a novel aquaporin 4 inhibitor, TGN-020, significantly reduces ischemic cerebral edema

    PubMed Central

    Igarashi, Hironaka; Huber, Vincent J.; Tsujita, Mika

    2010-01-01

    We investigated the in vivo effects of a novel aquaporin 4 (AQP4) inhibitor 2-(nicotinamide)-1,3,4-thiadiazole, TGN-020, in a mouse model of focal cerebral ischemia using 7.0-T magnetic resonance imaging (MRI). Pretreatment with TGN-020 significantly reduced brain edema associated with brain ischemia, as reflected by percentage of brain swelling volume (%BSV), 12.1 ± 6.3% in the treated group, compared to (20.8 ± 5.9%) in the control group (p < 0.05), and in the size of cortical infarction as reflected by the percentage of hemispheric lesion volume (%HLV), 20.0 ± 7.6% in the treated group, compared to 30.0 ± 9.1% in the control group (p < 0.05). The study indicated the potential pharmacological use of AQP4 inhibition in reducing brain edema associated with focal ischemia. PMID:20924629

  2. Reduced-complexity algorithms for data assimilation of large-scale systems

    NASA Astrophysics Data System (ADS)

    Chandrasekar, Jaganath

    Data assimilation is the use of measurement data to improve estimates of the state of dynamical systems using mathematical models. Estimates from models alone are inherently imperfect due to the presence of unknown inputs that affect dynamical systems and model uncertainties. Thus, data assimilation is used in many applications: from satellite tracking to biological systems monitoring. As the complexity of the underlying model increases, so does the complexity of the data assimilation technique. This dissertation considers reduced-complexity algorithms for data assimilation of large-scale systems. For linear discrete-time systems, an estimator that injects data into only a specified subset of the state estimates is considered. Bounds on the performance of the new filter are obtained, and conditions that guarantee the asymptotic stability of the new filter for linear time-invariant systems are derived. We then derive a reduced-order estimator that uses a reduced-order model to propagate the estimator state using a finite-horizon cost, and hence solutions of algebraic Riccati and Lyapunov equations are not required. Finally, a reduced-rank square-root filter that propagates only a few columns of the square root of the state-error covariance is developed. Specifically, the columns are chosen from the Cholesky factor of the state-error covariance. Next, data assimilation algorithms for nonlinear systems are considered. We first compare the performance of two suboptimal estimation algorithms, the extended Kalman filter and unscented Kalman filter. To reduce the computational requirements, variations of the unscented Kalman filter with reduced ensembles are suggested. Specifically, a reduced-rank unscented Kalman filter is introduced whose ensemble members are chosen according to the Cholesky decomposition of the square root of the pseudo-error covariance. Finally, a reduced-order model is used to propagate the pseudo-error covariance, while the full-order model is used to
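
    One of the reduced-ensemble ideas described above can be sketched as follows, under the assumptions of additive Gaussian statistics and a simple rank-r truncation of the Cholesky factor: sigma points are generated from only r columns of the covariance factor, giving 2r + 1 ensemble members instead of 2n + 1. The truncation rule and scaling constant below are placeholders, not the dissertation's construction.

      # Reduced-rank sigma-point sketch (rank-r truncation of the Cholesky factor).
      import numpy as np

      def reduced_sigma_points(x_mean, P, r, scale=1.0):
          L = np.linalg.cholesky(P)          # full Cholesky factor of the covariance
          cols = L[:, :r]                    # keep only r columns (assumed: the leading ones)
          pts = [x_mean]
          for k in range(r):
              pts.append(x_mean + scale * cols[:, k])
              pts.append(x_mean - scale * cols[:, k])
          return np.array(pts)               # 2r + 1 ensemble members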

  3. Using lytic bacteriophages to eliminate or significantly reduce contamination of food by foodborne bacterial pathogens.

    PubMed

    Sulakvelidze, Alexander

    2013-10-01

    Bacteriophages (also called 'phages') are viruses that kill bacteria. They are arguably the oldest (3 billion years old, by some estimates) and most ubiquitous (total number estimated to be 10^30 to 10^32) known organisms on Earth. Phages play a key role in maintaining microbial balance in every ecosystem where bacteria exist, and they are part of the normal microflora of all fresh, unprocessed foods. Interest in various practical applications of bacteriophages has been gaining momentum recently, with perhaps the most attention focused on using them to improve food safety. That approach, called 'phage biocontrol', typically includes three main types of applications: (i) using phages to treat domesticated livestock in order to reduce their intestinal colonization with, and shedding of, specific bacterial pathogens; (ii) treatments for decontaminating inanimate surfaces in food-processing facilities and other food establishments, so that foods processed on those surfaces are not cross-contaminated with the targeted pathogens; and (iii) post-harvest treatments involving direct applications of phages onto the harvested foods. This mini-review primarily focuses on the last type of intervention, which has been gaining the most momentum recently. Indeed, the results of recent studies dealing with improving food safety, and several recent regulatory approvals of various commercial phage preparations developed for post-harvest food safety applications, strongly support the idea that lytic phages may provide a safe, environmentally-friendly, and effective approach for significantly reducing contamination of various foods with foodborne bacterial pathogens. However, some important technical and nontechnical problems may need to be addressed before phage biocontrol protocols can become an integral part of routine food safety intervention strategies implemented by food industries in the USA.

  4. Pharmacological kynurenine 3-monooxygenase enzyme inhibition significantly reduces neuropathic pain in a rat model.

    PubMed

    Rojewska, Ewelina; Piotrowska, Anna; Makuch, Wioletta; Przewlocka, Barbara; Mika, Joanna

    2016-03-01

    Recent studies have highlighted the involvement of the kynurenine pathway in the pathology of neurodegenerative diseases, but the role of this system in neuropathic pain requires further extensive research. Therefore, the aim of our study was to examine the role of kynurenine 3-monooxygenase (Kmo), an enzyme that is important in this pathway, in a rat model of neuropathy after chronic constriction injury (CCI) to the sciatic nerve. For the first time, we demonstrated that the injury-induced increase in the Kmo mRNA levels in the spinal cord and the dorsal root ganglia (DRG) was reduced by chronic administration of the microglial inhibitor minocycline and that this effect paralleled a decrease in the intensity of neuropathy. Further, minocycline administration alleviated the lipopolysaccharide (LPS)-induced upregulation of Kmo mRNA expression in microglial cell cultures. Moreover, we demonstrated that not only indirect inhibition of Kmo using minocycline but also direct inhibition using Kmo inhibitors (Ro61-6048 and JM6) decreased neuropathic pain intensity on the third and the seventh days after CCI. Chronic Ro61-6048 administration diminished the protein levels of IBA-1, IL-6, IL-1beta and NOS2 in the spinal cord and/or the DRG. Both Kmo inhibitors potentiated the analgesic properties of morphine. In summary, our data suggest that in a neuropathic pain model, inhibiting Kmo function significantly reduces pain symptoms and enhances the effectiveness of morphine. The results of our studies show that the kynurenine pathway is an important mediator of neuropathic pain pathology and indicate that Kmo represents a novel pharmacological target for the treatment of neuropathy.

  5. Ethanol, not metabolized in brain, significantly reduces brain metabolism, probably via specific GABA(A) receptors

    PubMed Central

    Rae, Caroline D.; Davidson, Joanne E.; Maher, Anthony D.; Rowlands, Benjamin D.; Kashem, Mohammed A.; Nasrallah, Fatima A.; Rallapalli, Sundari K.; Cook, James M; Balcar, Vladimir J.

    2014-01-01

    Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we used an indirect approach, measuring the effect of alcohol on metabolism of [3-13C]pyruvate in the adult guinea pig brain cortical tissue slice and comparing the outcomes to those from a library of ligands active in the GABAergic system, as well as studying the metabolic fate of [1,2-13C]ethanol. Ethanol (10, 30 and 60 mM) significantly reduced metabolic flux into all measured isotopomers and reduced all metabolic pool sizes. The metabolic profiles of these three concentrations of ethanol were similar and clustered with that of the α4β3δ positive allosteric modulator DS2 (4-Chloro-N-[2-(2-thienyl)imidazo[1,2a]-pyridin-3-yl]benzamide). Ethanol at a very low concentration (0.1 mM) produced a metabolic profile which clustered with those from inhibitors of GABA uptake, and ligands showing affinity for α5, and to a lesser extent, α1-containing GABA(A)R. There was no measurable metabolism of [1,2-13C]ethanol, with no significant incorporation of 13C from [1,2-13C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite sizes similar to those seen with unlabeled ethanol. The reduction in metabolism seen in the presence of ethanol is therefore likely to be due to its actions at neurotransmitter receptors, particularly α4β3δ receptors, and not because ethanol is substituting as a substrate or because of the effects of the ethanol catabolites acetaldehyde or acetate. We suggest that the stimulatory effects of very low concentrations of ethanol are due to release of GABA via GAT1 and the subsequent interaction of this GABA with local α5-containing, and to a lesser extent, α1-containing GABA(A)R. PMID:24313287

  6. Bacteriophage preparation lytic for Shigella significantly reduces Shigella sonnei contamination in various foods

    PubMed Central

    Woolston, Joelle; Li, Manrong; Das, Chythanya; Sulakvelidze, Alexander

    2017-01-01

    ShigaShield™ is a phage preparation composed of five lytic bacteriophages that specifically target pathogenic Shigella species found in contaminated waters and foods. In this study, we examined the efficacy of various doses (9×10^5 to 9×10^7 PFU/g) of ShigaShield™ in removing experimentally added Shigella on deli meat, smoked salmon, pre-cooked chicken, lettuce, melon and yogurt. The highest dose (2×10^7 or 9×10^7 PFU/g) of ShigaShield™ applied to each food type resulted in at least 1 log (90%) reduction of Shigella in all the food types. There was significant (P<0.01) reduction in the Shigella levels in all phage treated foods compared to controls, except for the lowest phage dose (9×10^5 PFU/g) on melon where reduction was only ca. 45% (0.25 log). The genomes of each component phage in the cocktail were fully sequenced and analyzed, and they were found not to contain any “undesirable genes” including those listed in the US Code of Federal Regulations (40 CFR Ch1). Our data suggest that ShigaShield™ (and similar phage preparations with potent lytic activity against Shigella spp.) may offer a safe and effective approach for reducing the levels of Shigella in various foods that may be contaminated with the bacterium. PMID:28362863

  7. Sulfide-driven autotrophic denitrification significantly reduces N2O emissions.

    PubMed

    Yang, Weiming; Zhao, Qing; Lu, Hui; Ding, Zhi; Meng, Liao; Chen, Guang-Hao

    2016-03-01

    The Sulfate reduction-Autotrophic denitrification-Nitrification Integrated (SANI) process builds on anaerobic carbon conversion through biological sulfate reduction and autotrophic denitrification using the sulfide byproduct from the preceding reaction. This study confirmed additional decreases in N2O emissions from sulfide-driven autotrophic denitrification by investigating N2O reduction, accumulation, and emission at different sulfide/nitrate (S/N) mass ratios at pH 7 in a long-term laboratory-scale granular sludge autotrophic denitrification reactor. The N2O reduction rate was linearly proportional to the sulfide concentration, which confirmed that no sulfide inhibition of N2O reductase occurred. At S/N = 5.0 g-S/g-N, the rate achieved by sulfide-driven autotrophic denitrifying granular sludge (average granule size = 701 μm) was 27.7 mg-N/g-VSS/h (i.e., 2 and 4 times greater than the rates at 2.5 and 0.8 g-S/g-N, respectively). Sulfide thus stimulates rather than inhibits N2O reduction, regardless of the granule size of the sulfide-driven autotrophic denitrifying sludge involved. With the 701 μm granular sludge at S/N = 5.0 g-S/g-N, the accumulations of N2O, nitrite and free nitrous acid (FNA) were 4.7%, 11.4% and 4.2% of those at 3.0 g-S/g-N, respectively. The accumulation of FNA can inhibit N2O reduction and increase N2O accumulation during sulfide-driven autotrophic denitrification. In addition, the N2O gas emission level from the reactor increased significantly from 14.1 ± 0.5 ppmv (0.002% of the N load) to 3707.4 ± 36.7 ppmv (0.405% of the N load) as the S/N mass ratio in the influent decreased from 2.1 to 1.4 g-S/g-N over the course of the 120-day continuous monitoring period. Sulfide-driven autotrophic denitrification may therefore significantly reduce greenhouse gas emissions from biological nutrient removal when sulfur conversion processes are applied.

  8. A PDE-based Regularization Algorithm toward Reducing Speckle Tracking Noise: A Feasibility Study for Ultrasound Breast Elastography

    PubMed Central

    Guo, Li; Xu, Yan; Xu, Zhengfu; Jiang, Jingfeng

    2015-01-01

    Obtaining accurate ultrasonically-estimated displacements along both axial (parallel to the acoustic beam) and lateral (perpendicular to the beam) directions is an important task for various clinical elastography applications (e.g. modulus reconstruction and temperature imaging). In this study, a partial differential equation (PDE)-based regularization algorithm was proposed to enhance motion tracking accuracy. More specifically, the proposed PDE-based algorithm, utilizing two-dimensional displacement estimates from a conventional elastography system, attempted to iteratively reduce noise contained in the original displacement estimates by mathematical regularization. In this study, the physical constraint used by the above-mentioned mathematical regularization was tissue incompressibility. This proposed algorithm was tested using computer-simulated data, a tissue-mimicking phantom and in vivo breast lesion data. Computer simulation results showed that the method significantly improved the accuracy of lateral tracking (e.g. 17X at 0.5% compression). From in vivo breast lesion data investigated, we have found that, as compared to the conventional method, higher quality axial and lateral strain images (e.g. at least 78% improvements among the estimated contrast-to-noise ratios of lateral strain images) were obtained. Our initial results demonstrated that this conceptually and computationally simple method could be useful to improve the image quality for ultrasound elastography with current clinical equipment as a post-processing tool. PMID:25452434

  9. Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic

    NASA Technical Reports Server (NTRS)

    Neuman, Frank; Erzberger, Heinz

    1991-01-01

    The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served sequencing (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
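
    The FCFS sequencing-and-spacing step can be illustrated with the toy sketch below: aircraft are ordered by estimated time of arrival, and each scheduled landing time is delayed just enough to satisfy the pairwise separation required behind its predecessor. The separation values and weight classes are illustrative placeholders, not values from the report, and the time-advance and reordering of tightly spaced groups discussed above are not shown.

      # Toy FCFS sequencing with weight-class-dependent spacing (illustrative values).
      SEP = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
             ("large", "heavy"): 60, ("large", "large"): 80}   # seconds, placeholders

      def fcfs_schedule(arrivals):
          """arrivals: list of (eta_seconds, weight_class) tuples."""
          ordered = sorted(arrivals)                      # first come, first served by ETA
          schedule, prev_time, prev_class = [], None, None
          for eta, wclass in ordered:
              if prev_time is None:
                  t = eta
              else:
                  # Delay just enough to respect the required separation.
                  t = max(eta, prev_time + SEP[(prev_class, wclass)])
              schedule.append((t, wclass))
              prev_time, prev_class = t, wclass
          return schedule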

  10. An algorithm for finding biologically significant features in microarray data based on a priori manifold learning.

    PubMed

    Hira, Zena M; Trigeorgis, George; Gillies, Duncan F

    2014-01-01

    Microarray databases are a large source of genetic data, which, upon proper analysis, could enhance our understanding of biology and medicine. Many microarray experiments have been designed to investigate the genetic mechanisms of cancer, and analytical approaches have been applied in order to classify different types of cancer or distinguish between cancerous and non-cancerous tissue. However, microarrays are high-dimensional datasets with high levels of noise and this causes problems when using machine learning methods. A popular approach to this problem is to search for a set of features that will simplify the structure and to some degree remove the noise from the data. The most widely used approach to feature extraction is principal component analysis (PCA) which assumes a multivariate Gaussian model of the data. More recently, non-linear methods have been investigated. Among these, manifold learning algorithms, for example Isomap, aim to project the data from a higher dimensional space onto a lower dimension one. We have proposed a priori manifold learning for finding a manifold in which a representative set of microarray data is fused with relevant data taken from the KEGG pathway database. Once the manifold has been constructed the raw microarray data is projected onto it and clustering and classification can take place. In contrast to earlier fusion based methods, the prior knowledge from the KEGG databases is not used in, and does not bias the classification process--it merely acts as an aid to find the best space in which to search the data. In our experiments we have found that using our new manifold method gives better classification results than using either PCA or conventional Isomap.
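
    A simplified sketch of the general workflow (not the authors' exact projection scheme) is given below using scikit-learn: microarray expression profiles are augmented with prior-knowledge features (here an arbitrary placeholder matrix standing in for KEGG-derived data), an Isomap embedding is fitted on the fused data, and a classifier is trained in the embedded space.

      # Hedged sketch: Isomap embedding of fused expression + prior features,
      # followed by k-NN classification in the embedded space.
      import numpy as np
      from sklearn.manifold import Isomap
      from sklearn.neighbors import KNeighborsClassifier

      def classify_on_manifold(X_expr, X_prior, y, n_components=10, n_neighbors=8):
          X_fused = np.hstack([X_expr, X_prior])     # fuse expression and prior data
          emb = Isomap(n_neighbors=n_neighbors, n_components=n_components)
          Z = emb.fit_transform(X_fused)             # low-dimensional manifold coordinates
          clf = KNeighborsClassifier().fit(Z, y)
          return emb, clf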

  11. Design of a fast echo matching algorithm to reduce crosstalk with Doppler shifts in ultrasonic ranging

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Guo, Rui; Wu, Jun-an

    2017-02-01

    Crosstalk is a main cause of erroneous distance measurements by ultrasonic sensors, and the problem becomes more difficult to deal with under Doppler effects. In this paper, crosstalk reduction with Doppler shifts on small platforms is addressed, and a fast echo matching algorithm (FEMA) is proposed on the basis of chaotic sequences and pulse coding technology, then verified by applying it to match practical echoes. Finally, we discuss how to select both better mapping methods for chaotic sequences and algorithm parameters for a higher achievable maximum of the cross-correlation peaks. The results indicate the following: logistic mapping is preferred for generating good chaotic sequences, with high autocorrelation even when the sequence length is very limited; FEMA not only matches echoes and calculates distance accurately, with an error mostly below 5%, but also incurs nearly the same computational cost for static or kinematic ranging, much lower than that of direct Doppler compensation (DDC) with the same frequency compensation step; and both the sensitivity to threshold selection and the performance of FEMA depend significantly on the achievable maximum of the cross-correlation peaks, so a higher peak is preferred and can be used as a criterion for algorithm parameter optimization under practical conditions.
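
    The coding-and-matching idea can be sketched as follows: a binary excitation code is generated from a logistic map, and the echo is located at the peak of its cross-correlation with the received signal. Doppler compensation, thresholding and all numeric values below are simplified assumptions rather than the FEMA implementation.

      # Sketch: logistic-map code generation and echo matching by cross-correlation.
      import numpy as np

      def logistic_code(n, x0=0.3, r=3.99):
          x, code = x0, []
          for _ in range(n):
              x = r * x * (1.0 - x)                 # logistic map iteration
              code.append(1.0 if x > 0.5 else -1.0) # binarize to +/-1 chips
          return np.array(code)

      def echo_delay(received, code):
          corr = np.correlate(received, code, mode="valid")
          return int(np.argmax(np.abs(corr)))       # sample index of best match

      code = logistic_code(127)
      rx = np.concatenate([np.zeros(500), code, np.zeros(200)]) + 0.2 * np.random.randn(827)
      print(echo_delay(rx, code))                   # ~500 (where the echo begins)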

  12. Iterative positioning algorithm to reduce the impact of diffuse reflection on an indoor visible light positioning system

    NASA Astrophysics Data System (ADS)

    Huang, Heqing; Feng, Lihui; Guo, Peng; Yang, Aiying; Ni, Guoqiang

    2016-06-01

    Recently, indoor visible light localization has become attractive. Unfortunately, its performance is limited by diffuse reflection. Here, the diffuse reflection is estimated by a bilinear interpolation-based method. A received signal strength-based iterative visible light positioning algorithm is proposed to reduce the influence of diffuse reflection by subtracting the estimated diffuse-reflection signal from the received signal. Simulations are carried out to evaluate the proposed iterative positioning algorithm in a typical scenario with different parameters for the field-of-view (FOV) of the receiver and the reflectivity of the wall. Results show that the proposed algorithm can reduce the average positioning error by a factor of 12 in a typical scenario and can greatly reduce the positioning error across a range of receiver FOVs and wall reflectivities. The proposed algorithm is effective and robust in reducing the degradation caused by diffuse reflection in a positioning system and will have many potential applications in indoor localization scenarios.

  13. Soil nitrate reducing processes - drivers, mechanisms for spatial variation, and significance for nitrous oxide production.

    PubMed

    Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M; Daniell, Tim J

    2012-01-01

    The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the processes. There is an increasing understanding associated with many of these controls on flux through the nitrogen cycle in soil systems. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N2O production from soils.

  14. Soil nitrate reducing processes – drivers, mechanisms for spatial variation, and significance for nitrous oxide production

    PubMed Central

    Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M.; Daniell, Tim J.

    2012-01-01

    The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the processes. There is an increasing understanding associated with many of these controls on flux through the nitrogen cycle in soil systems. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N2O production from soils. PMID:23264770

  15. Selecting the optimum quasi-steady-state species for reduced chemical kinetic mechanisms using a genetic algorithm

    SciTech Connect

    Montgomery, Christopher J.; Yang, Chongguan; Parkinson, Alan R.; Chen, J.-Y.

    2006-01-01

    A genetic optimization algorithm has been applied to the selection of quasi-steady-state (QSS) species in reduced chemical kinetic mechanisms. The algorithm seeks to minimize the error between reduced and detailed chemistry for simple reactor calculations approximating conditions of interest for a computational fluid dynamics simulation. The genetic algorithm does not guarantee that the global optimum will be found, but much greater accuracy can be obtained than by choosing QSS species through a simple kinetic criterion or by human trial and error. The algorithm is demonstrated for methane-air combustion over a range of temperatures and stoichiometries and for homogeneous charge compression ignition engine combustion. The results are in excellent agreement with those predicted by the baseline mechanism. A factor of two reduction in the number of species was obtained for a skeletal mechanism that had already been greatly reduced from the parent detailed mechanism.

  16. The Potential for Bayesian Compressive Sensing to Significantly Reduce Electron Dose in High Resolution STEM Images

    SciTech Connect

    Stevens, Andrew J.; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D.

    2014-02-11

    The use of high resolution imaging methods in the scanning transmission electron microscope (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example in the study of organic systems, in tomography and during in-situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high resolution STEM images. These experiments successively reduce the number of pixels in the image (thereby reducing the overall dose while maintaining the high resolution information) and show promising results for reconstructing images from this reduced set of randomly collected measurements. We show that this approach is valid for both atomic resolution images and nanometer resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these post acquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or alignment of the microscope itself.
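    A hedged sketch of the acquire-fewer-pixels-then-reconstruct workflow follows (not taken from the paper): the Bayesian dictionary-learning reconstruction used by the authors is replaced here by simple biharmonic inpainting from scikit-image purely to keep the example short, and the test image and 20% sampling fraction are stand-ins.

        import numpy as np
        from skimage import data
        from skimage.restoration import inpaint_biharmonic

        # Stand-in for a STEM image; Bayesian dictionary learning is replaced by
        # biharmonic inpainting only to illustrate the sample-then-reconstruct idea.
        image = data.camera().astype(float) / 255.0

        rng = np.random.default_rng(0)
        keep_fraction = 0.2                       # e.g. 20% of pixels acquired
        mask = rng.random(image.shape) < keep_fraction

        subsampled = np.where(mask, image, 0.0)   # pixels the scan actually visits
        missing = ~mask                           # pixels never exposed to the beam
        reconstruction = inpaint_biharmonic(subsampled, missing)

        rmse = np.sqrt(np.mean((reconstruction - image) ** 2))
        print(f"kept {keep_fraction:.0%} of pixels, RMSE = {rmse:.4f}")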

  17. MO-FG-204-03: Using Edge-Preserving Algorithm for Significantly Improved Image-Domain Material Decomposition in Dual Energy CT

    SciTech Connect

    Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L

    2015-06-15

    Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images. For DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material-decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). Results: The results show that the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved by HYPR-NLM. Conclusion: HYPR
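    For illustration only (not the authors' code), image-domain two-material decomposition amounts to inverting a small per-pixel linear system mapping low/high-energy attenuation onto basis-material images; the sketch below performs that direct inversion on synthetic data and then applies scikit-image's non-local means as a stand-in for the HYPR-NLM combination. The decomposition matrix A, noise levels, and image contents are hypothetical.

        import numpy as np
        from skimage.restoration import denoise_nl_means

        rng = np.random.default_rng(1)

        # Synthetic low/high-energy images (attenuation values are hypothetical).
        truth_iodine = np.zeros((128, 128))
        truth_iodine[40:80, 40:80] = 1.0
        truth_water = np.ones((128, 128))
        A = np.array([[0.30, 1.00],      # rows: low/high energy
                      [0.18, 0.95]])     # cols: iodine/water basis materials
        low = A[0, 0] * truth_iodine + A[0, 1] * truth_water \
              + 0.02 * rng.standard_normal((128, 128))
        high = A[1, 0] * truth_iodine + A[1, 1] * truth_water \
               + 0.02 * rng.standard_normal((128, 128))

        # Direct per-pixel matrix inversion: amplifies noise, as the abstract notes.
        Ainv = np.linalg.inv(A)
        decomposed = np.stack([low, high], axis=-1) @ Ainv.T
        iodine_direct, water_direct = decomposed[..., 0], decomposed[..., 1]

        # Edge-preserving non-local means as a stand-in for the HYPR-NLM combination.
        iodine_nlm = denoise_nl_means(iodine_direct, h=0.1, patch_size=5, patch_distance=6)

        print("background noise (std) direct:", iodine_direct[:30, :30].std().round(3),
              "after NLM:", iodine_nlm[:30, :30].std().round(3))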

  18. Modeling an aquatic ecosystem: application of an evolutionary algorithm with genetic doping to reduce prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Buscema, Massimo

    2016-04-01

    Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change: the Upper Illinois River Basin and the Central Colorado Assessment Project Area in the USA, and the Southland region of New Zealand.

  19. A constrained reduced-dimensionality search algorithm to follow chemical reactions on potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Lankau, Timm; Yu, Chin-Hui

    2013-06-01

    A constrained reduced-dimensionality algorithm can be used to efficiently locate transition states and products in reactions involving conformational changes. The search path (SP) is constructed stepwise from linear combinations of a small set of manually chosen internal coordinates, namely the predictors. The majority of the internal coordinates, the correctors, are optimized at every step of the SP to minimize the total energy of the system so that the path becomes a minimum energy path connecting products and transition states with the reactants. Problems arise when the set of predictors needs to include weak coordinates, for example, dihedral angles, as well as strong ones such as bond distances. Two principal constraining methods for the weak coordinates are proposed to mend this situation: static and dynamic constraints. Dynamic constraints are automatically activated and revoked depending on the state of the weak coordinates among the predictors, while static ones require preset control factors and act permanently. All these methods enable the successful application (4 reactions are presented involving cyclohexane, alanine dipeptide, trimethylsulfonium chloride, and azafulvene) of the reduced dimensionality method to reactions where the reaction path covers large conformational changes in addition to the formation/breaking of chemical bonds. Dynamic constraints are found to be the most efficient method as they require neither additional information about the geometry of the transition state nor fine tuning of control parameters.

  20. Significant long-term increase of fossil fuel CO2 uptake from reduced marine calcification

    NASA Astrophysics Data System (ADS)

    Ridgwell, A.; Zondervan, I.; Hargreaves, J. C.; Bijma, J.; Lenton, T. M.

    2006-11-01

    Analysis of available plankton manipulation experiments demonstrates a previously unrecognized wide range of sensitivities of biogenic calcification to simulated anthropogenic acidification of the ocean, with the "lab rat" of planktic calcifiers, Emiliania huxleyi, not representative of calcification generally. We assess the implications of the experimental uncertainty in plankton calcification response by creating an ensemble of realizations of an Earth system model that encapsulates a comparable range of uncertainty in calcification response. We predict a substantial future reduction in marine carbonate production, with ocean CO2 sequestration across the model ensemble enhanced by between 62 and 199 PgC by the year 3000, equivalent to a reduction in the atmospheric fossil fuel CO2 burden at that time of up to 13%. Concurrent changes in ocean circulation and surface temperatures contribute about one third to the overall importance of reduced plankton calcification.

  1. Combining contact tracing with targeted indoor residual spraying significantly reduces dengue transmission.

    PubMed

    Vazquez-Prokopec, Gonzalo M; Montgomery, Brian L; Horne, Peter; Clennon, Julie A; Ritchie, Scott A

    2017-02-01

    The widespread transmission of dengue viruses (DENV), coupled with the alarming increase of birth defects and neurological disorders associated with Zika virus, has put the world in dire need of more efficacious tools for Aedes aegypti-borne disease mitigation. We quantitatively investigated the epidemiological value of location-based contact tracing (identifying potential out-of-home exposure locations by phone interviews) to infer transmission foci where high-quality insecticide applications can be targeted. Space-time statistical modeling of data from a large epidemic affecting Cairns, Australia, in 2008-2009 revealed a complex pattern of transmission driven primarily by human mobility (Cairns accounted for ~60% of virus transmission to and from residents of satellite towns, and 57% of all potential exposure locations were nonresidential). Targeted indoor residual spraying with insecticides in potential exposure locations reduced the probability of future DENV transmission by 86 to 96%, compared to unsprayed premises. Our findings provide strong evidence for the effectiveness of combining contact tracing with residual spraying within a developed urban center, and should be directly applicable to areas with similar characteristics (for example, southern USA, Europe, or Caribbean countries) that need to control localized Aedes-borne virus transmission or to protect pregnant women's homes in areas with active Zika transmission. Future theoretical and empirical research should focus on evaluation of the applicability and scalability of this approach to endemic areas with variable population size and force of DENV infection.

  2. Inviting consumers to downsize fast-food portions significantly reduces calorie consumption.

    PubMed

    Schwartz, Janet; Riis, Jason; Elbel, Brian; Ariely, Dan

    2012-02-01

    Policies that mandate calorie labeling in fast-food and chain restaurants have had little or no observable impact on calorie consumption to date. In three field experiments, we tested an alternative approach: activating consumers' self-control by having servers ask customers if they wanted to downsize portions of three starchy side dishes at a Chinese fast-food restaurant. We consistently found that 14-33 percent of customers accepted the downsizing offer, and they did so whether or not they were given a nominal twenty-five-cent discount. Overall, those who accepted smaller portions did not compensate by ordering more calories in their entrées, and the total calories served to them were, on average, reduced by more than 200. We also found that accepting the downsizing offer did not change the amount of uneaten food left at the end of the meal, so the calorie savings during purchasing translated into calorie savings during consumption. Labeling the calorie content of food during one of the experiments had no measurable impact on ordering behavior. If anything, the downsizing offer was less effective in changing customers' ordering patterns with the calorie labeling present. These findings highlight the potential importance of portion-control interventions that specifically activate consumers' self-control.

  3. Combining contact tracing with targeted indoor residual spraying significantly reduces dengue transmission

    PubMed Central

    Vazquez-Prokopec, Gonzalo M.; Montgomery, Brian L.; Horne, Peter; Clennon, Julie A.; Ritchie, Scott A.

    2017-01-01

    The widespread transmission of dengue viruses (DENV), coupled with the alarming increase of birth defects and neurological disorders associated with Zika virus, has put the world in dire need of more efficacious tools for Aedes aegypti–borne disease mitigation. We quantitatively investigated the epidemiological value of location-based contact tracing (identifying potential out-of-home exposure locations by phone interviews) to infer transmission foci where high-quality insecticide applications can be targeted. Space-time statistical modeling of data from a large epidemic affecting Cairns, Australia, in 2008–2009 revealed a complex pattern of transmission driven primarily by human mobility (Cairns accounted for ~60% of virus transmission to and from residents of satellite towns, and 57% of all potential exposure locations were nonresidential). Targeted indoor residual spraying with insecticides in potential exposure locations reduced the probability of future DENV transmission by 86 to 96%, compared to unsprayed premises. Our findings provide strong evidence for the effectiveness of combining contact tracing with residual spraying within a developed urban center, and should be directly applicable to areas with similar characteristics (for example, southern USA, Europe, or Caribbean countries) that need to control localized Aedes-borne virus transmission or to protect pregnant women’s homes in areas with active Zika transmission. Future theoretical and empirical research should focus on evaluation of the applicability and scalability of this approach to endemic areas with variable population size and force of DENV infection. PMID:28232955

  4. The potential for Bayesian compressive sensing to significantly reduce electron dose in high-resolution STEM images.

    PubMed

    Stevens, Andrew; Yang, Hao; Carin, Lawrence; Arslan, Ilke; Browning, Nigel D

    2014-02-01

    The use of high-resolution imaging methods in scanning transmission electron microscopy (STEM) is limited in many cases by the sensitivity of the sample to the beam and the onset of electron beam damage (for example, in the study of organic systems, in tomography and during in situ experiments). To demonstrate that alternative strategies for image acquisition can help alleviate this beam damage issue, here we apply compressive sensing via Bayesian dictionary learning to high-resolution STEM images. These computational algorithms have been applied to a set of images with a reduced number of sampled pixels in the image. For a reduction in the number of pixels down to 5% of the original image, the algorithms can recover the original image from the reduced data set. We show that this approach is valid for both atomic-resolution images and nanometer-resolution studies, such as those that might be used in tomography datasets, by applying the method to images of strontium titanate and zeolites. As STEM images are acquired pixel by pixel while the beam is scanned over the surface of the sample, these postacquisition manipulations of the images can, in principle, be directly implemented as a low-dose acquisition method with no change in the electron optics or the alignment of the microscope itself.

  5. Cleanroom Maintenance Significantly Reduces Abundance but Not Diversity of Indoor Microbiomes

    PubMed Central

    Mahnert, Alexander; Vaishampayan, Parag; Probst, Alexander J.; Auerbach, Anna; Moissl-Eichinger, Christine; Venkateswaran, Kasthuri; Berg, Gabriele

    2015-01-01

    Cleanrooms have been considered microbially-reduced environments and are used to protect human health and industrial product assembly. However, recent analyses have deciphered a rather broad diversity of microbes in cleanrooms, whose origin as well as physiological status has not been fully understood. Here, we examined the input of intact microbial cells from a surrounding built environment into a spacecraft assembly cleanroom by applying a molecular viability assay based on propidium monoazide (PMA). The controlled cleanroom (CCR) was characterized by ~6.2×10^3 16S rRNA gene copies of intact bacterial cells per m² of floor surface, which represented only 1% of the total community that could be captured via molecular assays without the viability marker. This was in contrast to the uncontrolled adjoining facility (UAF), which had 12 times more living bacteria. Regarding diversity measures retrieved from 16S rRNA Illumina-tag analyses, we observed, however, only a minor drop in the cleanroom facility, allowing the conclusion that the number but not the diversity of microbes is strongly affected by cleaning procedures. Network analyses allowed tracking of a substantial input of living microbes to the cleanroom and a potential enrichment of survival specialists like bacterial spore formers and archaeal halophiles and mesophiles. Moreover, the cleanroom harbored a unique community including 11 exclusive genera, e.g., Haloferax and Sporosarcina, which are herein suggested as indicators of cleanroom environments. In sum, our findings provide evidence that archaea are alive in cleanrooms and that cleaning efforts and cleanroom maintenance substantially decrease the number but not the diversity of indoor microbiomes. PMID:26273838

  6. Bacteriophage Combinations Significantly Reduce Clostridium difficile Growth In Vitro and Proliferation In Vivo

    PubMed Central

    Nale, Janet Y.; Spencer, Janice; Hargreaves, Katherine R.; Buckley, Anthony M.; Trzepiński, Przemysław

    2015-01-01

    The microbiome dysbiosis caused by antibiotic treatment has been associated with both susceptibility to and relapse of Clostridium difficile infection (CDI). Bacteriophage (phage) therapy offers target specificity and dose amplification in situ, but few studies have focused on its use in CDI treatment. This mainly reflects the lack of strictly virulent phages that target this pathogen. While it is widely accepted that temperate phages are unsuitable for therapeutic purposes due to their transduction potential, analysis of seven C. difficile phages confirmed that this impact could be curtailed by the application of multiple phage types. Here, host range analysis of six myoviruses and one siphovirus was conducted on 80 strains representing 21 major epidemic and clinically severe ribotypes. The phages had complementary coverage, lysing 18 and 62 of the ribotypes and strains tested, respectively. Single-phage treatments of ribotype 076, 014/020, and 027 strains showed an initial reduction in the bacterial load followed by the emergence of phage-resistant colonies. However, these colonies remained susceptible to infection with an unrelated phage. In contrast, specific phage combinations caused the complete lysis of C. difficile in vitro and prevented the appearance of resistant/lysogenic clones. Using a hamster model, the oral delivery of optimized phage combinations resulted in reduced C. difficile colonization at 36 h postinfection. Interestingly, free phages were recovered from the bowel at this time. In a challenge model of the disease, phage treatment delayed the onset of symptoms by 33 h compared to the time of onset of symptoms in untreated animals. These data demonstrate the therapeutic potential of phage combinations to treat CDI. PMID:26643348

  7. Rifampicin and rifapentine significantly reduce concentrations of bedaquiline, a new anti-TB drug

    PubMed Central

    Svensson, Elin M.; Murray, Stephen; Karlsson, Mats O.; Dooley, Kelly E.

    2015-01-01

    Objectives Bedaquiline is the first drug of a new class approved for the treatment of TB in decades. Bedaquiline is metabolized by cytochrome P450 (CYP) 3A4 to a less-active M2 metabolite. Its terminal half-life is extremely long (5–6 months), complicating evaluations of drug–drug interactions. Rifampicin and rifapentine, two anti-TB drugs now being optimized to shorten TB treatment duration, are potent inducers of CYP3A4. This analysis aimed to predict the effect of repeated doses of rifampicin or rifapentine on the steady-state pharmacokinetics of bedaquiline and its M2 metabolite from single-dose data using a model-based approach. Methods Pharmacokinetic data for bedaquiline and M2 were obtained from a Phase I study involving 32 individuals each receiving two doses of bedaquiline, alone or together with multiple-dose rifampicin or rifapentine. Sampling was performed over 14 days following each bedaquiline dose. Pharmacokinetic analyses were performed using non-linear mixed-effects modelling. Models were used to simulate potential dose adjustments. Results Rifamycin co-administration increased bedaquiline clearance substantially: 4.78-fold [relative standard error (RSE) 9.10%] with rifampicin and 3.96-fold (RSE 5.00%) with rifapentine. Induction of M2 clearance was equally strong. Average steady-state concentrations of bedaquiline and M2 are predicted to decrease by 79% and 75% when given with rifampicin or rifapentine, respectively. Simulations indicated that increasing the bedaquiline dosage to mitigate the interaction would yield elevated M2 concentrations during the first treatment weeks. Conclusions Rifamycin antibiotics reduce bedaquiline concentrations substantially. In line with current treatment guidelines for drug-susceptible TB, concomitant use is not recommended, even with dose adjustment. PMID:25535219
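    As a quick, hedged consistency check (not part of the record), under linear pharmacokinetics the average steady-state concentration is inversely proportional to clearance, so the reported 4.78-fold induction by rifampicin implies roughly the quoted 79% decrease, and the 3.96-fold induction by rifapentine roughly the quoted 75%:

        \bar{C}_{ss} = \frac{F \cdot \mathrm{Dose}}{CL \cdot \tau}
        \;\Longrightarrow\;
        \frac{\bar{C}_{ss}^{\,\mathrm{induced}}}{\bar{C}_{ss}} = \frac{CL}{4.78\,CL} \approx 0.21
        \quad (\text{a reduction of } \approx 79\%); \qquad
        1 - \tfrac{1}{3.96} \approx 75\% \ \text{for rifapentine.}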

  8. Cleanroom Maintenance Significantly Reduces Abundance but Not Diversity of Indoor Microbiomes.

    PubMed

    Mahnert, Alexander; Vaishampayan, Parag; Probst, Alexander J; Auerbach, Anna; Moissl-Eichinger, Christine; Venkateswaran, Kasthuri; Berg, Gabriele

    2015-01-01

    Cleanrooms have been considered microbially-reduced environments and are used to protect human health and industrial product assembly. However, recent analyses have deciphered a rather broad diversity of microbes in cleanrooms, whose origin as well as physiological status has not been fully understood. Here, we examined the input of intact microbial cells from a surrounding built environment into a spacecraft assembly cleanroom by applying a molecular viability assay based on propidium monoazide (PMA). The controlled cleanroom (CCR) was characterized by ~6.2×10^3 16S rRNA gene copies of intact bacterial cells per m² of floor surface, which represented only 1% of the total community that could be captured via molecular assays without the viability marker. This was in contrast to the uncontrolled adjoining facility (UAF), which had 12 times more living bacteria. Regarding diversity measures retrieved from 16S rRNA Illumina-tag analyses, we observed, however, only a minor drop in the cleanroom facility, allowing the conclusion that the number but not the diversity of microbes is strongly affected by cleaning procedures. Network analyses allowed tracking of a substantial input of living microbes to the cleanroom and a potential enrichment of survival specialists like bacterial spore formers and archaeal halophiles and mesophiles. Moreover, the cleanroom harbored a unique community including 11 exclusive genera, e.g., Haloferax and Sporosarcina, which are herein suggested as indicators of cleanroom environments. In sum, our findings provide evidence that archaea are alive in cleanrooms and that cleaning efforts and cleanroom maintenance substantially decrease the number but not the diversity of indoor microbiomes.

  9. Social networking strategies that aim to reduce obesity have achieved significant although modest results.

    PubMed

    Ashrafian, Hutan; Toma, Tania; Harling, Leanne; Kerr, Karen; Athanasiou, Thanos; Darzi, Ara

    2014-09-01

    The global epidemic of obesity continues to escalate. Obesity accounts for an increasing proportion of the international socioeconomic burden of noncommunicable disease. Online social networking services provide an effective medium through which information may be exchanged between obese and overweight patients and their health care providers, potentially contributing to superior weight-loss outcomes. We performed a systematic review and meta-analysis to assess the role of these services in modifying body mass index (BMI). Our analysis of twelve studies found that interventions using social networking services produced a modest but significant 0.64 percent reduction in BMI from baseline for the 941 people who participated in the studies' interventions. We recommend that social networking services that target obesity should be the subject of further clinical trials. Additionally, we recommend that policy makers adopt reforms that promote the use of anti-obesity social networking services, facilitate multistakeholder partnerships in such services, and create a supportive environment to confront obesity and its associated noncommunicable diseases.

  10. Application of small-diameter inertial grade gyroscopes significantly reduces borehole position uncertainty

    SciTech Connect

    Uttecht, G.W.; de Wardt, J.P.

    1983-02-01

    Initial tests with a new directional survey tool show a significant enhancement in attainable accuracy over conventional instrumentation. Two prototype systems, developed over the last two years by Gyrodata, Inc., have recently been tested in a well in West Texas. Although many more tests are required, preliminary results indicate that the original design objective--borehole position uncertainty of less than 1.7 feet per 1,000 feet of hole--has been met. The Gyrodata Wellbore Surveyor employs an inertial grade rate gyro adapted from the aerospace industry. In combination with its other sensors and electronics, the device can sense the orientation of the earth's spin vector at each independent survey station. As a result, the major systematic errors associated with conventional gyros--geographical reference and unaccountable drift--are eliminated. Other sources of inaccuracy are minimized by the system's measuring techniques and operational procedures, and additional benefits should arise from faster survey speed and increased reliability. A true north reference device can also employ a small outside diameter since it requires only one gyro and one accelerometer, rather than the two or three of each needed in an inertial navigation system.

  11. Colchicine Significantly Reduces Incident Cancer in Gout Male Patients: A 12-Year Cohort Study.

    PubMed

    Kuo, Ming-Chun; Chang, Shun-Jen; Hsieh, Ming-Chia

    2015-12-01

    Patients with gout are more likely to develop most cancers than subjects without gout. Colchicine has been used for the treatment and prevention of gouty arthritis and has been reported to have an anticancer effect in vitro. However, to date no study has evaluated the relationship between colchicine use and incident cancers in patients with gout. This study enrolled male patients with gout identified in Taiwan's National Health Insurance Database for the years 1998 to 2011. Each gout patient was matched with 4 male controls by age and by month and year of first diagnosis, and was followed up until 2011. The study excluded those who were diagnosed with diabetes or any type of cancer within the year following enrollment. We calculated hazard ratios (HR), age-adjusted standardized incidence ratios, and incidence per 1000 person-years to evaluate cancer risk. A total of 24,050 male patients with gout and 76,129 male nongout controls were included. Patients with gout had a higher rate of incident all-cause cancers than controls (6.68% vs 6.43%, P = 0.006). A total of 13,679 patients with gout were defined as having been ever-users of colchicine and 10,371 patients with gout were defined as being never-users of colchicine. Ever-users of colchicine had a significantly lower HR of incident all-cause cancers than never-users of colchicine after adjustment for age (HR = 0.85, 95% CI = 0.77-0.94; P = 0.001). In conclusion, colchicine use was associated with a decreased risk of incident all-cause cancers in male Taiwanese patients with gout.

  12. Thyroid function appears to be significantly reduced in Space-borne MDS mice

    NASA Astrophysics Data System (ADS)

    Saverio Ambesi-Impiombato, Francesco; Curcio, Francesco; Fontanini, Elisabetta; Perrella, Giuseppina; Spelat, Renza; Zambito, Anna Maria; Damaskopoulou, Eleni; Peverini, Manola; Albi, Elisabetta

    It is known that prolonged space flights induced changes in human cardiovascular, musculoskeletal and nervous systems whose function is regulated by the thyroid gland but, until now, no data were reported about thyroid damage during space missions. We have demonstrated in vitro that, during space missions (Italian Soyuz Mission "ENEIDE" in 2005, Shuttle STS-120 "ESPERIA" in 2007), thyroid in vitro cultured cells did not respond to thyroid stimulating hormone (TSH) treatment; they appeared healthy and alive, despite their being in a pro-apoptotic state characterised by a variation of sphingomyelin metabolism and a consequent increase in ceramide content. The insensitivity to TSH was largely due to a rearrangement of specific cell membrane microdomains, acting as platforms for the TSH-receptor (TEXUS-44 mission in 2008). To study if these effects were present also in vivo, as part of the Mouse Drawer System (MDS) Tissue Sharing Program, we performed experiments in mice maintained onboard the International Space Station during the long-duration (90 days) exploration mission STS-129. After return to Earth, the thyroids isolated from the 3 animals were in part immediately frozen to study the morphological modification in space and in part immediately used to study the effect of TSH treatment. For this purpose small fragments of tissue were treated with 10^-7 or 10^-8 M TSH for 1 hour, using untreated fragments as controls. The fragments were then fixed with absolute ethanol for 10 min at room temperature and centrifuged for 20 min at 3000 × g. The supernatants were used for cAMP analysis whereas the pellets were used for protein determination and for immunoblotting analysis of TSH-receptor, sphingomyelinase and sphingomyelin-synthase. The results showed a modification of the thyroid structure, and the values of cAMP production after treatment with 10^-7 M TSH for 1 hour were significantly lower than those obtained in Earth's gravity. The treatment with TSH

  13. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    PubMed Central

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu

    2016-01-01

    Among non-small cell lung cancer (NSCLC) cases, adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The few overlaps between the two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed. PMID:27446945

  14. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm.

    PubMed

    Zhang, Lei; Wang, Linlin; Du, Bochuan; Wang, Tianjiao; Tian, Pu; Tian, Suyan

    2016-01-01

    Among non-small cell lung cancer (NSCLC) cases, adenocarcinoma (AC) and squamous cell carcinoma (SCC) are the two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to an NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is indeed a feature selection algorithm. Additionally, we applied SAMGSR to the AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. The few overlaps between the two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed.

  15. The tensor hypercontracted parametric reduced density matrix algorithm: coupled-cluster accuracy with O(r^4) scaling.

    PubMed

    Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David

    2013-08-07

    Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r^4), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T).
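    To make the factorization concrete (an illustrative NumPy aside, not the authors' implementation), tensor hypercontraction writes a rank-4 tensor as V_pqrs ≈ Σ_PQ X_pP X_qP Z_PQ X_rQ X_sQ; contracting through the factors never forms an r^4 intermediate, which is the source of the reduced scaling. The tensor below is synthetic and the dimensions are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        r, n_aux = 8, 20          # r basis functions, n_aux auxiliary points

        # Build a synthetic tensor in exact THC form to stand in for the ERI tensor.
        X = rng.standard_normal((r, n_aux))
        Z = rng.standard_normal((n_aux, n_aux))
        Z = 0.5 * (Z + Z.T)
        V = np.einsum('pP,qP,PQ,rQ,sQ->pqrs', X, X, Z, X, X)

        def thc_contract(X, Z, D):
            """Contract V_pqrs D_rs using only the THC factors (no r^4 intermediate)."""
            t = np.einsum('rQ,sQ,rs->Q', X, X, D)      # O(r^2 * n_aux)
            t = Z @ t                                   # O(n_aux^2)
            return np.einsum('pP,qP,P->pq', X, X, t)    # O(r^2 * n_aux)

        D = rng.standard_normal((r, r))
        direct = np.einsum('pqrs,rs->pq', V, D)
        print(np.allclose(direct, thc_contract(X, Z, D)))   # True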

  16. 26 CFR 54.4980F-1 - Notice requirements for certain pension plan amendments significantly reducing the rate of future...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... whether an amendment is a section 204(h) amendment. Thus, for example, provisions relating to the right to... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... (CONTINUED) PENSION EXCISE TAXES § 54.4980F-1 Notice requirements for certain pension plan...

  17. 26 CFR 54.4980F-1 - Notice requirements for certain pension plan amendments significantly reducing the rate of future...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... whether an amendment is a section 204(h) amendment. Thus, for example, provisions relating to the right to... amendments significantly reducing the rate of future benefit accrual. 54.4980F-1 Section 54.4980F-1 Internal... (CONTINUED) PENSION EXCISE TAXES § 54.4980F-1 Notice requirements for certain pension plan...

  18. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  19. On the retrieval of significant wave heights from spaceborne Synthetic Aperture Radar using the Max-Planck Institut algorithm.

    PubMed

    Violante-Carvalho, Nelson

    2005-12-01

    Synthetic Aperture Radar (SAR) onboard satellites is the only source of directional wave spectra with continuous and global coverage. Millions of SAR Wave Mode (SWM) imagettes have been acquired since the launch in the early 1990s of the first European Remote Sensing Satellite ERS-1 and its successors ERS-2 and ENVISAT, which has opened up many possibilities, especially for wave data assimilation purposes. The main aim of data assimilation is to improve forecasting by introducing available observations into the modeling procedures in order to minimize the differences between model estimates and measurements. However, there are limitations in the retrieval of the directional spectrum from SAR images due to nonlinearities in the mapping mechanism. The Max-Planck Institut (MPI) scheme, the first proposed and most widely used algorithm to retrieve directional wave spectra from SAR images, is employed to compare significant wave heights retrieved from ERS-1 SAR against buoy measurements and against the WAM wave model. It is shown that for periods shorter than 12 seconds the WAM model performs better than the MPI scheme, despite the fact that the model is used as the first guess for the MPI method; that is, the retrieval deteriorates the first guess. For periods longer than 12 seconds, the part of the spectrum that is directly measured by SAR, the performance of the MPI scheme is at least as good as that of the WAM model.

  20. Testing and Development of the Onsite Earthquake Early Warning Algorithm to Reduce Event Uncertainties

    NASA Astrophysics Data System (ADS)

    Andrews, J. R.; Cochran, E. S.; Hauksson, E.; Felizardo, C.; Liu, T.; Ross, Z.; Heaton, T. H.

    2015-12-01

    Primary metrics for measuring earthquake early warning (EEW) system and algorithm performance are the rate of false alarms and the uncertainty in earthquake parameters. The Onsite algorithm, currently one of three EEW algorithms implemented in ShakeAlert, uses the ground-motion period parameter (τc) and the peak initial displacement parameter (Pd) to estimate the magnitude and expected ground shaking of an ongoing earthquake. It is the only algorithm originally designed to issue single-station alerts, necessitating that results from individual stations be as reliable and accurate as possible. The ShakeAlert system has been undergoing testing on continuous real-time data in California for several years, and the latest version of the Onsite algorithm for several months. This permits analysis of the response to a range of signals, from environmental noise to hardware testing and maintenance procedures to moderate or large earthquake signals at varying distances from the networks. We find that our existing discriminator, relying only on τc and Pd, while performing well at excluding large teleseismic events, is less effective for moderate regional events and can also incorrectly exclude data from local events. Motivated by these experiences, we use a collection of waveforms from potentially problematic 'noise' events and real earthquakes to explore methods to discriminate real and false events, using the ground motion and period parameters available in Onsite's processing methodology. Once an event is correctly identified, a magnitude and location estimate is critical to determining the expected ground shaking. Scatter in the measured parameters translates to higher-than-desired uncertainty in Onsite's current calculations. We present an overview of alternative methods, including incorporation of polarization information, to improve parameter determination for a test suite including both large (M4 to M7) events and three years of small to moderate events across California.
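    As a hedged illustration (not ShakeAlert code), the two parameters named above can be computed from the first few seconds of P-wave displacement and velocity; the sketch below uses the conventional τc definition and a placeholder magnitude relation whose coefficients a and b are hypothetical, not the production calibration.

        import numpy as np

        def tau_c_and_pd(displacement, velocity, dt, window_s=3.0):
            """Ground-motion period parameter tau_c and peak initial displacement Pd
            computed from the first `window_s` seconds after the P-wave pick."""
            n = int(window_s / dt)
            u, v = displacement[:n], velocity[:n]
            # Conventional definition: tau_c = 2*pi / sqrt( integral(v^2) / integral(u^2) ).
            ratio = np.sum(v ** 2) / np.sum(u ** 2)
            tau_c = 2.0 * np.pi / np.sqrt(ratio)
            pd = np.max(np.abs(u))
            return tau_c, pd

        # Synthetic 5 Hz P-wave onset sampled at 100 Hz (purely illustrative).
        dt = 0.01
        t = np.arange(0.0, 3.0, dt)
        disp = 1e-4 * np.sin(2.0 * np.pi * 5.0 * t) * np.exp(-t)
        vel = np.gradient(disp, dt)

        tau_c, pd = tau_c_and_pd(disp, vel, dt)
        # Hypothetical single-station scaling M ~ a*log10(tau_c) + b; the coefficients
        # below are placeholders, not regionally calibrated values.
        a, b = 3.0, 5.0
        print(f"tau_c = {tau_c:.2f} s, Pd = {pd:.2e} m, M_est ~ {a * np.log10(tau_c) + b:.1f}")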

  1. Significantly Reduced Intensity of Infection but Persistent Prevalence of Schistosomiasis in a Highly Endemic Region in Mali after Repeated Treatment

    PubMed Central

    Landouré, Aly; Dembélé, Robert; Goita, Seydou; Kané, Mamadou; Tuinsma, Marjon; Sacko, Moussa; Toubali, Emily; French, Michael D.; Keita, Adama D.; Fenwick, Alan; Traoré, Mamadou S.; Zhang, Yaobi

    2012-01-01

    Background Preventive chemotherapy against schistosomiasis has been implemented since 2005 in Mali, targeting school-age children and adults at high risk. A cross-sectional survey was conducted in 2010 to evaluate the impact of repeated treatment among school-age children in the highly-endemic region of Segou. Methodology/Principal Findings The survey was conducted in six sentinel schools in three highly-endemic districts, and 640 school children aged 7–14 years were examined. Infections with Schistosoma haematobium and S. mansoni were diagnosed with the urine filtration and the Kato-Katz method respectively. Overall prevalence of S. haematobium infection was 61.7%, a significant reduction of 30% from the baseline in 2004 (p<0.01), while overall prevalence of S. mansoni infection was 12.7% which was not significantly different from the baseline. Overall mean intensity of S. haematobium and S. mansoni infection was 180.4 eggs/10 ml of urine and 88.2 epg in 2004 respectively. These were reduced to 33.2 eggs/10 ml of urine and 43.2 epg in 2010 respectively, a significant reduction of 81.6% and 51% (p<0.001). The proportion of heavy S. haematobium infections was reduced from 48.8% in 2004 to 13.8% in 2010, and the proportion of moderate and heavy S. mansoni infection was reduced from 15.6% in 2004 to 9.4% in 2010, both significantly (p<0.01). Mathematical modelling suggests that the observed results were in line with the expected changes. Conclusions/Significance Significant reduction in intensity of infection on both infections and modest but significant reduction in S. haematobium prevalence were achieved in highly-endemic Segou region after repeated chemotherapy. However, persistent prevalence of both infections and relatively high level of intensity of S. mansoni infection suggest that more intensified control measures be implemented in order to achieve the goal of schistosomiasis elimination. In addition, closer monitoring and evaluation activities are needed in

  2. Reduced brain levels of DHEAS in hepatic coma patients: significance for increased GABAergic tone in hepatic encephalopathy.

    PubMed

    Ahboucha, Samir; Talani, Giuseppe; Fanutza, Tomas; Sanna, Enrico; Biggio, Giovanni; Gamrani, Halima; Butterworth, Roger F

    2012-07-01

    Increased neurosteroids with allosteric modulatory activity on GABA(A) receptors, such as 3α,5α-tetrahydroprogesterone (allopregnanolone; ALLO), are candidates to explain the phenomenon of "increased GABAergic tone" in hepatic encephalopathy (HE). However, it is not known how changes of other GABA(A) receptor modulators such as dehydroepiandrosterone sulfate (DHEAS) contribute to altered GABAergic tone in HE. Concentrations of DHEAS were measured by radioimmunoassay in frontal cortex samples obtained at autopsy from 11 cirrhotic patients who died in hepatic coma and from an equal number of controls matched for age, gender, and autopsy delay intervals, free from hepatic or neurological diseases. To assess whether reduced brain DHEAS contributes to increased GABAergic tone, in vitro patch clamp recordings in rat prefrontal cortex neurons were performed. A significant reduction of DHEAS (5.81±0.88 ng/g tissue) compared to control values (9.70±0.79 ng/g, p<0.01) was found. Brain levels of DHEAS in patients with liver disease who died without HE (11.43±1.74 ng/g tissue), and in a patient who died in uremic coma (12.56 ng/g tissue), were within the control range. Increasing ALLO enhances GABAergic tonic currents concentration-dependently, but increasing DHEAS reduces these currents. High concentrations of DHEAS (50 μM) reduce GABAergic tonic currents in the presence of ALLO, whereas reduced concentrations of DHEAS (1 μM) further stimulate these currents. These findings demonstrate that decreased concentrations of DHEAS together with increased brain concentrations of ALLO increase GABAergic tonic currents synergistically, suggesting that reduced brain DHEAS could further increase GABAergic tone in human HE.

  3. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon, in addition to automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
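    A compressed, illustrative sketch of the reduced-basis half of such a method follows (not the authors' algorithm): snapshots of a two-term parameterized linear system are selected greedily, here using the true projection error over a small training set in place of the cheap error estimator and the dimension-adaptive sparse grid described in the abstract. All sizes and the model problem itself are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200                                   # full-order dimension
        A0 = np.diag(2.0 + rng.random(n))
        G = 0.1 * rng.standard_normal((n, n))
        A1 = G @ G.T                              # symmetric positive semi-definite term
        f = rng.standard_normal(n)

        def solve_full(mu):
            return np.linalg.solve(A0 + mu * A1, f)

        # Training set of parameter samples (stand-in for sparse-grid collocation nodes).
        train = np.linspace(0.0, 1.0, 50)
        snapshots = {mu: solve_full(mu) for mu in train}

        basis = np.zeros((n, 0))
        for _ in range(6):                        # build a 6-dimensional reduced basis
            # Greedy step: pick the parameter whose solution is worst represented.
            errors = {mu: np.linalg.norm(u - basis @ (basis.T @ u))
                      for mu, u in snapshots.items()}
            mu_star = max(errors, key=errors.get)
            new = snapshots[mu_star] - basis @ (basis.T @ snapshots[mu_star])
            basis = np.column_stack([basis, new / np.linalg.norm(new)])

        # Reduced solve at an unseen parameter: project, solve the small system, lift back.
        mu_test = 0.37
        Ar = basis.T @ (A0 + mu_test * A1) @ basis
        u_rb = basis @ np.linalg.solve(Ar, basis.T @ f)
        u_ref = solve_full(mu_test)
        print("relative error:", np.linalg.norm(u_rb - u_ref) / np.linalg.norm(u_ref))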

  4. Bacteriophage cocktail significantly reduces or eliminates Listeria monocytogenes contamination on lettuce, apples, cheese, smoked salmon and frozen foods.

    PubMed

    Perera, Meenu N; Abuladze, Tamar; Li, Manrong; Woolston, Joelle; Sulakvelidze, Alexander

    2015-12-01

    ListShield™, a commercially available bacteriophage cocktail that specifically targets Listeria monocytogenes, was evaluated as a bio-control agent for L. monocytogenes in various Ready-To-Eat foods. ListShield™ treatment of experimentally contaminated lettuce, cheese, smoked salmon, and frozen entrées significantly reduced (p < 0.05) L. monocytogenes contamination by 91% (1.1 log), 82% (0.7 log), 90% (1.0 log), and 99% (2.2 log), respectively. ListShield™ application, alone or combined with an antioxidant/anti-browning solution, resulted in a statistically significant (p < 0.001) 93% (1.1 log) reduction of L. monocytogenes contamination on apple slices after 24 h at 4 °C. Treatment of smoked salmon from a commercial processing facility with ListShield™ eliminated L. monocytogenes (no detectable L. monocytogenes) in both the naturally contaminated and experimentally contaminated salmon fillets. The organoleptic quality of foods was not affected by application of ListShield™, as no differences in the color, taste, or appearance were detectable. Bio-control of L. monocytogenes with lytic bacteriophage preparations such as ListShield™ can offer an environmentally-friendly, green approach for reducing the risk of listeriosis associated with the consumption of various foods that may be contaminated with L. monocytogenes.
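    For readers comparing the paired figures above, percent and log10 reductions are related as follows (an aside; the small mismatches in some pairs presumably reflect rounding of the underlying counts):

        \text{percent reduction} = \left(1 - 10^{-\Delta}\right) \times 100\%,
        \qquad \Delta = \log_{10}\!\frac{N_{\text{untreated}}}{N_{\text{treated}}};
        \quad \Delta = 1.0 \Rightarrow 90\%, \qquad \Delta = 2.2 \Rightarrow 99.4\%.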

  5. A new optimization framework using genetic algorithm and artificial neural network to reduce uncertainties in petroleum reservoir models

    NASA Astrophysics Data System (ADS)

    Maschio, Célio; José Schiozer, Denis

    2015-01-01

    In this article, a new optimization framework to reduce uncertainties in petroleum reservoir attributes using artificial intelligence techniques (neural network and genetic algorithm) is proposed. Instead of using the deterministic values of the reservoir properties, as in a conventional process, the parameters of the probability density function of each uncertain attribute are set as design variables in an optimization process using a genetic algorithm. The objective function (OF) is based on the misfit of a set of models, sampled from the probability density function, and a symmetry factor (which represents the distribution of curves around the history) is used as weight in the OF. Artificial neural networks are trained to represent the production curves of each well and the proxy models generated are used to evaluate the OF in the optimization process. The proposed method was applied to a reservoir with 16 uncertain attributes and promising results were obtained.
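    A rough sketch of the proxy-assisted optimization loop follows (an aside, not the authors' code): scikit-learn's MLPRegressor stands in for the per-well neural-network proxies, SciPy's differential evolution stands in for the genetic algorithm, and simulator_misfit is a synthetic placeholder for the expensive reservoir-simulation history misfit over the PDF parameters of the uncertain attributes.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def simulator_misfit(theta):
            # Placeholder for the reservoir-simulation history misfit; theta would be,
            # e.g., the location/spread parameters of each uncertain attribute's PDF.
            return float(np.sum((theta - np.array([0.3, 1.2, 0.7])) ** 2)
                         + 0.05 * np.sin(5.0 * theta).sum())

        # 1) Sample the design space and run the "simulator" to build training data.
        X = rng.uniform(0.0, 2.0, size=(300, 3))
        y = np.array([simulator_misfit(x) for x in X])

        # 2) Train an ANN proxy of the misfit (stand-in for the per-well proxies).
        proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                             random_state=0).fit(X, y)

        # 3) Evolutionary optimization of the PDF parameters on the cheap proxy
        #    (differential evolution used here in place of the paper's GA).
        result = differential_evolution(
            lambda t: float(proxy.predict(t.reshape(1, -1))[0]),
            bounds=[(0.0, 2.0)] * 3, seed=0)
        print("proxy optimum:", result.x.round(2),
              "true misfit there:", round(simulator_misfit(result.x), 4))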

  6. A novel topical formulation containing strontium chloride significantly reduces the intensity and duration of cowhage-induced itch.

    PubMed

    Papoiu, Alexandru D P; Valdes-Rodriguez, Rodrigo; Nattkemper, Leigh A; Chan, Yiong-Huak; Hahn, Gary S; Yosipovitch, Gil

    2013-09-04

    The aim of this double-blinded, vehicle-controlled study was to test the antipruritic efficacy of topical strontium to relieve a nonhistaminergic form of itch that would be clinically relevant for chronic pruritic diseases. Itch induced with cowhage is mediated by PAR2 receptors which are considered to play a major role in itch of atopic dermatitis and possibly other acute and chronic pruritic conditions. The topical strontium hydrogel formulation (TriCalm®) was tested in a head-to-head comparison with 2 common topical formulations marketed as antipruritics: hydrocortisone and diphenhydramine, for their ability to relieve cowhage-induced itch. Topically-applied strontium salts were previously found to be effective for reducing histamine-induced and IgE-mediated itch in humans. However, histamine is not considered the critical mediator in the majority of skin diseases presenting with chronic pruritus. The current study enrolled 32 healthy subjects in which itch was induced with cowhage before and after skin treatment with a gel containing 4% SrCl2, control vehicle, topical 1% hydrocortisone and topical 2% diphenhydramine. Strontium significantly reduced the peak intensity and duration of cowhage-induced itch when compared to the control itch curve, and was significantly superior to the other two over-the-counter antipruritic agents and its own vehicle in antipruritic effect. We hereby show that a 4% topical strontium formulation has a robust antipruritic effect, not only against histamine-mediated itch, but also for non-histaminergic pruritus induced via the PAR2 pathway, using cowhage.

  7. The Deutsch-Jozsa algorithm as a suitable framework for MapReduce in a quantum computer

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

    The essence of the MapReduce paradigm is a parallel, distributed algorithm running across hundreds or thousands of machines. In a crude fashion, this parallelism is reminiscent of computation by quantum parallelism, which is possible only with quantum computers. Deutsch and Jozsa showed that there is a class of problems which can be solved more efficiently by a quantum computer than by any classical or stochastic method. The method of computation by quantum parallelism solves the problem with certainty in exponentially less time than any classical computation. This leads to the question of whether it would be possible to implement the MapReduce paradigm in a quantum computer and harness this incredible speedup over the classical computation performed by current computers. Although present quantum computers are not robust enough for code writing and execution, it is worthwhile to explore this question from a theoretical point of view. We will show, from a theoretical point of view, that the Deutsch-Jozsa algorithm is a suitable framework for implementing the MapReduce paradigm in a quantum computer.
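    For context (an illustrative aside, not from the record), the Deutsch-Jozsa algorithm itself is small enough to simulate classically with a state vector; the sketch below distinguishes a constant from a balanced n-bit oracle with a single oracle application, which is the speedup alluded to above. The example oracles are arbitrary.

        import numpy as np

        def deutsch_jozsa(f, n):
            """Return 'constant' or 'balanced' for an oracle f: {0,1}^n -> {0,1}."""
            dim = 2 ** n
            # Start in |0...0>, apply Hadamards: uniform superposition over inputs.
            amps = np.full(dim, 1.0 / np.sqrt(dim))
            # Phase oracle: |x> -> (-1)^{f(x)} |x>  (the ancilla in |-> is implicit).
            for x in range(dim):
                bits = tuple((x >> i) & 1 for i in range(n))
                if f(bits):
                    amps[x] *= -1.0
            # Final layer of Hadamards; read off the amplitude of |0...0>.
            H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
            Hn = H
            for _ in range(n - 1):
                Hn = np.kron(Hn, H)
            amp0 = (Hn @ amps)[0]
            # |amp0|^2 is 1 for a constant f and 0 for a balanced f.
            return 'constant' if abs(amp0) > 0.5 else 'balanced'

        n = 3
        constant = lambda bits: 0
        balanced = lambda bits: bits[0]          # 1 on exactly half of the inputs
        print(deutsch_jozsa(constant, n), deutsch_jozsa(balanced, n))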

  8. ZnO nanowire/reduced graphene oxide nanocomposites for significantly enhanced photocatalytic degradation of Rhodamine 6G

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Jing; Su, Yanjie; Xu, Minghan; Yang, Zhi; Zhang, Yafei

    2014-02-01

    We have demonstrated a facile and low-cost approach to synthesize ZnO nanowire (NW)/reduced graphene oxide (RGO) nanocomposites, in which ZnO NWs and graphene oxide (GO) were produced separately in large scale and then hybridized into ZnO NW/RGO nanocomposites by mechanical mixing and low-temperature thermal reduction. Rhodamine 6G (Rh6G) was used as a model dye to evaluate the photocatalytic properties of the ZnO NW/RGO nanocomposites. The obtained nanocomposites show significantly enhanced photocatalytic performance, taking only 10 min to decompose over 98% of the Rh6G. Finally, the mechanism of the greatly enhanced photocatalytic activity of the ZnO NW/RGO nanocomposites is studied. The enhancement is mainly attributed to the ability of the RGO nanosheets to accept electrons from ZnO NWs excited by ultraviolet (UV) irradiation, which increases electron migration efficiency and prolongs the lifetime of the holes in the ZnO NWs. The high charge-separation efficiency of photo-generated electron-hole pairs leads directly to a lower recombination rate in the ZnO NW/RGO nanocomposites and allows more electrons and holes to participate in the radical reactions with Rh6G, thus significantly improving the photocatalytic properties. The high degradation efficiency makes the ZnO NW/RGO nanocomposites promising candidates for application in environmental pollutant and wastewater treatment.
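    As a back-of-the-envelope aside (assuming pseudo-first-order kinetics, which the record itself does not state), the ">98% in 10 min" figure corresponds to an apparent rate constant of at least about 0.39 per minute:

        \ln\!\frac{C_0}{C(t)} = k_{\mathrm{app}}\, t
        \;\Longrightarrow\;
        k_{\mathrm{app}} \ge \frac{\ln(1/0.02)}{10\ \mathrm{min}} \approx 0.39\ \mathrm{min}^{-1}.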

  9. Modest hypoxia significantly reduces triglyceride content and lipid droplet size in 3T3-L1 adipocytes

    SciTech Connect

    Hashimoto, Takeshi; Yokokawa, Takumi; Endo, Yuriko; Iwanaka, Nobumasa; Higashida, Kazuhiko; Taguchi, Sadayoshi

    2013-10-11

    Highlights:
    • Long-term hypoxia decreased the size of LDs and lipid storage in 3T3-L1 adipocytes.
    • Long-term hypoxia increased basal lipolysis in 3T3-L1 adipocytes.
    • Hypoxia decreased lipid-associated proteins in 3T3-L1 adipocytes.
    • Hypoxia decreased basal glucose uptake and lipogenic proteins in 3T3-L1 adipocytes.
    • Hypoxia-mediated lipogenesis may be an attractive therapeutic target against obesity.
    Abstract: Background: A previous study has demonstrated that endurance training under hypoxia results in a greater reduction in body fat mass compared to exercise under normoxia. However, the cellular and molecular mechanisms that underlie this hypoxia-mediated reduction in fat mass remain uncertain. Here, we examine the effects of modest hypoxia on adipocyte function. Methods: Differentiated 3T3-L1 adipocytes were incubated at 5% O2 for 1 week (long-term hypoxia, HL) or one day (short-term hypoxia, HS) and compared with a normoxia control (NC). Results: HL, but not HS, resulted in a significant reduction in lipid droplet size and triglyceride content (by 50%) compared to NC (p < 0.01). As estimated by glycerol release, isoproterenol-induced lipolysis was significantly lowered by hypoxia, whereas the release of free fatty acids under the basal condition was prominently enhanced with HL compared to NC or HS (p < 0.01). Lipolysis-associated proteins, such as perilipin 1 and hormone-sensitive lipase, were unchanged, whereas adipose triglyceride lipase and its activator protein CGI-58 were decreased with HL in comparison to NC. Interestingly, such lipogenic proteins as fatty acid synthase, lipin-1, and peroxisome proliferator-activated receptor gamma were decreased. Furthermore, the uptake of glucose, the major precursor of 3-glycerol phosphate for triglyceride synthesis, was significantly reduced in HL compared to NC or HS (p < 0.01). Conclusion: We conclude that hypoxia has a direct impact on reducing the triglyceride content and lipid droplet size via

  10. Design of an efficient real-time algorithm using reduced feature dimension for recognition of speed limit signs.

    PubMed

    Cho, Hanmin; Han, Seungwha; Hwang, Sun-Young

    2013-01-01

    We propose a real-time algorithm for recognition of speed limit signs from a moving vehicle. Linear Discriminant Analysis (LDA) required for classification is performed by using Discrete Cosine Transform (DCT) coefficients. To reduce feature dimension in LDA, DCT coefficients are selected by a devised discriminant function derived from information obtained by training. Binarization and thinning are performed on a Region of Interest (ROI) obtained by preprocessing a detected ROI prior to DCT for further reduction of computation time in DCT. This process is performed on a sequence of image frames to increase the hit rate of recognition. Experimental results show that arithmetic operations are reduced by about 60%, while hit rates reach about 100% compared to previous works.
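    A condensed illustration of this pipeline (not the authors' implementation): compute the 2-D DCT of each ROI, keep a small low-frequency block of coefficients as the reduced feature vector (a simple stand-in for the paper's trained discriminant-function selection), and classify with LDA. scikit-learn's digits images are used here in place of speed-limit-sign ROIs, and the block size K is arbitrary.

        import numpy as np
        from scipy.fftpack import dct
        from sklearn.datasets import load_digits
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        def dct2(block):
            # Separable 2-D DCT-II with orthonormal scaling.
            return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

        # Stand-in data: 8x8 digit images instead of binarized speed-limit-sign ROIs.
        digits = load_digits()
        images, labels = digits.images, digits.target

        K = 3                                   # keep the KxK low-frequency corner
        def features(img):
            return dct2(img)[:K, :K].ravel()    # reduced feature dimension (K*K values)

        X = np.array([features(img) for img in images])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

        clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
        print("hit rate on held-out ROIs:", round(clf.score(X_te, y_te), 3))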

  11. Taurolidine-citrate lock solution (TauroLock) significantly reduces CVAD-associated gram-positive infections in pediatric cancer patients

    PubMed Central

    Simon, Arne; Ammann, Roland A; Wiszniewsky, Gertrud; Bode, Udo; Fleischhack, Gudrun; Besuden, Mette M

    2008-01-01

    Background Taurolidin/Citrate (TauroLock™), a lock solution with broad spectrum antimicrobial activity, may prevent bloodstream infection (BSI) due to coagulase-negative staphylococci (CoNS or 'MRSE' in case of methicillin-resistant isolates) in pediatric cancer patients with a long term central venous access device (CVAD; Port-, Broviac-, or Hickman-catheter type). Methods In a single center prospective 48-month cohort study we compared all patients receiving anticancer chemotherapy from April 2003 to March 2005 (group 1, heparin lock with 200 IU/ml sterile normal saline 0.9%; Canusal® Wockhardt UK Ltd, Wrexham, Wales) and all patients from April 2005 to March 2007 (group 2; taurolidine 1.35%/Sodium Citrate 4%; TauroLock™, Tauropharm, Waldbüttelbrunn, Germany). Results In group 1 (heparin), 90 patients had 98 CVAD in use during the surveillance period. 14 of 30 (47%) BSI were 'primary Gram positive BSI due to CoNS (n = 4) or MRSE (n = 10)' [incidence density (ID); 2.30 per 1000 inpatient CVAD-utilization days]. In group 2 (TauroLock™), 89 patients had 95 CVAD in use during the surveillance period. 3 of 25 (12%) BSI were caused by CoNS (ID, 0.45). The difference in the ID between the two groups was statistically significant (P = 0.004). Conclusion The use of Taurolidin/Citrate (TauroLock™) significantly reduced the number and incidence density of primary catheter-associated BSI due to CoNS and MRSE in pediatric cancer patients. PMID:18664278

  12. Involvement of the PRKCB1 gene in autistic disorder: significant genetic association and reduced neocortical gene expression.

    PubMed

    Lintas, C; Sacco, R; Garbett, K; Mirnics, K; Militerni, R; Bravaccio, C; Curatolo, P; Manzi, B; Schneider, C; Melmed, R; Elia, M; Pascucci, T; Puglisi-Allegra, S; Reichelt, K-L; Persico, A M

    2009-07-01

    Protein kinase C enzymes play an important role in signal transduction, regulation of gene expression and control of cell division and differentiation. The fsI and betaII isoenzymes result from the alternative splicing of the PKCbeta gene (PRKCB1), previously found to be associated with autism. We performed a family-based association study in 229 simplex and 5 multiplex families, and a postmortem study of PRKCB1 gene expression in temporocortical gray matter (BA41/42) of 11 autistic patients and controls. PRKCB1 gene haplotypes are significantly associated with autism (P<0.05) and with the autistic endophenotype of enhanced oligopeptiduria (P<0.05). Temporocortical PRKCB1 gene expression was reduced on average by 35 and 31% for the PRKCB1-1 and PRKCB1-2 isoforms (P<0.01 and <0.05, respectively) according to qPCR. Protein amounts measured for the PKCbetaII isoform were similarly decreased by 35% (P=0.05). Decreased gene expression characterized patients carrying the 'normal' PRKCB1 alleles, whereas patients homozygous for the autism-associated alleles displayed mRNA levels comparable to those of controls. Whole genome expression analysis unveiled a partial disruption in the coordinated expression of PKCbeta-driven genes, including several cytokines. These results confirm the association between autism and PRKCB1 gene variants, point toward PKCbeta roles in altered epithelial permeability, demonstrate a significant downregulation of brain PRKCB1 gene expression in autism and suggest that it could represent a compensatory adjustment aimed at limiting an ongoing dysreactive immune process. Altogether, these data underscore potential PKCbeta roles in autism pathogenesis and spur interest in the identification and functional characterization of PRKCB1 gene variants conferring autism vulnerability.

  13. Significant change of local atomic configurations at surface of reduced activation Eurofer steels induced by hydrogenation treatments

    NASA Astrophysics Data System (ADS)

    Greculeasa, S. G.; Palade, P.; Schinteie, G.; Kuncser, A.; Stanciu, A.; Lungu, G. A.; Porosnicu, C.; Lungu, C. P.; Kuncser, V.

    2017-04-01

    Reduced-activation steels such as Eurofer alloys are candidates for supporting plasma facing components in tokamak-like nuclear fusion reactors. In order to investigate the impact of hydrogen/deuterium insertion in their crystalline lattice, annealing treatments in hydrogen atmosphere have been applied on Eurofer slabs. The resulting samples have been analyzed with respect to local structure and atomic configuration both before and after successive annealing treatments, by X-ray diffractometry (XRD), scanning electron microscopy and energy dispersive spectroscopy (SEM-EDS), X-ray photoelectron spectroscopy (XPS) and conversion electron Mössbauer spectroscopy (CEMS). The corroborated data point to a bcc-type structure of the non-hydrogenated alloy, with an average alloy composition approaching Fe0.9Cr0.1 along a depth of about 100 nm. EDS elemental maps do not indicate surface inhomogeneities in concentration, whereas the Mössbauer spectra reveal significant deviations from homogeneous alloying. The hydrogenation increases the expulsion of the Cr atoms toward the surface layer and decreases their oxidation, with considerable influence on the surface properties of the steel. The hydrogenation treatment is therefore proposed as a potential alternative for a convenient engineering of the surface of different Fe-Cr based alloys.

  14. Optical trapping of nanoparticles with significantly reduced laser powers by using counter-propagating beams (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Zhao, Chenglong; LeBrun, Thomas W.

    2015-08-01

    Gold nanoparticles (GNPs) have wide applications ranging from nanoscale heating to cancer therapy and biological sensing. Optical trapping of GNPs as small as 18 nm has been successfully achieved with laser power as high as 855 mW, but such high powers can damage trapped particles (particularly biological systems) as well as heat the fluid, thereby destabilizing the trap. In this article, we show that counter-propagating beams (CPB) can successfully trap GNPs with laser powers reduced by a factor of 50 compared to that with a single beam. The trapping position of a GNP inside a counter-propagating trap can be easily modulated by either changing the relative power or position of the two beams. Furthermore, we find that, under our conditions, while a single beam most stably traps a single particle, the counter-propagating beam can more easily trap multiple particles. This (CPB) trap is compatible with the feedback control system we recently demonstrated to increase the trapping lifetimes of nanoparticles by more than an order of magnitude. Thus, we believe that the future development of advanced trapping techniques combining counter-propagating traps together with control systems should significantly extend the capabilities of optical manipulation of nanoparticles for prototyping and testing 3D nanodevices and bio-sensing.

  15. Documentation for subroutine REDUC3, an algorithm for the linear filtering of gridded magnetic data

    USGS Publications Warehouse

    Blakely, Richard J.

    1977-01-01

    Subroutine REDUC3 transforms a total field anomaly h1(x,y) , measured on a horizontal and rectangular grid, into a new anomaly h2(x,y). This new anomaly is produced by the same source as h1(x,y) , but (1) is observed at a different elevation, (2) has a source with a different direction of magnetization, and/or (3) has a different direction of residual field. Case 1 is tantamount to upward or downward continuation. Cases 2 and 3 are 'reduction to the pole', if the new inclinations of both the magnetization and regional field are 90 degrees. REDUC3 is a filtering operation applied in the wave-number domain. It first Fourier transforms h1(x,y) , multiplies by the appropriate filter, and inverse Fourier transforms the result to obtain h2(x,y). No assumptions are required about the shape of the source or how the intensity of magnetization varies within it.
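
    The wavenumber-domain workflow is simple to sketch: Fourier transform the gridded anomaly, multiply by a filter, and inverse transform. The fragment below shows only case 1 (upward continuation) on a hypothetical grid; reduction to the pole would substitute a direction-dependent filter, and the grid size and spacing here are assumptions, not values from the report.

      # Sketch of the filter-in-the-wavenumber-domain approach used by REDUC3-style
      # codes: FFT the grid, multiply by a filter, inverse FFT. Upward continuation
      # is shown; the reduction-to-the-pole filter depends on field/magnetization
      # directions and is not implemented here.
      import numpy as np

      def upward_continue(h1, dx, dy, dz):
          """Continue a gridded anomaly h1 upward by dz (same length units as dx, dy)."""
          ny, nx = h1.shape
          kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
          ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
          k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
          return np.real(np.fft.ifft2(np.fft.fft2(h1) * np.exp(-k * dz)))

      # Hypothetical 64 x 64 grid with 100 m spacing, continued upward by 200 m.
      h1 = np.random.default_rng(1).normal(size=(64, 64))
      h2 = upward_continue(h1, dx=100.0, dy=100.0, dz=200.0)
      print(h1.std(), h2.std())                              # continuation smooths the field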

  16. Generation of SNP datasets for orangutan population genomics using improved reduced-representation sequencing and direct comparisons of SNP calling algorithms

    PubMed Central

    2014-01-01

    Background High-throughput sequencing has opened up exciting possibilities in population and conservation genetics by enabling the assessment of genetic variation at genome-wide scales. One approach to reduce genome complexity, i.e. investigating only parts of the genome, is reduced-representation library (RRL) sequencing. Like similar approaches, RRL sequencing reduces ascertainment bias due to simultaneous discovery and genotyping of single-nucleotide polymorphisms (SNPs) and does not require reference genomes. Yet, generating such datasets remains challenging due to laboratory and bioinformatical issues. In the laboratory, current protocols require improvements with regards to sequencing homologous fragments to reduce the number of missing genotypes. From the bioinformatical perspective, the reliance of most studies on a single SNP caller disregards the possibility that different algorithms may produce disparate SNP datasets. Results We present an improved RRL (iRRL) protocol that maximizes the generation of homologous DNA sequences, thus achieving improved genotyping-by-sequencing efficiency. Our modifications facilitate generation of single-sample libraries, enabling individual genotype assignments instead of pooled-sample analysis. We sequenced ~1% of the orangutan genome with 41-fold median coverage in 31 wild-born individuals from two populations. SNPs and genotypes were called using three different algorithms. We obtained substantially different SNP datasets depending on the SNP caller. Genotype validations revealed that the Unified Genotyper of the Genome Analysis Toolkit and SAMtools performed significantly better than a caller from CLC Genomics Workbench (CLC). Of all conflicting genotype calls, CLC was only correct in 17% of the cases. Furthermore, conflicting genotypes between two algorithms showed a systematic bias in that one caller almost exclusively assigned heterozygotes, while the other one almost exclusively assigned homozygotes. Conclusions

  17. A multi-channel feedback algorithm for the development of active liners to reduce noise in flow duct applications

    NASA Astrophysics Data System (ADS)

    Mazeaud, B.; Galland, M.-A.

    2007-10-01

    The present paper deals with the design and development of the active part of a hybrid acoustic treatment combining porous material properties and active control techniques. Such an acoustic system was developed to reduce evolutionary tones in flow duct applications. Attention was particularly focused on the optimization process of the controller part of the hybrid cell. A piezo-electric transducer combining efficiency and compactness was selected as a secondary source. A digital adaptive feedback control algorithm was specially developed in order to operate independently cell by cell, and to facilitate a subsequent increase in the liner surface. An adaptive bandpass filter was used to prevent the development of instabilities due to the coupling occurring between cells. Special care was taken in the development of such systems for time-varying primary signals. An automatic frequency detection loop was therefore introduced in the control algorithm, enabling the continuous adaptation of the bandpass filtering. The multi-cell structure was experimentally validated for a four-cell system located on a duct wall in the presence of flow. Substantial noise reduction was obtained throughout the 0.7-2.5 kHz frequency range, with flow velocities up to 50 m/s.
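
    As a rough illustration of adaptively tracking a tonal disturbance, the sketch below runs a textbook two-weight LMS canceller with a sine/cosine reference at a detected tone frequency. It is a didactic stand-in, not the paper's feedback controller or its adaptive bandpass filter; the sample rate, tone frequency and step size are assumptions.

      # Didactic LMS canceller for a single tone: the two-weight filter adapts a
      # sine/cosine reference to cancel the tone, leaving the residual in e. The
      # tone frequency f0 would come from an automatic frequency detection loop.
      import numpy as np

      fs, f0, mu = 8000.0, 1000.0, 0.01          # sample rate, detected tone (Hz), step size
      n = np.arange(16000)
      rng = np.random.default_rng(2)
      primary = np.sin(2 * np.pi * f0 * n / fs + 0.7) + 0.05 * rng.normal(size=n.size)

      w = np.zeros(2)                            # weights for the sin/cos reference pair
      e = np.zeros(n.size)                       # residual after cancellation
      for i in n:
          x = np.array([np.sin(2 * np.pi * f0 * i / fs), np.cos(2 * np.pi * f0 * i / fs)])
          y = w @ x                              # adaptive estimate of the tone
          e[i] = primary[i] - y
          w += 2 * mu * e[i] * x                 # LMS update

      print("residual power:", float(np.mean(e[-2000:] ** 2)))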

  18. Oxidation of naturally reduced uranium in aquifer sediments by dissolved oxygen and its potential significance to uranium plume persistence

    NASA Astrophysics Data System (ADS)

    Davis, J. A.; Smith, R. L.; Bohlke, J. K.; Jemison, N.; Xiang, H.; Repert, D. A.; Yuan, X.; Williams, K. H.

    2015-12-01

    The occurrence of naturally reduced zones is common in alluvial aquifers in the western U.S.A. due to the burial of woody debris in flood plains. Such reduced zones are usually heterogeneously dispersed in these aquifers and characterized by high concentrations of organic carbon, reduced mineral phases, and reduced forms of metals, including uranium(IV). The persistence of high concentrations of dissolved uranium(VI) at uranium-contaminated aquifers on the Colorado Plateau has been attributed to slow oxidation of insoluble uranium(IV) mineral phases found in association with these reducing zones, although there is little understanding of the relative importance of various potential oxidants. Four field experiments were conducted within an alluvial aquifer adjacent to the Colorado River near Rifle, CO, wherein groundwater associated with the naturally reduced zones was pumped into a gas-impermeable tank, mixed with a conservative tracer (Br-), bubbled with a gas phase composed of 97% O2 and 3% CO2, and then returned to the subsurface in the same well from which it was withdrawn. Within minutes of re-injection of the oxygenated groundwater, dissolved uranium(VI) concentrations increased from less than 1 μM to greater than 2.5 μM, demonstrating that oxygen can be an important oxidant for uranium in such field systems if supplied to the naturally reduced zones. Dissolved Fe(II) concentrations decreased to the detection limit, but increases in sulfate could not be detected due to high background concentrations. Changes in nitrogen species concentrations were variable. The results contrast with other laboratory and field results in which oxygen was introduced to systems containing high concentrations of mackinawite (FeS), rather than the more crystalline iron sulfides found in aged, naturally reduced zones. The flux of oxygen to the naturally reduced zones in the alluvial aquifers occurs mainly through interactions between groundwater and gas phases at the water table

  19. Feasibility of an automatic computer-assisted algorithm for the detection of significant coronary artery disease in patients presenting with acute chest pain.

    PubMed

    Kang, Ki-woon; Chang, Hyuk-jae; Shim, Hackjoon; Kim, Young-jin; Choi, Byoung-wook; Yang, Woo-in; Shim, Jee-young; Ha, Jongwon; Chung, Namsik

    2012-04-01

    Automatic computer-assisted detection (auto-CAD) of significant coronary artery disease (CAD) in coronary computed tomography angiography (cCTA) has been shown to have relatively high accuracy. However, to date, scarce data are available regarding the performance of auto-CAD in the setting of acute chest pain. This study sought to demonstrate the feasibility of an auto-CAD algorithm for cCTA in patients presenting with acute chest pain. We retrospectively investigated 398 consecutive patients (229 male, mean age 50±21 years) who had acute chest pain and underwent cCTA between Apr 2007 and Jan 2011 in the emergency department (ED). All cCTA data were analyzed using an auto-CAD algorithm for the detection of >50% CAD on cCTA. The accuracy of auto-CAD was compared with the formal radiology report. In 380 of 398 patients (18 were excluded due to failure of data processing), per-patient analysis of auto-CAD revealed the following: sensitivity 94%, specificity 63%, positive predictive value (PPV) 76%, and negative predictive value (NPV) 89%. After the exclusion of 37 cases that were interpreted as invalid by the auto-CAD algorithm, the NPV was further increased up to 97%, considering the false-negative cases in the formal radiology report, and was confirmed by subsequent invasive angiogram during the index visit. We successfully demonstrated the high accuracy of an auto-CAD algorithm, compared with the formal radiology report, for the detection of >50% CAD on cCTA in the setting of acute chest pain. The auto-CAD algorithm can be used to facilitate the decision-making process in the ED.
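
    For reference, the per-patient accuracy figures follow directly from a 2 x 2 contingency table. The sketch below applies the standard definitions to hypothetical counts; the study reports only the resulting percentages, not the raw table.

      # Standard diagnostic-accuracy definitions applied to hypothetical counts;
      # the study itself reports sensitivity 94%, specificity 63%, PPV 76%, NPV 89%.
      def diagnostic_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      print(diagnostic_metrics(tp=160, fp=50, fn=10, tn=160))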

  20. Silencing porcine CMAH and GGTA1 genes significantly reduces xenogeneic consumption of human platelets by porcine livers

    PubMed Central

    Butler, James R.; Paris, Leela L.; Blankenship, Ross L.; Sidner, Richard A.; Martens, Gregory R.; Ladowski, Joeseph M.; Li, Ping; Estrada, Jose L; Tector, Matthew; Tector, A. Joseph

    2015-01-01

    Background A profound thrombocytopenia limits hepatic xenotransplantation in the pig-to-primate model. Porcine livers also have shown the ability to phagocytose human platelets in the absence of immune-mediated injury. Recently, inactivation of the porcine ASGR1 gene has been shown to decrease this phenomenon. Inactivating GGTA1 and CMAH genes has reduced the antibody-mediated barrier to xenotransplantation; herein we describe the effect that these modifications have on xenogeneic consumption of human platelets in the absence of immune-mediated graft injury. Methods WT, ASGR1−/−, GGTA1−/−, and GGTA1−/−CMAH−/− knockout pigs were compared for their xenogeneic hepatic consumption of human platelets. An in vitro assay was established to measure the association of human platelets with liver sinusoidal endothelial cells (LSECs) by immunohistochemistry. Perfusion models were used to measure human platelet uptake in livers from WT, ASGR1−/−, GGTA1−/−, and GGTA1−/− CMAH−/− pigs. Results GGTA1−/− CMAH−/− LSECs exhibited reduced levels of human platelet binding in vitro, when compared to GGTA1−/− and WT LSECs. In a continuous perfusion model, GGTA1−/− CMAH−/− livers consumed fewer human platelets than GGTA1−/− and WT livers. GGTA1−/− CMAH−/− livers also consumed fewer human platelets than ASGR1−/− livers in a single pass model. Conclusions Silencing the porcine carbohydrate genes necessary to avoid antibody-mediated rejection in a pig-to-human model also reduces the xenogeneic consumption of human platelets by the porcine liver. The combination of these genetic modifications may be an effective strategy to limit the thrombocytopenia associated with pig-to-human hepatic xenotransplantation. PMID:26906939

  1. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    SciTech Connect

    Li, Si; Xu, Yuesheng; Zhang, Jiahan; Lipson, Edward; Krol, Andrzej; Feiglin, David; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin

    2015-08-15

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
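
    For readers unfamiliar with the penalty at the core of these methods, the sketch below runs plain smoothed-TV denoising by gradient descent on a toy phantom. It is not the PAPA fixed-point proximity algorithm, the Poisson likelihood model, or the high-order variant; it only illustrates how a TV term suppresses noise while preserving edges, at the risk of the staircase artifacts discussed above. The phantom, weights and step size are arbitrary choices.

      # Smoothed total-variation denoising by gradient descent on a toy phantom;
      # a didactic stand-in for the TV penalty used in the reconstruction models.
      import numpy as np

      def tv_denoise(noisy, lam=0.15, eps=1e-2, step=0.1, iters=300):
          u = noisy.copy()
          for _ in range(iters):
              ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
              uy = np.diff(u, axis=0, append=u[-1:, :])
              mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
              px, py = ux / mag, uy / mag
              div = np.diff(px, axis=1, prepend=0.0) + np.diff(py, axis=0, prepend=0.0)
              u -= step * ((u - noisy) - lam * div)          # gradient step on the smoothed objective
          return u

      img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0      # piecewise-constant phantom
      noisy = img + 0.2 * np.random.default_rng(3).normal(size=img.shape)
      den = tv_denoise(noisy)
      print("mean abs error, noisy vs denoised:",
            float(np.abs(noisy - img).mean()), float(np.abs(den - img).mean()))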

  2. Implementation and Operational Research: Expedited Results Delivery Systems Using GPRS Technology Significantly Reduce Early Infant Diagnosis Test Turnaround Times.

    PubMed

    Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh

    2015-09-01

    The objective of this study was to quantify the impact of a new technology to communicate the results of an infant HIV diagnostic test on test turnaround time and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before implementation and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time decreased from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, the late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays.
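
    The association reported above is the kind of quantity a logistic model yields directly. The sketch below fits such a model to synthetic data generated with the collection probabilities quoted in the abstract (0.44 on-time vs. 0.36 late) and reads the odds ratio off the coefficient; it assumes statsmodels is available and is not the study's actual model, which adjusted for confounders.

      # Toy logistic regression: result collection (1/0) on an indicator for late
      # delivery. Synthetic data use the probabilities quoted above; the fitted
      # odds ratio should land near the reported 0.67.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      late = rng.integers(0, 2, 2000)                       # 1 = result delivered late (>30 days)
      collected = rng.binomial(1, np.where(late == 1, 0.36, 0.44))

      X = sm.add_constant(late.astype(float))
      fit = sm.Logit(collected, X).fit(disp=0)
      print("odds ratio for late delivery:", float(np.exp(fit.params[1])))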

  3. A pilot study of the clinical and statistical significance of a program to reduce eating disorder risk factors in children.

    PubMed

    Escoto Ponce de León, M C; Mancilla Díaz, J M; Camacho Ruiz, E J

    2008-09-01

    The current study used clinical and statistical significance tests to investigate the effects of two forms (didactic or interactive) of a universal prevention program on attitudes about shape and weight, eating behaviors, the influence of body aesthetic models, and self-esteem. Three schools were randomly assigned to one of three conditions: interactive, didactic, or control. Children (61 girls and 59 boys, age 9-11 years) were evaluated at pre-intervention, post-intervention, and at 6-month follow-up. Programs comprised eight 90-min sessions. Statistical and clinical significance tests showed more changes in boys and girls with the interactive program versus the didactic intervention and control groups. The findings support the use of interactive programs that highlight identified risk factors and construction of identity based on positive traits distinct from physical appearance.

  4. In a randomized placebo-controlled add-on study orlistat significantly reduced clozapine-induced constipation.

    PubMed

    Chukhin, Evgeny; Takala, Pirjo; Hakko, Helinä; Raidma, Mirjam; Putkonen, Hanna; Räsänen, Pirkko; Terevnikov, Viacheslav; Stenberg, Jan-Henry; Eronen, Markku; Joffe, Grigori

    2013-03-01

    Constipation is a common and potentially fatal side effect of clozapine treatment. Significant weight gain is another important side effect of clozapine. Orlistat is a weight-control medication that is known to induce loose stools as a common side effect. This study aimed to explore whether orlistat used to control clozapine-induced weight gain can simultaneously tackle clozapine-related constipation. In this 16-week randomized controlled study, clozapine-treated patients received add-on orlistat (n=30) or add-on placebo (n=24). Colonic function was measured using the Bristol Stool Form Scale. There was a significant (P=0.039) difference in the prevalence of constipation in favor of orlistat over placebo in completers (n=40) at the endpoint. A decrease in the prevalence of constipation within the orlistat group (P=0.035) was observed (vs. no statistically significant changes in the placebo group). In clozapine-treated patients, orlistat may be beneficial not only for weight control but also as a laxative. As no established treatments for clozapine-induced constipation exist, orlistat can be considered for this population, although more studies are required.

  5. Mefenamic acid in combination with ribavirin shows significant effects in reducing chikungunya virus infection in vitro and in vivo.

    PubMed

    Rothan, Hussin A; Bahrani, Hirbod; Abdulrahman, Ammar Y; Mohamed, Zulqarnain; Teoh, Teow Chong; Othman, Shatrah; Rashid, Nurshamimi Nor; Rahman, Noorsaadah A; Yusof, Rohana

    2016-03-01

    Chikungunya virus (CHIKV) infection is a persistent problem worldwide due to efficient adaptation of the viral vectors, Aedes aegypti and Aedes albopictus mosquitoes. Therefore, the absence of effective anti-CHIKV drugs to combat chikungunya outbreaks often leads to a significant impact on public health care. In this study, we investigated the antiviral activity of drugs that are used to alleviate infection symptoms, namely, the non-steroidal anti-inflammatory drugs (NSAIDs), on the premise that active compounds with potential antiviral and anti-inflammatory activities could be directly subjected for human use to treat CHIKV infections. Amongst the various NSAID compounds, Mefenamic acid (MEFE) and Meclofenamic acid (MECLO) showed considerable antiviral activity against viral replication individually or in combination with the common antiviral drug, Ribavirin (RIBA). The 50% effective concentration (EC50) was estimated to be 13 μM for MEFE, 18 μM for MECLO and 10 μM for RIBA, while MEFE + RIBA (1:1) exhibited an EC50 of 3 μM, and MECLO + RIBA (1:1) was 5 μM. Because MEFE is commercially available and its synthesis is easier compared with MECLO, MEFE was selected for further in vivo antiviral activity analysis. Treatment with MEFE + RIBA resulted in a significant reduction of hypertrophic effects by CHIKV on the mouse liver and spleen. Viral titre quantification in the blood of CHIKV-infected mice through the plaque formation assay revealed that treatment with MEFE + RIBA exhibited a 6.5-fold reduction compared with untreated controls. In conclusion, our study demonstrated that MEFE in combination with RIBA exhibited significant anti-CHIKV activity by impairing viral replication in vitro and in vivo. Indeed, this finding may lead to an even broader application of these combinatorial treatments against other viral infections.

  6. Reduced expression of the long non-coding RNA AI364715 in gastric cancer and its clinical significance.

    PubMed

    Zhu, Shengqian; Mao, Jinqin; Shao, Yongfu; Chen, Fang; Zhu, Xiaoqin; Xu, Dingli; Zhang, Xinjun; Guo, Junming

    2015-09-01

    Long non-coding RNAs (lncRNAs), which are greater than 200 nucleotides in length, are a class of RNA molecules without protein-coding function. In recent years, studies have shown that lncRNAs are associated with cancers, affecting their occurrence and development. However, the diagnostic significance of lncRNAs in gastric cancer is largely unknown. In this study, we focused on AI364715, one typical lncRNA. A total of 186 samples were collected from two cancer centers. To find the potential association between its level and gastric cancer, we first collected 75 paired gastric cancer tissues and normal tissues taken 5 cm away from the edge of the carcinoma. In addition, 18 healthy human gastric mucosa samples and 18 gastric precancerous lesions (dysplasia) were also collected. Quantitative reverse transcription-polymerase chain reaction (RT-PCR) was first used to detect the expression level of AI364715 at multiple stages of gastric tumorigenesis. Then, the relationships between AI364715 level and the clinicopathological factors of patients with gastric cancer were analyzed. The results showed that the expression level of AI364715 in gastric cancer tissues was downregulated. Meanwhile, its expression level was closely associated with tumor size and differentiation. More importantly, the AI364715 expression level was significantly changed in dysplasia, the typical precancerous lesion. Taken together, AI364715 may be a potential biomarker for the diagnosis of gastric cancer.

  7. Coadministration of Pinellia ternata Can Significantly Reduce Aconitum carmichaelii to Inhibit CYP3A Activity in Rats

    PubMed Central

    Wu, Jinjun; Cheng, Zaixing; Zhu, Lijun; Lu, Linlin; Zhang, Guiyu; Wang, Ying; Xu, Ying; Lin, Na; Liu, Zhongqiu

    2014-01-01

    Chuanwu (CW), the mother root of Aconitum carmichaelii Debx., is a traditional Chinese medicine (TCM) for treating traumatic injuries, rheumatoid arthritis, and tumors. CW coadministered with banxia (BX), the root of Pinellia ternata, is also widely prescribed in clinical practice. However, the mechanism of this combination has not yet been deciphered. The current study aimed to investigate the effects of CW, including raw chuanwu (RCW) and processed chuanwu (PCW) alone, as well as CW coadministered with BX, on CYP3A activity. Buspirone (BP) and testosterone (Tes) were used as specific probe substrates in vivo and ex vivo, respectively. CYP3A activity was determined by the metabolite formation ratios from the substrates. Compared with those in the control group, the metabolite formation ratios significantly decreased in the RCW and PCW alone groups, accompanied by a marked decrease in CYP3A protein and mRNA levels. However, there was a significant increase in those ratios in the RCW-BX and PCW-BX groups compared to the RCW and PCW alone groups. The results indicated that both RCW and PCW can inhibit CYP3A activity in rats because of downregulation of CYP3A protein and mRNA levels. These decreases in CYP3A activity can be reversed by coadministration with BX. PMID:25371696

  8. Reduced turnover times make flexible optical reusable scope with EndoSheath(®) Technology significantly cost-effective.

    PubMed

    Gupta, Deepak; Srirajakalidindi, Arvind; Wang, Hong

    2012-07-01

    EndoSheath bronchoscopy (Vision Sciences, Inc.) uses a sterile, disposable microbial barrier that may meet the growing needs for safe, efficient, and cost-effective flexible bronchoscopy. The purpose of this open-label comparative study was to compare and calculate the costs per airway procedure of the reusable fiberscope when used with and without EndoSheath(®) Technology, and to record the turnover time from the completion of the use of each scope until its readiness again for the next use. Seventy-five new patients' airways requiring airway maneuvers and manipulations with the Vision Sciences, Inc., reusable fiberscope with EndoSheath(®) Technology were evaluated for cost comparisons against reassessed historical cost data for Olympus scope-assisted tracheal intubations. Compared to the cost of an intubation with the Olympus scope at our institute ($158.50), the cost of an intubation with the Vision Sciences, Inc., reusable fiberscope with EndoSheath technology was $81.50 (P < 0.001). The mean turnover time was 5.44 min with EndoSheath technology as compared to the previously reported 30 min with the Olympus fiberscope (P < 0.001). Based on our institutional experience, the Vision Sciences, Inc., reusable fiberscope with EndoSheath technology is significantly cost effective as compared to the Olympus scope, with significantly improved turnover times.

  9. Reduced capacity of tumour blood vessels to produce endothelium-derived relaxing factor: significance for blood flow modification.

    PubMed Central

    Tozer, G. M.; Prise, V. E.; Bell, K. M.; Dennis, M. F.; Stratford, M. R.; Chaplin, D. J.

    1996-01-01

    The effect of nitric oxide-dependent vasodilators on vascular resistance of tumours and normal tissue was determined with the aim of modifying tumour blood flow for therapeutic benefit. Isolated preparations of the rat P22 tumour and normal rat hindlimb were perfused ex vivo. The effects on tissue vascular resistance of administration of sodium nitroprusside (SNP) and the diazeniumdiolate (or NONO-ate) NOC-7, vasodilators which act via direct release of nitric oxide (NO), were compared with the effects of acetylcholine (ACh), a vasodilator which acts primarily via receptor stimulation of endothelial cells to release NO in the form of endothelium-derived relaxing factor (EDRF). SNP and NOC-7 effectively dilated tumour blood vessels after preconstriction with phenylephrine (PE) or potassium chloride (KCl) as indicated by a decrease in vascular resistance. SNP also effectively dilated normal rat hindlimb vessels after PE/KCl constriction. Vasodilatation in the tumour preparations was accompanied by a significant rise in nitrite levels measured in the tumour effluent. ACh induced a significant vasodilation in the normal hindlimb but an anomalous vasoconstriction in the tumour. This result suggests that tumours, unlike normal tissues, are incapable of releasing NO (EDRF) in response to ACh. Capacity for EDRF production may represent a difference between tumour and normal tissue blood vessels, which could be exploited for selective pharmacological manipulation of tumour blood flow. PMID:8980396

  10. Electrographic seizures are significantly reduced by in vivo inhibition of neuronal uptake of extracellular glutamine in rat hippocampus

    PubMed Central

    Kanamori, Keiko; Ross, Brian D.

    2013-01-01

    Summary Rats were given unilateral kainate injection into hippocampal CA3 region, and the effect of chronic electrographic seizures on extracellular glutamine (GLNECF) was examined in those with low and steady levels of extracellular glutamate (GLUECF). GLNECF, collected by microdialysis in awake rats for 5 h, decreased to 62 ± 4.4% of the initial concentration (n = 6). This change correlated with the frequency and magnitude of seizure activity, and occurred in the ipsilateral but not in the contralateral hippocampus, nor in kainate-injected rats that did not undergo seizure (n = 6). Hippocampal intracellular GLN did not differ between the Seizure and No-Seizure Groups. These results suggested an intriguing possibility that seizure-induced decrease of GLNECF reflects not decreased GLN efflux into the extracellular fluid, but increased uptake into neurons. To examine this possibility, neuronal uptake of GLNECF was inhibited in vivo by intrahippocampal perfusion of 2-(methylamino)isobutyrate, a competitive and reversible inhibitor of the sodium-coupled neutral amino acid transporter (SNAT) subtypes 1 and 2, as demonstrated by a 1.8 ± 0.17-fold elevation of GLNECF (n = 7). The frequency of electrographic seizures during uptake inhibition was reduced to 35 ± 7% (n = 7) of the frequency in the pre-perfusion period, and returned to 88 ± 9% in the post-perfusion period. These novel in vivo results strongly suggest that, in this well-established animal model of temporal-lobe epilepsy, the observed seizure-induced decrease of GLNECF reflects its increased uptake into neurons to sustain enhanced glutamatergic epileptiform activity, thereby demonstrating a possible new target for anti-seizure therapies. PMID:24070846

  11. Precision feeding can significantly reduce lysine intake and nitrogen excretion without compromising the performance of growing pigs.

    PubMed

    Andretta, I; Pomar, C; Rivest, J; Pomar, J; Radünz, J

    2016-07-01

    This study was developed to assess the impact on performance, nutrient balance, serum parameters and feeding costs resulting from switching from conventional to precision-feeding programs for growing-finishing pigs. A total of 70 pigs (30.4±2.2 kg BW) were used in a performance trial (84 days). The five treatments used in this experiment were a three-phase group-feeding program (control), obtained with fixed blending proportions of feeds A (high nutrient density) and B (low nutrient density), and four individual daily-phase feeding programs in which the blending proportions of feeds A and B were updated daily to meet 110%, 100%, 90% or 80% of the lysine requirements estimated using a mathematical model. Feed intake was recorded automatically by a computerized device in the feeders, and the pigs were weighed weekly during the project. Body composition traits were estimated by scanning with an ultrasound device and densitometer every 28 days. Nitrogen and phosphorus excretions were calculated by the difference between retention (obtained from densitometer measurements) and intake. Feeding costs were assessed using 2013 ingredient cost data. Feed intake, feed efficiency, back fat thickness, body fat mass and serum contents of total protein and phosphorus were similar among treatments. Feeding pigs in a daily-basis program providing 110%, 100% or 90% of the estimated individual lysine requirements also did not influence BW, body protein mass, weight gain and nitrogen retention in comparison with the animals in the group-feeding program. However, feeding pigs individually with diets tailored to match 100% of nutrient requirements made it possible to reduce (P<0.05) digestible lysine intake by 26%, estimated nitrogen excretion by 30% and feeding costs by US$7.60/pig (-10%) relative to group feeding. Precision feeding is an effective approach to make pig production more sustainable without compromising growth performance.
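
    The daily blending step described above reduces to simple mixing arithmetic: pick the proportion of feed A so that the blend hits the estimated lysine concentration. The sketch below uses hypothetical lysine densities and requirements, not values from the study.

      # Two-feed blending: fraction of feed A (high density) needed so that the
      # A/B blend meets a target digestible-lysine concentration. All numbers
      # below are hypothetical.
      def blend_fraction(target, dens_a, dens_b):
          frac = (target - dens_b) / (dens_a - dens_b)
          return min(max(frac, 0.0), 1.0)                  # clamp to a feasible blend

      lys_a, lys_b = 11.0, 5.0                             # g digestible lysine per kg feed (assumed)
      for requirement in (9.5, 8.0, 6.5):                  # requirement falls as the pig grows
          f = blend_fraction(requirement, lys_a, lys_b)
          print(f"target {requirement} g/kg -> {100 * f:.0f}% feed A, {100 * (1 - f):.0f}% feed B")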

  12. Matching for the nonconventional MHC-I MICA gene significantly reduces the incidence of acute and chronic GVHD

    PubMed Central

    Carapito, Raphael; Jung, Nicolas; Kwemou, Marius; Untrau, Meiggie; Michel, Sandra; Pichot, Angélique; Giacometti, Gaëlle; Macquin, Cécile; Ilias, Wassila; Morlon, Aurore; Kotova, Irina; Apostolova, Petya; Schmitt-Graeff, Annette; Cesbron, Anne; Gagne, Katia; Oudshoorn, Machteld; van der Holt, Bronno; Labalette, Myriam; Spierings, Eric; Picard, Christophe; Loiseau, Pascale; Tamouza, Ryad; Toubert, Antoine; Parissiadis, Anne; Dubois, Valérie; Lafarge, Xavier; Maumy-Bertrand, Myriam; Bertrand, Frédéric; Vago, Luca; Ciceri, Fabio; Paillard, Catherine; Querol, Sergi; Sierra, Jorge; Fleischhauer, Katharina; Nagler, Arnon; Labopin, Myriam; Inoko, Hidetoshi; von dem Borne, Peter A.; Kuball, Jürgen; Ota, Masao; Katsuyama, Yoshihiko; Michallet, Mauricette; Lioure, Bruno; Peffault de Latour, Régis; Blaise, Didier; Cornelissen, Jan J.; Yakoub-Agha, Ibrahim; Claas, Frans; Moreau, Philippe; Milpied, Noël; Charron, Dominique; Mohty, Mohamad; Zeiser, Robert; Socié, Gérard

    2016-01-01

    Graft-versus-host disease (GVHD) is among the most challenging complications in unrelated donor hematopoietic cell transplantation (HCT). The highly polymorphic MHC class I chain–related gene A, MICA, encodes a stress-induced glycoprotein expressed primarily on epithelia. MICA interacts with the invariant activating receptor NKG2D, expressed by cytotoxic lymphocytes, and is located in the MHC, next to HLA-B. Hence, MICA has the requisite attributes of a bona fide transplantation antigen. Using high-resolution sequence-based genotyping of MICA, we retrospectively analyzed the clinical effect of MICA mismatches in a multicenter cohort of 922 unrelated donor HLA-A, HLA-B, HLA-C, HLA-DRB1, and HLA-DQB1 10/10 allele-matched HCT pairs. Among the 922 pairs, 113 (12.3%) were mismatched in MICA. MICA mismatches were significantly associated with an increased incidence of grade III-IV acute GVHD (hazard ratio [HR], 1.83; 95% confidence interval [CI], 1.50-2.23; P < .001), chronic GVHD (HR, 1.50; 95% CI, 1.45-1.55; P < .001), and nonrelapse mortality (HR, 1.35; 95% CI, 1.24-1.46; P < .001). The increased risk for GVHD was mirrored by a lower risk for relapse (HR, 0.50; 95% CI, 0.43-0.59; P < .001), indicating a possible graft-versus-leukemia effect. In conclusion, when possible, selecting a MICA-matched donor significantly influences key clinical outcomes of HCT in which a marked reduction of GVHD is paramount. The tight linkage disequilibrium between MICA and HLA-B renders identifying a MICA-matched donor readily feasible in clinical practice. PMID:27549307

  13. Matching for the nonconventional MHC-I MICA gene significantly reduces the incidence of acute and chronic GVHD.

    PubMed

    Carapito, Raphael; Jung, Nicolas; Kwemou, Marius; Untrau, Meiggie; Michel, Sandra; Pichot, Angélique; Giacometti, Gaëlle; Macquin, Cécile; Ilias, Wassila; Morlon, Aurore; Kotova, Irina; Apostolova, Petya; Schmitt-Graeff, Annette; Cesbron, Anne; Gagne, Katia; Oudshoorn, Machteld; van der Holt, Bronno; Labalette, Myriam; Spierings, Eric; Picard, Christophe; Loiseau, Pascale; Tamouza, Ryad; Toubert, Antoine; Parissiadis, Anne; Dubois, Valérie; Lafarge, Xavier; Maumy-Bertrand, Myriam; Bertrand, Frédéric; Vago, Luca; Ciceri, Fabio; Paillard, Catherine; Querol, Sergi; Sierra, Jorge; Fleischhauer, Katharina; Nagler, Arnon; Labopin, Myriam; Inoko, Hidetoshi; von dem Borne, Peter A; Kuball, Jürgen; Ota, Masao; Katsuyama, Yoshihiko; Michallet, Mauricette; Lioure, Bruno; Peffault de Latour, Régis; Blaise, Didier; Cornelissen, Jan J; Yakoub-Agha, Ibrahim; Claas, Frans; Moreau, Philippe; Milpied, Noël; Charron, Dominique; Mohty, Mohamad; Zeiser, Robert; Socié, Gérard; Bahram, Seiamak

    2016-10-13

    Graft-versus-host disease (GVHD) is among the most challenging complications in unrelated donor hematopoietic cell transplantation (HCT). The highly polymorphic MHC class I chain-related gene A, MICA, encodes a stress-induced glycoprotein expressed primarily on epithelia. MICA interacts with the invariant activating receptor NKG2D, expressed by cytotoxic lymphocytes, and is located in the MHC, next to HLA-B. Hence, MICA has the requisite attributes of a bona fide transplantation antigen. Using high-resolution sequence-based genotyping of MICA, we retrospectively analyzed the clinical effect of MICA mismatches in a multicenter cohort of 922 unrelated donor HLA-A, HLA-B, HLA-C, HLA-DRB1, and HLA-DQB1 10/10 allele-matched HCT pairs. Among the 922 pairs, 113 (12.3%) were mismatched in MICA. MICA mismatches were significantly associated with an increased incidence of grade III-IV acute GVHD (hazard ratio [HR], 1.83; 95% confidence interval [CI], 1.50-2.23; P < .001), chronic GVHD (HR, 1.50; 95% CI, 1.45-1.55; P < .001), and nonrelapse mortality (HR, 1.35; 95% CI, 1.24-1.46; P < .001). The increased risk for GVHD was mirrored by a lower risk for relapse (HR, 0.50; 95% CI, 0.43-0.59; P < .001), indicating a possible graft-versus-leukemia effect. In conclusion, when possible, selecting a MICA-matched donor significantly influences key clinical outcomes of HCT in which a marked reduction of GVHD is paramount. The tight linkage disequilibrium between MICA and HLA-B renders identifying a MICA-matched donor readily feasible in clinical practice.

  14. On the Simulation of Sea States with High Significant Wave Height for the Validation of Parameter Retrieval Algorithms for Future Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Kuschenerus, Mieke; Cullen, Robert

    2016-08-01

    To ensure reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel-6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established. The parameter retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave height up to 20 m, that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low-resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates of the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval for sea states with high significant wave heights, to allow improved understanding and planning of future satellite altimetry mission validation.
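
    One common way to build such simulated sea states is random-phase superposition of spectral components, with the significant wave height recovered as Hs = 4*sqrt(m0), where m0 is the variance of the surface elevation. The sketch below uses an arbitrary narrow-band spectral shape scaled to a 20 m target; it is not the mission simulator and does not model the altimeter echo itself.

      # Synthesize a sea-surface elevation record with a prescribed significant
      # wave height by random-phase superposition, then recover Hs = 4*sqrt(m0).
      # Spectrum shape and parameters are illustrative only.
      import numpy as np

      rng = np.random.default_rng(5)
      hs_target = 20.0                               # target significant wave height (m)
      f = np.linspace(0.03, 0.3, 200)                # wave frequencies (Hz)
      df = f[1] - f[0]
      shape = np.exp(-((f - 0.06) / 0.02) ** 2)      # arbitrary narrow-band spectral shape
      S = shape * (hs_target ** 2 / 16.0) / (shape.sum() * df)   # scale so m0 = (Hs/4)^2

      t = np.arange(0, 3600.0, 0.5)                  # one hour sampled at 2 Hz
      amp = np.sqrt(2 * S * df)
      phase = rng.uniform(0, 2 * np.pi, f.size)
      eta = (amp[:, None] * np.cos(2 * np.pi * f[:, None] * t[None, :] + phase[:, None])).sum(axis=0)

      print("recovered Hs:", 4 * np.sqrt(np.var(eta)))   # close to 20 m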

  15. Guanine polynucleotides are self-antigens for human natural autoantibodies and are significantly reduced in the human genome

    PubMed Central

    Fattal, Ittai; Shental, Noam; Ben-Dor, Shifra; Molad, Yair; Gabrielli, Armando; Pokroy-Shapira, Elisheva; Oren, Shirly; Livneh, Avi; Langevitz, Pnina; Zandman-Goddard, Gisele; Sarig, Ofer; Margalit, Raanan; Gafter, Uzi; Domany, Eytan; Cohen, Irun R

    2015-01-01

    In the course of investigating anti-DNA autoantibodies, we examined IgM and IgG antibodies to poly-G and other oligonucleotides in the sera of healthy persons and those diagnosed with systemic lupus erythematosus (SLE), scleroderma (SSc), or pemphigus vulgaris (PV); we used an antigen microarray and informatic analysis. We now report that all of the 135 humans studied, irrespective of health or autoimmune disease, manifested relatively high amounts of IgG antibodies binding to the 20-mer G oligonucleotide (G20); no participants entirely lacked this reactivity. IgG antibodies to homo-nucleotides A20, C20 or T20 were present only in the sera of SLE patients who were positive for antibodies to dsDNA. The prevalence of anti-G20 antibodies led us to survey human, mouse and Drosophila melanogaster (fruit fly) genomes for runs of T20 and G20 or more: runs of T20 appear > 170 000 times compared with only 93 runs of G20 or more in the human genome; of these runs, 40 were close to brain-associated genes. Mouse and fruit fly genomes showed significantly lower T20/G20 ratios than did human genomes. Moreover, sera from both healthy and SLE mice contained relatively little or no anti-G20 antibodies; so natural anti-G20 antibodies appear to be characteristic of humans. These unexpected observations invite investigation of the immune functions of anti-G20 antibodies in human health and disease and of runs of G20 in the human genome. PMID:26227667

  16. Guanine polynucleotides are self-antigens for human natural autoantibodies and are significantly reduced in the human genome.

    PubMed

    Fattal, Ittai; Shental, Noam; Ben-Dor, Shifra; Molad, Yair; Gabrielli, Armando; Pokroy-Shapira, Elisheva; Oren, Shirly; Livneh, Avi; Langevitz, Pnina; Zandman-Goddard, Gisele; Sarig, Ofer; Margalit, Raanan; Gafter, Uzi; Domany, Eytan; Cohen, Irun R

    2015-11-01

    In the course of investigating anti-DNA autoantibodies, we examined IgM and IgG antibodies to poly-G and other oligonucleotides in the sera of healthy persons and those diagnosed with systemic lupus erythematosus (SLE), scleroderma (SSc), or pemphigus vulgaris (PV); we used an antigen microarray and informatic analysis. We now report that all of the 135 humans studied, irrespective of health or autoimmune disease, manifested relatively high amounts of IgG antibodies binding to the 20-mer G oligonucleotide (G20); no participants entirely lacked this reactivity. IgG antibodies to homo-nucleotides A20, C20 or T20 were present only in the sera of SLE patients who were positive for antibodies to dsDNA. The prevalence of anti-G20 antibodies led us to survey human, mouse and Drosophila melanogaster (fruit fly) genomes for runs of T20 and G20 or more: runs of T20 appear > 170,000 times compared with only 93 runs of G20 or more in the human genome; of these runs, 40 were close to brain-associated genes. Mouse and fruit fly genomes showed significantly lower T20/G20 ratios than did human genomes. Moreover, sera from both healthy and SLE mice contained relatively little or no anti-G20 antibodies; so natural anti-G20 antibodies appear to be characteristic of humans. These unexpected observations invite investigation of the immune functions of anti-G20 antibodies in human health and disease and of runs of G20 in the human genome.
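
    The genome survey described above amounts to counting maximal runs of a single base of at least a given length. A minimal sketch with a short synthetic sequence (not a real genome assembly) is shown below.

      # Count runs of 20 or more consecutive G (or T) bases in a sequence; the
      # sequence here is a short synthetic string for illustration only.
      import re

      def count_runs(seq, base, min_len=20):
          return len(re.findall(f"{base}{{{min_len},}}", seq.upper()))

      seq = "ACGT" * 50 + "G" * 25 + "ACGT" * 50 + "T" * 40 + "acgt" * 10
      print("G-runs >= 20:", count_runs(seq, "G"))
      print("T-runs >= 20:", count_runs(seq, "T"))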

  17. Computer order entry systems in the emergency department significantly reduce the time to medication delivery for high acuity patients

    PubMed Central

    2013-01-01

    Background Computerized physician order entry (CPOE) systems are designed to increase safety and improve quality of care; however, their impact on efficiency in the ED has not yet been validated. This study examined the impact of CPOE on process times for medication delivery, laboratory utilization and diagnostic imaging in the early, late and control phases of a regional ED-CPOE implementation. Methods Setting: Three tertiary care hospitals serving a population in excess of 1 million inhabitants that initiated the same CPOE system during the same 3-week time window. Patients were stratified into three groupings: Control, Early CPOE and Late CPOE (n = 200 patients per group/hospital site). Eligible patients consisted of a stratified (40% CTAS 2 and 60% CTAS 3) random sample of all patients seen 30 days preceding CPOE implementation (Control), 30 days immediately after CPOE implementation (Early CPOE) and 5–6 months after CPOE implementation (Late CPOE). Primary outcomes were the times (TT) from physician assignment (MD-sign) to MD-order completion. An ANOVA and t-test were employed for statistical analysis. Results In comparison with control, TT 1st MD-Ordered Medication decreased in both the Early and Late CPOE groups (102.6 min control, 62.8 Early and 65.7 late, p < 0.001). TT 1st MD-ordered laboratory results increased in both the Early and Late CPOE groups compared to Control (76.4, 85.3 and 73.8 min, respectively, p < 0.001). TT 1st X-Ray also significantly increased in both the Early and Late CPOE groups (80.4, 84.8 min, respectively, compared to 68.1, p < 0.001). Given that CT and ultrasound imaging inherently take more time, these imaging studies were not included, and only X-ray was examined. No statistically significant difference was found for TT discharge or TT consult request. Conclusions Regional implementation of CPOE afforded important efficiencies in time to medication delivery for high acuity ED patients. Increased times observed for laboratory

  18. Prognostic Significance of Active and Modified forms of Endothelin 1 in Patients with Heart Failure with Reduced Ejection Fraction

    PubMed Central

    Gottlieb, Stephen S.; Harris, Kristie; Todd, John; Estis, Joel; Christenson, Robert H.; Torres, Victoria; Whittaker, Kerry; Rebuck, Heather; Wawrzyniak, Andrew; Krantz, David S.

    2015-01-01

    Objectives Concentrations of endothelin 1 (ET1) are elevated in CHF patients, and, like other biomarkers that reflect hemodynamic status and cardiac pathophysiology, are prognostic. The Singulex assay (Sgx-ET1) measures the active form of ET1, which has a short in-vivo half-life, whereas C-terminal endothelin-1 (CT-ET1), measured by the Brahms assay, is a modified (degraded) product with a longer half-life. We aimed to determine the prognostic importance of active and modified forms of endothelin 1 (Singulex and Brahms assays) in comparison with other commonly measured biomarkers of inflammation, hemodynamic status and cardiac physiology in CHF. Design & Methods Plasma biomarkers (Sgx-ET1, CT-ET1, NTproBNP, IL-6, TNFα, cTnI, VEGF, hs-CRP, Galectin-3, ST2) were measured in 134 NYHA class II and III CHF patients with systolic dysfunction. The prognostic importance of biomarkers for hospitalization or death was calculated by both logistic regression and Kaplan-Meier survival analyses. Results CT-ET1 (OR 5.2, 95% CI 1.7–15.7) and Sgx-ET1 (OR 2.9, CI 1.1–7.7) were independent predictors of hospitalization and death and additively predicted events after adjusting for age, sex and other significant biomarkers. Other biomarkers did not improve the model. Similarly, in Cox regression analysis, only CT-ET1 (HR 3.4, 95% CI 1.4–8.4), VEGF (2.7, 95% CI 1.3–5.4) and Sgx-ET1 (HR 2.6, 95% CI 1.2–5.6) were independently prognostic. Conclusions Elevated concentrations of endothelin 1 predict mortality and hospitalizations in HF patients. Endothelin 1 was more prognostic than commonly obtained hemodynamic, inflammatory and fibrotic biomarkers. Two different assays of endothelin 1 independently and synergistically were prognostic, suggesting either complementary information or extreme prognostic importance. PMID:25541019

  19. Reducing cross-sectional data using a genetic algorithm method and effects on cross-section geometry and steady-flow profiles

    USGS Publications Warehouse

    Berenbrock, Charles E.

    2015-01-01

    The effects of reduced cross-sectional data points on steady-flow profiles were also determined. Thirty-five cross sections of the original steady-flow model of the Kootenai River were used. The two methods (the genetic algorithm method and the standard algorithm method) were tested for all cross sections, with each cross section's resolution reduced to 10, 20 and 30 data points; that is, six tests were completed for each of the thirty-five cross sections. Generally, differences from the original water-surface elevation were smaller as the number of data points in the reduced cross sections increased, but this was not always the case, especially in the braided reach. Differences were smaller for reduced cross sections developed by the genetic algorithm method than for those developed by the standard algorithm method.
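
    A minimal sketch of the genetic-algorithm idea, selecting a fixed number of points so that the reduced section stays close to the surveyed geometry, is given below. The cross section, fitness measure (maximum vertical deviation) and GA operators are illustrative assumptions; the USGS method's actual objective and operators may differ.

      # Toy genetic algorithm: pick N_KEEP of 80 surveyed (station, elevation)
      # points, always keeping both endpoints, to minimize the maximum vertical
      # deviation between the original and the reduced cross section.
      import numpy as np

      rng = np.random.default_rng(6)
      station = np.linspace(0.0, 200.0, 80)                         # hypothetical survey
      elev = 10.0 - 8.0 * np.exp(-((station - 100.0) / 35.0) ** 2) + rng.normal(0, 0.05, 80)
      N_KEEP, POP, GENS = 10, 60, 200

      def deviation(idx):
          """Max |original - interpolated| elevation over all surveyed stations."""
          idx = np.sort(idx)
          return np.max(np.abs(elev - np.interp(station, station[idx], elev[idx])))

      def random_member():
          interior = rng.choice(np.arange(1, 79), size=N_KEEP - 2, replace=False)
          return np.concatenate(([0], interior, [79]))

      def crossover(a, b):
          pool = np.unique(np.concatenate((a, b)))[1:-1]             # interior points of both parents
          return np.concatenate(([0], rng.choice(pool, size=N_KEEP - 2, replace=False), [79]))

      def mutate(m):
          m = m.copy()
          m[rng.integers(1, N_KEEP - 1)] = rng.choice(np.setdiff1d(np.arange(1, 79), m))
          return m

      pop = [random_member() for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=deviation)
          keep = pop[: POP // 2]                                     # elitist truncation selection
          children = []
          while len(keep) + len(children) < POP:
              a, b = rng.choice(len(keep), 2, replace=False)
              child = crossover(keep[a], keep[b])
              if rng.random() < 0.3:
                  child = mutate(child)
              children.append(child)
          pop = keep + children

      best = min(pop, key=deviation)
      print("kept point indices:", np.sort(best), "max deviation:", round(float(deviation(best)), 3))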

  20. From meatless Mondays to meatless Sundays: motivations for meat reduction among vegetarians and semi-vegetarians who mildly or significantly reduce their meat intake.

    PubMed

    De Backer, Charlotte J S; Hudders, Liselot

    2014-01-01

    This study explores vegetarians' and semi-vegetarians' motives for reducing their meat intake. Participants are categorized as vegetarians (remove all meat from their diet); semi-vegetarians (significantly reduce meat intake: at least three days a week); or light semi-vegetarians (mildly reduce meat intake: once or twice a week). Most differences appear between vegetarians and both groups of semi-vegetarians. Animal-rights and ecological concerns, together with taste preferences, predict vegetarianism, while an increase in health motives increases the odds of being semi-vegetarian. Even within each group, subgroups with different motives appear, and it is recommended that future researchers pay more attention to these differences.

  1. Varying protein source and quantity does not significantly improve weight loss, fat loss, or satiety in reduced energy diets among midlife adults

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This pilot study tested whether varying protein source and quantity in a reduced energy diet would result in significant differences in weight, body composition, and renin angiotensin aldosterone system activity in midlife adults. Eighteen subjects enrolled in a 5 month weight reduction study, invol...

  2. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    &MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and detection and responses that can be tested in VMET and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM. The plan for VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET followed by section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in section IV followed by Section V presenting integration, test status, and state analysis. Finally, section VI

  3. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection.

    PubMed

    Bechet, P; Mitran, R; Munteanu, M

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate over an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.

  4. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate over an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
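
    As an illustration of the approach described above, the following is a minimal sketch of a MUSIC pseudospectrum search applied to a short, noisy quasi-periodic signal. The sampling rate, snapshot length, signal-subspace dimension, and heart-rate search band are illustrative assumptions, not values taken from the paper.

    # Minimal MUSIC pseudospectrum sketch for estimating the dominant frequency of a
    # short, noisy quasi-periodic signal. All parameters below are illustrative.
    import numpy as np

    def music_heart_rate(x, fs, subspace_dim=2, m=40, f_grid=None):
        """Estimate the dominant frequency (Hz) of x via the MUSIC pseudospectrum."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # Overlapping length-m snapshots and their sample covariance matrix.
        snapshots = np.column_stack([x[i:i + m] for i in range(n - m + 1)])
        r = snapshots @ snapshots.T / snapshots.shape[1]
        # Eigen-decompose; the noise subspace spans the smallest eigenvalues.
        _, eigvecs = np.linalg.eigh(r)                  # ascending eigenvalues
        noise = eigvecs[:, : m - subspace_dim]          # noise-subspace basis
        if f_grid is None:
            f_grid = np.linspace(0.7, 3.0, 500)         # ~42-180 bpm search band
        k = np.arange(m)
        pseudo = []
        for f in f_grid:
            steering = np.exp(-2j * np.pi * f / fs * k)
            denom = np.linalg.norm(noise.conj().T @ steering) ** 2
            pseudo.append(1.0 / denom)                  # peaks where steering is orthogonal to noise subspace
        return f_grid[int(np.argmax(pseudo))]

    # Usage on a synthetic 10 s record: a 1.2 Hz (72 bpm) component buried in noise.
    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    signal = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.random.randn(t.size)
    print(f"Estimated heart rate: {music_heart_rate(signal, fs) * 60:.1f} bpm")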

  5. Intra-articular (IA) ropivacaine microparticle suspensions reduce pain, inflammation, cytokine, and substance p levels significantly more than oral or IA celecoxib in a rat model of arthritis.

    PubMed

    Rabinow, Barrett; Werling, Jane; Bendele, Alison; Gass, Jerome; Bogseth, Roy; Balla, Kelly; Valaitis, Paul; Hutchcraft, Audrey; Graham, Sabine

    2015-02-01

    Current therapeutic treatment options for osteoarthritis entail significant safety concerns. A novel ropivacaine crystalline microsuspension for bolus intra-articular (IA) delivery was thus developed and studied in a peptidoglycan polysaccharide (PGPS)-induced ankle swelling rat model. Compared with celecoxib controls, both oral and IA, ropivacaine IA treatment resulted in a significant reduction of pain upon successive PGPS reactivation, as demonstrated in two different pain models, gait analysis and incapacitance testing. The reduction in pain was attended by a significant reduction in histological inflammation, which in turn was accompanied by significant reductions in the cytokines IL-18 and IL-1β. This may have been due to inhibition of substance P, which was also significantly reduced. Pharmacokinetic analysis indicated that the analgesic effects outlasted measurable ropivacaine levels in either blood or tissue. The results are discussed in the context of pharmacologic mechanisms both of local anesthetics as well as inflammatory arthritis.

  6. New dispenser types for integrated pest management of agriculturally significant insect pests: an algorithm with specialized searching capacity in electronic data bases.

    PubMed

    Hummel, H E; Eisinger, M T; Hein, D F; Breuer, M; Schmid, S; Leithold, G

    2012-01-01

    Pheromone effects, discovered some 130 years ago but scientifically defined just half a century ago, are a great bonus for basic and applied biology. Specifically, pest management efforts have been advanced in many insect orders, whether for monitoring, mass trapping, or mating disruption. By finding and applying a new search algorithm, nearly 20,000 entries in the pheromone literature have been counted, a number much higher than originally anticipated. This compilation contains identified, and thus synthesizable, structures for all major orders of insects. Among them are hundreds of agriculturally significant insect pests whose aggregated damage and costly control measures run to multiple billions of dollars annually. Unfortunately, and despite considerable effort within the international entomological community, the number of efficient and cheap engineering solutions for dispensing pheromones under variable field conditions lags uncomfortably behind. Some innovative approaches are cited from the relevant literature in an attempt to rectify this situation. Recently, specifically designed electrospun organic nanofibers have shown considerable promise; with their use, the mating communication of vineyard insects such as Lobesia botrana (Lep.: Tortricidae) can be disrupted for periods of seven weeks.

  7. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  8. Multi-objective teaching-learning-based optimization algorithm for reducing carbon emissions and operation time in turning operations

    NASA Astrophysics Data System (ADS)

    Lin, Wenwen; Yu, D. Y.; Wang, S.; Zhang, Chaoyong; Zhang, Sanqiang; Tian, Huiyu; Luo, Min; Liu, Shengqiang

    2015-07-01

    In addition to energy consumption, the use of cutting fluids, deposition of worn tools and certain other manufacturing activities can have environmental impacts. All these activities cause carbon emission directly or indirectly; therefore, carbon emission can be used as an environmental criterion for machining systems. In this article, a direct method is proposed to quantify the carbon emissions in turning operations. To determine the coefficients in the quantitative method, real experimental data were obtained and analysed in MATLAB. Moreover, a multi-objective teaching-learning-based optimization algorithm is proposed, and two objectives to minimize carbon emissions and operation time are considered simultaneously. Cutting parameters were optimized by the proposed algorithm. Finally, the analytic hierarchy process was used to determine the optimal solution, which was found to be more environmentally friendly than the cutting parameters determined by the design of experiments method.
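
    To make the optimization loop concrete, here is a minimal sketch of a teaching-learning-based optimization (TLBO) run over two cutting parameters (cutting speed v and feed f). The operation-time and carbon-emission models, the parameter bounds, and the weighted-sum scalarization of the two objectives are illustrative assumptions; the study itself fits empirical emission coefficients and treats the problem as genuinely multi-objective.

    # Teaching-learning-based optimization sketch; the objective models are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    LB, UB = np.array([50.0, 0.05]), np.array([300.0, 0.5])  # v (m/min), f (mm/rev)

    def cost(x):
        v, f = x
        time = 1.0e3 / (v * f)            # illustrative operation-time model
        carbon = 0.02 * v + 5.0 / f       # illustrative carbon-emission model
        return 0.5 * time + 0.5 * carbon  # weighted-sum scalarization

    pop = rng.uniform(LB, UB, size=(20, 2))
    for _ in range(100):
        costs = np.apply_along_axis(cost, 1, pop)
        teacher, mean = pop[np.argmin(costs)], pop.mean(axis=0)
        # Teacher phase: pull the class toward the current best solution.
        tf = rng.integers(1, 3)                              # teaching factor, 1 or 2
        new = np.clip(pop + rng.random(pop.shape) * (teacher - tf * mean), LB, UB)
        better = np.apply_along_axis(cost, 1, new) < costs
        pop[better] = new[better]
        # Learner phase: each learner moves toward a better peer, away from a worse one.
        costs = np.apply_along_axis(cost, 1, pop)
        for i in range(len(pop)):
            j = rng.integers(len(pop))
            if j == i:
                continue
            step = pop[j] - pop[i] if costs[j] < costs[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(2) * step, LB, UB)
            if cost(cand) < costs[i]:
                pop[i], costs[i] = cand, cost(cand)

    best = pop[np.argmin(np.apply_along_axis(cost, 1, pop))]
    print("Best cutting parameters (v, f):", best)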

  9. Prospective Evaluation of Prior Image Constrained Compressed Sensing (PICCS) Algorithm in Abdominal CT: A comparison of reduced dose with standard dose imaging

    PubMed Central

    Lubner, Meghan G.; Pickhardt, Perry J.; Kim, David H.; Tang, Jie; Munoz del Rio, Alejandro; Chen, Guang-Hong

    2014-01-01

    Purpose To prospectively study CT dose reduction using the “prior image constrained compressed sensing” (PICCS) reconstruction technique. Methods Immediately following routine standard dose (SD) abdominal MDCT, 50 patients (mean age, 57.7 years; mean BMI, 28.8) underwent a second reduced-dose (RD) scan (targeted dose reduction, 70-90%). DLP, CTDIvol, and SSDE were compared. Several reconstruction algorithms (FBP, ASIR, and PICCS) were applied to the RD series. SD images reconstructed with FBP served as the reference standard. Two blinded readers evaluated each series for subjective image quality and focal lesion detection. Results Mean DLP, CTDIvol, and SSDE for the RD series were 140.3 mGy*cm (median 79.4), 3.7 mGy (median 1.8), and 4.2 mGy (median 2.3), compared with 493.7 mGy*cm (median 345.8), 12.9 mGy (median 7.9), and 14.6 mGy (median 10.1) for the SD series, respectively. Mean effective patient diameter was 30.1 cm (median 30), which translates to a mean SSDE reduction of 72% (p<0.001). The RD-PICCS image quality score was 2.8±0.5, improved over RD-FBP (1.7±0.7) and RD-ASIR (1.9±0.8) (p<0.001), but lower than SD (3.5±0.5) (p<0.001). Readers detected 81% (184/228) of focal lesions on the RD-PICCS series, versus 67% (153/228) and 65% (149/228) for RD-FBP and RD-ASIR, respectively. Mean image noise was significantly reduced on the RD-PICCS series (13.9 HU) compared with RD-FBP (57.2) and RD-ASIR (44.1) (p<0.001). Conclusion PICCS allows for marked dose reduction at abdominal CT with improved image quality and diagnostic performance over reduced-dose FBP and ASIR. Further study is needed to determine indication-specific dose reduction levels that preserve acceptable diagnostic accuracy relative to higher-dose protocols. PMID:24943136

  10. MapReduce Algorithms for Inferring Gene Regulatory Networks from Time-Series Microarray Data Using an Information-Theoretic Approach

    PubMed Central

    Abduallah, Yasser; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel

    2017-01-01

    Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool. PMID:28243601

  11. MapReduce Algorithms for Inferring Gene Regulatory Networks from Time-Series Microarray Data Using an Information-Theoretic Approach.

    PubMed

    Abduallah, Yasser; Turki, Turki; Byron, Kevin; Du, Zongxuan; Cervantes-Cervantes, Miguel; Wang, Jason T L

    2017-01-01

    Gene regulation is a series of processes that control gene expression and its extent. The connections among genes and their regulatory molecules, usually transcription factors, and a descriptive model of such connections are known as gene regulatory networks (GRNs). Elucidating GRNs is crucial to understand the inner workings of the cell and the complexity of gene interactions. To date, numerous algorithms have been developed to infer gene regulatory networks. However, as the number of identified genes increases and the complexity of their interactions is uncovered, networks and their regulatory mechanisms become cumbersome to test. Furthermore, prodding through experimental results requires an enormous amount of computation, resulting in slow data processing. Therefore, new approaches are needed to expeditiously analyze copious amounts of experimental data resulting from cellular GRNs. To meet this need, cloud computing is promising as reported in the literature. Here, we propose new MapReduce algorithms for inferring gene regulatory networks on a Hadoop cluster in a cloud environment. These algorithms employ an information-theoretic approach to infer GRNs using time-series microarray data. Experimental results show that our MapReduce program is much faster than an existing tool while achieving slightly better prediction accuracy than the existing tool.
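
    To make the information-theoretic step concrete, the sketch below is a small single-machine mimic of the map/reduce pattern: the mapper scores one ordered gene pair with a histogram-based mutual information estimate computed from its time-series expression profiles, and the reducer keeps pairs above a threshold as candidate regulatory edges. This is an illustration only, not the authors' Hadoop implementation; the bin count, threshold, and toy data are assumptions.

    # Local mimic of map/reduce for mutual-information-based GRN edge scoring.
    from itertools import permutations
    import numpy as np

    def mutual_information(x, y, bins=3):
        """Plug-in MI estimate (nats) from a 2-D histogram of two expression profiles."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def mapper(pair, expression):
        g1, g2 = pair
        return (g1, g2), mutual_information(expression[g1], expression[g2])

    def reducer(scored_pairs, threshold=0.3):
        return [(edge, mi) for edge, mi in scored_pairs if mi >= threshold]

    # Toy usage: 4 genes x 50 time points; g0 drives g1, so that pair should survive
    # the reducer in both directions (MI is symmetric).
    rng = np.random.default_rng(1)
    expression = {f"g{i}": rng.normal(size=50) for i in range(4)}
    expression["g1"] = expression["g0"] + 0.1 * rng.normal(size=50)
    print(reducer(mapper(p, expression) for p in permutations(expression, 2)))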

  12. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASAs Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    ) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses to be tested in VMET to ensure reliable failure

  13. CD28⁻ CD8⁺ T cells are significantly reduced and correlate with disease duration in juveniles with type 1 diabetes.

    PubMed

    Yarde, Danielle N; Lorenzo-Arteaga, Kristina; Corley, Kevin P; Cabrera, Monina; Sarvetnick, Nora E

    2014-10-01

    Type 1 diabetes (T1D) is a chronic disease caused by autoimmune destruction of insulin-producing pancreatic β-cells. T1D is typically diagnosed in children, but information regarding immune cell subsets in juveniles with T1D is scarce. Therefore, we studied various lymphocytic populations found in the peripheral blood of juveniles with T1D compared to age-matched controls (ages 2-17). One population of interest is the CD28(-) CD8(+) T cell subset, which are late-differentiated cells also described as suppressors. These cells are altered in a number of disease states and have been shown to be reduced in adults with T1D. We found that the proportion of CD28(-) cells within the CD8(+) T cell population is significantly reduced in juvenile type 1 diabetics. Furthermore, this reduction is not correlated with age in T1D juveniles, although a significant negative correlation between proportion CD28(-) CD8(+) T cells and age was observed in the healthy controls. Finally, correlation analysis revealed a significant and negative correlation between the proportion of CD28(-) CD8(+) T cells and T1D disease duration. These findings show that the CD28(-) CD8(+) T cell population is perturbed following onset of disease and may prove to be a valuable marker for monitoring the progression of T1D.

  14. Decreased expression of the Ets family transcription factor Fli-1 markedly prolongs survival and significantly reduces renal disease in MRL/lpr mice.

    PubMed

    Zhang, Xian K; Gallant, Sarah; Molano, Ivan; Moussa, Omar M; Ruiz, Phillip; Spyropoulos, Demetri D; Watson, Dennis K; Gilkeson, Gary

    2004-11-15

    Increased Fli-1 mRNA is present in PBLs from systemic lupus erythematosus patients, and transgenic overexpression of Fli-1 in normal mice leads to a lupus-like disease. We report in this study that MRL/lpr mice, an animal model of systemic lupus erythematosus, have increased splenic expression of Fli-1 protein compared with BALB/c mice. Using mice with targeted gene disruption, we examined the effect of reduced Fli-1 expression on disease development in MRL/lpr mice. Complete knockout of Fli-1 is lethal in utero. Fli-1 protein expression in heterozygous MRL/lpr (Fli-1(+/-)) mice was reduced by 50% compared with wild-type MRL/lpr (Fli-1(+/+)) mice. Fli-1(+/-) MRL/lpr mice had significantly decreased serum levels of total IgG and anti-dsDNA Abs as disease progressed. Fli-1(+/-) MRL/lpr mice had significantly increased splenic CD8(+) and naive T cells compared with Fli-1(+/+) MRL/lpr mice. Both in vivo and in vitro production of MCP-1 were significantly decreased in Fli-1(+/-) MRL/lpr mice. The Fli-1(+/-) mice had markedly decreased proteinuria and significantly lower pathologic renal scores. At 48 wk of age, survival was significantly increased in the Fli-1(+/-) MRL/lpr mice, as 100% of Fli-1(+/-) MRL/lpr mice were alive, in contrast to only 27% of Fli-1(+/+) mice. These findings indicate that Fli-1 expression is important in lupus-like disease development, and that modulation of Fli-1 expression profoundly decreases renal disease and improves survival in MRL/lpr mice.

  15. Acquiring a Pet Dog Significantly Reduces Stress of Primary Carers for Children with Autism Spectrum Disorder: A Prospective Case Control Study.

    PubMed

    Wright, H F; Hall, S; Hames, A; Hardiman, J; Mills, R; Mills, D S

    2015-08-01

    This study describes the impact of pet dogs on stress of primary carers of children with Autism Spectrum Disorder (ASD). Stress levels of 38 primary carers acquiring a dog and 24 controls not acquiring a dog were sampled at: Pre-intervention (17 weeks before acquiring a dog), post-intervention (3-10 weeks after acquisition) and follow-up (25-40 weeks after acquisition), using the Parenting Stress Index. Analysis revealed significant improvements in the intervention compared to the control group for Total Stress, Parental Distress and Difficult Child. A significant number of parents in the intervention group moved from clinically high to normal levels of Parental Distress. The results highlight the potential of pet dogs to reduce stress in primary carers of children with an ASD.

  16. The Small Molecule Inhibitor G6 Significantly Reduces Bone Marrow Fibrosis and the Mutant Burden in a Mouse Model of Jak2-Mediated Myelofibrosis

    PubMed Central

    Kirabo, Annet; Park, Sung O.; Wamsley, Heather L.; Gali, Meghanath; Baskin, Rebekah; Reinhard, Mary K.; Zhao, Zhizhuang J.; Bisht, Kirpal S.; Keserű, György M.; Cogle, Christopher R.; Sayeski, Peter P.

    2013-01-01

    Philadelphia chromosome–negative myeloproliferative neoplasms, including polycythemia vera, essential thrombocytosis, and myelofibrosis, are disorders characterized by abnormal hematopoiesis. Among these myeloproliferative neoplasms, myelofibrosis has the most unfavorable prognosis. Furthermore, currently available therapies for myelofibrosis have little to no efficacy in the bone marrow and hence, are palliative. We recently developed a Janus kinase 2 (Jak2) small molecule inhibitor called G6 and found that it exhibits marked efficacy in a xenograft model of Jak2-V617F–mediated hyperplasia and a transgenic mouse model of Jak2-V617F–mediated polycythemia vera/essential thrombocytosis. However, its efficacy in Jak2-mediated myelofibrosis has not previously been examined. Here, we hypothesized that G6 would be efficacious in Jak2-V617F–mediated myelofibrosis. To test this, mice expressing the human Jak2-V617F cDNA under the control of the vav promoter were administered G6 or vehicle control solution, and efficacy was determined by measuring parameters within the peripheral blood, liver, spleen, and bone marrow. We found that G6 significantly reduced extramedullary hematopoiesis in the liver and splenomegaly. In the bone marrow, G6 significantly reduced pathogenic Jak/STAT signaling by 53%, megakaryocytic hyperplasia by 70%, and the Jak2 mutant burden by 68%. Furthermore, G6 significantly improved the myeloid to erythroid ratio and significantly reversed the myelofibrosis. Collectively, these results indicate that G6 is efficacious in Jak2-V617F–mediated myelofibrosis, and given its bone marrow efficacy, it may alter the natural history of this disease. PMID:22796437

  17. The Activating NKG2C Receptor Is Significantly Reduced in NK Cells after Allogeneic Stem Cell Transplantation in Patients with Severe Graft-versus-Host Disease.

    PubMed

    Kordelas, Lambros; Steckel, Nina-Kristin; Horn, Peter A; Beelen, Dietrich W; Rebmann, Vera

    2016-10-27

    Natural killer (NK) cells play a central role in the innate immune system. In allogeneic stem cell transplantation (alloSCT), alloreactive NK cells derived from the graft are thought to mediate the elimination of leukemic cells and dendritic cells in the patient and thereby to reduce the risk of leukemic relapse and graft-versus-host reactions. The alloreactivity of NK cells is determined by various receptors, including the activating CD94/NKG2C and the inhibitory CD94/NKG2A receptors, both of which recognize the non-classical human leukocyte antigen E (HLA-E). Here we analyze the contribution of these receptors to NK cell alloreactivity in 26 patients over the course of the first year after alloSCT for acute myeloid leukemia, myelodysplastic syndrome, or T cell non-Hodgkin lymphoma. Our results show that NK cells expressing the activating CD94/NKG2C receptor are significantly reduced in patients with severe acute and chronic graft-versus-host disease (GvHD) after alloSCT. Moreover, the ratio of CD94/NKG2C to CD94/NKG2A was reduced in patients with severe acute and chronic GvHD after receiving an HLA-mismatched graft. Collectively, these results provide evidence for the first time that CD94/NKG2C is involved in GvHD prevention.

  18. The ACAT inhibitor VULM1457 significantly reduced production and secretion of adrenomedullin (AM) and down-regulated AM receptors on human hepatoblastic cells.

    PubMed

    Drímal, J; Fáberová, V; Schmidtová, L; Bednáriková, M; Drímal, J; Drímal, D

    2005-12-01

    Acyl-CoA:cholesterol acyltransferase (ACAT) is an important enzyme in the pathways of cholesterol esterification. The new ACAT inhibitor 1-(2,6-diisopropyl-phenyl)-3-[4-(4'-nitrophenylthio)phenyl] urea (VULM1457) has been shown to significantly reduce atherogenic activity in experimental animal atherosclerosis. The proliferative hormone adrenomedullin (AM) is released in response to hypoxia; however, its role in cellular protection has remained elusive, and the effect of increased local AM production in cells, with resultant down-regulation of AM receptors, has not yet been investigated. We hypothesized that increased expression of AM in hypoxic cells reflects excessive AM production with resultant AM receptor down-regulation and surface-membrane protein degradation, and that the new specific ACAT inhibitor would reduce AM induction in hypoxia and thus cell proliferation. To investigate cellular AM signaling and the protection induced by VULM1457, we characterized specific surface-membrane [125I]AM receptors expressed on cells and evaluated AM secretion (RIA assays), AM mRNA expression (RT-PCR analysis), and proliferation ([3H]thymidine incorporation) in control, hypoxic, and metabolically stressed human hepatoblastoma cell lines exposed to gradually increasing concentrations of VULM1457. At concentrations of 0.03 and 0.1 micromol/l, VULM1457 significantly down-regulated specific AM receptors on HepG2 cells and reduced AM secretion by HepG2 cells exposed to hypoxia. These results suggest that VULM1457, as a new member of the ACAT inhibitor family, could negatively regulate AM-induced cell proliferation, which may correlate with down-regulation of membrane-bound AM receptors on HepG2 cells and with the induction and expression of AM in hypoxia.

  19. A significant correlation between the plasma levels of coenzyme Q10 and vitamin B-6 and a reduced risk of coronary artery disease.

    PubMed

    Lee, Bor-Jen; Yen, Chi-Hua; Hsu, Hui-Chen; Lin, Jui-Yuan; Hsia, Simon; Lin, Ping-Ting

    2012-10-01

    Coronary artery disease (CAD) is the leading cause of death worldwide. The purpose of this study was to investigate the relationship between plasma levels of coenzyme Q10 and vitamin B-6 and the risk of CAD. Patients with at least 50% stenosis of one major coronary artery identified by cardiac catheterization were assigned to the case group (n = 45). The control group (n = 89) comprised healthy individuals with normal blood biochemistry. The plasma concentrations of coenzyme Q10 and vitamin B-6 (pyridoxal 5'-phosphate) and the lipid profiles of the participants were measured. Subjects with CAD had significantly lower plasma levels of coenzyme Q10 and vitamin B-6 compared to the control group. The plasma coenzyme Q10 concentration (β = 1.06, P = .02) and the ratio of coenzyme Q10 to total cholesterol (β = .28, P = .01) were positively correlated with vitamin B-6 status. Subjects with higher coenzyme Q10 concentration (≥516.0 nmol/L) had a significantly lower risk of CAD, even after adjusting for the risk factors for CAD. Subjects with higher pyridoxal 5'-phosphate concentration (≥59.7 nmol/L) also had a significantly lower risk of CAD, but the relationship lost its statistical significance after adjusting for the risk factors of CAD. There was a significant correlation between the plasma levels of coenzyme Q10 and vitamin B-6 and a reduced risk of CAD. Further study is needed to examine the benefits of administering coenzyme Q10 in combination with vitamin B-6 to CAD patients, especially those with low coenzyme Q10 level.

  20. Oxygen-modifying treatment with ARCON reduces the prognostic significance of hemoglobin in squamous cell carcinoma of the head and neck

    SciTech Connect

    Hoogsteen, Ilse J. . E-mail: i.hoogsteen@rther.umcn.nl; Pop, Lucas A.M.; Marres, Henri A.M.; Hoogen, Franciscus J.A. van den; Kaanders, Johannes H.A.M.

    2006-01-01

    Purpose: To evaluate the prognostic significance of hemoglobin (Hb) levels measured before and during treatment with accelerated radiotherapy with carbogen and nicotinamide (ARCON). Methods and Materials: Two hundred fifteen patients with locally advanced tumors of the head and neck were included in a phase II trial of ARCON. This treatment regimen combines accelerated radiotherapy for reduction of repopulation with carbogen breathing and nicotinamide to reduce hypoxia. In these patients, Hb levels were measured before, during, and after radiotherapy. Results: Preirradiation and postirradiation Hb levels were available for 206 and 195 patients respectively. Hb levels below normal were most frequently seen among patients with T4 (p < 0.001) and N2 (p < 0.01) disease. Patients with a larynx tumor had significantly higher Hb levels (p < 0.01) than other tumor sites. During radiotherapy, 69 patients experienced a decrease in Hb level. In a multivariate analysis there was no prognostic impact of Hb level on locoregional control, disease-free survival, and overall survival. Primary tumor site was independently prognostic for locoregional control (p = 0.018), and gender was the only prognostic factor for disease-free and overall survival (p < 0.05). High locoregional control rates were obtained for tumors of the larynx (77%) and oropharynx (72%). Conclusion: Hemoglobin level was not found to be of prognostic significance for outcome in patients with squamous cell carcinoma of the head and neck after oxygen-modifying treatment with ARCON.

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW

  2. Combined steam and ultrasound treatment of broilers at slaughter: a promising intervention to significantly reduce numbers of naturally occurring campylobacters on carcasses.

    PubMed

    Musavian, Hanieh S; Krebs, Niels H; Nonboe, Ulf; Corry, Janet E L; Purnell, Graham

    2014-04-17

    Steam or hot water decontamination treatment of broiler carcasses is hampered by process limitations due to prolonged treatment times and adverse changes to the epidermis. In this study, a combination of steam with ultrasound (SonoSteam®) was investigated on naturally contaminated broilers that were processed at conventional slaughter speeds of 8,500 birds per hour in a Danish broiler plant. Industrial-scale SonoSteam equipment was installed in the evisceration room, before the inside/outside carcass washer. The SonoSteam treatment was evaluated in two separate trials performed on two different dates. Numbers of naturally occurring Campylobacter spp. and TVC were determined from paired samples of skin excised from opposite sides of the breast of the same carcass, before and after treatments. Sampling was performed at two different points on the line: i) before and after the SonoSteam treatment and ii) before the SonoSteam treatment and after 80 min of air chilling. A total of 44 carcasses were examined in the two trials. Results from the first trial showed that the mean initial Campylobacter contamination level of 2.35 log₁₀ CFU was significantly reduced (n=12, p<0.001) to 1.40 log₁₀ CFU after treatment. A significant reduction (n=11, p<0.001) was also observed with samples analyzed before SonoSteam treatment (2.64 log₁₀ CFU) and after air chilling (1.44 log₁₀ CFU). In the second trial, significant reductions (n=10, p<0.05) were obtained for carcasses analyzed before (mean level of 2.23 log₁₀ CFU) and after the treatment (mean level of 1.36 log₁₀ CFU). Significant reductions (n=11, p<0.01) were also found for Campylobacter numbers analyzed before the SonoSteam treatment (2.02 log₁₀ CFU) and after the air chilling treatment (1.37 log₁₀ CFU). The effect of air chilling without SonoSteam treatment was determined using 12 carcasses pre- and postchill. Results showed insignificant reductions of 0.09 log₁₀ from a mean initial level of

  3. Hypoxis hemerocallidea Significantly Reduced Hyperglycaemia and Hyperglycaemic-Induced Oxidative Stress in the Liver and Kidney Tissues of Streptozotocin-Induced Diabetic Male Wistar Rats

    PubMed Central

    Oguntibeju, Oluwafemi O.; Meyer, Samantha; Aboua, Yapo G.; Goboza, Mediline

    2016-01-01

    Background. Hypoxis hemerocallidea is a native plant that grows in the Southern African regions and is well known for its beneficial medicinal effects in the treatment of diabetes, cancer, and high blood pressure. Aim. This study evaluated the effects of Hypoxis hemerocallidea on oxidative stress biomarkers, hepatic injury, and other selected biomarkers in the liver and kidneys of healthy nondiabetic and streptozotocin- (STZ-) induced diabetic male Wistar rats. Materials and Methods. Rats were injected intraperitoneally with 50 mg/kg of STZ to induce diabetes. The plant extract-Hypoxis hemerocallidea (200 mg/kg or 800 mg/kg) aqueous solution was administered (daily) orally for 6 weeks. Antioxidant activities were analysed using a Multiskan Spectrum plate reader while other serum biomarkers were measured using the RANDOX chemistry analyser. Results. Both dosages (200 mg/kg and 800 mg/kg) of Hypoxis hemerocallidea significantly reduced the blood glucose levels in STZ-induced diabetic groups. Activities of liver enzymes were increased in the diabetic control and in the diabetic group treated with 800 mg/kg, whereas the 200 mg/kg dosage ameliorated hepatic injury. In the hepatic tissue, the oxygen radical absorbance capacity (ORAC), ferric reducing antioxidant power (FRAP), catalase, and total glutathione were reduced in the diabetic control group. However treatment with both doses improved the antioxidant status. The FRAP and the catalase activities in the kidney were elevated in the STZ-induced diabetic group treated with 800 mg/kg of the extract possibly due to compensatory responses. Conclusion. Hypoxis hemerocallidea demonstrated antihyperglycemic and antioxidant effects especially in the liver tissue. PMID:27403200

  4. Cleaning with a wet sterile gauze significantly reduces contamination of sutures, instruments, and surgical gloves in an ex-vivo pelvic flexure enterotomy model in horses.

    PubMed

    Giusto, Gessica; Tramuta, Clara; Caramello, Vittorio; Comino, Francesco; Nebbia, Patrizia; Robino, Patrizia; Singer, Ellen; Grego, Elena; Gandini, Marco

    2017-01-01

    The objective of this study was to investigate whether cleaning surgical materials used to close pelvic flexure enterotomies with a wet sterile gauze will reduce contamination and whether the use of a full thickness appositional suture pattern (F) or a partial thickness inverting (or Cushing) suture pattern (C) would make a difference in the level of contamination. Large colon specimens were assigned to group F or C and divided into subgroups N and G. In group G, a wet sterile gauze was passed over the suture material, another over the instruments, and another over the gloves. In group N, no treatment was applied. The bacterial concentration was measured by optical density (OD) at 24 h. The OD of subgroup CG was lower than that of subgroup CN (P = 0.019). The OD of subgroup FG was lower than that of subgroup FN (P = 0.02). The OD of subgroups CG, CN, FG, and FN was lower than that of the negative control (P < 0.003, P < 0.001, P < 0.001, and P < 0.00). The use of a sterile wet gauze significantly reduced contamination of suture materials. A partial thickness inverting suture pattern did not produce less contamination than a full thickness appositional suture pattern.

  5. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The chromosome structure in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.
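
    As a rough illustration of how a genetic algorithm can search a network-design space, the sketch below encodes a design as a binary chromosome of candidate warehouses to open and scores it by fixed opening costs plus nearest-open-warehouse assignment costs. The encoding, cost data, and GA settings are illustrative assumptions and are not the novel chromosome structure proposed in this study.

    # Simple GA for a toy facility-location flavored network design problem.
    import numpy as np

    rng = np.random.default_rng(42)
    n_wh, n_cust = 6, 20
    open_cost = rng.uniform(50, 100, n_wh)                # fixed cost per warehouse
    ship_cost = rng.uniform(1, 20, size=(n_wh, n_cust))   # warehouse-to-customer cost

    def total_cost(chrom):
        """Opening cost plus nearest-open-warehouse assignment; penalize empty designs."""
        if not chrom.any():
            return 1e9
        return open_cost[chrom].sum() + ship_cost[chrom].min(axis=0).sum()

    def tournament(pop, costs):
        i, j = rng.integers(len(pop), size=2)
        return pop[i] if costs[i] < costs[j] else pop[j]

    pop = rng.integers(0, 2, size=(30, n_wh)).astype(bool)
    for _ in range(200):
        costs = np.array([total_cost(c) for c in pop])
        children = []
        for _ in range(len(pop)):
            p1, p2 = tournament(pop, costs), tournament(pop, costs)
            cut = rng.integers(1, n_wh)                   # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            flip = rng.random(n_wh) < 0.1                 # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        best = pop[costs.argmin()]                        # elitism: keep the best parent
        pop = np.array(children)
        pop[0] = best

    costs = np.array([total_cost(c) for c in pop])
    print("Open warehouses:", np.flatnonzero(pop[costs.argmin()]), "cost:", round(costs.min(), 1))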

  6. TU-G-204-09: The Effects of Reduced-Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    SciTech Connect

    Young, S; Lo, P; Kim, G; Hsu, W; Hoffman, J; Brown, M; McNitt-Gray, M

    2015-06-15

    Purpose: While Lung Cancer Screening CT is being performed at low doses, the purpose of this study was to investigate the effects of further reducing dose on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. 10 of the cases had at least one nodule reported on the NLST reader forms. Based on a previously-published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the high-attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131.

  7. The GALAD scoring algorithm based on AFP, AFP-L3, and DCP significantly improves detection of BCLC early stage hepatocellular carcinoma.

    PubMed

    Best, J; Bilgi, H; Heider, D; Schotten, C; Manka, P; Bedreli, S; Gorray, M; Ertle, J; van Grunsven, L A; Dechêne, A

    2016-12-01

    Background: Hepatocellular carcinoma (HCC) is one of the leading causes of death in cirrhotic patients worldwide. The detection rate for early stage HCC remains low despite screening programs. Thus, the majority of HCC cases are detected at advanced tumor stages with limited treatment options. To facilitate earlier diagnosis, this study aims to validate the added benefit of combining AFP with the novel biomarkers AFP-L3 and DCP and with an associated novel diagnostic algorithm called GALAD. Material and methods: Between 2007 and 2008 and from 2010 to 2012, 285 patients newly diagnosed with HCC and 402 control patients suffering from chronic liver disease were enrolled. AFP, AFP-L3, and DCP were measured using the µTASWako i30 automated immunoanalyzer. The diagnostic performance of the biomarkers was assessed both as single parameters and in a logistic regression model. Furthermore, a diagnostic algorithm (GALAD) based on gender, age, and the biomarkers mentioned above was validated. Results: AFP, AFP-L3, and DCP showed comparable sensitivities and specificities for HCC detection. The combination of all biomarkers had the highest sensitivity, with decreased specificity. In contrast, the biomarker-based GALAD score achieved a superior specificity of 93.3 % and a sensitivity of 85.6 %. For BCLC 0/A stage HCC, the GALAD algorithm provided the highest overall AUROC (0.9242), superior to any other marker combination. Conclusions: In our cohort, we demonstrated superior detection of early stage HCC with the combined use of these biomarkers, and in particular with GALAD, even in AFP-negative tumors.
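
    For orientation, a GALAD-type score is a logistic model over gender, age, and the (log-transformed) biomarkers; the sketch below shows only the general form of such a score. The coefficients are deliberately generic placeholders and are not the published GALAD coefficients.

    # Illustrative GALAD-style risk score; the coefficients are placeholders only.
    import math

    def galad_style_probability(sex_male, age, afp, afp_l3, dcp):
        # Placeholder weights: intercept, sex, age, log10(AFP), AFP-L3 (%), log10(DCP).
        b0, b_sex, b_age, b_afp, b_l3, b_dcp = -10.0, 1.5, 0.09, 2.3, 0.04, 1.3
        z = (b0 + b_sex * int(sex_male) + b_age * age
             + b_afp * math.log10(afp) + b_l3 * afp_l3 + b_dcp * math.log10(dcp))
        return 1.0 / (1.0 + math.exp(-z))   # logistic link -> probability of HCC

    # Example: a 62-year-old man with AFP 8 ng/mL, AFP-L3 12 %, and DCP 60 mAU/mL.
    print(f"Illustrative HCC probability: {galad_style_probability(True, 62, 8.0, 12.0, 60.0):.2f}")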

  8. A proper choice of route significantly reduces air pollution exposure--a study on bicycle and bus trips in urban streets.

    PubMed

    Hertel, Ole; Hvidberg, Martin; Ketzel, Matthias; Storm, Lars; Stausgaard, Lizzi

    2008-01-15

    A proper selection of route through the urban area may significantly reduce the air pollution exposure. This is the main conclusion from the presented study. Air pollution exposure is determined for two selected cohorts along the route going from home to working place, and back from working place to home. Exposure is determined with a street pollution model for three scenarios: bicycling along the shortest possible route, bicycling along the low exposure route along less trafficked streets, and finally taking the shortest trip using public transport. Furthermore, calculations are performed for the cases the trip takes place inside as well as outside the traffic rush hours. The results show that the accumulated air pollution exposure for the low exposure route is between 10% and 30% lower for the primary pollutants (NO(x) and CO). However, the difference is insignificant and in some cases even negative for the secondary pollutants (NO(2) and PM(10)/PM(2.5)). Considering only the contribution from traffic in the travelled streets, the accumulated air pollution exposure is between 54% and 67% lower for the low exposure route. The bus is generally following highly trafficked streets, and the accumulated exposure along the bus route is therefore between 79% and 115% higher than the high exposure bicycle route (the short bicycle route). Travelling outside the rush hour time periods reduces the accumulated exposure between 10% and 30% for the primary pollutants, and between 5% and 20% for the secondary pollutants. The study indicates that a web based route planner for selecting the low exposure route through the city might be a good service for the public. In addition the public may be advised to travel outside rush hour time periods.

  9. Mass Administration of Ivermectin for the Elimination of Onchocerciasis Significantly Reduced and Maintained Low the Prevalence of Strongyloides stercoralis in Esmeraldas, Ecuador

    PubMed Central

    Anselmi, Mariella; Buonfrate, Dora; Guevara Espinoza, Angel; Prandi, Rosanna; Marquez, Monica; Gobbo, Maria; Montresor, Antonio; Albonico, Marco; Racines Orbe, Marcia; Bisoffi, Zeno

    2015-01-01

    Objectives To evaluate the effect of ivermectin mass drug administration on strongyloidiasis and other soil transmitted helminthiases. Methods We conducted a retrospective analysis of data collected in Esmeraldas (Ecuador) during surveys conducted in areas where ivermectin was annually administered to the entire population for the control of onchocerciasis. Data from 5 surveys, conducted between 1990 (before the start of the distribution of ivermectin) and 2013 (six years after the interruption of the intervention) were analyzed. The surveys also comprised areas where ivermectin was not distributed because onchocerciasis was not endemic. Different laboratory techniques were used in the different surveys (direct fecal smear, formol-ether concentration, IFAT and IVD ELISA for Strongyloides stercoralis). Results In the areas where ivermectin was distributed the strongyloidiasis prevalence fell from 6.8% in 1990 to zero in 1996 and 1999. In 2013 prevalence in children was zero with stool examination and 1.3% with serology, in adult 0.7% and 2.7%. In areas not covered by ivermectin distribution the prevalence was 23.5% and 16.1% in 1996 and 1999, respectively. In 2013 the prevalence was 0.6% with fecal exam and 9.3% with serology in children and 2.3% and 17.9% in adults. Regarding other soil transmitted helminthiases: in areas where ivermectin was distributed the prevalence of T. trichiura was significantly reduced, while A. lumbricoides and hookworms were seemingly unaffected. Conclusions Periodic mass distribution of ivermectin had a significant impact on the prevalence of strongyloidiasis, less on trichuriasis and apparently no effect on ascariasis and hookworm infections. PMID:26540412

  10. Immunization of teenagers with a fifth dose of reduced DTaP-IPV induces high levels of pertussis antibodies with a significant increase in opsonophagocytic activity.

    PubMed

    Aase, Audun; Herstad, Tove Karin; Merino, Samuel; Bolstad, Merete; Sandbu, Synne; Bakke, Hilde; Aaberge, Ingeborg S

    2011-08-01

    Waning vaccine-induced immunity against Bordetella pertussis is observed among adolescents and adults. A high incidence of pertussis has been reported in this population, which serves as a reservoir for B. pertussis. A fifth dose of reduced antigen of diphtheria-tetanus-acellular-pertussis and inactivated polio vaccine was given as a booster dose to healthy teenagers. The antibody activity against B. pertussis antigens was measured prior to and 4 to 8 weeks after the booster by different assays: enzyme-linked immunosorbent assays (ELISAs) of IgG and IgA against pertussis toxin (PT) and filamentous hemagglutinin (FHA), IgG against pertactin (PRN), opsonophagocytic activity (OPA), and IgG binding to live B. pertussis. There was a significant increase in the IgG activity against PT, FHA, and PRN following the booster immunization (P < 0.001). The prebooster sera showed a geometric mean OPA titer of 65.1 and IgG binding to live bacteria at a geometric mean concentration of 164.9 arbitrary units (AU)/ml. Following the fifth dose, the OPA increased to a titer of 360.4, and the IgG concentration against live bacteria increased to 833.4 AU/ml (P < 0.001 for both). The correlation analyses between the different assays suggest that antibodies against FHA and PRN contribute the most to the OPA and IgG binding.

  11. Procalcitonin Biomarker Algorithm Reduces Antibiotic Prescriptions, Duration of Therapy, and Costs in Chronic Obstructive Pulmonary Disease: A Comparison in the Netherlands, Germany, and the United Kingdom.

    PubMed

    van der Maas, Marloes E; Mantjes, Gertjan; Steuten, Lotte M G

    2017-04-01

    Antibiotics are often recommended as treatment for patients with chronic obstructive pulmonary disease (COPD) exacerbations. However, not all COPD exacerbations are caused by bacterial infections, and there is consequently considerable misuse and overuse of antibiotics among patients with COPD. This places a severe burden on healthcare resources and increases the risk of developing antibiotic resistance. The biomarker procalcitonin (PCT) is sufficiently specific to distinguish bacterial from nonbacterial inflammation and may therefore help to rationalize antibiotic prescriptions. In this study, we report a three-country comparison of the health and economic consequences of a PCT biomarker-guided prescription and clinical decision-making strategy compared to current practice in hospitalized patients with COPD exacerbations. A decision tree was developed comparing the expected costs and effects of the PCT algorithm to current practice in the Netherlands, Germany, and the United Kingdom. The time horizon of the model captured the length of hospital stay, and a societal perspective was adopted. The primary health outcome was the duration of antibiotic therapy. The incremental cost-effectiveness ratio was defined as the incremental costs per antibiotic day avoided. The incremental cost savings per day of antibiotic therapy avoided were €90 in the Netherlands, €125 in Germany, and €52 in the United Kingdom. Probabilistic sensitivity analyses showed that in the majority of simulations, the PCT biomarker strategy was superior to current practice (the Netherlands: 58%, Germany: 58%, and the United Kingdom: 57%). In conclusion, the PCT biomarker algorithm to optimize antibiotic prescriptions in COPD is likely to be cost-effective compared to current practice. Both the percentage of patients who start antibiotic treatment and the duration of antibiotic therapy are reduced with the PCT decision algorithm, leading to a decrease in
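
    The headline economic quantity in this comparison is the incremental cost-effectiveness ratio expressed as incremental cost per antibiotic day avoided. The short sketch below shows only that arithmetic; the per-patient costs and antibiotic days are invented for illustration and are not taken from the published model.

    # ICER per antibiotic day avoided; a negative value means the PCT strategy both
    # saves money and avoids antibiotic days (it "dominates" current practice).
    def icer_per_antibiotic_day_avoided(cost_pct, cost_usual, abx_days_pct, abx_days_usual):
        days_avoided = abx_days_usual - abx_days_pct
        if days_avoided <= 0:
            raise ValueError("the PCT strategy must avoid at least some antibiotic days")
        return (cost_pct - cost_usual) / days_avoided

    # Hypothetical example: 3,550 vs 3,900 euros per patient and 5.2 vs 9.1 antibiotic days.
    print(round(icer_per_antibiotic_day_avoided(3550, 3900, 5.2, 9.1), 1))   # -89.7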

  12. Transfection of Sclerotinia sclerotiorum with In Vitro Transcripts of a Naturally Occurring Interspecific Recombinant of Sclerotinia sclerotiorum Hypovirus 2 Significantly Reduces Virulence of the Fungus

    PubMed Central

    Marzano, Shin-Yi Lee; Hobbs, Houston A.; Nelson, Berlin D.; Hartman, Glen L.; Eastburn, Darin M.; McCoppin, Nancy K.

    2015-01-01

    ABSTRACT A recombinant strain of Sclerotinia sclerotiorum hypovirus 2 (SsHV2) was identified from a North American Sclerotinia sclerotiorum isolate (328) from lettuce (Lactuca sativa L.) by high-throughput sequencing of total RNA. The 5′- and 3′-terminal regions of the genome were determined by rapid amplification of cDNA ends. The assembled nucleotide sequence was up to 92% identical to two recently reported SsHV2 strains but contained a deletion near its 5′ terminus of more than 1.2 kb relative to the other SsHV2 strains and an insertion of 524 nucleotides (nt) that was distantly related to Valsa ceratosperma hypovirus 1. This suggests that the new isolate is a heterologous recombinant of SsHV2 with a yet-uncharacterized hypovirus. We named the new strain Sclerotinia sclerotiorum hypovirus 2 Lactuca (SsHV2L) and deposited the sequence in GenBank with accession number KF898354. Sclerotinia sclerotiorum isolate 328 was coinfected with a strain of Sclerotinia sclerotiorum endornavirus 1 and was debilitated compared to cultures of the same isolate that had been cured of virus infection by cycloheximide treatment and hyphal tipping. To determine whether SsHV2L alone could induce hypovirulence in S. sclerotiorum, a full-length cDNA of the 14,538-nt viral genome was cloned. Transcripts corresponding to the viral RNA were synthesized in vitro and transfected into a virus-free isolate of S. sclerotiorum, DK3. Isolate DK3 transfected with SsHV2L was hypovirulent on soybean and lettuce and exhibited delayed maturation of sclerotia relative to virus-free DK3, completing Koch's postulates for the association of hypovirulence with SsHV2L. IMPORTANCE A cosmopolitan fungus, Sclerotinia sclerotiorum infects more than 400 plant species and causes a plant disease known as white mold that produces significant yield losses in major crops annually. Mycoviruses have been used successfully to reduce losses caused by fungal plant pathogens, but definitive relationships between

  13. 830 nm light-emitting diode (led) phototherapy significantly reduced return-to-play in injured university athletes: a pilot study

    PubMed Central

    Vasily, David B; Bradle, Jeanna; Rudio, Catharine; Calderhead, R Glen

    2016-01-01

    Background and Aims: For any committed athlete, getting back to conditioning and participation post-injury (return to play [RTP]) needs to be as swift as possible. The effects of near-infrared light-emitting diode (LED) therapy on pain control, blood flow enhancement and relaxation of muscle spasm (all aspects in the treatment of musculoskeletal injury) have attracted attention. The present pilot study was undertaken to assess the role of 830 nm LED phototherapy in safely accelerating RTP in injured university athletes. Subjects and Methods: Over a 15-month period, a total of 395 injuries including sprains, strains, ligament damage, tendonitis and contusions were treated with 1,669 sessions of 830 nm LED phototherapy (mean of 4.3 treatments per injury, range 2 – 6). Efficacy was measured with pain attenuation on a visual analog scale (VAS) and the RTP period compared with historically-based anticipated RTP with conventional therapeutic intervention. Results: A full set of treatment sessions and follow-up data was able to be recorded in 65 informed and consenting subjects who achieved pain relief on the VAS of up to 6 points in from 2–6 sessions. The average LED-mediated RTP in the 65 subjects was significantly shorter at 9.6 days, compared with the mean anticipated RTP of 19.23 days (p = 0.0066, paired two-tailed Student's t-test). A subjective satisfaction survey was carried out among the 112 students with injuries incurred from January to May, 2015. Eighty-eight (78.5%) were either very satisfied or satisfied, and only 8 (7.2%) were dissatisfied. Conclusions: For any motivated athlete, RTP may be the most important factor postinjury based on the resolution of pain and inflammation and repair to tissue trauma. 830 nm LED phototherapy significantly and safely reduced the RTP in dedicated university athletes over a wide range of injuries with no adverse events. One limitation of the present study was the subjective nature of the assessments, and the lack of any

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. A Yersinia pestis-specific, lytic phage preparation significantly reduces viable Y. pestis on various hard surfaces experimentally contaminated with the bacterium

    PubMed Central

    Rashid, Mohammed H.; Revazishvili, Tamara; Dean, Timothy; Butani, Amy; Verratti, Kathleen; Bishop-Lilly, Kimberly A.; Sozhamannan, Shanmuga; Sulakvelidze, Alexander; Rajanna, Chythanya

    2012-01-01

    Five Y. pestis bacteriophages obtained from various sources were characterized to determine their biological properties, including their taxonomic classification, host range and genomic diversity. Four of the phages (YpP-G, Y, R and YpsP-G) belong to the Podoviridae family, and the fifth phage (YpsP-PST) belongs to the Myoviridae family, of the order Caudovirales comprising double-stranded DNA phages. The genomes of the four Podoviridae phages were fully sequenced and found to be almost identical to each other and to those of two previously characterized Y. pestis phages Yepe2 and φA1122. However, despite their genomic homogeneity, they varied in their ability to lyse Y. pestis and Y. pseudotuberculosis strains. The five phages were combined to yield a “phage cocktail” (tentatively designated “YPP-100”) capable of lysing the 59 Y. pestis strains in our collection. YPP-100 was examined for its ability to decontaminate three different hard surfaces (glass, gypsum board and stainless steel) experimentally contaminated with a mixture of three genetically diverse Y. pestis strains CO92, KIM and 1670G. Five minutes of exposure to YPP-100 preparations containing phage concentrations of ca. 10⁹, 10⁸ and 10⁷ PFU/mL completely eliminated all viable Y. pestis cells from all three surfaces, but a few viable cells were recovered from the stainless steel coupons treated with YPP-100 diluted to contain ca. 10⁶ PFU/mL. However, even that highly diluted preparation significantly (p < 0.05) reduced Y. pestis levels by ≥ 99.97%. Our data support the idea that Y. pestis phages may be useful for decontaminating various hard surfaces naturally- or intentionally-contaminated with Y. pestis. PMID:23275868

  16. Reducing Human-Tsetse Contact Significantly Enhances the Efficacy of Sleeping Sickness Active Screening Campaigns: A Promising Result in the Context of Elimination

    PubMed Central

    Courtin, Fabrice; Camara, Mamadou; Rayaisse, Jean-Baptiste; Kagbadouno, Moise; Dama, Emilie; Camara, Oumou; Traoré, Ibrahima S.; Rouamba, Jérémi; Peylhard, Moana; Somda, Martin B.; Leno, Mamadou; Lehane, Mike J.; Torr, Steve J.; Solano, Philippe; Jamonneau, Vincent; Bucheton, Bruno

    2015-01-01

    Background Control of gambiense sleeping sickness, a neglected tropical disease targeted for elimination by 2020, relies mainly on mass screening of populations at risk and treatment of cases. This strategy is however challenged by the existence of undetected reservoirs of parasites that contribute to the maintenance of transmission. In this study, performed in the Boffa disease focus of Guinea, we evaluated the value of adding vector control to medical surveys and measured its impact on disease burden. Methods The focus was divided into two parts (screen and treat in the western part; screen and treat plus vector control in the eastern part) separated by the Rio Pongo river. Population census and baseline entomological data were collected from the entire focus at the beginning of the study and insecticide impregnated targets were deployed on the eastern bank only. Medical surveys were performed in both areas in 2012 and 2013. Findings In the vector control area, there was an 80% decrease in tsetse density, resulting in a significant decrease of human tsetse contacts, and a decrease of disease prevalence (from 0.3% to 0.1%; p=0.01), and an almost nil incidence of new infections (<0.1%). In contrast, incidence was 10 times higher in the area without vector control (>1%, p<0.0001) with a disease prevalence increasing slightly (from 0.5 to 0.7%, p=0.34). Interpretation Combining medical and vector control was decisive in reducing T. b. gambiense transmission and in speeding up progress towards elimination. Similar strategies could be applied in other foci. PMID:26267667

  17. WE-A-17A-06: Evaluation of an Automatic Interstitial Catheter Digitization Algorithm That Reduces Treatment Planning Time and Provides a Means for Adaptive Re-Planning in HDR Brachytherapy of Gynecologic Cancers

    SciTech Connect

    Dise, J; Liang, X; Lin, L; Teo, B

    2014-06-15

    Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted in the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatic and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 minutes to 43 minutes with a mean digitization time of 37 minutes. The bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic
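
    The record above describes a seed-based region growing step constrained by a spline model. As a hedged illustration only, the sketch below shows a minimal intensity-threshold region growing from user-supplied seeds on a 3D NumPy volume; the array, seed list, and threshold are hypothetical, and the spline constraint and CT pre-processing of the actual method are omitted.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seeds, threshold):
    """Grow a binary region from seed voxels, accepting 6-connected
    neighbours whose intensity exceeds `threshold` (hypothetical criterion)."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for s in seeds:
        mask[s] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and volume[nz, ny, nx] > threshold):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Toy example: a bright "catheter" track inside a noisy volume.
rng = np.random.default_rng(0)
vol = rng.normal(0, 1, (20, 20, 20))
vol[:, 10, 10] = 10.0                      # synthetic high-intensity track
track = region_grow_3d(vol, [(0, 10, 10)], threshold=5.0)
print(int(track.sum()))                    # expect the 20 track voxels
```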

  18. Water in the hydration shell of halide ions has significantly reduced Fermi resonance and moderately enhanced Raman cross section in the OH stretch regions.

    PubMed

    Ahmed, Mohammed; Singh, Ajay K; Mondal, Jahur A; Sarkar, Sisir K

    2013-08-22

    Water in the presence of electrolytes plays an important role in biological and industrial processes. The properties of water, such as the intermolecular coupling, Fermi resonance (FR), hydrogen-bonding, and Raman cross section, were investigated by measuring the Raman spectra in the OD and OH stretch regions in the presence of alkali halides (NaX; X = F, Cl, Br, I). It is observed that the changes in spectral characteristics by the addition of NaX in D2O are similar to those obtained by the addition of H2O in D2O. The spectral width decreases more significantly on addition of NaX to D2O (H2O) than on its addition to the isotopically diluted water. Quantitative estimation, on the basis of integrated Raman intensity, revealed that the relative Raman cross section, σ(H)/σ(b) (σ(H) and σ(b) are the average Raman cross sections of water in the first hydration shell of X(-) and in bulk, respectively), in D2O and H2O is higher than those in the respective isotopically diluted water. These results suggest that water in the hydration shell has reduced FR and intermolecular coupling compared to those in bulk. In the isotopically diluted water, the relative Raman cross section increases with the size of the halide ions (σ(H)/σ(b) = 0.6, 1.1, 1.5, and 1.9 for F(-), Cl(-), Br(-), and I(-), respectively), which is assignable to the enhancement of the Raman cross section by charge transfer from the halide ions to the hydrating water. Nevertheless, the experimentally determined σ(H)/σ(b) is lower than the calculated values obtained on the basis of the energy of the charge transfer state of water. The weak enhancement of σ(H)/σ(b) signifies that the charge transfer transition in the hydration shell of halide ions causes little change in the OD (OH) bond lengths of hydrating water.

  19. Immunization with H7-HCP-tir-intimin significantly reduces colonization and shedding of Escherichia coli O157:H7 in goats.

    PubMed

    Zhang, Xuehan; Yu, Zhengyu; Zhang, Shuping; He, Kongwang

    2014-01-01

    Enterohemorrhagic Escherichia coli (EHEC) O157:H7 is the causative agent of hemorrhagic colitis and hemolytic uremic syndrome in humans. However, the bacterium can colonize the intestines of ruminants without causing clinical signs. EHEC O157:H7 needs flagella (H7) and hemorrhagic coli pili (HCP) to adhere to epithelial cells. Then the bacterium uses the translocated intimin receptor (Tir) and an outer membrane adhesin (intimin) to colonize hosts, leading to attaching and effacing (A/E) lesions. A tetravalent recombinant vaccine (H7-HCP-Tir-Intimin) composed of immunologically important portions of the H7, HCP, Tir and Intimin proteins was constructed and its efficacy was evaluated using a caprine model. The results showed that the recombinant vaccine induced strong humoral and mucosal immune responses and protected the subjects from live challenge with EHEC O157:H7 strain 86-24. After a second immunization, the average IgG titer peaked at 7.2 × 10(5). Five days after challenge, E. coli O157:H7 was no longer detectable in the feces of vaccinated goats, but naïve goats shed the bacterium throughout the course of the challenge. Cultures of intestinal tissues showed that vaccination of goats with H7-HCP-Tir-Intimin effectively reduced intestinal colonization by EHEC O157:H7. Recombinant H7-HCP-Tir-Intimin protein is an excellent vaccine candidate. Data from the present study warrant further efficacy studies aimed at reducing the EHEC O157:H7 load on farms and the contamination of carcasses by this zoonotic pathogen.

  20. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
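
    To make the idea of threshold-sensitive energy production concrete, here is a hedged, toy sketch (not the Sandia VAWT model) of accumulating energy from a wind time series under a simple hysteretic start/stop controller; the power curve, thresholds, and synthetic wind record are all hypothetical.

```python
import numpy as np

def simulate_energy(wind, power_curve, start_speed, stop_speed, dt_hours=1 / 60):
    """Accumulate energy (kWh) over a wind record (m/s, one sample per minute)
    for a simple hysteretic start/stop controller (illustrative only)."""
    running = False
    energy = 0.0
    for v in wind:
        if not running and v >= start_speed:
            running = True
        elif running and v < stop_speed:
            running = False
        if running:
            energy += power_curve(v) * dt_hours
    return energy

# Hypothetical power curve (kW) and a synthetic gusty wind record.
power_curve = lambda v: min(100.0, 0.5 * max(v - 3.0, 0.0) ** 3)
rng = np.random.default_rng(1)
wind = 6 + 2 * np.sin(np.linspace(0, 20, 1440)) + rng.normal(0, 1, 1440)

# Different start/stop thresholds yield noticeably different daily energy.
for start, stop in [(4.0, 3.0), (7.0, 6.0)]:
    print(start, stop, round(simulate_energy(wind, power_curve, start, stop), 1))
```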

  1. Significant increases in pulping efficiency in C4H-F5H-transformed poplars: improved chemical savings and reduced environmental toxins.

    PubMed

    Huntley, Shannon K; Ellis, Dave; Gilbert, Margarita; Chapple, Clint; Mansfield, Shawn D

    2003-10-08

    The gene encoding ferulate 5-hydroxylase (F5H) was overexpressed in poplar (Populus tremula x Populus alba) using the cinnamate-4-hydroxylase (C4H) promoter to drive expression specifically in cells involved in the lignin biosynthetic pathway and was shown to significantly alter the mole percentage of syringyl subunits in the lignin, as determined by thioacidolysis. Analysis of poplar transformed with a C4H-F5H construct demonstrated significant increases in chemical (kraft) pulping efficiency from greenhouse-grown trees. Compared to wild-type wood, decreases of 23 kappa units and increases of >20 ISO brightness units were observed in trees exhibiting high syringyl monomer concentrations. These changes were associated with no significant modification in total lignin content and no observed phenotypic differences. C4H-F5H-transformed trees could increase pulp throughputs at mills by >60% while concurrently decreasing chemicals employed during processing (chemical pulping and bleaching) and, consequently, the amount of deleterious byproducts released into the environment.

  2. Significantly Improved Sodium-Ion Storage Performance of CuS Nanosheets Anchored into Reduced Graphene Oxide with Ether-Based Electrolyte.

    PubMed

    Li, Jinliang; Yan, Dong; Lu, Ting; Qin, Wei; Yao, Yefeng; Pan, Likun

    2017-01-25

    Sodium-ion batteries (SIBs) have recently attracted considerable interest as an energy storage technology because they are safe, cost-effective, and nontoxic. However, many challenges remain for the development of SIBs with high specific capacity, high rate capability, and long cycle life. Therefore, CuS, an important earth-abundant, low-cost semiconductor, was applied as the anode of SIBs with an ether-based electrolyte instead of the conventional ester-based electrolyte. By incorporating reduced graphene oxide (RGO) into CuS nanosheets and optimizing the cutoff voltage, it is found that the sodium-ion storage performance can be greatly enhanced using the ether-based electrolyte. The CuS-RGO composites deliver an initial Coulombic efficiency of 94% and a maximum specific capacity of 392.9 mAh g(-1) after 50 cycles at a current density of 100 mA g(-1). A specific capacity of 345 mAh g(-1) is retained after 450 cycles at a current density of 1 A g(-1). Such an excellent electrochemical performance is ascribed to the conductive network construction of the CuS-RGO composites, the suppression of dissolved polysulfide intermediates by using the ether-based electrolyte, and the avoidance of the conversion-type reaction by optimizing the cutoff voltage.

  3. Ethanol, not detectably metabolized in brain, significantly reduces brain metabolism, probably via action at specific GABA(A) receptors and has measurable metabolic effects at very low concentrations.

    PubMed

    Rae, Caroline D; Davidson, Joanne E; Maher, Anthony D; Rowlands, Benjamin D; Kashem, Mohammed A; Nasrallah, Fatima A; Rallapalli, Sundari K; Cook, James M; Balcar, Vladimir J

    2014-04-01

    Ethanol is a known neuromodulatory agent with reported actions at a range of neurotransmitter receptors. Here, we measured the effect of alcohol on metabolism of [3-¹³C]pyruvate in the adult guinea pig brain cortical tissue slice and compared the outcomes to those from a library of ligands active in the GABAergic system as well as studying the metabolic fate of [1,2-¹³C]ethanol. Analyses of metabolic profile clusters suggest that the significant reductions in metabolism induced by ethanol (10, 30 and 60 mM) are via action at neurotransmitter receptors, particularly α4β3δ receptors, whereas very low concentrations of ethanol may produce metabolic responses owing to release of GABA via GABA transporter 1 (GAT1) and the subsequent interaction of this GABA with local α5- or α1-containing GABA(A)R. There was no measurable metabolism of [1,2-¹³C]ethanol, with no significant incorporation of ¹³C from [1,2-¹³C]ethanol into any measured metabolite above natural abundance, although there were measurable effects on total metabolite sizes similar to those seen with unlabelled ethanol.

  4. Methodology to predict a maximum follow-up period for breast cancer patients without significantly reducing the chance of detecting a local recurrence

    NASA Astrophysics Data System (ADS)

    Mould, Richard F.; Asselain, Bernard; DeRycke, Yann

    2004-03-01

    For breast cancer, where the prognosis of early stage disease is very good and where local recurrences, when they do occur, can present several years after treatment, the hospital resources required for annual follow-up examinations of what can be several hundred patients are financially significant. If, therefore, there is some method to estimate the maximum necessary length of follow-up, Tmax, then savings in physicians' time as well as reductions in outpatient workload can be achieved. In modern oncology, where expenses continue to increase exponentially due to staff salaries and the cost of chemotherapy drugs and of new treatment and imaging technology, the economic situation can no longer be ignored. The methodology of parametric modelling based on the lognormal distribution is described, showing that useful estimates for Tmax can be made by trading off Tmax against the fraction of patients who will experience a delay in detection of their local recurrence. This trade-off depends on the chosen tail of the lognormal. The methodology is described for stage T1 and T2 breast cancer and it is found that Tmax = 4 years, which is a significant reduction from the usual maximum of 10 years of follow-up employed by many hospitals for breast cancer patients. The methodology is equally applicable to cancers at other sites where the prognosis is good and some local recurrences may not occur until several years post-treatment.
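
    As a hedged worked example of the trade-off described above (not the paper's fitted model), the sketch below picks Tmax as a quantile of a lognormal time-to-recurrence distribution with hypothetical parameters: the smaller the fraction of recurrences one is willing to detect late, the longer the follow-up.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical lognormal time-to-local-recurrence model (times in years):
# ln(T) ~ Normal(mu, sigma^2).
mu, sigma = np.log(1.5), 0.8
dist = lognorm(s=sigma, scale=np.exp(mu))

# Trade-off: Tmax is the quantile beyond which a chosen fraction of
# recurrences would only be detected after follow-up has stopped.
for missed_fraction in (0.10, 0.05, 0.01):
    t_max = dist.ppf(1 - missed_fraction)
    print(f"accept missing {missed_fraction:.0%} of recurrences -> "
          f"follow up for about {t_max:.1f} years")
```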

  5. Overview of an Algorithm Plugin Package (APP)

    NASA Astrophysics Data System (ADS)

    Linda, M.; Tilmes, C.; Fleig, A. J.

    2004-12-01

    Science software that runs operationally is fundamentally different from software that runs on a scientist's desktop. There are complexities in hosting software for automated production that are necessary and significant. Identifying common aspects of these complexities can simplify algorithm integration. We use NASA's MODIS and OMI data production systems as examples. An Algorithm Plugin Package (APP) is science software that is combined with algorithm-unique elements that permit the algorithm to interface with, and function within, the framework of a data processing system. The framework runs algorithms operationally against large quantities of data. The extra algorithm-unique items are constrained by the design of the data processing system. APPs often include infrastructure that is largely similar. When the common elements in APPs are identified and abstracted, the cost of APP development, testing, and maintenance will be reduced. This paper is an overview of the extra algorithm-unique pieces that are shared between MODAPS and OMIDAPS APPs. Our exploration of APP structure will help builders of other production systems identify their common elements and reduce algorithm integration costs. Our goal is to complete the development of a library of functions and a menu of implementation choices that reflect common needs of APPs. The library and menu will reduce the time and energy required for science developers to integrate algorithms into production systems.
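
    The separation between framework-common and algorithm-unique pieces can be illustrated with a minimal plugin pattern. This is a hedged, hypothetical sketch, not the MODAPS/OMIDAPS interface: all class names, method names, and file labels below are invented for illustration.

```python
from abc import ABC, abstractmethod

class AlgorithmPlugin(ABC):
    """Hypothetical plugin contract: the framework stages inputs, the
    algorithm-unique adapter implements run(), the framework archives outputs."""

    @abstractmethod
    def required_inputs(self) -> list[str]: ...

    @abstractmethod
    def run(self, staged_files: dict[str, str]) -> dict[str, str]: ...

class ExampleScienceAlgorithm(AlgorithmPlugin):
    def required_inputs(self):
        return ["L1B_RADIANCE"]

    def run(self, staged_files):
        # Placeholder science step: a real APP would invoke the delivered
        # science executable against the staged granules.
        return {"L2_PRODUCT": staged_files["L1B_RADIANCE"] + ".l2"}

def framework_execute(plugin: AlgorithmPlugin, catalog: dict[str, str]):
    staged = {k: catalog[k] for k in plugin.required_inputs()}   # common staging
    outputs = plugin.run(staged)                                 # algorithm-unique
    print("archived:", outputs)                                  # common archiving

framework_execute(ExampleScienceAlgorithm(), {"L1B_RADIANCE": "granule_001.h5"})
```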

  6. Human Tubal-Derived Mesenchymal Stromal Cells Associated with Low Level Laser Therapy Significantly Reduces Cigarette Smoke-Induced COPD in C57BL/6 mice.

    PubMed

    Peron, Jean Pierre Schatzmann; de Brito, Auriléia Aparecida; Pelatti, Mayra; Brandão, Wesley Nogueira; Vitoretti, Luana Beatriz; Greiffo, Flávia Regina; da Silveira, Elaine Cristina; Oliveira-Junior, Manuel Carneiro; Maluf, Mariangela; Evangelista, Lucila; Halpern, Silvio; Nisenbaum, Marcelo Gil; Perin, Paulo; Czeresnia, Carlos Eduardo; Câmara, Niels Olsen Saraiva; Aimbire, Flávio; Vieira, Rodolfo de Paula; Zatz, Mayana; de Oliveira, Ana Paula Ligeiro

    2015-01-01

    Cigarette smoke-induced chronic obstructive pulmonary disease is a very debilitating disease, with a very high prevalence worldwide, which results in a substantial economic and social burden. Therefore, new therapeutic approaches to treat these patients are of unquestionable relevance. The use of mesenchymal stromal cells (MSCs) is an innovative and yet accessible approach for pulmonary acute and chronic diseases, mainly due to their important immunoregulatory, anti-fibrogenic, anti-apoptotic and pro-angiogenic properties. In addition, adjuvant therapies whose aim is to boost or synergize with their function should be tested. Low level laser (LLL) therapy is a relatively new and promising approach, with very low cost, no invasiveness and no side effects. Here, we aimed to study the effectiveness of human tubal-derived MSC (htMSC) cell therapy associated with a 30 mW/3 J, 660 nm LLL irradiation in experimental cigarette smoke-induced chronic obstructive pulmonary disease. Thus, C57BL/6 mice were exposed to cigarette smoke for 75 days (twice a day) and all experiments were performed on day 76. Experimental groups received htMSCs either intraperitoneally or intranasally and/or LLL irradiation, either alone or in association. We show that co-therapy greatly reduces lung inflammation, lowering the cellular infiltrate and pro-inflammatory cytokine secretion (IL-1β, IL-6, TNF-α and KC), which were followed by decreased mucus production, collagen accumulation and tissue damage. These findings seemed to be secondary to the reduction of both NF-κB and NF-AT activation in lung tissues with a concomitant increase in IL-10. In summary, our data suggest that the concomitant use of MSCs + LLLT may be a promising therapeutic approach for lung inflammatory diseases such as COPD.

  7. Human Tubal-Derived Mesenchymal Stromal Cells Associated with Low Level Laser Therapy Significantly Reduces Cigarette Smoke–Induced COPD in C57BL/6 mice

    PubMed Central

    Peron, Jean Pierre Schatzmann; de Brito, Auriléia Aparecida; Pelatti, Mayra; Brandão, Wesley Nogueira; Vitoretti, Luana Beatriz; Greiffo, Flávia Regina; da Silveira, Elaine Cristina; Oliveira-Junior, Manuel Carneiro; Maluf, Mariangela; Evangelista, Lucila; Halpern, Silvio; Nisenbaum, Marcelo Gil; Perin, Paulo; Czeresnia, Carlos Eduardo; Câmara, Niels Olsen Saraiva; Aimbire, Flávio; Vieira, Rodolfo de Paula; Zatz, Mayana; Ligeiro de Oliveira, Ana Paula

    2015-01-01

    Cigarette smoke-induced chronic obstructive pulmonary disease is a very debilitating disease, with a very high prevalence worldwide, which results in a substantial economic and social burden. Therefore, new therapeutic approaches to treat these patients are of unquestionable relevance. The use of mesenchymal stromal cells (MSCs) is an innovative and yet accessible approach for pulmonary acute and chronic diseases, mainly due to their important immunoregulatory, anti-fibrogenic, anti-apoptotic and pro-angiogenic properties. In addition, adjuvant therapies whose aim is to boost or synergize with their function should be tested. Low level laser (LLL) therapy is a relatively new and promising approach, with very low cost, no invasiveness and no side effects. Here, we aimed to study the effectiveness of human tubal-derived MSC (htMSC) cell therapy associated with a 30 mW/3 J, 660 nm LLL irradiation in experimental cigarette smoke-induced chronic obstructive pulmonary disease. Thus, C57BL/6 mice were exposed to cigarette smoke for 75 days (twice a day) and all experiments were performed on day 76. Experimental groups received htMSCs either intraperitoneally or intranasally and/or LLL irradiation, either alone or in association. We show that co-therapy greatly reduces lung inflammation, lowering the cellular infiltrate and pro-inflammatory cytokine secretion (IL-1β, IL-6, TNF-α and KC), which were followed by decreased mucus production, collagen accumulation and tissue damage. These findings seemed to be secondary to the reduction of both NF-κB and NF-AT activation in lung tissues with a concomitant increase in IL-10. In summary, our data suggest that the concomitant use of MSCs + LLLT may be a promising therapeutic approach for lung inflammatory diseases such as COPD. PMID:26322981

  8. Targeting CXCR1/2 Significantly Reduces Breast Cancer Stem Cell Activity and Increases the Efficacy of Inhibiting HER2 via HER2-dependent and -independent Mechanisms

    PubMed Central

    Singh, Jagdeep K.; Farnie, Gillian; Bundred, Nigel J.; Simões, Bruno M; Shergill, Amrita; Landberg, Göran; Howell, Sacha; Clarke, Robert B.

    2012-01-01

    Purpose Breast cancer stem-like cells (CSCs) are an important therapeutic target as they are predicted to be responsible for tumour initiation, maintenance and metastases. Interleukin-8 (IL-8) is upregulated in breast cancer and associated with poor prognosis. Breast cancer cell line studies indicate that IL-8 via its cognate receptors, CXCR1 and CXCR2, is important in regulating breast CSC activity. We investigated the role of IL-8 in the regulation of CSC activity using patient-derived breast cancers and determined the potential benefit of combining CXCR1/2 inhibition with HER2-targeted therapy. Experimental design CSC activity of metastatic and invasive human breast cancers (n=19) was assessed ex vivo using the mammosphere colony forming assay. Results Metastatic fluid IL-8 level correlated directly with mammosphere formation (r=0.652; P<0.05; n=10). Recombinant IL-8 directly increased mammosphere formation/self-renewal in metastatic and invasive breast cancers (n=17). IL-8 induced activation of EGFR/HER2 and downstream signalling pathways and effects were abrogated by inhibition of SRC, EGFR/HER2, PI3K or MEK. Furthermore, lapatinib inhibited the mammosphere-promoting effect of IL-8 in both HER2-positive and negative patient-derived cancers. CXCR1/2 inhibition also blocked the effect of IL-8 on mammosphere formation and added to the efficacy of lapatinib in HER2-positive cancers. Conclusions These studies establish a role for IL-8 in the regulation of patient-derived breast CSC activity and demonstrate that IL-8/CXCR1/2 signalling is partly mediated via a novel SRC and EGFR/HER2-dependent pathway. Combining CXCR1/2 inhibitors with current HER2-targeted therapies has potential as an effective therapeutic strategy to reduce CSC activity in breast cancer and improve the survival of HER2-positive patients. PMID:23149820

  9. HtrA3 Is Downregulated in Cancer Cell Lines and Significantly Reduced in Primary Serous and Granulosa Cell Ovarian Tumors.

    PubMed

    Singh, Harmeet; Li, Ying; Fuller, Peter J; Harrison, Craig; Rao, Jyothsna; Stephens, Andrew N; Nie, Guiying

    2013-01-01

    Objective. The high temperature requirement factor A3 (HtrA3) is a serine protease homologous to bacterial HtrA. Four human HtrAs have been identified. HtrA1 and HtrA3 share a high degree of domain organization and are downregulated in a number of cancers, suggesting a widespread loss of these proteases in cancer. This study examined how extensively the HtrA (HtrA1-3) proteins are downregulated in commonly used cancer cell lines and primary ovarian tumors. Methods. RT-PCR was applied to various cancer cell lines (n=17) derived from the ovary, endometrium, testes, breast, prostate, and colon, and to different subtypes of primary ovarian tumors [granulosa cell tumors (n=19), mucinous cystadenocarcinomas (n=6), serous cystadenocarcinomas (n=8)] and normal ovary (n=9). HtrA3 protein was localized by immunohistochemistry. Results. HtrA3 was extensively downregulated in the cancer cell lines examined, including the granulosa cell tumor-derived cell lines. In primary ovarian tumors, HtrA3 expression was significantly lower in serous cystadenocarcinomas and granulosa cell tumors. In contrast, HtrA1 and HtrA2 were expressed in all samples with no significant differences between the control and tumors. In normal postmenopausal ovary, HtrA3 protein was localized to luteinizing stromal cells and the corpus albicans. In serous cystadenocarcinoma, HtrA3 protein was absent in the papillae but detected in the mesenchymal cyst wall. Conclusion. HtrA3 is more extensively downregulated than HtrA1-2 in cancer cell lines. HtrA3, but not HtrA1 or HtrA2, was decreased in primary ovarian serous cystadenocarcinoma and granulosa cell tumors. This study provides evidence that HtrA3 may be the most relevant HtrA associated with ovarian malignancy.

  10. The chemical digestion of Ti6Al7Nb scaffolds produced by Selective Laser Melting reduces significantly ability of Pseudomonas aeruginosa to form biofilm.

    PubMed

    Junka, Adam F; Szymczyk, Patrycja; Secewicz, Anna; Pawlak, Andrzej; Smutnicka, Danuta; Ziółkowski, Grzegorz; Bartoszewicz, Marzenna; Chlebus, Edward

    2016-01-01

    In our previous work we reported the impact of the hydrofluoric and nitric acid used for chemical polishing of Ti-6Al-7Nb scaffolds on the decrease in the number of Staphylococcus aureus biofilm-forming cells. Herein, we tested the impact of the aforementioned substances on biofilm of the Gram-negative microorganism Pseudomonas aeruginosa, a dangerous pathogen responsible for a plethora of implant-related infections. The Ti-6Al-7Nb scaffolds were manufactured using the Selective Laser Melting method. Scaffolds were subjected to chemical polishing using a mixture of nitric acid and fluoride or left intact (control group). Pseudomonal biofilm was allowed to form on the scaffolds for 24 hours and was removed by mechanical vortex shaking. The number of pseudomonal cells was estimated by means of quantitative culture and Scanning Electron Microscopy. The presence of nitric acid and fluoride on the scaffold surfaces was assessed by means of IR and X-ray spectroscopy. Quantitative data were analysed using the Mann-Whitney test (P ≤ 0.05). Our results indicate that the application of chemical polishing correlates with a significant drop in the number of biofilm-forming pseudomonal cells on the manufactured Ti-6Al-7Nb scaffolds (p = 0.0133, Mann-Whitney test) compared to the number of biofilm-forming cells on non-polished scaffolds. As X-ray photoelectron spectroscopy revealed the presence of fluoride and nitrogen on the surface of the scaffold, we speculate that the drop in biofilm-forming cells may be caused by the biofilm-suppressing activity of these two elements.

  11. Microflow liquid chromatography coupled to mass spectrometry--an approach to significantly increase sensitivity, decrease matrix effects, and reduce organic solvent usage in pesticide residue analysis.

    PubMed

    Uclés Moreno, Ana; Herrera López, Sonia; Reichert, Barbara; Lozano Fernández, Ana; Hernando Guil, María Dolores; Fernández-Alba, Amadeo Rodríguez

    2015-01-20

    This manuscript reports a new pesticide residue analysis method employing a microflow-liquid chromatography system coupled to a triple quadrupole mass spectrometer (microflow-LC-ESI-QqQ-MS). This uses an electrospray ionization source with a narrow tip emitter to generate smaller droplets. A validation study was undertaken to establish performance characteristics for this new approach on 90 pesticide residues, including their degradation products, in three commodities (tomato, pepper, and orange). The significant benefits of the microflow-LC-MS/MS-based method were a high sensitivity gain and a notable reduction in matrix effects delivered by a dilution of the sample (up to 30-fold); this is a result of reduced competition between the matrix compounds and analytes for charge during ionization. Overall robustness and a capability to withstand long analytical runs using the microflow-LC-MS system have been demonstrated (for 100 consecutive injections without any maintenance being required). Quality controls based on the results of internal standards added at the samples' extraction, dilution, and injection steps were also satisfactory. The LOQ values were mostly 5 μg kg(-1) for almost all pesticide residues. Other benefits were a substantial reduction in solvent usage and waste disposal as well as a decrease in the run-time. The method was successfully applied in the routine analysis of 50 fruit and vegetable samples labeled as organically produced.

  12. Threshold-Based OSIC Detection Algorithm for Per-Antenna-Coded TIMO-OFDM Systems

    NASA Astrophysics Data System (ADS)

    Wang, Xinzheng; Chen, Ming; Zhu, Pengcheng

    Threshold-based ordered successive interference cancellation (OSIC) detection algorithm is proposed for per-antenna-coded (PAC) two-input multiple-output (TIMO) orthogonal frequency division multiplexing (OFDM) systems. Successive interference cancellation (SIC) is performed selectively according to channel conditions. Compared with the conventional OSIC algorithm, the proposed algorithm reduces the complexity significantly with only a slight performance degradation.
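
    To illustrate the general idea of applying successive interference cancellation only when the channel warrants it, here is a hedged, toy sketch of a two-stream zero-forcing detector with a hypothetical condition-number threshold. This is not the paper's per-antenna-coded OFDM algorithm or its specific threshold rule; the constellation, channel, and threshold are invented for illustration.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def slicer(z, constellation=QPSK):
    return constellation[np.argmin(np.abs(constellation - z))]

def osic_detect_2x2(H, y, cond_threshold=10.0):
    """Hypothetical threshold rule: when H is well conditioned, a single
    zero-forcing step suffices; otherwise do ordered SIC (detect the stronger
    stream, cancel it, then re-detect the weaker one)."""
    x_zf = np.linalg.pinv(H) @ y
    if np.linalg.cond(H) < cond_threshold:          # skip SIC when channel is easy
        return np.array([slicer(z) for z in x_zf])

    strong = int(np.argmax(np.linalg.norm(H, axis=0)))   # column with more gain
    weak = 1 - strong
    s_strong = slicer(x_zf[strong])
    y_clean = y - H[:, strong] * s_strong                # cancel detected stream
    s_weak = slicer((H[:, weak].conj() @ y_clean) / np.linalg.norm(H[:, weak]) ** 2)
    out = np.empty(2, dtype=complex)
    out[strong], out[weak] = s_strong, s_weak
    return out

rng = np.random.default_rng(2)
H = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)
x = QPSK[rng.integers(0, 4, 2)]
y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(np.allclose(osic_detect_2x2(H, y), x))        # expect True at this SNR
```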

  13. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: ''NIT-picking'' (!!!), to optimize optimization-problems optimally (OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES (!!!), which ONLY IMPEDE latter-days new-insights!!!

  14. Algorithms for improved performance in cryptographic protocols.

    SciTech Connect

    Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn

    2003-11-01

    Public key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and the resulting encrypted messages and digital signatures have high bandwidth requirements. Accordingly, there are many environments (e.g. wireless, ad-hoc, remote sensing networks) where the requirements of public-key cryptography are prohibitive and it cannot be used. The use of elliptic curves in public-key computations has provided a means by which computations and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD project aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
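
    The elliptic-curve computations mentioned above rest on point addition and scalar multiplication over a finite field. As a hedged illustration only, the toy sketch below implements those two operations on a small, insecure curve; the prime, coefficients, and generator are hypothetical and have nothing to do with the improved algorithms in the report.

```python
# Toy elliptic-curve arithmetic over a small prime field (illustration only;
# the curve y^2 = x^3 + A*x + B mod P below is deliberately tiny and insecure).
P = 97
A, B = 2, 3

def inv(x):                       # modular inverse via Fermat's little theorem
    return pow(x, P - 2, P)

def add(p, q):                    # group law; None represents the point at infinity
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mul(k, p):             # double-and-add
    acc = None
    while k:
        if k & 1:
            acc = add(acc, p)
        p = add(p, p)
        k >>= 1
    return acc

G = (3, 6)                        # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(scalar_mul(5, G))
```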

  15. Sampling Within k-Means Algorithm to Cluster Large Datasets

    SciTech Connect

    Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj; Ostrouchov, George

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on both more varied test datasets and real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Also, future studies should analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data become more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while being remarkably more efficient in runtime. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
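
    A minimal sketch of the sampling idea follows: run Lloyd's k-means on a random subsample, then assign the full dataset to the resulting centroids. This is only an illustration of the general approach under hypothetical data, k, and sample size, not the report's exact procedure or its confidence-interval-based sample-size selection.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def sampled_kmeans(X, k, sample_size, seed=0):
    """Cluster a random subsample, then label every point by its nearest
    sampled centroid (sketch of the sampling idea only)."""
    rng = np.random.default_rng(seed)
    sample = X[rng.choice(len(X), sample_size, replace=False)]
    centers = kmeans(sample, k, seed=seed)
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

# Three well-separated synthetic clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, (1000, 2)) for m in ((0, 0), (5, 5), (0, 5))])
centers, labels = sampled_kmeans(X, k=3, sample_size=150)
print(np.round(centers, 2))        # centroids close to (0,0), (5,5), (0,5)
```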

  16. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p ≤ 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mismatch. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.
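
    The generic building block behind least-squares registration is the rigid transform that best maps one point set onto another. The hedged sketch below uses the standard SVD (Kabsch) construction on corresponding 3D points; it is not the paper's rod-based frame algorithm or its weighting scheme, and the synthetic points and noise level are hypothetical.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    via the SVD/Kabsch construction (generic sketch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic check: recover a known rotation/translation from noisy points.
rng = np.random.default_rng(4)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, (30, 3))
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true, atol=0.05), np.round(t, 2))
```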

  17. Combination treatment of human umbilical cord matrix stem cell-based interferon-beta gene therapy and 5-fluorouracil significantly reduces growth of metastatic human breast cancer in SCID mouse lungs.

    PubMed

    Rachakatla, Raja Shekar; Pyle, Marla M; Ayuzawa, Rie; Edwards, Sarah M; Marini, Frank C; Weiss, Mark L; Tamura, Masaaki; Troyer, Deryl

    2008-08-01

    Umbilical cord matrix stem (UCMS) cells engineered to express interferon-beta (IFN-beta) were transplanted weekly for three weeks, in combination with 5-fluorouracil (5-FU), into SCID mice bearing MDA 231 breast cancer xenografts. The UCMS cells were found within lung tumors but not in other tissues. Although both treatments significantly reduced MDA 231 tumor area in the SCID mouse lungs, the combined treatment resulted in a greater reduction in tumor area than either treatment used alone. These results indicate that a combination treatment of UCMS-IFN-beta cells and 5-FU is a potentially effective therapeutic procedure for breast cancer.

  18. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
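
    For readers unfamiliar with the basic mechanics, here is a minimal, hedged sketch of a generational genetic algorithm (tournament selection, one-point crossover, bit-flip mutation) maximizing a toy fitness function. All parameters and the objective are hypothetical and unrelated to the NASA software tool described.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=5):
    """Minimal generational GA with tournament selection, one-point
    crossover, and bit-flip mutation (illustrative parameters only)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            a, b = rng.sample(pop, 2)           # tournament of two
            return max(a, b, key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                children.append([b ^ (rng.random() < p_mut) for b in child])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# Toy objective ("OneMax"): maximize the number of 1-bits.
solution = genetic_algorithm(fitness=sum)
print(sum(solution), solution)
```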

  19. Declinol, a Complex Containing Kudzu, Bitter Herbs (Gentian, Tangerine Peel) and Bupleurum, Significantly Reduced Alcohol Use Disorders Identification Test (AUDIT) Scores in Moderate to Heavy Drinkers: A Pilot Study

    PubMed Central

    Kushner, Steven; Han, David; Oscar-Berman, Marlene; William Downs, B; Madigan, Margaret A; Giordano, John; Beley, Thomas; Jones, Scott; Barh, Debmayla; Simpatico, Thomas; Dushaj, Kristina; Lohmann, Raquel; Braverman, Eric R; Schoenthaler, Stephen; Ellison, David; Blum, Kenneth

    2013-01-01

    It is well established that inherited human aldehyde dehydrogenase 2 (ALDH-2) deficiency reduces the risk for alcoholism. Kudzu plants and extracts have been used for 1,000 years in traditional Chinese medicine to treat alcoholism. Kudzu contains daidzin, which inhibits ALDH-2 and suppresses heavy drinking in rodents. Decreased drinking due to ALDH-2 inhibition is attributed to aversive properties of acetaldehyde accumulated during alcohol consumption. However, not all of the anti-alcohol properties of daidzin are due to inhibition of ALDH-2. This is in agreement with our earlier work showing significant interaction effects of both pyrazole (an ALDH-2 inhibitor) and methyl-pyrazole (a non-inhibitor) with ethanol's depressant effects. Moreover, it has been suggested that selective ALDH-2 inhibitors reduce craving for alcohol by increasing dopamine in the nucleus accumbens (NAc). In addition, there is significant evidence related to the role of the genetics of bitter receptors (TAS2R) and their stimulation as an aversive mechanism against alcohol intake. The inclusion of bitters such as Gentian & Tangerine Peel in Declinol provides stimulation of gut TAS2R receptors, which is potentially synergistic with the effects of Kudzu. Finally, the addition of Radix Bupleuri to the Declinol formula may have protective benefits not only in terms of ethanol-induced liver toxicity but also in neurochemical actions involving endorphins, dopamine and epinephrine. With this information as a rationale, we report herein that this combination significantly reduced Alcohol Use Disorders Identification Test (AUDIT) scores administered to ten heavy drinkers (M=8, F=2; 43.2 ± 14.6 years) attending a recovery program. Specifically, from the pre-post comparison of the AUDIT scores, it was found that the score of every participant decreased after the intervention, with decreases ranging from 1 to 31. The decrease in the scores was found to be statistically significant with the p-value of 0.00298 (two-sided paired

  20. Endomorphin 1[psi] and endomorphin 2[psi], endomorphins analogues containing a reduced (CH2NH) amide bond between Tyr1 and Pro2, display partial agonist potency but significant antinociception.

    PubMed

    Zhao, Qian-Yu; Chen, Qiang; Yang, Ding-Jian; Feng, Yun; Long, Yuan; Wang, Peng; Wang, Rui

    2005-07-22

    Endomorphin 1 (EM1) and endomorphin 2 (EM2) are highly potent and selective mu-opioid receptor agonists with significant antinociceptive action. In the mu-selective pocket of the endomorphins (EMs), the Pro2 residue is a spacer and directs the Tyr1 and Trp3/Phe3 side chains into the required orientation. The present work was designed to substitute the peptide bond between Tyr1 and Pro2 of the EMs with a reduced (CH2NH) bond and to study the agonist potency and antinociception of EM1[psi] (Tyr[psi(CH2NH)]Pro-Trp-Phe-NH2) and EM2[psi] (Tyr[psi(CH2NH)]Pro-Phe-Phe-NH2). Both EM1[psi] and EM2[psi] are partial mu opioid receptor agonists showing significant loss of agonist potency in the GPI assay. However, the EMs[psi] exhibited potent supraspinal antinociceptive action in vivo. In the mouse tail-flick test, EMs[psi] (1, 5, 10 nmol/mouse, i.c.v.) produced potent and short-lasting antinociception in a dose-dependent manner that was reversed by naloxone (1 mg/kg). At the highest dose of 10 nmol, the effect of EM2[psi] was prolonged and more significant than that of EM2. In the rat model of formalin injection-induced inflammatory pain, EMs[psi] (0.1, 1, 10 nmol/rat, i.c.v.), like the EMs, exerted transient but not dose-dependent antinociception. These results suggest that, in the mu-selective pocket of the EMs, the rigid conformation induced by the peptide bond between Tyr1 and Pro2 is essential to regulate their agonist properties at the mu opioid receptors. However, the increased conformational flexibility induced by the reduced (CH2NH) bond had less influence on their antinociception.

  1. Reduced Basis Method for Nanodevices Simulation

    SciTech Connect

    Pau, George Shu Heng

    2008-05-23

    Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting an a posteriori error estimation procedure and a greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.
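
    To illustrate the greedy-sampling idea in isolation, the hedged sketch below builds a reduced basis by repeatedly adding the snapshot that the current basis approximates worst, with the exact projection error standing in for an a posteriori error estimator. The toy parametrized family of functions is hypothetical and is not the Schrodinger-Poisson problem of the report.

```python
import numpy as np

def greedy_reduced_basis(snapshots, tol=1e-6, max_basis=20):
    """Greedy basis construction: at each step orthonormalize and add the
    snapshot with the largest projection error onto the current basis."""
    basis = np.zeros((snapshots.shape[0], 0))
    for _ in range(max_basis):
        proj = basis @ (basis.T @ snapshots)
        errors = np.linalg.norm(snapshots - proj, axis=0)
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            break
        v = snapshots[:, worst] - proj[:, worst]
        basis = np.hstack([basis, (v / np.linalg.norm(v))[:, None]])
    return basis

# Toy parametrized family u(x; mu) = sin(mu * x) sampled on a grid.
x = np.linspace(0, 1, 200)
mus = np.linspace(1, 10, 50)
snapshots = np.sin(np.outer(x, mus))            # columns are snapshots
B = greedy_reduced_basis(snapshots)
residual = np.linalg.norm(snapshots - B @ (B.T @ snapshots), axis=0).max()
print(B.shape[1], "basis vectors; worst remaining projection error:", residual)
```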

  2. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  3. Chlorophyllin significantly reduces benzo[a]pyrene [BP]-DNA adduct formation and alters Cytochrome P450 1A1 and 1B1 expression and EROD activity in normal human mammary epithelial cells (NHMECs)

    PubMed Central

    Keshava, Channa; Divi, Rao L.; Einem, Tracey L.; Richardson, Diana L.; Leonard, Sarah L.; Keshava, Nagalakshmi; Poirier, Miriam C.; Weston, Ainsley

    2008-01-01

    We hypothesized that chlorophyllin (CHLN) would reduce BP-DNA adduct levels. Using NHMECs exposed to 4 μM BP for 24 hr in the presence or absence of 5 μM CHLN, we measured BP-DNA adducts by chemiluminescence immunoassay (CIA). The protocol included the following experimental groups: BP alone, BP given simultaneously with CHLN (BP+CHLN) for 24 hr, CHLN given for 24 hr followed by BP for 24 hr (preCHLN, postBP), and CHLN given for 48 hr with BP added for the last 24 hr (preCHLN, postBP+CHLN). Incubation with CHLN decreased BPdG levels in all groups, with 87 % inhibition in the preCHLN, postBP+CHLN group. To examine metabolic mechanisms, we monitored expression by Affymetrix microarray (U133A), and found BP-induced up-regulation of CYP1A1 and CYP1B1 expression, as well as up-regulation of groups of interferon-inducible, inflammation and signal transduction genes. Incubation of cells with CHLN and BP in any combination decreased expression of many of these genes. Using real time PCR (RT-PCR) the maximal inhibition of BP-induced gene expression, >85% for CYP1A1 and >70% for CYP1B1, was observed in the preCHLN, postBP+CHLN group. To explore the relationship between transcription and enzyme activity, the ethoxyresorufin-O-deethylase (EROD) assay was used to measure the combined CYP1A1 and CYP1B1 activities. BP exposure caused the EROD levels to double, compared to the unexposed controls. The CHLN-exposed groups all showed EROD levels similar to the unexposed controls. Therefore, the addition of CHLN to BP-exposed cells reduced BPdG formation and CYP1A1 and CYP1B1 expression, but EROD activity was not significantly reduced. PMID:19152381

  4. Immunogenicity of a reduced-dose whole killed rabies vaccine is significantly enhanced by ISCOMATRIX™ adjuvant, Merck amorphous aluminum hydroxylphosphate sulfate (MAA) or a synthetic TLR9 agonist in rhesus macaques.

    PubMed

    DiStefano, Daniel; Antonello, Joseph M; Bett, Andrew J; Medi, Muneeswara B; Casimiro, Danilo R; ter Meulen, Jan

    2013-10-01

    There is a need for novel rabies vaccines suitable for short course, pre- and post-exposure prophylactic regimens which require reduced doses of antigen to address the current worldwide supply issue. We evaluated in rhesus macaques the immunogenicity of a quarter-dose of a standard rabies vaccine formulated with Merck's amorphous aluminum hydroxylphosphate sulfate adjuvant, the saponin-based ISCOMATRIX™ adjuvant, or a synthetic TLR9 agonist. All adjuvants significantly increased the magnitude and durability of the humoral immune response as measured by rapid fluorescent focus inhibition test (RFFIT). Several three-dose vaccine regimens resulted in adequate neutralizing antibody of ≥ 0.5 IU/ml earlier than the critical day seven post the first dose. Rabies vaccine with ISCOMATRIX™ adjuvant given at days 0 and 3 resulted in neutralizing antibody titers which developed faster and were up to one log10 higher compared to WHO-recommended intramuscular and intradermal regimens and furthermore, passive administration of human rabies immunoglobulin did not interfere with immunogenicity of this reduced dose, short course vaccine regimen. Adjuvantation of whole-killed rabies vaccine for intramuscular injection may therefore be a viable alternative to intradermal application of non-adjuvanted vaccine for both pre- and post-exposure regimens.

  5. Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.

    New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. . A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
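
    For readers new to the classical approach that the monolithic algorithms improve upon, here is a hedged, toy sketch of natural-parameter continuation with a Newton corrector on a convex homotopy H(x, lam) = (1 - lam)(x - x0) + lam F(x). The nonlinear system, step count, and corrector settings are hypothetical and far simpler than the flow-solver setting of the thesis (no adaptive stepping or turning-point handling).

```python
import numpy as np

def homotopy_continuation(F, J, x0, steps=20, newton_iters=5):
    """March lam from 0 to 1, using Newton as the corrector on
    H(x, lam) = (1 - lam) * (x - x0) + lam * F(x) at each step."""
    x = np.array(x0, dtype=float)
    for lam in np.linspace(0, 1, steps + 1)[1:]:
        for _ in range(newton_iters):                    # corrector: Newton on H
            H = (1 - lam) * (x - x0) + lam * F(x)
            dH = (1 - lam) * np.eye(len(x)) + lam * J(x)
            x = x - np.linalg.solve(dH, H)
    return x

# Toy nonlinear system: x0^2 + x1^2 = 4, x0 * x1 = 1.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
root = homotopy_continuation(F, J, x0=np.array([2.0, 1.0]))
print(np.round(root, 4), np.round(F(root), 6))           # residual near zero
```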

  6. A Distributed Polygon Retrieval Algorithm Using MapReduce

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Palanisamy, B.; Karimi, H. A.

    2015-07-01

    The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices like 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation which is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger sizes of terrain data. Motivated by the MapReduce programming model, a well-adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed with the objective of reducing the IO and CPU loads of spatial data processing. By indexing the data based on a quad-tree approach, a significant amount of unneeded data is filtered out in the filtering stage, which reduces the IO overhead. The indexed data also facilitate querying the relationship between the terrain data and the query area in a shorter time. The results of the experiments performed on our Hadoop cluster demonstrate that our algorithm performs significantly better than the existing distributed algorithms.
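
    The filter-then-gather pattern can be sketched without a cluster. The hedged, in-memory example below uses a coarse grid key as a stand-in for the quad-tree index, a map stage that discards polygons whose bounding boxes cannot intersect the query box, and a reduce stage that groups the survivors; it is a simulation of the pattern, not the paper's Hadoop implementation.

```python
from collections import defaultdict

def cell_key(x, y, cell=10.0):
    """Coarse spatial-grid key standing in for a quad-tree node id."""
    return (int(x // cell), int(y // cell))

def map_stage(polygons, query_box, cell=10.0):
    """Emit (cell_key, polygon) only when the polygon's bounding box
    overlaps the query box -- the cheap pre-filter."""
    qx0, qy0, qx1, qy1 = query_box
    for poly_id, pts in polygons.items():
        xs, ys = zip(*pts)
        if max(xs) < qx0 or min(xs) > qx1 or max(ys) < qy0 or min(ys) > qy1:
            continue
        yield cell_key(min(xs), min(ys), cell), (poly_id, pts)

def reduce_stage(mapped):
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)
    return grouped

polygons = {
    "a": [(1, 1), (2, 1), (2, 2)],
    "b": [(50, 50), (55, 50), (55, 55)],
    "c": [(3, 3), (6, 3), (6, 6)],
}
result = reduce_stage(map_stage(polygons, query_box=(0, 0, 10, 10)))
print({k: [pid for pid, _ in v] for k, v in result.items()})   # only "a" and "c"
```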

  7. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  8. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
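
    For orientation, here is a hedged, toy sketch of the scale-insensitive (delta-function) CLEAN minor cycle that Asp-Clean generalizes, applied to a 1-D signal: find the peak residual, subtract a scaled, shifted PSF, and record the component. It is not Asp-Clean itself (no Gaussian scale fitting), and the PSF, gain, and synthetic sources are hypothetical.

```python
import numpy as np

def hogbom_clean_1d(dirty, psf, gain=0.1, n_iter=500, threshold=1e-3):
    """Delta-function (Hogbom-style) CLEAN on a 1-D 'dirty' signal."""
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    half = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        amp = gain * residual[peak]
        components[peak] += amp
        lo, hi = max(0, peak - half), min(len(dirty), peak + half + 1)
        residual[lo:hi] -= amp * psf[half - (peak - lo): half + (hi - peak)]
    return components, residual

# Toy data: two point sources convolved with a Gaussian PSF plus noise.
x = np.arange(-10, 11)
psf = np.exp(-0.5 * (x / 2.0) ** 2)
sky = np.zeros(100)
sky[30], sky[60] = 1.0, 0.6
dirty = np.convolve(sky, psf, mode="same") \
    + 0.01 * np.random.default_rng(6).normal(size=100)
comps, res = hogbom_clean_1d(dirty, psf)
print(round(comps[30], 2), round(comps[60], 2), round(np.abs(res).max(), 3))
```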

  9. Significantly Reduced Genoprevalence of Vaccine-Type HPV-16/18 Infections among Vaccinated Compared to Non-Vaccinated Young Women 5.5 Years after a Bivalent HPV-16/18 Vaccine (Cervarix®) Pilot Project in Uganda

    PubMed Central

    Berggren, Vanja; Wabinga, Henry; Lillsunde-Larsson, Gabriella; Helenius, Gisela; Kaliff, Malin; Karlsson, Mats; Kirimunda, Samuel; Musubika, Caroline; Andersson, Sören

    2016-01-01

    The objective of this study was to determine the prevalence of, and some predictors for, vaccine and non-vaccine types of HPV infections among bivalent HPV vaccinated and non-vaccinated young women in Uganda. This was a comparative cross-sectional study 5.5 years after a bivalent HPV 16/18 vaccination (Cervarix®, GlaxoSmithKline, Belgium) pilot project in western Uganda. Cervical swabs were collected between July and August 2014 and analyzed with an HPV genotyping test, the CLART® HPV2 assay (Genomica, Madrid, Spain), which is based on PCR followed by microarray for determination of genotype. Blood samples were also tested for HIV and syphilis infections as well as CD4 and CD8 lymphocyte levels. The age range of the participants was 15–24 years and the mean age was 18.6 (SD 1.4). Vaccine-type HPV-16/18 strains were significantly less prevalent among vaccinated women compared to non-vaccinated women (0.5% vs 5.6%, p = 0.006, OR 0.08, 95% CI 0.01–0.64). At the type-specific level, a significant difference was observed for HPV16 only. Other STIs (HIV/syphilis) were important risk factors for HPV infections, including both vaccine and non-vaccine types. In addition, for non-vaccine HPV types, living in an urban area, having a low BMI, a low CD4 count and a high number of lifetime sexual partners were also significant risk factors. Our data concur with the existing literature from other parts of the world regarding the effectiveness of the bivalent HPV-16/18 vaccine in reducing the prevalence of HPV infections, particularly vaccine HPV-16/18 strains, among vaccinated women. This study reinforces the recommendation to vaccinate young girls before sexual debut and to integrate other STI interventions, particularly for HIV and syphilis, into HPV vaccination packages. PMID:27482705

  10. Dimensionality Reduction Particle Swarm Algorithm for High Dimensional Clustering

    SciTech Connect

    Cui, Xiaohui; ST Charles, Jesse Lee; Potok, Thomas E; Beaver, Justin M

    2008-01-01

    The Particle Swarm Optimization (PSO) clustering algorithm can generate more compact clustering results than the traditional K-means clustering algorithm. However, when clustering high dimensional datasets, the PSO clustering algorithm is notoriously slow because its computation cost increases exponentially with the size of the dataset dimension. Dimensionality reduction techniques offer solutions that both significantly improve the computation time, and yield reasonably accurate clustering results in high dimensional data analysis. In this paper, we introduce research that combines different dimensionality reduction techniques with the PSO clustering algorithm in order to reduce the complexity of high dimensional datasets and speed up the PSO clustering process. We report significant improvements in total runtime. Moreover, the clustering accuracy of the dimensionality reduction PSO clustering algorithm is comparable to the one that uses full dimension space.
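
    A sketch of the pipeline idea only, assuming scikit-learn is available: reduce the dimensionality first, then cluster in the reduced space. KMeans stands in here for the PSO clustering step, which is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def reduce_then_cluster(X, n_components=50, n_clusters=10):
    # Project the high-dimensional data onto a smaller subspace, then cluster.
    Z = PCA(n_components=n_components).fit_transform(X)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)

# Example with random data standing in for a document-term matrix:
# labels = reduce_then_cluster(np.random.rand(1000, 5000))
```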

  11. Significant Reduction of Late Toxicities in Patients With Extremity Sarcoma Treated With Image-Guided Radiation Therapy to a Reduced Target Volume: Results of Radiation Therapy Oncology Group RTOG-0630 Trial

    PubMed Central

    Wang, Dian; Zhang, Qiang; Eisenberg, Burton L.; Kane, John M.; Li, X. Allen; Lucas, David; Petersen, Ivy A.; DeLaney, Thomas F.; Freeman, Carolyn R.; Finkelstein, Steven E.; Hitchcock, Ying J.; Bedi, Manpreet; Singh, Anurag K.; Dundas, George; Kirsch, David G.

    2015-01-01

    Purpose We performed a multi-institutional prospective phase II trial to assess late toxicities in patients with extremity soft tissue sarcoma (STS) treated with preoperative image-guided radiation therapy (IGRT) to a reduced target volume. Patients and Methods Patients with extremity STS received IGRT with (cohort A) or without (cohort B) chemotherapy followed by limb-sparing resection. Daily pretreatment images were coregistered with digitally reconstructed radiographs so that the patient position could be adjusted before each treatment. All patients received IGRT to reduced tumor volumes according to strict protocol guidelines. Late toxicities were assessed at 2 years. Results In all, 98 patients were accrued (cohort A, 12; cohort B, 86). Cohort A was closed prematurely because of poor accrual and is not reported. Seventy-nine eligible patients from cohort B form the basis of this report. At a median follow-up of 3.6 years, five patients did not have surgery because of disease progression. There were five local treatment failures, all of which were in field. Of the 57 patients assessed for late toxicities at 2 years, 10.5% experienced at least one grade ≥ 2 toxicity as compared with 37% of patients in the National Cancer Institute of Canada SR2 (CAN-NCIC-SR2: Phase III Randomized Study of Pre- vs Postoperative Radiotherapy in Curable Extremity Soft Tissue Sarcoma) trial receiving preoperative radiation therapy without IGRT (P < .001). Conclusion The significant reduction of late toxicities in patients with extremity STS who were treated with preoperative IGRT and absence of marginal-field recurrences suggest that the target volumes used in the Radiation Therapy Oncology Group RTOG-0630 (A Phase II Trial of Image-Guided Preoperative Radiotherapy for Primary Soft Tissue Sarcomas of the Extremity) study are appropriate for preoperative IGRT for extremity STS. PMID:25667281

  12. Chlorophyllin significantly reduces benzo[a]pyrene-DNA adduct formation and alters cytochrome P450 1A1 and 1B1 expression and EROD activity in normal human mammary epithelial cells.

    PubMed

    Keshava, Channa; Divi, Rao L; Einem, Tracey L; Richardson, Diana L; Leonard, Sarah L; Keshava, Nagalakshmi; Poirier, Miriam C; Weston, Ainsley

    2009-03-01

    We hypothesized that chlorophyllin (CHLN) would reduce benzo[a]pyrene-DNA (BP-DNA) adduct levels. Using normal human mammary epithelial cells (NHMECs) exposed to 4 microM BP for 24 hr in the presence or absence of 5 microM CHLN, we measured BP-DNA adducts by chemiluminescence immunoassay (CIA). The protocol included the following experimental groups: BP alone, BP given simultaneously with CHLN (BP+CHLN) for 24 hr, CHLN given for 24 hr followed by BP for 24 hr (preCHLN, postBP), and CHLN given for 48 hr with BP added for the last 24 hr (preCHLN, postBP+CHLN). Incubation with CHLN decreased BPdG levels in all groups, with 87% inhibition in the preCHLN, postBP+CHLN group. To examine metabolic mechanisms, we monitored expression by Affymetrix microarray (U133A), and found BP-induced up-regulation of CYP1A1 and CYP1B1 expression, as well as up-regulation of groups of interferon-inducible, inflammation and signal transduction genes. Incubation of cells with CHLN and BP in any combination decreased expression of many of these genes. Using reverse transcription real time PCR (RT-PCR) the maximal inhibition of BP-induced gene expression, >85% for CYP1A1 and >70% for CYP1B1, was observed in the preCHLN, postBP+CHLN group. To explore the relationship between transcription and enzyme activity, the ethoxyresorufin-O-deethylase (EROD) assay was used to measure the combined CYP1A1 and CYP1B1 activities. BP exposure caused the EROD levels to double, when compared with the unexposed controls. The CHLN-exposed groups all showed EROD levels similar to the unexposed controls. Therefore, the addition of CHLN to BP-exposed cells reduced BPdG formation and CYP1A1 and CYP1B1 expression, but EROD activity was not significantly reduced.

  13. Memory-hazard-aware k-buffer algorithm for order-independent transparency rendering.

    PubMed

    Zhang, Nan

    2014-02-01

    The k-buffer algorithm is an efficient GPU-based fragment level sorting algorithm for rendering transparent surfaces. Because of the inherent massive parallelism of GPU stream processors, this algorithm suffers serious read-after-write memory hazards now. In this paper, we introduce an improved k-buffer algorithm with error correction coding to combat memory hazards. Our algorithm results in significantly reduced artifacts. While preserving all the merits of the original algorithm, it requires merely OpenGL 3.x support from the GPU, instead of the atomic operations appearing only in the latest OpenGL 4.2 standard. Our algorithm is simple to implement and efficient in performance. Future GPU support for improving this algorithm is also proposed.

  14. Memory-Hazard-Aware K-Buffer Algorithm for Order-Independent Transparency Rendering.

    PubMed

    Zhang, Nan

    2013-04-04

    The k-buffer algorithm is an efficient GPU based fragment level sorting algorithm for rendering transparent surfaces. Because of the inherent massive parallelism of GPU stream processors, this algorithm suffers serious read-after-write memory hazards now. In this paper, we introduce an improved k-buffer algorithm with error correction coding to combat memory hazards. Our algorithm results in significantly reduced artifacts. While preserving all the merits of the original algorithm, it requires merely OpenGL 3.x support from the GPU, instead of the atomic operations appearing only in the latest OpenGL 4.2 standard. Our algorithm is simple to implement and efficient in performance. Future GPU support for improving this algorithm is also proposed.

  15. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  16. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    NASA Astrophysics Data System (ADS)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
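
    For orientation, one of the building blocks such fixed-point schemes rely on is the proximity operator of the L1 term, which reduces to soft-thresholding; this is only that single ingredient, not the full L1/TV algorithm or its Gauss-Seidel acceleration.

```python
import numpy as np

def prox_l1(v, lam):
    # prox of lam*||.||_1 evaluated at v: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Example: prox_l1(np.array([1.5, -0.2, 0.7]), 0.5) -> [1.0, 0.0, 0.2]
```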

  17. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  18. Indirect antiglobulin test-crossmatch using low-ionic-strength saline-albumin enhancement medium and reduced incubation time: effectiveness in the detection of most clinically significant antibodies and impact on blood utilization.

    PubMed

    Dinardo, C L; Bonifácio, S L; Mendrone, A

    2014-01-01

    Indirect antiglobulin test-crossmatch (IAT-XM) using enhancement media such as low-ionic-strength saline (LISS) and polyethylene glycol (PEG) usually requires 15 minutes of incubation. These methods are necessary when testing samples from blood recipients who have a higher risk of alloimmunization. In emergency situations, IAT-XM can be time-consuming and can influence presurgery routine, resulting in more red blood cell (RBC) units being tested and stored to avoid the transfusion of uncrossmatched ones. The objective of this study was to evaluate the performance of a LISS-albumin enhancer to intensify antigen-antibody reaction after 5 minutes of 37°C incubation and compare this performance with that of other enhancers, gel, and conventional tube testing. Second, the study evaluated the impact of this method's implementation in the C:T ratio (crossmatched to transfused RBC units) of a transfusion laboratory. Ninety serum samples containing alloantibodies of potential clinical significance were tested against phenotyped RBCs using four different methods: (1) tube with LISS-albumin enhancer (5 minutes of incubation), (2) tube with LISS-albumin and PEG (15 minutes of incubation), (3) gel, and (4) conventional tube method (60 minutes of incubation). In parallel, the study compared the C:T ratio of a tertiary-hospital transfusion laboratory in two different periods: 3 months before and 3 months after the implementation of the 5-minute IAT-XM protocol. The use of LISS-albumin with 5 minutes of incubation exhibited the same performance as LISS-albumin, PEG, and gel with 15 minutes of incubation. Conventional tube method results were equally comparable, but reactions were significantly less intense, except for anti-c (p = 0.406). Accuracy was 100 percent for all selected methods. After the implementation of the 5-minute IAT-XM protocol, the C:T ratio fell from 2.74 to 1.29 (p < 0.001). IAT-XM can have its incubation time reduced to 5 minutes with the use of LISS

  19. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  20. A high-performance genetic algorithm: using traveling salesman problem as a case.

    PubMed

    Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
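
    A hedged sketch of the central observation for the TSP case: collect the edges shared by every tour in the population so they can be preserved and skipped in later generations. The tour representation (lists of city indices) is an assumption for illustration, not taken from the paper.

```python
def common_edges(population):
    # population: list of tours, each tour a list of city indices.
    def edges(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}
    shared = edges(population[0])
    for tour in population[1:]:
        shared &= edges(tour)          # keep only edges present in every individual
    return shared                      # candidates to freeze in later generations
```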

  1. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038

  2. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  3. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  4. PCB Drill Path Optimization by Combinatorial Cuckoo Search Algorithm

    PubMed Central

    Lim, Wei Chen Esmonde; Kanagaraj, G.; Ponnambalam, S. G.

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198

  5. PCB drill path optimization by combinatorial cuckoo search algorithm.

    PubMed

    Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process.

  6. Object-Oriented Algorithm For Evaluation Of Fault Trees

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1992-01-01

    Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).

  7. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
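
    An illustrative sketch of the general idea (wavelet transform, keep only the most significant coefficients), assuming the PyWavelets package; it is not the patented block algorithm or its combination with spectral compression.

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet="haar", level=3, keep=0.05):
    # Transform, zero out all but the largest ~5% of coefficients, reconstruct.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)
```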

  8. Application of feces extracts and synthetic analogues of the host marking pheromone of Anastrepha ludens significantly reduces fruit infestation by A. obliqua in tropical plum and mango backyard orchards.

    PubMed

    Aluja, Martín; Díaz-Fleischer, F; Boller, E F; Hurter, J; Edmunds, A J F; Hagmann, L; Patrian, B; Reyes, J

    2009-12-01

    We determined the efficacy of three potential oviposition deterrents in reducing fruit infestation by Anastrepha obliqua in tropical plum and mango orchards. These were: (1) Extracts of feces of Mexican fruit fly, Anastrepha ludens, known to contain the A. ludens host marking pheromone (HMP) and (2) two fully synthetic simplified analogues of the naturally occurring compound, which we have named desmethyl A. ludens HMP (DM-HMP) and Anastrephamide. Two applications of feces extracts 2 or 3 wk before fruit color break reduced A. obliqua infestation in plums by 94.1, 75.9, and 72% when measured 8, 14, and 25 d, respectively, after application. The natural A. ludens-HMP containing extract retained its effectiveness despite considerable rainfall (112.5 mm) and high A. obliqua populations. The synthetic desmethyl HMP derivative (DM-HMP) also reduced infestation in plums by 53.3 and 58.7% when measured, 18 and 26 d, respectively, after application. Finally, applications of Anastrephamide resulted in fruit loss cut by half and an 80% reduction in numbers of fly larvae per fruit. Our results confirm previous findings indicating that there is interspecific cross-recognition of the HMP in two of the most pestiferous Anastrepha species and open the door for the development of a highly selective, biorational Anastrepha management scheme.

  9. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  10. Genetic algorithms with permutation coding for multiple sequence alignment.

    PubMed

    Ben Othman, Mohamed Tahar; Abdel-Azim, Gamil

    2013-08-01

    Multiple sequence alignment (MSA) is one of the topics of bioinformatics that has been seriously researched. It is known as an NP-complete problem. It is also considered one of the most important and daunting tasks in computational biology. Concerning this, a wide number of heuristic algorithms have been proposed to find optimal alignments. Among these heuristic algorithms are genetic algorithms (GA). The GA has two major weaknesses: it is time consuming and can become trapped in local minima. One of the significant aspects of the GA process in MSA is to maximize the similarities between sequences by adding and shuffling the gaps of the Solution Coding (SC). Several ways for SC have been introduced. One of them is the Permutation Coding (PC). We propose a hybrid algorithm based on genetic algorithms (GAs) with a PC and the 2-opt algorithm. The PC helps to code the MSA solution, which maximizes the gain of resources, reliability, and diversity of the GA. The use of the PC opens the area by applying all functions over permutations for MSA. Thus, we suggest an algorithm to calculate the scoring function for multiple alignments based on PC, which is used as the fitness function. The time complexity of the GA is reduced by using this algorithm. Our GA is implemented with different selection strategies and different crossovers. The probability of crossover and mutation is set as one strategy. Relevant patents have been probed in the topic.

  11. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
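
    A minimal sketch of the simpler KLMS relative mentioned above, using a Gaussian kernel; KAPA itself, which reuses a window of past inputs in each update, is not shown. All parameter values are illustrative.

```python
import numpy as np

def klms(X, d, eta=0.5, sigma=1.0):
    # Kernel least-mean-square: grow one kernel centre per training sample.
    centres, alphas, preds = [], [], []
    for x, target in zip(X, d):
        k = np.array([np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2)) for c in centres])
        y = float(k @ np.array(alphas)) if centres else 0.0
        err = target - y
        centres.append(x)
        alphas.append(eta * err)       # LMS-style correction in the kernel feature space
        preds.append(y)
    return preds
```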

  12. The oxidation capacity of Mn3O4 nanoparticles is significantly enhanced by anchoring them onto reduced graphene oxide to facilitate regeneration of surface-associated Mn(III).

    PubMed

    Duan, Lin; Wang, Zhongyuan; Hou, Yan; Wang, Zepeng; Gao, Guandao; Chen, Wei; Alvarez, Pedro J J

    2016-10-15

    Metal oxides are often anchored to graphene materials to achieve greater contaminant removal efficiency. To date, the enhanced performance has mainly been attributed to the role of graphene materials as a conductor for electron transfer. Herein, we report a new mechanism via which graphene materials enhance oxidation of organic contaminants by metal oxides. Specifically, Mn3O4-rGO nanocomposites (Mn3O4 nanoparticles anchored to reduced graphene oxide (rGO) nanosheets) enhanced oxidation of 1-naphthylamine (used here as a reaction probe) compared to bare Mn3O4. Spectroscopic analyses (X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy) show that the rGO component of Mn3O4-rGO was further reduced during the oxidation of 1-naphthylamine, although rGO reduction was not the result of direct interaction with 1-naphthylamine. We postulate that rGO improved the oxidation efficiency of anchored Mn3O4 by re-oxidizing Mn(II) formed from the reaction between Mn3O4 and 1-naphthylamine, thereby regenerating the surface-associated oxidant Mn(III). The proposed role of rGO was verified by separate experiments demonstrating its ability to oxidize dissolved Mn(II) to Mn(III), which subsequently can oxidize 1-naphthylamine. The role of dissolved oxygen in re-oxidizing Mn(II) was ruled out by anoxic (N2-purged) control experiments showing similar results as O2-sparged tests. Opposite pH effects on the oxidation efficiency of Mn3O4-rGO versus bare Mn3O4 were also observed, corroborating the proposed mechanism because higher pH facilitates oxidation of surface-associated Mn(II) even though it lowers the oxidation potential of Mn3O4. Overall, these findings may guide the development of novel metal oxide-graphene nanocomposites for contaminant removal.

  13. Convergence Behavior of Bird's Sophisticated DSMC Algorithm

    NASA Astrophysics Data System (ADS)

    Gallis, M. A.; Torczynski, J. R.; Rader, D. J.

    2007-11-01

    Bird's standard Direct Simulation Monte Carlo (DSMC) algorithm has remained almost unchanged since the mid-1970s. Recently, Bird developed a new DSMC algorithm, termed "sophisticated DSMC", which significantly modifies the way molecules both move and collide. The sophisticated DSMC algorithm is implemented in a one-dimensional DSMC code, and its convergence behavior is investigated for one-dimensional Fourier flow, where an argon-like hard-sphere gas is confined between two parallel, motionless, fully accommodating walls with unequal temperatures. As in previous work, the primary convergence metric is the ratio of the DSMC-calculated thermal conductivity to the theoretical value. The convergence behavior of sophisticated DSMC is compared to that of standard DSMC and to the predictions of Green-Kubo theory. The sophisticated algorithm significantly reduces the computational resources needed to maintain a fixed level of accuracy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  16. Fault Tolerant Algorithm for Structured Illumination Microscopy with Incoherent Light

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Heidingsfelder, Philipp; Gao, Jun; Yu, Liandong; Ott, Peter

    2015-04-01

    In this contribution we present a new algorithm for structured illumination microscopy with incoherent light. Existing algorithms for determining the contrast values of the focal depth response require a high accurate phase shift of the fringe pattern illumination. The presented algorithm, which is robust against inaccurate phase shift of the fringe pattern, reduces significantly the requirements for the phase shift and consequently the costs of the microscope. The new algorithm was tested by a preliminary experiment, whereby the grating was shifted by an elastic guided micro-motion mechanism employing a low-cost stepper motor replacing the conventional expensive piezo drive. The determined focal depth response is very smooth and corresponds very well to the theoretical focal depth response.

  17. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.

  18. Efficacy of a diagnostic and therapeutic algorithm for Clostridium difficile infection.

    PubMed

    Marukawa, Yohei; Komura, Takuya; Kagaya, Takashi; Ohta, Hajime; Unoura, Masashi

    2016-08-01

    In July 2012, metronidazole was approved for the treatment of Clostridium difficile infection (CDI). To clarify the selection criteria for the drug in terms of CDI severity, we established a diagnostic and therapeutic algorithm with reference to the SHEA-IDSA Clinical Practice Guidelines. We compared patients whose treatments were guided by the algorithm (29 cases, October 2012-September 2013) with patients treated prior to the development of the algorithm (37 cases, October 2011-September 2012). All cases treated with reference to the algorithm were diagnosed using enzyme immunoassay of C. difficile toxins A and B and glutamate dehydrogenase;an appropriate drug was prescribed in 93.1% of the cases. We found no significant between-group differences in the cure, recurrence, or complication rates. However, drug costs in cases wherein treatments were guided by the algorithm were markedly reduced. We have, thus, shown that algorithm-guided treatment is efficacious and cost-effective.

  19. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  20. Introductory Students, Conceptual Understanding, and Algorithmic Success.

    ERIC Educational Resources Information Center

    Pushkin, David B.

    1998-01-01

    Addresses the distinction between conceptual and algorithmic learning and the clarification of what is meant by a second-tier student. Explores why novice learners in chemistry and physics are able to apply algorithms without significant conceptual understanding. (DDR)

  1. A methodology for constructing fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B

    1997-01-01

    This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). The design of specific FALVQ algorithms according to existing approaches reduces to the selection of the membership function assigned to the weight vectors of an LVQ competitive neural network, which represent the prototypes. The development of a broad variety of FALVQ algorithms can be accomplished by selecting the form of the interference function that determines the effect of the nonwinning prototypes on the attraction between the winning prototype and the input of the network. The proposed methodology provides the basis for extending the existing FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms. This paper also introduces two quantitative measures which establish a relationship between the formulation that led to FALVQ algorithms and the competition between the prototypes during the learning process. The proposed algorithms and competition measures are tested and evaluated using the IRIS data set. The significance of the proposed competition measure is illustrated using FALVQ algorithms to perform segmentation of magnetic resonance images of the brain.

  2. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, Gary Karl

    2000-05-01

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
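
    As a concrete illustration of envelope reduction in general (a standard reverse Cuthill-McKee ordering from SciPy, not the authors' heuristics or library), applied to a small symmetric sparse matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = csr_matrix(np.array([[1, 0, 0, 1],
                         [0, 1, 1, 0],
                         [0, 1, 1, 0],
                         [1, 0, 0, 1]], dtype=float))
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_reordered = A[perm, :][:, perm]   # permuted matrix with a tighter envelope
```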

  3. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits per base, where the existing best methods could not achieve a ratio below 1.72 bits per base.
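
    For orientation only, the baseline two-bits-per-base packing that any DNA-specific compressor must beat on non-repetitive stretches; the bit assignments here are generic and are not the DNABIT Compress scheme.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    # Pack a DNA string into an integer at 2 bits per base.
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    return bits, len(seq)

def unpack(bits, n):
    bases = "ACGT"
    return "".join(bases[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

# Example: unpack(*pack("ACGTTGCA")) == "ACGTTGCA"
```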

  4. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  5. An Accurate and Efficient Gaussian Fit Centroiding Algorithm for Star Trackers

    NASA Astrophysics Data System (ADS)

    Delabie, Tjorven; Schutter, Joris De; Vandenbussche, Bart

    2015-06-01

    This paper presents a novel centroiding algorithm for star trackers. The proposed algorithm, which is referred to as the Gaussian Grid algorithm, fits an elliptical Gaussian function to the measured pixel data and derives explicit expressions to determine the centroids of the stars. In tests, the algorithm proved to yield accuracy comparable to that of the most accurate existing algorithms, while being significantly less computationally intensive. Hence, the Gaussian Grid algorithm can deliver high centroiding accuracy to spacecraft with limited computational power. Furthermore, a hybrid algorithm is proposed in which the Gaussian Grid algorithm yields an accurate initial estimate for a least squares fitting method, resulting in a reduced number of iterations and hence reduced computational cost. The low computational cost allows to improve performance by acquiring the attitude estimates at a higher rate or use more stars in the estimation algorithms. It is also a valuable contribution to the expanding field of small satellites, where it could enable low-cost platforms to have highly accurate attitude estimation.
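
    As a simple point of reference only (not the Gaussian Grid algorithm or its explicit expressions), an intensity-weighted centroid over a star's pixel window:

```python
import numpy as np

def weighted_centroid(window):
    # Centre of mass of the pixel intensities in a small star window.
    yy, xx = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    total = window.sum()
    return (xx * window).sum() / total, (yy * window).sum() / total
```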

  6. Noise filtering algorithm for the MFTF-B computer based control system

    SciTech Connect

    Minor, E.G.

    1983-11-30

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions.
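
    A sketch of the reporting rule described above, assuming a fixed deadband threshold per channel; the MFTF-B storage layout and tuning are not reproduced.

```python
def make_deadband_filter(threshold):
    last = {}                              # last reported value per analog channel
    def update(channel, value):
        if channel not in last or abs(value - last[channel]) > threshold:
            last[channel] = value
            return value                   # significant change: report to the database
        return None                        # treated as noise: suppress the message
    return update

# Example: f = make_deadband_filter(0.1); f("ch1", 1.00) -> 1.0; f("ch1", 1.05) -> None
```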

  7. A Fast parallel tridiagonal algorithm for a class of CFD applications

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

    The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents for study a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
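
    For reference, a sketch of the conventional sequential tridiagonal solver (the Thomas algorithm) that such parallel methods are compared against; the PDD partitioning itself is not attempted here.

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal, d = rhs.
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```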

  8. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
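
    A textbook Strassen recursion (square, power-of-two sizes only) showing where the seven sub-multiplications come from; the CRAY implementation described above additionally handled arbitrary shapes and careful scratch-space reuse, which this sketch does not.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    # Recursive Strassen multiply; fall back to a direct product below the cutoff size.
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```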

  9. Noise propagation in iterative reconstruction algorithms with line searches

    SciTech Connect

    Qi, Jinyi

    2003-11-15

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared with our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  10. New mode switching algorithm for the JPL 70-meter antenna servo controller

    NASA Technical Reports Server (NTRS)

    Nickerson, J. A.

    1988-01-01

    The design of control mode switching algorithms and logic for JPL's 70 m antenna servo controller is described. The old control mode switching logic was reviewed and perturbation problems were identified. Design approaches for mode switching are presented and the final design is described. Simulations used to compare old and new mode switching algorithms and logic show that the new mode switching techniques will significantly reduce perturbation problems.

  11. Bacteriophage significantly reduces Listeria monocytogenes on raw salmon fillet tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We have demonstrated the antilisterial activity of generally recognized as safe (GRAS) bacteriophage LISTEX P100 (phage P100) on the surface of raw salmon fillet tissue against Listeria monocytogenes serotypes 1/2a and 4b. In a broth model system, phage P100 completely inhibited L. monocytogenes gro...

  12. Climate warming could reduce runoff significantly in New England, USA

    USGS Publications Warehouse

    Huntington, T.G.

    2003-01-01

    The relation between mean annual temperature (MAT), mean annual precipitation (MAP) and evapotranspiration (ET) for 38 forested watersheds was determined to evaluate the potential increase in ET and resulting decrease in stream runoff that could occur following climate change and lengthening of the growing season. The watersheds were all predominantly forested and were located in eastern North America, along a gradient in MAT from 3.5°C in New Brunswick, CA, to 19.8°C in northern Florida. Regression analysis for MAT versus ET indicated that along this gradient ET increased at a rate of 2.85 cm °C⁻¹ increase in MAT (±0.96 cm °C⁻¹, 95% confidence limits). General circulation models (GCM) using current mid-range emission scenarios project global MAT to increase by about 3°C during the 21st century. The inferred, potential, reduction in annual runoff associated with a 3°C increase in MAT for a representative small coastal basin and an inland mountainous basin in New England would be 11-13%. Percentage reductions in average daily runoff could be substantially larger during the months of lowest flows (July-September). The largest absolute reductions in runoff are likely to be during April and May with smaller reduction in the fall. This seasonal pattern of reduction in runoff is consistent with lengthening of the growing season and an increase in the ratio of rain to snow. Future increases in water use efficiency (WUE), precipitation, and cloudiness could mitigate part or all of this reduction in runoff but the full effects of changing climate on WUE remain quite uncertain as do future trends in precipitation and cloudiness.

  13. Using DFX for Algorithm Evaluation

    SciTech Connect

    Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.

    1998-10-20

    Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a

  14. Control algorithms for dynamic attenuators

    SciTech Connect

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  15. Statistically significant relational data mining :

    SciTech Connect

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  16. Optimizing connected component labeling algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-04-01

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among them to reduce the number of neighbors examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels. This replaces the pointer based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of the pointer based rooted trees speeds up the connected component labeling algorithms by a factor of 5 ~ 100 in our tests on random binary images.
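
    Illustrative sketch (not the authors' code): the second strategy replaces pointer-based rooted trees with a flat array that maps each provisional label to its representative. A minimal union-find over such an array, keeping the smaller label as root so that final labels stay compact, could look like this:

        import numpy as np

        def make_equiv(max_labels):
            # parent[i] == i means label i is currently its own representative.
            return np.arange(max_labels, dtype=np.int32)

        def find(parent, i):
            # Follow array entries to the root, compressing the path on the way back.
            root = i
            while parent[root] != root:
                root = parent[root]
            while parent[i] != root:
                parent[i], i = root, parent[i]
            return root

        def union(parent, i, j):
            ri, rj = find(parent, i), find(parent, j)
            # Keep the smaller label as the representative.
            if ri < rj:
                parent[rj] = ri
            elif rj < ri:
                parent[ri] = rj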

  17. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, building on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. In the algorithm proposed in this paper, the SAR echoes are focused with consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are used to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is better suited to SAR interferometry (InSAR) research and applications. PMID:26871446

  18. Reduced order parameter estimation using quasilinearization and quadratic programming

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Putti, Mario; Yeh, William W.-G.

    2012-06-01

    The ability of a particular model to accurately predict how a system responds to forcing is predicated on various model parameters that must be appropriately identified. There are many algorithms whose purpose is to solve this inverse problem, which is often computationally intensive. In this study, we propose a new algorithm that significantly reduces the computational burden associated with parameter identification. The algorithm is an extension of the quasilinearization approach where the governing system of differential equations is linearized with respect to the parameters. The resulting inverse problem therefore becomes a linear regression or quadratic programming problem (QP) for minimizing the sum of squared residuals; the solution becomes an update on the parameter set. This process of linearization and regression is repeated until convergence takes place. This algorithm has not received much attention, as the QPs can become quite large, often infeasible for real-world systems. To alleviate this drawback, proper orthogonal decomposition is applied to reduce the size of the linearized model, thereby reducing the computational burden of solving each QP. In fact, this study shows that the snapshots need only be calculated once at the very beginning of the algorithm, after which no further calculations of the reduced-model subspace are required. The proposed algorithm therefore only requires one linearized full-model run per parameter at the first iteration followed by a series of reduced-order QPs. The method is applied to a groundwater model with about 30,000 computation nodes where as many as 15 zones of hydraulic conductivity are estimated.
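
    Illustrative sketch (not the authors' code): the linearize-and-regress loop described above can be pictured as repeatedly building a Jacobian of the model with respect to the parameters and solving a least-squares problem for the update. This sketch uses an unconstrained least-squares solve in place of the bounded QP and omits the proper orthogonal decomposition step; simulate, p0, tol and eps are assumed names and settings.

        import numpy as np

        def estimate_parameters(simulate, p0, observed, tol=1e-6, max_iter=20, eps=1e-6):
            """simulate(p) -> model outputs at the observation points (user-supplied).
            Each iteration linearizes the model around p and solves a linear
            least-squares problem for the parameter update."""
            p = np.asarray(p0, dtype=float)
            for _ in range(max_iter):
                f0 = simulate(p)
                residual = observed - f0
                # Finite-difference Jacobian: one extra model run per parameter.
                J = np.column_stack([
                    (simulate(p + eps * np.eye(len(p))[j]) - f0) / eps
                    for j in range(len(p))
                ])
                step, *_ = np.linalg.lstsq(J, residual, rcond=None)
                p = p + step
                if np.linalg.norm(step) < tol:
                    break
            return p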

  19. PVT Analysis With A Deconvolution Algorithm

    SciTech Connect

    Kouzes, Richard T.

    2011-02-01

    Polyvinyl Toluene (PVT) plastic scintillator is the most common gamma ray detector material used for large systems when only gross counting is needed because of its low cost, robustness, and relative sensitivity. PVT does provide some energy information about the incident photons, as has been demonstrated through the development of Energy Windowing analysis. There is a more sophisticated energy analysis algorithm developed by Symetrica, Inc., and they have demonstrated the application of their deconvolution algorithm to PVT with very promising results. The thrust of such a deconvolution algorithm used with PVT is to allow for identification and rejection of naturally occurring radioactive material, reducing alarm rates, rather than the complete identification of all radionuclides, which is the goal of spectroscopic portal monitors. Under this condition, there could be a significant increase in sensitivity to threat materials. The advantage of this approach is an enhancement to the low cost, robust detection capability of PVT-based radiation portal monitor systems. The success of this method could provide an inexpensive upgrade path for a large number of deployed PVT-based systems to provide significantly improved capability at a much lower cost than deployment of NaI(Tl)-based systems of comparable sensitivity.

  20. A sensor node lossless compression algorithm for non-slowly varying data based on DMD transform

    NASA Astrophysics Data System (ADS)

    Ren, Xuejun; Liu, Jianping

    2013-03-01

    Efficient utilization of energy is a core area of research in wireless sensor networks. Data compression methods that reduce the number of bits to be transmitted by the communication module can significantly reduce the energy requirement and increase the lifetime of a sensor node. Based on the lifting-scheme 2-point discrete cosine transform (DCT), this paper proposes a new reversible recursive algorithm, the Difference-Median-Difference (DMD) transform, for lossless data compression in sensor nodes. The DMD transform can significantly reduce the spatio-temporal correlations among sensor data and can run smoothly on resource-limited sensor nodes. Through an entropy encoder, the results of the DMD transform can be compressed more compactly based on their statistical characteristics. Compared with typical lossless algorithms, the proposed algorithm achieves better compression ratios for non-slowly-varying data while requiring less computational effort.

  1. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597

  2. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees.

    PubMed

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-09-18

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods.

  3. Implementation of low communication frequency 3D FFT algorithm for ultra-large-scale micromagnetics simulation

    NASA Astrophysics Data System (ADS)

    Tsukahara, Hiroshi; Iwano, Kaoru; Mitsumata, Chiharu; Ishikawa, Tadashi; Ono, Kanta

    2016-10-01

    We implement low-communication-frequency three-dimensional fast Fourier transform algorithms in a micromagnetics simulator for the calculation of the magnetostatic field, which accounts for a significant portion of large-scale micromagnetics simulations. This fast Fourier transform algorithm reduces the number of all-to-all communications from six to two. Simulations with our simulator show high scalability in parallelization, even when the micromagnetics simulation uses 32 768 physical computing cores. This low-communication-frequency fast Fourier transform algorithm enables micromagnetics simulations of the world's largest class, with over one billion calculation cells, to be carried out.

  4. A combined reconstruction algorithm for computerized ionospheric tomography

    NASA Astrophysics Data System (ADS)

    Wen, D. B.; Ou, J. K.; Yuan, Y. B.

    Ionospheric electron density profiles inverted by tomographic reconstruction of GPS-derived total electron content (TEC) measurements have the potential to become a tool to quantify ionospheric variability and investigate ionospheric dynamics. The problem of reconstructing ionospheric electron density from GPS receiver-to-satellite TEC measurements is formulated as an ill-posed discrete linear inverse problem. A combined reconstruction algorithm for computerized ionospheric tomography (CIT) is proposed in this paper. In this algorithm, Tikhonov regularization theory (TRT) is exploited to solve the ill-posed problem, and its estimate from GPS observation data is input as the initial guess of the simultaneous iterative reconstruction algorithm (SIRT). The combined algorithm offers a more reasonable way to choose the initial guess of SIRT, and the SIRT step improves the quality of the final reconstructed image. Numerical experiments with actual GPS observation data are used to validate the reliability of the method; the reconstructed results show that the new algorithm works reasonably and effectively for CIT, and the overall reconstruction error is reduced significantly compared to that of SIRT only or TRT only.
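
    Illustrative sketch (not from the record above): the combined scheme can be pictured as a Tikhonov-regularized solve whose result seeds a simultaneous iterative reconstruction. The dense system matrix A, regularization parameter lam, iteration count, and the SART-style scaling used here are illustrative assumptions.

        import numpy as np

        def trt_then_sirt(A, b, lam=1e-2, n_iter=100):
            """Tikhonov solution used as the initial guess for simultaneous iterations."""
            # Tikhonov regularization: (A^T A + lam*I) x = A^T b
            x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
            # Simultaneous iterative update with row/column-sum scaling (SART-style).
            row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
            col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
            for _ in range(n_iter):
                x = x + col * (A.T @ (row * (b - A @ x)))
            return x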

  5. Aligning parallel arrays to reduce communication

    NASA Technical Reports Server (NTRS)

    Sheffler, Thomas J.; Schreiber, Robert; Gilbert, John R.; Chatterjee, Siddhartha

    1994-01-01

    Axis and stride alignment is an important optimization in compiling data-parallel programs for distributed-memory machines. We previously developed an optimal algorithm for aligning array expressions. Here, we examine alignment for more general program graphs. We show that optimal alignment is NP-complete in this setting, so we study heuristic methods. This paper makes two contributions. First, we show how local graph transformations can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. Second, we give a heuristic that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. Our algorithms have been implemented; we present experimental results showing their effect on the performance of some example programs running on the CM-5.

  6. On the use of frequency-domain reconstruction algorithms for photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Schulze, Rainer; Zangerl, Gerhard; Holotta, Markus; Meyer, Dirk; Handle, Florian; Nuster, Robert; Paltauf, Günther; Scherzer, Otmar

    2011-08-01

    We investigate the use of a frequency-domain reconstruction algorithm based on the nonuniform fast Fourier transform (NUFFT) for photoacoustic imaging (PAI). Standard algorithms based on the fast Fourier transform (FFT) are computationally efficient, but compromise the image quality by artifacts. In our previous work we have developed an algorithm for PAI based on the NUFFT which is computationally efficient and can reconstruct images with the quality known from temporal backprojection algorithms. In this paper we review imaging qualities, such as resolution, signal-to-noise ratio, and the effects of artifacts in real-world situations. Reconstruction examples show that artifacts are reduced significantly. In particular, image details with a larger distance from the detectors can be resolved more accurately than with standard FFT algorithms.

  7. Eddy-current NDE inverse problem with sparse grid algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric

    2016-02-01

    In model-based inverse problems, unknown parameters (such as length, width, and depth) need to be estimated. When there are few unknown parameters, conventional mathematical methods are suitable, but as the number of unknown parameters increases, the computation becomes heavy. To reduce the computational burden, the sparse grid algorithm was used in our work. As a result, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid.

  8. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  9. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  10. A Dummy Scan Flip-Flop Insertion Algorithm based on Driving Vertex

    NASA Astrophysics Data System (ADS)

    Liu, H. L.; Li, L.; Zhang, Z. X.; Zhou, W. T.

    2017-03-01

    Hardware Trojans are an emerging issue for global hardware security, and research on Hardware Trojan detection is urgent and significant. The Dummy Scan Flip-Flop (DSFF) structure can be used to improve the probability of hardware Trojan activation, which is significant for hardware Trojan detection, especially during the design phase. In this paper, an algorithm for inserting the DSFF structure based on driving vertices is proposed. According to the experimental results, under the same transition probability threshold (Pth), the proposed algorithm reduces both the insertion complexity and the induced area overhead of DSFF insertion compared to the state of the art. The maximum area optimization rate reaches 44.8%. The simulation results on the S386 and S38584 benchmark circuits indicate that the proposed algorithm can significantly reduce Trojan authentication time by increasing the activation probability of hardware Trojan circuits.

  11. Outline of a fast hardware implementation of Winograd's DFT algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1980-01-01

    The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which five consecutive data batches are operated on simultaneously, each batch undergoing one of five processing phases.

  12. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.

  13. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
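
    Illustrative sketch (not the authors' hardware pipeline): the block-level idea of choosing hysteresis thresholds from each block's own gradient statistics, rather than from frame-level statistics, can be approximated in software. The block size, the percentile rule for the high threshold, and the low/high ratio are assumptions, and OpenCV's cv2.Sobel and cv2.Canny stand in for the paper's custom gradient and hysteresis stages.

        import cv2
        import numpy as np

        def block_canny(gray, block=64, pct=80, ratio=0.4):
            """gray: 8-bit grayscale image. Returns a block-wise Canny edge map."""
            edges = np.zeros_like(gray)
            gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
            gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
            mag = np.hypot(gx, gy)
            for y in range(0, gray.shape[0], block):
                for x in range(0, gray.shape[1], block):
                    tile = gray[y:y + block, x:x + block]
                    m = mag[y:y + block, x:x + block]
                    high = np.percentile(m, pct)   # block-local high threshold
                    low = ratio * high             # proportional low threshold
                    edges[y:y + block, x:x + block] = cv2.Canny(tile, low, high)
            return edges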

  14. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  15. ICESat-2 / ATLAS Flight Science Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Mcgarry, J.; Carabajal, C. C.; Degnan, J. J.; Mallama, A.; Palm, S. P.; Ricklefs, R.; Saba, J. L.

    2013-12-01

    NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft which is expected to launch in 2016 with a 3 year mission lifetime. The ICESat-2 orbital altitude will be 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single photon detection system transmitting at 532nm with a laser repetition rate of 10 kHz and a 6 spot pattern on the Earth's surface. Without some method of eliminating solar background noise in near real-time, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level a set of onboard Receiver Algorithms has been developed. These Algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allow the instrument to telemeter only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and an onboard relief map. Similar to what was flown on the ATLAS predecessor GLAS (Geoscience Laser Altimeter System) the DEM provides minimum and maximum heights for each 1 degree x 1 degree tile on the Earth. This information allows the onboard algorithm to limit its signal search to the region between minimum and maximum heights (plus some margin for errors). The understanding that the surface echoes will tend to clump while noise will be randomly distributed led us to histogram the received event times. The selection of the signal locations is based on those histogram bins with statistically significant counts. Once the signal location has been established the onboard Digital Relief Map (DRM) is used to determine the vertical width of the telemetry band about the signal. The ATLAS Receiver Algorithms are nearing completion of the development phase and are currently being tested using a Monte Carlo Software Simulator that models the instrument, the orbit and the environment
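
    Illustrative sketch (not the flight algorithm): the histogramming idea can be pictured as binning received photon heights within the DEM window and keeping the bins whose counts are statistically unlikely under a Poisson background. The bin size, significance level, and median-based background estimate are illustrative assumptions.

        import numpy as np
        from scipy.stats import poisson

        def find_signal_bins(heights, h_min, h_max, bin_size=5.0, p_value=1e-3):
            """Return the lower edges and counts of height bins whose photon counts
            exceed what a Poisson background would plausibly produce."""
            edges = np.arange(h_min, h_max + bin_size, bin_size)
            counts, _ = np.histogram(heights, bins=edges)
            background = np.median(counts)                    # crude noise-rate estimate
            threshold = poisson.ppf(1.0 - p_value, background)
            keep = counts > threshold                         # statistically significant bins
            return edges[:-1][keep], counts[keep]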

  16. Automatic Spike Removal Algorithm for Raman Spectra.

    PubMed

    Tian, Yao; Burch, Kenneth S

    2016-11-01

    Raman spectroscopy is a powerful technique, widely used in both academia and industry. In part, the technique's extensive use stems from its ability to uniquely identify and image various material parameters: composition, strain, temperature, lattice/excitation symmetry, and magnetism in bulk, nano, solid, and organic materials. However, in nanomaterials and samples with low thermal conductivity, these measurements require long acquisition times. On the other hand, charge-coupled device (CCD) detectors used in Raman microscopes are vulnerable to cosmic rays. As a result, many spurious spikes occur in the measured spectra, which can distort the result or require the spectra to be ignored. In this paper, we outline a new method that significantly improves upon existing algorithms for removing these spikes. Specifically, we employ wavelet transform and data clustering in a new spike-removal algorithm. This algorithm results in spike-free spectra with negligible spectral distortion. The reduced dependence on the selection of wavelets and intuitive wavelet coefficient adjustment strategy enables non-experts to employ these powerful spectra-filtering techniques.
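
    Illustrative sketch in the same spirit (not the authors' algorithm): spikes can be located in the finest wavelet detail coefficients with a robust threshold, here substituting simple thresholding for the paper's clustering step. The wavelet, decomposition level, z-score threshold, and median-based replacement are assumptions; PyWavelets (pywt) is assumed available.

        import numpy as np
        import pywt

        def despike(spectrum, wavelet="db4", z_thresh=8.0, pad=3):
            coeffs = pywt.wavedec(spectrum, wavelet, level=1)
            detail = coeffs[-1]                      # finest scale: spikes stand out here
            med = np.median(detail)
            mad = np.median(np.abs(detail - med)) + 1e-12
            z = 0.6745 * (detail - med) / mad        # robust z-score of detail coefficients
            clean = np.asarray(spectrum, dtype=float).copy()
            for j in np.where(np.abs(z) > z_thresh)[0]:
                i = min(2 * j, len(clean) - 1)       # approximate position in the signal
                lo, hi = max(i - pad, 0), min(i + pad + 1, len(clean))
                neighbors = np.concatenate([clean[max(lo - 5, 0):lo], clean[hi:hi + 5]])
                if neighbors.size:
                    # Replace the spike neighborhood with the median of its surroundings.
                    clean[lo:hi] = np.median(neighbors)
            return clean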

  17. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  18. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs of heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to avoid premature convergence and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406

  19. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms.

    PubMed

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M K

    2015-06-11

    Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs of heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to avoid premature convergence and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution.

  20. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction algorithms are an effective way to address this problem. Among them, the SR algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel from the low-resolution observation images alone, using appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when the channels are coprime. In this paper, we use significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to account for the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process and finally restore a clear image. Experimental results show that the algorithm meets the convergence requirement of the convolution kernel estimation.

  1. A New Aloha Anti-Collision Algorithm Based on CDMA

    NASA Astrophysics Data System (ADS)

    Bai, Enjian; Feng, Zhu

    Tag collision is a common problem in RFID (radio frequency identification) systems. Collisions compromise the integrity of data transmission during communication in an RFID system. Based on an analysis of existing anti-collision algorithms, a novel anti-collision algorithm is presented. The new algorithm combines the group dynamic frame slotted Aloha algorithm with code division multiple access technology. The algorithm can effectively reduce the collision probability between tags. For the same number of tags, the algorithm reduces the reader recognition time and improves the overall system throughput.

  2. Accelerating Computation of Large Biological Datasets using MapReduce Framework.

    PubMed

    Wang, Chao; Dai, Dong; Li, Xi; Wang, Aili; Zhou, Xuehai

    2016-04-05

    The maximal information coefficient (MIC) has been proposed to discover relationships and associations between pairs of variables. Accelerating the MIC calculation poses significant challenges for bioinformatics scientists, especially in genome sequencing and biological annotation. In this paper we explore a parallel approach that uses the MapReduce framework to improve the computing efficiency and throughput of the MIC computation. The acceleration system includes biological data storage on HDFS, preprocessing algorithms, a distributed memory cache mechanism, and the partitioning of MapReduce jobs. Based on this acceleration approach, we extend the traditional two-variable algorithm to a multiple-variable algorithm. The experimental results show that our parallel solution provides a linear speedup compared with the original algorithm without affecting correctness or sensitivity.

  3. [The new algorithm for disease management of patients with epilepsy based on genetic research].

    PubMed

    Oros, M M; Smolanka, V I

    2012-01-01

    We have developed and proposed a new algorithm for treating patients with epilepsy, which takes into account genetic criteria for the effectiveness of AEDs and makes it possible to significantly reduce the time needed to establish drug resistance, which in turn shortens the time over which epileptogenesis progresses. Alternative treatments for epilepsy can therefore be applied before irreversible changes occur in the patient's central nervous system. Treatment according to this algorithm accelerates the choice of adequate treatment tactics for a particular patient, which promotes safety in society by keeping patients active and healthy citizens.

  4. Application of modified Arnoldi algorithm to passive macromodeling of MEMS.

    PubMed

    Wong, Woon Ket; Wang, Wei

    2009-02-01

    The demand for accurately simulating the dynamical responses of complex MEMS and NEMS systems has led to intensive studies of reduced-order modeling methods. We apply a modified Block Arnoldi algorithm to significantly reduce the run time and the computer resources required for such calculations, while preserving essential properties. The 2n x 2n matrix in the computation is replaced by an n x n matrix, and the FLOP count is reduced from (56n^3 - 216n^2 + 22n)/3 to (7n^3 - 54n^2 + 11n)/3. The CPU run time for a resonator example with n = 39 is reduced from 0.091 seconds to 0.080 seconds. For a butterfly gyroscope example with a larger matrix size, n = 17361, the CPU time is reduced from 4343 seconds to 1528 seconds, a 65% improvement.
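
    For reference, a plain (unmodified) block Arnoldi projection looks like the sketch below; the authors' modification and FLOP savings are not reproduced here, and the names A (system matrix), B (input block), and m (number of blocks) are illustrative.

        import numpy as np

        def block_arnoldi(A, B, m):
            """Build an orthonormal basis V of the block Krylov space
            span{B, AB, ..., A^(m-1) B}; the reduced-order model is (V^T A V, V^T B)."""
            blocks = [np.linalg.qr(B)[0]]
            for _ in range(m - 1):
                W = A @ blocks[-1]
                for Vj in blocks:                 # modified Gram-Schmidt against earlier blocks
                    W = W - Vj @ (Vj.T @ W)
                blocks.append(np.linalg.qr(W)[0])
            V = np.hstack(blocks)
            return V, V.T @ A @ V, V.T @ B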

  5. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in the detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. By alternately using Gaussian and Cauchy distributions to generate offspring at the niche level, the algorithms again potentially balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  7. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant. No CXY1350(4)).

  8. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. Therefore the design is aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, the MVC motion prediction can be divided into three layers: the first is the full and sub-pixel motion search, the second is the mode selection process, and the third is the repetition of the first and second for inter-view prediction. The proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  9. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  10. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  11. Reduced discretization error in HZETRN

    SciTech Connect

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm^2 exposed to both solar particle event and galactic cosmic ray environments.

  12. "Zeus" a new oral anticoagulant therapy dosing algorithm: a cohort study.

    PubMed

    Cafolla, A; Melizzi, R; Baldacci, E; Pignoloni, P; Dragoni, F; Campanelli, M; Caraccini, R; Foà, R

    2011-10-01

    The demand for oral anticoagulant therapy (OAT) has constantly increased during the last ten years, with an extended use of computer assistance. Many mathematical algorithms have been proposed to suggest doses and the time to the next visit for patients on OAT. We designed a new algorithm: "Zeus". A before-after study was planned to compare the efficacy and safety of OAT dosing by this algorithm with manual dosing decided by the same expert physicians according to the target International Normalized Ratio (INR). The study analysed data from 1876 patients managed with each of the two modalities for eight months, with an interval of two years between them. The aim was to verify the improvement in therapy quality, measured as time spent in the INR target range, and the efficiency and safety of the Zeus algorithm. Time in therapeutic range (TTR) was significantly (p < 0.0001) higher during the algorithm dosing period than during the manual management period (62.3% vs 50.3%). The number of PT/INR tests above 5 was significantly (p < 0.001) reduced by algorithm-suggested prescriptions in comparison with manual ones (254 vs 537). The anticoagulant drug amount prescribed according to the algorithm suggestions was significantly (p < 0.0001) lower than that of the manual method. The number of clinical events observed in patients during the algorithm management period was significantly (p < 0.05) lower than in those managed with manual dosing. This study confirms the clinical utility of computer-assisted OAT and shows the efficacy and safety of the Zeus algorithm.

  13. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As a data mining technique, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called "grids", and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant computation time reduction with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, the grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be used effectively for real-time outlier detection.
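
    Illustrative sketch (one possible reading of the grid idea, not the authors' code): bucket points into grid cells, compute LOF over the cell centroids, and give every point the score of its own cell. The cell size and neighbor count are assumed, and scikit-learn's LocalOutlierFactor stands in for a custom LOF implementation.

        import numpy as np
        from sklearn.neighbors import LocalOutlierFactor

        def grid_lof(X, cell_size=1.0, k=20):
            """Approximate LOF: run LOF on grid-cell centroids instead of raw points."""
            cells = np.floor(X / cell_size).astype(int)
            keys, inverse = np.unique(cells, axis=0, return_inverse=True)
            centroids = np.vstack([X[inverse == i].mean(axis=0) for i in range(len(keys))])
            lof = LocalOutlierFactor(n_neighbors=max(1, min(k, len(keys) - 1)))
            lof.fit(centroids)
            cell_scores = -lof.negative_outlier_factor_   # higher = more outlying
            return cell_scores[inverse]                   # each point inherits its cell's score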

  14. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 IIC polypeptide 9 (CYP2C9 *2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1 *3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating it with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51, warfarin resistant: 0.96 vs 0.77 and warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of suboptimal dosing.
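
    Illustrative sketch (not the published model): a population-specific linear dose model of this kind could be fit as below, assuming a table with genotype indicator columns and vitamin K intake. The predictor layout and the numbers are placeholders, not the study's data or coefficients.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical design matrix: one row per patient who reached therapeutic INR.
        # Columns: vitamin K intake, CYP2C9*2 carrier, CYP2C9*3 carrier, VKORC1 -1639 G>A carrier
        X = np.array([[60, 0, 0, 1],
                      [45, 1, 0, 0],
                      [80, 0, 1, 1]], dtype=float)
        weekly_dose = np.array([35.0, 20.0, 12.5])     # placeholder therapeutic doses (mg/week)

        model = LinearRegression().fit(X, weekly_dose)
        predicted = model.predict(X)                   # dose suggestions for the same predictors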

  15. A novel algorithm for notch detection

    NASA Astrophysics Data System (ADS)

    Acosta, C.; Salazar, D.; Morales, D.

    2013-06-01

    It is common knowledge that DFM guidelines require revisions to design data. These guidelines impose the need for corrections inserted into areas within the design data flow. At times this requires rather drastic modifications to the data, both during the layer derivation or DRC phase, and especially within the RET phase, for example during OPC. During such data transformations, several polygon geometry changes are introduced, which can substantially increase shot count and geometry complexity, and eventually complicate conversion to mask writer machine formats. In the resulting complex data, notches may be found that do not significantly contribute to the final manufacturing results but do contribute to the complexity of the surrounding geometry, and are therefore undesirable. Additionally, there are cases in which the overall figure count can be reduced with minimal impact on the quality of the corrected data if notches are detected and corrected, and other cases where data quality could be improved if specific valley notches are filled in or peak notches are cut out. Such cases generally satisfy specific geometrical restrictions in order to be valid candidates for notch correction. Traditional notch detection has been done for rectilinear (Manhattan-style) data and only in axis-parallel directions. The traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. These approaches are in general adaptations, and therefore ill-fitted for generalized detection of notches with unusual shapes and rotations. This paper covers a novel algorithm developed for the CATS MRCC tool that finds both valley and peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so that it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances

  16. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  17. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms

    PubMed Central

    Zwartjes, Ardjan; Havinga, Paul J. M.; Smit, Gerard J. M.; Hurink, Johann L.

    2016-01-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network life time. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning based algorithms using sampled data. An important issue, however, is the training phase of these learning based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning on the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution. PMID:27706071

  18. A Modified NASA Team Sea Ice Algorithm for the Antarctic

    NASA Technical Reports Server (NTRS)

    Cavalieri, Donald J.; Markus, Thorsten

    1998-01-01

    A recent comparative study of the NASA Team and Bootstrap passive microwave sea ice algorithms revealed significantly different sea ice concentration retrievals in some parts of the Antarctic. The study identified potential reasons for the discrepancies including the influence of sea ice temperature variability on the Bootstrap retrievals and the influence of ice surface reflectivity on the horizontally polarized emissivity in the NASA Team retrievals. In this study, we present a modified version of the NASA Team algorithm which reduces the error associated with the use of horizontally polarized radiance data, while retaining the relative insensitivity to ice temperature variations provided by radiance ratios. By retaining the 19 GHz polarization as an independent variable, we also maintain a relatively large dynamic range in sea ice concentration. The modified algorithm utilizes the 19 GHz polarization (PR19) and both gradient ratios, GRV and GRH, defined by (37V-19V)/(37V+19V) and (37H-19H)/(37H+19H), respectively, rather than just GRV used in the current NASA Team algorithm. A plot of GRV versus GRH shows that the preponderance of points lie along a quadratic curve, whereas those points affected by surface reflectivity anomalies deviate from this curve. This serves as a method of identifying the problem points. The 19H brightness temperature of these problem points is increased so that they too fall along the quadratic curve. Sea ice concentrations derived from AVHRR imagery illustrate the extent to which this method reduces the error associated with surface layering.
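
    Illustrative sketch: the ratio definitions below are quoted from the abstract, while the quadratic-fit flagging of anomalous points (and the tolerance tol) is an assumed way of implementing the described correction idea, not the published procedure.

        import numpy as np

        def nt_ratios(tb19v, tb19h, tb37v, tb37h):
            # Polarization and gradient ratios as defined in the abstract.
            pr19 = (tb19v - tb19h) / (tb19v + tb19h)
            grv  = (tb37v - tb19v) / (tb37v + tb19v)
            grh  = (tb37h - tb19h) / (tb37h + tb19h)
            return pr19, grv, grh

        def flag_surface_anomalies(grv, grh, deg=2, tol=0.01):
            """Fit the quadratic GRH-vs-GRV relation from the bulk of the points and
            flag pixels whose GRH deviates from the curve by more than tol."""
            coeffs = np.polyfit(grv, grh, deg)
            resid = grh - np.polyval(coeffs, grv)
            return np.abs(resid) > tol, coeffs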

  19. A simplified rate control algorithm for H.264/SVC

    NASA Astrophysics Data System (ADS)

    Zhang, Guang Y.; Abdelazim, Abdelrahman; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    The objective of scalable video coding is to enable the generation of a unique bitstream that can adapt to various bitrates, transmission channels and display capabilities. The scalability is categorised in terms of temporal, spatial, and quality. Effective Rate Control (RC) has important ramifications for coding efficiency, and also channel bandwidth and buffer constraints in real-time communication. The main target of RC is to reduce the disparity between the actual and target bit-rates. In order to meet the target bitrate, a predicted Mean of Absolute Difference (MAD) between frames is used in a rate-quantisation model to obtain the Quantisation Parameter (QP) for encoding the current frame. The encoding process exploits the interdependencies between video frames; therefore the MAD does not change abruptly unless the scene changes significantly. After a scene change, the MAD will maintain a stable slow increase or decrease. Based on this observation, we developed a simplified RC algorithm. The scheme is divided into two steps: firstly, we predict scene changes; secondly, in order to suppress fluctuations in visual quality, we limit the change in QP value between two frames to an adaptive range. This limits the need to use the rate-quantisation model to those situations where the scene changes significantly. To assess the proposed algorithm, comprehensive experiments were conducted. The experimental results show that the proposed algorithm significantly reduces encoding time whilst maintaining similar rate distortion performance, compared to both the H.264/SVC reference software and recently reported work.
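
    Illustrative sketch of the adaptive-range idea (not the paper's exact rule): only fall back to the full rate-quantisation model on scene changes; otherwise clamp the QP step between consecutive frames. The bound max_delta is an assumed value.

        def next_qp(prev_qp, model_qp, scene_changed, max_delta=2):
            """Return the QP for the current frame given the previous frame's QP,
            the QP suggested by the rate-quantisation model, and a scene-change flag."""
            if scene_changed:
                return model_qp                      # re-run the full model on scene changes
            # Otherwise limit the frame-to-frame QP change to a small range.
            return min(max(model_qp, prev_qp - max_delta), prev_qp + max_delta)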

  20. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  1. REDUCING INDOOR HUMIDITY SIGNIFICANTLY REDUCES DUST MITES AND ALLERGEN IN HOMES. (R825250)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  2. An enhanced algorithm to estimate BDS satellite's differential code biases

    NASA Astrophysics Data System (ADS)

    Shi, Chuang; Fan, Lei; Li, Min; Liu, Zhizhao; Gu, Shengfeng; Zhong, Shiming; Song, Weiwei

    2016-02-01

    This paper proposes an enhanced algorithm to estimate the differential code biases (DCB) on three frequencies of the BeiDou Navigation Satellite System (BDS) satellites. By forming ionospheric observables derived from uncombined precise point positioning and the geometry-free linear combination of phase-smoothed range, satellite DCBs are determined together with the ionospheric delay that is modeled at each individual station. Specifically, the DCB and ionospheric delay are estimated in a weighted least-squares estimator by considering the precision of the ionospheric observables, and a misclosure constraint for the different types of satellite DCBs is introduced. This algorithm was tested with GNSS data collected in November and December 2013 from 29 stations of the Multi-GNSS Experiment (MGEX) and the BeiDou Experimental Tracking Stations. Results show that the proposed algorithm is able to precisely estimate BDS satellite DCBs, where the mean value of day-to-day scattering is about 0.19 ns and the RMS of the difference with respect to MGEX DCB products is about 0.24 ns. For comparison, an existing algorithm developed at the Institute of Geodesy and Geophysics, China (IGGDCB) was also used to process the same dataset. Results show that the difference between the DCBs from the enhanced algorithm and the DCB products from the Center for Orbit Determination in Europe (CODE) and MGEX is reduced on average by 46% for GPS satellites and 14% for BDS satellites, compared with the corresponding difference for the IGGDCB algorithm. In addition, we find that the day-to-day scattering of BDS IGSO satellites is obviously lower than that of GEO and MEO satellites, and a significant bias exists in the daily DCB values of GEO satellites compared with the MGEX DCB product. The proposed algorithm also provides a new approach to estimate the satellite DCBs of multiple GNSS systems.
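
    A minimal sketch of the estimation idea is given below, assuming a linearised observation model in which each ionospheric observable depends on station/epoch ionospheric parameters plus a satellite DCB: the parameters are solved by weighted least squares with a zero-mean (misclosure-type) constraint across the satellite DCBs appended as an extra equation. The design matrix, weights and constraint weight are placeholders for illustration, not the paper's actual model.

      import numpy as np

      def solve_dcb(A, y, sigma, n_sat, constraint_weight=1e4):
          """Weighted least squares with a zero-mean constraint over satellite DCBs.

          A     : design matrix (rows = ionospheric observables; the last n_sat
                  columns are assumed to be the satellite DCB parameters)
          y     : observation vector
          sigma : per-observation standard deviations (used to weight the rows)
          """
          w = 1.0 / sigma
          Aw = A * w[:, None]
          yw = y * w
          # Append a heavily weighted row forcing the satellite DCBs to sum to zero.
          row = np.zeros(A.shape[1])
          row[-n_sat:] = constraint_weight
          Aw = np.vstack([Aw, row])
          yw = np.append(yw, 0.0)
          x, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
          return x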

  3. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  4. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, fast multipole methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
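
    For context, the sketch below sets up and solves the ordinary kriging system directly with a dense solver; it is only a baseline illustration of the interpolation being accelerated (the covariance model and data are assumptions), not the sparse SYMMLQ/tapering/FMM machinery described above.

      import numpy as np

      def exp_cov(h, sill=1.0, rng=10.0):
          """Exponential covariance model (an assumed choice for illustration)."""
          return sill * np.exp(-h / rng)

      def ordinary_kriging(points, values, query):
          """Predict the value at `query` from scattered (points, values)."""
          n = len(points)
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          K = np.ones((n + 1, n + 1))
          K[:n, :n] = exp_cov(d)
          K[n, n] = 0.0                        # Lagrange multiplier row/column
          rhs = np.ones(n + 1)
          rhs[:n] = exp_cov(np.linalg.norm(points - query, axis=-1))
          w = np.linalg.solve(K, rhs)
          return w[:n] @ values

      pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      vals = np.array([1.0, 2.0, 2.0, 3.0])
      print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))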

  5. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.

  6. ALGORITHM FOR SORTING GROUPED DATA

    NASA Technical Reports Server (NTRS)

    Evans, J. D.

    1994-01-01

    It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
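
    The abstract does not spell out the polynomial encoding, so the sketch below shows one plausible reading of the scheme: each fixed-width record is collapsed to a single scalar key by treating its elements as digits in a base derived from the global max-min spread, and the records are then sorted by that key. The function and the sample data are illustrative assumptions; the original program is the BASIC implementation described above.

      def sort_grouped(records):
          """Sort fixed-width records by collapsing each one to a single scalar key.

          Assumes integer-valued data so that digit-style comparison of the keys
          matches element-by-element comparison of the records.
          """
          lo = min(min(r) for r in records)
          hi = max(max(r) for r in records)
          base = (hi - lo) + 1                  # every shifted element fits in [0, base)

          def key(record):
              k = 0
              for e in record:
                  k = k * base + (e - lo)       # most significant element first
              return k

          return sorted(records, key=key)

      data = [(301, 12, 7), (299, 45, 3), (301, 11, 9)]
      print(sort_grouped(data))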

  7. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.

  8. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  9. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  10. Significance of periodogram peaks

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Guy, Leanne; Zucker, Shay

    2016-10-01

    Three versions of significance measures or False Alarm Probabilities (FAPs) for periodogram peaks are presented and compared for sinusoidal and box-like signals, with specific application on large-scale surveys in mind.

  11. Adaptive search range adjustment and multiframe selection algorithm for motion estimation in H.264/AVC

    NASA Astrophysics Data System (ADS)

    Liu, Yingzhe; Wang, Jinxiang; Fu, Fangfa

    2013-04-01

    The H.264/AVC video standard adopts a fixed search range (SR) and fixed reference frame (RF) for motion estimation. These fixed settings result in a heavy computational load in the video encoder. We propose a dynamic SR and multiframe selection algorithm to improve the computational efficiency of motion estimation. By exploiting the relationship between the predicted motion vector and the SR size, we develop an adaptive SR adjustment algorithm. We also design an RF selection scheme based on the correlation between the different block sizes of the macroblock. Experimental results show that our algorithm can significantly reduce the computational complexity of motion estimation compared with the JM15.1 reference software, with a negligible decrease in peak signal-to-noise ratio and a slight increase in bit rate.
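
    A minimal sketch of the adaptive search-range idea is shown below: the SR for the current block is scaled with the magnitude of the predicted motion vector and clamped between small and full-range limits. The thresholds and scaling factor are illustrative assumptions, not the values derived in the paper.

      def adaptive_search_range(pred_mv, sr_min=4, sr_max=32, scale=2.0):
          """Choose a search range (in pixels) from the predicted motion vector."""
          mvx, mvy = pred_mv
          magnitude = max(abs(mvx), abs(mvy))
          sr = int(scale * magnitude) + sr_min   # larger predicted motion -> wider search
          return min(max(sr, sr_min), sr_max)

      # Nearly static block gets a small window; a fast-moving block gets the full window.
      print(adaptive_search_range((0, 1)), adaptive_search_range((12, -20)))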

  12. A split finite element algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1979-01-01

    An accurate and efficient numerical solution algorithm is established for solution of the high Reynolds number limit of the Navier-Stokes equations governing the multidimensional flow of a compressible essentially inviscid fluid. Finite element interpolation theory is used within a dissipative formulation established using Galerkin criteria within the Method of Weighted Residuals. An implicit iterative solution algorithm is developed, employing tensor product bases within a fractional steps integration procedure, that significantly enhances solution economy concurrent with sharply reduced computer hardware demands. The algorithm is evaluated for resolution of steep field gradients and coarse grid accuracy using both linear and quadratic tensor product interpolation bases. Numerical solutions for linear and nonlinear, one, two and three dimensional examples confirm and extend the linearized theoretical analyses, and results are compared to competitive finite difference derived algorithms.

  13. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchanged over insecure networks requires authentication and confidentiality for the underlying database, which is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  14. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The algorithm reduces the Petri net in order to lower the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The SM-cover found will also be used in the development of algorithms for decomposition, and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  15. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.

  16. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC, with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that the proposed fast method reduces the encoding time of the current HM reference software by about 57%, with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has a reasonable peak signal-to-noise ratio loss and nearly the same subjective perceptual quality.

  17. Study on the optimal algorithm prediction of corn leaf component information based on hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Wang, Jihua; Wang, Cheng; Xu, Tongyu

    2016-09-01

    The genetic algorithm (GA) has a significant effect on band selection for Partial Least Squares (PLS) calibration models. Applying a genetic algorithm to the selection of characteristic bands can reach the optimal solution more rapidly, effectively improve measurement accuracy, and reduce the number of variables used for modeling. In this study, a genetic algorithm module performed band selection for the application of hyperspectral imaging to nondestructive testing of corn seedling leaves, and a GA-PLS model was established. In addition, PLS quantitative models over the full spectrum and over an experience-based spectral region were established in order to assess the feasibility of optimizing wave bands with the genetic algorithm, and model robustness was evaluated. Twelve characteristic bands were selected by the genetic algorithm. With the reflectance values of corn seedling component information at the spectral wavelengths corresponding to the 12 characteristic bands as variables, a PLS model for the SPAD values of the corn leaves was established, with modeling results of r = 0.7825. These results were better than those of the PLS models established on the full spectrum and on the experience-based selected bands. The results suggest that the genetic algorithm can be used for data optimization and screening before establishing the corn seedling component information model by the PLS method, effectively increasing measurement accuracy and greatly reducing the number of variables used for modeling.
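
    A compact sketch of GA-style band selection is given below. It evolves a population of binary masks over the wavelength axis and scores each mask with a cross-validated linear model; the fitness uses ridge regression from scikit-learn as a stand-in for PLS, and all settings (population size, mutation rate, penalty, synthetic data) are illustrative assumptions rather than the configuration used in the study.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def fitness(mask, X, y):
          if mask.sum() == 0:
              return -np.inf
          scores = cross_val_score(Ridge(alpha=1.0), X[:, mask.astype(bool)], y,
                                   cv=3, scoring="r2")
          return scores.mean() - 0.01 * mask.sum()        # penalise large band sets

      def ga_band_selection(X, y, pop_size=20, generations=30, p_mut=0.05):
          n_bands = X.shape[1]
          pop = rng.integers(0, 2, size=(pop_size, n_bands))
          for _ in range(generations):
              fit = np.array([fitness(ind, X, y) for ind in pop])
              order = np.argsort(fit)[::-1]
              parents = pop[order[: pop_size // 2]]            # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n_bands)
                  child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                  flip = rng.random(n_bands) < p_mut           # bit-flip mutation
                  children.append(np.where(flip, 1 - child, child))
              pop = np.vstack([parents, children])
          fit = np.array([fitness(ind, X, y) for ind in pop])
          return pop[int(np.argmax(fit))]

      # Synthetic example: 100 samples, 50 "bands", response driven by a few bands.
      X = rng.normal(size=(100, 50))
      y = X[:, 3] - 2 * X[:, 17] + 0.5 * X[:, 30] + rng.normal(scale=0.1, size=100)
      print("selected bands:", np.flatnonzero(ga_band_selection(X, y)))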

  18. Statistical Significance Testing.

    ERIC Educational Resources Information Center

    McLean, James E., Ed.; Kaufman, Alan S., Ed.

    1998-01-01

    The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…

  19. Significance of brown dwarfs

    NASA Technical Reports Server (NTRS)

    Black, D. C.

    1986-01-01

    The significance of brown dwarfs for resolving some major problems in astronomy is discussed. The importance of brown dwarfs for models of star formation by fragmentation of molecular clouds and for obtaining independent measurements of the ages of stars in binary systems is addressed. The relationship of brown dwarfs to planets is considered.

  20. Adaptive-feedback control algorithm.

    PubMed

    Huang, Debin

    2006-06-01

    This paper is motivated by giving the detailed proofs and some interesting remarks on the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves in detail the strictness of this algorithm from the viewpoint of mathematics and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, which shows that some techniques proposed in the literature are less rigorous and can be ineffective in some cases.

  1. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  2. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  3. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  4. Technology to Reduce Hypoglycemia.

    PubMed

    Yeoh, Ester; Choudhary, Pratik

    2015-07-01

    Hypoglycemia is a major barrier toward achieving glycemic targets and is associated with significant morbidity (both psychological and physical) and mortality. This article reviews technological strategies, from simple to more advanced technologies, which may help prevent or mitigate exposure to hypoglycemia. More efficient insulin delivery systems, bolus advisor calculators, data downloads providing information on glucose trends, continuous glucose monitoring with alarms warning of hypoglycemia, predictive algorithms, and finally closed loop insulin delivery systems are reviewed. The building blocks to correct use and interpretation of this range of available technology require patient education and appropriate patient selection.

  5. Technology to Reduce Hypoglycemia

    PubMed Central

    Yeoh, Ester; Choudhary, Pratik

    2015-01-01

    Hypoglycemia is a major barrier toward achieving glycemic targets and is associated with significant morbidity (both psychological and physical) and mortality. This article reviews technological strategies, from simple to more advanced technologies, which may help prevent or mitigate exposure to hypoglycemia. More efficient insulin delivery systems, bolus advisor calculators, data downloads providing information on glucose trends, continuous glucose monitoring with alarms warning of hypoglycemia, predictive algorithms, and finally closed loop insulin delivery systems are reviewed. The building blocks to correct use and interpretation of this range of available technology require patient education and appropriate patient selection. PMID:25883167

  6. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  7. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  8. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  9. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  10. A denoising algorithm for projection measurements in cone-beam computed tomography.

    PubMed

    Karimi, Davood; Ward, Rabab

    2016-02-01

    The ability to reduce the radiation dose in computed tomography (CT) is limited by the excessive quantum noise present in the projection measurements. Sinogram denoising is, therefore, an essential step towards reconstructing high-quality images, especially in low-dose CT. Effective denoising requires accurate modeling of the photon statistics and of the prior knowledge about the characteristics of the projection measurements. This paper proposes an algorithm for denoising low-dose sinograms in cone-beam CT. The proposed algorithm is based on minimizing a cost function that includes a measurement consistency term and two regularizations in terms of the gradient and the Hessian of the sinogram. This choice of the regularization is motivated by the nature of CT projections. We use a split Bregman algorithm to minimize the proposed cost function. We apply the algorithm on simulated and real cone-beam projections and compare the results with another algorithm based on bilateral filtering. Our experiments with simulated and real data demonstrate the effectiveness of the proposed algorithm. Denoising of the projections with the proposed algorithm leads to a significant reduction of the noise in the reconstructed images without oversmoothing the edges or introducing artifacts.
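
    The flavour of the cost function can be seen in the toy sketch below, which penalizes data mismatch plus the squared norms of first and second finite differences (gradient- and Hessian-like terms) of a single 1-D sinogram row. For brevity it is minimized by plain gradient descent rather than the split Bregman scheme used in the paper, and the weights, step size and synthetic data are arbitrary assumptions.

      import numpy as np

      def denoise_row(y, lam1=1.0, lam2=1.0, step=0.05, n_iter=500):
          """Minimize 0.5||x-y||^2 + lam1/2||Dx||^2 + lam2/2||D2x||^2 by gradient descent."""
          x = y.copy()
          for _ in range(n_iter):
              d1 = np.diff(x)                    # first differences (gradient term)
              d2 = np.diff(x, n=2)               # second differences (Hessian term)
              grad = x - y
              grad[:-1] -= lam1 * d1; grad[1:] += lam1 * d1
              grad[:-2] += lam2 * d2; grad[2:] += lam2 * d2; grad[1:-1] -= 2 * lam2 * d2
              x -= step * grad
          return x

      noisy = np.sin(np.linspace(0, np.pi, 128)) + 0.1 * np.random.default_rng(0).normal(size=128)
      smooth = denoise_row(noisy)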

  11. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    The scheduling of urban road network recovery after rainstorms, snow, and other bad weather conditions, traffic incidents, and other daily events is essential. However, limited studies have been conducted to investigate this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Critical links are given priority in repair, following the basic concept of the greedy algorithm. In this study, the critical link for the current network is defined as the link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst-case network. We re-evaluate the importance of the damaged links after each repair process is completed. That is, the critical link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can still quickly obtain an optimal schedule even if the scale of the road network is large, because the greedy algorithm reduces the computational complexity. We prove in theory that the greedy algorithm obtains the optimal solution for this problem. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant in dealing with urban road network restoration. PMID:27768732
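
    The sketch below illustrates the greedy selection loop: at each step it repairs the damaged link whose restoration gives the smallest ratio of current system-wide travel time to the worst-case travel time, then re-evaluates the remaining links. The travel-time function is a placeholder the caller must supply (for example, from a traffic-assignment model); it is not part of the paper.

      def greedy_repair_schedule(damaged_links, system_travel_time):
          """Return an ordered repair schedule for the damaged links.

          system_travel_time(repaired) -> system-wide travel time of the network
          when the links in `repaired` have been restored and all other damaged
          links are still broken.  This is a placeholder supplied by the caller.
          """
          worst = system_travel_time(set())        # nothing repaired yet
          repaired, schedule = set(), []
          remaining = set(damaged_links)
          while remaining:
              # Critical link: restoring it minimises the current/worst travel-time ratio.
              best = min(remaining,
                         key=lambda link: system_travel_time(repaired | {link}) / worst)
              repaired.add(best)
              schedule.append(best)
              remaining.remove(best)
          return schedule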

  12. Attitude Estimation Signal Processing: A First Report on Possible Algorithms and Their Utility

    NASA Technical Reports Server (NTRS)

    Riasati, Vahid R.

    1998-01-01

    In this brief effort, time has been of the essence. The data had to be acquired from APL/Lincoln Labs, stored, and sorted out to obtain the pertinent streams. This has been a significant part of this effort, and hardware and software problems have been addressed with the appropriate solutions to accomplish this part of the task. Past this point, some basic and important algorithms are utilized to improve the performance of the attitude estimation systems. These algorithms are an essential part of the signal processing for the attitude estimation problem, as they are utilized to reduce the amount of additive/multiplicative noise, which in general may or may not change its structure and probability density function (pdf) in time. These algorithms are not currently utilized in the processing of the data; at least, we are not aware of their use in this attitude estimation problem. Some of these algorithms, like the variable thresholding, are new conjectures, but one would expect that someone somewhere must have utilized this kind of scheme before. The variable thresholding idea is a straightforward scheme to use in the case of a slowly varying pdf, or slowly varying statistical moments, of the unwanted random process. The algorithms here are kept simple but effective for processing the data and removing the unwanted noise. For the most part, these algorithms can be arranged so that their consecutive and orderly execution complements the preceding algorithm and improves the overall performance of the signal processing chain.

  13. Composite Defect Significance.

    DTIC Science & Technology

    1982-07-13

    [OCR residue from the DTIC report cover page; no abstract text is recoverable. The legible details identify the report as "Composite Defect Significance," Materials Sciences Corp., Spring House, PA, S. N. Chatterjee et al., 13 July 1982.]

  14. Significant Tsunami Events

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.

    2014-12-01

    Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and their resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant are, not surprisingly, often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria for determining the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, whether the event had a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and as a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/

  15. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from our algorithm running a few of the traffic-based scenarios we designed.

  16. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  17. Efficient algorithm for simulation of isoelectric focusing.

    PubMed

    Yoo, Kisoo; Shim, Jaesool; Liu, Jin; Dutta, Prashanta

    2014-03-01

    IEF simulation is an effective tool to investigate transport phenomena and separation performance, as well as to design IEF microchips. However, multidimensional IEF simulations are computationally intensive, because one has to solve a large number of mass conservation equations for ampholytes to simulate a realistic case. In this study, a parallel scheme for a 2D IEF simulation is developed to reduce the computational time. The calculation time for each equation is analyzed to identify which procedure is suitable for parallelization. As expected, the simultaneous solution of the mass conservation equations of the ampholytes is identified as the computational hot spot, and the computational time can be significantly reduced by parallelizing that solution procedure. Moreover, to optimize the computing time, the behavior of the electric potential during the transient state is investigated. It is found that for a straight channel the transient variation of the electric potential along the channel is negligible in a narrow pH range (5∼8) IEF. Thus the charge conservation equation is solved for the first time step only, and the electric potential obtained from it is used for subsequent calculations. IEF simulations are carried out using this algorithm for the separation of cardiac troponin I from serum albumin in a pH range of 5-8 using 192 biprotic ampholytes. A significant reduction in simulation time is achieved using the parallel algorithm. We also study the effect of the number of ampholytes used to form the pH gradient on the focusing and separation behavior of cardiac troponin I and albumin. Our results show that, at the completion of the separation phase, the pH profile is stepwise for a lower number of ampholytes, but becomes smooth as the number of ampholytes increases. Numerical results also show that higher protein concentrations can be obtained using a higher number of ampholytes.

  18. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms have been developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of a tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a Sony a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (a mature loblolly pine plantation) and one heterogeneous (an unmanaged uneven-aged stand of mixed pine-hardwoods). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those obtained with the majority of parameter sets for the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when its parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.

  19. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
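
    To make the parameter-separation step concrete, here is the standard geometric-arithmetic mean majorization of a single posynomial term at the current iterate $x^{(k)}$; this is a textbook identity consistent with the approach described above, not a quotation of the paper's derivation. Assuming all exponents $a_j \ge 0$, write $s = \sum_i a_i$ and $\alpha_j = a_j / s$. Then

      c \prod_j x_j^{a_j}
        = c \prod_j \bigl(x_j^{(k)}\bigr)^{a_j} \prod_j \Bigl(\tfrac{x_j}{x_j^{(k)}}\Bigr)^{a_j}
        \le c \prod_j \bigl(x_j^{(k)}\bigr)^{a_j} \sum_j \alpha_j \Bigl(\tfrac{x_j}{x_j^{(k)}}\Bigr)^{s},

    with equality at $x = x^{(k)}$. The right-hand side is a sum of functions of the individual $x_j$, so minimizing the surrogate splits into one-dimensional problems, as stated in the abstract.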

  20. GPU acceleration of simplex volume algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Zhang, Junping; Lin, Zhouhan; Chen, Hao; Huang, Bormin

    2012-10-01

    The simplex volume algorithm (SVA)1 is an endmember extraction algorithm based on the geometrical properties of a simplex in the feature space of hyperspectral image. By utilizing the relation between a simplex volume and its corresponding parallelohedron volume in the high-dimensional space, the algorithm extracts endmembers from the initial hyperspectral image directly without the need of dimension reduction. It thus avoids the drawback of the N-FINDER algorithm, which requires the dimension of the data to be reduced to one less than the number of the endmembers. In this paper, we take advantage of the large-scale parallelism of CUDA (Compute Unified Device Architecture) to accelerate the computation of SVA on the NVidia GeForce 560 GPU. The time for computing a simplex volume increases with the number of endmembers. Experimental results show that the proposed GPU-based SVA achieves a significant 112.56x speedup for extracting 16 endmembers, as compared to its CPU-based single-threaded counterpart.
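
    For reference, the volume of a simplex with vertices v_0, ..., v_n in n-dimensional space is |det([v_1 - v_0, ..., v_n - v_0])| / n!, which is the geometric quantity maximized when selecting endmembers. The short NumPy sketch below computes it on the CPU and is only a baseline illustration, not the CUDA implementation discussed in the paper.

      import numpy as np
      from math import factorial

      def simplex_volume(vertices):
          """Volume of the simplex spanned by (n+1) vertices in n-dimensional space."""
          v = np.asarray(vertices, dtype=float)
          edges = v[1:] - v[0]                   # n edge vectors from the first vertex
          n = edges.shape[0]
          return abs(np.linalg.det(edges)) / factorial(n)

      # Unit right simplex in 3-D: volume 1/6.
      print(simplex_volume([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))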

  1. Exact significance test for Markov order

    NASA Astrophysics Data System (ADS)

    Pethel, S. D.; Hahs, D. W.

    2014-02-01

    We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.

  2. Algorithms for Automatic Alignment of Arrays

    NASA Technical Reports Server (NTRS)

    Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J.

    1996-01-01

    Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself or can be used to improve performance. We describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication in replication.

  3. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. The experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy.

  4. Optimization of an algorithm for measurements of velocity vector components using a three-wire sensor.

    PubMed

    Ligeza, P; Socha, K

    2007-10-01

    Hot-wire measurements of velocity vector components use a sensor with three orthogonal wires, taking advantage of an anisotropic effect of wire sensitivity. The sensor is connected to a three-channel anemometric circuit and a data acquisition and processing system. Velocity vector components are obtained from measurement signals, using a modified algorithm for measuring velocity vector components enabling the minimization of measurement errors described in this paper. The standard deviation of the relative error was significantly reduced in comparison with the classical algorithm.

  5. Performance analysis of approximate Affine Projection Algorithm in acoustic feedback cancellation.

    PubMed

    Nikjoo S, Mohammad; Seyedi, Amir; Tehrani, Arash Saber

    2008-01-01

    Acoustic feedback is an annoying problem in several audio applications and especially in hearing aids. Adaptive feedback cancellation techniques have attracted recent attention and show great promise in reducing the deleterious effects of feedback. In this paper, we investigated the performance of a class of adaptive feedback cancellation algorithms, viz. the approximated Affine Projection Algorithm (APA). Mixed results were obtained with the natural speech and music data collected from five different commercial hearing aids in a variety of sub-oscillatory and oscillatory feedback conditions. The performance of the approximated APA was significantly better with music stimuli than with natural speech stimuli.
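
    For readers unfamiliar with the affine projection family, a minimal sketch of the standard APA weight update is shown below (the textbook form, not the specific approximation evaluated in the paper); the step size mu, the projection order and the regularization delta are illustrative parameters.

      import numpy as np

      def apa_update(w, X, d, mu=0.5, delta=1e-4):
          """One affine projection update.

          w : current filter weights, shape (L,)
          X : matrix of the last P input vectors, shape (L, P)
          d : corresponding desired samples, shape (P,)
          """
          e = d - X.T @ w                                   # a-priori errors
          gain = X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
          return w + mu * gain, e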

  6. Geometric Transforms for Fast Geometric Algorithms.

    DTIC Science & Technology

    1979-12-01

    ...approximation algorithm extends the ideas of the first by defining a transform based on a "pie-slice" diagram and use of the floor function. The second ε-approximate algorithm reduces the time to O(N + 1/ε) by using a transform based on a "pie-slice" diagram. Bentley, Weide, and Yao [18] have used a simple "pie-slice" diagram for their Voronoi diagram algorithm, and Weide [09] has used the floor function.

  7. Recursive algorithms for vector extrapolation methods

    NASA Technical Reports Server (NTRS)

    Ford, William F.; Sidi, Avram

    1988-01-01

    Three classes of recursion relations are devised for implementing some extrapolation methods for vector sequences. One class of recursion relations can be used to implement methods like the modified minimal polynomial extrapolation and the topological epsilon algorithm; another allows implementation of methods like minimal polynomial and reduced rank extrapolation; while the remaining class can be employed in the implementation of the vector E-algorithm. Operation counts and storage requirements for these methods are also discussed, and some related techniques for special applications are also presented. Included are methods for the rapid evaluation of the vector E-algorithm.

  8. Fast imaging system and algorithm for monitoring microlymphatics

    NASA Astrophysics Data System (ADS)

    Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.

    2010-02-01

    The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that monitors lymph velocity, which can lead to flow, the main determinant of transport, in a near real-time manner can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter, and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was significantly reduced to less than 3 minutes, compared with our previous system, in which readings were available 30 minutes after the vessels were imaged. The processing is based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition times significantly. In addition, the use of a new data filtering technique allowed us to acquire results from recordings that were irresolvable by the previous algorithm due to their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric micro-lymphatics. We recorded velocities of 0.25 mm/s on average in vessels with diameters ranging from 54 um to 140 um and phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase statistical power when dealing with chronic changes. Furthermore, we plan to expand its functionality to measure the propagation of the contractile activity.
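
    The sketch below shows a frequency-domain correlation of two image lines or frames to estimate displacement, which divided by the frame interval gives a velocity estimate. It is a generic cross-correlation routine offered as an illustration of the approach, not the authors' processing pipeline; the pixel size and frame rate are assumed values.

      import numpy as np

      def displacement_fft(frame_a, frame_b):
          """Estimate the integer-pixel shift between two 1-D signals (or image rows)
          by cross-correlation computed in the frequency domain."""
          A = np.fft.fft(frame_a)
          B = np.fft.fft(frame_b)
          corr = np.fft.ifft(A * np.conj(B)).real
          shift = int(np.argmax(corr))
          if shift > len(frame_a) // 2:          # wrap negative shifts
              shift -= len(frame_a)
          return shift

      # Velocity = displacement (pixels) * pixel size / frame interval.
      fps, pixel_size_mm = 500.0, 0.001
      rng = np.random.default_rng(1)
      a = rng.normal(size=256)
      b = np.roll(a, 3)                          # simulate a 3-pixel shift between frames
      v = displacement_fft(b, a) * pixel_size_mm * fps
      print(v, "mm/s")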

  9. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and for the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  10. Adaptive motion artifact reducing algorithm for wrist photoplethysmography application

    NASA Astrophysics Data System (ADS)

    Zhao, Jingwei; Wang, Guijin; Shi, Chenbo

    2016-04-01

    Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It may reveal potential risks to heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact, since the tissues are thicker and the capillaries fewer at the wrist. Therefore, motion artifact is the major factor that impedes heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from motion artifact by exploiting their mixing ratios in different optical paths. There are four major steps in our method: preprocessing, motion artifact estimation, adaptive filtering, and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and all subjects ran for 3-10 minutes while swinging their arms naturally. The final result is compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well in intense physical exercise and shows great robustness to individuals with different running styles and postures.
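
    The adaptive-filtering step can be pictured with a standard LMS noise canceller that uses the accelerometer signal as the motion reference. This is a generic illustration of the idea (the filter length, step size and synthetic signals are assumptions), not the authors' specific algorithm.

      import numpy as np

      def lms_cancel(ppg, accel, n_taps=16, mu=0.01):
          """Subtract the motion-correlated component of `ppg` using `accel` as reference."""
          w = np.zeros(n_taps)
          cleaned = np.zeros_like(ppg)
          for n in range(n_taps, len(ppg)):
              x = accel[n - n_taps:n][::-1]      # most recent reference samples
              est_artifact = w @ x
              e = ppg[n] - est_artifact          # error = artifact-free estimate
              w += mu * e * x                    # LMS weight update
              cleaned[n] = e
          return cleaned

      # Synthetic demo: 2 Hz pulse corrupted by a 3 Hz arm-swing artifact.
      fs = 100
      t = np.arange(0, 10, 1 / fs)
      pulse = np.sin(2 * np.pi * 2 * t)
      accel = np.sin(2 * np.pi * 3 * t)
      ppg = pulse + 0.8 * accel
      print(np.corrcoef(lms_cancel(ppg, accel)[200:], pulse[200:])[0, 1])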

  11. Optimal algorithm for fluorescence suppression of modulated Raman spectroscopy.

    PubMed

    Mazilu, Michael; De Luca, Anna Chiara; Riches, Andrew; Herrington, C Simon; Dholakia, Kishan

    2010-05-24

    Raman spectroscopy permits probing of the molecular and chemical properties of the analyzed sample. However, its applicability has often been seriously limited by the presence of a strong fluorescence background. In our recent paper [Anal. Chem. 82, 738 (2010)], we reported a new modulation method for separating Raman scattering from fluorescence. By continuously changing the excitation wavelength, we demonstrated that it is possible to continuously shift the Raman peaks while the fluorescence background remains essentially constant. In this way, our method allows separation of the modulated Raman peaks from the static fluorescence background, with important advantages compared to previous work using only two [Appl. Spectrosc. 46, 707 (1992)] or a few shifted excitation wavelengths [Opt. Express 16, 10975 (2008)]. The purpose of the present work is to demonstrate a significant improvement in the efficacy of the modulated method by using different processing algorithms. The merits of each algorithm (Standard Deviation analysis, Fourier Filtering, Least-Squares fitting, and Principal Component Analysis) are discussed, and the dependence of the modulated Raman signal on several parameters, such as the amplitude and the modulation rate of the Raman excitation wavelength, is analyzed. The results of both simulation and experimental data demonstrate that Principal Component Analysis is the best processing algorithm. It improves the signal-to-noise ratio in the treated Raman spectra, reducing the required acquisition times. Additionally, this approach does not require any synchronization procedure, reduces user intervention, and is suitable for real-time applications.

  12. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
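
    For context, the two classical update rules being combined and modified can be summarized in a few lines. The sketch below shows one iteration of the standard Fourier-magnitude projection followed by either the error-reduction (GS-type) or the HIO real-space constraint, assuming a real-valued object and a known support mask; it is a generic textbook illustration, not the SPP GS or GS/HIO variants proposed in the paper.

      import numpy as np

      def gs_hio_step(g, fourier_mag, support, beta=0.9, use_hio=True):
          """One iteration of Fourier-magnitude projection followed by either the
          GS (error-reduction) or the HIO real-space update.
          g: current real-valued estimate; support: boolean mask of the object extent."""
          G = np.fft.fft2(g)
          G = fourier_mag * np.exp(1j * np.angle(G))   # impose measured Fourier magnitudes
          g_prime = np.real(np.fft.ifft2(G))           # back to object space
          if use_hio:
              # HIO: keep g' inside the support, apply negative feedback outside it
              g_next = np.where(support, g_prime, g - beta * g_prime)
          else:
              # GS / error reduction: zero everything outside the support
              g_next = np.where(support, g_prime, 0.0)
          return g_next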

  13. Fungi producing significant mycotoxins.

    PubMed

    2012-01-01

    Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors

  14. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when the battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any signal in which peaks are important for diagnostic purposes.
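
    The core idea of smoothing everywhere except around the QRS complexes can be sketched in a few lines. The snippet below is a simplified illustration only, not the published PRASMMA implementation: it assumes a boolean qrs_mask marking QRS samples has already been produced by some peak detector, and it uses a plain (unmodified) moving average.

      import numpy as np

      def peak_sparing_moving_average(ecg, qrs_mask, window=7):
          """Smooth an ECG with a moving average everywhere except inside QRS
          regions, so the sharp R peaks are not flattened.  qrs_mask is a boolean
          array marking samples that belong to a QRS complex (detection not shown)."""
          kernel = np.ones(window) / window
          smoothed = np.convolve(ecg, kernel, mode="same")
          return np.where(qrs_mask, ecg, smoothed)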

  15. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and it can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. When the angular velocity is held constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  16. Algorithms, modelling and VO₂ kinetics.

    PubMed

    Capelli, Carlo; Cautero, Michela; Pogliaghi, Silvia

    2011-03-01

    This article summarises the pros and cons of different algorithms developed for estimating breath-by-breath (B-by-B) alveolar O2 transfer (VO2A) in humans. VO2A is the difference between the O2 uptake at the mouth and the change in alveolar O2 stores (ΔVO2s), which, for any given breath i, is equal to the alveolar volume change at constant alveolar O2 fraction (FAiO2 · ΔVAi) plus the alveolar O2 fraction change at constant volume (VA,i-1 · (FAiO2 − FA,i-1O2)), where VA,i-1 is the alveolar volume at the beginning of the breath. Therefore, VO2A can be determined B-by-B provided that VA,i-1 is: (a) set equal to the subject's functional residual capacity (algorithm of Auchincloss, A) or to zero; (b) measured (optoelectronic plethysmography, OEP); or (c) selected according to a procedure that minimises B-by-B variability (algorithm of Busso and Robbins, BR). Alternatively, the respiratory cycle can be redefined as the time between equal FO2 values in two subsequent breaths (algorithm of Grønlund, G), making any assumption about VA,i-1 unnecessary. All the above methods allow an unbiased estimate of VO2 at steady state, albeit with different precision. Yet the algorithms per se affect the parameters describing the B-by-B kinetics during exercise transitions. Among these approaches, BR and G, by increasing the signal-to-noise ratio of the measurements, reduce the number of exercise repetitions necessary to study VO2 kinetics compared with the A approach. OEP and G (though technically challenging and conceptually still debated), thanks to their ability to track ΔVO2s changes during the early phase of exercise transitions, appear rather promising for investigating B-by-B gas exchange.
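
    One way to write the breath-by-breath relation described above, with symbols reconstructed from the abstract (breath index i, alveolar volume V_A, alveolar O2 fraction F_A O2), is:

      \dot{V}\mathrm{O}_{2A,i} \;=\; \dot{V}\mathrm{O}_{2m,i} \;-\; \Delta V\mathrm{O}_{2s,i},
      \qquad
      \Delta V\mathrm{O}_{2s,i} \;=\; F_{A,i}\mathrm{O}_2\,\Delta V_{A,i}
      \;+\; V_{A,i-1}\left( F_{A,i}\mathrm{O}_2 - F_{A,i-1}\mathrm{O}_2 \right)

    where VO2m,i denotes the O2 uptake measured at the mouth for breath i; the algorithms compared in the article (A, OEP, BR, G) differ only in how V_{A,i-1} is fixed, measured, or eliminated.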

  17. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

  18. Large-scale validation of a computer-aided polyp detection algorithm for CT colonography using cluster computing

    NASA Astrophysics Data System (ADS)

    Bitter, Ingmar; Brown, John E.; Brickman, Daniel; Summers, Ronald M.

    2004-04-01

    The presented method significantly reduces the time necessary to validate a computed tomographic colonography (CTC) computer-aided detection (CAD) algorithm for colonic polyps applied to a large patient database. As the algorithm is being developed on Windows PCs and our target, a Beowulf cluster, runs on Linux PCs, we made the application dual-platform compatible using a single source code tree. To maintain, share, and deploy source code, we used CVS (concurrent versions system) software. We built the libraries from their sources for each operating system. Next, we made the CTC CAD algorithm dual-platform compatible and validated that both Windows and Linux produced the same results. Eliminating system dependencies was mostly achieved using the Qt programming library, which encapsulates most of the system-dependent functionality in order to present the same interface on either platform. Finally, we wrote scripts to execute the CTC CAD algorithm in parallel. Running hundreds of simultaneous copies of the CTC CAD algorithm on a Beowulf cluster computing network enables execution in less than four hours on our entire collection of over 2400 CT scans, as compared to a month on a single PC. As a consequence, our complete patient database can be processed daily, boosting research productivity. Large-scale validation of a computer-aided polyp detection algorithm for CT colonography using cluster computing significantly improves the round-trip time of algorithm improvement and revalidation.

  19. A correction factor for ablation algorithms assuming deviations of Lambert-Beer's law with a Gaussian-profile beam

    NASA Astrophysics Data System (ADS)

    Rodríguez-Marín, Francisco; Anera, Rosario G.; Alarcón, Aixa; Hita, E.; Jiménez, J. R.

    2012-04-01

    In this work, we propose an adjustment factor to be considered in ablation algorithms used in refractive surgery. This adjustment factor takes into account potential deviations of Lambert-Beer's law and the characteristics of a Gaussian-profile beam. To check whether the adjustment factor deduced is significant for visual function, we applied it to the paraxial Munnerlyn formula and found that it significantly influences the post-surgical corneal radius and p-factor. The use of the adjustment factor can help reduce the discrepancies in corneal shape between the real data and corneal shape expected when applying laser ablation algorithms.

  20. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the dust model simulation experiment with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  1. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the dust model simulation experiment with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  2. MapReduce SVM Game

    SciTech Connect

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and requires problem decompositions with less stringent data-passing constraints. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.

  3. MapReduce SVM Game

    DOE PAGES

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; ...

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and requires problem decompositions with less stringent data-passing constraints. Rather, MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited for the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.

  4. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and therefore the geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm were also defined.

  5. Distributed Minimum Hop Algorithms

    DTIC Science & Technology

    1982-01-01

    acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in Pidgin Algol ... the precise behavior of the algorithm under these circumstances is described by the Pidgin Algol program in the appendix, which is executed by each node ... completing the proof. Algorithm D1 in Pidgin Algol ...

  6. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  7. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  8. Algorithms for radio networks with dynamic topology

    NASA Astrophysics Data System (ADS)

    Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose

    1991-08-01

    The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.

  9. Performance Analysis of Apriori Algorithm with Different Data Structures on Hadoop Cluster

    NASA Astrophysics Data System (ADS)

    Singh, Sudhakar; Garg, Rakhi; Mishra, P. K.

    2015-10-01

    Mining frequent itemsets from massive datasets has always been one of the most important problems in data mining. Apriori is the most popular and simplest algorithm for frequent itemset mining. To enhance the efficiency and scalability of Apriori, a number of algorithms have been proposed addressing the design of efficient data structures, minimizing database scans, and parallel and distributed processing. MapReduce is the emerging parallel and distributed technology used to process big datasets on a Hadoop cluster. To mine big datasets it is essential to re-design the data mining algorithms for this new paradigm. In this paper, we implement three variations of the Apriori algorithm using the data structures hash tree, trie, and hash table trie, i.e. a trie with a hash technique, on the MapReduce paradigm. We emphasize and investigate the significance of these three data structures for the Apriori algorithm on a Hadoop cluster, which has not yet been given attention. Experiments are carried out on both real-life and synthetic datasets, and they show that the hash table trie data structure performs far better than the trie and hash tree in terms of execution time. Moreover, the performance of the hash tree is the worst.
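
    As a point of reference for the data structures being compared, the candidate-generation and support-counting loop of plain Apriori can be written compactly with a hash table (a Python dict) as the counting structure. This is a minimal single-machine sketch; the trie and hash-table-trie variants, the candidate-pruning step, and the MapReduce distribution evaluated in the paper are not reproduced here.

      from collections import defaultdict

      def apriori(transactions, min_support):
          """Plain Apriori: generate candidate k-itemsets from frequent (k-1)-itemsets
          and count their support with a hash table (dict).  Pruning of candidates
          whose (k-1)-subsets are infrequent is omitted for brevity."""
          counts = defaultdict(int)
          for t in transactions:                        # frequent 1-itemsets
              for item in t:
                  counts[frozenset([item])] += 1
          frequent = {s for s, c in counts.items() if c >= min_support}
          all_frequent, k = set(frequent), 2
          while frequent:
              # join step: unions of frequent (k-1)-itemsets that have size k
              candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
              counts = defaultdict(int)
              for t in transactions:
                  t = frozenset(t)
                  for c in candidates:
                      if c <= t:                        # candidate contained in transaction
                          counts[c] += 1
              frequent = {s for s, c in counts.items() if c >= min_support}
              all_frequent |= frequent
              k += 1
          return all_frequent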

  10. Combining ptychographical algorithms with the Hybrid Input-Output (HIO) algorithm.

    PubMed

    Konijnenberg, A P; Coene, W M J; Pereira, S F; Urbach, H P

    2016-12-01

    In this article we combine the well-known Ptychographical Iterative Engine (PIE) with the Hybrid Input-Output (HIO) algorithm. The important insight is that the HIO feedback function should be kept strictly separate from the reconstructed object, which is done by introducing a separate feedback function per probe position. We have also combined HIO with floating PIE (fPIE) and extended PIE (ePIE). Simulations indicate that the combined algorithm performs significantly better in many situations. Although we have limited our research to a combination with HIO, the same insight can be used to combine ptychographical algorithms with any phase retrieval algorithm that uses a feedback function.
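
    The combined PIE/HIO scheme itself is not reproduced here; as background, the sketch below shows the standard ePIE-style object update to which a per-position feedback function would be attached. It assumes a fixed, known probe, a complex-valued object array, and measured diffraction magnitudes for each scan position; the step size alpha is illustrative.

      import numpy as np

      def epie_object_update(obj, probe, positions, diffraction_amps, alpha=1.0):
          """One ePIE-style sweep over probe positions, updating only the object.
          obj must be a complex array larger than the probe; positions are (row, col)
          corner offsets of the probe window; diffraction_amps[j] holds the measured
          Fourier magnitudes for position j."""
          h, w = probe.shape
          for j, (r, c) in enumerate(positions):
              region = obj[r:r + h, c:c + w]
              psi = region * probe                                      # exit wave
              Psi = np.fft.fft2(psi)
              Psi = diffraction_amps[j] * np.exp(1j * np.angle(Psi))    # magnitude constraint
              psi_new = np.fft.ifft2(Psi)
              # gradient-like object update weighted by the probe
              region += alpha * np.conj(probe) / np.max(np.abs(probe) ** 2) * (psi_new - psi)
          return obj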

  11. On the convergence of the phase gradient autofocus algorithm for synthetic aperture radar imaging

    SciTech Connect

    Hicks, M.J.

    1996-01-01

    Synthetic Aperture Radar (SAR) imaging is a class of coherent range and Doppler signal processing techniques applied to remote sensing. The aperture is synthesized by recording and processing coherent signals at known positions along the flight path. Demands for greater image resolution put an extreme burden on requirements for inertial measurement units that are used to maintain accurate pulse-to-pulse position information. The recently developed Phase Gradient Autofocus algorithm relieves this burden by taking a data-driven digital signal processing approach to estimating the range-invariant phase aberrations due to either uncompensated motions of the SAR platform or to atmospheric turbulence. Although the performance of this four-step algorithm has been demonstrated, its convergence has not been modeled mathematically. A new sensitivity study of algorithm performance is a necessary step towards this model. Insights that are significant to the application of this algorithm to both SAR and to other coherent imaging applications are developed. New details on algorithm implementation identify an easily avoided biased phase estimate. A new algorithm for defining support of the point spread function is proposed, which promises to reduce the number of iterations required even for rural scenes with low signal-to-clutter ratios.

  12. Estimation of IMU and MARG orientation using a gradient descent algorithm.

    PubMed

    Madgwick, Sebastian O H; Harrison, Andrew J L; Vaidyanathan, Andrew

    2011-01-01

    This paper presents a novel orientation algorithm designed to support a computationally efficient, wearable inertial human motion tracking system for rehabilitation applications. It is applicable to inertial measurement units (IMUs) consisting of tri-axis gyroscopes and accelerometers, and to magnetic angular rate and gravity (MARG) sensor arrays that also include tri-axis magnetometers. The MARG implementation incorporates magnetic distortion compensation. The algorithm uses a quaternion representation, allowing accelerometer and magnetometer data to be used in an analytically derived and optimised gradient descent algorithm to compute the direction of the gyroscope measurement error as a quaternion derivative. Performance has been evaluated empirically using a commercially available orientation sensor and reference measurements of orientation obtained using an optical measurement system. Performance was also benchmarked against the proprietary Kalman-based algorithm of the orientation sensor. Results indicate the algorithm achieves levels of accuracy matching that of the Kalman-based algorithm: < 0.8° static RMS error, < 1.7° dynamic RMS error. The low computational load and the ability to operate at low sampling rates significantly reduce the hardware and power necessary for wearable inertial movement tracking, enabling the creation of lightweight, inexpensive systems capable of functioning for extended periods of time.
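
    The IMU (gyroscope plus accelerometer) form of the gradient descent update is widely reproduced and can be sketched as follows. The parameter values (beta, dt) are illustrative, the quaternion convention is (w, x, y, z), and the magnetometer and magnetic distortion compensation steps of the MARG variant are omitted.

      import numpy as np

      def madgwick_imu_update(q, gyro, accel, beta=0.1, dt=0.01):
          """One gradient-descent orientation update from gyroscope (rad/s) and
          accelerometer data.  q = (w, x, y, z) is the current orientation estimate."""
          q0, q1, q2, q3 = q
          a = np.asarray(accel, dtype=float)
          ax, ay, az = a / np.linalg.norm(a)
          # objective: mismatch between predicted and measured gravity direction
          f = np.array([2 * (q1 * q3 - q0 * q2) - ax,
                        2 * (q0 * q1 + q2 * q3) - ay,
                        2 * (0.5 - q1 * q1 - q2 * q2) - az])
          J = np.array([[-2 * q2,  2 * q3, -2 * q0, 2 * q1],
                        [ 2 * q1,  2 * q0,  2 * q3, 2 * q2],
                        [      0, -4 * q1, -4 * q2,      0]])
          step = J.T @ f
          step /= np.linalg.norm(step)                 # normalized gradient direction
          gx, gy, gz = gyro
          # rate of change of quaternion from gyroscope, corrected by the gradient step
          q_dot = 0.5 * np.array([-q1 * gx - q2 * gy - q3 * gz,
                                   q0 * gx + q2 * gz - q3 * gy,
                                   q0 * gy - q1 * gz + q3 * gx,
                                   q0 * gz + q1 * gy - q2 * gx]) - beta * step
          q = np.asarray(q, dtype=float) + q_dot * dt
          return q / np.linalg.norm(q)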

  13. New image compression algorithm based on improved reversible biorthogonal integer wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Yu, Xianchuan

    2012-10-01

    Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) supports low computational complexity through the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the performance and peak signal-to-noise ratio (PSNR) of image coding. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In this new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. This scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave band partitioning of the original SPECK, which reduces the coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to that of the SPIHT algorithm using RB-IWT filters, and better than that of the SPECK algorithm. Additionally, the new algorithm efficiently supports both lossy and lossless compression from a single bitstream. The presented algorithm is valuable for future remote sensing image compression.

  14. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation and generally yield superior results in terms of accuracy. However, most fuzzy algorithms suffer from a slow convergence rate, which makes the systems practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. The modified FCM algorithm employs the concept of quantization to improve the convergence rate while yielding excellent segmentation efficiency. The algorithm is tested on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
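
    For reference, the conventional FCM iteration that the modified algorithm accelerates alternates a membership update and a centroid update. The sketch below is the standard formulation only; the quantization step and the feature extraction/selection pipeline described in the paper are not shown, and the fuzzifier m, iteration count, and random initialization are illustrative.

      import numpy as np

      def fcm(X, n_clusters, m=2.0, n_iter=100, eps=1e-9):
          """Conventional fuzzy C-means.  X has shape (n_samples, n_features);
          returns the membership matrix (n_samples, n_clusters) and the centroids."""
          n = X.shape[0]
          u = np.random.dirichlet(np.ones(n_clusters), size=n)   # random memberships, rows sum to 1
          for _ in range(n_iter):
              um = u ** m
              centers = (um.T @ X) / (um.sum(axis=0)[:, None] + eps)          # weighted centroids
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
              u = 1.0 / (d ** (2 / (m - 1)))                                  # inverse-distance weights
              u /= u.sum(axis=1, keepdims=True)                               # renormalize memberships
          return u, centers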

  15. Aquarius geophysical model function and combined active passive algorithm for ocean surface salinity and wind retrieval

    NASA Astrophysics Data System (ADS)

    Yueh, Simon; Tang, Wenqing; Fore, Alexander; Hayashi, Akiko; Song, Yuhe T.; Lagerloef, Gary

    2014-08-01

    This paper describes the updated Combined Active-Passive (CAP) retrieval algorithm for simultaneous retrieval of surface salinity and wind from Aquarius' brightness temperature and radar backscatter. Unlike the algorithm developed by Remote Sensing Systems (RSS), implemented in the Aquarius Data Processing System (ADPS) to produce Aquarius standard products, the Jet Propulsion Laboratory's CAP algorithm does not require monthly climatology SSS maps for the salinity retrieval. Furthermore, the ADPS-RSS algorithm fully uses the National Center for Environmental Predictions (NCEP) wind for data correction, while the CAP algorithm uses the NCEP wind only as a constraint. The major updates to the CAP algorithm include the galactic reflection correction, Faraday rotation, Antenna Pattern Correction, and geophysical model functions of wind or wave impacts. Recognizing the limitation of geometric optics scattering, we improve the modeling of the reflection of galactic radiation; the results are better salinity accuracy and significantly reduced ascending-descending bias. We assess the accuracy of CAP's salinity by comparison with ARGO monthly gridded salinity products provided by the Asia-Pacific Data-Research Center (APDRC) and Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The RMS differences between Aquarius CAP and APDRC's or JAMSTEC's ARGO salinities are less than 0.2 psu for most parts of the ocean, except for the regions in the Intertropical Convergence Zone, near the outflow of major rivers and at high latitudes.

  16. GAMPMS: Genetic algorithm managed peptide mutant screening.

    PubMed

    Long, Thomas; McDougal, Owen M; Andersen, Tim

    2015-06-30

    The prominence of endogenous peptide ligands targeted to receptors makes peptides with the desired binding activity good molecular scaffolds for drug development. Minor modifications to a peptide's primary sequence can significantly alter its binding properties with a receptor, and screening collections of peptide mutants is a useful technique for probing the receptor-ligand binding domain. Unfortunately, the combinatorial growth of such collections can limit the number of mutations which can be explored using structure-based molecular docking techniques. Genetic algorithm managed peptide mutant screening (GAMPMS) uses a genetic algorithm to conduct a heuristic search of the peptide's mutation space for peptides with optimal binding activity, significantly reducing the computational requirements of the virtual screening. The GAMPMS procedure was implemented and used to explore the binding domain of the nicotinic acetylcholine receptor (nAChR) α3β2-isoform with a library of 64,000 α-conotoxin (α-CTx) MII peptide mutants. To assess GAMPMS's performance, it was compared with a virtual screening procedure that used AutoDock to predict the binding affinity of each of the α-CTx MII peptide mutants with the α3β2-nAChR. The GAMPMS implementation performed AutoDock simulations for as few as 1140 of the 64,000 α-CTx MII peptide mutants and could consistently identify a set of 10 peptides with an aggregated binding energy that was at least 98% of the aggregated binding energy of the 10 top peptides from the exhaustive AutoDock screening.
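
    A heuristic search of this kind can be sketched as a generic genetic algorithm over peptide strings. The code below is an illustrative skeleton only, not the GAMPMS implementation: score_fn is a hypothetical placeholder for an external binding-energy evaluation (e.g. a wrapper around a docking run), the search is over unconstrained sequences rather than α-CTx MII mutants, and the population size, mutation rate, and truncation selection are arbitrary choices.

      import random

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def ga_peptide_screen(score_fn, length=16, pop_size=40, generations=50,
                            mutation_rate=0.05):
          """Generic GA search over peptide sequences; score_fn(sequence) is assumed
          to return a lower-is-better binding score."""
          def random_peptide():
              return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

          def mutate(p):
              return "".join(random.choice(AMINO_ACIDS) if random.random() < mutation_rate
                             else aa for aa in p)

          def crossover(a, b):
              cut = random.randrange(1, length)          # single-point crossover
              return a[:cut] + b[cut:]

          population = [random_peptide() for _ in range(pop_size)]
          for _ in range(generations):
              ranked = sorted(population, key=score_fn)  # best (lowest score) first
              parents = ranked[:pop_size // 2]           # truncation selection
              children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                          for _ in range(pop_size - len(parents))]
              population = parents + children
          return sorted(population, key=score_fn)[:10]   # top candidate peptides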

  17. JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve

    2000-01-01

    A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard-to-find bugs. Java's garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code, and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.

  18. A novel hardware-friendly algorithm for hyperspectral linear unmixing

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Santos, Lucana; López, Sebastián.; Sarmiento, Roberto

    2015-10-01

    significantly reduced.

  19. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
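
    The shift-and-mask family of subalgorithms can be illustrated by a brute-force search for a collision-free mapping. The snippet below is a hypothetical illustration of that idea only, not the synthesized code described above: the key width, mask width, and search ranges are arbitrary, and the rotation/offset variants mentioned in the text are reduced to a single sliding mask.

      def find_shift_mask(keys, max_shift=32, mask_bits=8):
          """Search for a (shift, mask) pair that maps every integer key in `keys`
          to a distinct small code.  Returns None if no pair works for these ranges."""
          width = 1 << mask_bits
          for shift in range(max_shift):
              for offset in range(32 - mask_bits + 1):
                  mask = (width - 1) << offset
                  mapped = {((k >> shift) & mask) >> offset for k in keys}
                  if len(mapped) == len(keys):          # every key received a unique code
                      return shift, mask
          return None

    For example, find_shift_mask({0x12, 0x34, 0x56, 0x78}) returns (0, 255), since the low byte alone already separates those keys; once such a pair is found, membership can be decided by a constant-time shift, mask, and table lookup with no secondary hashing.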

  20. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  1. Social significance of community structure: statistical view.

    PubMed

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  2. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  3. Ultrametric Hierarchical Clustering Algorithms.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1979-01-01

    Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)

  4. The Training Effectiveness Algorithm.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)

  5. Oscillation Detection Algorithm Development Summary Report and Test Plan

    SciTech Connect

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.; Jin, Shuangshuang

    2009-10-03

    Small signal stability problems are one of the major threats to grid stability and reliability in California and the western U.S. power grid. An unstable oscillatory mode can cause large-amplitude oscillations and may result in system breakup and large-scale blackouts. There have been several incidents of system-wide oscillations. Of them, the most notable is the August 10, 1996 western system breakup, produced as a result of undamped system-wide oscillations. There is a great need for real-time monitoring of small-signal oscillations in the system. In power systems, a small-signal oscillation is the result of poor electromechanical damping. Considerable understanding and literature have been developed on the small-signal stability problem over the past 50+ years. These studies have been mainly based on a linearized system model and eigenvalue analysis of its characteristic matrix. However, its practical feasibility is greatly limited as power system models have been found inadequate in describing real-time operating conditions. Significant efforts have been devoted to monitoring system oscillatory behaviors from real-time measurements in the past 20 years. The deployment of phasor measurement units (PMU) provides high-precision time-synchronized data needed for estimating oscillation modes. Measurement-based modal analysis, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. Low damping indicates potential system stability issues. Oscillation alarms can be issued when the power system is lightly damped. A good oscillation alarm tool can provide time for operators to take remedial action and reduce the probability of a system breakup as a result of a light damping condition. Real-time oscillation monitoring requires ModeMeter algorithms to have the capability to work with various kinds of measurements: disturbance data (ringdown signals), noise probing data, and ambient data. Several measurement

  6. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.

  7. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. Factors significantly increasing or inhibiting early stages of malignant melanoma (M.M.) and non-invasive evaluation of new treatment by ingestion and external application of optimal doses of the most effective anti-M.M. substances: haritaki, cilantro, vitamin D3, nori, EPA with DHA, & application of special (+) solar energy stored paper, which reduced the M.M. active area & asbestos rapidly.

    PubMed

    Omura, Yoshiaki; Jones, Marilyn; Duvvi, Harsha; Paluch, Kamila; Shimotsuura, Yasuhiro; Ohki, Motomu

    2013-01-01

    Sterilizing the pre-cancer skin of malignant melanoma (M.M.) with 70% Isopropyl alcohol intensified malignancy & the malignant response extended to surrounding normal-looking skin, while sterilizing with 80% (vodka) or 12% (plum wine) ethyl alcohol completely inhibited M.M. in the area (both effects lasted for about 90 minutes initially). Burnt food (bread, vegetables, meat, and fish), a variety of smoked & non-smoked fish-skin, many animals' skin, pepper, Vitamin C over 75 mg, mango, pineapple, coconut, almond, sugars, Saccharine & Aspartame, garlic, onion, etc., & electromagnetic fields from cellular phones worsened M.M. & induced an abnormal M.M. response of the surrounding skin. We found the following factors inhibit the early stage of M.M. significantly: 1) Increasing normal cell telomere by taking 500 mg Haritaki, which often reached between 400-1150 ng & gradually diminished, but the M.M. response was completely inhibited until normal cell telomeres are reduced to 150 ng, which takes 6-8 hours. More than 70 mg Vitamin C, Orange Juice, & other high Vitamin C containing substances shouldn't be taken because they completely inhibit the effects of Haritaki. 2) We found Chrysotile asbestos & Tremolite asbestos (% of the Chrysotile amount) coexist. A special Cilantro tablet was used to remove asbestos & some toxic metals. 3) Vitamin D3 400 I.U. has a maximum inhibiting effect on M.M. but 800 I.U. or higher promotes malignancy. 4) Nori containing Iodine, etc., was used. 5) EPA 180 mg with DHA 120 mg was most effectively used after metastasis to the surrounding skin was eliminated. When we combined 1 Cilantro tablet & Vitamin D3 400 I.U. with small Nori pieces & EPA with DHA, the effect of complete inhibition of M.M. lasted 9-11 hours. When these anti-M.M. substances (Haritaki, Vitamin D3, Cilantro, Nori, EPA with DHA) were taken together, the effect lasted 12-14 hours and M.M. involvement in surrounding normal-looking skin disappeared rapidly & original dark brown or black areas

  9. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    PubMed

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense; 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and perform 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
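
    The permutation-testing idea itself is compact and is sketched below as a generic, single-machine illustration: the phenotype vector is repeatedly shuffled and the genome-wide maximum statistic recomputed, giving an empirical p-value. Here stat_fn is a hypothetical placeholder for the QTL scan (e.g. a LOD-score maximization); the parallelization across cloud processes and the PruneDIRECT search are not shown.

      import numpy as np

      def permutation_pvalue(genotypes, phenotype, stat_fn, n_perm=10000, seed=0):
          """Empirical genome-wide p-value by permuting the phenotype vector.
          stat_fn(genotypes, phenotype) must return the maximum test statistic over
          all tested loci; its implementation is assumed and not shown here."""
          rng = np.random.default_rng(seed)
          observed = stat_fn(genotypes, phenotype)
          exceed = 0
          for _ in range(n_perm):
              permuted = rng.permutation(phenotype)     # break the genotype-phenotype link
              if stat_fn(genotypes, permuted) >= observed:
                  exceed += 1
          return (exceed + 1) / (n_perm + 1)            # add-one correction avoids p = 0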

  10. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, this proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as computation kernels.

  11. CHROMagar Orientation Medium Reduces Urine Culture Workload

    PubMed Central

    Manickam, Kanchana; Karlowsky, James A.; Adam, Heather; Lagacé-Wiens, Philippe R. S.; Rendina, Assunta; Pang, Paulette; Murray, Brenda-Lee

    2013-01-01

    Microbiology laboratories continually strive to streamline and improve their urine culture algorithms because of the high volumes of urine specimens they receive and the modest numbers of those specimens that are ultimately considered clinically significant. In the current study, we quantitatively measured the impact of the introduction of CHROMagar Orientation (CO) medium into routine use in two hospital laboratories and compared it to conventional culture on blood and MacConkey agars. Based on data extracted from our Laboratory Information System from 2006 to 2011, the use of CO medium resulted in a 28% reduction in workload for additional procedures such as Gram stains, subcultures, identification panels, agglutination tests, and biochemical tests. The average number of workload units (one workload unit equals 1 min of hands-on labor) per urine specimen was significantly reduced (P < 0.0001; 95% confidence interval [CI], 0.5326 to 1.047) from 2.67 in 2006 (preimplementation of CO medium) to 1.88 in 2011 (postimplementation of CO medium). We conclude that the use of CO medium streamlined the urine culture process and increased bench throughput by reducing both workload and turnaround time in our laboratories. PMID:23363839

  12. Meteorological Data Analysis Using MapReduce

    PubMed Central

    Fang, Wei; Sheng, V. S.; Wen, XueZhi; Pan, Wubin

    2014-01-01

    In atmospheric science, the scale of meteorological data is massive and growing rapidly. K-means is a fast and widely used clustering algorithm that has been applied in many fields. However, for large-scale meteorological data, the traditional K-means algorithm cannot satisfy actual application needs efficiently. This paper proposes an improved K-means algorithm (MK-means) based on MapReduce, tailored to the characteristics of large meteorological datasets. The experimental results show that MK-means has greater computing capability and scalability. PMID:24790576

  13. Meteorological data analysis using MapReduce.

    PubMed

    Fang, Wei; Sheng, V S; Wen, XueZhi; Pan, Wubin

    2014-01-01

    In atmospheric science, the scale of meteorological data is massive and growing rapidly. K-means is a fast and widely used clustering algorithm that has been applied in many fields. However, for large-scale meteorological data, the traditional K-means algorithm cannot satisfy actual application needs efficiently. This paper proposes an improved K-means algorithm (MK-means) based on MapReduce, tailored to the characteristics of large meteorological datasets. The experimental results show that MK-means has greater computing capability and scalability.
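
    For illustration only (this is not the MK-means implementation from the paper), a single K-means iteration can be phrased in map and reduce steps: each point emits its nearest center as a key, and the reducer averages the points gathered under each key. A minimal Python sketch under these assumptions:

        import numpy as np

        def kmeans_mapreduce(points, centers, n_iter=10):
            """Toy MapReduce-style K-means: map assigns points to the nearest
            center, reduce recomputes each center as the mean of its points."""
            k = centers.shape[0]
            for _ in range(n_iter):
                # map: each point emits (index of nearest center, point)
                dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                # reduce: average the points gathered under each center key
                new_centers = centers.copy()
                for j in range(k):
                    members = points[labels == j]
                    if len(members):
                        new_centers[j] = members.mean(axis=0)
                centers = new_centers
            return centers, labels

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
            seeds = data[rng.choice(len(data), 2, replace=False)]
            print(kmeans_mapreduce(data, seeds)[0])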

  14. Dual format algorithm for monostatic SAR

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Rigling, Brian D.

    2010-04-01

    The polar format algorithm for monostatic synthetic aperture radar imaging is based on a linear approximation of the differential range to a scatterer, which leads to spatially-variant distortion and defocus in the resultant image. While approximate corrections may be applied to compensate for these effects, these corrections are ad-hoc in nature. Here, we introduce an alternative imaging algorithm called the Dual Format Algorithm (DFA) that provides better isolation of the defocus effects and reduces distortion. Quadratic phase errors are isolated along a single dimension by allowing image formation to an arbitrary grid instead of a Cartesian grid. This provides an opportunity for more efficient phase error corrections. We provide a description of the arbitrary image grid and we show the quadratic phase error correction derived from a second-order Taylor series approximation of the differential range. The algorithm is demonstrated with a point target simulation.

  15. Cell list algorithms for nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Dobson, Matthew; Fox, Ian; Saracino, Alexandra

    2016-06-01

    We present two modifications of the standard cell list algorithm that handle molecular dynamics simulations with deforming periodic geometry. Such geometry naturally arises in the simulation of homogeneous, linear nonequilibrium flow modeled with periodic boundary conditions, and recent progress has been made developing boundary conditions suitable for general 3D flows of this type. Previous works focused on the planar flows handled by Lees-Edwards or Kraynik-Reinelt boundary conditions, while the new versions of the cell list algorithm presented here are formulated to handle the general 3D deforming simulation geometry. As in the case of equilibrium, for short-ranged pairwise interactions, the cell list algorithm reduces the computational complexity of the force computation from O(N^2) to O(N), where N is the total number of particles in the simulation box. We include a comparison of the complexity and efficiency of the two proposed modifications of the standard algorithm.
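
    The basic cell-list idea is easy to sketch for a fixed cubic periodic box (the paper's actual contribution, handling a deforming simulation cell, is not reproduced here). The following Python sketch bins particles into cells at least one cutoff wide and searches only neighboring cells; names and parameters are illustrative.

        import numpy as np
        from itertools import product
        from collections import defaultdict

        def neighbor_pairs(positions, box, rcut):
            """Return index pairs (i, j), i < j, within rcut under periodic boundary
            conditions, using a cell decomposition (assumes at least 3 cells per side)."""
            ncell = max(1, int(box // rcut))           # cells per side, each >= rcut wide
            cell_size = box / ncell
            cells = defaultdict(list)
            for i, p in enumerate(positions):
                cells[tuple(np.floor(p / cell_size).astype(int) % ncell)].append(i)
            pairs = set()                              # a set avoids double counting
            for idx, members in cells.items():
                for off in product((-1, 0, 1), repeat=3):   # this cell and its 26 neighbors
                    nidx = tuple((np.array(idx) + off) % ncell)
                    for i in members:
                        for j in cells.get(nidx, ()):
                            if i < j:
                                d = positions[i] - positions[j]
                                d -= box * np.round(d / box)    # minimum-image convention
                                if np.dot(d, d) < rcut ** 2:
                                    pairs.add((i, j))
            return pairs

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            pos = rng.uniform(0.0, 10.0, size=(500, 3))
            print(len(neighbor_pairs(pos, box=10.0, rcut=1.5)), "pairs within cutoff")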

  16. Speckle-reduction algorithm for ultrasound images in complex wavelet domain using genetic algorithm-based mixture model.

    PubMed

    Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-05-20

    Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.

  17. Algorithms for optimal dyadic decision trees

    SciTech Connect

    Hush, Don; Porter, Reid

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  18. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.

  19. Comparison of dose calculation algorithms for colorectal cancer brachytherapy treatment with a shielded applicator

    SciTech Connect

    Yan Xiangsheng; Poon, Emily; Reniers, Brigitte; Vuong, Te; Verhaegen, Frank

    2008-11-15

    Colorectal cancer patients are treated at our hospital with ¹⁹²Ir high dose rate (HDR) brachytherapy using an applicator that allows the introduction of a lead or tungsten shielding rod to reduce the dose to healthy tissue. The clinical dose planning calculations are, however, currently performed without taking the shielding into account. To study the dose distributions in shielded cases, three techniques were employed. The first technique was to adapt a shielding algorithm which is part of the Nucletron PLATO HDR treatment planning system. The isodose pattern exhibited unexpected features but was found to be a reasonable approximation. The second technique employed a ray tracing algorithm that assigns a constant dose ratio with/without shielding behind the shielding along a radial line originating from the source. The dose calculation results were similar to the results from the first technique but with improved accuracy. The third and most accurate technique used a dose-matrix-superposition algorithm, based on Monte Carlo calculations. The results from the latter technique showed quantitatively that the dose to healthy tissue is reduced significantly in the presence of shielding. However, it was also found that the dose to the tumor may be affected by the presence of shielding; for about a quarter of the patients treated the volume covered by the 100% isodose lines was reduced by more than 5%, leading to potential tumor cold spots. Use of any of the three shielding algorithms results in improved dose estimates to healthy tissue and the tumor.
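
    A toy two-dimensional sketch of the second technique described above (a constant dose ratio applied behind the shield along rays from the source) is given below in Python. The circular shield, the inverse-square falloff, and the transmission factor are illustrative assumptions, not clinical data or the PLATO implementation.

        import numpy as np

        def shielded_dose(points, source, shield_center, shield_radius, transmission=0.1):
            """Inverse-square 'dose' at each point, scaled by a constant transmission
            factor whenever the segment from source to point crosses the shield."""
            doses = []
            for p in points:
                dose = 1.0 / max(np.sum((p - source) ** 2), 1e-6)   # unshielded falloff
                d = p - source
                t = np.clip(np.dot(shield_center - source, d) / np.dot(d, d), 0.0, 1.0)
                closest = source + t * d              # nearest point on the ray segment
                if np.linalg.norm(closest - shield_center) < shield_radius:
                    dose *= transmission              # constant ratio behind the shield
                doses.append(dose)
            return np.array(doses)

        if __name__ == "__main__":
            src = np.array([0.0, 0.0])
            shield = np.array([1.0, 0.0])
            pts = np.array([[2.0, 0.0], [0.0, 2.0], [3.0, 0.3]])
            print(shielded_dose(pts, src, shield, shield_radius=0.3))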

  20. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, etc., which makes it very appealing for real-time implementation.

  1. Evaluation of algorithms for estimating wheat acreage from multispectral scanner data. [Kansas and Texas

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Richardson, W.; Pentland, A. P.

    1976-01-01

    The author has identified the following significant results. Fourteen different classification algorithms were tested for their ability to estimate the proportion of wheat in an area. For some algorithms, accuracy of classification in field centers was observed. The data base consisted of ground truth and LANDSAT data from 55 sections (1 x 1 mile) from five LACIE intensive test sites in Kansas and Texas. Signatures obtained from training fields selected at random from the ground truth were generally representative of the data distribution patterns. LIMMIX, an algorithm that chooses a pure signature when the data point is close enough to a signature mean and otherwise chooses the best mixture of a pair of signatures, reduced the average absolute error to 6.1% and the bias to 1.0%. QRULE run with a null test achieved a similar reduction.
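
    The LIMMIX rule quoted above (use a pure signature when a pixel lies close to a class mean, otherwise pick the best two-signature mixture) can be illustrated with a simplified Python sketch; the distance metric, threshold, and toy signatures below are assumptions, not the original LACIE processing.

        import numpy as np
        from itertools import combinations

        def limmix_like(pixel, means, pure_threshold):
            """Return per-class proportions for one pixel; rows of `means` are class
            mean signatures."""
            d = np.linalg.norm(means - pixel, axis=1)
            props = np.zeros(len(means))
            best = int(d.argmin())
            if d[best] <= pure_threshold:             # close enough: call it pure
                props[best] = 1.0
                return props
            # otherwise choose the pair (i, j) and fraction a minimizing
            # || pixel - (a*mu_i + (1-a)*mu_j) ||
            best_err, best_pair, best_a = np.inf, None, None
            for i, j in combinations(range(len(means)), 2):
                diff = means[i] - means[j]
                a = np.clip(np.dot(pixel - means[j], diff) / np.dot(diff, diff), 0.0, 1.0)
                err = np.linalg.norm(pixel - (a * means[i] + (1 - a) * means[j]))
                if err < best_err:
                    best_err, best_pair, best_a = err, (i, j), a
            props[best_pair[0]], props[best_pair[1]] = best_a, 1.0 - best_a
            return props

        if __name__ == "__main__":
            means = np.array([[10.0, 20.0], [30.0, 5.0], [50.0, 40.0]])  # toy signatures
            pixel = np.array([20.0, 12.5])
            print(limmix_like(pixel, means, pure_threshold=3.0))  # mixture of classes 0 and 1

    The area proportion of a class (e.g., wheat) would then be estimated by averaging its per-pixel proportions.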

  2. Improved delay-leaping simulation algorithm for biochemical reaction systems with delays

    NASA Astrophysics Data System (ADS)

    Yi, Na; Zhuang, Gang; Da, Liang; Wang, Yifei

    2012-04-01

    In biochemical reaction systems dominated by delays, the simulation speed of the stochastic simulation algorithm depends on the size of the wait queue. As a result, it is important to control the size of the wait queue to improve the efficiency of the simulation. An improved accelerated delay stochastic simulation algorithm for biochemical reaction systems with delays, termed the improved delay-leaping algorithm, is proposed in this paper. The update method for the wait queue is effective in reducing the size of the queue as well as shortening the storage and access time, thereby accelerating the simulation speed. Numerical simulation of two examples indicates that this method not only achieves significantly better efficiency than existing methods, but can also be widely applied to biochemical reaction systems with delays.

  3. A novel algorithm for non-bonded-list updating in molecular simulations.

    PubMed

    Maximova, Tatiana; Keasar, Chen

    2006-06-01

    Simulations of molecular systems typically handle interactions within non-bonded pairs. Generating and updating a list of these pairs can be the most time-consuming part of energy calculations for large systems. Thus, efficient non-bonded list processing can speed up the energy calculations significantly. While the asymptotic complexity of current algorithms (namely O(N), where N is the number of particles) is probably the lowest possible, considerable room for optimization remains. This article offers a heuristic extension to previously suggested grid-based algorithms. We show that, when the average particle movements are slow, simulation time can be reduced considerably. The proposed algorithm has been implemented in the DistanceMatrix class of the molecular modeling package MESHI. MESHI is freely available at .

  4. Cryptanalysis of optical security systems with significant output images.

    PubMed

    Situ, Guohai; Gopinathan, Unnikrishnan; Monaghan, David S; Sheridan, John T

    2007-08-01

    The security of the encryption and verification techniques with significant output images is examined by a known-plaintext attack. We introduce an iterative phase-retrieval algorithm based on multiple intensity measurements to heuristically estimate the phase key in the Fourier domain by several plaintext-cyphertext pairs. We obtain correlation output images with very low error by correlating the estimated key with corresponding random phase masks. Our studies show that the convergence behavior of this algorithm sensitively depends on the starting point. We also demonstrate that this algorithm can be used to attack the double random phase encoding technique.

  5. Speeding up Batch Alignment of Large Ontologies Using MapReduce.

    PubMed

    Thayasivam, Uthayasanker; Doshi, Prashant

    2013-09-01

    Real-world ontologies tend to be very large with several containing thousands of entities. Increasingly, ontologies are hosted in repositories, which often compute the alignment between the ontologies. As new ontologies are submitted or ontologies are updated, their alignment with others must be quickly computed. Therefore, aligning several pairs of ontologies quickly becomes a challenge for these repositories. We project this problem as one of batch alignment and show how it may be approached using the distributed computing paradigm of MapReduce. Our approach allows any alignment algorithm to be utilized on a MapReduce architecture. Experiments using four representative alignment algorithms demonstrate flexible and significant speedup of batch alignment of large ontology pairs using MapReduce.

  6. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems

    NASA Astrophysics Data System (ADS)

    Omelyan, I. P.; Mryglod, I. M.; Folk, R.

    2002-08-01

    A systematic approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further combined using an advanced composition scheme to obtain algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially as the order of the generated schemes increases. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. Results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational cost, reduce unphysical deviations in the total energy by up to a factor of 100 000 relative to the standard fourth-order-based iteration approach.

  7. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems.

    PubMed

    Omelyan, I P; Mryglod, I M; Folk, R

    2002-08-01

    A systematic approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further combined using an advanced composition scheme to obtain algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially as the order of the generated schemes increases. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. Results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational cost, reduce unphysical deviations in the total energy by up to a factor of 100 000 relative to the standard fourth-order-based iteration approach.
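
    The higher-order force-gradient compositions derived in the paper build on elementary symplectic splittings; as a baseline only, the following Python sketch shows the second-order velocity-Verlet splitting for a 1D harmonic oscillator and its bounded energy error. The force-gradient terms and composition coefficients themselves are not reproduced here.

        import numpy as np

        def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
            """Second-order symplectic kick-drift-kick integration."""
            traj = [(q, p)]
            f = force(q)
            for _ in range(n_steps):
                p += 0.5 * dt * f          # half kick
                q += dt * p / mass         # drift
                f = force(q)
                p += 0.5 * dt * f          # half kick
                traj.append((q, p))
            return np.array(traj)

        if __name__ == "__main__":
            k = 1.0
            traj = velocity_verlet(1.0, 0.0, lambda q: -k * q, dt=0.05, n_steps=2000)
            q, p = traj[:, 0], traj[:, 1]
            energy = 0.5 * p ** 2 + 0.5 * k * q ** 2
            print("relative energy drift:", float(abs(energy[-1] - energy[0]) / energy[0]))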

  8. Algorithm for in-flight gyroscope calibration

    NASA Technical Reports Server (NTRS)

    Davenport, P. B.; Welter, G. L.

    1988-01-01

    An optimal algorithm for the in-flight calibration of spacecraft gyroscope systems is presented. Special consideration is given to the selection of the loss function weight matrix in situations in which the spacecraft attitude sensors provide significantly more accurate information in pitch and yaw than in roll, such as will be the case in the Hubble Space Telescope mission. The results of numerical tests that verify the accuracy of the algorithm are discussed.

  9. Numerical linear algebra algorithms and software

    NASA Astrophysics Data System (ADS)

    Dongarra, Jack J.; Eijkhout, Victor

    2000-11-01

    The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations - lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense, and those for sparse matrices.

  10. An enhanced mode shape identification algorithm

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Mook, D. Joseph

    1989-01-01

    A mode shape identification algorithm is developed which is characterized by a low sensitivity to measurement noise and a high accuracy of mode identification. The algorithm proposed here is also capable of identifying the mode shapes of structures with significant damping. The combined results indicate that mode shape identification is much more dependent on measurement noise than identification of natural frequencies. Accurate detection of modal parameters and mode shapes is demonstrated for modes with damping ratios exceeding 15 percent.

  11. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.

  12. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups; each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for representing the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
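
    The significance-map and run-length coding steps described above can be sketched in a few lines of Python; a toy coefficient array stands in for the DWT of a preprocessed ECG signal, and the energy-packing threshold is an illustrative choice.

        import numpy as np

        def encode(coeffs, energy_fraction=0.99):
            """Keep the largest coefficients holding `energy_fraction` of the energy and
            emit a run-length-coded significance map plus the significant values."""
            order = np.argsort(np.abs(coeffs))[::-1]
            energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
            n_keep = int(np.searchsorted(energy, energy_fraction)) + 1
            sig_map = np.zeros(len(coeffs), dtype=np.uint8)
            sig_map[order[:n_keep]] = 1
            runs, start = [], 0                    # run-length encode (bit, run length)
            for i in range(1, len(sig_map) + 1):
                if i == len(sig_map) or sig_map[i] != sig_map[start]:
                    runs.append((int(sig_map[start]), i - start))
                    start = i
            return runs, coeffs[sig_map == 1]

        def decode(runs, values, n):
            sig_map = np.concatenate([np.full(r, b, dtype=np.uint8) for b, r in runs])
            out = np.zeros(n)
            out[sig_map == 1] = values
            return out

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            c = rng.laplace(scale=1.0, size=256)   # stand-in for DWT coefficients
            runs, vals = encode(c)
            print("kept", len(vals), "of", len(c), "coefficients in", len(runs), "runs")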

  13. Ouroboros: A Tool for Building Generic, Hybrid, Divide & Conquer Algorithms

    SciTech Connect

    Johnson, J R; Foster, I

    2003-05-01

    A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, "Ouroboros", that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
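
    In the spirit of the hybrid strategy described above (though not the Ouroboros tool itself), the Python sketch below switches merge sort to insertion sort below a crossover size and picks the crossover by crude timing experiments on the current platform; all sizes and candidate values are illustrative.

        import random, time

        def insertion_sort(a):                     # iterative strategy for small inputs
            for i in range(1, len(a)):
                key, j = a[i], i - 1
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a

        def hybrid_sort(a, crossover):
            if len(a) <= crossover:
                return insertion_sort(a)
            mid = len(a) // 2                      # divide-and-conquer strategy otherwise
            left, right = hybrid_sort(a[:mid], crossover), hybrid_sort(a[mid:], crossover)
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
            return merged + left[i:] + right[j:]

        def pick_crossover(candidates=(1, 8, 16, 32, 64, 128), n=20000, trials=3):
            """Crude empirical search for the best switch-over size on this platform."""
            data = random.sample(range(n), n)
            best, best_t = None, float("inf")
            for c in candidates:
                start = time.perf_counter()
                for _ in range(trials):
                    hybrid_sort(list(data), c)
                elapsed = time.perf_counter() - start
                if elapsed < best_t:
                    best, best_t = c, elapsed
            return best

        if __name__ == "__main__":
            print("best crossover on this machine:", pick_crossover())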

  14. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm was presented to make the existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline was found and corrected; a new baseline was then obtained after smoothing. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined an existing baseline estimation algorithm with the proposed correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and also demonstrated the effectiveness of the FHR baseline correction algorithm.

  15. Systematic identification of statistically significant network measures

    NASA Astrophysics Data System (ADS)

    Ziv, Etay; Koytcheff, Robin; Middendorf, Manuel; Wiggins, Chris

    2005-01-01

    We present a graph embedding space (i.e., a set of measures on graphs) for performing statistical analyses of networks. Key improvements over existing approaches include discovery of “motif hubs” (multiple overlapping significant subgraphs), computational efficiency relative to subgraph census, and flexibility (the method is easily generalizable to weighted and signed graphs). The embedding space is based on scalars, functionals of the adjacency matrix representing the network. Scalars are global, involving all nodes; although they can be related to subgraph enumeration, there is not a one-to-one mapping between scalars and subgraphs. Improvements in network randomization and significance testing—we learn the distribution rather than assuming Gaussianity—are also presented. The resulting algorithm establishes a systematic approach to the identification of the most significant scalars and suggests machine-learning techniques for network classification.

  16. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  17. Evaluation of several MS/MS search algorithms for analysis of spectra derived from electron transfer dissociation experiments.

    PubMed

    Kandasamy, Kumaran; Pandey, Akhilesh; Molina, Henrik

    2009-09-01

    Electron transfer dissociation (ETD) is increasingly becoming popular for high-throughput experiments especially in the identification of the labile post-translational modifications. Most search algorithms that are currently in use for querying MS/MS data against protein databases have been optimized on the basis of matching fragment ions derived from collision induced dissociation of peptides, which are dominated by b and y ions. However, electron transfer dissociation of peptides generates completely different types of fragments: c and z ions. The goal of our study was to test the ability of different search algorithms to handle data from this fragmentation method. We compared four MS/MS search algorithms (OMSSA, Mascot, Spectrum Mill, and X!Tandem) using approximately 170,000 spectra generated from a standard protein mix, as well as from complex proteomic samples which included a large number of phosphopeptides. Our analysis revealed (1) greater differences between algorithms than has been previously reported for CID data, (2) a significant charge state bias resulting in >60-fold difference in the numbers of matched doubly charged peptides, and (3) identification of 70% more peptides by the best performing algorithm than the algorithm identifying the least number of peptides. Our results indicate that the search engines for analyzing ETD derived MS/MS spectra are still in their early days and that multiple search engines could be used to reduce individual biases of algorithms.

  18. Underwater Sensor Network Redeployment Algorithm Based on Wolf Search

    PubMed Central

    Jiang, Peng; Feng, Yang; Wu, Feng

    2016-01-01

    This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes could easily become invalid in a poor environment and that underwater wireless sensor networks are large in scale, an underwater sensor network redeployment algorithm was developed based on wolf search. This study applies the wolf search algorithm, combined with crowding degree control, to the deployment of underwater wireless sensor networks. The proposed algorithm uses nodes to ensure coverage of the events, and it avoids the prematurity of the nodes. The algorithm has good coverage effects. In addition, considering that obstacles exist in the underwater environment, nodes are prevented from becoming invalid by imitating the mechanism of avoiding predators. Thus, the energy consumption of the network is reduced. Comparative analysis shows that the algorithm is simple and effective for wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance. PMID:27775659

  19. Reducing rotor weight

    SciTech Connect

    Cheney, M.C.

    1997-12-31

    The cost of energy for renewables has gained greater significance in recent years due to the drop in price in some competing energy sources, particularly natural gas. In pursuit of lower manufacturing costs for wind turbine systems, work was conducted to explore an innovative rotor designed to reduce weight and cost over conventional rotor systems. Trade-off studies were conducted to measure the influence of number of blades, stiffness, and manufacturing method on COE. The study showed that increasing number of blades at constant solidity significantly reduced rotor weight and that manufacturing the blades using pultrusion technology produced the lowest cost per pound. Under contracts with the National Renewable Energy Laboratory and the California Energy Commission, a 400 kW (33m diameter) turbine was designed employing this technology. The project included tests of an 80 kW (15.5m diameter) dynamically scaled rotor which demonstrated the viability of the design.

  20. Advanced optimization of permanent magnet wigglers using a genetic algorithm

    SciTech Connect

    Hajima, Ryoichi

    1995-12-31

    In permanent magnet wigglers, magnetic imperfection of each magnet piece causes field error. This field error can be reduced or compensated by sorting the magnet pieces in proper order. We showed that a genetic algorithm has good properties for this sorting scheme. In this paper, this optimization scheme is applied to the case of permanent magnets which have errors in the field direction. The results show that the genetic algorithm is superior to other algorithms.

  1. OPC recipe optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asthana, Abhishek; Wilkinson, Bill; Power, Dave

    2016-03-01

    Optimization of OPC recipes is not trivial due to the multiple parameters that need tuning and their correlation. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which make it more complex. The present work demonstrates the application of the Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, the GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters, and (b) application of GA to improve printing and via coverage at line-end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b), line ends of various features showed significant improvement in printing and filling.

  2. An image-data-compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1981-01-01

    Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.

  3. Filter selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patel, Devesh

    1996-03-01

    Convolution operators act as matched filters for certain types of variations found in images and have been extensively used in the analysis of images. However, filtering through a bank of N filters generates N filtered images, consequently increasing the amount of data considerably. Moreover, not all these filters have the same discriminatory capabilities for the individual images, thus making the task of any classifier difficult. In this paper, we use genetic algorithms to select a subset of relevant filters. Genetic algorithms represent a class of adaptive search techniques where the processes are similar to natural selection of biological evolution. The steady state model (GENITOR) has been used in this paper. The reduction of filters improves the performance of the classifier (which in this paper is the multi-layer perceptron neural network) and furthermore reduces the computational requirement. In this study we use the Laws filters which were proposed for the analysis of texture images. Our aim is to recognize the different textures on the images using the reduced filter set.
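
    A generic sketch of GA-based filter-subset selection in the spirit described above is given below in Python; the chromosome is one bit per filter, the fitness function is only a placeholder for the paper's multi-layer perceptron evaluation, and the steady-state GENITOR details are not reproduced.

        import random

        N_FILTERS = 25            # the size of the Laws filter bank is assumed here

        def fitness(mask):
            """Placeholder fitness: reward some 'useful' filters (here the even-indexed
            ones) and penalize large subsets.  Replace with a real classifier score."""
            useful = sum(bit for i, bit in enumerate(mask) if i % 2 == 0 and bit)
            return useful - 0.3 * sum(mask)

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(mask, rate=0.05):
            return [bit ^ (random.random() < rate) for bit in mask]

        def select_filters(pop_size=40, generations=60):
            pop = [[random.randint(0, 1) for _ in range(N_FILTERS)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]        # truncation selection
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return max(pop, key=fitness)

        if __name__ == "__main__":
            best = select_filters()
            print("selected filters:", [i for i, bit in enumerate(best) if bit])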

  4. Decomposition of Large Scale Semantic Graphsvia an Efficient Communities Algorithm

    SciTech Connect

    Yao, Y

    2008-02-08

    's decomposition algorithm, much more efficiently, leading to significantly reduced computation time. Test runs on a desktop computer have shown reductions of up to 89%. Our focus this year has been on the implementation of parallel graph clustering on one of LLNL's supercomputers. In order to achieve efficiency in parallel computing, we have exploited the fact that large semantic graphs tend to be sparse, comprising loosely connected dense node clusters. When implemented on distributed memory computers, our approach performed well on several large graphs with up to one billion nodes, as shown in Table 2. The rightmost column of Table 2 contains the associated Newman's modularity [1], a metric that is widely used to assess the quality of community structure. Existing algorithms produce results that merely approximate the optimal solution, i.e., maximum modularity. We have developed a verification tool for decomposition algorithms, based upon a novel integer linear programming (ILP) approach, that computes an exact solution. We have used this ILP methodology to find the maximum modularity and corresponding optimal community structure for several well-studied graphs in the literature (e.g., Figure 1) [3]. The above approaches assume that modularity is the best measure of quality for community structure. In an effort to enhance this quality metric, we have also generalized Newman's modularity based upon an insightful random walk interpretation that allows us to vary the scope of the metric. Generalized modularity has enabled us to develop new, more flexible versions of our algorithms. In developing these methodologies, we have made several contributions to both graph theoretic algorithms and software engineering. We have written two research papers for refereed publication [3-4] and are working on another one [5]. In addition, we have presented our research findings at three academic and professional conferences.

  5. Simplified calculation of distance measure in DP algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Ren, Xian-yi; Lu, Yu-ming

    2014-01-01

    The point-to-segment distance measure is one of the determinants affecting the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of only the perpendicular distance, was proposed by Dan Sunday [1] to remedy a deficiency of the original DP algorithm. A new, efficient zone-divided distance measure method is proposed in this paper. Firstly, a rotated coordinate system is established based on the two endpoints of the curve. Secondly, the new coordinate values in the rotated system are computed for each point. Finally, the new coordinate values are used to divide points into three zones and to calculate distance: Manhattan distance is adopted in zones I and III, perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method can take full advantage of the computation result of the previous point. The computational cost remains essentially unchanged for points in zones I and III, and is reduced significantly for points in zone II, which contains the highest proportion of points. Experimental results show that the proposed distance measure method can improve the efficiency of the original DP algorithm.
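
    A small Python sketch of the zone-divided distance measure described above follows: coordinates are rotated so the segment lies along the x-axis, Manhattan distance is used in the end zones (I and III), and the perpendicular offset in zone II. The incremental reuse of the previous point's rotated coordinates is omitted.

        import math

        def zone_divided_distance(p, a, b):
            """Distance measure from point p to segment a-b (all 2D tuples)."""
            ax, ay = a
            dx, dy = b[0] - ax, b[1] - ay
            length = math.hypot(dx, dy)
            cos_t, sin_t = dx / length, dy / length
            # rotate p into the segment's coordinate frame (endpoint a at the origin)
            px, py = p[0] - ax, p[1] - ay
            x = px * cos_t + py * sin_t          # coordinate along the segment
            y = -px * sin_t + py * cos_t         # signed perpendicular offset
            if x < 0:                            # zone I: before the start point
                return abs(x) + abs(y)           # Manhattan distance to endpoint a
            if x > length:                       # zone III: past the end point
                return (x - length) + abs(y)     # Manhattan distance to endpoint b
            return abs(y)                        # zone II: perpendicular distance

        if __name__ == "__main__":
            print(zone_divided_distance((2.0, 1.0), (0.0, 0.0), (4.0, 0.0)))   # zone II -> 1.0
            print(zone_divided_distance((-1.0, 1.0), (0.0, 0.0), (4.0, 0.0)))  # zone I -> 2.0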

  6. Faster unfolding of communities: Speeding up the Louvain algorithm

    NASA Astrophysics Data System (ADS)

    Traag, V. A.

    2015-09-01

    Many complex networks exhibit a modular structure of densely connected groups of nodes. Usually, such a modular structure is uncovered by the optimization of some quality function. Although flawed, modularity remains one of the most popular quality functions. The Louvain algorithm was originally developed for optimizing modularity, but has been applied to a variety of methods. As such, speeding up the Louvain algorithm enables the analysis of larger graphs in a shorter time for various methods. We here suggest moving nodes to a random neighbor community, instead of the best neighbor community. Although incredibly simple, it reduces the theoretical runtime complexity from O(m) to O(n log⟨k⟩) (with ⟨k⟩ the average degree) in networks with a clear community structure. In benchmark networks, it speeds up the algorithm roughly 2-3 times, while in some real networks it even reaches 10 times faster runtimes. This improvement is due to two factors: (1) a random neighbor is likely to be in a "good" community and (2) random neighbors are likely to be hubs, helping the convergence. Finally, the performance gain only slightly diminishes the quality, especially for modularity, thus providing a good quality-performance ratio. However, these gains are less pronounced, or even disappear, for some other measures such as significance or surprise.
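
    A toy Python sketch of the random-neighbor local move is shown below. Modularity is recomputed naively for clarity; real Louvain implementations use incremental gain formulas and an aggregation phase, which are omitted here, and the example graph is an illustrative assumption.

        import random
        from collections import defaultdict

        def modularity(adj, comm):
            """Naive modularity Q = sum_c [ e_c/m - (d_c/(2m))^2 ] over communities."""
            m = sum(len(nbrs) for nbrs in adj.values()) / 2      # number of edges
            intra = defaultdict(float)                           # intra-community edges
            deg_sum = defaultdict(float)                         # total degree per community
            for v, nbrs in adj.items():
                deg_sum[comm[v]] += len(nbrs)
                for w in nbrs:
                    if comm[v] == comm[w]:
                        intra[comm[v]] += 0.5                    # each edge seen twice
            return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2 for c in deg_sum)

        def random_neighbor_pass(adj, comm):
            """One sweep: move each node to a random neighbor's community unless
            that decreases modularity."""
            for v in random.sample(list(adj), len(adj)):
                w = random.choice(list(adj[v]))
                if comm[w] == comm[v]:
                    continue
                old, old_q = comm[v], modularity(adj, comm)
                comm[v] = comm[w]
                if modularity(adj, comm) < old_q:
                    comm[v] = old                                # revert a bad move
            return comm

        if __name__ == "__main__":
            # two triangles joined by a single edge
            adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
            comm = {v: v for v in adj}                           # start with singletons
            for _ in range(5):
                comm = random_neighbor_pass(adj, comm)
            print(comm, round(modularity(adj, comm), 3))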

  7. Prefiltering Model for Homology Detection Algorithms on GPU.

    PubMed

    Retamosa, Germán; de Pedro, Luis; González, Ivan; Tamames, Javier

    2016-01-01

    Homology detection has evolved over time from heavy algorithms based on dynamic programming approaches to lightweight alternatives based on different heuristic models. However, the main problem with these algorithms is that they use complex statistical models, which makes it difficult to achieve a relevant speedup and find exact matches with the original results. Thus, their acceleration is essential. The aim of this article was to prefilter a sequence database. To make this work, we have implemented a groundbreaking heuristic model based on NVIDIA's graphics processing units (GPUs) and multicore processors. Depending on the sensitivity settings, this makes it possible to quickly reduce the sequence database by factors between 50% and 95%, while rejecting no significant sequences. Furthermore, this prefiltering application can be used together with multiple homology detection algorithms as a part of a next-generation sequencing system. Extensive performance and accuracy tests have been carried out in the Spanish National Centre for Biotechnology (NCB). The results show that GPU hardware can accelerate the execution times of existing homology detection applications, such as the National Centre for Biotechnology Information (NCBI) Basic Local Alignment Search Tool for Proteins (BLASTP), by up to a factor of 4.

  8. Superelement model based parallel algorithm for vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Agrawal, O. P.; Danhof, K. J.; Kumar, R.

    1994-05-01

    This paper presents a superelement model based parallel algorithm for a planar vehicle dynamics. The vehicle model is made up of a chassis and two suspension systems each of which consists of an axle-wheel assembly and two trailing arms. In this model, the chassis is treated as a Cartesian element and each suspension system is treated as a superelement. The parameters associated with the superelements are computed using an inverse dynamics technique. Suspension shock absorbers and the tires are modeled by nonlinear springs and dampers. The Euler-Lagrange approach is used to develop the system equations of motion. This leads to a system of differential and algebraic equations in which the constraints internal to superelements appear only explicitly. The above formulation is implemented on a multiprocessor machine. The numerical flow chart is divided into modules and the computation of several modules is performed in parallel to gain computational efficiency. In this implementation, the master (parent processor) creates a pool of slaves (child processors) at the beginning of the program. The slaves remain in the pool until they are needed to perform certain tasks. Upon completion of a particular task, a slave returns to the pool. This improves the overall response time of the algorithm. The formulation presented is general which makes it attractive for a general purpose code development. Speedups obtained in the different modules of the dynamic analysis computation are also presented. Results show that the superelement model based parallel algorithm can significantly reduce the vehicle dynamics simulation time.

  9. Improving CMD Areal Density Analysis: Algorithms and Strategies

    NASA Astrophysics Data System (ADS)

    Wilson, R. E.

    2014-06-01

    Essential ideas, successes, and difficulties of Areal Density Analysis (ADA) for color-magnitude diagrams (CMDs) of resolved stellar populations are examined, with explanation of various algorithms and strategies for optimal performance. A CMD-generation program computes theoretical datasets with simulated observational error and a solution program inverts the problem by the method of Differential Corrections (DC) so as to compute parameter values from observed magnitudes and colors, with standard error estimates and correlation coefficients. ADA promises not only impersonal results, but also significant saving of labor, especially where a given dataset is analyzed with several evolution models. Observational errors and multiple star systems, along with various single star characteristics and phenomena, are modeled directly via the Functional Statistics Algorithm (FSA). Unlike Monte Carlo, FSA is not dependent on a random number generator. Discussions include difficulties and overall requirements, such as need for fast evolutionary computation and realization of goals within machine memory limits. Degradation of results due to influence of pixelization on derivatives, Initial Mass Function (IMF) quantization, IMF steepness, low Areal Densities (A), and large variation in A are reduced or eliminated through a variety of schemes that are explained sufficiently for general application. The Levenberg-Marquardt and MMS algorithms for improvement of solution convergence are contained within the DC program. An example of convergence, which typically is very good, is shown in tabular form. A number of theoretical and practical solution issues are discussed, as are prospects for further development.

  10. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    NASA Astrophysics Data System (ADS)

    Cordero, José; Garzón Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful treatment. Mammography is an imaging modality effective for the early diagnosis of abnormalities, in which a medical image of the mammary gland is obtained with low-dose X-rays. This allows detecting a tumor or circumscribed mass two to three years before it becomes clinically palpable, and it is the only method that has so far been shown to reduce breast cancer mortality. In this paper, three hybrid algorithms for circumscribed mass detection on digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Afterwards, shape filtering was applied to the resulting regions. The surviving regions were processed by means of a Bayesian filter, where the feature vector for the classifier was constructed from a few measurements. Later, the implemented algorithms were evaluated by ROC curves, with 40 images taken for the test: 20 normal images and 20 images with circumscribed lesions. Finally, the advantages and disadvantages of every algorithm in the correct detection of a lesion are discussed.

  11. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of the images being captured increases, there is a need for a robust algorithm for image compression which satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in the image quality. Many conventional image compression algorithms use wavelet transform which can significantly reduce the number of bits needed to represent a pixel and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using genetic algorithm (GA), one for the whole image portion except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225

  12. IJA: an efficient algorithm for query processing in sensor networks.

    PubMed

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies that make up each sensor node, including physical sensors, processors, actuators, and power supplies, have advanced significantly over the last decade. Thanks to these advances, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: with their limited energy and memory resources, they have a very restricted ability to process information compared to conventional computer systems. Thus, query processing over the nodes is constrained by these limitations. Because of this, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) for sensor networks to reduce the overhead caused by moving a join pair to the final join node, and to minimize the communication cost that is the main consumer of battery power when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm.

  13. Evaluation of clinical image processing algorithms used in digital mammography.

    PubMed

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processings have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  14. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  15. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  16. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical...quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The...of Parallel Algorithms (J. Reif, ed.). Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  17. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
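
    As a point of reference for the single-cluster update discussed in this record, the following minimal Python sketch implements a serial Wolff update for the 2D Ising model; the lattice size, coupling convention (J = 1) and inverse temperature are illustrative choices, not the parallel implementation described in the paper.

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff single-cluster update on a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)      # bond-activation probability for J = 1
    seed = (rng.integers(L), rng.integers(L))
    cluster_spin = spins[seed]
    stack = [seed]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[seed] = True
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L
            if (not in_cluster[ni, nj] and spins[ni, nj] == cluster_spin
                    and rng.random() < p_add):
                in_cluster[ni, nj] = True
                stack.append((ni, nj))
    spins[in_cluster] *= -1                 # flip the whole cluster at once
    return int(in_cluster.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, beta = 32, 0.44                      # beta near the 2D critical point
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(1000):
        wolff_update(spins, beta, rng)
    print("magnetization per spin:", spins.mean())
```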

  18. Inference from matrix products: a heuristic spin glass algorithm

    SciTech Connect

    Hastings, Matthew B

    2008-01-01

    We present an algorithm for finding ground states of two-dimensional spin-glass systems based on ideas from matrix product states in quantum information theory. The algorithm works directly at zero temperature and defines an approximation to the energy whose accuracy depends on a parameter k. We test the algorithm against exact methods on random field and random bond Ising models, and we find that accurate results require a k which scales roughly polynomially with the system size. The algorithm also performs well when tested on small systems with arbitrary interactions, where no fast, exact algorithms exist. The time required is significantly less than Monte Carlo schemes.

  19. Versatility of the CFR algorithm for limited angle reconstruction

    SciTech Connect

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-04-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, and for radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.

  20. Parallel LU-factorization algorithms for dense matrices

    SciTech Connect

    Oppe, T.C.; Kincaid, D.R.

    1987-05-01

    Several serial and parallel algorithms for computing the LU-factorization of a dense matrix are investigated. Numerical experiments and programming considerations to reduce bank conflicts on the Cray X-MP4 parallel computer are presented. Speedup factors are given for the parallel algorithms. 15 refs., 6 tabs.
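
    For context, the sketch below shows the basic serial Doolittle LU factorization that such parallel variants build on; it omits pivoting, blocking and the Cray-specific memory-bank considerations discussed in the record, so it is only an illustration of the underlying arithmetic.

```python
import numpy as np

def lu_factor(A):
    """Serial Doolittle LU factorization (no pivoting): returns L, U with A = L @ U.
    Illustrative only; production codes add partial pivoting and blocking."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot; pivoting would be required")
        multipliers = U[k + 1:, k] / U[k, k]
        L[k + 1:, k] = multipliers
        U[k + 1:, k:] -= np.outer(multipliers, U[k, k:])   # eliminate column k below the diagonal
    return L, U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5)) + 5 * np.eye(5)        # diagonally dominant, safe without pivoting
    L, U = lu_factor(A)
    print("max reconstruction error:", np.abs(L @ U - A).max())
```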

  1. A Re-Usable Algorithm for Teaching Procedural Skills.

    ERIC Educational Resources Information Center

    Jones, Mark K.; And Others

    The design of a re-usable instructional algorithm for computer based instruction (CBI) is described. The prototype is implemented on IBM PC compatibles running the Windows(TM) graphical environment, using the prototyping tool ToolBook(TM). The algorithm is designed to reduce development and life cycle costs for CBI by providing an authoring…

  2. SU-E-T-85: Comparison of Treatment Plans Calculated Using Ray Tracing and Monte Carlo Algorithms for Lung Cancer Patients Having Undergone Radiotherapy with Cyberknife

    SciTech Connect

    Pennington, A; Selvaraj, R; Kirkpatrick, S; Oliveira, S; Leventouri, T

    2014-06-01

    Purpose: The latest publications indicate that the Ray Tracing algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry, number of beams, and number of monitor units for a given plan are held constant for the calculations with both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the Prescription Isodose Volume to the PTV volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses for 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV volume with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the doses delivered, confirming previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller PTVs, indicating that the improvement of the MC algorithm versus the RT algorithm depends on the size of the PTV.

  3. Advances in Significance Testing for Cluster Detection

    NASA Astrophysics Data System (ADS)

    Coleman, Deidra Andrea

    Over the past two decades, much attention has been given to data-driven project goals such as the Human Genome Project and the development of syndromic surveillance systems. A major component of these types of projects is analyzing the abundance of data. Detecting clusters within the data can be beneficial as it can lead to the identification of specified sequences of DNA nucleotides that are related to important biological functions or the locations of epidemics such as disease outbreaks or bioterrorism attacks. Cluster detection techniques require efficient and accurate hypothesis testing procedures. In this dissertation, we improve upon the hypothesis testing procedures for cluster detection by enhancing distributional theory and providing an alternative method for spatial cluster detection using syndromic surveillance data. In Chapter 2, we provide an efficient method to compute the exact distribution of the number and coverage of h-clumps of a collection of words. This method involves defining a Markov chain using a minimal deterministic automaton to reduce the number of states needed for computation. We allow words of the collection to contain other words of the collection, making the method more general. We use our method to compute the distributions of the number and coverage of h-clumps in the Chi motif of H. influenzae. In Chapter 3, we provide an efficient algorithm to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. This algorithm involves defining a Markov chain to efficiently keep track of probabilities needed to compute p-values of the statistic. We use our algorithm to identify cases where the available approximation does not perform well. We also use our algorithm to detect unusual clusters of made free throw shots by National Basketball Association players during the 2009-2010 regular season. In Chapter 4, we give a procedure to detect outbreaks using syndromic

  4. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally. They refine the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, our algorithm uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speeds. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, our algorithm reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria to decide which nonterminal is the best, we evaluate them by learning experiments.

  5. A segmentation algorithm for noisy images

    SciTech Connect

    Xu, Y.; Olman, V.; Uberbacher, E.C.

    1996-12-31

    This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into sub-trees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
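
    A simplified sketch of the spanning-tree-style region-merging idea is given below: 4-neighbor edges are processed in order of increasing gray-level difference (Kruskal order) and regions are merged while they are small or their average gray levels are close. The thresholds and the exact merge criterion are illustrative assumptions, not the paper's constraints.

```python
import numpy as np

class UnionFind:
    """Union-find over pixels, tracking region size and summed gray level."""
    def __init__(self, n, values):
        self.parent = list(range(n))
        self.size = [1] * n
        self.total = list(map(float, values))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def mean(self, r):
        return self.total[r] / self.size[r]

    def union(self, a, b):
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.total[a] += self.total[b]

def segment(image, min_size=20, gray_threshold=10.0):
    """Merge 4-connected pixels along increasing gray-level differences."""
    h, w = image.shape
    flat = image.ravel().astype(float)
    uf = UnionFind(h * w, flat)
    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((abs(flat[i] - flat[i + 1]), i, i + 1))
            if y + 1 < h:
                edges.append((abs(flat[i] - flat[i + w]), i, i + w))
    for wgt, a, b in sorted(edges):
        ra, rb = uf.find(a), uf.find(b)
        if ra == rb:
            continue
        # merge if either region is still tiny, or their mean gray levels are similar
        if min(uf.size[ra], uf.size[rb]) < min_size or abs(uf.mean(ra) - uf.mean(rb)) < gray_threshold:
            uf.union(ra, rb)
    return np.array([uf.find(i) for i in range(h * w)]).reshape(h, w)

if __name__ == "__main__":
    img = np.zeros((40, 40)) + 10
    img[:, 20:] = 100                                     # two homogeneous halves
    img += np.random.default_rng(2).normal(0, 2, img.shape)
    print("number of regions:", len(np.unique(segment(img))))
```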

  6. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655

  7. A comprehensive review of swarm optimization algorithms.

    PubMed

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significant performances. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered.
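
    As one concrete example of the swarm methods surveyed here, the following compact sketch implements global-best particle swarm optimization on the sphere benchmark; the inertia and acceleration coefficients are common textbook values rather than those used in the review's experiments.

```python
import numpy as np

def pso(objective, dim=10, n_particles=30, iters=200,
        w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0), seed=0):
    """Global-best particle swarm optimization with standard inertia-weight coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z * z))
    best, best_val = pso(sphere)
    print("best objective value found:", best_val)
```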

  8. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving the reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of sufficient support vectors. Experimental results on a wide variety of real-world small-instance-size and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can reach a level of generalization performance competitive with the SVM/LS-SVM at only a fraction of the computational effort.
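
    The core reduced-kernel idea can be sketched as follows: randomly pick a subset of training samples as kernel centers and solve a single regularized least-squares problem instead of an iterative SVM-style optimization. The RBF kernel, ridge parameter and synthetic data below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(X, C, gamma=1.0):
    """Gaussian kernel matrix between rows of X and centers C."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_reduced_kelm(X, y, n_centers=50, gamma=1.0, ridge=1e-3, seed=0):
    """Randomly select a subset of samples as kernel centers, then solve a ridge problem."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_centers, len(X)), replace=False)
    C = X[idx]
    K = rbf_kernel(X, C, gamma)                       # n_samples x n_centers
    beta = np.linalg.solve(K.T @ K + ridge * np.eye(K.shape[1]), K.T @ y)
    return C, beta

def predict(Xnew, C, beta, gamma=1.0):
    return rbf_kernel(Xnew, C, gamma) @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, (500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
    C, beta = train_reduced_kelm(X, y, n_centers=40, gamma=2.0)
    print("train RMSE:", np.sqrt(np.mean((predict(X, C, beta, gamma=2.0) - y) ** 2)))
```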

  9. Investigation of a one-step spectral CT reconstruction algorithm for direct inversion into basis material images

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Sidky, Emil Y.

    2015-03-01

    Photon-counting detectors with pulse-height analysis have shown promise for improved spectral CT imaging. This study investigated a novel spectral CT reconstruction method that directly estimates basis-material images from the measured energy-bin data (i.e., 'one-step' reconstruction). The proposed algorithm can incorporate constraints to stabilize the reconstruction and potentially reduce noise. The algorithm minimizes the error between the measured energy-bin data and the data estimated from the reconstructed basis images. A total variation (TV) constraint was also investigated for additional noise reduction. The proposed one-step algorithm was applied to simulated data of an anthropomorphic phantom with heterogeneous tissue composition. Reconstructed water, bone, and gadolinium basis images were compared for the proposed one-step algorithm and the conventional 'two-step' method of decomposition followed by reconstruction. The unconstrained algorithm provided a 30% to 60% reduction in noise standard deviation compared to the two-step algorithm. The f_TV = 0.8 constraint provided a small reduction in noise (~1%) compared to the unconstrained reconstruction. Images reconstructed with the f_TV = 0.5 constraint demonstrated a 77% to 94% standard deviation reduction compared to the two-step reconstruction, although with increased blurring. There were no significant differences in the mean values reconstructed by the investigated algorithms. Overall, the proposed one-step spectral CT reconstruction algorithm provided three-material-decomposition basis images with reduced noise compared to the conventional two-step approach. When using a moderate TV constraint factor (f_TV = 0.8), a 30%-60% reduction in noise standard deviation was achieved while preserving the edge profile for this simulated phantom.

  10. Barzilai-Borwein method in graph drawing algorithm based on Kamada-Kawai algorithm

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Pospisil, Lukas; Nowakova, Jana

    2016-06-01

    An extension of the Kamada-Kawai algorithm, which was designed for calculating layouts of simple undirected graphs, is presented in this paper. Graphs drawn by the Kamada-Kawai algorithm exhibit symmetries and tend to be aesthetically pleasing and crossing-free for planar graphs. Minimization in the Kamada-Kawai algorithm is based on the Newton-Raphson method, which requires the Hessian matrix of second derivatives for the node being minimized. A disadvantage of the Kamada-Kawai embedder is its computational requirements. This is caused by the search for the minimal potential energy of the whole system, which is minimized node by node: the node with the highest energy is minimized against all nodes until a local equilibrium state is reached. In this paper, the Barzilai-Borwein (BB) minimization algorithm, which needs only the gradient to search for a minimum, is used instead of the Newton-Raphson method. It significantly improves the computational time and requirements.
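
    For illustration, the sketch below shows the Barzilai-Borwein (BB1) step-size rule on a toy quadratic; in the graph-drawing setting the gradient would be that of the Kamada-Kawai stress energy for the node being moved, which is not reproduced here.

```python
import numpy as np

def bb_minimize(grad, x0, iters=100, alpha0=1e-3):
    """Gradient descent with Barzilai-Borwein step sizes (BB1 rule: alpha = s.s / s.y)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        denom = float(s @ yv)
        alpha = float(s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x

if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])          # ill-conditioned quadratic 0.5 x^T A x - b^T x
    b = np.array([1.0, 2.0, 3.0])
    grad = lambda x: A @ x - b
    x_star = bb_minimize(grad, np.zeros(3))
    print("BB solution:", x_star, "exact:", np.linalg.solve(A, b))
```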

  11. A survey of DNA motif finding algorithms

    PubMed Central

    Das, Modan K; Dai, Ho-Kwok

    2007-01-01

    Background Unraveling the mechanisms that regulate gene expression is a major challenge in biology. An important task in this challenge is to identify regulatory elements, especially the binding sites in deoxyribonucleic acid (DNA) for transcription factors. These binding sites are short DNA segments that are called motifs. Recent advances in genome sequence availability and in high-throughput gene expression analysis technologies have allowed for the development of computational methods for motif finding. As a result, a large number of motif finding algorithms have been implemented and applied to various motif models over the past decade. This survey reviews the latest developments in DNA motif finding algorithms. Results Earlier algorithms use promoter sequences of coregulated genes from single genome and search for statistically overrepresented motifs. Recent algorithms are designed to use phylogenetic footprinting or orthologous sequences and also an integrated approach where promoter sequences of coregulated genes and phylogenetic footprinting are used. All the algorithms studied have been reported to correctly detect the motifs that have been previously detected by laboratory experimental approaches, and some algorithms were able to find novel motifs. However, most of these motif finding algorithms have been shown to work successfully in yeast and other lower organisms, but perform significantly worse in higher organisms. Conclusion Despite considerable efforts to date, DNA motif finding remains a complex challenge for biologists and computer scientists. Researchers have taken many different approaches in developing motif discovery tools and the progress made in this area of research is very encouraging. Performance comparison of different motif finding tools and identification of the best tools have proven to be a difficult task because tools are designed based on algorithms and motif models that are diverse and complex and our incomplete understanding of

  12. Embodied intervention reduces depression

    NASA Astrophysics Data System (ADS)

    Song, Dong-Qing; Bi, Xin; Fu, Ying

    2011-10-01

    To investigate differences in the selected rate of undergraduates' depression with respect to time, gender and scale, and the intervention effect of embodied exercise, 201 undergraduates were measured with the Self-Rating Depression Scale and the Beck Depression Inventory (BDI). The results show significant differences in the selected rates of undergraduates' depression arising from the long time interval, rather than from the short time interval or gender. After the intervention, the selected rates decreased, and no significant difference was found between the embodied groups and the control group. Only the embodied groups maintained the benefits of the intervention at follow-up, and only the participants in the embodied groups reported more positive emotional experiences. We conclude that there is a significant difference in the selected rate of undergraduates' depression across scales, and that embodied exercise can effectively reduce undergraduates' depression.

  13. Sensor network algorithms and applications.

    PubMed

    Trigoni, Niki; Krishnamachari, Bhaskar

    2012-01-13

    A sensor network is a collection of nodes with processing, communication and sensing capabilities deployed in an area of interest to perform a monitoring task. There has now been about a decade of very active research in the area of sensor networks, with significant accomplishments made in terms of both designing novel algorithms and building exciting new sensing applications. This Theme Issue provides a broad sampling of the central challenges and the contributions that have been made towards addressing these challenges in the field, and illustrates the pervasive and central role of sensor networks in monitoring human activities and the environment.

  14. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) are playing a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator drivers compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters, while respecting all motion platform physical limitations, and minimising the human perception error between the real and simulator drivers. One of the main limitations of the classical washout filters is that they are tuned for the worst-case scenario. This is based on trial and error, and is affected by driving and programming experience, making it the most significant obstacle to full motion platform utilisation. This leads to an inflexible structure, the production of false cues and a simulator that fails to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. Production of motion cues and the impact of different parameters of classical washout filters on motion cues therefore remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking the vestibular sensation error between the real and simulated cases into account, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA, to be tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  15. Genetic Algorithms for Digital Quantum Simulations.

    PubMed

    Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M

    2016-06-10

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  16. Paradigms for Realizing Machine Learning Algorithms.

    PubMed

    Agneeswaran, Vijay Srinivas; Tonpay, Pranay; Tiwary, Jayati

    2013-12-01

    The article explains the three generations of machine learning algorithms-with all three trying to operate on big data. The first generation tools are SAS, SPSS, etc., while second generation realizations include Mahout and RapidMiner (that work over Hadoop), and the third generation paradigms include Spark and GraphLab, among others. The essence of the article is that for a number of machine learning algorithms, it is important to look beyond the Hadoop's Map-Reduce paradigm in order to make them work on big data. A number of promising contenders have emerged in the third generation that can be exploited to realize deep analytics on big data.

  17. A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Reichert, Bruce A.; Wendt, Bruce J.

    1994-01-01

    A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.

  18. Deblurring algorithms accounting for the finite detector size in photoacoustic tomography.

    PubMed

    Roitner, Heinz; Haltmeier, Markus; Nuster, Robert; O'Leary, Dianne P; Berer, Thomas; Paltauf, Guenther; Grün, Hubert; Burgholzer, Peter

    2014-05-01

    Most reconstruction algorithms for photoacoustic tomography, like back projection or time reversal, work ideally for point-like detectors. For real detectors, which integrate the pressure over their finite size, images reconstructed by these algorithms show some blurring. Iterative reconstruction algorithms using an imaging matrix can take the finite size of real detectors directly into account, but the numerical effort is significantly higher compared to the use of direct algorithms. For spherical or cylindrical detection surfaces, the blurring caused by a finite detector size is proportional to the distance from the rotation center (spin blur) and is equal to the detector size at the detection surface. In this work, we apply deconvolution algorithms to reduce this type of blurring on simulated and on experimental data. Two particular deconvolution methods are compared, which both utilize the fact that a representation of the blurred image in polar coordinates decouples pixels at different radii from the rotation center. Experimental data have been obtained with a flat, rectangular piezoelectric detector measuring signals around a plastisol cylinder containing various small photoacoustic sources with variable distance from the center. Both simulated and experimental results demonstrate a nearly complete elimination of spin blur.

  19. Deblurring algorithms accounting for the finite detector size in photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Roitner, Heinz; Haltmeier, Markus; Nuster, Robert; O'Leary, Dianne P.; Berer, Thomas; Paltauf, Guenther; Grün, Hubert; Burgholzer, Peter

    2014-05-01

    Most reconstruction algorithms for photoacoustic tomography, like back projection or time reversal, work ideally for point-like detectors. For real detectors, which integrate the pressure over their finite size, images reconstructed by these algorithms show some blurring. Iterative reconstruction algorithms using an imaging matrix can take the finite size of real detectors directly into account, but the numerical effort is significantly higher compared to the use of direct algorithms. For spherical or cylindrical detection surfaces, the blurring caused by a finite detector size is proportional to the distance from the rotation center (spin blur) and is equal to the detector size at the detection surface. In this work, we apply deconvolution algorithms to reduce this type of blurring on simulated and on experimental data. Two particular deconvolution methods are compared, which both utilize the fact that a representation of the blurred image in polar coordinates decouples pixels at different radii from the rotation center. Experimental data have been obtained with a flat, rectangular piezoelectric detector measuring signals around a plastisol cylinder containing various small photoacoustic sources with variable distance from the center. Both simulated and experimental results demonstrate a nearly complete elimination of spin blur.

  20. A reconstruction algorithm for compressive quantum tomography using various measurement sets.

    PubMed

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-14

    Compressed sensing (CS) has been shown to offer a significant performance improvement for large quantum systems compared with conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log(d)), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone basis analysis for various measurement sets in quantum CS. To fill this gap, in this paper an efficient and robust state reconstruction algorithm based on compressive sensing is developed. By leveraging the fixed point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical instructions for the selection of measurement settings in the quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computation time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  1. Online estimation algorithm for a biaxial ankle kinematic model with configuration dependent joint axes.

    PubMed

    Tsoi, Y H; Xie, S Q

    2011-02-01

    The kinematics of the human ankle is commonly modeled as a biaxial hinge joint model. However, significant variations in axis orientations have been found between different individuals and also between different foot configurations. For ankle rehabilitation robots, information regarding the ankle kinematic parameters can be used to estimate the ankle and subtalar joint displacements. This can in turn be used as auxiliary variables in adaptive control schemes to allow modification of the robot stiffness and damping parameters to reduce the forces applied at stiffer foot configurations. Due to the large variations observed in the ankle kinematic parameters, an online identification algorithm is required to provide estimates of the model parameters. An online parameter estimation routine based on the recursive least-squares (RLS) algorithm was therefore developed in this research. An extension of the conventional biaxial ankle kinematic model, which allows variation in axis orientations with different foot configurations had also been developed and utilized in the estimation algorithm. Simulation results showed that use of the extended model in the online algorithm is effective in capturing the foot orientation of a biaxial ankle model with variable joint axis orientations. Experimental results had also shown that a modified RLS algorithm that penalizes a deviation of model parameters from their nominal values can be used to obtain more realistic parameter estimates while maintaining a level of estimation accuracy comparable to that of the conventional RLS routine.
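
    A generic recursive least-squares update with a forgetting factor, of the kind such an online estimator builds on, is sketched below for a linear-in-parameters model; the ankle-specific kinematic model and the nominal-value penalty of the authors' modified routine are not included here.

```python
import numpy as np

class RecursiveLeastSquares:
    """Standard RLS with a forgetting factor for a linear-in-parameters model y = phi^T theta."""
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = np.eye(n_params) * p0        # covariance-like matrix
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    true_theta = np.array([2.0, -1.0, 0.5])
    rls = RecursiveLeastSquares(3)
    for _ in range(500):
        phi = rng.standard_normal(3)
        y = phi @ true_theta + 0.01 * rng.standard_normal()
        rls.update(phi, y)
    print("estimated parameters:", rls.theta)
```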

  2. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    PubMed Central

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

    A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its weak point is that its hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware, takes too many iterations to learn. The proposed learning algorithm is a hybrid of these two: the main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of the weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, which is implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566

  3. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms.

    PubMed

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O

    2016-12-23

    A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its weak point is that its hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware, takes too many iterations to learn. The proposed learning algorithm is a hybrid of these two: the main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of the weight transplantation, a significant amount of output error can occur due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, which is implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems.
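
    A simplified, accept-if-improved variant of the random weight change refinement step might look like the sketch below: weights assumed to have come from software backpropagation are nudged by small random perturbations, and a change is kept only if the error drops. The tiny network, error function and step size are illustrative, not the circuit described in the record.

```python
import numpy as np

def mse(w, X, y):
    """Error of a tiny one-hidden-layer network (toy stand-in for the hardware circuit)."""
    W1 = w[:X.shape[1] * 4].reshape(X.shape[1], 4)
    w2 = w[X.shape[1] * 4:]
    h = np.tanh(X @ W1)
    return float(np.mean((h @ w2 - y) ** 2))

def random_weight_change(w, X, y, delta=0.01, iters=20000, seed=0):
    """RWC-style refinement: accept a random perturbation of all weights only if error decreases."""
    rng = np.random.default_rng(seed)
    w = w.copy()
    err = mse(w, X, y)
    for _ in range(iters):
        candidate = w + delta * rng.choice([-1.0, 1.0], size=w.shape)
        cand_err = mse(candidate, X, y)
        if cand_err < err:            # keep only improving changes; otherwise revert
            w, err = candidate, cand_err
    return w, err

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = rng.uniform(-1, 1, (200, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])
    w0 = 0.1 * rng.standard_normal(2 * 4 + 4)   # pretend these came from software BP, then drifted
    w_refined, final_err = random_weight_change(w0, X, y)
    print("refined MSE:", final_err)
```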

  4. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
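
    For reference, the basic (order-0) centered DMA fluctuation function, without the fast higher-order machinery introduced in the record, can be sketched as follows; white noise is used as a sanity check since its expected scaling exponent is about 0.5.

```python
import numpy as np

def dma_fluctuation(x, scales):
    """Centered, order-0 detrending moving average: RMS deviation of the integrated
    series from its centered simple moving average, for each (odd) window size."""
    y = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        half = n // 2
        ma = np.convolve(y, np.ones(n) / n, mode="valid")   # centered moving average
        segment = y[half: half + len(ma)]                   # align y with its moving average
        F.append(np.sqrt(np.mean((segment - ma) ** 2)))
    return np.array(F)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = rng.standard_normal(2 ** 14)                        # white noise: expected exponent ~0.5
    scales = np.array([5, 9, 17, 33, 65, 129, 257])
    F = dma_fluctuation(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("estimated scaling exponent:", round(alpha, 2))
```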

  5. A reconstruction algorithm for compressive quantum tomography using various measurement sets

    NASA Astrophysics Data System (ADS)

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-01

    Compressed sensing (CS) has been shown to offer a significant performance improvement for large quantum systems compared with conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log(d)), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone basis analysis for various measurement sets in quantum CS. To fill this gap, in this paper an efficient and robust state reconstruction algorithm based on compressive sensing is developed. By leveraging the fixed point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical instructions for the selection of measurement settings in the quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computation time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  6. Hierarchical tree algorithm for collisional N-body simulations on GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Kawai, Atsushi

    2016-06-01

    We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional N-body simulations, running on the GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such a combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system, mainly because its memory addressing scheme was limited to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all the particle data and also the tree node data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which are constructed on the host computer, and, according to the interaction lists, the force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct N^2 summation in the original Hermite scheme. For example, we can achieve a speedup of about a factor of 30 (equivalent to about 17 teraflops) over the Hermite scheme for a simulation of an N = 10^6 system, using hardware with a peak speed of 0.6 teraflops for the Hermite scheme.

  7. A reconstruction algorithm for compressive quantum tomography using various measurement sets

    PubMed Central

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-01-01

    Compressed sensing (CS) has been shown to offer a significant performance improvement for large quantum systems compared with conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log(d)), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone basis analysis for various measurement sets in quantum CS. To fill this gap, in this paper an efficient and robust state reconstruction algorithm based on compressive sensing is developed. By leveraging the fixed point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical instructions for the selection of measurement settings in the quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computation time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms. PMID:27966521

  8. A Fast Sphere Decoding Algorithm for Space-Frequency Block Codes

    NASA Astrophysics Data System (ADS)

    Safar, Zoltan; Su, Weifeng; Liu, K. J. Ray

    2006-12-01

    The recently proposed space-frequency-coded MIMO-OFDM systems have promised considerable performance improvement over single-antenna systems. However, in order to make multiantenna OFDM systems an attractive choice for practical applications, implementation issues such as decoding complexity must be addressed successfully. In this paper, we propose a computationally efficient decoding algorithm for space-frequency block codes. The central part of the algorithm is a modulation-independent sphere decoding framework formulated in the complex domain. We develop three decoding approaches: a modulation-independent approach applicable to any memoryless modulation method, a QAM-specific and a PSK-specific fast decoding algorithm performing nearest-neighbor signal point search. The computational complexity of the algorithms is investigated via both analysis and simulation. The simulation results demonstrate that the proposed algorithm can significantly reduce the decoding complexity. We observe up to 75% reduction in the required FLOP count per code block compared to previously existing methods without noticeable performance degradation.

  9. Prefiltering Model for Homology Detection Algorithms on GPU

    PubMed Central

    Retamosa, Germán; de Pedro, Luis; González, Ivan; Tamames, Javier

    2016-01-01

    Homology detection has evolved over time from heavy algorithms based on dynamic programming approaches to lightweight alternatives based on different heuristic models. However, the main problem with these algorithms is that they use complex statistical models, which makes it difficult to achieve a relevant speedup and find exact matches with the original results. Thus, their acceleration is essential. The aim of this article was to prefilter a sequence database. To make this work, we have implemented a groundbreaking heuristic model based on NVIDIA's graphics processing units (GPUs) and multicore processors. Depending on the sensitivity settings, this makes it possible to quickly reduce the sequence database by factors between 50% and 95%, while rejecting no significant sequences. Furthermore, this prefiltering application can be used together with multiple homology detection algorithms as a part of a next-generation sequencing system. Extensive performance and accuracy tests have been carried out in the Spanish National Centre for Biotechnology (NCB). The results show that GPU hardware can accelerate the execution times of former homology detection applications, such as the National Centre for Biotechnology Information (NCBI) Basic Local Alignment Search Tool for Proteins (BLASTP), by up to a factor of 4. KEY POINTS: Owing to the increasing size of current sequence datasets, filtering approaches and high-performance computing (HPC) techniques are the best solution for processing all this information in acceptable times. Graphics processing unit cards and their corresponding programming models are good options for carrying out these processing methods. Combining filtration models with HPC techniques offers new levels of performance and accuracy in homology detection algorithms such as the National Centre for Biotechnology Information Basic Local Alignment Search Tool. PMID:28008220

  10. Algorithmic cooling in liquid-state nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Atia, Yosi; Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2016-01-01

    Algorithmic cooling is a method that employs thermalization to increase the qubit purification level; namely, it reduces the qubit system's entropy. We utilized gradient ascent pulse engineering, an optimal control algorithm, to implement algorithmic cooling in liquid-state nuclear magnetic resonance. Various cooling algorithms were applied to the three qubits of 13C2-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic-resonance spectroscopy.

  11. Kriging-approximation simulated annealing algorithm for groundwater modeling

    NASA Astrophysics Data System (ADS)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running a complex groundwater model to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding data. In the algorithm, the Kriging method is used to approximate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
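
    The simulated annealing backbone into which such a Kriging surrogate would be plugged can be sketched as below; the objective here is a cheap stand-in for an expensive groundwater-model misfit, and the cooling schedule, step size and other parameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def simulated_annealing(objective, x0, iters=2000, T0=1.0, cooling=0.995,
                        step=0.5, seed=0):
    """Basic simulated annealing. In a Kriging-approximation variant, `objective`
    would be a cheap surrogate fitted to previously evaluated model runs."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), objective(x0)
    best_x, best_f = x.copy(), fx
    T = T0
    for _ in range(iters):
        cand = x + step * rng.standard_normal(x.shape)      # random neighbor
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc                                 # Metropolis acceptance
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling                                         # geometric cooling
    return best_x, best_f

if __name__ == "__main__":
    # Stand-in for an expensive groundwater-model misfit: a 2-D Rosenbrock-like surface.
    f = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
    x_best, f_best = simulated_annealing(f, np.zeros(2))
    print("best point:", x_best, "objective:", f_best)
```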

  12. Complex algorithm of optical flow determination by weighted full search

    NASA Astrophysics Data System (ADS)

    Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.

    2016-11-01

    An optical flow determination algorithm is proposed, developed and tested in this article. The algorithm is aimed at improving the accuracy of displacement determination at the boundaries of scene elements (objects). The results show that the proposed algorithm is promising for stereo vision applications. Varying the calculation parameters allowed rational values to be determined and reduced the average absolute endpoint error (AEE) of the displacement determination. A peculiarity of the proposed algorithm is that calculations are performed within local regions, which makes it possible to carry out such calculations simultaneously (enabling parallel computation).

  13. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.

  14. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.

  15. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit by latency tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context--switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault-tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study we are planning to study other architectures of interest, including development of cost models, and developing code generators appropriate to these architectures.

  16. Learning lung nodule similarity using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Seitz, Kerry A., Jr.; Giuca, Anne-Marie; Furst, Jacob; Raicu, Daniela

    2012-03-01

    The effectiveness and efficiency of content-based image retrieval (CBIR) can be improved by determining an optimal combination of image features to use in determining similarity between images. This combination of features can be optimized using a genetic algorithm (GA). Although several studies have used genetic algorithms to refine image features and similarity measures in CBIR, the present study is the first to apply these techniques to medical image retrieval. By implementing a GA to test different combinations of image features for pulmonary nodules in CT scans, the set of image features was reduced to 29 features from a total of 63 extracted features. The performance of the CBIR system was assessed by calculating the average precision across all query nodules. The precision values obtained using the GA-reduced set of features were significantly higher than those found using all 63 image features. Using radiologist-annotated malignancy ratings as ground truth resulted in an average precision of 85.95% after 3 images retrieved per query nodule when using the feature set identified by the GA. Using computer-predicted malignancy ratings as ground truth resulted in an average precision of 86.91% after 3 images retrieved. The results suggest that in the absence of radiologist semantic ratings, using computer-predicted malignancy as ground truth is a valid substitute given the closeness of the two precision values.
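
    A toy version of GA-based feature selection is sketched below, with binary chromosomes, one-point crossover and bit-flip mutation; the fitness is a simple regression-based stand-in on synthetic data rather than the CBIR precision against radiologist ratings used in the study.

```python
import numpy as np

def fitness(mask, X, y):
    """Toy fitness: R^2 of a least-squares fit using only the selected features,
    minus a small penalty per feature (stand-in for retrieval precision)."""
    if mask.sum() == 0:
        return -1.0
    Xs = X[:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r2 = 1.0 - (y - Xs @ coef).var() / y.var()
    return r2 - 0.01 * mask.sum()

def ga_select(X, y, pop_size=40, generations=60, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, (pop_size, n_feat))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                      # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    X = rng.standard_normal((200, 15))
    y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)
    best = ga_select(X, y)
    print("selected feature indices:", np.flatnonzero(best))
```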

  17. An adaptive multi-level simulation algorithm for stochastic biological systems.

    PubMed

    Lester, C; Yates, C A; Giles, M B; Baker, R E

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146-179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the

  18. Algorithmization in Learning and Instruction.

    ERIC Educational Resources Information Center

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  19. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution surface shape description of large macromolecules possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.

  20. Impact of Trauma Dispatch Algorithm Software on the Rate of Missions of Emergency Medical Services

    PubMed Central

    Alizadeh, Reza; Panahi, Farzad; Saghafinia, Masoud; Alizadeh, Keivan; Barakati, Neusha; Khaje-Daloee, Mohammad

    2012-01-01

    Background Trauma still stands atop the list of emergencies. Timely transfer of these patients via Emergency Medical Services (EMS) dispatch is critical, and this has become even more important with population growth and the use of telephone triage. Objectives We aimed to decrease unnecessary EMS missions via a computer program implementing an algorithmic approach to trauma care for nurses involved in EMS dispatch, helping them evaluate each case more accurately. We named our program “Trauma Dispatch Algorithm”. Materials and Methods First, the most common chief complaints regarding traumatic events were identified by searching all calls recorded in December 2008 at Tehran, Iran’s EMS center, and an algorithmic approach was written for each of them. These algorithms were revised by three traumatologists and emergency medicine specialists; after their approval, the algorithms were evaluated by the EMS dispatch center for practicality and then implemented as computer software. The program was used at the Tehran EMS center, and 100 recorded calls assessed with each system were selected randomly. Each call was evaluated by another traumatologist as to whether it was necessary to send a team to the site. Results The average age was 26 years in both groups. The “Trauma Dispatch Algorithm” was significantly effective, reducing unnecessary EMS missions by 16 percentage points (from 42% to 26%) (P = 0.005). Conclusions This program was effective in reducing unnecessary missions. We propose the use of this system in all EMS centers. PMID:24350116

  1. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having the full Hessian approximation available is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
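
    To make the role of the QP subproblem concrete, the sketch below applies a bare-bones SQP iteration to a toy equality-constrained problem, solving the full KKT system with an exact Hessian. This is only illustrative: the thesis maintains a quasi-Newton approximation to the reduced Hessian and handles inequalities via an active-set method, neither of which appears here.

```python
# Minimal sketch (toy problem, exact Hessian): the structure of an SQP step.
# At each iterate the QP subproblem  min 0.5 p'Hp + g'p  s.t.  Ap + c = 0
# is solved via its KKT system, and the step p is applied without a line search.
import numpy as np

def f(x):    return x[0] ** 2 + 2.0 * x[1] ** 2           # assumed example objective
def grad(x): return np.array([2.0 * x[0], 4.0 * x[1]])
def hess(x): return np.diag([2.0, 4.0])
def c(x):    return np.array([x[0] + x[1] - 1.0])          # equality constraint c(x) = 0
def jac(x):  return np.array([[1.0, 1.0]])

x = np.array([3.0, -2.0])
for it in range(10):
    H, g, A, r = hess(x), grad(x), jac(x), c(x)
    m = A.shape[0]
    # KKT system of the QP subproblem: [H A'; A 0] [p; lam] = [-g; -c]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -r]))
    p = sol[:x.size]
    x = x + p
    if np.linalg.norm(p) < 1e-10:
        break
print("solution:", x, "f(x) =", f(x), "c(x) =", c(x))
```

    For this convex quadratic objective with a linear constraint the iteration converges in one step; the reduced-Hessian machinery of the thesis exists precisely to avoid forming and factorizing dense systems of this kind when the problem is large and sparse.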

  2. Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and a set of Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases, and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the onboard DRMs will be used to determine the vertical width of the telemetry band about the signal. University of Texas-Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including
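
    The signal-finding idea described above, histogramming photon events and keeping only the bins with statistically significant counts, can be illustrated with the toy sketch below. The background model, bin count, and significance level are assumptions for demonstration, not the flight algorithm's parameters.

```python
# Minimal sketch (illustrative only): flag histogram bins whose photon counts
# are unlikely under a uniform Poisson background, mimicking the idea of
# locating correlated surface returns among randomly distributed noise events.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
# synthetic photon heights: uniform background noise plus a narrow surface return
background = rng.uniform(0.0, 500.0, size=5000)
surface = rng.normal(250.0, 0.5, size=400)
heights = np.concatenate([background, surface])

counts, edges = np.histogram(heights, bins=200)
expected = np.median(counts)               # robust estimate of the background level
threshold = poisson.isf(1e-6, expected)    # a count this large is very unlikely by chance
signal_bins = np.where(counts > threshold)[0]
print("signal located near height(s):",
      [(edges[i] + edges[i + 1]) / 2 for i in signal_bins])
```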

  3. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, building on the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme “one-time pad” characteristics and substantially improves the security of the algorithm without a significant increase in complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids the need to negotiate its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
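
    The sketch below illustrates the general pattern described in the abstract, though not the authors' chaotic cipher: a fresh random parameter seeds the keystream for each encryption, and that parameter is then hidden in the least significant bits of the ciphered image so that no separate negotiation is needed to transport it. The keystream generator, nonce width, and embedding positions are all illustrative assumptions.

```python
# Minimal sketch (not the paper's cipher): per-encryption random parameter
# (nonce) seeds the keystream, and the nonce is hidden in the LSBs of the
# first 32 bytes of the ciphered image.
import numpy as np

def keystream(key, nonce, n):
    rng = np.random.default_rng([key, nonce])     # keystream = f(secret key, nonce)
    return rng.integers(0, 256, size=n, dtype=np.uint8)

def encrypt(image, key):
    flat = image.ravel().astype(np.uint8)
    nonce = int(np.random.default_rng().integers(0, 2 ** 32))
    cipher = flat ^ keystream(key, nonce, flat.size)
    bits = np.array([(nonce >> i) & 1 for i in range(32)], dtype=np.uint8)
    cipher[:32] = (cipher[:32] & 0xFE) | bits     # embed the 32-bit nonce in LSBs
    return cipher.reshape(image.shape)

def decrypt(cipher, key):
    flat = cipher.ravel().astype(np.uint8)
    nonce = sum(int(flat[i] & 1) << i for i in range(32))   # read the nonce back
    plain = flat ^ keystream(key, nonce, flat.size)
    plain[:32] &= 0xFE   # LSBs of these 32 pixels were sacrificed to carry the nonce
    return plain.reshape(cipher.shape)
```

    For all pixels except the 32 carrying the nonce the round trip is exact; those 32 lose only their least significant bit, a visually negligible perturbation in this toy scheme.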

  4. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how closely the estimated spectrum matches the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.
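
    As an illustration of the maximum-entropy family of estimators discussed above (the report's exact formulation is not reproduced here), the sketch below fits an AR(p) model via the Yule-Walker equations and evaluates the corresponding all-pole spectrum, which is what gives these methods their high resolution for closely spaced components.

```python
# Minimal sketch (assumed formulation): maximum-entropy-style power spectrum
# from an AR(p) model whose coefficients solve the Yule-Walker equations.
import numpy as np
from scipy.linalg import toeplitz, solve

def ar_psd(x, order, nfreq=512):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve(toeplitz(r[:order]), r[1:order + 1])   # Yule-Walker: R a = r
    sigma2 = r[0] - np.dot(a, r[1:order + 1])        # prediction-error variance
    w = np.linspace(0.0, np.pi, nfreq)
    denom = np.abs(1.0 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return w, sigma2 / denom                         # all-pole spectrum

# two closely spaced sinusoids in noise
t = np.arange(1024)
x = (np.sin(0.20 * np.pi * t) + np.sin(0.23 * np.pi * t)
     + 0.1 * np.random.default_rng(2).standard_normal(t.size))
w, p = ar_psd(x, order=20)
print("dominant peak at normalised frequency:", w[np.argmax(p)] / np.pi)
```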

  5. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
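
    The abstract does not give the filter equations, so the sketch below shows a generic recursive differentiator of the kind described: the finite difference of successive angle samples is smoothed by a first-order recursion, and the smoothing constant plays the same role as the variance-reduction-factor versus rise-time trade-off quoted above. All numerical values are assumed.

```python
# Minimal sketch (generic form, not the report's exact filter): estimate
# angular rate from noisy sampled angles with a recursive differentiator.
import numpy as np

def recursive_rate(theta, dt, alpha=0.2):
    rate = np.zeros_like(theta)
    for k in range(1, len(theta)):
        raw = (theta[k] - theta[k - 1]) / dt          # noisy finite difference
        rate[k] = (1.0 - alpha) * rate[k - 1] + alpha * raw   # recursive smoothing
    return rate

dt, true_rate = 0.1, 0.05                             # s, rad/s (assumed values)
t = np.arange(0, 20, dt)
theta = true_rate * t + 1e-3 * np.random.default_rng(3).standard_normal(t.size)
est = recursive_rate(theta, dt)
print(f"final rate estimate: {est[-1]:.4f} rad/s (true {true_rate} rad/s)")
```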

  6. An improved direction finding algorithm based on Toeplitz approximation.

    PubMed

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-07

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of virtual array elements; the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation performance degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
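
    The Toeplitz approximation step itself is simple to state: average the entries of the estimated finite-snapshot matrix along each diagonal so that the result regains the Toeplitz structure of the ideal matrix. A minimal sketch follows; the example matrix is synthetic.

```python
# Minimal sketch: restore the Toeplitz structure of a noisy estimated matrix by
# averaging along each diagonal, the core idea behind Toeplitz approximation.
import numpy as np

def toeplitz_approx(M):
    n = M.shape[0]
    T = np.zeros_like(M)
    for k in range(-(n - 1), n):
        avg = np.mean(np.diagonal(M, offset=k))       # average of the k-th diagonal
        idx = np.arange(max(0, -k), min(n, n - k))
        T[idx, idx + k] = avg
    return T

# a finite-snapshot covariance estimate is almost, but not exactly, Toeplitz
R_hat = np.array([[2.00, 0.95, 0.41],
                  [1.05, 1.98, 0.99],
                  [0.39, 1.01, 2.02]])
print(toeplitz_approx(R_hat))
```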

  7. A Comparative Analysis of DBSCAN, K-Means, and Quadratic Variation Algorithms for Automatic Identification of Swallows from Swallowing Accelerometry Signals

    PubMed Central

    Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L

    2015-01-01

    Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
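
    As a rough illustration of the DBSCAN-based segmentation idea (on synthetic data, with an assumed frame length, feature, eps, and min_samples rather than the study's settings), the sketch below clusters short-time-energy frames of a vibration signal and separates a burst of activity from quiet periods.

```python
# Minimal sketch (synthetic data, assumed features): DBSCAN clusters frames of
# a vibration signal in (time, log-energy) space; high-energy clusters stand in
# for candidate swallow activity.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
fs = 1000                                                  # Hz (assumed sampling rate)
signal = 0.02 * rng.standard_normal(10 * fs)
signal[3 * fs:4 * fs] += 0.5 * rng.standard_normal(fs)     # one burst of activity

frame = 100                                                # 0.1 s frames
energy = np.array([np.sum(signal[i:i + frame] ** 2)
                   for i in range(0, len(signal) - frame, frame)])
times = np.arange(len(energy)) * frame / fs

X = np.column_stack([times, np.log10(energy)])
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)     # illustrative parameters
for lab in sorted(set(labels) - {-1}):
    seg = times[labels == lab]
    med = np.median(X[labels == lab, 1])
    kind = "activity" if med > np.median(X[:, 1]) + 1 else "quiet"
    print(f"cluster {lab} ({kind}): {seg.min():.1f}s to {seg.max():.1f}s")
```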

  8. Model predictive control based on reduced order models applied to belt conveyor system.

    PubMed

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system.
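
    The model-reduction step named above, balanced truncation, can be sketched in a few lines for a generic stable state-space model (not the conveyor model): compute the controllability and observability Gramians, balance them, and keep the states associated with the largest Hankel singular values.

```python
# Minimal sketch (generic stable system): square-root balanced truncation.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Gramians: A Wc + Wc A' + B B' = 0  and  A' Wo + Wo A + C' C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                     # s = Hankel singular values
    S = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lc @ Vt[:r].T @ S                         # reduction transformation
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# random stable 10-state SISO system, reduced to 3 states
rng = np.random.default_rng(5)
A = rng.standard_normal((10, 10))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(10)   # force stability
B = rng.standard_normal((10, 1))
C = rng.standard_normal((1, 10))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print("Hankel singular values:", np.round(hsv, 4))
```

    A known a priori bound on the reduction error (twice the sum of the discarded Hankel singular values) comes with this method, which is presumably the kind of full-order versus reduced order error the abstract has in mind when motivating the two Kalman state estimators.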

  9. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a powerpoi