Science.gov

Sample records for highly specific algorithm

  1. Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms

    PubMed Central

    Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas

    2016-01-01

    Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
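
    As a concrete illustration of the consistency-testing idea described above, the sketch below compares models reconstructed from noisy replicates of the same expression input using the Jaccard index of their reaction sets. The reconstruct_model callable and the noise level are hypothetical placeholders for whichever context-specific reconstruction algorithm is being benchmarked.

    ```python
    # Minimal sketch of one consistency test: reconstruct models from noisy replicates
    # of the same expression profile and compare the resulting reaction sets with the
    # Jaccard index. `reconstruct_model` is a hypothetical stand-in for any
    # context-specific reconstruction algorithm under test.
    import random

    def jaccard(a, b):
        """Similarity of two reaction sets (1.0 = identical models)."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def noise_consistency(expression, reconstruct_model, noise=0.05, replicates=10, seed=0):
        """Average pairwise Jaccard similarity of models built from noisy inputs."""
        rng = random.Random(seed)
        models = []
        for _ in range(replicates):
            # Drop a fraction of expressed genes to mimic missing probe signals.
            noisy = {g for g in expression if rng.random() > noise}
            models.append(set(reconstruct_model(noisy)))
        pairs = [(i, j) for i in range(len(models)) for j in range(i + 1, len(models))]
        return sum(jaccard(models[i], models[j]) for i, j in pairs) / len(pairs)
    ```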

  2. A proteomics search algorithm specifically designed for high-resolution tandem mass spectra.

    PubMed

    Wenger, Craig D; Coon, Joshua J

    2013-03-01

    The acquisition of high-resolution tandem mass spectra (MS/MS) is becoming more prevalent in proteomics, but most researchers employ peptide identification algorithms that were designed prior to this development. Here, we demonstrate new software, Morpheus, designed specifically for high-mass accuracy data, based on a simple score that is little more than the number of matching products. For a diverse collection of data sets from a variety of organisms (E. coli, yeast, human) acquired on a variety of instruments (quadrupole-time-of-flight, ion trap-orbitrap, and quadrupole-orbitrap) in different laboratories, Morpheus gives more spectrum, peptide, and protein identifications at a 1% false discovery rate (FDR) than Mascot, Open Mass Spectrometry Search Algorithm (OMSSA), and Sequest. Additionally, Morpheus is 1.5 to 4.6 times faster, depending on the data set, than the next fastest algorithm, OMSSA. Morpheus was developed in C# .NET and is available free and open source under a permissive license.
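
    A rough sketch of the scoring idea attributed to Morpheus above: count how many theoretical product ions are matched by observed peaks within a tight ppm tolerance. The full Morpheus score also incorporates the fraction of matched intensity; the masses and tolerance below are illustrative only.

    ```python
    # Count theoretical product m/z values matched by any observed peak within a
    # ppm tolerance; this matched-product count is the core of a Morpheus-style score.
    import bisect

    def matching_products(observed_mz, theoretical_mz, tol_ppm=10.0):
        observed = sorted(observed_mz)
        matched = 0
        for mz in theoretical_mz:
            tol = mz * tol_ppm * 1e-6
            i = bisect.bisect_left(observed, mz - tol)
            if i < len(observed) and observed[i] <= mz + tol:
                matched += 1
        return matched

    score = matching_products([300.159, 401.287, 512.300], [300.160, 401.290, 600.0])
    print(score)  # 2 product ions matched at 10 ppm
    ```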

  3. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  4. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is to employ BLAST, which may miss significant alignments, or the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates could be further analyzed by traditional "set-of-rules" types of siRNA design tools. For humans, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on the collection of previously described experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of the CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.
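
    The toy sketch below illustrates only the uniqueness step at the heart of the redundancy-minimization idea: enumerate kernel targets of length 15 and keep those that occur exactly once across the transcript set, so a matching siRNA has no perfect off-target site. It is not the published CRM implementation, and the example transcripts are made up.

    ```python
    # Enumerate 15-nt k-mers ("kernel targets") and keep those occurring exactly once
    # across the whole transcript set; only the uniqueness step is shown here.
    from collections import Counter

    def unique_kmers(transcripts, k=15):
        counts = Counter()
        for seq in transcripts.values():
            for i in range(len(seq) - k + 1):
                counts[seq[i:i + k]] += 1
        return {kmer for kmer, n in counts.items() if n == 1}

    transcripts = {"tx1": "ATGGCGTACGTTAGCATCGGA", "tx2": "TTGGCGTACGTTAGCCTCGAA"}
    targets = unique_kmers(transcripts)
    print(len(targets), "unique 15-nt targets")
    ```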

  5. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  6. Specification and Design Methodologies for High-Speed Fault-Tolerant Array Algorithms and Structures for VLSI.

    DTIC Science & Technology

    1987-06-01

    [OCR-degraded report documentation page. Recoverable information: Specification and Design Methodologies for High-Speed Fault-Tolerant Array Algorithms and Structures for VLSI; California Univ., Los Angeles, Dept. of Computer Science; Office of Naval Research Contract No. N00014-83-K-0493; Principal Investigator Milo D. Ercegovac.]

  7. High performance FDTD algorithm for GPGPU supercomputers

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2016-10-01

    An implementation of the FDTD method for the solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which was developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithms for staggered grid (Yee cell) and many-GPU devices are shown. The algorithm is implemented in software for real physics calculations. The software performance is estimated through algorithm parameters and a computer model. The real performance is tested on one GPU device, as well as on a many-GPU cluster. A performance of up to 0.65×10^12 cell updates per second for a 3D domain with 0.3×10^12 Yee cells total is achieved.

  8. Advanced CHP Control Algorithms: Scope Specification

    SciTech Connect

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  9. A human papilloma virus testing algorithm comprising a combination of the L1 broad-spectrum SPF10 PCR assay and a novel E6 high-risk multiplex type-specific genotyping PCR assay.

    PubMed

    van Alewijk, Dirk; Kleter, Bernhard; Vent, Maarten; Delroisse, Jean-Marc; de Koning, Maurits; van Doorn, Leen-Jan; Quint, Wim; Colau, Brigitte

    2013-04-01

    Human papillomavirus (HPV) epidemiological and vaccine studies require highly sensitive HPV detection and genotyping systems. To improve HPV detection by PCR, the broad-spectrum L1-based SPF10 PCR DNA enzyme immunoassay (DEIA) LiPA system and a novel E6-based multiplex type-specific system (MPTS123) that uses Luminex xMAP technology were combined into a new testing algorithm. To evaluate this algorithm, cervical swabs (n = 860) and cervical biopsy specimens (n = 355) were tested, with a focus on HPV types detected by the MPTS123 assay (types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, 68, 6, and 11). Among the HPV-positive samples, identifications of individual HPV genotypes were compared. When all MPTS123 targeted genotypes were considered together, good overall agreement was found (κ = 0.801; 95% confidence interval [CI], 0.784 to 0.818) with identification by SPF10 LiPA, but significantly more genotypes (P < 0.0001) were identified by the MPTS123 PCR Luminex assay, especially for HPV types 16, 35, 39, 45, 58, and 59. An alternative type-specific assay was evaluated that is based on detection of a limited number of HPV genotypes by type-specific PCR and a reverse hybridization assay (MPTS12 RHA). This assay showed results similar to those of the expanded MPTS123 Luminex assay. These results confirm that broad-spectrum PCRs are hampered by type competition when multiple HPV genotypes are present in the same sample. Therefore, a testing algorithm combining the broad-spectrum PCR and a range of type-specific PCRs can offer a highly accurate method for the analysis of HPV infections, diminish the rate of false-negative results, and may be particularly useful for epidemiological and vaccine studies.

  10. Optimization of warfarin dose by population-specific pharmacogenomic algorithm.

    PubMed

    Pavani, A; Naushad, S M; Rupasree, Y; Kumar, T R; Malempati, A R; Pinjala, R K; Mishra, R C; Kutala, V K

    2012-08-01

    To optimize the warfarin dose, a population-specific pharmacogenomic algorithm was developed using a multiple linear regression model with vitamin K intake and the cytochrome P450 IIC polypeptide 9 (CYP2C9*2 and *3) and vitamin K epoxide reductase complex 1 (VKORC1*3, *4, D36Y and -1639 G>A) polymorphism profiles of subjects who attained a therapeutic international normalized ratio as predictors. The new algorithm was validated by correlating with the Wadelius, International Warfarin Pharmacogenetics Consortium and Gage algorithms, and with the therapeutic dose (r=0.64, P<0.0001). The new algorithm was more accurate (overall: 0.89 vs 0.51, warfarin resistant: 0.96 vs 0.77, warfarin sensitive: 0.80 vs 0.24), more sensitive (0.87 vs 0.52) and more specific (0.93 vs 0.50) compared with clinical data. It significantly reduced the rates of overestimation (0.06 vs 0.50) and underestimation (0.13 vs 0.48). To conclude, this population-specific algorithm has greater clinical utility in optimizing the warfarin dose, thereby decreasing the adverse effects of a suboptimal dose.
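
    A minimal sketch of how such a population-specific dosing model can be fit: ordinary least squares on genotype counts and vitamin K intake of patients who reached a therapeutic INR. The predictor coding and the numbers below are hypothetical placeholders, not the published coefficients.

    ```python
    # Fit a toy multiple linear regression for warfarin dose from genotype counts and
    # vitamin K intake; data and predictor coding are illustrative only.
    import numpy as np

    # Columns: CYP2C9*2 copies, CYP2C9*3 copies, VKORC1 -1639 A copies, vitamin K intake (µg/day)
    X = np.array([[0, 0, 1, 90.0],
                  [1, 0, 2, 60.0],
                  [0, 1, 0, 120.0],
                  [0, 0, 0, 150.0],
                  [1, 1, 1, 80.0]])
    y = np.array([5.0, 2.5, 3.5, 7.0, 2.0])        # stable daily dose (mg) at therapeutic INR

    X1 = np.hstack([np.ones((len(X), 1)), X])       # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    def predict_dose(cyp2c9_2, cyp2c9_3, vkorc1_a, vit_k):
        return float(coef @ [1.0, cyp2c9_2, cyp2c9_3, vkorc1_a, vit_k])

    print(round(predict_dose(0, 0, 1, 100.0), 2))
    ```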

  11. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  12. High specific heat superconducting composite

    DOEpatents

    Steyert, Jr., William A.

    1979-01-01

    A composite superconductor formed from a high specific heat ceramic such as gadolinium oxide or gadolinium-aluminum oxide and a conventional metal conductor such as copper or aluminum which are insolubly mixed together to provide adiabatic stability in a superconducting mode of operation. The addition of a few percent of insoluble gadolinium-aluminum oxide powder or gadolinium oxide powder to copper increases the measured specific heat of the composite by one to two orders of magnitude below the 5 K level while maintaining the high thermal and electrical conductivity of the conventional metal conductor.

  13. Specific PCR product primer design using memetic algorithm.

    PubMed

    Yang, Cheng-Hong; Cheng, Yu-Huei; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2009-01-01

    To provide feasible primer sets for performing a polymerase chain reaction (PCR) experiment, many primer design methods have been proposed. However, the majority of these methods require a relatively long time to obtain an optimal solution since large quantities of template DNA need to be analyzed. Furthermore, the designed primer sets usually do not provide a specific PCR product size. In recent years, evolutionary computation has been applied to PCR primer design and yielded promising results. In this article, a memetic algorithm (MA) is proposed to solve primer design problems associated with providing a specific product size for PCR experiments. The MA is compared with a genetic algorithm (GA) using an accuracy formula to estimate the quality of the primer design and test the running time. Overall, 50 accession nucleotide sequences were sampled for the comparison of the accuracy of the GA and MA for primer design. Five hundred runs of the GA and MA primer design were performed with PCR product lengths of 150-300 bps and 500-800 bps, and two different methods of calculating Tm for each accession nucleotide sequence were tested. A comparison of the accuracy results for the GA and MA primer design showed that the MA primer design yielded better results than the GA primer design. The results further indicate that the proposed method finds optimal or near-optimal primer sets and effective PCR products in a dry dock experiment. Related materials are available online at http://bio.kuas.edu.tw/ma-pd/.
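
    A toy sketch of the memetic-algorithm idea used here: a genetic algorithm whose offspring are additionally refined by a local hill-climbing step. The fitness function below (distance of a candidate primer's GC content from 50%) is a deliberately simple stand-in for the paper's multi-criteria primer scoring.

    ```python
    # Memetic algorithm sketch: GA (selection, crossover, mutation) plus a local
    # hill-climbing refinement of each child; the GC-content fitness is a stand-in.
    import random

    BASES = "ACGT"
    rng = random.Random(1)

    def fitness(primer):                      # lower is better
        gc = sum(b in "GC" for b in primer) / len(primer)
        return abs(gc - 0.5)

    def mutate(primer):
        i = rng.randrange(len(primer))
        return primer[:i] + rng.choice(BASES) + primer[i + 1:]

    def crossover(a, b):
        cut = rng.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def local_search(primer, steps=10):       # the "memetic" refinement step
        for _ in range(steps):
            cand = mutate(primer)
            if fitness(cand) < fitness(primer):
                primer = cand
        return primer

    def memetic(pop_size=20, length=20, generations=30):
        pop = ["".join(rng.choice(BASES) for _ in range(length)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[:pop_size // 2]
            children = [local_search(crossover(rng.choice(parents), rng.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=fitness)

    print(memetic())
    ```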

  14. Multiscale high-order/low-order (HOLO) algorithms and applications

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.

    2017-02-01

    We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.

  15. Specification-based Error Recovery: Theory, Algorithms, and Usability

    DTIC Science & Technology

    2013-02-01

    The basis of the methodology is a view of the specification as a non-deterministic implementation, which may permit a high degree of non-determinism ... developed, optimized and rigorously evaluated in this project. It leveraged the Alloy specification language and its SAT-based tool-set as an enabling ... a high degree of non-determinism. The key insight is to use likely correct actions by an otherwise erroneous execution to prune the non-determinism

  16. High rate pulse processing algorithms for microcalorimeters

    SciTech Connect

    Rabin, Michael; Hoover, Andrew S; Bacrania, Mnesh K; Tan, Hui; Breus, Dimitry; Henning, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Dorise, Bertrand; Ullom, Joel N

    2009-01-01

    It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, the authors present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that the authors are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses and thus achieving much higher output count rates than existing algorithms currently achieve. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.

  17. High specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1996-01-01

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution to from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidization state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  18. High specific activity silicon-32

    DOEpatents

    Phillips, D.R.; Brzezinski, M.A.

    1996-06-11

    A process for preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution, filtering the solution, adjusting pH of the solution from about 5.5 to about 7.5, admixing sufficient molybdate-reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex, contacting the solution including the silicon-molybdate complex with a dextran-based material, washing the dextran-based material to remove residual contaminants such as sodium-22, separating the silicon-molybdate complex from the dextran-based material as another solution, adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material, contacting the solution with an anion exchange material whereby the molybdate is retained by the anion exchange material and the silicon remains in solution, and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having a high purity is provided.

  19. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm is studied based on high speed sampling. Firstly, theoretical simulation models, including the laser emission and the pulse laser ranging algorithm, are built and analyzed. An improved pulse ranging algorithm is developed. This new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high sample rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm is implemented in an FPGA chip based on fusion of the matched filter algorithm and the CFD algorithm. Finally, the laser ranging experiment is carried out to test the ranging performance of the improved algorithm compared to the matched filter algorithm and the CFD algorithm using the laser ranging hardware system. The test analysis demonstrates that the laser ranging hardware system realizes high speed processing and high speed sampling data transmission. The algorithm analysis shows that the improved algorithm achieves 0.3 m ranging precision. The improved algorithm meets the expected performance, consistent with the theoretical simulation.
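
    A sketch of the matched-filter plus constant-fraction-discrimination fusion described above, applied to a synthetic sampled return pulse: the waveform is cross-correlated with the pulse template, and timing is taken where the filtered pulse crosses a constant fraction of its peak. Sample rate, pulse shape and the 50% fraction are illustrative, not the paper's hardware parameters.

    ```python
    # Matched filter + constant-fraction timing on a synthetic 1 GS/s return pulse.
    import numpy as np

    fs = 1e9                                    # 1 GS/s sampling
    tau = np.arange(50) / fs                    # 50 ns Gaussian pulse template
    template = np.exp(-((tau - 25e-9) ** 2) / (2 * (8e-9) ** 2))

    signal = 0.05 * np.random.default_rng(0).standard_normal(2000)
    true_delay = 1.234e-6                       # round-trip time of the echo
    i0 = int(true_delay * fs)
    signal[i0:i0 + template.size] += 0.4 * template

    filtered = np.correlate(signal, template, mode="same")   # matched filter
    peak = filtered.argmax()

    # Constant-fraction style timing: first sample on the leading edge where the
    # filtered pulse crosses 50% of its peak amplitude.
    threshold = 0.5 * filtered[peak]
    lead = peak
    while lead > 0 and filtered[lead] > threshold:
        lead -= 1
    range_m = (lead / fs) * 3e8 / 2             # one-way range from round-trip time
    print(f"estimated range: {range_m:.2f} m")  # close to ~185 m for this toy echo
    ```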

  20. Ontological Problem-Solving Framework for Assigning Sensor Systems and Algorithms to High-Level Missions

    PubMed Central

    Qualls, Joseph; Russomanno, David J.

    2011-01-01

    The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081

  1. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g., threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system

  2. On constructing optimistic simulation algorithms for the discrete event system specification

    SciTech Connect

    Nutaro, James J

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.

  3. Design specification for the whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Fitzjerrell, D. G.

    1974-01-01

    The necessary requirements and guidelines for the construction of a computer program of the whole-body algorithm are presented. The minimum subsystem models required to effectively simulate the total body response to stresses of interest are (1) cardiovascular (exercise/LBNP/tilt); (2) respiratory (Grodin's model); (3) thermoregulatory (Stolwijk's model); and (4) long-term circulatory fluid and electrolyte (Guyton's model). The whole-body algorithm must be capable of simulating response to stresses from CO2 inhalation, hypoxia, thermal environment, exercise (sitting and supine), LBNP, and tilt (changing body angles in gravity).

  4. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (3) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997-present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. The GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  5. Using Genetic Algorithms to Converge on Molecules with Specific Properties

    NASA Astrophysics Data System (ADS)

    Foster, Stephen; Lindzey, Nathan; Rogers, Jon; West, Carl; Potter, Walt; Smith, Sean; Alexander, Steven

    2007-10-01

    Although it can be a straightforward matter to determine the properties of a molecule from its structure, the inverse problem is much more difficult. We have chosen to generate molecules by using a genetic algorithm, a computer simulation that models biological evolution and natural selection. By creating a population of randomly generated molecules, we can apply a process of selection, mutation, and recombination to ensure that the best members of the population (i.e., those molecules that possess many of the qualities we are looking for) survive, while the worst members of the population "die." The best members are then modified by random mutation and by "mating" with other molecules to produce "offspring." After many hundreds (or thousands) of iterations, one hopes that the population will get better and better, that is, that the properties of the individuals in the population will more and more closely match the properties we want.
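
    A toy version of the evolutionary loop described above: candidates are vectors of numeric "molecular descriptors", fitness measures how closely a hypothetical structure-to-property function matches a target value, and the population evolves by selection, mating and random mutation. The property function and all parameters are invented for illustration.

    ```python
    # Genetic algorithm converging toward candidates whose (toy) property matches a target.
    import random

    rng = random.Random(42)
    TARGET = 7.5

    def prop(mol):                             # hypothetical structure -> property map
        return sum(x * x for x in mol) ** 0.5

    def fitness(mol):
        return -abs(prop(mol) - TARGET)        # higher is better

    def evolve(pop_size=50, genes=5, generations=200):
        pop = [[rng.uniform(0, 5) for _ in range(genes)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 4]          # the best members survive
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)      # "mating" (uniform crossover)
                child = [rng.choice(pair) for pair in zip(a, b)]
                i = rng.randrange(genes)             # random mutation
                child[i] += rng.gauss(0, 0.3)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print(prop(best))                          # close to the 7.5 target after evolution
    ```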

  6. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
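
    A minimal matching pursuit sketch: at each iteration the dictionary atom with the largest cross-correlation against the residual is selected, its projection subtracted, and the process repeated. The small sinusoid dictionary is illustrative and the MPD++ improvements (correlation thresholds, coarse-fine grids, multiple atom extraction) are not reproduced.

    ```python
    # Classical matching pursuit over a small unit-norm sinusoid dictionary.
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=5):
        residual = signal.astype(float).copy()
        decomposition = []
        for _ in range(n_atoms):
            scores = dictionary @ residual                 # cross-correlation with each atom
            k = int(np.abs(scores).argmax())               # best-fit atom
            coeff = scores[k]                              # atoms are unit-norm
            residual -= coeff * dictionary[k]              # subtract its projection
            decomposition.append((k, coeff))
        return decomposition, residual

    t = np.linspace(0, 1, 256, endpoint=False)
    atoms = np.array([np.sin(2 * np.pi * f * t) for f in range(1, 21)])
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm dictionary
    sig = 3.0 * atoms[4] + 1.5 * atoms[11]
    parts, res = matching_pursuit(sig, atoms, n_atoms=2)
    print(parts, np.linalg.norm(res))                      # recovers atoms 4 and 11
    ```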

  7. Computing highly specific and mismatch tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2003-01-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about twenty units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a sub-sequence other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate 63% to approximately 79% of qualified oligomers.
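
    The brute-force sketch below computes the quantity the paper evaluates far more efficiently with suffix and height arrays: the mismatch tolerance of an oligomer, i.e. the minimum Hamming distance to any same-length window in the background other than its own target site. On a real genome this loop is impractical, which is exactly why the dynamic programming algorithm is needed.

    ```python
    # Brute-force mismatch tolerance: minimum Hamming distance of an oligomer to any
    # same-length window in the background sequence, excluding the target position.
    def mismatch_tolerance(oligo, background, target_pos):
        k = len(oligo)
        best = k
        for i in range(len(background) - k + 1):
            if i == target_pos:
                continue                               # skip the intended target site
            mismatches = sum(a != b for a, b in zip(oligo, background[i:i + k]))
            best = min(best, mismatches)
        return best

    def meets_threshold(oligo, background, target_pos, threshold=3):
        """Check the constraint studied in the paper: tolerance >= threshold."""
        return mismatch_tolerance(oligo, background, target_pos) >= threshold

    genome = "ACGTACGGATTACAGGCATTACGGATCACAGG"
    oligo = genome[5:25]                               # 20-mer taken from position 5
    print(mismatch_tolerance(oligo, genome, target_pos=5))
    ```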

  8. Computing highly specific and noise-tolerant oligomers efficiently.

    PubMed

    Yamada, Tomoyuki; Morishita, Shinichi

    2004-03-01

    The sequencing of the genomes of a variety of species and the growing databases containing expressed sequence tags (ESTs) and complementary DNAs (cDNAs) facilitate the design of highly specific oligomers for use as genomic markers, PCR primers, or DNA oligo microarrays. The first step in evaluating the specificity of short oligomers of about 20 units in length is to determine the frequencies at which the oligomers occur. However, for oligomers longer than about fifty units this is not efficient, as they usually have a frequency of only 1. A more suitable procedure is to consider the mismatch tolerance of an oligomer, that is, the minimum number of mismatches that allows a given oligomer to match a substring other than the target sequence anywhere in the genome or the EST database. However, calculating the exact value of mismatch tolerance is computationally costly and impractical. Therefore, we studied the problem of checking whether an oligomer meets the constraint that its mismatch tolerance is no less than a given threshold. Here, we present an efficient dynamic programming algorithm solution that utilizes suffix and height arrays. We demonstrated the effectiveness of this algorithm by efficiently computing a dense list of numerous oligo-markers applicable to the human genome. Experimental results show that the algorithm runs faster than well-known Abrahamson's algorithm by orders of magnitude and is able to enumerate 65% to approximately 76% of qualified oligomers.

  9. C-element: a new clustering algorithm to find high quality functional modules in PPI networks.

    PubMed

    Ghasemi, Mahdieh; Rahgozar, Maseud; Bidkhori, Gholamreza; Masoudi-Nejad, Ali

    2013-01-01

    Graph clustering algorithms are widely used in the analysis of biological networks. Extracting functional modules in protein-protein interaction (PPI) networks is one such use. Most clustering algorithms whose focus is on finding functional modules try either to find clique-like subnetworks or to grow clusters starting from vertices with high degrees as seeds. These algorithms do not distinguish between a biological network and any other network. In the current research, we present a new procedure to find functional modules in PPI networks. Our main idea is to model a biological concept and to use this concept for finding good functional modules in PPI networks. In order to evaluate the quality of the obtained clusters, we compared the results of our algorithm with those of some other widely used clustering algorithms on three high throughput PPI networks from Saccharomyces cerevisiae, Homo sapiens and Caenorhabditis elegans as well as on some tissue specific networks. Gene Ontology (GO) analyses were used to compare the results of different algorithms. Each algorithm's result was then compared with GO-term derived functional modules. We also analyzed the effect of using tissue specific networks on the quality of the obtained clusters. The experimental results indicate that the new algorithm outperforms most of the others, and this improvement is more significant when tissue specific networks are used.

  10. Dimensionality Reduction Particle Swarm Algorithm for High Dimensional Clustering

    SciTech Connect

    Cui, Xiaohui; ST Charles, Jesse Lee; Potok, Thomas E; Beaver, Justin M

    2008-01-01

    The Particle Swarm Optimization (PSO) clustering algorithm can generate more compact clustering results than the traditional K-means clustering algorithm. However, when clustering high dimensional datasets, the PSO clustering algorithm is notoriously slow because its computation cost increases exponentially with the size of the dataset dimension. Dimensionality reduction techniques offer solutions that both significantly improve the computation time, and yield reasonably accurate clustering results in high dimensional data analysis. In this paper, we introduce research that combines different dimensionality reduction techniques with the PSO clustering algorithm in order to reduce the complexity of high dimensional datasets and speed up the PSO clustering process. We report significant improvements in total runtime. Moreover, the clustering accuracy of the dimensionality reduction PSO clustering algorithm is comparable to the one that uses full dimension space.

  11. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
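
    FLANN itself is a C++ library (also exposed through OpenCV); the sketch below only illustrates the same build-once, query-many nearest-neighbour pattern using SciPy's exact k-d tree as a stand-in. For the high-dimensional descriptors discussed above, FLANN's randomized k-d forests and priority search k-means trees trade exactness for speed, which the exact tree does not.

    ```python
    # Build a nearest-neighbour index once, then answer many k-NN queries against it.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    database = rng.standard_normal((10_000, 64))     # e.g. 64-D feature descriptors
    queries = rng.standard_normal((5, 64))

    tree = cKDTree(database)                          # build the index once
    distances, indices = tree.query(queries, k=3)     # 3 nearest neighbours per query
    print(indices.shape)                              # (5, 3)
    ```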

  12. A predictor-corrector guidance algorithm for use in high-energy aerobraking system studies

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Powell, Richard W.

    1991-01-01

    A three-degree-of-freedom predictor-corrector guidance algorithm has been developed specifically for use in high-energy aerobraking performance evaluations. The present study reports on both the development of this guidance algorithm and its application to the design of manned Mars aerobraking vehicles. Atmospheric simulations are performed to demonstrate the applicability of this algorithm and to evaluate the effect of atmospheric uncertainties upon the mission requirements. The off-nominal conditions simulated result from atmospheric density and aerodynamic characteristic mispredictions. The guidance algorithm is also used to provide relief from the high deceleration levels typically encountered in a high-energy aerobraking mission profile. Through this analysis, bank-angle modulation is shown to be an effective means of providing deceleration relief. Furthermore, the capability of the guidance algorithm to manage off-nominal vehicle aerodynamic and atmospheric density variations is demonstrated.

  13. High-Resolution Array with Prony, MUSIC, and ESPRIT Algorithms

    DTIC Science & Technology

    1992-08-25

    [OCR-degraded report documentation page. Recoverable information: Naval Research Laboratory, Washington, DC 20375-5320; report NRL/FR/5324-92-9397 (AD-A255 514); High-Resolution Array with Prony, MUSIC, and ESPRIT Algorithms; the report examines the array high-resolution properties of three algorithms: the Prony algorithm, the MUSIC algorithm, and the ESPRIT algorithm.]

  14. On the importance of FIB-SEM specific segmentation algorithms for porous media

    SciTech Connect

    Salzer, Martin; Thiele, Simon; Zengerle, Roland; Schmidt, Volker

    2014-09-15

    A new algorithmic approach to segmentation of highly porous three dimensional image data gained by focused ion beam tomography is described which extends the key principle of local threshold backpropagation described in Salzer et al. (2012). The technique of focused ion beam tomography has been shown to be capable of imaging the microstructure of functional materials. In order to perform a quantitative analysis on the corresponding microstructure, a segmentation task needs to be performed. However, algorithmic segmentation of images obtained with focused ion beam tomography is a challenging problem for highly porous materials if filling the pore phase, e.g. with epoxy resin, is difficult. The gray intensities of individual voxels are not sufficient to determine the phase represented by them and usual thresholding methods are not applicable. We thus propose a new approach to segmentation that respects the specifics of the imaging process of focused ion beam tomography. As an application of our approach, the segmentation of three dimensional images for a cathode material used in polymer electrolyte membrane fuel cells is discussed. We show that our approach preserves significantly more of the original nanostructure than a thresholding approach. - Highlights: • We describe a new approach to the segmentation of FIB-SEM images of porous media. • The first and last occurrences of structures are detected by analysing the z-profiles. • The algorithm is validated by comparing it to a manual segmentation. • The new approach shows significantly fewer artifacts than a thresholding approach. • A structural analysis also shows improved results for the obtained microstructure.

  15. Comparison of the specificity of implantable dual chamber defibrillator detection algorithms.

    PubMed

    Hintringer, Florian; Deibl, Martina; Berger, Thomas; Pachinger, Otmar; Roithinger, Franz Xaver

    2004-07-01

    The aim of the study was to compare the specificity of dual chamber ICD detection algorithms for correct classification of supraventricular tachyarrhythmias, derived from clinical studies of different sizes, in order to detect an impact of sample size on the specificity. Furthermore, the study sought to compare the specificities of detection algorithms calculated from clinical data with the specificity calculated from simulations of tachyarrhythmias. A survey was conducted of all available sources providing data regarding the specificity of five dual chamber ICDs. The specificity was correlated with the number of patients included, number of episodes, and number of supraventricular tachyarrhythmias recorded. The simulation was performed using tachyarrhythmias recorded in the electrophysiology laboratory. The range of the number of patients included in the studies was 78-1,029, the range of the total number of episodes recorded was 362-5,788, and the range of the number of supraventricular tachyarrhythmias used for calculation of the specificity for correct detection of these arrhythmias was 100 (Biotronik) to 1662 (Medtronic). The specificity for correct detection of supraventricular tachyarrhythmias was 90% (Biotronik), 89% (ELA Medical), 89% (Guidant), 68% (Medtronic), and 76% (St. Jude Medical). There was an inverse correlation (r = -0.9, P = 0.037) between the specificity for correct classification of supraventricular tachyarrhythmias and the number of patients. The specificity for correct detection of supraventricular tachyarrhythmias calculated from the simulation after correction for the clinical prevalence of the simulated tachyarrhythmias was 95% (Biotronik), 99% (ELA Medical), 94% (Guidant), 93% (Medtronic), and 92% (St. Jude Medical). In conclusion, the specificity of ICD detection algorithms calculated from clinical studies or registries may depend on the number of patients studied. Therefore, a direct comparison between different detection algorithms

  16. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The

  17. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm firstly extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filtering to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the one based on the dark channel prior and guided filtering. The average computation time of the new algorithm is around 40% of the original, and the detection ability for UAV images in fog and haze weather is improved effectively.
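
    For orientation, the sketch below implements only the dark-channel-prior baseline that the improved algorithm starts from: a local minimum filter gives the dark channel, the brightest dark-channel pixels give the atmospheric light, and these yield the crude transmission map that the paper then splits into edge and non-edge regions for separate guided filtering (that refinement is not reproduced here). The patch size and omega follow the common choices from the original dark channel prior work, not values from this paper.

    ```python
    # Dark-channel-prior baseline: dark channel, atmospheric light, crude transmission.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def crude_transmission(image, patch=15, omega=0.95):
        """image: float RGB array in [0, 1], shape (H, W, 3)."""
        dark = minimum_filter(image.min(axis=2), size=patch)        # dark channel
        flat = dark.ravel()
        bright = flat.argsort()[-max(1, flat.size // 1000):]         # top 0.1% pixels
        A = image.reshape(-1, 3)[bright].max(axis=0)                  # atmospheric light
        dark_norm = minimum_filter((image / A).min(axis=2), size=patch)
        return 1.0 - omega * dark_norm, A

    hazy = np.random.default_rng(1).uniform(0.3, 1.0, (120, 160, 3))  # stand-in image
    t, A = crude_transmission(hazy)
    recovered = (hazy - A) / np.clip(t[..., None], 0.1, 1.0) + A      # scene radiance
    print(t.min(), t.max())
    ```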

  18. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision for assessing an object's surface quality, many images are required to be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process; in the inspection of garments/canvas on the production line; in the nesting of irregular shapes into a given surface... The most common method of subtracting the total area from the sum of defective areas does not give an acceptable indication of how much of the 'good' area can be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims. That is, how much useable area the user can use, not the total good area as the supplier estimated. An important application of the developed technique is in the leather industry where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument due to disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading, re-sorting... The developed basic algorithm for area scanning of a digital image will be presented. The implementation of an improved scanning algorithm will be discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.

  19. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results prove the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to the scalability and accuracy.
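
    A serial illustration of the quasi-Monte Carlo estimate described above, using SciPy's Sobol generator rather than the report's parallel implementation: low-discrepancy points cover the unit hypercube evenly and the integral is approximated by the mean of the integrand over them. The test integrand is chosen so the exact answer is 1.0; in the parallel setting, blocks of points would simply be distributed across MPI ranks or OpenMP threads.

    ```python
    # Quasi-Monte Carlo integration over [0,1]^d with a scrambled Sobol sequence.
    import numpy as np
    from scipy.stats import qmc

    def qmc_integrate(f, dim, m=14, seed=0):
        points = qmc.Sobol(d=dim, scramble=True, seed=seed).random_base2(m)  # 2^m samples
        return float(np.mean(f(points)))

    def test_integrand(x):                       # prod_i 2*x_i integrates to 1 over [0,1]^d
        return np.prod(2.0 * x, axis=1)

    print(qmc_integrate(test_integrand, dim=8))  # close to 1.0
    ```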

  20. An algorithm for on-line detection of high frequency oscillations related to epilepsy.

    PubMed

    López-Cuevas, Armando; Castillo-Toledo, Bernardino; Medina-Ceja, Laura; Ventura-Mejía, Consuelo; Pardo-Peña, Kenia

    2013-06-01

    Recent studies suggest that the appearance of signals with high frequency oscillation components in specific regions of the brain is related to the incidence of epilepsy. These oscillations are in general small in amplitude and short in duration, making them difficult to identify. The analysis of these oscillations is particularly important in epilepsy and their study could lead to the development of better medical treatments. Therefore, the development of algorithms for detection of these high frequency oscillations is of great importance. In this work, a new algorithm for automatic detection of high frequency oscillations is presented. This algorithm uses approximate entropy and artificial neural networks to extract features in order to detect and classify high frequency components in electrophysiological signals. In contrast to the existing algorithms, the one proposed here is fast and accurate, and can be implemented on-line, thus reducing the time employed to analyze the experimental electrophysiological signals.
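
    A sketch of the approximate-entropy feature mentioned above: low ApEn in a short window flags the regular, stereotyped segments typical of high frequency oscillations. The window length m = 2 and tolerance r = 0.2·SD follow common convention rather than the paper's settings, and the artificial neural network that consumes these features is not reproduced.

    ```python
    # Approximate entropy (ApEn): regular oscillatory segments score much lower than noise.
    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        x = np.asarray(x, dtype=float)
        n = x.size
        r = r_factor * x.std()

        def phi(m):
            # Embed the signal as overlapping vectors of length m.
            emb = np.array([x[i:i + m] for i in range(n - m + 1)])
            # Chebyshev distance between every pair of embedded vectors.
            dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
            counts = (dist <= r).mean(axis=1)
            return np.log(counts).mean()

        return phi(m) - phi(m + 1)

    t = np.arange(1000) / 1000.0
    regular = np.sin(2 * np.pi * 80 * t)                      # oscillation-like segment
    noisy = np.random.default_rng(0).standard_normal(1000)
    print(approximate_entropy(regular), approximate_entropy(noisy))  # regular << noisy
    ```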

  1. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying, Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  2. Chaotic substitution for highly autocorrelated data in encryption algorithm

    NASA Astrophysics Data System (ADS)

    Anees, Amir; Siddiqui, Adil Masood; Ahmed, Fawad

    2014-09-01

    This paper addresses the major drawback of substitution boxes on highly auto-correlated data and proposes a novel chaotic substitution technique for the encryption algorithm to solve this problem. Simulation results reveal that the overall strength of the proposed encryption technique is much greater than that of most existing encryption techniques. Furthermore, a few statistical security analyses have also been performed to show the strength of the proposed algorithm.
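
    One common way to build a chaotic substitution box, sketched below for illustration: iterate the logistic map in its chaotic regime and rank the visited values to obtain a byte permutation. The map, its parameters and the key-to-initial-condition mapping are generic choices and not necessarily the specific construction proposed in this paper.

    ```python
    # Build an S-box from logistic-map iterates and apply it byte-wise to data.
    import numpy as np

    def chaotic_sbox(x0=0.7131, r=3.99, warmup=1000):
        x = x0
        for _ in range(warmup):                 # discard transient iterations
            x = r * x * (1.0 - x)
        samples = np.empty(256)
        for i in range(256):
            x = r * x * (1.0 - x)
            samples[i] = x
        return np.argsort(samples).astype(np.uint8)   # rank ordering -> permutation of 0..255

    def substitute(data: bytes, sbox: np.ndarray) -> bytes:
        return bytes(int(sbox[b]) for b in data)

    sbox = chaotic_sbox()                       # x0 would normally be derived from the key
    ciphertext = substitute(b"highly autocorrelated data ....", sbox)
    print(ciphertext.hex())
    ```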

  3. Highly parallel consistent labeling algorithm suitable for optoelectronic implementation.

    PubMed

    Marsden, G C; Kiamilev, F; Esener, S; Lee, S H

    1991-01-10

    Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and, therefore, are well suited to an optoelectronic implementation.

  4. A High Precision Terahertz Wave Image Reconstruction Algorithm

    PubMed Central

    Guo, Qijia; Chang, Tianying; Geng, Guoshuai; Jia, Chengyan; Cui, Hong-Liang

    2016-01-01

    With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements that are almost always supposed to be spherical wave radiators, including single antennae. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices the quality of images for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, the features of which combine both of the classical algorithms mentioned above. In this research, it is used for mechanical scanning, and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in our simulations and experiments described herein that the algorithm can reconstruct images with high precision. PMID:27455269

  5. Development of High Specific Strength Envelope Materials

    NASA Astrophysics Data System (ADS)

    Komatsu, Keiji; Sano, Masa-Aki; Kakuta, Yoshiaki

    Progress in materials technology has produced a much more durable synthetic fabric envelope for the non-rigid airship. Flexible materials are required to form airship envelopes, ballonets, load curtains, gas bags and coverings for rigid structures. Polybenzoxazole fiber (Zylon) and polyarylate fiber (Vectran) show high specific tensile strength, so we developed membranes using these high specific tensile strength fibers as load carriers. The main material developed is a Zylon or Vectran load carrier sealed internally with a polyurethane-bonded inner gas retention film (EVOH). The external surface provides weather protection with, for instance, a titanium oxide integrated polyurethane or Tedlar film. The mechanical test results show that a tensile strength of 1,000 N/cm is attained with a weight of less than 230 g/m2. In addition to the mechanical properties, the temperature dependence of the joint strength and the solar absorptivity and emissivity of the surface were measured.

  6. Production of high specific activity silicon-32

    SciTech Connect

    Phillips, D.R.; Brzezinski, M.A.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development Project (LDRD) at Los Alamos National Laboratory (LANL). There were two primary objectives for the work performed under this project. The first was to take advantage of capabilities and facilities at Los Alamos to produce the radionuclide {sup 32}Si in unusually high specific activity. The second was to combine the radioanalytical expertise at Los Alamos with the expertise at the University of California to develop methods for the application of {sup 32}Si in biological oceanographic research related to global climate modeling. The first objective was met by developing targetry for proton spallation production of {sup 32}Si in KCl targets and chemistry for its recovery in very high specific activity. The second objective was met by developing a validated field-useable, radioanalytical technique, based upon gas-flow proportional counting, to measure the dynamics of silicon uptake by naturally occurring diatoms.

  7. A high capacity 3D steganography algorithm.

    PubMed

    Chao, Min-Wen; Lin, Chao-hung; Yu, Cheng-Wei; Lee, Tong-Yee

    2009-01-01

    In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multilayered embedding scheme that hides secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13. To the best of our knowledge, this novel approach provides much higher hiding capacity than other state-of-the-art approaches, while obeying the low-distortion and security requirements basic to steganography on 3D models.
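
    The authors' multilayered embedding scheme is not described in enough detail here to reproduce, so the sketch below only illustrates the general idea of hiding message bits in the low-order precision of vertex coordinates. It is a simplified single-layer stand-in, not the proposed method, and the quantization step is an arbitrary choice.

      import numpy as np

      STEP = 1e-4  # quantization step; a bit is hidden in the parity of the quantized coordinate

      def embed_bits(vertices, bits):
          """Hide one bit per vertex coordinate by forcing the parity of its quantized value."""
          v = np.asarray(vertices, dtype=float).ravel().copy()
          assert len(bits) <= len(v), "cover model too small for the message"
          for i, bit in enumerate(bits):
              q = int(round(v[i] / STEP))
              if q % 2 != bit:                 # adjust by one quantization step to fix parity
                  q += 1
              v[i] = q * STEP
          return v.reshape(np.shape(vertices))

      def extract_bits(vertices, n_bits):
          v = np.asarray(vertices, dtype=float).ravel()
          return [int(round(v[i] / STEP)) % 2 for i in range(n_bits)]

      cover = np.random.default_rng(1).uniform(-1, 1, size=(100, 3))   # toy vertex array
      message = [1, 0, 1, 1, 0, 0, 1, 0]
      stego = embed_bits(cover, message)
      assert extract_bits(stego, len(message)) == message
      print("max vertex distortion:", np.max(np.abs(stego - cover)))   # bounded by 1.5 * STEP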

  8. Research on High-Specific-Heat Dielectrics

    DTIC Science & Technology

    1990-01-31

    … as well as related thermodynamic properties, we infer the following conclusions: 1. The exceptionally high C_p peaks for ZnCr2O4 and CdCr2O4 … which determine the electric, magnetic, and thermodynamic properties of the system. In addition, we have found from this microscopic analysis that … properties of this lattice will therefore be dominated by the properties of the cluster. The thermodynamic properties such as the energy, the specific …

  9. High-order hydrodynamic algorithms for exascale computing

    SciTech Connect

    Morgan, Nathaniel Ray

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  10. Wp specific methylation of highly proliferated LCLs

    SciTech Connect

    Park, Jung-Hoon; Jeon, Jae-Pil; Shim, Sung-Mi; Nam, Hye-Young; Kim, Joon-Woo; Han, Bok-Ghee; Lee, Suman . E-mail: suman@cha.ac.kr

    2007-06-29

    The epigenetic regulation of viral genes may be important for the life cycle of EBV. We determined the methylation status of three viral promoters (Wp, Cp, Qp) from EBV B-lymphoblastoid cell lines (LCLs) by pyrosequencing. Our pyrosequencing data showed that the CpG region of Wp was methylated, but the others were not. Interestingly, Wp methylation increased with proliferation of the LCLs: it was as high as 74.9% in late-passage LCLs, but 25.6% in early-passage LCLs. Wp-specific hypermethylation was also found in two Burkitt's lymphoma cell lines (>80%). Interestingly, the expression of the EBNA2 gene, which is located directly next to Wp, was associated with its methylation. Our data suggest that Wp-specific methylation may be an important indicator of the proliferation status of LCLs, and that the epigenetic regulation of the EBNA2 gene by Wp should be further defined, possibly in connection with other biological processes.

  11. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
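
    A minimal serial sketch of the approach described above (alternating gradient descent steps on the endmembers E and the abundances A under the linear mixing model Y ≈ EA, with simple non-negativity and sum-to-one restrictions) is shown below. The learning rate, iteration count, and synthetic data are illustrative assumptions, and the parallel implementation is not reproduced.

      import numpy as np

      def unmix(Y, p, n_iter=5000, lr=1e-3, seed=0):
          # Jointly estimate endmembers E (bands x p) and abundances A (p x pixels)
          # for the linear mixing model Y ~= E @ A by projected gradient descent.
          rng = np.random.default_rng(seed)
          bands, pixels = Y.shape
          E = rng.uniform(0, 1, (bands, p))
          A = rng.uniform(0, 1, (p, pixels))
          A /= A.sum(axis=0, keepdims=True)
          for _ in range(n_iter):
              R = E @ A - Y                       # residual of the linear model
              E -= lr * (R @ A.T)                 # gradient step on the endmembers
              A -= lr * (E.T @ R)                 # gradient step on the abundances
              E = np.clip(E, 0.0, None)           # non-negativity restriction
              A = np.clip(A, 0.0, None)
              A /= np.maximum(A.sum(axis=0, keepdims=True), 1e-12)   # sum-to-one abundances
          return E, A

      # Synthetic check: mixtures of 3 random endmembers observed in 50 bands.
      rng = np.random.default_rng(3)
      E_true = rng.uniform(0, 1, (50, 3))
      A_true = rng.dirichlet(np.ones(3), size=400).T
      Y = E_true @ A_true
      E_est, A_est = unmix(Y, p=3)
      print("reconstruction MSE:", np.mean((E_est @ A_est - Y) ** 2))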

  12. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2003-10-28

    A process for the selective production and isolation of high specific activity Cu.sup.67 from a proton-irradiated enriched Zn.sup.70 target is disclosed. The process comprises target fabrication, target irradiation with low energy (<25 MeV) protons, chemical separation of the Cu.sup.67 product from the target material and from radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn.sup.70 target material, and fabrication of new targets for re-irradiation.

  13. Production Of High Specific Activity Copper-67

    DOEpatents

    Jamriska, Sr., David J.; Taylor, Wayne A.; Ott, Martin A.; Fowler, Malcolm; Heaton, Richard C.

    2002-12-03

    A process for the selective production and isolation of high specific activity Cu.sup.67 from a proton-irradiated enriched Zn.sup.70 target is disclosed. The process comprises target fabrication, target irradiation with low energy (<25 MeV) protons, chemical separation of the Cu.sup.67 product from the target material and from radioactive impurities of gallium, cobalt, iron, and stable aluminum via electrochemical methods or ion exchange using both anion and cation organic ion exchangers, chemical recovery of the enriched Zn.sup.70 target material, and fabrication of new targets for re-irradiation.

  14. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH is the highest capacity yet achieved at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  15. Benefits Assessment of Algorithmically Combining Generic High Altitude Airspace Sectors

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod; Lai, Chok Fung; Kopardekar, Parimal

    2009-01-01

    In today's air traffic control operations, sectors that have traffic demand below capacity are combined so that fewer controller teams are required to manage air traffic. Controllers in current operations are certified to control a group of six to eight sectors, known as an area of specialization. Sector combinations are restricted to occur within areas of specialization. Since there are few sector combination possibilities in each area of specialization, human supervisors can effectively make sector combination decisions. In the future, automation and procedures will allow any appropriately trained controller to control any of a large set of generic sectors. The primary benefit of this will be increased controller staffing flexibility. Generic sectors will also allow more options for combining sectors, making sector combination decisions difficult for human supervisors. A sector-combining algorithm can assist supervisors as they make generic sector combination decisions. A heuristic algorithm for combining under-utilized airspace sectors to conserve air traffic control resources has been described and analyzed. Analysis of the algorithm and comparisons with operational sector combinations indicate that this algorithm could more efficiently utilize air traffic control resources than current sector combinations. This paper investigates the benefits of using the sector-combining algorithm proposed in previous research to combine high altitude generic airspace sectors. Simulations are conducted in which all the high altitude sectors in a center are allowed to combine, as will be possible in generic high altitude airspace. Furthermore, the algorithm is adjusted to use a version of the simplified dynamic density (SDD) workload metric that has been modified to account for workload reductions due to automatic handoffs and Automatic Dependent Surveillance Broadcast (ADS-B). This modified metric is referred to here as future simplified dynamic density (FSDD). Finally

  16. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    NASA Technical Reports Server (NTRS)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction methods, and design algorithms for the evaluation of IUE High Dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
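
    Since the extraction algorithm centers on the Voigt profile (a convolution of a Gaussian core with Lorentzian wings), a short numerical sketch of that profile is given below using SciPy's Faddeeva function wofz. The sigma and gamma values are illustrative, and the actual IUE extraction code is not reproduced.

      import numpy as np
      from scipy.special import wofz

      def voigt(x, sigma, gamma):
          """Voigt profile: convolution of a Gaussian (width sigma) with a Lorentzian (width gamma),
          evaluated via the real part of the Faddeeva function wofz."""
          z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
          return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

      # Gaussian-dominated core, Lorentzian-dominated wings.
      x = np.linspace(-10, 10, 9)
      print(voigt(x, sigma=1.0, gamma=0.5))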

  17. An incremental high-utility mining algorithm with transaction insertion.

    PubMed

    Lin, Jerry Chun-Wei; Gan, Wensheng; Hong, Tzung-Pei; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable for real-world applications, since the items purchased by a customer carry other factors, such as profit or quantity. High-utility mining was designed to overcome this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.
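
    The utility-list machinery of the proposed incremental algorithm is beyond the scope of this summary, but the basic notion of itemset utility (quantity times unit profit, summed over the transactions containing the itemset) can be sketched as follows; the toy transactions, profits, and threshold are invented for illustration.

      # Sketch of the basic utility computation underlying high-utility mining:
      # the utility of an itemset is quantity x unit profit, summed over all
      # transactions that contain every item of the itemset.
      transactions = [
          {"bread": 2, "milk": 1},
          {"bread": 1, "milk": 3, "butter": 2},
          {"milk": 2, "butter": 1},
      ]
      profit = {"bread": 1.0, "milk": 0.5, "butter": 2.0}

      def utility(itemset, transactions, profit):
          total = 0.0
          for t in transactions:
              if all(item in t for item in itemset):
                  total += sum(t[item] * profit[item] for item in itemset)
          return total

      min_utility = 4.0
      for itemset in [("bread",), ("milk", "butter"), ("bread", "milk")]:
          u = utility(itemset, transactions, profit)
          print(itemset, u, "high-utility" if u >= min_utility else "low-utility")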

  18. High performance genetic algorithm for VLSI circuit partitioning

    NASA Astrophysics Data System (ADS)

    Dinu, Simona

    2016-12-01

    Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., minimizing the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. Studies in the literature have shown the problem to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another direction for future research.
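
    A toy island-model genetic algorithm for min-cut balanced bipartitioning is sketched below to illustrate the parallel structure described above. The graph, population sizes, migration rule, and simple balance penalty are illustrative assumptions, and the fuzzy-controlled information exchange of the paper is not reproduced.

      import random
      random.seed(0)

      # Toy graph as an edge list over 8 nodes; a chromosome assigns each node to part 0 or 1.
      EDGES = [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5), (5, 6), (6, 7), (4, 7), (3, 4)]
      N = 8

      def cut_size(chrom):
          return sum(1 for u, v in EDGES if chrom[u] != chrom[v])

      def fitness(chrom):
          balance_penalty = abs(sum(chrom) - N // 2)           # keep the two parts near-equal
          return cut_size(chrom) + 10 * balance_penalty

      def offspring(parent_a, parent_b):
          cut = random.randrange(1, N)                          # one-point crossover
          child = parent_a[:cut] + parent_b[cut:]
          i = random.randrange(N)                               # single-gene mutation
          child[i] ^= 1
          return child

      def evolve(island, generations=30):
          for _ in range(generations):
              island.sort(key=fitness)
              elite = island[: len(island) // 2]                # elitist selection of the best half
              island = elite + [offspring(*random.sample(elite, 2)) for _ in elite]
          return sorted(island, key=fitness)

      islands = [[[random.randint(0, 1) for _ in range(N)] for _ in range(20)] for _ in range(2)]
      for epoch in range(5):
          islands = [evolve(isl) for isl in islands]
          # Migrate each island's best individual into the other island, replacing its worst.
          islands[0][-1], islands[1][-1] = islands[1][0][:], islands[0][0][:]
      best = min((ind for isl in islands for ind in isl), key=fitness)
      print("partition:", best, "cut size:", cut_size(best))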

  19. Stride search: A general algorithm for storm detection in high-resolution climate data

    SciTech Connect

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
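
    The defining feature of Stride Search, circular search regions whose physical size is independent of the grid, can be sketched as follows. The search radius, threshold, and toy detection criterion (a maximum field value inside each region) are illustrative assumptions rather than the paper's tropical cyclone criteria.

      import numpy as np

      EARTH_RADIUS_KM = 6371.0

      def great_circle_km(lat1, lon1, lat2, lon2):
          # Haversine distance; latitudes/longitudes in degrees, lat2/lon2 may be arrays.
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlat = p2 - p1
          dlon = np.radians(lon2 - lon1)
          a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
          return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

      def stride_search(lats, lons, field, radius_km=500.0, threshold=1e-4):
          # Region centres are spaced by the search radius in latitude and, at each
          # latitude, by radius / cos(latitude) in longitude, so every search region
          # covers the same physical area regardless of the data's grid or resolution.
          detections = []
          dlat = np.degrees(radius_km / EARTH_RADIUS_KM)
          for clat in np.arange(-90.0 + dlat, 90.0, dlat):
              dlon = dlat / max(np.cos(np.radians(clat)), 1e-6)
              for clon in np.arange(0.0, 360.0, dlon):
                  dist = great_circle_km(clat, clon, lats, lons)
                  inside = dist <= radius_km
                  if inside.any() and field[inside].max() >= threshold:
                      detections.append((clat, clon, field[inside].max()))
          return detections

      # Toy data: scattered points with one strong "storm" signal near (15 N, 140 E).
      rng = np.random.default_rng(0)
      lats, lons = rng.uniform(-90, 90, 5000), rng.uniform(0, 360, 5000)
      field = rng.uniform(0, 5e-5, 5000)
      field[np.argmin(great_circle_km(15.0, 140.0, lats, lons))] = 5e-4
      print(len(stride_search(lats, lons, field)), "regions exceeded the threshold")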

  20. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.

  1. Production of high specific activity silicon-32

    DOEpatents

    Phillips, Dennis R.; Brzezinski, Mark A.

    1994-01-01

    A process for the preparation of silicon-32 is provided and includes contacting an irradiated potassium chloride target, including spallation products from a prior irradiation, with sufficient water, hydrochloric acid or potassium hydroxide to form a solution; filtering the solution; adjusting the pH of the solution to from about 5.5 to about 7.5; admixing sufficient molybdate reagent to the solution to adjust the pH of the solution to about 1.5 and to form a silicon-molybdate complex; contacting the solution including the silicon-molybdate complex with a dextran-based material; washing the dextran-based material to remove residual contaminants such as sodium-22; separating the silicon-molybdate complex from the dextran-based material as another solution; adding sufficient hydrochloric acid and hydrogen peroxide to the solution to prevent reformation of the silicon-molybdate complex and to yield an oxidation state of the molybdate adapted for subsequent separation by an anion exchange material; contacting the solution with an anion exchange material, whereby the molybdate is retained by the anion exchange material and the silicon remains in solution; and optionally adding sufficient alkali metal hydroxide to adjust the pH of the solution to about 12 to 13. Additionally, a high specific activity silicon-32 product having high purity is provided.

  2. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  3. A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Kopaczka, Marcin; Wimmer, Andreas; Platsch, Günther; Declerck, Jérôme

    2014-03-01

    Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent when reviewing fine anatomical structures such as the ribs when assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented. It allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which may otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.

  4. High flux isotope reactor technical specifications

    SciTech Connect

    Not Available

    1982-04-01

    Technical specifications are presented concerning safety limits and limiting safety system settings; limiting conditions for operation; surveillance requirements; design features; administrative controls; and accidents and anticipated transients.

  5. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized
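
    The multistage scheme described above can be illustrated with a small sketch in which cheap per-class scores computed from a few features are used to truncate unlikely classes before a full-dimensional evaluation. The Gaussian-style scores, the truncation threshold, and the synthetic class means are illustrative assumptions, not the paper's criteria.

      import numpy as np

      rng = np.random.default_rng(0)
      n_classes, dim = 20, 50
      means = rng.normal(0, 3, (n_classes, dim))          # toy class means in a high-dimensional space

      def classify_multistage(x, truncation_threshold=0.01):
          # Stage 1: cheap scores from a few features only; drop clearly unlikely classes.
          d_cheap = np.linalg.norm(means[:, :5] - x[:5], axis=1)
          scores = np.exp(-0.5 * d_cheap ** 2)
          scores /= scores.sum()
          candidates = np.where(scores >= truncation_threshold)[0]
          # Stage 2: full-dimensional distance evaluated only for the surviving classes.
          d_full = np.linalg.norm(means[candidates] - x, axis=1)
          return candidates[np.argmin(d_full)], len(candidates)

      x = means[7] + rng.normal(0, 0.5, dim)               # sample drawn near class 7
      label, n_evaluated = classify_multistage(x)
      print("predicted class:", label, "| classes evaluated at the full stage:", n_evaluated)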

  6. Utility of gene-specific algorithms for predicting pathogenicity of uncertain gene variants

    PubMed Central

    Lyon, Elaine; Williams, Marc S; Narus, Scott P; Facelli, Julio C; Mitchell, Joyce A

    2011-01-01

    The rapid advance of gene sequencing technologies has produced an unprecedented rate of discovery of genome variation in humans. A growing number of authoritative clinical repositories archive gene variants and disease phenotypes, yet there are currently many more gene variants that lack clear annotation or disease association. To date, there has been very limited coverage of gene-specific predictors in the literature. Here, an evaluation is presented of “gene-specific” predictor models based on a naïve Bayesian classifier for 20 gene–disease datasets, containing 3986 variants with clinically characterized patient conditions. The utility of gene-specific prediction is then compared with “all-gene” generalized prediction and also with existing popular predictors. Gene-specific computational prediction models derived from clinically curated gene variant disease datasets often outperform established generalized algorithms for novel and uncertain gene variants. PMID:22037892
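
    As a toy illustration of the comparison above (gene-specific versus pooled "all-gene" models), the sketch below trains one naïve Bayes classifier per gene and one pooled classifier on synthetic variant features. The features, datasets, and class boundaries are invented and do not correspond to the study's 20 gene-disease datasets.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)

      def make_gene_dataset(shift):
          """Synthetic variant features (e.g., conservation and biochemical scores) for one gene;
          the class boundary is deliberately shifted differently for each gene."""
          X = rng.normal(0, 1, (300, 4))
          y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)     # pathogenic vs benign
          return X, y

      genes = {"GENE_A": make_gene_dataset(+2.0), "GENE_B": make_gene_dataset(-2.0)}

      # Pooled "all-gene" model versus one model per gene.
      X_all = np.vstack([X for X, _ in genes.values()])
      y_all = np.concatenate([y for _, y in genes.values()])
      pooled = GaussianNB().fit(X_all, y_all)
      per_gene = {g: GaussianNB().fit(X, y) for g, (X, y) in genes.items()}

      for g, (X, y) in genes.items():
          print(g,
                "pooled acc: %.2f" % pooled.score(X, y),
                "gene-specific acc: %.2f" % per_gene[g].score(X, y))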

  7. Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dinc, Ali

    2016-09-01

    In this study, an original code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) by employing an elitist genetic algorithm. First, preliminary sizing of a UAV and its turboprop engine was done by the code for a given mission profile. Secondly, single- and multi-objective optimization were performed for selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In the first single-objective optimization case, UAV loiter time was improved by 17.5% from the baseline within the given boundaries, or constraints, on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% from the baseline. In the multi-objective optimization case, where the previous two objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% from the baseline, respectively, for the same constraints.

  8. A moving frame algorithm for high Mach number hydrodynamics

    NASA Astrophysics Data System (ADS)

    Trac, Hy; Pen, Ue-Li

    2004-07-01

    We present a new approach to Eulerian computational fluid dynamics that is designed to work at high Mach numbers encountered in astrophysical hydrodynamic simulations. Standard Eulerian schemes that strictly conserve total energy suffer from the high Mach number problem and proposed solutions to additionally solve the entropy or thermal energy still have their limitations. In our approach, the Eulerian conservation equations are solved in an adaptive frame moving with the fluid where Mach numbers are minimized. The moving frame approach uses a velocity decomposition technique to define local kinetic variables while storing the bulk kinetic components in a smoothed background velocity field that is associated with the grid velocity. Gravitationally induced accelerations are added to the grid, thereby minimizing the spurious heating problem encountered in cold gas flows. Separately tracking local and bulk flow components allows thermodynamic variables to be accurately calculated in both subsonic and supersonic regions. A main feature of the algorithm, that is not possible in previous Eulerian implementations, is the ability to resolve shocks and prevent spurious heating where both the pre-shock and post-shock fluid are supersonic. The hybrid algorithm combines the high-resolution shock capturing ability of the second-order accurate Eulerian TVD scheme with a low-diffusion Lagrangian advection scheme. We have implemented a cosmological code where the hydrodynamic evolution of the baryons is captured using the moving frame algorithm while the gravitational evolution of the collisionless dark matter is tracked using a particle-mesh N-body algorithm. Hydrodynamic and cosmological tests are described and results presented. The current code is fast, memory-friendly, and parallelized for shared-memory machines.

  9. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 AH the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  10. Measuring Specific Heats at High Temperatures

    NASA Technical Reports Server (NTRS)

    Vandersande, Jan W.; Zoltan, Andrew; Wood, Charles

    1987-01-01

    Flash apparatus for measuring thermal diffusivities at temperatures from 300 to 1,000 degrees C modified; measures specific heats of samples to accuracy of 4 to 5 percent. Specific heat and thermal diffusivity of sample measured. Xenon flash emits pulse of radiation, absorbed by sputtered graphite coating on sample. Sample temperature measured with thermocouple, and temperature rise due to pulse measured by InSb detector.
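
    In the flash method described above, the specific heat follows directly from the absorbed pulse energy and the measured temperature rise. A one-line calculation with purely illustrative numbers (not the article's data) is shown below.

      def specific_heat(pulse_energy_J, absorptance, mass_kg, delta_T_K):
          """c_p = absorbed pulse energy / (mass * temperature rise)."""
          return pulse_energy_J * absorptance / (mass_kg * delta_T_K)

      # Illustrative numbers only: 10 J flash, 95% absorbed by the graphite coating,
      # 2 g sample, 5 K temperature rise measured by the InSb detector.
      print(specific_heat(10.0, 0.95, 2.0e-3, 5.0), "J/(kg K)")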

  11. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm using artificial neural networks, called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  12. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space with low communication costs, yielding optimal communication costs in some cases. Another evaluated parameter, the vulnerability index, is then considered as a means of estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that considers the trade-off between these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.

  13. Trajectories for High Specific Impulse High Specific Power Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.

  14. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14 vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842

  15. An evaluation of Z-transform algorithms for identifying subject-specific abnormalities in neuroimaging data.

    PubMed

    Mayer, Andrew R; Dodd, Andrew B; Ling, Josef M; Wertz, Christopher J; Shaff, Nicholas A; Bedrick, Edward J; Viamonte, Carlo

    2017-03-20

    The need for algorithms that capture subject-specific abnormalities (SSA) in neuroimaging data is increasingly recognized across many neuropsychiatric disorders. However, the effects of initial distributional properties (e.g., normal versus non-normally distributed data), sample size, and typical preprocessing steps (spatial normalization, blurring kernel and minimal cluster requirements) on SSA remain poorly understood. The current study evaluated the performance of several commonly used z-transform algorithms [leave-one-out (LOO); independent sample (IDS); Enhanced Z-score Microstructural Assessment of Pathology (EZ-MAP); distribution-corrected z-scores (DisCo-Z); and robust z-scores (ROB-Z)] for identifying SSA using simulated and diffusion tensor imaging data from healthy controls (N = 50). Results indicated that all methods (LOO, IDS, EZ-MAP and DisCo-Z) with the exception of the ROB-Z eliminated spurious differences that are present across artificially created groups following a standard z-transform. However, LOO and IDS consistently overestimated the true number of extrema (i.e., SSA) across all sample sizes and distributions. The EZ-MAP and DisCo-Z algorithms more accurately estimated extrema across most distributions and sample sizes, with the exception of skewed distributions. DTI results indicated that registration algorithm (linear versus non-linear) and blurring kernel size differentially affected the number of extrema in positive versus negative tails. Increasing the blurring kernel size increased the number of extrema, although this effect was much more prominent when a minimum cluster volume was applied to the data. In summary, current results highlight the need to statistically compare the frequency of SSA in control samples or to develop appropriate confidence intervals for patient data.
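
    The leave-one-out (LOO) z-transform evaluated above has a simple form: each subject's value is standardized against the mean and standard deviation of the remaining subjects. A sketch for a single simulated measure follows; the simulated values and the appended abnormal subject are illustrative and involve none of the spatial preprocessing discussed in the study.

      import numpy as np

      def loo_z_scores(values):
          """Leave-one-out z-scores: each subject is standardized against all other subjects."""
          values = np.asarray(values, dtype=float)
          z = np.empty_like(values)
          for i in range(len(values)):
              rest = np.delete(values, i)
              z[i] = (values[i] - rest.mean()) / rest.std(ddof=1)
          return z

      # Simulated fractional anisotropy values for one voxel across 50 healthy controls,
      # plus one artificially abnormal subject appended at the end.
      rng = np.random.default_rng(0)
      fa = np.append(rng.normal(0.45, 0.03, 50), 0.30)
      z = loo_z_scores(fa)
      print("abnormal subject z-score: %.2f" % z[-1])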

  16. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    SciTech Connect

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose computing on graphics processing units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  17. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  18. Advanced Non-Linear Control Algorithms Applied to Design Highly Maneuverable Autonomous Underwater Vehicles (AUVs)

    DTIC Science & Technology

    2007-08-01

    Advanced non-linear control algorithms applied to design highly maneuverable Autonomous Underwater Vehicles (AUVs). Vladimir Djapic, Jay A. Farrell ... hierarchical, such that an "inner loop" non-linear controller (which outputs the appropriate thrust values) is the same for all mission scenarios, while a ... library of "outer-loop" non-linear controllers is available to implement specific maneuvering scenarios. On top of the outer loop is the mission planner

  19. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be

  20. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing for composite structures. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflected spectrum is calculated using an improved transfer matrix algorithm. The K-means singular value decomposition (K-SVD) sparse dictionary is trained. In the test, a spectrum with a limited number of sample points can be obtained, and the high resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, the effect of this method is better. The match rate of the recovered spectrum and the original spectrum is over 95%.

  1. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based Cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the Cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.
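
    The GPU/CUDA parallelization is not reproduced here, but the beat-classification step itself (k-nearest neighbours on features extracted around each detected QRS complex) can be sketched with scikit-learn. The two synthetic features and the class labels below are illustrative stand-ins, not the MIT-BIH data.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)

      # Synthetic beat feature vectors (e.g., RR interval and a QRS morphology feature):
      # class 0 = normal beats, class 1 = arrhythmic beats.
      normal = rng.normal([0.8, 0.1], 0.05, (500, 2))
      arrhythmic = rng.normal([0.5, 0.4], 0.05, (60, 2))
      X = np.vstack([normal, arrhythmic])
      y = np.concatenate([np.zeros(500), np.ones(60)])

      # k-NN beat classifier; on a GPU, the distance computations for many query beats
      # would be evaluated in parallel, which is the part the paper accelerates with CUDA.
      knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
      test_beats = np.array([[0.82, 0.09], [0.48, 0.42]])
      print(knn.predict(test_beats))        # expected: [0. 1.]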

  2. Robust Optimization Design Algorithm for High-Frequency TWTs

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design- optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.

  3. An effective algorithm for the generation of patient-specific Purkinje networks in computational electrocardiology

    NASA Astrophysics Data System (ADS)

    Palamara, Simone; Vergara, Christian; Faggiano, Elena; Nobile, Fabio

    2015-02-01

    The Purkinje network is responsible for the fast and coordinated distribution of the electrical impulse in the ventricle that triggers its contraction. Therefore, it is necessary to model its presence to obtain an accurate patient-specific model of the ventricular electrical activation. In this paper, we present an efficient algorithm for the generation of a patient-specific Purkinje network, driven by measures of the electrical activation acquired on the endocardium. The proposed method provides a correction of an initial network, generated by means of a fractal law, and it is based on the solution of Eikonal problems both in the muscle and in the Purkinje network. We present several numerical results both in an ideal geometry with synthetic data and in a real geometry with patient-specific clinical measures. These results highlight an improvement of the accuracy provided by the patient-specific Purkinje network with respect to the initial one. In particular, a cross-validation test shows an accuracy increase of 19% when only the 3% of the total points are used to generate the network, whereas an increment of 44% is observed when a random noise equal to 20% of the maximum value of the clinical data is added to the measures.

  4. High specific activity platinum-195m

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-10-12

    A new composition of matter includes .sup.195m Pt characterized by a specific activity of at least 30 mCi/mg Pt, generally made by method that includes the steps of: exposing .sup.193 Ir to a flux of neutrons sufficient to convert a portion of the .sup.193 Ir to .sup.195m Pt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce .sup.195m Pt.

  5. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person s gaze in real time. The system was described in High-Speed Noninvasive Eye-Tracking System (NPO-30700) NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eyetracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eyetracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea

  6. High Density Jet Fuel Supply and Specifications

    DTIC Science & Technology

    1986-01-01

    … same shortcomings. Perhaps different LAK blends using heavy reformate or heavy cat cracker naphtha (both high in aromatics and isoparaffins) could … catalytic cracking (FCC) process. Subsequent investigations funded by the U.S. Air Force concentrated on producing a similar fuel from the … cut (19% overhead) and adding heavy naphtha (320-440 F) from a nearby paraffinic crude (40 API Wyoming Sweet), an excellent JP-8X can be created. Table 5

  7. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  8. Cryptanalyzing a chaotic encryption algorithm for highly autocorrelated data

    NASA Astrophysics Data System (ADS)

    Li, Ming; Liu, Shangwang; Niu, Liping; Liu, Hong

    2016-12-01

    Recently, a chaotic encryption algorithm for highly autocorrelated data was proposed. By adding chaotic diffusion to the former work, the information leakage of the encryption results especially for the images with lower gray scales was eliminated, and both higher-level security and fast encryption time were achieved. In this study, we analyze the security weakness of this scheme. By applying the ciphertext-only attack, the encrypted image can be restored into the substituted image except for the first block; and then, by using the chosen-plaintext attack, the S-boxes, the distribution map, and the block of chaotic map values, can all be revealed, and the encrypted image can be completely cracked. The improvement is also proposed. Experimental results verify our assertion.

  9. Developing Benthic Class Specific, Chlorophyll-a Retrieving Algorithms for Optically-Shallow Water Using SeaWiFS

    PubMed Central

    Blakey, Tara; Melesse, Assefa; Sukop, Michael C.; Tachiev, Georgio; Whitman, Dean; Miralles-Wilhelm, Fernando

    2016-01-01

    This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels’ benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained since the focus of this study was on establishing the benefit from the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. Accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally-tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally-tuned model. Mean absolute percent difference was approximately 70% for the regionally-tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement as these procedures would improve both classification and algorithm tuning. PMID:27775626
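
    The Ocean Color (OC) family of algorithms referred to above takes the form of a polynomial in the log-transformed blue-to-green reflectance ratio. The sketch below shows how such a retrieval could be switched by benthic class; the coefficient values are made-up placeholders, not the regionally tuned coefficients from this study.

```python
import numpy as np

# Illustrative, made-up coefficient sets keyed by benthic class; the study's
# regionally tuned, class-specific coefficients are not reproduced here.
OC_COEFFS = {
    "seagrass": (0.32, -2.99, 2.87, -0.34, -0.07),
    "sand":     (0.36, -3.10, 2.70, -0.40, -0.05),
}

def oc_chl(rrs_blue, rrs_green, benthic_class):
    """Ocean Color (OC)-style chl-a retrieval: a polynomial in the log10
    blue-to-green remote-sensing reflectance ratio, with coefficients
    selected by the pixel's benthic class."""
    a0, a1, a2, a3, a4 = OC_COEFFS[benthic_class]
    r = np.log10(np.asarray(rrs_blue) / np.asarray(rrs_green))
    log_chl = a0 + a1 * r + a2 * r**2 + a3 * r**3 + a4 * r**4
    return 10.0 ** log_chl            # chl-a in mg m^-3

print(oc_chl(0.0042, 0.0035, "seagrass"))
```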

  10. Randomized algorithms for high quality treatment planning in volumetric modulated arc therapy

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Dong, Bin; Wen, Zaiwen

    2017-02-01

    In recent years, volumetric modulated arc therapy (VMAT) has become an increasingly important radiation technique widely used in clinical applications for cancer treatment. One of the key problems in VMAT is treatment plan optimization, which is complicated due to the constraints imposed by the equipment involved. In this paper, we consider a model with four major constraints: the bound on the beam intensity, an upper bound on the rate of change of the beam intensity, the moving speed of the leaves of the multi-leaf collimator (MLC), and its directional convexity. We solve the model by a two-stage algorithm: performing minimization with respect to the shapes of the aperture and the beam intensities alternately. Specifically, the shapes of the aperture are obtained by a greedy algorithm whose performance is enhanced by random sampling in the leaf pairs with a decremental rate. The beam intensity is optimized using a gradient projection method with non-monotonic line search. We further improve the proposed algorithm by an incremental random importance sampling of the voxels to reduce the computational cost of the energy functional. Numerical simulations on two clinical cancer data sets demonstrate that our method is highly competitive with the state-of-the-art algorithms in terms of both computational time and quality of treatment planning.

  11. A Hybrid Feature Subset Selection Algorithm for Analysis of High Correlation Proteomic Data

    PubMed Central

    Kordy, Hussain Montazery; Baygi, Mohammad Hossein Miran; Moradi, Mohammad Hassan

    2012-01-01

    Pathological changes within an organ can be reflected as proteomic patterns in biological fluids such as plasma, serum, and urine. Surface-enhanced laser desorption and ionization time-of-flight mass spectrometry (SELDI-TOF MS) has been used to generate proteomic profiles from biological fluids. Mass spectrometry yields redundant, noisy data in which most data points are irrelevant features for differentiating between cancer and normal cases. In this paper, we have proposed a hybrid feature subset selection algorithm based on maximum discrimination and minimum correlation, coupled with peak scoring criteria. Our algorithm has been applied to two independent SELDI-TOF MS datasets of ovarian cancer obtained from the NCI-FDA clinical proteomics databank. The proposed algorithm has been used to extract a set of proteins as potential biomarkers in each dataset. We applied linear discriminant analysis to identify the important biomarkers. The selected biomarkers were able to successfully distinguish the ovarian cancer patients from the noncancer control group with an accuracy of 100%, a sensitivity of 100%, and a specificity of 100% in the two datasets. The hybrid algorithm has the advantage of increasing the reproducibility of the selected biomarkers and is able to find a small set of proteins with high discrimination power.
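
    A minimal sketch of a maximum-discrimination / minimum-correlation style greedy selection is shown below; it is our own illustrative variant (a t-like discrimination score penalised by correlation with already-selected features), not the published hybrid algorithm, which additionally uses peak scoring criteria.

```python
import numpy as np

def hybrid_feature_selection(X, y, n_features=10, alpha=1.0):
    """Greedy feature ranking in the spirit of maximum discrimination /
    minimum correlation: at each step pick the feature with the largest
    class-discrimination score penalised by its mean absolute correlation
    with already-selected features.  A sketch, not the published method."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # discrimination: absolute two-class t-like statistic per feature
    a, b = X[y == 0], X[y == 1]
    disc = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(a.var(0) + b.var(0) + 1e-12)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(disc))]
    while len(selected) < n_features:
        penalty = corr[:, selected].mean(axis=1)
        score = disc - alpha * penalty
        score[selected] = -np.inf                    # never re-select a feature
        selected.append(int(np.argmax(score)))
    return selected

# toy usage: 100 spectra x 50 m/z features with two informative features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = np.repeat([0, 1], 50)
X[y == 1, 3] += 2.0
X[y == 1, 17] += 1.5
print(hybrid_feature_selection(X, y, n_features=5))
```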

  12. Clinical algorithm for malaria during low and high transmission seasons

    PubMed Central

    Muhe, L.; Oljira, B.; Degefu, H.; Enquesellassie, F.; Weber, M.

    1999-01-01

    OBJECTIVES—To assess the proportion of children with febrile disease who suffer from malaria and to identify clinical signs and symptoms that predict malaria during low and high transmission seasons.
STUDY DESIGN—2490 children aged 2 to 59 months presenting to a health centre in rural Ethiopia with fever had their history documented and the following investigations: clinical examination, diagnosis, haemoglobin measurement, and a blood smear for malaria parasites. Clinical findings were related to the presence of malaria parasitaemia.
RESULTS—Malaria contributed to 5.9% of all febrile cases from January to April and to 30.3% during the rest of the year. Prediction of malaria was improved by simple combinations of a few signs and symptoms. Fever with a history of previous malarial attack or absence of cough or a finding of pallor gave a sensitivity of 83% in the high risk season and 75% in the low risk season, with corresponding specificities of 51% and 60%; fever with a previous malaria attack or pallor or splenomegaly had sensitivities of 80% and 69% and specificities of 65% and 81% in high and low risk settings, respectively.
CONCLUSION—Better clinical definitions are possible for low malaria settings when microscopic examination cannot be done. Health workers should be trained to detect pallor and splenomegaly because these two signs improve the specificity for malaria.

 PMID:10451393

  13. NFLUX PRE: Validation of New Specific Humidity, Surface Air Temperature, and Wind Speed Algorithms for Ascending/Descending Directions and Clear or Cloudy Conditions

    DTIC Science & Technology

    2015-06-18

    Validation of New Specific Humidity, Surface Air Temperature, and Wind Speed Algorithms for Ascending/Descending Directions and Clear or Cloudy... satellite retrieval algorithms. In addition to data from the Special Sensor Microwave Imager/Sounder (SSMIS) and the Advanced Microwave Sounding

  14. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, while the initial samples are selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridional plane, have diminished or vanished in the optimized pump.

  15. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  16. High School Educational Specifications: Facilities Planning Standards. Edition I.

    ERIC Educational Resources Information Center

    Jefferson County School District R-1, Denver, CO.

    The Jefferson County School District (Colorado) has developed a manual of high school specifications for Design Advisory Groups and consultants to use for planning and designing the district's high school facilities. The specifications are provided to help build facilities that best meet the educational needs of the students to be served.…

  17. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    DOE PAGES

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  18. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty

    SciTech Connect

    Ling, Julia; Templeton, Jeremy Alan

    2015-08-04

    Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.

  19. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm that estimates the minimum number of hardware units based on operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) when the cycle time was increased from 17 to 18 and then to 21 cycles. It is implemented in C on a SUN3/60 workstation.

  20. A correction to a highly accurate Voigt function algorithm

    NASA Technical Reports Server (NTRS)

    Shippony, Z.; Read, W. G.

    2002-01-01

    An algorithm for rapidly computing the complex Voigt function was published by Shippony and Read. Its claimed accuracy was 1 part in 10^8. It was brought to our attention by Wells that the Shippony and Read algorithm was not meeting its claimed accuracy for extremely small but nonzero y values. Although true, the fix to the code is trivial, and this note documents it for those who use the algorithm.

  1. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  2. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  3. An Analytic Approximation to Very High Specific Impulse and Specific Power Interplanetary Space Mission Analysis

    NASA Technical Reports Server (NTRS)

    Williams, Craig Hamilton

    1995-01-01

    A simple, analytic approximation is derived to calculate trip time and performance for propulsion systems of very high specific impulse (50,000 to 200,000 seconds) and very high specific power (10 to 1000 kW/kg) for human interplanetary space missions. The approach assumed field-free space, constant thrust/constant specific power, and near straight line (radial) trajectories between the planets. Closed form, one dimensional equations of motion for two-burn rendezvous and four-burn round trip missions are derived as a function of specific impulse, specific power, and propellant mass ratio. The equations are coupled to an optimizing parameter that maximizes performance and minimizes trip time. Data generated for hypothetical one-way and round trip human missions to Jupiter were found to be within 1% and 6% accuracy of integrated solutions respectively, verifying that for these systems, credible analysis does not require computationally intensive numerical techniques.

  4. A high-performance FFT algorithm for vector supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1988-01-01

    Many traditional algorithms for computing the fast Fourier transform (FFT) on conventional computers are unacceptable for advanced vector and parallel computers because they involve nonunit, power-of-two memory strides. A practical technique for computing the FFT that avoids all such strides and appears to be near-optimal for a variety of current vector and parallel computers is presented. Performance results of a program based on this technique are given. Notable among these results is that a FORTRAN implementation of this algorithm on the CRAY-2 runs up to 77-percent faster than Cray's assembly-coded library routine.

  5. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery

    PubMed Central

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-01-01

    The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, only a few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of the corrections were studied in vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to the
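
    Of the five algorithms compared, dark object subtraction (DOS) is the simplest image-based method and is easy to sketch; the percentile choice below is an assumption for illustration, and the physical models (FLAASH, ATCOR, 6S) involve full radiative transfer calculations that are not shown.

```python
import numpy as np

def dark_object_subtraction(radiance, percentile=0.1):
    """Simplest image-based atmospheric correction (DOS): assume the darkest
    pixels in each band should have near-zero surface-leaving radiance, so the
    per-band 'dark object' radiance is treated as atmospheric path radiance
    and subtracted.  A sketch of the DOS idea only."""
    corrected = np.empty_like(radiance, dtype=float)
    for b in range(radiance.shape[0]):              # bands first: (band, row, col)
        dark = np.percentile(radiance[b], percentile)
        corrected[b] = np.clip(radiance[b] - dark, 0.0, None)
    return corrected

# toy 4-band image
rng = np.random.default_rng(1)
img = rng.uniform(20, 200, size=(4, 128, 128))
print(dark_object_subtraction(img).min(axis=(1, 2)))
```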

  6. Clinical evaluation of new automatic coronary-specific best cardiac phase selection algorithm for single-beat coronary CT angiography

    PubMed Central

    Xu, Lei; Fan, Zhanming; Liang, Junfu; Yan, Zixu; Sun, Zhonghua

    2017-01-01

    The aim of this study was to evaluate the workflow efficiency of a new automatic coronary-specific reconstruction technique (Smart Phase, GE Healthcare; SP) for selection of the best cardiac phase with the least coronary motion, compared with expert manual selection (MS) of the best phase in patients with high heart rates. A total of 46 patients with heart rates above 75 bpm who underwent single-beat coronary computed tomography angiography (CCTA) were enrolled in this study. CCTA of all subjects was performed on a 256-detector row CT scanner (Revolution CT, GE Healthcare, Waukesha, Wisconsin, US). With the SP technique, the acquired phase range was automatically searched in 2% phase intervals during the reconstruction process to determine the optimal phase for coronary assessment, while for routine expert MS, reconstructions were performed at 5% intervals and the best phase was manually determined. The reconstruction and review times were recorded to measure the workflow efficiency of each method. Two reviewers subjectively assessed image quality for each coronary artery in the MS and SP reconstruction volumes using a 4-point grading scale. The average heart rate of the enrolled patients was 91.1±19.0 bpm. A total of 204 vessels were assessed. The subjective image quality using SP was comparable to that of MS, 1.45±0.85 vs 1.43±0.81 respectively (p = 0.88). The average time was 246 seconds for manual best phase selection and 98 seconds for SP selection, resulting in an average time saving of 148 seconds (60%) with use of the SP algorithm. The coronary-specific automatic best cardiac phase selection technique (Smart Phase) improves clinical workflow in patients with high heart rates and provides image quality comparable with manual best cardiac phase selection. Reconstruction of single-beat CCTA exams with SP can benefit users with less experience in CCTA image interpretation. PMID:28231322

  7. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields today. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed based on images of planar or three-dimensional targets. Using the algorithm, the internal parameters of the camera are calibrated based on the existing planar target in the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and Zhang Zhengyou's calibration algorithm. The algorithm proposed in this article satisfies the needs of computer vision and provides a reference for precise measurement of relative position and attitude.
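
    For readers who want to reproduce a baseline planar-target calibration of a camera's internal parameters, the sketch below uses OpenCV's standard Zhang-style routine. The chessboard size and image path are assumptions for illustration; the article's proposed high-accuracy algorithm is not reproduced here.

```python
import glob
import cv2
import numpy as np

# Assumed setup: a 9x6 inner-corner chessboard and calibration images under
# "calib/*.png"; both are illustrative, not details from the article.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # target plane coordinates

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# internal parameters (camera matrix) and lens distortion coefficients
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
print(camera_matrix)
```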

  8. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  9. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  10. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    NASA Technical Reports Server (NTRS)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  11. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms.

    PubMed

    Huang, Fang; Hartwich, Tobias M P; Rivera-Molina, Felix E; Lin, Yu; Duim, Whitney C; Long, Jane J; Uchil, Pradeep D; Myers, Jordan R; Baird, Michelle A; Mothes, Walther; Davidson, Michael W; Toomre, Derek; Bewersdorf, Joerg

    2013-07-01

    Newly developed scientific complementary metal-oxide semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition, enlarge the field of view and increase the effective quantum efficiency in single-molecule switching nanoscopy. However, sCMOS-intrinsic pixel-dependent readout noise substantially lowers the localization precision and introduces localization artifacts. We present algorithms that overcome these limitations and that provide unbiased, precise localization of single molecules at the theoretical limit. Using these in combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at rates of up to 32 reconstructed images per second in fixed and living cells.

  12. SU-E-T-305: Study of the Eclipse Electron Monte Carlo Algorithm for Patient Specific MU Calculations

    SciTech Connect

    Wang, X; Qi, S; Agazaryan, N; DeMarco, J

    2014-06-01

    Purpose: To evaluate the Eclipse electron Monte Carlo (eMC) algorithm based on patient-specific monitor unit (MU) calculations, and to propose a new factor which quantitatively predicts the discrepancy in MUs between the eMC algorithm and hand calculations. Methods: Electron treatments were planned for 61 patients on Eclipse (Version 10.0) using the eMC algorithm for Varian TrueBeam linear accelerators. For each patient, the same treatment beam angle was kept for a point dose calculation at dmax performed under the reference condition, which used an open beam with a 15×15 cm2 cone and 100 SSD. A patient-specific correction factor (PCF) was obtained by taking the ratio of this point dose to the calibration dose, which is 1 cGy per MU delivered at dmax. The hand calculation results were corrected by the PCFs and compared with MUs from the treatment plans. Results: The MUs from the treatment plans were on average (7.1±6.1)% higher than the hand calculations. The average MU difference between the corrected hand calculations and the eMC treatment plans was (0.07±3.48)%. A correlation coefficient of 0.8 was found between (1-PCF) and the percentage difference between the treatment plan and hand calculations. Most outliers were treatment plans with small beam openings (< 4 cm) and low-energy beams (6 and 9 MeV). Conclusion: For CT-based patient treatment plans, the eMC algorithm tends to generate larger MUs than hand calculations. Caution should be taken for eMC patient plans with small field sizes and low-energy beams. We hypothesize that the PCF reflects the influence of patient surface curvature and tissue inhomogeneity on the patient-specific percent depth dose (PDD) curve and MU calculations in the eMC algorithm.
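
    The PCF described in the Methods section is simply a dose ratio; a tiny sketch of the arithmetic is given below. Whether the hand-calculated MU is divided or multiplied by the PCF is not stated in the abstract, so the division used here is an assumption, and the numbers are hypothetical.

```python
def patient_correction_factor(ref_point_dose_cgy_per_mu, calibration_cgy_per_mu=1.0):
    """PCF as described in the abstract: ratio of the eMC point dose at dmax
    under the reference condition (15x15 cm cone, 100 SSD, open beam) to the
    calibration dose of 1 cGy per MU at dmax."""
    return ref_point_dose_cgy_per_mu / calibration_cgy_per_mu

def corrected_hand_mu(hand_calc_mu, pcf):
    """Hand-calculated MU corrected by the PCF.  The direction of the
    correction (division) is an illustrative assumption."""
    return hand_calc_mu / pcf

# hypothetical numbers for illustration only
pcf = patient_correction_factor(0.93)
print(pcf, corrected_hand_mu(250.0, pcf))
```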

  13. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
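
    The sketch below mimics the spirit of an increase-only adaptive restart strategy using SciPy's restarted GMRES, growing k whenever a restart cycle reduces the residual too little. The growth criterion and step size are illustrative assumptions, not the criteria proposed in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

def adaptive_gmres(A, b, k0=10, k_max=80, k_step=10, tol=1e-8, max_cycles=50):
    """Increase-only sketch of an adaptive restarted GMRES: run one restart
    cycle at a time and enlarge the restart value k whenever the residual
    reduction over a cycle is too small."""
    x = np.zeros_like(b, dtype=float)
    k = k0
    r_norm = np.linalg.norm(b - A @ x)
    for _ in range(max_cycles):
        # one outer (restart) cycle; in SciPy's gmres, maxiter counts restart cycles
        x, _ = gmres(A, b, x0=x, restart=k, maxiter=1)
        new_norm = np.linalg.norm(b - A @ x)
        if new_norm <= tol * np.linalg.norm(b):
            return x, k
        if new_norm > 0.5 * r_norm and k < k_max:   # slow progress: grow k
            k = min(k + k_step, k_max)
        r_norm = new_norm
    return x, k

# toy nonsymmetric system
n = 200
A = diags([-1.0, 2.0, -0.5], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, k_final = adaptive_gmres(A, b)
print(k_final, np.linalg.norm(A @ x - b))
```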

  14. A high-speed algorithm for computation of fractional differentiation and fractional integration.

    PubMed

    Fukunaga, Masataka; Shimizu, Nobuyuki

    2013-05-13

    A high-speed algorithm for computing fractional differentiations and fractional integrations in fractional differential equations is proposed. In this algorithm, the stored data are not the function to be differentiated or integrated but the weighted integrals of the function. The intervals of integration for the memory can be increased without loss of accuracy as the computing time-step n increases. The computing cost varies as n log n, as opposed to the n^2 cost of standard algorithms.
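
    For context, the standard O(n^2) scheme that such fast algorithms improve upon is the Grünwald-Letnikov sum over the full history of the function. The baseline sketch below is not the authors' method; it only shows what the quadratic-cost computation looks like.

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha on a uniform
    grid with step h.  This is the standard O(n^2) scheme whose cost the
    fast algorithm above reduces; it is shown only as a baseline."""
    n = len(f_vals)
    # binomial-type weights w_k = (-1)^k * C(alpha, k), computed recursively
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for j in range(n):
        out[j] = np.dot(w[:j + 1], f_vals[j::-1]) / h**alpha
    return out

# half-derivative of f(t) = t on [0, 1]; the exact result at t=1 is 2*sqrt(1/pi)
t = np.linspace(0.0, 1.0, 201)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
print(approx[-1], 2.0 * np.sqrt(1.0 / np.pi))
```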

  15. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  16. Advanced Tribological Coatings for High Specific Strength Alloys

    DTIC Science & Technology

    1989-09-29

    Hard Anodised 4 HSSA12 (SHT) Plasma Nitrided 1 HSSA13 (H&G) Plasma Nitrided 2 HSSA14 (SHT) High Temperature Nitrocarburized 1 HSSA15 (H&G) Nitrox 1...HSSA26 ( High Temperature Plasma Nitriding) has recently arrived, and is currently undergoing metallographic examination. The remaining samples are still...Report No 3789/607 Advanced Tribological Coatings For High Specific Strength Alloys, R&D 5876-MS-01 Contract DAJ A45-87-C-0044 5th Interim Report

  17. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a purpose similar to that of tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  18. Automated coronary artery calcium scoring from non-contrast CT using a patient-specific algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Xiaowei; Slomka, Piotr J.; Diaz-Zamudio, Mariana; Germano, Guido; Berman, Daniel S.; Terzopoulos, Demetri; Dey, Damini

    2015-03-01

    Non-contrast cardiac CT is used worldwide to assess coronary artery calcium (CAC), a subclinical marker of coronary atherosclerosis. Manual quantification of regional CAC scores includes identifying candidate regions, followed by thresholding and connected component labeling. We aimed to develop and validate a fully automated algorithm for both overall and regional measurement of CAC scores from non-contrast CT using hybrid multi-atlas registration, active contours, and a knowledge-based region separation algorithm. A co-registered segmented CT atlas was created from manually segmented non-contrast CT data from 10 patients (5 men, 5 women) and stored offline. For each patient scan, the heart region, left ventricle, right ventricle, ascending aorta, and aortic root are located by multi-atlas registration followed by active contour refinement. Regional coronary artery territories (left anterior descending artery, left circumflex artery, and right coronary artery) are separated using a knowledge-based region separation algorithm. Calcifications in these coronary artery territories are detected by region growing at each lesion. Global and regional Agatston scores and volume scores were calculated in 50 patients. Agatston scores and volume scores calculated by the algorithm and the expert showed excellent correlation (Agatston score: r = 0.97, p < 0.0001; volume score: r = 0.97, p < 0.0001) with no significant differences by comparison of individual data points (Agatston score: p = 0.30; volume score: p = 0.33). The total time was <60 sec on a standard computer. Our results show that fast, accurate, and automated quantification of CAC scores from non-contrast CT is feasible.
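
    The thresholding and lesion-weighting step mentioned above follows the standard Agatston convention (130 HU threshold, per-lesion weight 1-4 by peak attenuation). The sketch below is a textbook-style, single-slice illustration with assumed parameters, not the paper's automated multi-atlas pipeline.

```python
import numpy as np
from scipy import ndimage

def agatston_score(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
    """Agatston score for a single CT slice: threshold at 130 HU, label
    connected lesions, and weight each lesion's area by a factor of 1-4
    according to its peak attenuation (130-199, 200-299, 300-399, >=400 HU)."""
    mask = hu_slice >= 130
    labels, n = ndimage.label(mask)
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                                   # ignore single-voxel noise
        peak = hu_slice[region].max()
        weight = 1 + min(int(peak // 100) - 1, 3)      # 130-199->1 ... >=400->4
        score += area * weight
    return score

# toy slice: one 12-pixel lesion peaking at 320 HU
hu = np.zeros((64, 64))
hu[10:13, 10:14] = 320
print(agatston_score(hu, pixel_area_mm2=0.25))         # 3 mm^2 * weight 3 = 9.0
```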

  19. GPU-Based Tracking Algorithms for the ATLAS High-Level Trigger

    NASA Astrophysics Data System (ADS)

    Emeliyanov, D.; Howard, J.

    2012-12-01

    Results on the performance and viability of data-parallel algorithms on Graphics Processing Units (GPUs) in the ATLAS Level 2 trigger system are presented. We describe the existing trigger data preparation and track reconstruction algorithms, motivation for their optimization, GPU-parallelized versions of these algorithms, and a “client-server” solution for hybrid CPU/GPU event processing used for integration of the GPU-oriented algorithms into existing ATLAS trigger software. The resulting speed-up of event processing times obtained with high-luminosity simulated data is presented and discussed.

  20. Identification of two highly specific pollen promoters using transcriptomic data.

    PubMed

    Muñoz-Strale, Daniela; León, Gabriel

    2014-10-01

    The mature pollen grain displays a highly specialized function in angiosperms. Accordingly, the male gametophyte development involves many specific biological activities, making it a complex and unique process in plants. In order to accomplish this, during pollen development, a massive transcriptomic remodeling takes place, indicating the switch from a sporophytic to a gametophytic program and involving the expression of many pollen specific genes. Using microarray databases we selected genes showing pollen-specific accumulation of their mRNAs and confirmed this through RT-PCR. We selected five genes (POLLEN SPECIFIC GENE1-5) to investigate the pollen specificity of their expression. Transcriptional fusions between the putative promoters of these genes and the uidA reporter gene in Arabidopsis confirmed the pollen specific expression for at least two of these genes. The expression of the cytotoxin Barnase controlled by these promoters generated pollen specific ablation and male sterility. Through the selection of pollen specific genes from public datasets, we were able to identify promoter regions that confer pollen expression. The use of the cytotoxin Barnase allowed us to demonstrate its expression is exclusively limited to the pollen. These new promoters provide a powerful tool for the expression of genes exclusively in pollen.

  1. A high-performance genetic algorithm: using traveling salesman problem as a case.

    PubMed

    Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.
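
    The observation that "genes" shared by every individual can be cached is easy to illustrate for the TSP, where a gene can be viewed as an undirected edge of a tour. The sketch below only finds the shared edges; how they are frozen and reused during later generations is the paper's contribution and is not reproduced here.

```python
def common_edges(population):
    """Return the set of undirected edges that appear in every tour of a GA
    population.  Following the abstract's observation, such shared 'genes'
    are unlikely to change and can be cached so later generations skip
    redundant recomputation."""
    def edges(tour):
        return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
                for i in range(len(tour))}
    shared = edges(population[0])
    for tour in population[1:]:
        shared &= edges(tour)
    return shared

# toy population of tours over 6 cities
pop = [
    [0, 1, 2, 3, 4, 5],
    [0, 1, 2, 5, 4, 3],
    [3, 4, 5, 2, 1, 0],
]
print(common_edges(pop))   # edges shared by all three tours, e.g. {0,1} and {1,2}
```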

  2. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038

  3. Algorithm Development and Application of High Order Numerical Methods for Shocked and Rapid Changing Solutions

    DTIC Science & Technology

    2007-12-06

    problems studied in this project involve numerically solving partial differential equations with either discontinuous or rapidly changing solutions... discontinuous Galerkin finite element methods for solving partial differential equations with discontinuous or rapidly changing solutions.

  4. High- and low-level hierarchical classification algorithm based on source separation process

    NASA Astrophysics Data System (ADS)

    Loghmari, Mohamed Anis; Karray, Emna; Naceur, Mohamed Saber

    2016-10-01

    High-dimensional data applications have attracted great attention in recent years. We focus on remote sensing data analysis in high-dimensional spaces, such as hyperspectral data. From a methodological viewpoint, remote sensing data analysis is not a trivial task. Its complexity is caused by many factors, such as large spectral or spatial variability as well as the curse of dimensionality. The latter describes the problem of data sparseness. In this particular ill-posed problem, a reliable classification approach requires appropriate modeling of the classification process. The proposed approach is based on a hierarchical clustering algorithm in order to deal with remote sensing data in high-dimensional space. Indeed, one obvious method to perform dimensionality reduction is to use the independent component analysis process as a preprocessing step. The first particularity of our method is the special structure of its cluster tree. Most hierarchical algorithms associate leaves with individual clusters and start from a large number of individual classes equal to the number of pixels; however, in our approach, leaves are associated with the most relevant sources, which are represented along mutually independent axes, so as to represent specific land covers associated with a limited number of clusters. These sources contribute to the refinement of the clustering by providing complementary rather than redundant information. The second particularity of our approach is that at each level of the cluster tree, we combine both a high-level divisive clustering and a low-level agglomerative clustering. This approach reduces the computational cost, since the high-level divisive clustering is controlled by a simple Boolean operator, and optimizes the clustering results, since the low-level agglomerative clustering is guided by the most relevant independent sources. Then at each new step we obtain a new, finer partition that will participate in the clustering process to enhance
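
    A heavily simplified version of this pipeline, ICA as a preprocessing step followed by an agglomerative (low-level) clustering of pixels in the reduced source space, can be sketched with scikit-learn. The synthetic cube, the number of sources, and the number of clusters are illustrative assumptions; the paper's combined divisive/agglomerative tree and its Boolean high-level control are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import AgglomerativeClustering

# reduce a hyperspectral cube to a few mutually independent sources with ICA,
# then cluster pixels in that reduced space (low-level agglomerative step)
rng = np.random.default_rng(0)
cube = rng.random((50, 50, 64))                 # rows x cols x spectral bands (synthetic)
pixels = cube.reshape(-1, cube.shape[-1])

sources = FastICA(n_components=6, random_state=0).fit_transform(pixels)
labels = AgglomerativeClustering(n_clusters=4).fit_predict(sources)

label_image = labels.reshape(cube.shape[:2])    # per-pixel land-cover cluster map
print(np.bincount(labels))
```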

  5. High specific energy and specific power aluminum/air battery for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Kindler, A.; Matthies, L.

    2014-06-01

    Micro air vehicles developed under the Army's Micro Autonomous Systems and Technology program generally need a specific energy of 300-550 watt-hrs/kg and a specific power of 300-550 watts/kg to operate for about 1 hour. At present, no commercial cell can fulfill this need. The best available commercial technology is the lithium-ion battery or its derivative, the Li-polymer cell. This chemistry generally provides around 15 minutes of flying time. One alternative to the state of the art is the Al/air cell, a primary battery that is actually half fuel cell. It has a high-energy, battery-like aluminum anode and a fuel-cell-like air electrode that can extract oxygen from the ambient air rather than carrying it. Both of these features tend to contribute to a high specific energy (watt-hrs/kg). High specific power (watts/kg) is supported by a high-concentration KOH electrolyte, a high-quality commercial air electrode, and forced air convection from the vehicle's rotors. The performance of this cell with these attributes is projected to be 500 watt-hrs/kg and 500 watts/kg based on a simple model. It is expected to support a flying time of approximately 1 hour in vehicles for which the usual limit is 15 minutes.

  6. Noncovalent functionalization of carbon nanotubes for highly specific electronic biosensors

    NASA Astrophysics Data System (ADS)

    Chen, Robert J.; Bangsaruntip, Sarunya; Drouvalakis, Katerina A.; Wong Shi Kam, Nadine; Shim, Moonsub; Li, Yiming; Kim, Woong; Utz, Paul J.; Dai, Hongjie

    2003-04-01

    Novel nanomaterials for bioassay applications represent a rapidly progressing field of nanotechnology and nanobiotechnology. Here, we present an exploration of single-walled carbon nanotubes as a platform for investigating surface-protein and protein-protein binding and developing highly specific electronic biomolecule detectors. Nonspecific binding on nanotubes, a phenomenon found with a wide range of proteins, is overcome by immobilization of polyethylene oxide chains. A general approach is then advanced to enable the selective recognition and binding of target proteins by conjugation of their specific receptors to polyethylene oxide-functionalized nanotubes. This scheme, combined with the sensitivity of nanotube electronic devices, enables highly specific electronic sensors for detecting clinically important biomolecules such as antibodies associated with human autoimmune diseases.

  7. Performing target specific band reduction using artificial neural networks and assessment of its efficacy using various target detection algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.

    2016-04-01

    Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensions, high computation cost, and complex backgrounds. Many popular detection algorithms do not work well for difficult targets (e.g., small or camouflaged ones) and may result in high false-alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is therefore crucial for the accurate interpretation of hyperspectral imagery. Using standard libraries to study a target's spectral behaviour has the limitation that targets are measured under environmental conditions different from those of the application. This study uses the spectral data of the same target that was used during collection of the HSI image. This paper analyzes target spectra in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral range for reducing the data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection with four popular spectral-matching target detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN, relative to the full spectrum, for detection of the desired targets. In addition, a comparative assessment of the algorithms is also performed using ROC.
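
    ROC-based comparison of detector score maps, as used in the final evaluation step above, can be sketched as follows. The detector names and the toy score maps are hypothetical; only the evaluation mechanics are illustrated.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def compare_detectors(truth, detector_scores):
    """Compare target-detection algorithms with ROC analysis: for each
    detector's per-pixel score map, compute the ROC curve against the
    ground-truth target mask and report the area under the curve (AUC)."""
    results = {}
    for name, scores in detector_scores.items():
        fpr, tpr, _ = roc_curve(truth.ravel(), scores.ravel())
        results[name] = auc(fpr, tpr)
    return results

# toy ground truth and two hypothetical detectors (e.g. full-spectrum vs.
# ANN-reduced band range); the names are illustrative only
rng = np.random.default_rng(2)
truth = (rng.random((100, 100)) < 0.02).astype(int)
noise = rng.normal(size=truth.shape)
scores_full = truth * 1.0 + noise
scores_reduced = truth * 2.0 + noise
print(compare_detectors(truth, {"full_spectrum": scores_full,
                                "ann_reduced": scores_reduced}))
```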

  8. The evolutionary development of high specific impulse electric thruster technology

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Patterson, Michael J.; Rawlin, Vincent K.; Myers, Roger M.

    1992-01-01

    Electric propulsion flight and technology demonstrations conducted primarily by Europe, Japan, China, the U.S., and the USSR are reviewed. Evolutionary mission applications for high specific impulse electric thruster systems are discussed, and the status of arcjet, ion, and magnetoplasmadynamic thrusters and associated power processor technologies are summarized.

  9. A Ratio Test of Interrater Agreement with High Specificity

    ERIC Educational Resources Information Center

    Cousineau, Denis; Laurencelle, Louis

    2015-01-01

    Existing tests of interrater agreement have high statistical power; however, they lack specificity. If the ratings of the two raters do not show agreement but are not random, the current tests, some of which are based on Cohen's kappa, will often reject the null hypothesis, leading to the wrong conclusion that agreement is present. A new test of…

  10. Specific volume coupling and convergence properties in hybrid particle/finite volume algorithms for turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Popov, Pavel P.; Wang, Haifeng; Pope, Stephen B.

    2015-08-01

    We investigate the coupling between the two components of a Large Eddy Simulation/Probability Density Function (LES/PDF) algorithm for the simulation of turbulent reacting flows. In such an algorithm, the Large Eddy Simulation (LES) component provides a solution to the hydrodynamic equations, whereas the Lagrangian Monte Carlo Probability Density Function (PDF) component solves for the PDF of chemical compositions. Special attention is paid to the transfer of specific volume information from the PDF to the LES code: the specific volume field contains probabilistic noise due to the nature of the Monte Carlo PDF solution, and thus the use of the specific volume field in the LES pressure solver needs careful treatment. Using a test flow based on the Sandia/Sydney Bluff Body Flame, we determine the optimal strategy for specific volume feedback. Then, the overall second-order convergence of the entire LES/PDF procedure is verified using a simple vortex ring test case, with special attention being given to bias errors due to the number of particles per LES Finite Volume (FV) cell.

  11. A fast and accurate algorithm for high-frequency trans-ionospheric path length determination

    NASA Astrophysics Data System (ADS)

    Wijaya, Dudy D.

    2015-12-01

    This paper presents a fast and accurate algorithm for high-frequency trans-ionospheric path length determination. The algorithm is based solely on the solution of the Eikonal equation, which is solved using the conformal theory of refraction. The main advantages of the algorithm are summarized as follows. First, the algorithm can determine the optical path length without iteratively adjusting both elevation and azimuth angles, and hence the computational time can be reduced. Second, for the same elevation and azimuth angles, the algorithm can simultaneously determine the phase and group optical path lengths of both the ordinary and extraordinary waves for different frequencies. Results from numerical simulations show that the computational time required by the proposed algorithm to accurately determine 8 different optical path lengths is almost 17 times shorter than that required by a 3D ionospheric ray-tracing algorithm. It is found that the computational time to determine multiple optical path lengths is the same as that for determining a single optical path length. It is also found that the proposed algorithm is capable of determining the optical path lengths with millimeter-level accuracy if the magnitude of the squared ratio of the plasma frequency to the transmitted frequency is less than 1.33 × 10^{-3}; hence the proposed algorithm is applicable for geodetic applications.

  12. A synthetic Earth Gravity Model Designed Specifically for Testing Regional Gravimetric Geoid Determination Algorithms

    NASA Astrophysics Data System (ADS)

    Baran, I.; Kuhn, M.; Claessens, S. J.; Featherstone, W. E.; Holmes, S. A.; Vaníček, P.

    2006-04-01

    A synthetic [simulated] Earth gravity model (SEGM) of the geoid, gravity, and topography has been constructed over Australia specifically for validating regional gravimetric geoid determination theories, techniques, and computer software. This regional high-resolution (1-arc-min by 1-arc-min) Australian SEGM (AusSEGM) is a combined source and effect model. The long-wavelength effect part (up to and including spherical harmonic degree and order 360) is taken from an assumed errorless EGM96 global geopotential model. Using forward modelling via numerical Newtonian integration, the short-wavelength source part is computed from a high-resolution (3-arc-sec by 3-arc-sec) synthetic digital elevation model (SDEM), which is a fractal surface based on the GLOBE v1 DEM. All topographic masses are modelled with a constant mass-density of 2,670 kg/m3. Based on these input data, gravity values on the synthetic topography (on a grid and at arbitrarily distributed discrete points) and consistent geoidal heights at regular 1-arc-min geographical grid nodes have been computed. The precision of the synthetic gravity and geoid data (after a first iteration) is estimated to be better than 30 μGal and 3 mm, respectively, which reduces to 1 μGal and 1 mm after a second iteration. The second iteration accounts for the changes in the geoid due to the superposed synthetic topographic mass distribution. The first iteration of AusSEGM is compared with Australian gravity and GPS-levelling data to verify that it gives a realistic representation of the Earth's gravity field. As a by-product of this comparison, AusSEGM gives further evidence of the north-south-trending error in the Australian Height Datum. The freely available AusSEGM-derived gravity and SDEM data, included as Electronic Supplementary Material (ESM) with this paper, can be used to compute a geoid model that, if correct, will agree to within 3 mm with the AusSEGM geoidal heights, thus offering independent verification of theories

  13. ParaDock: a flexible non-specific DNA--rigid protein docking algorithm.

    PubMed

    Banitt, Itamar; Wolfson, Haim J

    2011-11-01

    Accurate prediction of protein-DNA complexes could provide an important stepping stone towards a thorough comprehension of vital intracellular processes. Few attempts have been made to tackle this issue, focusing on binding patch prediction, protein function classification, and distance-constraints-based docking. We introduce ParaDock: a novel ab initio protein-DNA docking algorithm. ParaDock combines short DNA fragments, which have been rigidly docked to the protein based on geometric complementarity, to create bent planar DNA molecules of arbitrary sequence. Our algorithm was tested on the bound and unbound targets of a protein-DNA benchmark comprising 47 complexes. Without addressing protein flexibility or applying any refinement procedure, CAPRI-acceptable solutions were obtained among the 10 top-ranked hypotheses in 83% of the bound complexes and 70% of the unbound. Without requiring prior knowledge of DNA length and sequence, and within <2 h per target on a standard 2.0 GHz single-processor CPU, ParaDock offers a fast ab initio docking solution.

  14. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
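
    A minimal sketch of the idea behind multi-core k-means: the expensive assignment step is split across worker cores and the centroid update is reduced afterwards. The published algorithm is a Java, transactional-memory implementation; this sketch swaps in a Python process pool, and all names and parameters are illustrative.

      # Hypothetical multi-core k-means sketch; not the authors' implementation.
      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def assign_chunk(args):
          chunk, centroids = args
          d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
          return d.argmin(axis=1)                      # nearest centroid per row

      def parallel_kmeans(data, k, n_iter=20, workers=4):
          rng = np.random.default_rng(0)
          centroids = data[rng.choice(len(data), k, replace=False)]
          chunks = np.array_split(data, workers)
          with ProcessPoolExecutor(max_workers=workers) as pool:
              for _ in range(n_iter):
                  labels = np.concatenate(list(pool.map(
                      assign_chunk, [(c, centroids) for c in chunks])))
                  for j in range(k):                   # serial centroid update (cheap)
                      members = data[labels == j]
                      if len(members):
                          centroids[j] = members.mean(axis=0)
          return centroids, labels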

  15. Power spectral density specifications for high-power laser systems

    SciTech Connect

    Lawson, J.K.; Aikens, D.A.; English, R.E. Jr.; Wolfe, C.R.

    1996-04-22

    This paper describes the use of Fourier techniques to characterize the transmitted and reflected wavefront of optical components. Specifically, a power spectral density (PSD) approach is used. High power solid-state lasers exhibit non-linear amplification of specific spatial frequencies. Thus, specifications that limit the amplitude of these spatial frequencies are necessary in the design of these systems. Further, NIF optical components have square, rectangular or irregularly shaped apertures with major dimensions up to 800 mm. Components with non-circular apertures cannot be analyzed correctly with Zernike polynomials since these functions are an orthogonal set for circular apertures only. A more complete and powerful representation of the optical wavefront can be obtained by Fourier analysis in 1 or 2 dimensions. The PSD is obtained from the amplitude of frequency components present in the Fourier spectrum. The shape of a resultant wavefront or the focal spot of a complex multicomponent laser system can be calculated and optimized using PSDs of the individual optical components which comprise the system. Surface roughness can be calculated over a range of spatial scale-lengths by integrating the PSD. Finally, since the optical transfer function (OTF) of the instruments used to measure the wavefront degrades at high spatial frequencies, the PSD of an optical component is underestimated. We can correct for this error by modifying the PSD function to restore high spatial frequency information. The strengths of PSD analysis are leading us to develop optical specifications incorporating this function for the planned National Ignition Facility (NIF).
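
    A generic sketch, not taken from the paper, of how a one-dimensional PSD can be computed from a sampled wavefront profile and integrated over a spatial-frequency band to estimate RMS roughness; the windowing and normalization choices are assumptions.

      # Illustrative 1-D PSD of a wavefront profile; dx is the sample spacing (e.g., mm).
      import numpy as np

      def psd_1d(profile, dx):
          n = len(profile)
          window = np.hanning(n)                          # reduce spectral leakage
          spec = np.fft.rfft((profile - profile.mean()) * window)
          freqs = np.fft.rfftfreq(n, d=dx)                # cycles per unit length
          # one-sided periodogram, normalized so the PSD integrates to the variance
          psd = 2.0 * dx * np.abs(spec) ** 2 / (n * (window ** 2).mean())
          return freqs, psd

      def band_rms(freqs, psd, f_lo, f_hi):
          band = (freqs >= f_lo) & (freqs <= f_hi)
          return np.sqrt(np.trapz(psd[band], freqs[band]))   # RMS roughness in the band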

  16. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications

    DTIC Science & Technology

    2006-03-01

    Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications. Ting Liu, CMU-CS-06-124, March 2006, School of Computer Science, Carnegie Mellon University.

  17. A finite element algorithm for high-lying eigenvalues with Neumann and Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Báez, G.; Méndez-Sánchez, R. A.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We present a finite element algorithm that computes eigenvalues and eigenfunctions of the Laplace operator for two-dimensional problems with homogeneous Neumann or Dirichlet boundary conditions, or combinations of either for different parts of the boundary. We use an inverse power plus Gauss-Seidel algorithm to solve the generalized eigenvalue problem. For Neumann boundary conditions the method is much more efficient than the equivalent finite difference algorithm. We checked the algorithm by comparing the cumulative level density of the spectrum obtained numerically with the theoretical prediction given by the Weyl formula. We found a systematic deviation due to the discretization, not to the algorithm itself.
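
    A minimal sketch of the core iteration described above, assuming the FEM discretization has already produced sparse stiffness and mass matrices K and M (scipy.sparse): shifted inverse power iteration converging to the eigenvalue nearest a chosen shift, which is how high-lying eigenvalues can be targeted. A sparse LU factorization stands in for the paper's Gauss-Seidel inner solver.

      # Shifted inverse power iteration for the generalized eigenproblem K x = lambda M x.
      import numpy as np
      import scipy.sparse.linalg as spla

      def inverse_power(K, M, sigma, tol=1e-10, max_iter=500):
          n = K.shape[0]
          solve = spla.splu((K - sigma * M).tocsc()).solve   # factor (K - sigma*M) once
          x = np.random.default_rng(0).standard_normal(n)
          lam = sigma
          for _ in range(max_iter):
              y = solve(M @ x)                               # y = (K - sigma*M)^{-1} M x
              x = y / np.linalg.norm(y)
              lam_new = (x @ (K @ x)) / (x @ (M @ x))        # Rayleigh quotient estimate
              if abs(lam_new - lam) < tol * abs(lam_new):
                  return lam_new, x
              lam = lam_new
          return lam, x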

  18. Double images encryption method with resistance against the specific attack based on an asymmetric algorithm.

    PubMed

    Wang, Xiaogang; Zhao, Daomu

    2012-05-21

    A double-image encryption technique based on an asymmetric algorithm is proposed. In this method, the encryption process is different from the decryption process, and the encrypting keys are also different from the decrypting keys. In the nonlinear encryption process, the images are encoded into an amplitude cyphertext, and two phase-only masks (POMs) generated based on phase truncation are kept as keys for decryption. By using the classical double random phase encoding (DRPE) system, the primary images can be collected by an intensity detector located at the output plane. The three random POMs that are applied in the asymmetric encryption can be safely used as public keys. Simulation results are presented to demonstrate the validity and security of the proposed protocol.

  19. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  20. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) for decomposing an image into four sub-bands. The sub-band LL is transformed by DCT yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix while a DWT is applied again to the DC-matrix resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates at equivalent perceived quality together with more accurate reconstruction of the 3D models.
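
    A minimal sketch of the transform cascade described above (one DWT level, a block DCT on LL producing a DC-matrix of per-block DC terms and an AC-matrix, then a second DWT on the DC-matrix). PyWavelets and SciPy are assumed stand-ins, the 8x8 block size is an assumption, and the Minimize-Matrix-Size coding and quantization stages are omitted.

      # Decomposition cascade only; entropy coding / Minimize-Matrix-Size steps omitted.
      import numpy as np
      import pywt
      from scipy.fft import dctn

      def block_dct(band, bs=8):
          h, w = (band.shape[0] // bs) * bs, (band.shape[1] // bs) * bs
          blocks = band[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
          coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')
          dc_matrix = coeffs[..., 0, 0]                 # one DC term per block
          ac_matrix = coeffs.copy()
          ac_matrix[..., 0, 0] = 0.0                    # remaining (AC) coefficients
          return dc_matrix, ac_matrix

      def decompose(image):
          LL, highs = pywt.dwt2(image.astype(float), 'db2')     # first DWT level
          dc_matrix, ac_matrix = block_dct(LL)          # AC-matrix goes to the coder
          LL2, highs2 = pywt.dwt2(dc_matrix, 'db2')     # second DWT level on the DC-matrix
          dc2, ac2 = block_dct(LL2)
          return dc2, ac2, highs2, ac_matrix, highs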

  1. GPU-based ray tracing algorithm for high-speed propagation prediction in multiroom indoor environments

    NASA Astrophysics Data System (ADS)

    Guan, Xiaowei; Guo, Lixin; Liu, Zhongyu

    2015-10-01

    A novel ray tracing algorithm for high-speed propagation prediction in multi-room indoor environments is proposed in this paper, whose theoretical foundations are geometrical optics (GO) and the uniform theory of diffraction (UTD). Taking the geometrical and electromagnetic information of the complex indoor scene into account, several acceleration techniques are adopted to raise the efficiency of the ray tracing algorithm. The simulation results indicate that the runtime of the ray tracing algorithm increases sharply when the number of objects in multi-room buildings becomes large. Therefore, GPU acceleration technology is used to solve that problem. Finally, a typical multi-room indoor environment with several objects in each room is simulated by using the serial ray tracing algorithm and the parallel one respectively. The results show that, compared with the serial algorithm, the GPU-based one achieves much greater efficiency.

  2. A High Performance Cloud-Based Protein-Ligand Docking Prediction Algorithm

    PubMed Central

    Chen, Jui-Le; Yang, Chu-Sing

    2013-01-01

    The potential of predicting druggability for a particular disease by integrating biological and computer science technologies has witnessed success in recent years. Although computer science technologies can be used to reduce the costs of pharmaceutical research, the computation time of structure-based protein-ligand docking prediction is still unsatisfactory. Hence, in this paper, a novel docking prediction algorithm, named fast cloud-based protein-ligand docking prediction algorithm (FCPLDPA), is presented to accelerate the docking prediction algorithm. The proposed algorithm works by leveraging two high-performance operators: (1) the novel migration (information exchange) operator is designed specially for cloud-based environments to reduce the computation time; (2) the efficient operator is aimed at filtering out the worst search directions. Our simulation results illustrate that the proposed method outperforms the other docking algorithms compared in this paper in terms of both the computation time and the quality of the end result. PMID:23762864

  3. Algorithms and architectures for high performance analysis of semantic graphs.

    SciTech Connect

    Hendrickson, Bruce Alan

    2005-09-01

    analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.

  4. Method of preparing high specific activity platinum-195m

    SciTech Connect

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-06-15

    A method of preparing high-specific-activity 195mPt includes the steps of: exposing 193Ir to a flux of neutrons sufficient to convert a portion of the 193Ir to 195mPt to form an irradiated material; dissolving the irradiated material to form an intermediate solution comprising Ir and Pt; and separating the Pt from the Ir by cation exchange chromatography to produce 195mPt.

  5. Method for preparing high specific activity 177Lu

    DOEpatents

    Mirzadeh, Saed; Du, Miting; Beets, Arnold L.; Knapp, Jr., Furn F.

    2004-04-06

    A method of separating lutetium from a solution containing Lu and Yb, particularly reactor-produced 177Lu and 177Yb, includes the steps of: providing a chromatographic separation apparatus containing LN resin; loading the apparatus with a solution containing Lu and Yb; and eluting the apparatus to chromatographically separate the Lu and the Yb in order to produce high-specific-activity 177Lu.

  6. Accelerator Production and Separations for High Specific Activity Rhenium-186

    SciTech Connect

    Jurisson, Silvia S.; Wilbur, D. Scott

    2016-04-01

    Tungsten and osmium targets were evaluated for the production of high specific activity rhenium-186. Rhenium-186 has potential applications in radiotherapy for the treatment of a variety of diseases, including targeting with monoclonal antibodies and peptides. Methods were evaluated using tungsten metal, tungsten dioxide, tungsten disulfide and osmium disulfide. Separation of the rhenium-186 produced and recycling of the enriched tungsten-186 and osmium-189 targets were developed.

  7. Solar-powered rocket engine optimization for high specific impulse

    NASA Astrophysics Data System (ADS)

    Pande, J. Bradley

    1993-11-01

    Hercules Aerospace is currently developing a solar-powered rocket engine (SPRE) design optimized for high specific impulse (Isp). The SPRE features a low-loss geometry in its light-gathering cavity, which includes an integral secondary concentrator. The simple one-piece heat exchanger is made from refractory metal and/or ceramic open-celled foam. The foam's high surface-area-to-volume ratio will efficiently transfer the thermal energy to the hydrogen propellant. The single-pass flow of propellant through the heat exchanger further boosts thermal efficiency by regeneratively cooling surfaces near the entrance of the optical cavity. These surfaces would otherwise reradiate a significant portion of the captured solar energy back out of the solar entrance. Such design elements promote a high overall thermal efficiency and hence a high operating Isp.

  8. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    DTIC Science & Technology

    2009-01-30

    couple these elements using Finite-Volume-like surface Riemann solvers. This hybrid, dual-layer design allows DGTD to combine advantages from both approaches.

  9. Phase-unwrapping algorithm for images with high noise content based on a local histogram.

    PubMed

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

    We present a robust algorithm of phase unwrapping that was designed for use on phase images with high noise content. We proceed with the algorithm by first identifying regions with continuous phase values placed between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2pi to unwrap the phase. Image pixels are segmented between interfringe and fringe boundary areas by use of a local histogram of a wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.

  10. Phase-unwrapping algorithm for images with high noise content based on a local histogram

    NASA Astrophysics Data System (ADS)

    Meneses, Jaime; Gharbi, Tijani; Humbert, Philippe

    2005-03-01

    We present a robust algorithm of phase unwrapping that was designed for use on phase images with high noise content. We proceed with the algorithm by first identifying regions with continuous phase values placed between fringe boundaries in an image and then phase shifting the regions with respect to one another by multiples of 2pi to unwrap the phase. Image pixels are segmented between interfringe and fringe boundary areas by use of a local histogram of a wrapped phase. The algorithm has been used successfully to unwrap phase images generated in a three-dimensional shape measurement for noninvasive quantification of human skin structure in dermatology, cosmetology, and plastic surgery.

  11. Application of a Modified Garbage Code Algorithm to Estimate Cause-Specific Mortality and Years of Life Lost in Korea

    PubMed Central

    2016-01-01

    Years of life lost (YLLs) are estimated based on mortality and cause of death (CoD); therefore, it is necessary to accurately calculate CoD to estimate the burden of disease. The garbage code algorithm was developed by the Global Burden of Disease (GBD) Study to redistribute inaccurate CoD and enhance the validity of CoD estimation. This study aimed to estimate cause-specific mortality rates and YLLs in Korea by applying a modified garbage code algorithm. CoD data for 2010–2012 were used to calculate the number of deaths. The garbage code algorithm was then applied to calculate target cause (i.e., valid CoD) and adjusted CoD using the garbage code redistribution. The results showed that garbage code deaths accounted for approximately 25% of all CoD during 2010–2012. In 2012, lung cancer contributed the most to cause-specific death according to Statistics Korea. However, when CoD was adjusted using the garbage code redistribution, ischemic heart disease was the most common CoD. Furthermore, before garbage code redistribution, self-harm contributed the most YLLs followed by lung cancer and liver cancer; after application of the garbage code redistribution, self-harm remained the leading cause of YLL but was followed by ischemic heart disease and lung cancer. Our results showed that garbage code deaths accounted for a substantial amount of mortality and YLLs. The results may enhance our knowledge of burden of disease and help prioritize intervention settings by changing the relative importance of burden of disease. PMID:27775249

  12. A high-resolution algorithm for wave number estimation using holographic array processing

    NASA Astrophysics Data System (ADS)

    Roux, Philippe; Cassereau, Didier; Roux, André

    2004-03-01

    This paper presents an original way to perform wave number inversion from simulated data obtained in a noisy shallow-water environment. In the studied configuration an acoustic source is horizontally towed with respect to a vertical hydrophone array. The inversion is achieved from the combination of three ingredients. First, a modified version of the Prony algorithm is presented and numerical comparison is made to another high-resolution wave number inversion algorithm based on the matrix-pencil technique. Second, knowing that these high-resolution algorithms are classically sensitive to noise, the use of a holographic array processing enables improvement of the signal-to-noise ratio before the inversion is performed. Last, particular care is taken in the representations of the solutions in the wave number space to improve resolution without suffering from aliasing. The dependence of this wave number inversion algorithm on the relevant parameters of the problem is discussed.
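
    For context on the matrix-pencil technique mentioned above, here is a generic sketch (not the authors' code) of estimating wavenumbers from a uniformly sampled vertical field p(z_n) = sum_m a_m exp(i k_m z_n); the pencil parameter and mode count are user choices, and damping is ignored.

      # Illustrative matrix-pencil wavenumber estimator from uniformly spaced samples.
      import numpy as np

      def matrix_pencil(p, dz, n_modes, pencil=None):
          p = np.asarray(p, complex)
          N = len(p)
          L = pencil if pencil is not None else N // 3           # pencil parameter
          Y = np.array([p[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
          U, s, Vh = np.linalg.svd(Y, full_matrices=False)
          V = Vh.conj().T[:, :n_modes]                           # dominant signal subspace
          V0, V1 = V[:-1, :], V[1:, :]                           # shifted sub-matrices
          poles = np.linalg.eigvals(np.linalg.pinv(V0) @ V1)     # exp(i k_m dz)
          return np.angle(poles) / dz                            # estimated wavenumbers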

  13. Generalized computer algorithms for enthalpy, entropy and specific heat of superheated vapors

    NASA Astrophysics Data System (ADS)

    Cowden, Michael W.; Scaringe, Robert P.; Gebre-Amlak, Yonas D.

    This paper presents an innovative technique for the development of enthalpy, entropy, and specific heat correlations in the superheated vapor region. The method results in a prediction error of less than 5 percent and requires the storage of 39 constants for each fluid. These correlations are obtained by using the Beattie-Bridgeman equation of state and a least-squares regression for the coefficients involved.
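
    A generic sketch of the least-squares step mentioned above: fitting a small set of coefficients to tabulated superheated-vapor property data. The two-variable polynomial basis is an assumption for illustration, not the paper's actual Beattie-Bridgeman-derived functional form.

      # Generic least-squares fit of a property h(T, P) to a low-order polynomial basis.
      import numpy as np

      def _basis(T, P, deg):
          T = np.atleast_1d(np.asarray(T, float))
          P = np.atleast_1d(np.asarray(P, float))
          return np.column_stack([T**i * P**j
                                  for i in range(deg + 1) for j in range(deg + 1)])

      def fit_property(T, P, values, deg=2):
          A = _basis(T, P, deg)                                   # design matrix
          coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
          return coeffs

      def eval_property(coeffs, T, P, deg=2):
          return _basis(T, P, deg) @ coeffs                       # correlated property value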

  14. High efficiency cell-specific targeting of cytokine activity

    NASA Astrophysics Data System (ADS)

    Garcin, Geneviève; Paul, Franciane; Staufenbiel, Markus; Bordat, Yann; van der Heyden, José; Wilmes, Stephan; Cartron, Guillaume; Apparailly, Florence; de Koker, Stefaan; Piehler, Jacob; Tavernier, Jan; Uzé, Gilles

    2014-01-01

    Systemic toxicity currently prevents exploiting the huge potential of many cytokines for medical applications. Here we present a novel strategy to engineer immunocytokines with very high targeting efficacies. The method lies in the use of mutants of toxic cytokines that markedly reduce their receptor-binding affinities, and that are thus rendered essentially inactive. Upon fusion to nanobodies specifically binding to marker proteins, activity of these cytokines is selectively restored for cell populations expressing this marker. This ‘activity-by-targeting’ concept was validated for type I interferons and leptin. In the case of interferon, activity can be directed to target cells in vitro and to selected cell populations in mice, with up to 1,000-fold increased specific activity. This targeting strategy holds promise to revitalize the clinical potential of many cytokines.

  15. Cellulose antibody films for highly specific evanescent wave immunosensors

    NASA Astrophysics Data System (ADS)

    Hartmann, Andreas; Bock, Daniel; Jaworek, Thomas; Kaul, Sepp; Schulze, Matthais; Tebbe, H.; Wegner, Gerhard; Seeger, Stefan

    1996-01-01

    For the production of recognition elements for evanescent wave immunosensors, optical waveguides have to be coated with ultrathin, stable antibody films. In the present work, non-amphiphilic alkylated cellulose and copolyglutamate films are tested as monolayer matrices for antibody immobilization using the Langmuir-Blodgett technique. These films are transferred onto optical waveguides and serve as excellent matrices for the immobilization of antibodies in high density and specificity. In addition to the multi-step immobilization of immunoglobulin G (IgG) on photochemically crosslinked and oxidized polymer films, the direct one-step transfer of mixed antibody-polymer films is performed. Both planar waveguides and optical fibers are suitable substrates for the immobilization. The activity and specificity of immobilized antibodies is controlled by the enzyme-linked immunosorbent assay (ELISA) technique. As a result, reduced non-specific interactions between antigens and the substrate surface are observed if cinnamoylbutyether-cellulose is used as the film matrix for the antibody immobilization. Using the evanescent wave sensor (EWS) technology, immunosensor assays are performed in order to determine both the non-specific adsorption of different coated polymethylmethacrylate (PMMA) fibers and the long-term stability of the antibody films. Specificities of one-step transferred IgG-cellulose films are drastically enhanced compared to IgG-copolyglutamate films. Cellulose IgG films are used in enzymatic sandwich assays with mucin as a clinically relevant antigen that is recognized by the antibodies BM2 and BM7. A mucin calibration measurement is recorded. So far the observed detection limit for mucin is about 8 ng/ml.

  16. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare revised algorithm scores and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely autistic group.

  17. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR to a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z Eves tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  18. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, which takes the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting hierarchy architecture. The solution to the main problem can construct the preliminary matching between tasks and observation resource in order to reduce the search space of the original problem. Furthermore, the solution to the sub-problem can detect the key nodes that each airship needs to fly through in sequence, so as to get the cruising path. Firstly, the task set is divided by using k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithm and especially makes a detailed introduction on the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show the proposed models and algorithms are effective and feasible. PMID:23365522

  19. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, which takes the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting hierarchy architecture. The solution to the main problem can construct the preliminary matching between tasks and observation resource in order to reduce the search space of the original problem. Furthermore, the solution to the sub-problem can detect the key nodes that each airship needs to fly through in sequence, so as to get the cruising path. Firstly, the task set is divided by using k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithm and especially makes a detailed introduction on the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show the proposed models and algorithms are effective and feasible.

  20. High Quality Typhoon Cloud Image Restoration by Combining Genetic Algorithm with Contourlet Transform

    SciTech Connect

    Zhang Changjiang; Wang Xiaodong

    2008-11-06

    An efficient typhoon cloud image restoration algorithm is proposed. After applying the contourlet transform to a typhoon cloud image, noise is reduced in the high-frequency sub-bands. A weighted median filter is used to reduce the noise in the contourlet domain, and the inverse contourlet transform is then performed to obtain the de-noised image. In order to enhance the global contrast of the typhoon cloud image, the incomplete Beta transform (IBT) is used to determine a non-linear gray transform curve for the de-noised typhoon cloud image. A genetic algorithm is used to obtain the optimal gray transform curve, with information entropy serving as the fitness function. Experimental results show that the new algorithm is able to enhance the global contrast of the typhoon cloud image while effectively reducing its noise.

  1. The high performance parallel algorithm for Unified Gas-Kinetic Scheme

    NASA Astrophysics Data System (ADS)

    Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu

    2016-11-01

    A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for sum reduction of moment integrals. Numerical results of three-dimensional cavity flow and flow past a sphere agree well with the results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) numbers of processors. The tested speed-up ratio is nearly linear and thus the efficiency is around 1, which reveals the good scalability of the present algorithm.

  2. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    SciTech Connect

    Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.; Park, Haesun

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low-rank factors W and H within the alternating iterations.
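
    A minimal single-node sketch of the alternating nonnegative least squares (ANLS) iteration at the core of such algorithms, solved column-by-column with SciPy's nnls; the distributed-memory/MPI machinery of the actual HPC-NMF algorithm is omitted, and all names are illustrative.

      # ANLS iteration for A ~ W H with W, H >= 0; serial sketch only.
      import numpy as np
      from scipy.optimize import nnls

      def anls_nmf(A, rank, n_iter=50, seed=0):
          m, n = A.shape
          rng = np.random.default_rng(seed)
          W = rng.random((m, rank))
          H = rng.random((rank, n))
          for _ in range(n_iter):
              # fix W, solve min_H ||A - W H||_F with H >= 0, one column at a time
              H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
              # fix H, solve for W by transposing the problem
              W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
          return W, H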

  3. A High-Performance Neural Prosthesis Enabled by Control Algorithm Design

    PubMed Central

    Gilja, Vikash; Nuyujukian, Paul; Chestek, Cindy A.; Cunningham, John P.; Yu, Byron M.; Fan, Joline M.; Churchland, Mark M.; Kaufman, Matthew T.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.

    2012-01-01

    Neural prostheses translate neural activity from the brain into control signals for guiding prosthetic devices, such as computer cursors and robotic limbs, and thus offer disabled patients greater interaction with the world. However, relatively low performance remains a critical barrier to successful clinical translation; current neural prostheses are considerably slower with less accurate control than the native arm. Here we present a new control algorithm, the recalibrated feedback intention-trained Kalman filter (ReFIT-KF), that incorporates assumptions about the nature of closed loop neural prosthetic control. When tested with rhesus monkeys implanted with motor cortical electrode arrays, the ReFIT-KF algorithm outperforms existing neural prostheses in all measured domains and halves acquisition time. This control algorithm permits sustained uninterrupted use for hours and generalizes to more challenging tasks without retraining. Using this algorithm, we demonstrate repeatable high performance for years after implantation across two monkeys, thereby increasing the clinical viability of neural prostheses. PMID:23160043

  4. Comparison between summing-up algorithms to determine areas of small peaks on high baselines

    NASA Astrophysics Data System (ADS)

    Shi, Quanlin; Zhang, Jiamei; Chang, Yongfu; Qian, Shaojun

    2005-12-01

    It is found that the minimum detectable activity (MDA) has the same tendency as the relative standard deviation (RSD), and that a particular application is characterized by the ratio of the peak area to the baseline height. Different applications need different algorithms to reduce the RSD of peak areas or the MDA of potential peaks. A model of Gaussian peaks superposed on linear baselines is established to simulate the multichannel spectrum, and summing-up algorithms such as total peak area (TPA) and Covell and Sterlinski are compared to find the most appropriate algorithm for different applications. The results show that optimal Covell and Sterlinski algorithms yield an MDA or RSD about half that of TPA when the areas of small peaks on high baselines are to be determined. The conclusion is confirmed by experiment.
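
    A minimal sketch of the total-peak-area (TPA) summing-up estimate used as the baseline for comparison: sum the counts in the peak window and subtract a linear baseline estimated from flanking channels. Window widths and the simple Poisson error propagation are illustrative assumptions.

      # Total peak area (TPA) with linear baseline subtraction from flanking regions.
      import numpy as np

      def tpa(counts, lo, hi, n_bg=5):
          """Counts in channels [lo, hi] minus a baseline from n_bg channels on each side."""
          peak = counts[lo:hi + 1].sum()
          left = counts[lo - n_bg:lo].mean()             # baseline height left of the peak
          right = counts[hi + 1:hi + 1 + n_bg].mean()    # baseline height right of the peak
          n_chan = hi - lo + 1
          baseline = 0.5 * (left + right) * n_chan       # trapezoidal baseline under the peak
          area = peak - baseline
          # Poisson variance of the peak counts plus propagated baseline uncertainty
          var = peak + (0.5 * n_chan) ** 2 * (left / n_bg + right / n_bg)
          return area, np.sqrt(var)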

  5. High speed multiplier using Nikhilam Sutra algorithm of Vedic mathematics

    NASA Astrophysics Data System (ADS)

    Pradhan, Manoranjan; Panda, Rutuparna

    2014-03-01

    This article presents the design of a new high-speed multiplier architecture using the Nikhilam Sutra of Vedic mathematics. The proposed multiplier architecture finds the complement of the large operand from its nearest base to perform the multiplication. The multiplication of two large operands is reduced to the multiplication of their complements and an addition. It is more efficient when the magnitudes of both operands are more than half of their maximum values. The carry save adder in the multiplier architecture increases the speed of addition of partial products. The multiplier circuit is synthesised and simulated using Xilinx ISE 10.1 software and implemented on the Spartan 2 FPGA device XC2S30-5pq208. Output parameters such as propagation delay and device utilisation are calculated from the synthesis results. The performance evaluation results in terms of speed and device utilisation are compared with earlier multiplier architectures. The proposed design shows speed improvements compared to multiplier architectures presented in the literature.
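
    An illustrative software model of Nikhilam Sutra multiplication for operands close to a base: compute the deficiencies from the base, cross-subtract to get the left part, and multiply the deficiencies for the right part. The hardware design in the paper operates on binary operands with carry-save adders, which this sketch does not attempt to reproduce.

      # Nikhilam Sutra multiplication for operands near a chosen base.
      def nikhilam_multiply(a, b, base):
          da, db = base - a, base - b          # deficiencies (negative if operand > base)
          left = a - db                        # cross subtraction (equivalently b - da)
          right = da * db                      # product of the deficiencies
          return left * base + right

      # Example: 97 x 96 with base 100 -> (97 - 4) * 100 + 3 * 4 = 9312
      assert nikhilam_multiply(97, 96, 100) == 97 * 96
      assert nikhilam_multiply(103, 104, 100) == 103 * 104   # also works above the base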

  6. Efficiency Analysis of a High-Specific Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Jacobson, David (Technical Monitor); Hofer, Richard R.; Gallimore, Alec D.

    2004-01-01

    Performance and plasma measurements of the high-specific impulse NASA-173Mv2 Hall thruster were analyzed using a phenomenological performance model that accounts for a partially-ionized plasma containing multiply-charged ions. Between discharge voltages of 300 to 900 V, the results showed that although the net decrease of efficiency due to multiply-charged ions was only 1.5 to 3.0 percent, the effects of multiply-charged ions on the ion and electron currents could not be neglected. Between 300 to 900 V, the increase of the discharge current was attributed to the increasing fraction of multiply-charged ions, while the maximum deviation of the electron current from its average value was only +5/-14 percent. These findings revealed how efficient operation at high-specific impulse was enabled through the regulation of the electron current with the applied magnetic field. Between 300 to 900 V, the voltage utilization ranged from 89 to 97 percent, the mass utilization from 86 to 90 percent, and the current utilization from 77 to 81 percent. Therefore, the anode efficiency was largely determined by the current utilization. The electron Hall parameter was nearly constant with voltage, decreasing from an average of 210 at 300 V to an average of 160 between 400 to 900 V. These results confirmed our claim that efficient operation can be achieved only over a limited range of Hall parameters.

  7. A surgeon specific automatic path planning algorithm for deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Dawant, Benoit M.; Pallavaram, Srivatsan; Neimat, Joseph S.; Konrad, Peter E.; D'Haese, Pierre-Francois; Datteri, Ryan D.; Landman, Bennett A.; Noble, Jack H.

    2012-02-01

    In deep brain stimulation surgeries, stimulating electrodes are placed at specific targets in the deep brain to treat neurological disorders. Reaching these targets safely requires avoiding critical structures in the brain. Meticulous planning is required to find a safe path from the cortical surface to the intended target. Choosing a trajectory automatically is difficult because there is little consensus among neurosurgeons on what is optimal. Our goals are to design a path planning system that is able to learn the preferences of individual surgeons and, eventually, to standardize the surgical approach using this learned information. In this work, we take the first step towards these goals, which is to develop a trajectory planning approach that is able to effectively mimic individual surgeons and is designed such that parameters, which potentially can be automatically learned, are used to describe an individual surgeon's preferences. To validate the approach, two neurosurgeons were asked to choose between their manual and a computed trajectory, blinded to their identity. The results of this experiment showed that the neurosurgeons preferred the computed trajectory over their own in 10 out of 40 cases. The computed trajectory was judged to be equivalent to the manual one or otherwise acceptable in 27 of the remaining cases. These results demonstrate the potential clinical utility of computer-assisted path planning.

  8. Hemodynamic Assessment of Compliance of Pre-Stressed Pulmonary Valve-Vasculature in Patient Specific Geometry Using an Inverse Algorithm

    NASA Astrophysics Data System (ADS)

    Hebbar, Ullhas; Paul, Anup; Banerjee, Rupak

    2016-11-01

    Image-based modeling is finding increasing relevance in assisting diagnosis of Pulmonary Valve-Vasculature Dysfunction (PVD) in congenital heart disease patients. This research presents compliant artery - blood interaction in a patient-specific Pulmonary Artery (PA) model. This is an improvement over our previous numerical studies, which assumed rigid-walled arteries. The impedance of the arteries and the energy transfer from the Right Ventricle (RV) to the PA is governed by compliance, which in turn is influenced by the level of pre-stress in the arteries. In order to evaluate the pre-stress, an inverse algorithm was developed using an in-house script written in MATLAB and Python, and implemented using the Finite Element Method (FEM). This analysis used a patient-specific material model developed by our group, in conjunction with measured pressure (invasive) and velocity (non-invasive) values. The analysis was performed on an FEM solver, and preliminary results indicated that the Main PA (MPA) exhibited higher compliance as well as increased hysteresis over the cardiac cycle when compared with the Left PA (LPA). The computed compliance values for the MPA and LPA were 14% and 34% lower than the corresponding measured values. Further, the computed pressure drop and flow waveforms were in close agreement with the measured values. In conclusion, compliant artery - blood interaction models of patient-specific geometries can play an important role in hemodynamics-based diagnosis of PVD.

  9. Specific Heat of High Temperature Superconductors: a Review

    NASA Astrophysics Data System (ADS)

    Junod, Alain

    The following sections are included: * INTRODUCTION * EXPERIMENTAL * LATTICE SPECIFIC HEAT * NORMAL-STATE ELECTRONIC SPECIFIC HEAT * SUPERCONDUCTING STATE * BEHAVIOR AT T→0 * CONCLUSION * ACKNOWLEDGEMENTS * APPENDIX * REFERENCES

  10. A class-based scheduling algorithm with high throughput for optical burst switching networks

    NASA Astrophysics Data System (ADS)

    Wu, Guiling; Chen, Jianping; Li, Xinwan; Wang, Hui

    2005-02-01

    Optical burst switching (OBS) is an efficient and feasible solution for building terabit IP-over-WDM optical networks, employing relatively mature photonic and opto-electronic devices and combining the high bandwidth of optical transmission/switching with the high flexibility of electronic control/processing. The channel scheduling algorithm is one of the key issues related to OBS networks. In this paper, a class-based scheduling algorithm is presented with emphasis on fairly sharing the bandwidth among different services. In the class-based scheduling algorithm, a maximum number of reserved channels and a maximum number of channel searches are introduced for each service based on its class of service, load and available bandwidth resources. The performance of the scheduling algorithm is studied in detail by simulation. The results show that the scheduling algorithm can allocate the bandwidth more fairly among different services and that the total burst loss ratio under high throughput can be lowered at acceptable expense to the delay performance of services with lower delay requirements. Problems related to the burst loss ratio and the delay requirements of different services can thus be solved simultaneously.

  11. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding over H.264/AVC with a quad-tree-based coding unit (CU) structure from size 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode pruning algorithm is presented to selectively check the potential modes and effectively reduce the complexity of computation. Experimental results show that our proposed fast method reduces the computational complexity of the current HM to about 57% in encoding time with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has reasonable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  12. A highly specific coding system for structural chromosomal alterations.

    PubMed

    Martínez-Frías, M L; Martínez-Fernández, M L

    2013-04-01

    The Spanish Collaborative Study of Congenital Malformations (ECEMC, from the name in Spanish) has developed a very simple and highly specific coding system for structural chromosomal alterations. Such a coding system is of particular value at present due to the dramatic increase in the diagnosis of submicroscopic chromosomal deletions and duplications through molecular techniques. In summary, our new coding system allows the characterization of: (a) the type of structural anomaly; (b) the chromosome affected; (c) whether the alteration affects the short and/or the long arm; and (d) whether it is a non-pure dicentric, a non-pure isochromosome, or whether it affects several chromosomes. We show the distribution of 276 newborn patients with these types of chromosomal alterations using their corresponding codes according to our system. We consider that our approach may be useful not only for other registries, but also for laboratories performing these studies to store their results on case series. Therefore, the aim of this article is to describe this coding system and to offer the opportunity for this coding to be applied by others. Moreover, as this is a SYSTEM, rather than a fixed code, it can be implemented with the necessary modifications to include the specific objectives of each program.

  13. A Very-High-Specific-Impulse Relativistic Laser Thruster

    SciTech Connect

    Horisawa, Hideyuki; Kimura, Itsuro

    2008-04-28

    Characteristics of compact laser plasma accelerators utilizing high-power laser and thin-target interaction were reviewed as a potential candidate of future spacecraft thrusters capable of generating relativistic plasma beams for interstellar missions. Based on the special theory of relativity, motion of the relativistic plasma beam exhausted from the thruster was formulated. Relationships of thrust, specific impulse, input power and momentum coupling coefficient for the relativistic plasma thruster were derived. It was shown that under relativistic conditions, the thrust could be extremely large even with a small amount of propellant flow rate. Moreover, it was shown that for a given value of input power thrust tended to approach the value of the photon rocket under the relativistic conditions regardless of the propellant flow rate.

  14. A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas

    SciTech Connect

    Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q

    2007-04-18

    A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.

  15. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable in improving disease gene identification. It is not only due to the fact that disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also due to the fact that gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment takes only about 12.54 seconds. It is better than many existing algorithms. PMID:26399620
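
    A generic sketch of scoring candidate genes with a binary logistic regression over network-derived feature vectors. The paper's feature constructions (F2/F3) and prior probability strategies are not reproduced here, and scikit-learn is an assumed stand-in.

      # Hypothetical candidate-gene scoring with logistic regression; features would be
      # derived from one or more biological networks as in the paper.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def score_candidates(X_train, y_train, X_candidates):
          model = LogisticRegression(max_iter=1000)
          model.fit(X_train, y_train)                     # known disease / non-disease genes
          return model.predict_proba(X_candidates)[:, 1]  # posterior probability of association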

  16. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
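
    A minimal, generic genetic algorithm sketch illustrating the concepts named above (selection, crossover, mutation over a population of bit strings); the operators and parameter values are illustrative, not those of the project's software tool.

      # Toy genetic algorithm: tournament selection, one-point crossover, bit-flip mutation.
      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, n_gen=100,
                            p_cross=0.9, p_mut=0.02, seed=0):
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          best = max(pop, key=fitness)
          for _ in range(n_gen):
              def select():                                  # tournament of size 3
                  return max(rng.sample(pop, 3), key=fitness)
              children = []
              while len(children) < pop_size:
                  p1, p2 = select(), select()
                  if rng.random() < p_cross:                 # one-point crossover
                      cut = rng.randrange(1, n_bits)
                      p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  children += [[1 - g if rng.random() < p_mut else g for g in c]
                               for c in (p1, p2)]            # bit-flip mutation
              pop = children[:pop_size]
              best = max(pop + [best], key=fitness)
          return best

      # Example: maximize the number of ones in the bit string
      print(genetic_algorithm(sum))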

  17. Plasmoid Thruster for High Specific-Impulse Propulsion

    NASA Technical Reports Server (NTRS)

    Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael

    2007-01-01

    A report discusses a new multi-turn, multi-lead design for the first generation PT-1 (Plasmoid Thruster) that produces thrust by expelling plasmas with embedded magnetic fields (plasmoids) at high velocities. This thruster is completely electrodeless, capable of using in-situ resources, and offers efficiencies as high as 70 percent at a specific impulse, I(sub sp), of up to 8,000 s. This unit consists of drive and bias coils wound around a ceramic form, and the capacitor bank and switches are an integral part of the assembly. Multiple thrusters may be ganged to inductively recapture unused energy to boost efficiency and to increase the repetition rate, which, in turn, increases the average thrust of the system. The thruster assembly can use storable propellants such as H2O, ammonia, and NO, among others. Any available propellant gases can be used to produce an I(sub sp) in the range of 2,000 to 8,000 s with a single-stage thruster. These capabilities will allow the transport of greater payloads to outer planets, especially in the case of an I(sub sp) greater than 6,000 s.

  18. Nanoporous ultra-high specific surface inorganic fibres

    NASA Astrophysics Data System (ADS)

    Kanehata, Masaki; Ding, Bin; Shiratori, Seimei

    2007-08-01

    Nanoporous inorganic (silica) nanofibres with ultra-high specific surface have been fabricated by electrospinning the blend solutions of poly(vinyl alcohol) (PVA) and colloidal silica nanoparticles, followed by selective removal of the PVA component. The configurations of the composite and inorganic nanofibres were investigated by changing the average silica particle diameters and the concentrations of colloidal silica particles in polymer solutions. After the removal of PVA by calcination, the fibre shape of pure silica particle assembly was maintained. The nanoporous silica fibres were assembled as a porous membrane with a high surface roughness. From the results of Brunauer-Emmett-Teller (BET) measurements, the BET surface area of inorganic silica nanofibrous membranes was increased with the decrease of the particle diameters. The membrane composed of silica particles with diameters of 15 nm showed the largest BET surface area of 270.3 m2 g-1 and total pore volume of 0.66 cm3 g-1. The physical absorption of methylene blue dye molecules by nanoporous silica membranes was examined using UV-vis spectrometry. Additionally, the porous silica membranes modified with fluoroalkylsilane showed super-hydrophobicity due to their porous structures.

  19. Machine-learning algorithms define pathogen-specific local immune fingerprints in peritoneal dialysis patients with bacterial infections.

    PubMed

    Zhang, Jingjing; Friberg, Ida M; Kift-Morgan, Ann; Parekh, Gita; Morgan, Matt P; Liuzzi, Anna Rita; Lin, Chan-Yu; Donovan, Kieron L; Colmont, Chantal S; Morgan, Peter H; Davis, Paul; Weeks, Ian; Fraser, Donald J; Topley, Nicholas; Eberl, Matthias

    2017-03-16

    The immune system has evolved to sense invading pathogens, control infection, and restore tissue integrity. Despite symptomatic variability in patients, unequivocal evidence that an individual's immune system distinguishes between different organisms and mounts an appropriate response is lacking. We here used a systematic approach to characterize responses to microbiologically well-defined infection in a total of 83 peritoneal dialysis patients on the day of presentation with acute peritonitis. A broad range of cellular and soluble parameters was determined in peritoneal effluents, covering the majority of local immune cells, inflammatory and regulatory cytokines and chemokines as well as tissue damage-related factors. Our analyses, utilizing machine-learning algorithms, demonstrate that different groups of bacteria induce qualitatively distinct local immune fingerprints, with specific biomarker signatures associated with Gram-negative and Gram-positive organisms, and with culture-negative episodes of unclear etiology. Even more, within the Gram-positive group, unique immune biomarker combinations identified streptococcal and non-streptococcal species including coagulase-negative Staphylococcus spp. These findings have diagnostic and prognostic implications by informing patient management and treatment choice at the point of care. Thus, our data establish the power of non-linear mathematical models to analyze complex biomedical datasets and highlight key pathways involved in pathogen-specific immune responses.

  20. MS Amanda, a Universal Identification Algorithm Optimized for High Accuracy Tandem Mass Spectra

    PubMed Central

    2014-01-01

    Today’s highly accurate spectra provided by modern tandem mass spectrometers offer considerable advantages for the analysis of proteomic samples of increased complexity. Among other factors, the quantity of reliably identified peptides is considerably influenced by the peptide identification algorithm. While most widely used search engines were developed when high-resolution mass spectrometry data were not readily available for fragment ion masses, we have designed a scoring algorithm particularly suitable for high mass accuracy. Our algorithm, MS Amanda, is generally applicable to HCD, ETD, and CID fragmentation type data. The algorithm confidently explains more spectra at the same false discovery rate than Mascot or SEQUEST on examined high mass accuracy data sets, with excellent overlap and identical peptide sequence identification for most spectra also explained by Mascot or SEQUEST. MS Amanda, available at http://ms.imp.ac.at/?goto=msamanda, is provided free of charge both as standalone version for integration into custom workflows and as a plugin for the Proteome Discoverer platform. PMID:24909410
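
    A hedged sketch of the core idea behind high-mass-accuracy scoring follows: count the theoretical fragment ions matched by observed peaks within a tight m/z tolerance. This illustrates the general principle only; it is not MS Amanda's actual scoring function.

```python
# Illustrative fragment-matching score for a peptide-spectrum match.
# Not MS Amanda's score; it only shows how a tight m/z tolerance lets
# matched-peak counting discriminate well on high-accuracy spectra.
import bisect

def match_score(theoretical_mz, observed_peaks, tol_ppm=10.0):
    """Count theoretical fragment m/z values matched by observed peaks."""
    observed = sorted(mz for mz, intensity in observed_peaks)
    matched = 0
    for mz in theoretical_mz:
        tol = mz * tol_ppm * 1e-6
        i = bisect.bisect_left(observed, mz)          # closest observed peaks
        candidates = observed[max(i - 1, 0):i + 1]
        if candidates and min(abs(c - mz) for c in candidates) <= tol:
            matched += 1
    return matched

# Toy usage: a few fragment masses against a small peak list
print(match_score([175.119, 303.178, 400.231],
                  [(175.1189, 1e4), (303.1805, 5e3), (512.9, 2e3)]))
```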

  1. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  2. Representation of high frequency Space Shuttle data by ARMA algorithms and random response spectra

    NASA Technical Reports Server (NTRS)

    Spanos, P. D.; Mushung, L. J.

    1990-01-01

    High frequency Space Shuttle lift-off data are treated by autoregressive (AR) and autoregressive-moving-average (ARMA) digital algorithms. These algorithms provide useful information on the spectral densities of the data. Further, they yield spectral models which lend themselves to incorporation into the concept of the random response spectrum. This concept yields a reasonably smooth power spectrum for the design of structural and mechanical systems when the available data bank is limited. Due to the non-stationarity of the lift-off event, the pertinent data are split into three slices. Each of the slices is associated with a rather distinguishable phase of the lift-off event, where stationarity can be expected. The presented results are rather preliminary in nature; the aim is to call attention to the availability of the discussed digital algorithms and to the need to augment the Space Shuttle data bank as more flights are completed.
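
    For readers unfamiliar with AR spectral modeling, the sketch below fits an AR(p) model by the Yule-Walker equations and evaluates its spectral density. It is a minimal illustration of the general AR spectral-modeling idea, not the exact ARMA/random-response-spectrum procedure of the paper.

```python
# Minimal sketch: fit an AR(p) model by Yule-Walker and evaluate its spectral
# density. Illustrates generic AR spectral modeling, not the paper's procedure.
import numpy as np

def yule_walker(x, p):
    x = np.asarray(x, float) - np.mean(x)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:p + 1])          # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:p + 1])       # innovation variance
    return a, sigma2

def ar_psd(a, sigma2, freqs, fs=1.0):
    w = 2 * np.pi * np.asarray(freqs) / fs
    denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, len(a) + 1))) @ a) ** 2
    return sigma2 / (fs * denom)

# Toy usage on synthetic data
x = np.random.default_rng(1).standard_normal(4096)
a, s2 = yule_walker(x, p=8)
psd = ar_psd(a, s2, freqs=np.linspace(0.01, 0.5, 100))
```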

  3. An infrared small target detection algorithm based on high-speed local contrast method

    NASA Astrophysics Data System (ADS)

    Cui, Zheng; Yang, Jingli; Jiang, Shouda; Li, Junbao

    2016-05-01

    Small-target detection in infrared imagery with a complex background is always an important task in remote sensing fields. It is important to improve detection capabilities such as detection rate, false alarm rate, and speed. However, current algorithms usually improve one or two of these capabilities while sacrificing the others. In this letter, an infrared (IR) small target detection algorithm with two layers, inspired by the Human Visual System (HVS), is proposed to balance those detection capabilities. The first layer uses a high-speed simplified local contrast method to select significant information, and the second layer uses a machine-learning classifier to separate targets from background clutter. Experimental results show that the proposed algorithm achieves good performance in detection rate, false alarm rate, and speed simultaneously.
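
    The following is a minimal sketch of a local-contrast map in the spirit of HVS-based small-target detection: the mean of a central cell is compared against the maximum of its surrounding cell means. The window geometry and the exact contrast measure are illustrative assumptions, not the paper's simplified operator.

```python
# Simplified local-contrast map for IR small-target detection: the central
# cell mean is compared against the maximum of the surrounding cell means.
# Window geometry and the contrast ratio are illustrative assumptions.
import numpy as np

def local_contrast_map(img, cell=3):
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(cell, h - 2 * cell):
        for j in range(cell, w - 2 * cell):
            center = img[i:i + cell, j:j + cell].mean()
            neigh = []
            for di in (-cell, 0, cell):
                for dj in (-cell, 0, cell):
                    if di == 0 and dj == 0:
                        continue
                    neigh.append(img[i + di:i + di + cell, j + dj:j + dj + cell].mean())
            out[i, j] = center ** 2 / (max(neigh) + 1e-6)   # enhances bright small targets
    return out
```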

  4. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    SciTech Connect

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.

  5. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
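
    To illustrate the linearization idea mentioned in both records above: for a Gaussian LRF the logarithm of the photon-number profile is quadratic in pixel position, so the peak position follows from a linear least-squares fit. The sketch below shows that generic approach; it is not the exact FluoroBancroft formulation.

```python
# Sketch of linearized position estimation for a Gaussian light-response
# function: ln I(x) is quadratic in x, so a linear least-squares fit gives the
# neutron position as -b/(2c). Illustrative only, not the FluoroBancroft form.
import numpy as np

def gaussian_lrf_position(x_pixels, counts):
    x = np.asarray(x_pixels, float)
    y = np.log(np.clip(np.asarray(counts, float), 1e-12, None))
    A = np.column_stack([np.ones_like(x), x, x ** 2])   # ln I = a + b*x + c*x^2
    a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return -b / (2.0 * c)                                # peak of the Gaussian

# Toy usage: photon-number profile centred at pixel 5.3
x = np.arange(10)
counts = 1000 * np.exp(-(x - 5.3) ** 2 / (2 * 1.2 ** 2))
print(gaussian_lrf_position(x, counts))                  # ~5.3
```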

  6. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off petrol against electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
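
    A minimal genetic-algorithm sketch follows, tuning a single energy-split parameter. The quadratic placeholder fitness stands in for the paper's fuzzy-logic objective; the encoding, population size, and mutation rate are assumptions made for illustration.

```python
# Minimal genetic-algorithm sketch for tuning a petrol/electric split parameter.
# The placeholder fitness stands in for the paper's fuzzy-logic controller;
# population size, mutation rate, and encoding are illustrative assumptions.
import random

def fitness(split):                      # placeholder for the fuzzy-logic objective
    return -(split - 0.37) ** 2          # pretend the optimal split is 0.37

def evolve(pop_size=30, generations=50, mutation=0.1):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                    # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                        # crossover: blend
            child += random.gauss(0, mutation)           # mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # converges near 0.37
```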

  7. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  8. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. The ripple FPN seriously affects the imaging quality of thermal imagers, especially for small-target detection and tracking. It is hard to eliminate this FPN with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail protection and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
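
    A minimal sketch of temporal high-pass nonuniformity correction with a motion threshold follows: a slow per-pixel running estimate of the fixed pattern is subtracted from each frame, and updates are frozen where the scene changes strongly, limiting ghosting. The constants and the exact update rule are assumptions, not the THP&GM parameters.

```python
# Sketch of temporal high-pass nonuniformity correction with an adaptive
# time-domain threshold: a slow per-pixel offset estimate is subtracted from
# each frame; updates are skipped on moving pixels to limit ghosting.
# Constants are illustrative, not the paper's THP&GM parameters.
import numpy as np

def thp_nuc(frames, alpha=0.01, motion_thresh=8.0):
    frames = np.asarray(frames, float)
    offset = np.zeros_like(frames[0])
    prev = frames[0]
    corrected = []
    for f in frames:
        moving = np.abs(f - prev) > motion_thresh          # adaptive threshold
        residual = (f - offset) - (f - offset).mean()      # low-frequency pattern estimate
        offset = np.where(moving, offset, offset + alpha * residual)
        corrected.append(f - offset)
        prev = f
    return np.stack(corrected)
```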

  9. High voltage and high specific capacity dual intercalating electrode Li-ion batteries

    NASA Technical Reports Server (NTRS)

    West, William C. (Inventor); Blanco, Mario (Inventor)

    2010-01-01

    The present invention provides high capacity and high voltage Li-ion batteries that have a carbonaceous cathode and a nonaqueous electrolyte solution comprising LiF salt and an anion receptor that binds the fluoride ion. The batteries can comprise dual intercalating electrode Li ion batteries. Methods of the present invention use a cathode and electrode pair, wherein each of the electrodes reversibly intercalate ions provided by a LiF salt to make a high voltage and high specific capacity dual intercalating electrode Li-ion battery. The present methods and systems provide high-capacity batteries particularly useful in powering devices where minimizing battery mass is important.

  10. Mechanism of substrate selection by a highly specific CRISPR endoribonuclease.

    PubMed

    Sternberg, Samuel H; Haurwitz, Rachel E; Doudna, Jennifer A

    2012-04-01

    Bacteria and archaea possess adaptive immune systems that rely on small RNAs for defense against invasive genetic elements. CRISPR (clustered regularly interspaced short palindromic repeats) genomic loci are transcribed as long precursor RNAs, which must be enzymatically cleaved to generate mature CRISPR-derived RNAs (crRNAs) that serve as guides for foreign nucleic acid targeting and degradation. This processing occurs within the repetitive sequence and is catalyzed by a dedicated Cas6 family member in many CRISPR systems. In Pseudomonas aeruginosa, crRNA biogenesis requires the endoribonuclease Csy4 (Cas6f), which binds and cleaves at the 3' side of a stable RNA stem-loop structure encoded by the CRISPR repeat. We show here that Csy4 recognizes its RNA substrate with an ~50 pM equilibrium dissociation constant, making it one of the highest-affinity protein:RNA interactions of this size reported to date. Tight binding is mediated exclusively by interactions upstream of the scissile phosphate that allow Csy4 to remain bound to its product and thereby sequester the crRNA for downstream targeting. Substrate specificity is achieved by RNA major groove contacts that are highly sensitive to helical geometry, as well as a strict preference for guanosine adjacent to the scissile phosphate in the active site. Collectively, our data highlight diverse modes of substrate recognition employed by Csy4 to enable accurate selection of CRISPR transcripts while avoiding spurious, off-target RNA binding and cleavage.

  11. A high throughput architecture for a low complexity soft-output demapping algorithm

    NASA Astrophysics Data System (ADS)

    Ali, I.; Wasenmüller, U.; Wehn, N.

    2015-11-01

    Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards where each standard defines multiple modulation schemes, there is a need to have an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in double iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms. We implement these architectures on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
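
    For reference, the sketch below computes max-log-MAP LLRs for a Gray-mapped 16-QAM symbol: each bit LLR is the difference of the minimum squared distances to constellation points with that bit equal to 1 and to 0. This is the baseline computation that low-complexity demappers approximate, not the specific architecture implemented in the paper; the Gray labeling used here is one common convention.

```python
# Max-log-MAP soft demapping sketch for Gray-mapped 16-QAM. Shows the
# reference LLR computation that low-complexity demappers approximate.
# The particular Gray labeling is an assumed (common) convention.
import itertools

GRAY_PAM = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}   # Gray mapping per axis

# 16-QAM: bits (b0, b1) map to the I axis, bits (b2, b3) map to the Q axis
CONSTELLATION = {bits: complex(GRAY_PAM[bits[:2]], GRAY_PAM[bits[2:]])
                 for bits in itertools.product((0, 1), repeat=4)}

def max_log_llr(y, noise_var=1.0):
    """LLR_k = log P(b_k=0|y)/P(b_k=1|y), max-log approximation."""
    llrs = []
    for k in range(4):
        d0 = min(abs(y - s) ** 2 for b, s in CONSTELLATION.items() if b[k] == 0)
        d1 = min(abs(y - s) ** 2 for b, s in CONSTELLATION.items() if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

print(max_log_llr(2.8 - 0.9j))   # strongly favours the symbol near 3 - 1j
```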

  12. Identification by ultrasound evaluation of the carotid and femoral arteries of high-risk subjects missed by three validated cardiovascular disease risk algorithms.

    PubMed

    Postley, John E; Luo, Yanting; Wong, Nathan D; Gardin, Julius M

    2015-11-15

    Atherosclerotic cardiovascular disease (ASCVD) events are the leading cause of death in the United States and globally. Traditional global risk algorithms may miss 50% of patients who experience ASCVD events. Noninvasive ultrasound evaluation of the carotid and femoral arteries can identify subjects at high risk for ASCVD events. We examined the ability of different global risk algorithms to identify subjects with femoral and/or carotid plaques found by ultrasound. The study population consisted of 1,464 asymptomatic adults (39.8% women) aged 23 to 87 years without previous evidence of ASCVD who had ultrasound evaluation of the carotid and femoral arteries. Three ASCVD risk algorithms (10-year Framingham Risk Score [FRS], 30-year FRS, and lifetime risk) were compared for the 939 subjects who met the algorithm age criteria. The frequency of femoral plaque as the only plaque was 18.3% in the total group and 14.8% in the risk algorithm groups (n = 939) without a significant difference between genders in frequency of femoral plaque as the only plaque. The lifetime risk algorithm identified the largest proportion of men and women with plaques, whether femoral or carotid (59% and 55%), but had lower specificity because the proportion of subjects in its high-risk group who actually had plaques (50% and 35%) was lower than in the high-risk groups defined by the FRS algorithms. In conclusion, ultrasound evaluation of the carotid and femoral arteries can identify subjects at risk of ASCVD events missed by traditional risk-predicting algorithms. The large proportion of subjects with femoral plaque only supports the use of including both femoral and carotid arteries in ultrasound evaluation.

  13. Trajectory Specification for High-Capacity Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    2004-01-01

    In the current air traffic management system, the fundamental limitation on airspace capacity is the cognitive ability of human air traffic controllers to maintain safe separation with high reliability. The doubling or tripling of airspace capacity that will be needed over the next couple of decades will require that tactical separation be at least partially automated. Standardized conflict-free four-dimensional trajectory assignment will be needed to accomplish that objective. A trajectory specification format based on the Extensible Markup Language is proposed for that purpose. This format can be used to downlink a trajectory request, which can then be checked on the ground for conflicts and approved or modified, if necessary, then uplinked as the assigned trajectory. The horizontal path is specified as a series of geodetic waypoints connected by great circles, and the great-circle segments are connected by turns of specified radius. Vertical profiles for climb and descent are specified as low-order polynomial functions of along-track position, which is itself specified as a function of time. Flight technical error tolerances in the along-track, cross-track, and vertical axes define a bounding space around the reference trajectory, and conformance will guarantee the required separation for a period of time known as the conflict time horizon. An important safety benefit of this regimen is that the traffic will be able to fly free of conflicts for at least several minutes even if all ground systems and the entire communication infrastructure fail. Periodic updates in the along-track axis will adjust for errors in the predicted along-track winds.

  14. Low-complexity, high-speed, and high-dynamic range time-to-impact algorithm

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-10-01

    We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation that is the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to implementation in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which will give the time-to-impact as well as the possible transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be achieved at a rate of 10 kHz with today's technology.
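
    The underlying time-to-impact relation can be sketched in a few lines: for a tracked feature at distance x from the focus of expansion, expanding at rate dx/dt, the time to impact is approximately x / (dx/dt). The snippet below assumes the feature tracking and the focus-of-expansion location are already given; it is not the NSIP implementation.

```python
# Sketch of 1-D time-to-impact estimation: for a feature at distance x from
# the focus of expansion (FOE), moving outward at dx/dt, tau ~ x / (dx/dt).
# Frame-to-frame tracking and the FOE location are assumed given here.
def time_to_impact(x_prev, x_curr, frame_dt):
    """x_prev, x_curr: feature distances from the FOE (pixels); frame_dt in s."""
    dx_dt = (x_curr - x_prev) / frame_dt
    if dx_dt <= 0:
        return float("inf")          # not approaching
    return x_curr / dx_dt            # seconds to impact

# Feature 40 px from the FOE, expanding by 0.5 px per 0.1 ms frame (10 kHz)
print(time_to_impact(39.5, 40.0, 1e-4))   # ~0.008 s
```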

  15. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo method often remains the method of choice because its convergence rate O(n^{-1/2}), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which is for the situations where an exact SDR subspace exists, is proved to converge at rate O(n^{-1}), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain could still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
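
    The control-variate idea behind the second algorithm can be illustrated with a toy example: a cheap reduced model g with a known (or precomputed) mean is used to reduce the variance of the Monte Carlo estimate of the expensive model f. The models below are stand-ins, not the paper's test problems.

```python
# Control-variate Monte Carlo sketch (the idea behind the second IRUQ
# algorithm): a cheap reduced model g with known mean reduces the variance
# of the MC estimate for the expensive model f. Both models are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # "expensive" high-dimensional model (toy)
    return np.sin(x[:, 0]) + 0.1 * x[:, 1:].sum(axis=1)

def g(x):            # reduced model depending on the dominant direction only
    return np.sin(x[:, 0])

mu_g = 0.0           # E[g] for x0 ~ N(0,1), assumed known/precomputed

X = rng.standard_normal((10_000, 20))
fX, gX = f(X), g(X)
c = np.cov(fX, gX)[0, 1] / np.var(gX)        # optimal control-variate coefficient
estimate = fX.mean() - c * (gX.mean() - mu_g)
print("plain MC:", fX.mean(), " control variate:", estimate)
```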

  16. A novel robust and efficient algorithm for charge particle tracking in high background flux

    NASA Astrophysics Data System (ADS)

    Fanelli, C.; Cisbani, E.; Del Dotto, A.

    2015-05-01

    The high luminosity that will be reached in the new generation of High Energy Particle and Nuclear physics experiments implies high background rates and large tracker occupancy, therefore representing a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity of up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in the Hall A of JLab. Within this context, we developed a new tracking algorithm, based on a multistep approach: (i) all hardware - time and charge - information is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results.
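
    To make step (iii) concrete, the sketch below runs a minimal Kalman filter for a straight track (position and slope) measured at successive detector planes. The plane spacing, noise values, and linear model are illustrative assumptions, not the GEM spectrometer configuration, and no smoother pass is shown.

```python
# Minimal Kalman-filter sketch for a straight track (position, slope) seen
# at successive detector planes. Spacing, noise, and the linear model are
# illustrative assumptions, not the GEM tracker configuration.
import numpy as np

def kalman_track_fit(hits, plane_spacing=10.0, meas_sigma=0.04):
    F = np.array([[1.0, plane_spacing], [0.0, 1.0]])   # state: (position, slope)
    H = np.array([[1.0, 0.0]])                          # only position is measured
    Q = np.diag([1e-6, 1e-6])                           # small process noise (scattering)
    R = np.array([[meas_sigma ** 2]])
    x = np.array([hits[0], 0.0])
    P = np.diag([meas_sigma ** 2, 1.0])
    for z in hits[1:]:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)             # update
        P = (np.eye(2) - K @ H) @ P
    return x, P

state, cov = kalman_track_fit([0.02, 0.51, 1.03, 1.49])   # hits on 4 planes (cm)
print(state)   # fitted (position at last plane, track slope)
```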

  17. A Jitter-Mitigating High Gain Antenna Pointing Algorithm for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia; Blaurock, Carl

    2007-01-01

    This paper details a High Gain Antenna (HGA) pointing algorithm which mitigates jitter during the motion of the antennas on the Solar Dynamics Observatory (SDO) spacecraft. SDO has two HGAs which point towards the Earth and send data to a ground station at a high rate. These antennas are required to track the ground station during the spacecraft Inertial and Science modes, which include periods of inertial Sunpointing as well as calibration slews. The HGAs also experience handoff seasons, where the antennas trade off between pointing at the ground station and pointing away from the Earth. The science instruments on SDO require fine Sun pointing and have a very low jitter tolerance. Analysis showed that the nominal tracking and slewing motions of the antennas cause enough jitter to exceed the HGA portion of the jitter budget. The HGA pointing control algorithm was expanded from its original form as a means to mitigate the jitter.

  18. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs by minimizing the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.

  19. Algorithm for Automatic Behavior Quantification of Laboratory Mice Using High-Frame-Rate Videos

    NASA Astrophysics Data System (ADS)

    Nie, Yuman; Takaki, Takeshi; Ishii, Idaku; Matsuda, Hiroshi

    In this paper, we propose an algorithm for automatic behavior quantification in laboratory mice to quantify several model behaviors. The algorithm can detect repetitive motions of the fore- or hind-limbs at several or dozens of hertz, which are too rapid for the naked eye, from high-frame-rate video images. Multiple repetitive motions can always be identified from periodic frame-differential image features in four segmented regions — the head, left side, right side, and tail. Even when a mouse changes its posture and orientation relative to the camera, these features can still be extracted from the shift- and orientation-invariant shape of the mouse silhouette by using the polar coordinate system and adjusting the angle coordinate according to the head and tail positions. The effectiveness of the algorithm is evaluated by analyzing long-term 240-fps videos of four laboratory mice for six typical model behaviors: moving, rearing, immobility, head grooming, left-side scratching, and right-side scratching. The time durations for the model behaviors determined by the algorithm have detection/correction ratios greater than 80% for all the model behaviors. This shows good quantification results for actual animal testing.

  20. An efficient and high performance linear recursive variable expansion implementation of the Smith-Waterman algorithm.

    PubMed

    Hasan, Laiq; Al-Ars, Zaid

    2009-01-01

    In this paper, we present an efficient and high performance linear recursive variable expansion (RVE) implementation of the Smith-Waterman (S-W) algorithm and compare it with a traditional linear systolic array implementation. The results demonstrate that the linear RVE implementation performs up to 2.33 times better than the traditional linear systolic array implementation, at the cost of utilizing 2 times more resources.
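
    For context, the sketch below is a plain reference implementation of the Smith-Waterman local-alignment recurrence, the computation that both the linear RVE and systolic-array hardware accelerate. The match/mismatch/gap scores are common defaults, not necessarily those of the cited implementation.

```python
# Reference Smith-Waterman local-alignment score (the recurrence that the
# linear RVE and systolic-array hardware accelerate). Scoring values are
# common defaults, not necessarily those of the cited implementation.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GGTTGACTA", "TGTTACGG"))   # best local alignment score
```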

  1. Development and Characterization of High-Efficiency, High-Specific Impulse Xenon Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Hofer, Richard R.; Jacobson, David (Technical Monitor)

    2004-01-01

    This dissertation presents research aimed at extending the efficient operation of 1600 s specific impulse Hall thruster technology to the 2000 to 3000 s range. Motivated by previous industry efforts and mission studies, the aim of this research was to develop and characterize xenon Hall thrusters capable of both high-specific impulse and high-efficiency operation. During the development phase, the laboratory-model NASA 173M Hall thrusters were designed and their performance and plasma characteristics were evaluated. Experiments with the NASA-173M version 1 (v1) validated the plasma lens magnetic field design. Experiments with the NASA 173M version 2 (v2) showed there was a minimum current density and optimum magnetic field topography at which efficiency monotonically increased with voltage. Comparison of the thrusters showed that efficiency can be optimized for specific impulse by varying the plasma lens. During the characterization phase, additional plasma properties of the NASA 173Mv2 were measured and a performance model was derived. Results from the model and experimental data showed how efficient operation at high-specific impulse was enabled through regulation of the electron current with the magnetic field. The electron Hall parameter was approximately constant with voltage, which confirmed efficient operation can be realized only over a limited range of Hall parameters.

  2. Connected Component Labeling algorithm for very complex and high-resolution images on an FPGA platform

    NASA Astrophysics Data System (ADS)

    Schwenk, Kurt; Huber, Felix

    2015-10-01

    Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs were utilised for achieving high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented, which was specially designed to process high resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most of the FPGA CCL implementations are restricted to low or medium resolution images (≤ 2048 ∗ 2048 pixels) with lower complexity, where the fastest implementations do not create a label mask. Instead, they extract object features like size and position directly, which can be realized with high performance and perfectly suits the need for many video applications. Since these restrictions are incompatible with the requirements to label high resolution images with highly complex structures and the need for generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail. Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded
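
    As a software baseline for the two-pass approach described above, the sketch below labels 4-connected components and resolves label equivalences with union-find, producing a label mask. The FPGA pipeline restructures this for streaming, but the labeling rule is the same idea.

```python
# Minimal two-pass connected-component labeling (4-connectivity) producing a
# label mask; union-find resolves the equivalences found in the first pass.
# Software baseline only, not the stop-and-go FPGA pipeline of the paper.
import numpy as np

def two_pass_ccl(binary):
    binary = np.asarray(binary, bool)
    labels = np.zeros(binary.shape, int)
    parent = [0]                                  # union-find forest; 0 = background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path compression
            x = parent[x]
        return x

    next_label = 1
    h, w = binary.shape
    for i in range(h):                            # first pass: provisional labels
        for j in range(w):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i else 0
            left = labels[i, j - 1] if j else 0
            neighbours = [l for l in (up, left) if l]
            if not neighbours:
                parent.append(next_label)
                labels[i, j] = next_label
                next_label += 1
            else:
                m = min(find(l) for l in neighbours)
                labels[i, j] = m
                for l in neighbours:              # record equivalences
                    parent[find(l)] = m
    for i in range(h):                            # second pass: resolve equivalences
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels

print(two_pass_ccl([[1, 0, 1],
                    [1, 0, 1],
                    [1, 1, 1]]))                  # one connected component
```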

  3. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping

    2014-10-01

    Migration through resolution cells (MTRC) is generated in high-resolution inverse synthetic aperture radar (ISAR) imaging. An MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. First, assuming the rigid-body target flies stably, initial values of the target's rotation angle and center are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution based on a minimum-entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches: one along the rotation angle and the other along the rotation center. Each 1-D search is realized using the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied to compensate the MTRC using the optimized rotation angle and center. After MTRC compensation, the ISAR image is best focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
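
    The golden-section search used for each 1-D step is a standard derivative-free minimizer; the sketch below shows it with a toy objective standing in for the ISAR image-entropy cost.

```python
# Generic golden-section search for a 1-D minimum, as used for the
# rotation-angle and rotation-center searches. The objective below is a toy
# placeholder for the ISAR image-entropy cost.
import math

def golden_section_min(f, lo, hi, tol=1e-5):
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                 # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                           # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Placeholder "entropy vs rotation angle" cost with a minimum near 0.31 rad
print(golden_section_min(lambda ang: (ang - 0.31) ** 2 + 1.0, 0.0, 1.0))
```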

  4. Design and algorithm research of high precision airborne infrared touch screen

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan

    2016-10-01

    Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, while the impeded state is recorded as 1. Then, an oblique-scan method is used, in which the light of one emitting tube is received by five receiving tubes. The impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by arithmetic averaging. The extended-axis positioning algorithm maintains high precision when individual infrared tubes fail, with only a slight loss of precision. The experimental results show that in 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. We conclude that the extended-axis algorithm offers high precision, little impact from individual tube failures, and ease of use.

  5. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  6. TEA HF laser with a high specific radiation energy

    NASA Astrophysics Data System (ADS)

    Puchikin, A. V.; Andreev, M. V.; Losev, V. F.; Panchenko, Yu. N.

    2017-01-01

    Results of experimental studies of a chemical HF laser with a non-chain reaction are presented. A total laser efficiency of 5% is shown to be attainable when a traditional C-to-C pumping circuit with a charging voltage of 20-24 kV is used. It is experimentally shown that a specific radiation output energy of 21 J/l is reached at a specific pump energy of 350 J/l in an SF6/H2 = 14/1 mixture at a total pressure of 0.27 bar.

  7. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    SciTech Connect

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  8. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Alm Carlsson, Gudrun

    2013-04-01

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from 125I, 169Yb and 192Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.

  9. Specification of absorbed dose to water using model-based dose calculation algorithms for treatment planning in brachytherapy.

    PubMed

    Tedgren, Åsa Carlsson; Carlsson, Gudrun Alm

    2013-04-21

    Model-based dose calculation algorithms (MBDCAs), recently introduced in treatment planning systems (TPS) for brachytherapy, calculate tissue absorbed doses. In the TPS framework, doses have hitherto been reported as dose to water and water may still be preferred as a dose specification medium. Dose to tissue medium Dmed then needs to be converted into dose to water in tissue Dw,med. Methods to calculate absorbed dose to differently sized water compartments/cavities inside tissue, infinitesimal (used for definition of absorbed dose), small, large or intermediate, are reviewed. Burlin theory is applied to estimate photon energies at which cavity sizes in the range 1 nm-10 mm can be considered small or large. Photon and electron energy spectra are calculated at 1 cm distance from the central axis in cylindrical phantoms of bone, muscle and adipose tissue for 20, 50, 300 keV photons and photons from (125)I, (169)Yb and (192)Ir sources; ratios of mass-collision-stopping powers and mass energy absorption coefficients are calculated as applicable to convert Dmed into Dw,med for small and large cavities. Results show that 1-10 nm sized cavities are small at all investigated photon energies; 100 µm cavities are large only at photon energies <20 keV. A choice of an appropriate conversion coefficient Dw,med/Dmed is discussed in terms of the cavity size in relation to the size of important cellular targets. Free radicals from DNA bound water of nanometre dimensions contribute to DNA damage and cell killing and may be the most important water compartment in cells, implying use of ratios of mass-collision-stopping powers for converting Dmed into Dw,med.
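
    As a hedged summary of the generic cavity-theory conversions discussed in the two records above (stated in standard textbook form, not the papers' exact notation): the small-cavity conversion uses the ratio of mass collision stopping powers, the large-cavity conversion uses the ratio of mass energy absorption coefficients, and Burlin theory interpolates between them.

```latex
% Generic cavity-theory conversions (hedged summary, not the papers' notation).
D_{w,\mathrm{med}} \approx
\begin{cases}
D_\mathrm{med}\,\left(\bar{S}_\mathrm{col}/\rho\right)^{w}_{\mathrm{med}} & \text{small cavity}\\[4pt]
D_\mathrm{med}\,\left(\bar{\mu}_\mathrm{en}/\rho\right)^{w}_{\mathrm{med}} & \text{large cavity}\\[4pt]
D_\mathrm{med}\left[\,d\,\left(\bar{S}_\mathrm{col}/\rho\right)^{w}_{\mathrm{med}}
  + (1-d)\,\left(\bar{\mu}_\mathrm{en}/\rho\right)^{w}_{\mathrm{med}}\right] & \text{intermediate (Burlin, } 0 \le d \le 1\text{)}
\end{cases}
```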

  10. Specification of High Activity Gamma-Ray Sources.

    ERIC Educational Resources Information Center

    International Commission on Radiation Units and Measurements, Washington, DC.

    The report is concerned with making recommendations for the specifications of gamma ray sources, which relate to the quantity of radioactive material and the radiation emitted. Primary consideration is given to sources in teletherapy and to a lesser extent those used in industrial radiography and in irradiation units used in industry and research.…

  11. A high precision position sensor design and its signal processing algorithm for a maglev train.

    PubMed

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is determined and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run.

  12. A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train

    PubMed Central

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performances and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is determined and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by the experiments on a test train during a long-term test run. PMID:22778582
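
    To illustrate the tracking-differentiator concept mentioned in both records above, the sketch below uses a simple linear second-order TD: x1 tracks the input and x2 approximates its derivative. The sensor in the papers uses a nonlinear optimal-control TD, so this is only a conceptual stand-in; the gain r and step h are illustrative.

```python
# Linear second-order tracking-differentiator sketch: x1 tracks the input and
# x2 estimates its derivative. The actual sensor uses a nonlinear
# optimal-control TD; this only illustrates the concept. r sets the speed.
def tracking_differentiator(v, h=1e-3, r=200.0):
    x1, x2 = v[0], 0.0
    out = []
    for vk in v:
        x1, x2 = (x1 + h * x2,
                  x2 + h * (-r * r * (x1 - vk) - 2.0 * r * x2))
        out.append((x1, x2))            # (filtered signal, estimated derivative)
    return out

# Toy usage: noisy ramp of slope 5
import random
signal = [5.0 * k * 1e-3 + random.gauss(0, 0.01) for k in range(2000)]
estimates = tracking_differentiator(signal)
print(estimates[-1])                    # derivative estimate near 5
```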

  13. A grid algorithm for high throughput fitting of dose-response curve data.

    PubMed

    Wang, Yuhong; Jadhav, Ajit; Southal, Noel; Huang, Ruili; Nguyen, Dac-Trung

    2010-10-21

    We describe a novel algorithm, the Grid algorithm, and the corresponding computer program for high throughput fitting of dose-response curves that are described by the four-parameter symmetric logistic dose-response model. The Grid algorithm searches through all points in a grid of four dimensions (parameters) and finds the optimum one that corresponds to the best fit. Using simulated dose-response curves, we examined the Grid program's performance in reproducing the actual values that were used to generate the simulated data and compared it with the DRC package for the language and environment R and the XLfit add-in for Microsoft Excel. The Grid program was robust and consistently recovered the actual values for both complete and partial curves with or without noise. Both DRC and XLfit performed well on data without noise, but they were sensitive to noise and their performance degraded rapidly as it increased. The Grid program is automated and scalable to millions of dose-response curves, and it is able to process 100,000 dose-response curves from a high-throughput screening experiment per CPU hour. The Grid program has the potential of greatly increasing the productivity of large-scale dose-response data analysis and early drug discovery processes, and it is also applicable to many other curve fitting problems in chemical, biological, and medical sciences.
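
    The sketch below shows the basic idea of grid-based fitting of the four-parameter logistic (4PL) model: evaluate the sum of squared errors on a coarse parameter grid and keep the best point. The grid ranges and densities are placeholders, far coarser than the published Grid program's.

```python
# Coarse grid-search sketch for fitting the four-parameter logistic (4PL)
# dose-response model by least squares. Grid ranges and densities are
# placeholders, much coarser than the published Grid program's.
import numpy as np

def four_pl(x, bottom, top, log_ec50, hill):
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - x) * hill))

def grid_fit(log_dose, response):
    grids = (np.linspace(0, 20, 11),        # bottom
             np.linspace(80, 120, 11),      # top
             np.linspace(-9, -5, 21),       # log EC50
             np.linspace(0.5, 3, 11))       # Hill slope
    best, best_sse = None, np.inf
    for b in grids[0]:
        for t in grids[1]:
            for e in grids[2]:
                for hl in grids[3]:
                    sse = np.sum((response - four_pl(log_dose, b, t, e, hl)) ** 2)
                    if sse < best_sse:
                        best, best_sse = (b, t, e, hl), sse
    return best, best_sse

# Toy usage: recover parameters from a noisy simulated curve
x = np.linspace(-9, -5, 8)                              # log molar doses
y = four_pl(x, 2, 100, -7, 1.2) + np.random.default_rng(0).normal(0, 2, x.size)
print(grid_fit(x, y))
```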

  14. A High-Order Statistical Tensor Based Algorithm for Anomaly Detection in Hyperspectral Imagery

    PubMed Central

    Geng, Xiurui; Sun, Kang; Ji, Luyan; Zhao, Yongchao

    2014-01-01

    Recently, high-order statistics have received more and more interest in the field of hyperspectral anomaly detection. However, most of the existing high-order statistics based anomaly detection methods require stepwise iterations since they are direct applications of blind source separation. Moreover, these methods usually produce multiple detection maps rather than a single anomaly distribution image. In this study, we exploit the concept of the coskewness tensor and propose a new anomaly detection method, which is called COSD (coskewness detector). COSD does not need iteration and can produce a single detection map. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm. PMID:25366706

  15. [Structural characteristics providing for high specificity of enteropeptidase].

    PubMed

    Mikhaĭlova, A G; Rumsh, L D

    1998-04-01

    The effects of structural modification upon the specificity of enteropeptidase were studied. A variation in the unique specificity of the enzyme was shown to be the result of an autolysis caused by the enzyme's loss of calcium ions. The cleavage sites of the autolysis were determined. A truncated enzyme containing the C-terminal fragment of its heavy chain (466-800 residues) and the intact light chain were shown to be the products of autolysis. The kinetic parameters of the hydrolysis of trypsinogen, a recombinant protein, and a peptide substrate with both forms of enteropeptidase were determined. Conditions were found that can help regulate the transition of the native enzyme into the truncated form. A hypothesis was proposed concerning the autoactivational character of proenteropeptidase processing.

  16. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  17. A high-accuracy signal processing algorithm for frequency scanned interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Yang, Liangen; Wang, Xuanze; Zhai, Zhongsheng; Liu, Wenchao

    2013-10-01

    A high-accuracy signal processing algorithm was designed for the absolute distance measurement system performed with frequency scanned interferometry. The system uses frequency-modulated laser as light source and consists of two interferometers: the reference interferometer is used to compensate the errors and the measurement interferometer is used to measure the displacement. The reference interferometer and the measurement interferometer are used to measure synchronously. The principle of the measuring system and the current modulation circuit were presented. The smoothing convolution was used for processing the signals. The optical path difference of the reference interferometer has been calibrated, so the absolute distance can be measured by acquiring the phase information extracted from interference signals produced while scanning the laser frequency. Finally, measurement results of absolute distances ranging from 0.1m to 0.5m were presented. The experimental results demonstrated that the proposed algorithm had major computing advantages.

  18. Coaxial plasma thrusters for high specific impulse propulsion

    NASA Technical Reports Server (NTRS)

    Schoenberg, Kurt F.; Gerwin, Richard A.; Barnes, Cris W.; Henins, Ivars; Mayo, Robert; Moses, Ronald, Jr.; Scarberry, Richard; Wurden, Glen

    1991-01-01

    A fundamental basis for coaxial plasma thruster performance is presented and the steady-state, ideal MHD properties of a coaxial thruster using an annular magnetic nozzle are discussed. Formulas for power usage, thrust, mass flow rate, and specific impulse are acquired and employed to assess thruster performance. The performance estimates are compared with the observed properties of an unoptimized coaxial plasma gun. These comparisons support the hypothesis that ideal MHD has an important role in coaxial plasma thruster dynamics.

  19. Enhanced high dynamic range 3D shape measurement based on generalized phase-shifting algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Minmin; Du, Guangliang; Zhou, Canlin; Zhang, Chaorui; Si, Shuchun; Li, Hui; Lei, Zhenkun; Li, YanJie

    2017-02-01

    Measuring objects with large reflectivity variations across their surface is one of the open challenges in phase measurement profilometry (PMP). Saturated or dark pixels in the deformed fringe patterns captured by the camera will lead to phase fluctuations and errors. Jiang et al. proposed a high dynamic range real-time three-dimensional (3D) shape measurement method (Jiang et al., 2016) [17] that does not require changing camera exposures. Three inverted phase-shifted fringe patterns are used to complement three regular phase-shifted fringe patterns for phase retrieval whenever any of the regular fringe patterns are saturated. Nonetheless, Jiang's method has some drawbacks: (1) the phases of saturated pixels are estimated by different formulas on a case by case basis; in other words, the method lacks a universal formula; (2) it cannot be extended to the four-step phase-shifting algorithm, because inverted fringe patterns are the repetition of regular fringe patterns; (3) for every pixel in the fringe patterns, only three unsaturated intensity values can be chosen for phase demodulation, leaving the other unsaturated ones idle. We propose a method to enhance high dynamic range 3D shape measurement based on a generalized phase-shifting algorithm, which combines the complementary techniques of inverted and regular fringe patterns with a generalized phase-shifting algorithm. Firstly, two sets of complementary phase-shifted fringe patterns, namely the regular and the inverted fringe patterns, are projected and collected. Then, all unsaturated intensity values at the same camera pixel from two sets of fringe patterns are selected and employed to retrieve the phase using a generalized phase-shifting algorithm. Finally, simulations and experiments are conducted to prove the validity of the proposed method. The results are analyzed and compared with those of Jiang's method, demonstrating that our method not only expands the scope of Jiang's method, but also improves
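
    The core of a generalized phase-shifting algorithm can be sketched as a least-squares phase retrieval from whatever subset of unsaturated phase-shifted intensities is available at a pixel. The fringe model and shift values below are the usual sinusoidal assumptions; the saturation masking and fringe-inversion bookkeeping of the proposed method are not reproduced here.

```python
# Least-squares phase retrieval from any subset of unsaturated phase-shifted
# intensities I_k = A + B*cos(phi + delta_k): solve for (A, B*cos(phi),
# B*sin(phi)) and take atan2. Generic generalized phase-shifting step only.
import numpy as np

def generalized_phase(intensities, deltas):
    I = np.asarray(intensities, float)
    d = np.asarray(deltas, float)
    M = np.column_stack([np.ones_like(d), np.cos(d), -np.sin(d)])
    a, b_cos, b_sin = np.linalg.lstsq(M, I, rcond=None)[0]
    return np.arctan2(b_sin, b_cos)          # wrapped phase

# Toy pixel: phi = 1.0 rad, using 4 unsaturated shifts (others assumed discarded)
phi_true, deltas = 1.0, np.array([0, np.pi / 3, np.pi, 4 * np.pi / 3])
I = 120 + 80 * np.cos(phi_true + deltas)
print(generalized_phase(I, deltas))          # ~1.0
```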

  20. High-resolution algorithms for the Navier-Stokes equations for generalized discretizations

    NASA Astrophysics Data System (ADS)

    Mitchell, Curtis Randall

    Accurate finite volume solution algorithms for the two dimensional Navier Stokes equations and the three dimensional Euler equations for both structured and unstructured grid topologies are presented. Results for two dimensional quadrilateral and triangular elements and three dimensional tetrahedral elements will be provided. Fundamental to the solution algorithm is a technique for generating multidimensional polynomials which model the spatial variation of the flow variables. Cell averaged data is used to reconstruct pointwise distributions of the dependent variables. The reconstruction errors are evaluated on triangular meshes. The implementation of the algorithm is unique in that three reconstructions are performed for each cell face in the domain. Two of the reconstructions are used to evaluate the inviscid fluxes and correspond to the right and left interface states needed for the solution of a Riemann problem. The third reconstruction is used to evaluate the viscous fluxes. The gradient terms that appear in the viscous fluxes are formed by simply differentiating the polynomial. By selecting the appropriate cell control volumes, centered, upwind and upwind-biased stencils are possible. Numerical calculations in two dimensions include solutions to elliptic boundary value problems, Ringleb's flow, an inviscid shock reflection, a flat plate boundary layer, and a shock induced separation over a flat plate. Three dimensional results include the ONERA M6 wing. All of the unstructured grids were generated using an advancing front mesh generation procedure. Modifications to the three dimensional grid generator were necessary to discretize the surface grids for bodies with high curvature. In addition, mesh refinement algorithms were implemented to improve the surface grid integrity. Examples include a Glasair fuselage, High Speed Civil Transport, and the ONERA M6 wing. The role of reconstruction as applied to adaptive remeshing is discussed and a new first order error

  1. New figure-fracturing algorithm for high-quality variable-shaped e-beam exposure data generation

    NASA Astrophysics Data System (ADS)

    Nakao, Hiroomi; Moriizumi, Koichi; Kamiyama, Kinya; Terai, Masayuki; Miwa, Hisaharu

    1996-07-01

    We present a new figure fracturing algorithm that partitions each polygon in layout design data into trapezoids for variable-shaped e-beam (EB) exposure data generation. In order to improve the dimensional accuracy of fabricated mask patterns created using the figure fracturing result, our algorithm has two new effective functions, one for suppressing narrow-figure generation and the other for suppressing the partitioning of critical parts. Furthermore, using a new graph-based approach, our algorithm efficiently chooses from all the possible partitioning lines an appropriate set of lines by which optimal figure fracturing is performed. The application results show that the algorithm produces high-quality results in a reasonable processing time.

  2. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    NASA Technical Reports Server (NTRS)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  3. High concordance of gene expression profiling-correlated immunohistochemistry algorithms in diffuse large B-cell lymphoma, not otherwise specified.

    PubMed

    Hwang, Hee Sang; Park, Chan-Sik; Yoon, Dok Hyun; Suh, Cheolwon; Huh, Jooryung

    2014-08-01

    Diffuse large B-cell lymphoma (DLBCL) is classified into prognostically distinct germinal center B-cell (GCB) and activated B-cell subtypes by gene expression profiling (GEP). Recent reports suggest the role of GEP subtypes in targeted therapy. Immunohistochemistry (IHC) algorithms have been proposed as surrogates of GEP, but their utility remains controversial. Using microarray, we examined the concordance of 4 GEP-correlated and 2 non-GEP-correlated IHC algorithms in 381 DLBCLs, not otherwise specified. Subtypes and variants of DLBCL were excluded to minimize the possible confounding effect on prognosis and phenotype. Survival was analyzed in 138 cyclophosphamide, adriamycin, vincristine, and prednisone (CHOP)-treated and 147 rituximab plus CHOP (R-CHOP)-treated patients. Of the GEP-correlated algorithms, high concordance was observed among Hans, Choi, and Visco-Young algorithms (total concordance, 87.1%; κ score: 0.726 to 0.889), whereas Tally algorithm exhibited slightly lower concordance (total concordance 77.4%; κ score: 0.502 to 0.643). Two non-GEP-correlated algorithms (Muris and Nyman) exhibited poor concordance. Compared with the Western data, incidence of the non-GCB subtype was higher in all algorithms. Univariate analysis showed prognostic significance for Hans, Choi, and Visco-Young algorithms and BCL6, GCET1, LMO2, and BCL2 in CHOP-treated patients. On multivariate analysis, Hans algorithm retained its prognostic significance. By contrast, neither the algorithms nor individual antigens predicted survival in R-CHOP treatment. The high concordance among GEP-correlated algorithms suggests their usefulness as reliable discriminators of molecular subtype in DLBCL, not otherwise specified. Our study also indicates that prognostic significance of IHC algorithms may be limited in R-CHOP-treated Asian patients because of the predominance of the non-GCB type.

  4. Explicit high-order noncanonical symplectic algorithms for ideal two-fluid systems

    NASA Astrophysics Data System (ADS)

    Xiao, Jianyuan; Qin, Hong; Morrison, Philip J.; Liu, Jian; Yu, Zhi; Zhang, Ruili; He, Yang

    2016-11-01

    An explicit high-order noncanonical symplectic algorithm for ideal two-fluid systems is developed. The fluid is discretized as particles in the Lagrangian description, while the electromagnetic fields and internal energy are treated as discrete differential form fields on a fixed mesh. With the assistance of Whitney interpolating forms [H. Whitney, Geometric Integration Theory (Princeton University Press, 1957); M. Desbrun et al., Discrete Differential Geometry (Springer, 2008); J. Xiao et al., Phys. Plasmas 22, 112504 (2015)], this scheme preserves the gauge symmetry of the electromagnetic field, and the pressure field is naturally derived from the discrete internal energy. The whole system is solved using the Hamiltonian splitting method discovered by He et al. [Phys. Plasmas 22, 124503 (2015)], which has been successfully adopted in constructing symplectic particle-in-cell schemes [J. Xiao et al., Phys. Plasmas 22, 112504 (2015)]. Because of its structure-preserving and explicit nature, this algorithm is especially suitable for large-scale simulations of physics problems that are multi-scale and require long-term fidelity and accuracy. The algorithm is verified via two tests: studies of the dispersion relation of waves in a two-fluid plasma system and the oscillating two-stream instability.

  5. MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm based on ICPF

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Xu, Shiyou; Chen, Zengping; Yuan, Bin

    2014-12-01

    In this paper, we present a detailed analysis of the performance degradation of inverse synthetic aperture radar (ISAR) imagery obtained with the polar format algorithm (PFA) when the rotation center is inaccurate, and a novel algorithm is developed to estimate the rotation center of ISAR targets to overcome the degradation. In real ISAR scenarios, the true rotation center usually does not coincide with the gravity center of the high-resolution range profile (HRRP), because of the data-driven translational motion compensation. With imprecise knowledge of the rotation center, the PFA image suffers from model errors and severe blurring in the cross-range direction. To tackle this problem, an improved PFA based on the integrated cubic phase function (ICPF) is proposed. In this method, the rotation center in the slant range is first estimated by the ICPF, and the signal is shifted accordingly. Finally, the standard PFA can be carried out straightforwardly. With the proposed method, wide-angle ISAR imagery of non-cooperative targets can be achieved by PFA with improved focus quality. Simulation and real-data experiments confirm the effectiveness of the proposed method.

  6. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementations of the Smith-Waterman algorithm on the different hardware architectures, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and yields up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
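
    For orientation, the kernel whose database-scale execution is being distributed here is the classic Smith-Waterman local-alignment recurrence. The following is a plain, unoptimized sketch of that kernel (linear gap penalty, illustrative scoring parameters), not the accelerated HHeterSW code.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score between sequences a and b (linear gap penalty)."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,                    # start a new local alignment
                          H[i - 1, j - 1] + s,  # match / mismatch
                          H[i - 1, j] + gap,    # gap in b
                          H[i, j - 1] + gap)    # gap in a
            best = max(best, H[i, j])
    return best

# A database search applies this kernel to every database sequence and keeps
# the top-scoring hits; the splitting described above distributes the
# database chunks across CPU, GPU and coprocessor implementations.
print(smith_waterman_score("ACACACTA", "AGCACACA"))
```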

  7. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    PubMed

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.
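
    The least-squares resolution step described above can be illustrated with a toy nonnegative fit of an observed isotope cluster against theoretical isotope distributions. The library values below are invented for the example; a real workflow would generate them from elemental compositions, as the abstract describes.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical theoretical isotope distributions (columns) of two candidate
# lipid species, sampled on the same m/z grid as the observed spectrum.
library = np.array([
    [0.60, 0.00],
    [0.25, 0.55],
    [0.10, 0.30],
    [0.05, 0.10],
    [0.00, 0.05],
])
observed = np.array([0.62, 0.84, 0.42, 0.16, 0.05])   # mixed, noisy cluster

# Least-squares resolution with a nonnegativity constraint on abundances
abundances, residual = nnls(library, observed)
print("estimated abundances:", abundances, "residual:", residual)
```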

  8. Micro-channel-based high specific power lithium target

    NASA Astrophysics Data System (ADS)

    Mastinu, P.; Martín-Hernández, G.; Praena, J.; Gramegna, F.; Prete, G.; Agostini, P.; Aiello, A.; Phoenix, B.

    2016-11-01

    A micro-channel-based heat sink has been produced and tested. The device has been developed to be used as a lithium target for the LENOS (Legnaro Neutron Source) facility and for radioisotope production. Nevertheless, applications of such a device span many areas: cooling of electronic devices, diode laser arrays, automotive applications, etc. The target has been tested using a proton beam of 2.8 MeV energy, delivering total power shots from 100 W to 1500 W with beam spots varying from 5 mm2 to 19 mm2. Since the target has been designed to be used with a thin deposit of lithium, and since lithium is a low-melting-point material, we have measured that, for such an application, a specific power of about 3 kW/cm2 can be delivered to the target while keeping the maximum surface temperature below 150 °C.

  9. Specific Abilities May Increment Psychometric g for High Ability Populations

    DTIC Science & Technology

    2016-04-14

    factoring of cognitive ability batteries yields primary group factors that are highly g-loaded (Carroll, 1993). Using military data, Ree and Earles... Carroll, J. B. (1993). Human Cognitive Abilities. New York: Cambridge University Press. Detterman, D. K., Daniel, M. H. (1989). Correlations of

  10. Automated SNP genotype clustering algorithm to improve data completeness in high-throughput SNP genotyping datasets from custom arrays.

    PubMed

    Smith, Edward M; Littrell, Jack; Olivier, Michael

    2007-12-01

    High-throughput SNP genotyping platforms use automated genotype calling algorithms to assign genotypes. While these algorithms work efficiently for individual platforms, they are not compatible with other platforms, and have individual biases that result in missed genotype calls. Here we present data on the use of a second complementary SNP genotype clustering algorithm. The algorithm was originally designed for individual fluorescent SNP genotyping assays, and has been optimized to permit the clustering of large datasets generated from custom-designed Affymetrix SNP panels. In an analysis of data from a 3K array genotyped on 1,560 samples, the additional analysis increased the overall number of genotypes by over 45,000, significantly improving the completeness of the experimental data. This analysis suggests that the use of multiple genotype calling algorithms may be advisable in high-throughput SNP genotyping experiments. The software is written in Perl and is available from the corresponding author.
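
    The abstract does not spell out the clustering method, so the sketch below uses a generic stand-in: per-SNP two-channel allele intensities are transformed to a contrast/strength representation and grouped into three genotype clusters with k-means. Function names, the transform, and all parameters are assumptions made for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def call_genotypes(allele_a, allele_b, n_clusters=3, random_state=0):
    """Cluster two-channel SNP intensities into AA / AB / BB genotype groups."""
    a = np.asarray(allele_a, float)
    b = np.asarray(allele_b, float)
    contrast = (a - b) / (a + b + 1e-9)       # allelic contrast
    strength = np.log1p(a + b)                # overall signal strength
    X = np.column_stack([contrast, strength])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(X)
    # Order clusters by mean contrast so 0 -> BB, 1 -> AB, 2 -> AA
    order = np.argsort([contrast[labels == k].mean() for k in range(n_clusters)])
    remap = {old: new for new, old in enumerate(order)}
    return np.array([remap[l] for l in labels])

rng = np.random.default_rng(0)
a = np.r_[rng.normal(900, 50, 40), rng.normal(450, 50, 40), rng.normal(80, 20, 40)]
b = np.r_[rng.normal(80, 20, 40), rng.normal(450, 50, 40), rng.normal(900, 50, 40)]
print(np.bincount(call_genotypes(a, b)))      # three clusters of 40 samples each
```

    A production pipeline would add the quality filters, confidence scores, and handling of missing clusters that the paper's algorithm provides.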

  11. High Specific Energy Pulsed Electric Discharge Laser Research.

    DTIC Science & Technology

    1975-12-01

    drop out excess water, filtered, dried, filtered again, and then pumped up to the storage bottle pressure (Fig. 47). At the exit of the high...pressure pump, an oil filter was used to remove any oil that may have been introduced by the compressor. Bottles were pumped up to 2000 psig...Lowder, R. S., "Air-Combustion Product N2-CO2 Electric Laser," J. Appl. Phys. Lett. 26, 373 (1975). 5. Miller, D. J. and Millikan, R. C.

  12. High-quality image magnification applying Gerchberg-Papoulis iterative algorithm with discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Shinbori, Eiji; Takagi, Mikio

    1992-11-01

    A new image magnification method, called 'IM-GPDCT' (image magnification applying the Gerchberg-Papoulis (GP) iterative algorithm with discrete cosine transform (DCT)), is described and its performance evaluated. This method markedly improves the image quality of a magnified image using a concept which restores the spatial high frequencies that are conventionally lost due to the use of a low-pass filter. These frequencies are restored using two known constraints applied during iterative DCT: (1) correct information in a passband is known and (2) the spatial extent of an image is finite. Simulation results show that the IM-GPDCT outperforms three conventional interpolation methods from both a restoration error and an image quality standpoint.
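
    A minimal sketch of the alternating-constraint idea follows. It is not the IM-GPDCT implementation: the DCT rescaling factor is an assumption, and the paper's finite-extent constraint is simplified here to clipping the magnified image to a valid intensity range.

```python
import numpy as np
from scipy.fft import dctn, idctn

def gp_dct_magnify(low_res, factor=2, n_iter=50):
    """Gerchberg-Papoulis-style magnification sketch with DCT constraints."""
    h, w = low_res.shape
    H, W = factor * h, factor * w
    # Constraint 1: the low-frequency (passband) DCT coefficients of the
    # magnified image are fixed by the DCT of the low-resolution input.
    passband = dctn(low_res, norm='ortho') * factor   # rescaling is an assumption
    coeffs = np.zeros((H, W))
    coeffs[:h, :w] = passband
    img = idctn(coeffs, norm='ortho')
    for _ in range(n_iter):
        # Constraint 2 (simplified): enforce a valid spatial-domain range
        img = np.clip(img, 0.0, 1.0)
        coeffs = dctn(img, norm='ortho')
        coeffs[:h, :w] = passband                     # restore known passband
        img = idctn(coeffs, norm='ortho')
    return img

low = np.outer(np.hanning(32), np.hanning(32))        # toy low-res image in [0, 1]
print(gp_dct_magnify(low).shape)                       # (64, 64)
```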

  13. An efficient algorithm for some highly nonlinear fractional PDEs in mathematical physics.

    PubMed

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs) and subsequently Reduced Differential Transform Method (RDTM) is applied on the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by making use of inverse transformation which yields it in terms of original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature.

  14. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs) and subsequently Reduced Differential Transform Method (RDTM) is applied on the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by making use of inverse transformation which yields it in terms of original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804

  15. Range-Specific High-resolution Mesoscale Model Setup

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2013-01-01

    This report summarizes the findings from an AMU task to determine the best model configuration for operational use at the ER and WFF to best predict winds, precipitation, and temperature. The AMU ran test cases in the warm and cool seasons at the ER and for the spring and fall seasons at WFF. For both the ER and WFF, the ARW core outperformed the NMM core. Results for the ER indicate that the Lin microphysical scheme and the YSU PBL scheme are the optimal model configuration for the ER. This combination consistently produced the best surface and upper air forecasts, while performing fairly well for the precipitation forecasts. Both the Ferrier and Lin microphysical schemes in combination with the YSU PBL scheme performed well for WFF in the spring and fall seasons. The AMU has been tasked with a follow-on modeling effort to recommend a local DA and numerical forecast model design optimized for both the ER and WFF to support space launch activities. The AMU will determine the best software and type of assimilation to use, as well as determine the best grid resolution for the initialization based on spatial and temporal availability of data and the wall clock run-time of the initialization. The AMU will transition from the WRF EMS to NU-WRF, a NASA-specific version of the WRF that takes advantage of unique NASA software and datasets.

  16. A truncated Levenberg-Marquardt algorithm for the calibration of highly parameterized nonlinear models

    SciTech Connect

    Finsterle, S.; Kowalsky, M.B.

    2010-10-15

    We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
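
    The central update of such a scheme can be sketched compactly: truncate the SVD of the Jacobian, then form a damped (Levenberg-Marquardt) step in the retained subspace. The truncation ratio and damping value below are illustrative, and the sketch omits the re-evaluation of the truncation limit and the sensitivity-matrix economies described in the abstract.

```python
import numpy as np

def truncated_lm_step(jacobian, residual, lam=1e-2, trunc_ratio=1e-3):
    """One truncated-SVD Levenberg-Marquardt update (illustrative sketch).

    Singular values below trunc_ratio * s_max are assigned to the calibration
    null space and excluded; the damping lam keeps steps short along poorly
    constrained directions that are retained.
    """
    U, s, Vt = np.linalg.svd(jacobian, full_matrices=False)
    keep = s >= trunc_ratio * s[0]                  # truncation limit
    Uk, sk, Vk = U[:, keep], s[keep], Vt[keep].T
    gains = sk / (sk**2 + lam)                      # damped least-squares gains
    return Vk @ (gains * (Uk.T @ residual))

# Toy example: an overdetermined, poorly conditioned Jacobian
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 5)) @ np.diag([1.0, 0.5, 0.1, 1e-5, 1e-8])
r = rng.normal(size=20)
print(truncated_lm_step(J, r))
```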

  17. Surface contribution to high-order aberrations using the Aldis theorem and Andersen's algorithms

    NASA Astrophysics Data System (ADS)

    Ortiz-Estardante, A.; Cornejo-Rodriguez, Alejandro

    1990-07-01

    Formulae and computer programs were developed for the surface contributions to high-order aberration coefficients, using the Aldis theorem and Andersen's algorithms for a symmetrical optical system. Starting from the algorithms developed by T. B. Andersen, which allow the high-order aberration coefficients of an optical system to be calculated, a set of equations was obtained for the contribution of each surface of a centered optical system to those coefficients by combining Andersen's equations with the so-called Aldis theorem. The case of an object at infinity has been completed, and the case of an object at a finite distance has more recently also been finished; the equations have been programmed for both situations. Some typical optical system designs are presented, and advantages and disadvantages of the developed formulae and method are discussed. Andersen's algorithm has a compact notation and structure that is well suited to computers; using Andersen's results together with the Aldis theorem, a set of equations was derived and programmed for the surface contributions of a centered optical system to high-order aberrations. References: 1. T. B. Andersen, Appl. Opt., 3800 (1980). 2. A. Cox, A System of Optical Design, Focal Press, 1964.

  18. Algorithms for Low-Cost High Accuracy Geomagnetic Measurements in LEO

    NASA Astrophysics Data System (ADS)

    Beach, T. L.; Zesta, E.; Allen, L.; Chepko, A.; Bonalsky, T.; Wendel, D. E.; Clavier, O.

    2013-12-01

    Geomagnetic field measurements are a fundamental, key parameter measurement for any space weather application, particularly for tracking the electromagnetic energy input in the Ionosphere-Thermosphere system and for high latitude dynamics governed by the large-scale field-aligned currents. The full characterization of the Magnetosphere-Ionosphere-Thermosphere coupled system necessitates measurements with higher spatial/temporal resolution and from multiple locations simultaneously. This becomes extremely challenging in the current state of shrinking budgets. Traditionally, including a science-grade magnetometer in a mission necessitates very costly integration and design (sensor on long boom) and imposes magnetic cleanliness restrictions on all components of the bus and payload. This work presents an innovative algorithm approach that enables high quality magnetic field measurements by one or more high-quality magnetometers mounted on the spacecraft without booms. The algorithm estimates the background field using multiple magnetometers and current telemetry on board a spacecraft. Results of a hardware-in-the-loop simulation showed an order of magnitude reduction in the magnetic effects of spacecraft onboard time-varying currents--from 300 nT to an average residual of 15 nT.
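
    One way to picture the estimation step is as a regression of the measured field against the telemetered currents. The single-axis, single-sensor sketch below is only an illustration of that idea: the flight algorithm fuses multiple magnetometers, and the polynomial trend used here to absorb the slowly varying ambient field is an assumption of the example, not part of the published method.

```python
import numpy as np

def estimate_couplings(measured, currents, poly_order=2):
    """Estimate per-load coupling coefficients (nT per A) for one field axis."""
    T, N = currents.shape
    t = np.linspace(-1, 1, T)
    trend = np.vander(t, poly_order + 1)       # slow ambient variation (assumed)
    design = np.hstack([currents, trend])
    coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
    return coeffs[:N]

def clean_field(measured, currents, couplings):
    """Subtract the modeled spacecraft-generated field from the measurement."""
    return measured - currents @ couplings

rng = np.random.default_rng(4)
T = 500
currents = rng.uniform(0, 2, size=(T, 3))                    # three switching loads
ambient = 40000 + 50 * np.sin(np.linspace(0, 2 * np.pi, T))  # nT, slowly varying
true_couplings = np.array([120.0, -60.0, 30.0])              # nT per A
measured = ambient + currents @ true_couplings + rng.normal(0, 5, T)
k = estimate_couplings(measured, currents)
print(k)                                                     # close to the true couplings
print(np.std(clean_field(measured, currents, k) - ambient))  # residual, nT
```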

  19. High Specific Power Motors in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Jansen, Ralph H.; Trudell, Jeffrey J.

    2007-01-01

    A switched reluctance motor has been operated in liquid nitrogen (LN2) with a power density as high as that reported for any motor or generator. The high performance stems from the low resistivity of Cu at LN2 temperature and from the geometry of the windings, the combination of which permits steady-state rms current density up to 7000 A/sq cm, about 10 times that possible in coils cooled by natural convection at room temperature. The Joule heating in the coils is conducted to the end turns for rejection to the LN2 bath. Minimal heat rejection occurs in the motor slots, preserving that region for conductor. In the end turns, the conductor layers are spaced to form a heat-exchanger-like structure that permits nucleate boiling over a large surface area. Although tests were performed in LN2 for convenience, this motor was designed as a prototype for use with liquid hydrogen (LH2) as the coolant. End-cooled coils would perform even better in LH2 because of further increases in copper electrical and thermal conductivities. Thermal analyses comparing LN2 and LH2 cooling are presented verifying that end-cooled coils in LH2 could be either much longer or could operate at higher current density without thermal runaway than in LN2.


  1. Numerical algorithms for highly oscillatory dynamic system based on commutator-free method

    NASA Astrophysics Data System (ADS)

    Li, Wencheng; Deng, Zichen; Zhang, Suying

    2007-04-01

    In the present paper, an efficient improved modified Magnus integrator algorithm based on a commutator-free method is proposed for second-order dynamic systems with time-dependent high frequencies. Firstly, the second-order dynamic systems are transformed to a new frame of reference by introducing a new variable, so that the highly oscillatory behaviour is inherited by the entries of the transformed system. Then the modified Magnus integrator method based on local linearization is appropriately designed for solving the above new form, and some optimized strategies for reducing the number of function evaluations and matrix operations are also suggested. Finally, several numerical examples of highly oscillatory dynamic systems, such as the Airy, Bessel, and Mathieu equations, are presented to demonstrate the validity and effectiveness of the proposed method.
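
    For readers unfamiliar with Magnus-type integration, the simplest member of the family is the exponential-midpoint (one-term Magnus) step shown below, applied to y'' + omega(t)^2 y = 0 rewritten as a first-order system. This is a baseline illustration only; it does not include the commutator-free modifications or the local-linearization strategy of the paper, and the Airy-type test problem is chosen purely as an example.

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(omega, y0, v0, t0, t1, n_steps):
    """Second-order Magnus (exponential midpoint) integrator for
    y'' + omega(t)^2 y = 0, written as Y' = A(t) Y with Y = [y, y']."""
    h = (t1 - t0) / n_steps
    Y = np.array([y0, v0], dtype=float)
    t = t0
    for _ in range(n_steps):
        w = omega(t + h / 2.0)                    # frequency at the midpoint
        A = np.array([[0.0, 1.0], [-w * w, 0.0]])
        Y = expm(h * A) @ Y                       # one-term Magnus step
        t += h
    return Y

# Airy-type example: y'' + t y = 0, i.e. omega(t) = sqrt(t) for t > 0
omega = lambda t: np.sqrt(t)
print(magnus_midpoint(omega, y0=1.0, v0=0.0, t0=1.0, t1=50.0, n_steps=2000))
```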

  2. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  3. Preparation of tritium-labeled tetrahydropteroylpolyglutamates of high specific radioactivity

    SciTech Connect

    Paquin, J.; Baugh, C.M.; MacKenzie, R.E.

    1985-04-01

    Tritium-labeled (6S)-tetrahydropteroylpolyglutamates of high radiospecific activity were prepared from the corresponding pteroylpolyglutamates. Malic enzyme and D,L-(2-³H)malate were used as a generating system to produce (4A-³H)NADPH, which was coupled to the dihydrofolate reductase-catalyzed reduction of chemically prepared dihydropteroylpolyglutamate derivatives. Passage of the reaction mixtures through a column of immobilized boronate effectively removed NADPH, and the tetrahydropteroylpolyglutamates were subsequently purified by chromatography on DEAE-cellulose. Overall yields of the (6S)-tetrahydro derivatives were 18-48% and the radiospecific activities were 3-4.5 mCi/μmol.

  4. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R_rs(lambda), where R_rs(lambda) is defined as the water-leaving radiance, L_w(lambda), divided by the downwelling irradiance just above the sea surface, E_d(lambda, 0+). The R_rs(lambda) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a_phi(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a_g(400). The R_rs model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R_rs(lambda_i) values from the MODIS data processing system are placed into the model, the model is inverted, and a_phi(675), a_g(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi

  5. Defining and evaluating classification algorithm for high-dimensional data based on latent topics.

    PubMed

    Luo, Le; Li, Li

    2014-01-01

    Automatic text categorization is one of the key techniques in information retrieval and the data mining field. The classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate a reduced-dimensional topic representation that serves as features in the vector space model (VSM). It reduces the number of features dramatically while keeping the necessary semantic information. The Support Vector Machine (SVM) is then employed to classify the data based on the generated features. We evaluate the algorithm on the 20 Newsgroups and Reuters-21578 datasets. The experimental results show that the classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure. Further, it can achieve this within a much shorter time-frame. Our process improves greatly upon the previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications.
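
    The pipeline can be expressed very compactly with scikit-learn. The sketch below uses a four-document toy corpus in place of 20 Newsgroups / Reuters-21578 and illustrative hyperparameters; it shows the LDA-features-into-SVM idea rather than the paper's exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for 20 Newsgroups / Reuters-21578
docs = ["the rocket launch was delayed by weather",
        "nasa plans a new mission to orbit the moon",
        "the team won the game in the final minute",
        "the striker scored twice and the match ended"]
labels = ["space", "space", "sport", "sport"]

# LDA reduces the bag-of-words representation to a small number of topic
# weights, which then serve as features for a linear SVM.
clf = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    LinearSVC(),
)
clf.fit(docs, labels)
print(clf.predict(["the astronauts reached orbit safely"]))
```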

  6. The Optimized Block-Regression Fusion Algorithm for Pansharpening of Very High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, J. X.; Yang, J. H.; Reinartz, P.

    2016-06-01

    Pan-sharpening of very high resolution remotely sensed imagery needs to enhance spatial details while preserving spectral characteristics, and to allow the sharpened results to be adjusted so as to realize different emphases between the two abilities. In order to meet these requirements, this paper is aimed at providing an innovative solution. The block-regression-based algorithm (BR), which was previously presented for the fusion of SAR and optical imagery, is first applied to sharpen very high resolution satellite imagery, and the important parameter for adjustment of the fusion result, i.e., the block size, is optimized according to two experiments on Worldview-2 and QuickBird datasets, in which the optimal block size is selected through quantitative comparison of the fusion results for different block sizes. Compared to five fusion algorithms (i.e., PC, CN, AWT, Ehlers, BDF) in terms of fusion effects by means of quantitative analysis, BR is reliable for different data sources and can maximize the enhancement of spatial details at the expense of a minimum spectral distortion.

  7. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully-integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform with the conventions and prescriptions given in the International Tables of Crystallography (ITC). A standard cell choice with standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions on the input cell orientation, origin, or reduction and has been integrated in the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and examination of the algorithms scaling with cell size and symmetry is also reported.

  8. Defining and Evaluating Classification Algorithm for High-Dimensional Data Based on Latent Topics

    PubMed Central

    Luo, Le; Li, Li

    2014-01-01

    Automatic text categorization is one of the key techniques in information retrieval and the data mining field. The classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate a reduced-dimensional topic representation that serves as features in the vector space model (VSM). It reduces the number of features dramatically while keeping the necessary semantic information. The Support Vector Machine (SVM) is then employed to classify the data based on the generated features. We evaluate the algorithm on the 20 Newsgroups and Reuters-21578 datasets. The experimental results show that the classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure. Further, it can achieve this within a much shorter time-frame. Our process improves greatly upon the previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications. PMID:24416136

  9. Algorithm research of high-precision optical interferometric phase demodulation based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhi, Chunxiao; Sun, Jinghua

    2012-11-01

    An optical interferometric phase demodulation algorithm based on the principle of the phase generated carrier (PGC) is presented; it enables high-precision signal demodulation for optical interference measurements and can be applied to optical fiber displacement and vibration sensors. The modulated photoelectric detection signal is sampled at eight equally spaced intervals per carrier period (8x frequency-multiplied sampling). From these samples, the phase modulation depth and the phase error are calculated and fed back through a control loop to maintain the optimum working point; the samples are also used to compute the numerical value of the phase with high precision. The algorithm replaces the correlation filtering and other complex calculation processes of traditional PGC digital demodulation with additions and subtractions, making full use of the high-speed, parallel data processing of the FPGA. Besides speed, the FPGA implementation also ensures phase demodulation precision and a wide dynamic range, and completes data access within a single clock cycle.

  10. Study on a digital pulse processing algorithm based on template-matching for high-throughput spectroscopy

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Yang, Haori

    2015-06-01

    A major challenge in utilizing spectroscopy techniques for nuclear safeguards is to perform high-resolution measurements at an ultra-high throughput rate. Traditionally, piled-up pulses are rejected to ensure good energy resolution. To improve throughput rate, high-pass filters are normally implemented to shorten pulses. However, this reduces signal-to-noise ratio and causes degradation in energy resolution. In this work, a pulse pile-up recovery algorithm based on template-matching was proved to be an effective approach to achieve high-throughput gamma ray spectroscopy. First, a discussion of the algorithm was given in detail. Second, the algorithm was then successfully utilized to process simulated piled-up pulses from a scintillator detector. Third, the algorithm was implemented to analyze high rate data from a NaI detector, a silicon drift detector and a HPGe detector. The promising results demonstrated the capability of this algorithm to achieve high-throughput rate without significant sacrifice in energy resolution. The performance of the template-matching algorithm was also compared with traditional shaping methods.
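
    The recovery step can be pictured as fitting time-shifted copies of a known pulse template to the piled-up trace. The sketch below assumes the arrival times are already known and recovers only the amplitudes by least squares; the paper's template-matching procedure, including how arrivals are located, is not reproduced here.

```python
import numpy as np

def fit_piled_up_pulses(trace, template, arrival_times):
    """Recover individual pulse amplitudes from a piled-up trace by
    least-squares fitting of time-shifted copies of a known template."""
    n = len(trace)
    design = np.zeros((n, len(arrival_times)))
    for k, t0 in enumerate(arrival_times):
        length = min(len(template), n - t0)
        design[t0:t0 + length, k] = template[:length]   # shifted template
    amplitudes, *_ = np.linalg.lstsq(design, trace, rcond=None)
    return amplitudes

# Toy example: two overlapping pulses plus noise
template = np.exp(-np.arange(40) / 8.0) * (1 - np.exp(-np.arange(40) / 2.0))
trace = np.zeros(120)
trace[20:60] += 1.0 * template
trace[35:75] += 0.6 * template          # piled up on the first pulse
trace += np.random.default_rng(1).normal(0, 0.01, 120)
print(fit_piled_up_pulses(trace, template, [20, 35]))   # roughly [1.0, 0.6]
```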

  11. pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data.

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-07-01

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested.

  12. Reprint of "pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data".

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-11-03

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics.

  13. An optimal point spread function subtraction algorithm for high-contrast imaging: a demonstration with angular differential imaging

    SciTech Connect

    Lafreniere, D; Marois, C; Doyon, R; Artigau, E; Nadeau, D

    2006-09-19

    Direct imaging of exoplanets is limited by bright quasi-static speckles in the point spread function (PSF) of the central star. This limitation can be reduced by subtraction of reference PSF images. We have developed an algorithm to construct an optimal reference PSF image from an arbitrary set of reference images. This image is built as a linear combination of all available images and is optimized independently inside multiple subsections of the image to ensure that the absolute minimum residual noise is achieved within each subsection. The algorithm developed is completely general and can be used with many high-contrast imaging observing strategies, such as angular differential imaging (ADI), roll subtraction, spectral differential imaging, reference star observations, etc. The performance of the algorithm is demonstrated for ADI data. It is shown that for this type of data the new algorithm provides a gain in sensitivity of up to a factor of 3 at small separation over the algorithm previously used.
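
    Restricted to a single image section, the optimization above reduces to an ordinary least-squares problem: find the combination of reference images that best reproduces the science image. The sketch below shows only that single-section step on synthetic data; the full algorithm repeats it independently in many subsections and handles the bookkeeping needed for ADI, roll subtraction, and the other strategies listed.

```python
import numpy as np

def optimal_reference_psf(target, references):
    """Least-squares-optimal linear combination of reference images for one
    image section, and the residual after subtracting it from the target."""
    A = np.column_stack([r.ravel() for r in references])
    coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    reference = (A @ coeffs).reshape(target.shape)
    return target - reference, coeffs

rng = np.random.default_rng(2)
speckles = rng.normal(size=(64, 64))                      # quasi-static speckle field
refs = [speckles + rng.normal(0, 0.05, (64, 64)) for _ in range(5)]
science = speckles + rng.normal(0, 0.05, (64, 64))        # no planet in this toy case
residual, c = optimal_reference_psf(science, refs)
print(residual.std(), c)
```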

  14. Development of a decision tree to classify the most accurate tissue-specific tissue to plasma partition coefficient algorithm for a given compound.

    PubMed

    Yun, Yejin Esther; Cotton, Cecilia A; Edginton, Andrea N

    2014-02-01

    Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism and are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), key PBPK model parameters, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. The experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. In this study, a novel decision-tree-based Kp prediction method was developed using six previously published algorithms. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a certain physicochemical space. Three versions of tissue-specific classifiers were developed and were dependent on the necessary inputs. The use of the classifier resulted in a better prediction accuracy than that of any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. Because built-in estimation equations for those input parameters are not necessarily available, this Kp prediction tool will provide Kp prediction when only limited input parameters are available. The presented innovative method will improve tissue distribution prediction accuracy, thus enhancing the confidence in PBPK modeling outputs.

  15. Cosmo-SkyMed Di Seconda Generazione Innovative Algorithms and High Performance SAR Data Processors

    NASA Astrophysics Data System (ADS)

    Mari, S.; Porfilio, M.; Valentini, G.; Serva, S.; Fiorentino, C. A. M.

    2016-08-01

    In the frame of the COSMO-SkyMed di Seconda Generazione (CSG) programme, extensive research activities have been conducted on SAR data processing, with particular emphasis on high resolution processors, wide-field product noise and coregistration algorithms. As regards high resolution, it is essential to create a model for the management of all those elements that are usually considered negligible but alter the target phase response when it is "integrated" for several seconds. Concerning SAR wide-field product noise removal, one of the major problems is the ability to compensate all the phenomena that affect the received signal intensity. Research activities are aimed at developing adaptive-iterative techniques for the compensation of inaccuracies in the knowledge of the radar antenna pointing, to achieve compensation on the order of thousandths of a degree. Moreover, several modifications of the image coregistration algorithm have been studied, aimed at improving performance and reducing the computational effort.

  16. High effective algorithm of the detection and identification of substance using the noisy reflected THz pulse

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.

    2015-08-01

    The principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a high relative humidity of more than 50%) using neutral substances: a thick paper bag, paper napkins, and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even if the THz signals were measured in laboratory conditions (at a distance of 30-40 cm from the receiver and at a low relative humidity of less than 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is reliable to implement in practice, especially for security applications and non-destructive testing.

  17. Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD

    NASA Astrophysics Data System (ADS)

    Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo

    Data Mining often is a computing intensive and time requiring process. For this reason, several Data Mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems has been proposed. In this chapter we first discuss different ways to exploit parallelism in the main Data Mining techniques and algorithms, then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
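
    The expensive linear solve inside each Levenberg-Marquardt iteration, min ||J d - r||^2 + lambda ||d||^2, can itself be carried out in a Krylov subspace; SciPy's LSQR does this when the damping is passed as damp = sqrt(lambda). The sketch below shows only that damped Krylov solve over a sweep of damping parameters (the paper's implementation is in Julia within MADS and additionally recycles the subspace across damping values, which is not reproduced here).

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lm_step_krylov(jacobian, residual, lam):
    """Damped LM step min ||J d - r||^2 + lam ||d||^2 solved with LSQR,
    i.e. in a Krylov subspace, avoiding an explicit factorization of J^T J."""
    result = lsqr(jacobian, residual, damp=np.sqrt(lam), atol=1e-10, btol=1e-10)
    return result[0]

rng = np.random.default_rng(3)
J = rng.normal(size=(500, 200))          # stand-in for a sensitivity matrix
r = rng.normal(size=500)
for lam in (1.0, 0.1, 0.01):             # a sweep of damping parameters
    d = lm_step_krylov(J, r, lam)
    print(lam, np.linalg.norm(d))
```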

  19. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    SciTech Connect

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  20. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  1. Long Term Maturation of Congenital Diaphragmatic Hernia Treatment Results: Toward Development of a Severity-Specific Treatment Algorithm

    PubMed Central

    Kays, David W.; Islam, Saleem; Larson, Shawn D.; Perkins, Joy; Talbert, James L.

    2015-01-01

    Objective To assess the impact of varying approaches to CDH repair timing on survival and need for ECMO when controlled for anatomic and physiologic disease severity in a large consecutive series of CDH patients. Summary Background Data Our publication of 60 consecutive CDH patients in 1999 showed that survival is significantly improved by limiting lung inflation pressures and eliminating hyperventilation. Methods We retrospectively reviewed 268 consecutive CDH patients, combining 208 new patients with the 60 previously reported. Management and ventilator strategy were highly consistent throughout. Varying approaches to surgical timing were applied as the series matured. Results Patients with anatomically less-severe left liver-down CDH had significantly increased need for ECMO if repaired in the first 48 hours, while patients with more-severe left liver-up CDH survived at a higher rate when repair was performed before ECMO. Overall survival of 268 patients was 78%. For those without lethal associated anomalies, survival was 88%. Of these, 99% of left liver-down CDH survived, 91% of right CDH survived, and 76% of left liver-up CDH survived. Conclusions This study shows that patients with anatomically less severe CDH benefit from delayed surgery while patients with anatomically more severe CDH may benefit from a more aggressive surgical approach. These findings show that patients respond differently across the CDH anatomic severity spectrum, and lay the foundation for the development of risk-specific treatment protocols for patients with CDH. PMID:23989050

  2. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
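
    The composition of one-dimensional solvers into a four-dimensional integral can be illustrated with SciPy's nested adaptive quadrature. The integrand and tolerances below are placeholders chosen for the example; the report's actual work modifies GSL solvers and parallelizes the computation, neither of which is shown.

```python
import numpy as np
from scipy.integrate import nquad

# Arbitrary smooth 4-D integrand standing in for the SNS intensity model
def integrand(x, y, z, w):
    return np.exp(-(x**2 + y**2 + z**2 + w**2)) * np.cos(x * y - z * w)

limits = [(-2.0, 2.0)] * 4                      # finite box in all four dimensions
# nquad applies 1-D adaptive quadrature recursively, one dimension per level
value, abs_error = nquad(integrand, limits,
                         opts={"epsabs": 1e-4, "epsrel": 1e-4})
print(value, abs_error)
```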

  3. Highly specific expression of luciferase gene in lungs of naive nude mice directed by prostate-specific antigen promoter

    SciTech Connect

    Li Hongwei; Li Jinzhong; Helm, Gregory A.; Pan Dongfeng . E-mail: Dongfeng_pan@yahoo.com

    2005-09-09

    The PSA promoter has demonstrated utility for tissue-specific toxic gene therapy in prostate cancer models. Characterization of foreign gene overexpression elicited by the PSA promoter in normal animals should help evaluate therapy safety. Here we constructed an adenovirus vector (AdPSA-Luc) containing the firefly luciferase gene under the control of the 5837 bp long prostate-specific antigen promoter. A charge-coupled device video camera was used to non-invasively image expression of firefly luciferase in nude mice on days 3, 7, and 11 after injection of 2 × 10^9 PFU of AdPSA-Luc virus via the tail vein. The result showed highly specific expression of the luciferase gene in the lungs of mice from day 7. The finding indicates potential limitations of suicide gene therapy of prostate cancer based on the selectivity of the PSA promoter. By contrast, it has encouraging implications for further development of PSA promoter-driven vectors to enable gene therapy for pulmonary diseases.

  4. Shift and Mean Algorithm for Functional Imaging with High Spatio-Temporal Resolution

    PubMed Central

    Rama, Sylvain

    2015-01-01

    Understanding neuronal physiology requires recording electrical activity in many small and remote compartments such as dendrites, axons, or dendritic spines. To do so, electrophysiology has long been the tool of choice, as it allows recording of very subtle and fast changes in electrical activity. However, electrophysiological measurements are mostly limited to large neuronal compartments such as the neuronal soma. To overcome these limitations, optical methods have been developed that allow the monitoring of changes in fluorescence of reporter dyes inserted into the neuron, with a spatial resolution theoretically limited only by the dye wavelength and the optical devices. However, the temporal and spatial resolving power of functional fluorescence imaging of live neurons is often limited by a necessary trade-off between image resolution, signal-to-noise ratio (SNR), and speed of acquisition. Here, I propose to use a Super-Resolution Shift and Mean (S&M) algorithm previously used in image computing to improve the SNR, time sampling, and spatial resolution of acquired fluorescent signals. I demonstrate the benefits of this methodology using two examples: voltage imaging of action potentials (APs) in the soma and dendrites of CA3 pyramidal cells and calcium imaging in the dendritic shaft and spines of CA3 pyramidal cells. I show that this algorithm allows recording of a broad area at low speed in order to achieve a high SNR, after which the signal in any small compartment can be picked out and resampled at high speed. This method preserves both the SNR and the temporal resolution of the signal, while acquiring the original images at high spatial resolution. PMID:26635526

  5. The Importance of Specific Skills to High School Social Studies Teachers.

    ERIC Educational Resources Information Center

    Guenther, John

    This study determines those specific social studies skills that high school social studies teachers believe students should have developed as a result of their instruction in a high school social studies program, and differences in the importance attached to specific skills between high school social studies teachers classified as having a…

  6. Synthesis of a high specific activity methyl sulfone tritium isotopologue of fevipiprant (NVP-QAW039).

    PubMed

    Luu, Van T; Goujon, Jean-Yves; Meisterhans, Christian; Frommherz, Matthias; Bauer, Carsten

    2015-05-15

    The synthesis of a triple tritiated isotopologue of the CRTh2 antagonist NVP-QAW039 (fevipiprant) with a specific activity >3 TBq/mmol is described. Key to the high specific activity is the methylation of a bench-stable dimeric disulfide precursor that is reduced in situ to the corresponding thiol monomer and methylated with [(3)H3]MeONos, which itself has a high specific activity. The high specific activity of the tritiated active pharmaceutical ingredient obtained by this build-up approach is discussed in light of the specific activity usually to be expected if hydrogen-tritium exchange methods were applied.

  7. Novel Diagnostic Algorithm for Identification of Mycobacteria Using Genus-Specific Amplification of the 16S-23S rRNA Gene Spacer and Restriction Endonucleases

    PubMed Central

    Roth, Andreas; Reischl, Udo; Streubel, Anna; Naumann, Ludmila; Kroppenstedt, Reiner M.; Habicht, Marion; Fischer, Marga; Mauch, Harald

    2000-01-01

    A novel genus-specific PCR for mycobacteria with simple identification to the species level by restriction fragment length polymorphism (RFLP) was established using the 16S-23S ribosomal RNA gene (rDNA) spacer as a target. Panspecificity of primers was demonstrated on the genus level by testing 811 bacterial strains (122 species in 37 genera from 286 reference strains and 525 clinical isolates). All mycobacterial isolates (678 strains among 48 defined species and 5 indeterminate taxons) were amplified by the new primers. Among nonmycobacterial isolates, only Gordonia terrae was amplified. The RFLP scheme devised involves estimation of variable PCR product sizes together with HaeIII and CfoI restriction analysis. It yielded 58 HaeIII patterns, of which 49 (84%) were unique on the species level. Hence, HaeIII digestion together with CfoI results was sufficient for correct identification of 39 of 54 mycobacterial taxons and one of three or four of seven RFLP genotypes found in Mycobacterium intracellulare and Mycobacterium kansasii, respectively. Following a clearly laid out diagnostic algorithm, the remaining unidentified organisms fell into five clusters of closely related species (i.e., the Mycobacterium avium complex or Mycobacterium chelonae-Mycobacterium abscessus) that were successfully separated using additional enzymes (TaqI, MspI, DdeI, or AvaII). Thus, next to slowly growing mycobacteria, all rapidly growing species studied, including M. abscessus, M. chelonae, Mycobacterium farcinogenes, Mycobacterium fortuitum, Mycobacterium peregrinum, and Mycobacterium senegalense (with a very high 16S rDNA sequence similarity) were correctly identified. A high intraspecies sequence stability and the good discriminative power of patterns indicate that this method is very suitable for rapid and cost-effective identification of a wide variety of mycobacterial species without the need for sequencing. Phylogenetically, spacer sequence data stand in good agreement with 16S r

  8. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    SciTech Connect

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  9. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems

    NASA Astrophysics Data System (ADS)

    Omelyan, I. P.; Mryglod, I. M.; Folk, R.

    2002-08-01

    A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational cost, allow unphysical deviations in the total energy to be reduced by up to a factor of 100 000 with respect to those of the standard fourth-order-based iteration approach.
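
    The central idea of collecting low-order symplectic integrators into higher-order ones via composition can be illustrated compactly. The sketch below is not the paper's force-gradient scheme; it applies the standard Yoshida triple-jump composition to a second-order leapfrog kernel, the simplest instance of building a fourth-order symplectic algorithm by composing lower-order steps. The harmonic-oscillator test is an assumption for demonstration.

```python
import numpy as np

def leapfrog(q, p, dt, force):
    """Second-order velocity-Verlet (leapfrog) step for H = p^2/2 + V(q)."""
    p = p + 0.5 * dt * force(q)
    q = q + dt * p
    p = p + 0.5 * dt * force(q)
    return q, p

# Yoshida triple-jump: a fourth-order symplectic step composed of three
# second-order steps with one negative intermediate sub-step.
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = 1.0 - 2.0 * W1

def yoshida4(q, p, dt, force):
    for w in (W1, W0, W1):
        q, p = leapfrog(q, p, w * dt, force)
    return q, p

# Harmonic-oscillator check: the energy error shrinks roughly 16x when dt
# is halved, consistent with fourth-order behaviour.
force = lambda q: -q
for dt in (0.1, 0.05):
    q, p = 1.0, 0.0
    for _ in range(int(10.0 / dt)):
        q, p = yoshida4(q, p, dt, force)
    print(dt, abs(0.5 * (q**2 + p**2) - 0.5))
```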

  10. Construction of high-order force-gradient algorithms for integration of motion in classical and quantum systems.

    PubMed

    Omelyan, I P; Mryglod, I M; Folk, R

    2002-08-01

    A consequent approach is proposed to construct symplectic force-gradient algorithms of arbitrarily high orders in the time step for precise integration of motion in classical and quantum mechanics simulations. Within this approach the basic algorithms are first derived up to the eighth order by direct decompositions of exponential propagators and further collected using an advanced composition scheme to obtain the algorithms of higher orders. Contrary to the scheme proposed by Chin and Kidwell [Phys. Rev. E 62, 8746 (2000)], where high-order algorithms are introduced by standard iterations of a force-gradient integrator of order four, the present method allows one to reduce the total number of expensive force and force-gradient evaluations to a minimum. At the same time, the precision of the integration increases significantly, especially with increasing order of the generated schemes. The algorithms are tested in molecular dynamics and celestial mechanics simulations. It is shown, in particular, that the efficiency of the advanced fourth-order-based algorithms is better by factors of approximately 5 to 1000 for orders 4 to 12, respectively. The results corresponding to sixth- and eighth-order-based composition schemes are also presented up to the sixteenth order. For orders 14 and 16, such highly precise schemes, at considerably smaller computational cost, allow unphysical deviations in the total energy to be reduced by up to a factor of 100 000 with respect to those of the standard fourth-order-based iteration approach.

  11. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with clear applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters interact with scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built in the MATLAB environment a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewing position on Iterative Closest Point (ICP) alignment and also on deformation tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments

  12. A binned clustering algorithm to detect high-Z material using cosmic muons

    NASA Astrophysics Data System (ADS)

    Thomay, C.; Velthuis, J. J.; Baesso, P.; Cussans, D.; Morris, P. A. W.; Steer, C.; Burns, J.; Quillin, S.; Stapleton, M.

    2013-10-01

    We present a novel approach to the detection of special nuclear material using cosmic rays. Muon Scattering Tomography (MST) is a method for using cosmic muons to scan cargo containers and vehicles for special nuclear material. Cosmic muons are abundant, highly penetrating, not harmful to organic tissue, cannot be screened against, and can easily be detected, which makes them highly suited to cargo scanning. Muons undergo multiple Coulomb scattering when passing through material, and the amount of scattering is roughly proportional to the square of the atomic number Z of the material. By reconstructing incoming and outgoing tracks, we can obtain variables to identify high-Z material. In a real-life application, this has to happen on a timescale of 1 min and thus with small numbers of muons. We have built a detector system using resistive plate chambers (RPCs): 12 layers of RPCs allow for the readout of 6 x and 6 y positions, from which we can reconstruct incoming and outgoing tracks. In this work we detail the performance of an algorithm by which we separate high-Z targets from low-Z background, both for real data from our prototype setup and for MC simulation of a cargo container-sized setup. (c) British Crown Owned Copyright 2013/AWE

  13. Can a partial volume edge effect reduction algorithm improve the repeatability of subject-specific finite element models of femurs obtained from CT data?

    PubMed

    Peleg, Eran; Herblum, Ryan; Beek, Maarten; Joskowicz, Leo; Liebergall, Meir; Mosheiff, Rami; Whyne, Cari

    2014-01-01

    The reliability of patient-specific finite element (FE) modelling is dependent on the ability to provide repeatable analyses. Differences of inter-operator generated grids can produce variability in strain and stress readings at a desired location, which are magnified at the surface of the model as a result of the partial volume edge effects (PVEEs). In this study, a new approach is introduced based on an in-house developed algorithm which adjusts the location of the model's surface nodes to a consistent predefined threshold Hounsfield unit value. Three cadaveric human femora specimens were CT scanned, and surface models were created after a semi-automatic segmentation by three different experienced operators. A FE analysis was conducted for each model, with and without applying the surface-adjustment algorithm (a total of 18 models), implementing identical boundary conditions. Maximum principal strain and stress and spatial coordinates were probed at six equivalent surface nodes from the six generated models for each of the three specimens at locations commonly utilised for experimental strain gauge measurement validation. A Wilcoxon signed-ranks test was conducted to determine inter-operator variability and the impact of the PVEE-adjustment algorithm. The average inter-operator difference in stress values was significantly reduced after applying the adjustment algorithm (before: 3.32 ± 4.35 MPa, after: 1.47 ± 1.77 MPa, p = 0.025). Strain values were found to be less sensitive to inter-operative variability (p = 0.286). In summary, the new approach as presented in this study may provide a means to improve the repeatability of subject-specific FE models of bone obtained from CT data.

  14. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    SciTech Connect

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  15. The Rice coding algorithm achieves high-performance lossless and progressive image compression based on the improving of integer lifting scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and research on image compression with the CDF (2,2) wavelet lifting scheme is presented. Our experiments show that the lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency is improved by 162%. The decoder is about 12.3 times faster than SPIHT's, and its time efficiency is raised by about 148%. Rather than requiring the largest possible number of wavelet transform levels, this algorithm achieves high coding efficiency whenever the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
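
    For readers unfamiliar with the entropy coder named in the title, the following minimal sketch shows plain Golomb-Rice coding of nonnegative integers (quotient in unary, remainder in k binary bits); it is not the paper's modified algorithm or its wavelet lifting front end, and the parameter k >= 1 and the sample data are assumptions.

```python
def rice_encode(values, k):
    """Golomb-Rice encode nonnegative integers with parameter k >= 1:
    quotient v >> k in unary, remainder in k binary bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")            # unary-coded quotient
        bits.append(format(r, f"0{k}b"))      # fixed-width remainder
    return "".join(bits)

def rice_decode(bitstring, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":            # read the unary quotient
            q, i = q + 1, i + 1
        i += 1                                # skip the terminating 0
        r = int(bitstring[i:i + k], 2)
        i += k
        out.append((q << k) | r)
    return out

data = [3, 0, 7, 12, 1, 5]
code = rice_encode(data, k=2)
assert rice_decode(code, k=2, count=len(data)) == data
print(code)
```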

  16. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery has the advantages of high resolution and wide coverage, which is of great value for urban planning, transportation management, travel route choice, and so on. First, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, in order to obtain the optimal threshold for image segmentation, histogram equalization and linear enhancement techniques were applied to the preprocessing results. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. Then, the above two processing results were combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction. Eventually, the extraction results of the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrated that the proposed algorithm has high precision for vehicle information extraction from different high resolution remote sensing images. Among these results, the average fault detection rate was about 5.36%, the average residual rate was about 13.60%, and the average accuracy was approximately 91.26%.
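
    A small sketch of the vegetation/water suppression step described above: NDVI and NDWI are computed from the multispectral bands and thresholded to keep only road/vehicle candidate pixels. The band arrays and threshold values are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def road_candidate_mask(nir, red, green, ndvi_thresh=0.2, ndwi_thresh=0.3):
    """Suppress vegetation and water before road extraction:
    NDVI = (NIR - Red) / (NIR + Red), NDWI = (Green - NIR) / (Green + NIR)."""
    eps = 1e-6                                # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    return (ndvi < ndvi_thresh) & (ndwi < ndwi_thresh)

# Hypothetical reflectance tiles with values in [0, 1].
rng = np.random.default_rng(1)
nir, red, green = rng.random((3, 256, 256))
mask = road_candidate_mask(nir, red, green)
print(f"{mask.mean():.2%} of pixels kept as road/vehicle candidates")
```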

  17. Extended nonlinear chirp scaling algorithm for highly squinted missile-borne synthetic aperture radar with diving acceleration

    NASA Astrophysics Data System (ADS)

    Liu, Rengli; Wang, Yanfei

    2016-04-01

    An extended nonlinear chirp scaling (NLCS) algorithm is proposed to process data of highly squinted, high-resolution, missile-borne synthetic aperture radar (SAR) diving with a constant acceleration. Due to the complex diving movement, the traditional signal model and focusing algorithm are no longer suited for missile-borne SAR signal processing. Therefore, an accurate range equation is presented, named the equivalent hyperbolic range model (EHRM), which is more accurate and concise compared with the conventional fourth-order polynomial range equation. Based on the EHRM, a two-dimensional point target reference spectrum is derived, and an extended NLCS algorithm for missile-borne SAR image formation is developed. In the algorithm, a linear range walk correction is used to significantly remove the range-azimuth cross coupling, and an azimuth NLCS processing is adopted to solve the azimuth space-variant focusing problem. Moreover, the operations of the proposed algorithm are carried out without any interpolation, thus incurring a small computational load. Finally, the simulation results and real-data processing results validate the proposed focusing algorithm.

  18. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality

    PubMed Central

    Wang, Xueyi

    2011-01-01

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces. PMID:22247818
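
    A compact sketch of the kMkNN idea for the 1-nearest-neighbor case follows: a buildup stage that clusters the training set with plain k-means and sorts each cluster's members by distance to the centroid, and a search stage that visits clusters nearest-centroid first and prunes candidates with the triangle inequality. This is a simplified reconstruction from the abstract, not the authors' code; the cluster count and test data are assumptions.

```python
import numpy as np

def simple_kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd k-means, used only to pre-cluster the training set."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels

def build_index(train, n_clusters=32):
    """Buildup stage: per cluster, store members sorted by distance to the centroid."""
    centroids, labels = simple_kmeans(train, n_clusters)
    index = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        d = np.linalg.norm(train[members] - centroids[c], axis=1)
        order = np.argsort(d)
        index.append((members[order], d[order]))
    return centroids, index

def query_1nn(x, train, centroids, index):
    """Search stage: visit clusters nearest-centroid first and prune points with
    the triangle-inequality bound d(x, p) >= |d(x, c) - d(p, c)|."""
    best, best_d = -1, np.inf
    for c in np.argsort(np.linalg.norm(centroids - x, axis=1)):
        dxc = np.linalg.norm(x - centroids[c])
        members, dpc = index[c]
        for m, dmc in zip(members, dpc):
            if abs(dxc - dmc) >= best_d:
                if dmc >= dxc:
                    break            # members are sorted by dmc; the rest are even farther
                continue
            d = np.linalg.norm(x - train[m])
            if d < best_d:
                best, best_d = m, d
    return best, best_d

rng = np.random.default_rng(0)
train, q = rng.normal(size=(3000, 16)), rng.normal(size=16)
centroids, index = build_index(train)
i, d = query_1nn(q, train, centroids, index)
assert i == int(np.argmin(np.linalg.norm(train - q, axis=1)))   # pruning is exact
print(i, d)
```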

  19. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Liu, Jian; He, Yang; Zhang, Ruili; Qin, Hong; Sun, Yajuan

    2015-11-15

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.

  20. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    SciTech Connect

    Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv: 1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  1. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    NASA Astrophysics Data System (ADS)

    Xiao, Jianyuan; Qin, Hong; Liu, Jian; He, Yang; Zhang, Ruili; Sun, Yajuan

    2015-11-01

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by the two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave.

  2. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  3. An inverse kinematics algorithm for a highly redundant variable-geometry-truss manipulator

    NASA Technical Reports Server (NTRS)

    Naccarato, Frank; Hughes, Peter

    1989-01-01

    A new class of robotic arm consists of a periodic sequence of truss substructures, each of which has several variable-length members. Such variable-geometry-truss manipulators (VGTMs) are inherently highly redundant and promise a significant increase in dexterity over conventional anthropomorphic manipulators. This dexterity may be exploited for both obstacle avoidance and controlled deployment in complex workspaces. The inverse kinematics problem for such unorthodox manipulators, however, becomes complex because of the large number of degrees of freedom, and conventional solutions to the inverse kinematics problem become inefficient because of the high degree of redundancy. A solution is presented to this problem based on a spline-like reference curve for the manipulator's shape. Such an approach has a number of advantages: (1) direct, intuitive manipulation of shape; (2) reduced calculation time; and (3) direct control over the effective degree of redundancy of the manipulator. Furthermore, although the algorithm was developed primarily for variable-geometry-truss manipulators, it is general enough for application to a number of manipulator designs.

  4. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  5. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
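
    As an illustration of the dispatch family of methods mentioned above, here is a minimal earliest-deadline-first dispatch rule for a single-resource, window-constrained job set; the job fields and the greedy placement rule are assumptions chosen for brevity, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    duration: int
    window_start: int
    window_end: int            # the job must finish by this time

def dispatch_schedule(jobs):
    """Greedy earliest-deadline-first dispatch on a single resource:
    place each job at the earliest feasible time; skip it if its window
    cannot be met."""
    timeline_end, placed = 0, []
    for job in sorted(jobs, key=lambda j: j.window_end):
        start = max(timeline_end, job.window_start)
        if start + job.duration <= job.window_end:
            placed.append((job.name, start, start + job.duration))
            timeline_end = start + job.duration
    return placed

jobs = [Job("A", 3, 0, 6), Job("B", 2, 1, 4), Job("C", 2, 2, 10)]
print(dispatch_schedule(jobs))     # [('B', 1, 3), ('A', 3, 6), ('C', 6, 8)]
```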

  6. Global convergence analysis of fast multiobjective gradient-based dose optimization algorithms for high-dose-rate brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S

    2003-03-07

    We consider the problem of the global convergence of gradient-based optimization algorithms for interstitial high-dose-rate (HDR) brachytherapy dose optimization using variance-based objectives. Possible local minima could lead to only sub-optimal solutions. We perform a configuration space analysis using a representative set of the entire non-dominated solution space. A set of three prostate implants is used in this study. We compare the results obtained by conjugate gradient algorithms, two variable metric algorithms and fast-simulated annealing. For the variable metric algorithm BFGS from Numerical Recipes, large fluctuations are observed. The limited memory L-BFGS algorithm and the conjugate gradient algorithm FRPR are globally convergent. Local minima or degenerate states are not observed. We study the possibility of obtaining a representative set of non-dominated solutions using optimal solution rearrangement and a warm start mechanism. For the surface and volume dose variance and their derivatives, a method is proposed which significantly reduces the number of required operations. The optimization time, ignoring a preprocessing step, is independent of the number of sampling points in the planning target volume. Multiobjective dose optimization in HDR brachytherapy using L-BFGS and a new modified computation method for the objectives and derivatives has been accelerated, depending on the number of sampling points, by a factor in the range 10-100.
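
    The optimization pattern described above (a quasi-Newton method such as L-BFGS driving a variance-type dose objective with an analytic gradient) can be sketched as follows. The random dose-rate kernel, prescription, and bounds are hypothetical, and the sketch uses SciPy's L-BFGS-B rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linear dose model: dose[i] = sum_j A[i, j] * t[j] for dwell times t.
rng = np.random.default_rng(0)
A = rng.random((500, 40))            # 500 target sampling points, 40 dwell positions
d_presc = np.full(500, 10.0)         # prescription dose at every sampling point

def objective(t):
    """Variance-type objective (mean squared deviation from the prescription)
    together with its analytic gradient, as gradient-based optimizers require."""
    resid = A @ t - d_presc
    f = np.mean(resid ** 2)
    g = 2.0 * (A.T @ resid) / resid.size
    return f, g

res = minimize(objective, np.ones(40), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * 40)     # dwell times must stay nonnegative
print(res.fun, res.nit)
```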

  7. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used.
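
    To make the recursive least squares building block concrete, here is a minimal standard RLS estimator with a forgetting factor; it is only the core update that the paper's extended, iterative Wiener-model identifier builds on, and the FIR test system is an assumption for demonstration.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS estimator of theta in y_k = phi_k^T theta + noise,
    with an exponential forgetting factor."""
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)
        self.P = 1e4 * np.eye(n_params)        # large initial covariance
        self.lam = forgetting

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)             # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Identify a hypothetical FIR system y_k = 0.7 u_k - 0.3 u_{k-1} online.
rng = np.random.default_rng(0)
u = rng.normal(size=500)
rls = RecursiveLeastSquares(2)
for n in range(1, 500):
    phi = np.array([u[n], u[n - 1]])
    y = 0.7 * u[n] - 0.3 * u[n - 1] + 0.01 * rng.normal()
    theta = rls.update(phi, y)
print(theta)        # converges towards [0.7, -0.3]
```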

  8. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose, multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO when solving three real case Multi-objective Optimization Problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems considering their formulations ranging from 40 up to 120 decision variables and 2 to 4 objectives. The computational effort required by each algorithm in order to reach the true Pareto front is also analyzed.

  9. Image reconstruction algorithm for recovering high-frequency information in parallel phase-shifting digital holography [Invited].

    PubMed

    Xia, Peng; Shimozato, Yuki; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-01-01

    We propose an image reconstruction algorithm for recovering high-frequency information in parallel phase-shifting digital holography. The proposed algorithm applies three kinds of interpolation and generates three different object waves. A Fourier transform is applied to each object wave, and the spatial-frequency domain is divided into 3×3 segments for each Fourier-transformed object wave. After that, for each segment address in the spatial-frequency domain, the segment with the least interpolation error among the segments sharing that address is extracted. The extracted segments are combined to generate an information-enhanced spatial-frequency spectrum of the object wave, which is then inversely Fourier transformed. In this way the high-frequency information of the reconstructed image is recovered. The effectiveness of the proposed algorithm was verified by a numerical simulation and an experiment.

  10. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-05-31

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free.

  11. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
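
    One common way to obtain subpixel motion estimates quickly, in the spirit of the fast refinement algorithms described above (though not their exact formulation), is a three-point parabolic fit around the integer correlation peak; the toy correlation surface below is an assumption for demonstration.

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer maximum of a correlation surface to subpixel
    precision with a three-point parabolic fit along each axis."""
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    def refine(cm, c0, cp):
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
    dy = refine(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = refine(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx

# Toy correlation surface with its true peak near (10.3, 20.7).
y, x = np.mgrid[0:32, 0:32]
corr = np.exp(-((y - 10.3) ** 2 + (x - 20.7) ** 2) / 4.0)
print(subpixel_peak(corr))        # approximately (10.3, 20.7)
```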

  12. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  13. High specificity but low sensitivity of mutation-specific antibodies against EGFR mutations in non-small-cell lung cancer.

    PubMed

    Bondgaard, Anna-Louise; Høgdall, Estrid; Mellemgaard, Anders; Skov, Birgit G

    2014-12-01

    Determination of epidermal growth factor receptor (EGFR) mutations has a pivotal impact on treatment of non-small-cell lung cancer (NSCLC). A standardized test has not yet been approved. So far, Sanger DNA sequencing has been widely used. Its rather low sensitivity has led to the development of more sensitive methods including real-time PCR (RT-PCR). Immunohistochemistry with mutation-specific antibodies might be a promising detection method. We evaluated 210 samples with NSCLC from an unselected Caucasian population. Extracted DNA was analyzed for EGFR mutations by RT-PCR (Therascreen EGFR PCR kit, Qiagen, UK; reference method). For immunohistochemistry, antibodies against exon19 deletions (clone 6B6), exon21 mutations (clone 43B2) from Cell Signaling Technology (Boston, USA) and EGFR variantIII (clone 218C9) from Dako (Copenhagen, DK) were applied. Protein expression was evaluated, and a staining score (product of intensity (graded 0-3) and percentage (0-100%) of stained tumor cells) was calculated. Positivity was defined as staining score >0. Specificity of the exon19 antibody was 98.8% (95% confidence interval=95.9-99.9%) and of the exon21 antibody 97.8% (95% confidence interval=94.4-99.4%). Sensitivity of the exon19 antibody was 63.2% (95% confidence interval=38.4-83.7%) and of the exon21 antibody was 80.0% (95% confidence interval=44.4-97.5%). Seven exon19 and four exon21 mutations were false negatives (immunohistochemistry negative, RT-PCR positive). Two exon19 and three exon21 mutations were false positives (immunohistochemistry positive, RT-PCR negative). One false positive exon21 mutation had a staining score of 300. The EGFR variantIII antibody showed no correlation to EGFR mutation status determined by RT-PCR or to EGFR immunohistochemistry. High specificity of the mutation-specific antibodies was demonstrated. However, sensitivity was low, especially for exon19 deletions, and thus these antibodies cannot yet be used as a screening method for EGFR mutations in NSCLC

  14. An adaptive sampling algorithm for Doppler-shift fluorescence velocimetry in high-speed flows

    NASA Astrophysics Data System (ADS)

    Le Page, Laurent M.; O'Byrne, Sean

    2017-03-01

    We present an approach to improving the efficiency of obtaining samples over a given domain for the peak location of Gaussian line-shapes. The method uses parameter estimates obtained from previous measurements to determine subsequent sampling locations. The method may be applied to determine the location of a spectral peak, where the monetary or time cost is too high to allow a less efficient search method, such as sampling at uniformly distributed domain locations, to be used. We demonstrate the algorithm using linear least-squares fitting of log-scaled planar laser-induced fluorescence data combined with Monte-Carlo simulation of measurements, to accurately determine the Doppler-shifted fluorescence peak frequency for each pixel of a fluorescence image. A simulated comparison between this approach and a uniformly spaced sampling approach is carried out using fits both for a single pixel and for a collection of pixels representing the fluorescence images that would be obtained in a hypersonic flow facility. In all cases, the peak location of Doppler-shifted line-shapes were determined to a similar precision with fewer samples than could be achieved using the more typical uniformly distributed sampling approach.
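
    The fitting step that the adaptive sampling wraps, linear least squares on log-scaled samples of a Gaussian line-shape, can be sketched as follows; the sampling grid, noise level, and the 1.2 GHz Doppler shift are hypothetical values for demonstration.

```python
import numpy as np

def gaussian_peak_from_samples(freqs, intensities):
    """Estimate the centre of a Gaussian line-shape by linear least squares
    on the log-scaled samples: log I = a + b f + c f^2, peak at -b / (2c)."""
    X = np.column_stack([np.ones_like(freqs), freqs, freqs ** 2])
    a, b, c = np.linalg.lstsq(X, np.log(intensities), rcond=None)[0]
    return -b / (2.0 * c)

# Simulated line-shape with a hypothetical Doppler shift of 1.2 GHz.
rng = np.random.default_rng(0)
f = np.linspace(-3.0, 5.0, 9)                      # sparse sampling locations (GHz)
I = np.exp(-((f - 1.2) ** 2) / 2.0) * (1.0 + 0.02 * rng.normal(size=f.size))
print(gaussian_peak_from_samples(f, np.clip(I, 1e-6, None)))   # close to 1.2
```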

  15. Simulating chemical energies to high precision with fully-scalable quantum algorithms on superconducting qubits

    NASA Astrophysics Data System (ADS)

    O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John

    Quantum simulations of molecules have the potential to calculate industrially important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the ``killer app'' for quantum computers, even before the advent of full error-correction.

  16. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    NASA Technical Reports Server (NTRS)

    Wieserman, W. R.; Schwarze, G. E.; Niedra, J. M.

    1990-01-01

    Limited experimental data exists for the specific core loss and dynamic B-H loops for soft magnetic materials for the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1 to 50 kHz and temperature range of 23 to 300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  17. High frequency, high temperature specific core loss and dynamic B-H hysteresis loop characteristics of soft magnetic alloys

    NASA Technical Reports Server (NTRS)

    Wieserman, W. R.; Schwarze, G. E.; Niedra, J. M.

    1990-01-01

    Limited experimental data exists for the specific core loss and dynamic B-H loop for soft magnetic materials for the combined conditions of high frequency and high temperature. This experimental study investigates the specific core loss and dynamic B-H loop characteristics of Supermalloy and Metglas 2605SC over the frequency range of 1 to 50 kHz and temperature range of 23 to 300 C under sinusoidal voltage excitation. The experimental setup used to conduct the investigation is described. The effects of the maximum magnetic flux density, frequency, and temperature on the specific core loss and on the size and shape of the B-H loops are examined.

  18. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method needs to be used for selecting an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based algorithm for band selection was designed in which a Bhattacharyya distance index indicating the separability between classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene position has a value of 1 if the corresponding band is included or 0 if it is not. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
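
    A minimal sketch of the approach described above follows: binary chromosomes marking selected bands, a two-class Bhattacharyya distance as the fitness function, tournament selection with elitism, and a band-swap mutation that preserves the subset size (crossover is omitted to keep the sketch short). The synthetic two-class data and GA settings are assumptions; the original work was implemented in MATLAB.

```python
import numpy as np

rng = np.random.default_rng(0)

def bhattacharyya(X1, X2):
    """Bhattacharyya distance between two Gaussian classes on the selected bands."""
    m1, m2 = X1.mean(0), X2.mean(0)
    S1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(X1.shape[1])
    S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(X2.shape[1])
    S = 0.5 * (S1 + S2)
    dm = m1 - m2
    term1 = 0.125 * dm @ np.linalg.solve(S, dm)
    term2 = 0.5 * (np.linalg.slogdet(S)[1]
                   - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))
    return term1 + term2

def ga_band_selection(X1, X2, n_select=3, pop=30, gens=40):
    """Tiny GA over binary chromosomes (1 = band kept) with Bhattacharyya fitness."""
    n_bands = X1.shape[1]
    def fitness(chrom):
        idx = np.flatnonzero(chrom)
        if len(idx) != n_select:
            return -np.inf                       # enforce a fixed subset size
        return bhattacharyya(X1[:, idx], X2[:, idx])
    def random_chrom():
        c = np.zeros(n_bands, dtype=int)
        c[rng.choice(n_bands, n_select, replace=False)] = 1
        return c
    population = [random_chrom() for _ in range(pop)]
    for _ in range(gens):
        scores = np.array([fitness(c) for c in population])
        new_pop = [population[int(np.argmax(scores))]]        # elitism
        while len(new_pop) < pop:
            a, b = rng.choice(pop, 2, replace=False)          # tournament selection
            child = (population[a] if scores[a] > scores[b] else population[b]).copy()
            i, j = rng.choice(n_bands, 2, replace=False)      # swap-style mutation
            child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        population = new_pop
    return np.flatnonzero(max(population, key=fitness))

# Hypothetical 20-band data for two classes that differ only in bands 3, 7, 11.
X1, X2 = rng.normal(size=(200, 20)), rng.normal(size=(200, 20))
X2[:, [3, 7, 11]] += 1.5
print(ga_band_selection(X1, X2, n_select=3))
```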

  19. A rain pixel recovery algorithm for videos with highly dynamic scenes.

    PubMed

    Jie Chen; Lap-Pui Chau

    2014-03-01

    Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, in which photometric, chromatic, and probabilistic properties of rain have been exploited to detect and remove the rain effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, these methods give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm has a much better performance for rainy scenes with large motion than existing algorithms.

  20. Correlation algorithm for computing the velocity fields in microchannel flows with high resolution

    NASA Astrophysics Data System (ADS)

    Karchevskiy, M. N.; Tokarev, M. P.; Yagodnitsyna, A. A.; Kozinkin, L. A.

    2015-11-01

    A cross-correlation algorithm that yields the flow velocity field with a spatial resolution of up to a single pixel per vector has been implemented in this work. It provides new information about the structure of microflows and considerably increases the accuracy of flow velocity field measurements. In addition, the algorithm provides information about the velocity fluctuations in the flow structure. The algorithm was tested on synthetic data with varying numbers of test images, in which the velocity distribution was specified by a Siemens star pattern. Experimental validation was performed on data provided within the international project "4th International PIV Challenge". In addition, a detailed comparison with a previously implemented Particle Image Velocimetry algorithm was carried out.
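
    The core operation of such a cross-correlation algorithm, estimating the displacement between two interrogation windows from the peak of their FFT-based cross-correlation, can be sketched as follows; the single-pixel-resolution ensemble part of the algorithm is not reproduced here, and the synthetic frames and test shift are assumptions.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows from the
    peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak coordinates to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic frame pair: the second frame is the first shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_b, frame_a))     # recovers (3, -2)
```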

  1. Patient-specific dose calculation methods for high-dose-rate iridium-192 brachytherapy

    NASA Astrophysics Data System (ADS)

    Poon, Emily S.

    In high-dose-rate 192Ir brachytherapy, the radiation dose received by the patient is calculated according to the AAPM Task Group 43 (TG-43) formalism. This table-based dose superposition method uses dosimetry parameters derived with the radioactive 192Ir source centered in a water phantom. It neglects the dose perturbations caused by inhomogeneities, such as the patient anatomy, applicators, shielding, and radiographic contrast solution. In this work, we evaluated the dosimetric characteristics of a shielded rectal applicator with an endocavitary balloon injected with contrast solution. The dose distributions around this applicator were calculated by the GEANT4 Monte Carlo (MC) code and measured by ionization chamber and GAFCHROMIC EBT film. A patient-specific dose calculation study was then carried out for 40 rectal treatment plans. The PTRAN_CT MC code was used to calculate the dose based on computed tomography (CT) images. This study involved the development of BrachyGUI, an integrated treatment planning tool that can process DICOM-RT data and create PTRAN_CT input initialization files. BrachyGUI also comes with dose calculation and evaluation capabilities. We proposed a novel scatter correction method to account for the reduction in backscatter radiation near tissue-air interfaces. The first step requires calculating the doses contributed by primary and scattered photons separately, assuming a full scatter environment. The scatter dose in the patient is subsequently adjusted using a factor derived by MC calculations, which depends on the distances between the point of interest, the 192Ir source, and the body contour. The method was validated for multicatheter breast brachytherapy, in which the target and skin doses for 18 patient plans agreed with PTRAN_CT calculations to better than 1%. Finally, we developed a CT-based analytical dose calculation method. It corrects for the photon attenuation and scatter based upon the radiological paths determined by ray tracing

  2. Engineered specific and high-affinity inhibitor for a subtype of inward-rectifier K+ channels

    PubMed Central

    Ramu, Yajamana; Xu, Yanping; Lu, Zhe

    2008-01-01

    Inward-rectifier K+ (Kir) channels play many important biological roles and are emerging as important therapeutic targets. Subtype-specific inhibitors would be useful tools for studying the channels' physiological functions. Unfortunately, available K+ channel inhibitors generally lack the necessary specificity for their reliable use as pharmacological tools to dissect the various kinds of K+ channel currents in situ. The highly conserved nature of the inhibitor targets accounts for the great difficulty in finding inhibitors specific for a given class of K+ channels or, worse, individual subtypes within a class. Here, by modifying a toxin from the honey bee venom, we have successfully engineered an inhibitor that blocks Kir1 with high (1 nM) affinity and high (>250-fold) selectivity over many commonly studied Kir subtypes. This success not only yields a highly desirable tool but, perhaps more importantly, demonstrates the practical feasibility of engineering subtype-specific K+ channel inhibitors. PMID:18669667

  3. Design and Implementation of High-Speed Input-Queued Switches Based on a Fair Scheduling Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Qingsheng; Zhao, Hua-An

    To increase both the capacity and the processing speed of input-queued (IQ) switches, we proposed a fair scalable scheduling architecture (FSSA). By employing an FSSA composed of several cascaded sub-schedulers, large-scale high-performance switches or routers can be realized without the capacity limitation of a monolithic device. In this paper, we present a fair scheduling algorithm named FSSA_DI based on an improved FSSA in which a distributed iteration scheme is employed; the scheduler performance is improved and the processing time is reduced as well. Simulation results show that FSSA_DI achieves better average delay and throughput under heavy loads compared to other existing algorithms. Moreover, a practical 64 × 64 FSSA using the FSSA_DI algorithm is implemented on four Xilinx Virtex-4 FPGAs. Measurement results show that the data rate of our solution reaches 800 Mbps and that a sound tradeoff between performance and hardware complexity is achieved.
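
    As background for the scheduling problem, the sketch below shows one generic request-grant-accept iteration for an input-queued switch with round-robin pointers. It is a textbook-style illustration (iSLIP-like), not FSSA_DI; all names and data structures are assumptions.

        # One generic request-grant-accept iteration for an IQ switch (illustrative).
        def rga_iteration(voq, grant_ptr, accept_ptr, n):
            """voq[i][j] > 0 means input i holds cells destined for output j."""
            # Request: each input requests every output for which it has cells.
            requests = {j: [i for i in range(n) if voq[i][j] > 0] for j in range(n)}
            # Grant: each output grants the requesting input nearest its pointer.
            grants = {}
            for j, inputs in requests.items():
                if inputs:
                    grants[j] = min(inputs, key=lambda i, j=j: (i - grant_ptr[j]) % n)
            # Accept: each input accepts the granting output nearest its pointer.
            granted_by = {}
            for j, i in grants.items():
                granted_by.setdefault(i, []).append(j)
            matching = {}
            for i, outputs in granted_by.items():
                matching[i] = min(outputs, key=lambda j, i=i: (j - accept_ptr[i]) % n)
            return matching  # {input: output} pairs matched in this iteration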

  4. Genetic Algorithm for Innovative Device Designs in High-Efficiency III–V Nitride Light-Emitting Diodes

    SciTech Connect

    Zhu, Di; Schubert, Martin F.; Cho, Jaehee; Schubert, E. Fred; Crawford, Mary H.; Koleske, Daniel D.; Shim, Hyunwook; Sone, Cheolsoo

    2012-01-01

    Light-emitting diodes are becoming the next-generation light source because of their prominent benefits in energy efficiency, versatility, and benign environmental impact. However, because of the unique polarization effects in III–V nitrides and the high complexity of light-emitting diodes, further breakthroughs towards truly optimized devices are required. Here we introduce the concept of artificial evolution into the device optimization process. Reproduction and selection are accomplished by means of an advanced genetic algorithm and device simulator, respectively. We demonstrate that this approach can lead to new device structures that go beyond conventional approaches. The innovative designs originating from the genetic algorithm and the demonstration of the predicted results by implementing structures suggested by the algorithm establish a new avenue for complex semiconductor device design and optimization.
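
    A minimal, generic genetic-algorithm loop of the kind described above is sketched below. The encoding of a device as a list of numeric design parameters, the selection scheme, and the fitness placeholder are all illustrative assumptions, with the device simulator standing in as the fitness evaluator.

        # Generic genetic-algorithm loop (illustrative; fitness() would wrap a
        # device simulator that scores a candidate design).
        import random

        def evolve(population, fitness, n_generations=50, mutation_rate=0.1):
            """population: list of designs, each a list of numeric parameters."""
            for _ in range(n_generations):
                # Selection: keep the fitter half of the population as parents.
                population.sort(key=fitness, reverse=True)
                parents = population[: len(population) // 2]
                # Reproduction: single-point crossover between random parent pairs.
                children = []
                while len(children) < len(population) - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(a))
                    child = a[:cut] + b[cut:]
                    # Mutation: occasionally perturb one design parameter.
                    if random.random() < mutation_rate:
                        idx = random.randrange(len(child))
                        child[idx] += random.gauss(0.0, 0.05)
                    children.append(child)
                population = parents + children
            return max(population, key=fitness)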

  5. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate-specific antigen (PSA) ≤10, and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with a fast simulated annealing inverse planning algorithm with high activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low-risk group (initial PSA ≤10, Gleason ≤6, and stage ≤T2a). Risk factors statistically associated with bFFS were initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5%, with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs) and 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were free of GUs and 1.4% had Grade 3 GUs; 95.8% were free of GIs. Conclusion: Inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS comparable with the best published series, with a low toxicity profile.

  6. Algorithmic Approach to High-Throughput Molecular Screening for Alpha Interferon-Resistant Genotypes in Hepatitis C Patients

    PubMed Central

    Sreevatsan, Srinand; Bookout, Jack B.; Ringpis, Fidel M.; Pottathil, Mridula R.; Marshall, David J.; De Arruda, Monika; Murvine, Christopher; Fors, Lance; Pottathil, Raveendran M.; Barathur, Raj R.

    1998-01-01

    This study was designed to analyze the feasibility and validity of using Cleavase Fragment Length Polymorphism (CFLP) analysis as an alternative to DNA sequencing for high-throughput screening of hepatitis C virus (HCV) genotypes in a high-volume molecular pathology laboratory setting. By using a 244-bp amplicon from the 5′ untranslated region of the HCV genome, 61 clinical samples received for HCV reverse transcription-PCR (RT-PCR) were genotyped by this method. The genotype frequencies assigned by the CFLP method were 44.3% for type 1a, 26.2% for type 1b, 13.1% for type 2b, and 5% for type 3a. The results obtained by nucleotide sequence analysis provided 100% concordance with those obtained by CFLP analysis at the major genotype level, with resolvable differences as to subtype designations for five samples. CFLP analysis-derived HCV genotype frequencies also concurred with the national estimates (N. N. Zein et al., Ann. Intern. Med. 125:634–639, 1996). Reanalysis of 42 of these samples in parallel in a different research laboratory reproduced the CFLP fingerprints for 100% of the samples. Similarly, the major subtype designations for 19 samples subjected to different incubation temperature-time conditions were also 100% reproducible. Comparative cost analysis for genotyping of HCV by line probe assay, CFLP analysis, and automated DNA sequencing indicated that the average cost per amplicon was lowest for CFLP analysis, at $20 (direct costs). On the basis of these findings we propose that CFLP analysis is a robust, sensitive, specific, and economical method for large-scale screening of HCV-infected patients for alpha interferon-resistant HCV genotypes. The paper describes an algorithm that uses the RT-PCR-based qualitative screening of samples for HCV detection as a reflex test and also addresses ambiguous genotypes.

  7. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions (shadow + vegetation, bare soil + roads, and buildings); and (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.
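
    The final masking step described above can be sketched as follows. scikit-learn and scikit-image are assumed as stand-ins, the input is taken to be the building-related PFICA component, and the choice of which cluster represents buildings is an illustrative heuristic; the PFICA unmixing itself is not reproduced.

        # K-means masking of an ICA component followed by morphological cleanup.
        import numpy as np
        from sklearn.cluster import KMeans
        from skimage.morphology import opening, square

        def mask_buildings(ica_component):
            """ica_component: 2-D array from the building-related PFICA output."""
            flat = ica_component.reshape(-1, 1)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(flat).reshape(
                ica_component.shape)
            # Assume the cluster with the higher mean response corresponds to buildings.
            means = [ica_component[labels == k].mean() for k in (0, 1)]
            mask = labels == int(np.argmax(means))
            # Morphological opening removes isolated noisy responses.
            return opening(mask, square(3))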

  8. Axle counter for high-speed railway based on fibre Bragg grating sensor and algorithm optimization for peak searching

    NASA Astrophysics Data System (ADS)

    Quan, Yu; He, Dawei; Wang, Yongsheng; Wang, Pengfei

    2014-08-01

    Owing to their electrical isolation, corrosion resistance, and quasi-distributed sensing capability, fiber Bragg grating sensors have been progressively studied for high-speed railway applications. Existing axle counter systems based on fiber Bragg grating sensors are not appropriate for high-speed railways because of shortcomings in sensor placement, low sampling rate, and un-optimized peak-searching algorithms. We propose a new design for a high-speed railway axle counter based on a high-speed fiber Bragg grating demodulation system. We also optimized the peak-searching algorithm by synthesizing the data from three sensors, introducing the time axis, Gaussian fitting, and finite element analysis. The feasibility was verified by a field experiment.
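
    The Gaussian-fitting part of the peak search can be sketched as below. The function names, initial-guess heuristics, and use of SciPy are assumptions; the full pipeline described above (fusion of three sensors, finite element analysis) is not reproduced.

        # Gaussian fit around the strongest sample to locate the Bragg peak.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, amp, center, width, offset):
            return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2)) + offset

        def find_peak_wavelength(wavelengths, intensities):
            """Return the fitted center wavelength of the reflection peak."""
            i0 = int(np.argmax(intensities))
            guess = [intensities[i0] - intensities.min(),      # amplitude
                     wavelengths[i0],                          # center
                     (wavelengths[-1] - wavelengths[0]) / 20,  # width
                     float(intensities.min())]                 # offset
            popt, _ = curve_fit(gaussian, wavelengths, intensities, p0=guess)
            return popt[1]  # fitted Bragg wavelength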

  9. Pseudonephritis is associated with high urinary osmolality and high specific gravity in adolescent soccer players.

    PubMed

    Van Biervliet, Stephanie; Van Biervliet, Jean Pierre; Watteyne, Karel; Langlois, Michel; Bernard, Dirk; Vande Walle, Johan

    2013-08-01

    The study aimed to evaluate the effect of exercise on urine sediment in adolescent soccer players. In 25 15-year-old (range 14.4-15.8 yrs) athletes, urinary protein, osmolality and cytology were analyzed by flow cytometry and automated dipstick analysis before (T(0)), during (T(1)), and after a match (T(2)). All athletes had normal urine analysis and blood pressure at rest, tested before the start of the soccer season. Fifty-eight samples were collected (T(0): 20, T(1): 17, T(2): 21). Proteinuria was present in 20 of 38 samples collected after exercise. Proteinuria was associated with increased urinary osmolality (p < .001) and specific gravity (p < .001). Hyaline and granular casts were present in respectively 8 of 38 and 8 of 38 of the urinary samples after exercise. The presence of casts was associated with urine protein concentration, osmolality, and specific gravity. This was also the case for hematuria (25 of 38) and leucocyturia (9 of 38). Squamous epithelial cells were excreted in equal amounts to white and red blood cells. A notable proportion of adolescent athletes developed sediment abnormalities, which were associated with urinary osmolality and specific gravity.

  10. An evaluation of SEBAL algorithm using high resolution aircraft data acquired during BEAREX07

    NASA Astrophysics Data System (ADS)

    Paul, G.; Gowda, P. H.; Prasad, V. P.; Howell, T. A.; Staggenborg, S.

    2010-12-01

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade SEBAL has been tested over various regions and has found application in solving water resources and irrigation problems. This research combines high resolution remote sensing data and field measurements of surface radiation and agro-meteorological variables to review various SEBAL steps for mapping ET in the Texas High Plains (THP). High resolution aircraft images (0.5-1.8 m) acquired during the Bushland Evapotranspiration and Agricultural Remote Sensing Experiment 2007 (BEAREX07), conducted at the USDA-ARS Conservation and Production Research Laboratory in Bushland, Texas, were utilized to evaluate SEBAL. Accuracy of individual relationships and predicted ET was investigated using observed hourly ET rates from 4 large weighing lysimeters, each located at the center of a 4.7 ha field. The uniqueness and strength of this study come from the fact that it evaluates SEBAL for irrigated and dryland conditions simultaneously, with the lysimeter fields planted to irrigated forage sorghum, irrigated forage corn, dryland clumped grain sorghum, and dryland row sorghum. Improved coefficients for local conditions were developed for the computation of the roughness length for momentum transport. The decision involved in the selection of dry and wet pixels, which essentially determines the partitioning of the available energy between sensible (H) and latent (LE) heat fluxes, is discussed. The difference in roughness lengths, referred to as the kB-1 parameter, was modified in the current study. Performance of SEBAL was evaluated using mean bias error (MBE) and root mean square error (RMSE). An RMSE of ±37.68 W m-2 and ±0.11 mm h-1 was observed for the net radiation and hourly actual ET, respectively.
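
    The energy-balance residual at the heart of SEBAL can be summarized in a short sketch. The variable names and the constant used for the latent heat of vaporization are illustrative, and none of the empirical sub-models discussed above (net radiation, soil heat flux, kB-1) are reproduced.

        # SEBAL-style residual: latent heat flux is what remains of net radiation
        # after soil and sensible heat fluxes are removed (illustrative sketch).
        LAMBDA_V = 2.45e6  # approximate latent heat of vaporization, J kg^-1

        def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
            """All fluxes in W m-2; returns LE in W m-2."""
            return net_radiation - soil_heat_flux - sensible_heat_flux

        def hourly_et_mm(le):
            """Convert an hourly-average LE (W m-2) to evapotranspiration in mm h-1."""
            return le * 3600.0 / LAMBDA_V  # kg m-2 per hour, i.e., mm h-1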

  11. High-order algorithms for compressible reacting flow with complex chemistry

    NASA Astrophysics Data System (ADS)

    Emmett, Matthew; Zhang, Weiqun; Bell, John B.

    2014-05-01

    In this paper we describe a numerical algorithm for integrating the multicomponent, reacting, compressible Navier-Stokes equations, targeted for direct numerical simulation of combustion phenomena. The algorithm addresses two shortcomings of previous methods. First, it incorporates an eighth-order narrow stencil approximation of diffusive terms that reduces the communication compared to existing methods and removes the need to use a filtering algorithm to remove Nyquist frequency oscillations that are not damped with traditional approaches. The methodology also incorporates a multirate temporal integration strategy that provides an efficient mechanism for treating chemical mechanisms that are stiff relative to fluid dynamical time-scales. The overall methodology is eighth order in space with options for fourth order to eighth order in time. The implementation uses a hybrid programming model designed for effective utilisation of many-core architectures. We present numerical results demonstrating the convergence properties of the algorithm with realistic chemical kinetics and illustrating its performance characteristics. We also present a validation example showing that the algorithm matches detailed results obtained with an established low Mach number solver.

  12. Highly Sensitive, Highly Specific Whole-Cell Bioreporters for the Detection of Chromate in Environmental Samples

    PubMed Central

    Branco, Rita; Cristóvão, Armando; Morais, Paula V.

    2013-01-01

    Microbial bioreporters offer excellent potential for the detection of the bioavailable portion of pollutants in contaminated environments, which currently cannot be easily measured. This paper describes the construction and evaluation of two microbial bioreporters designed to detect bioavailable chromate in contaminated water samples. The developed bioreporters are based on the expression of gfp under the control of the chr promoter and the chrB regulator gene of the TnOtChr determinant from Ochrobactrum tritici 5bvl1. The pCHRGFP1 Escherichia coli reporter proved to be specific and sensitive, with a minimum detectable concentration of 100 nM chromate, and did not react with the other heavy metals or chemical compounds analysed. In order to have a bioreporter usable in the presence of different environmental toxicants, the O. tritici type strain was also engineered to fluoresce in the presence of micromolar levels of chromate and proved to be as specific as the first reporter. Their applicability to environmental samples (spiked Portuguese river water) was also demonstrated using either freshly grown or cryo-preserved cells, the latter treatment constituting an operational advantage. These reporter strains can provide on-demand usability in the field and in the near future may become a powerful tool for the identification of chromate-contaminated sites. PMID:23326558

  13. A High-Efficiency Uneven Cluster Deployment Algorithm Based on Network Layered for Event Coverage in UWSNs

    PubMed Central

    Yu, Shanen; Liu, Shuai; Jiang, Peng

    2016-01-01

    Most existing deployment algorithms for event coverage in underwater wireless sensor networks (UWSNs) do not consider that network communication has non-uniform characteristics in three-dimensional underwater environments. Such deployment algorithms ignore the fact that nodes are distributed at different depths and have different probabilities of data acquisition, leading to imbalances in overall network energy consumption, decreasing network performance, and resulting in poor and unreliable network operation in later stages. Therefore, in this study, we propose an uneven cluster deployment algorithm based on network layering for event coverage. First, according to the energy consumption requirement of the communication load at different depths of the underwater network, we obtain the expected number of deployed nodes and the distribution density of each network layer through theoretical analysis and deduction. Afterward, the network is divided into multiple layers based on uneven clusters, and the heterogeneous communication radii of the nodes improve the network connectivity rate. A recovery strategy is used to balance the energy consumption of nodes within a cluster and can efficiently reconstruct the network topology, which ensures that the network maintains high coverage and connectivity rates over a long period of data acquisition. Simulation results show that the proposed algorithm improves network reliability and prolongs network lifetime by significantly reducing the blind movement of network nodes while maintaining high network coverage and connectivity rates. PMID:27973448

  14. Electrolytes with Improved Safety Characteristics for High Voltage, High Specific Energy Li-ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Krause, F. C.; Hwang, C.; West, W. C.; Soler, J.; Whitcanack, L. W.; Prakash, G. K. S.; Ratnakumar, B. V.

    2012-01-01

    (1) NASA is actively pursuing the development of advanced electrochemical energy storage and conversion devices for future lunar and Mars missions; (2) The Exploration Technology Development Program, Energy Storage Project is sponsoring the development of advanced Li-ion batteries and PEM fuel cell and regenerative fuel cell systems for the Altair Lunar Lander, Extravehicular Activities (EVA), and rovers and as the primary energy storage system for Lunar Surface Systems; (3) At JPL, in collaboration with NASA-GRC, NASA-JSC and industry, we are actively developing advanced Li-ion batteries with improved specific energy, energy density and safety. One effort is focused upon developing Li-ion battery electrolyte with enhanced safety characteristics (i.e., low flammability); and (4) A number of commercial applications also require Li-ion batteries with enhanced safety, especially for automotive applications.

  15. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    SciTech Connect

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
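
    A compact sketch of sliced inverse regression, the ingredient IRUQ uses to estimate the SDR subspace, is shown below. It assumes the inputs have already been standardized and uses NumPy only, so it is a simplified stand-in rather than the authors' implementation.

        # Sliced inverse regression (SIR) on standardized inputs (illustrative).
        import numpy as np

        def sir_directions(X, y, n_slices=10, n_directions=2):
            """X: (n, p) standardized inputs; y: (n,) quantity of interest."""
            n, p = X.shape
            order = np.argsort(y)
            slices = np.array_split(order, n_slices)
            # Weighted covariance of the slice-wise means of X.
            M = np.zeros((p, p))
            for idx in slices:
                m = X[idx].mean(axis=0)
                M += (len(idx) / n) * np.outer(m, m)
            # Leading eigenvectors span the estimated SDR subspace.
            eigvals, eigvecs = np.linalg.eigh(M)
            return eigvecs[:, np.argsort(eigvals)[::-1][:n_directions]]

    A response surface such as a polynomial chaos expansion would then be fit to the QoI as a function of the inputs projected onto these directions.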

  16. Potential of a Pharmacogenetic-Guided Algorithm to Predict Optimal Warfarin Dosing in a High-Risk Hispanic Patient

    PubMed Central

    Hernandez-Suarez, Dagmar F.; Claudio-Campos, Karla; Mirabal-Arroyo, Javier E.; Torres-Hernández, Bianca A.; López-Candales, Angel; Melin, Kyle; Duconge, Jorge

    2016-01-01

    Deep abdominal vein thrombosis is extremely rare among thrombotic events secondary to the use of contraceptives. A case to illustrate the clinical utility of ethno-specific pharmacogenetic testing in warfarin management of a Hispanic patient is reported. A 37-year-old Hispanic Puerto Rican, non-gravid female with past medical history of abnormal uterine bleeding on hormonal contraceptive therapy was evaluated for abdominal pain. Physical exam was remarkable for unspecific diffuse abdominal tenderness, and general initial laboratory results—including coagulation parameters—were unremarkable. A contrast-enhanced computed tomography showed a massive thrombosis of the main portal, splenic, and superior mesenteric veins. On admission the patient was started on oral anticoagulation therapy with warfarin at 5 mg/day and low-molecular-weight heparin. The prediction of an effective warfarin dose of 7.5 mg/day, estimated by using a recently developed pharmacogenetic-guided algorithm for Caribbean Hispanics, coincided with the actual patient’s warfarin dose to reach the international normalized ratio target. We speculate that the slow rise in patient’s international normalized ratio observed on the initiation of warfarin therapy, the resulting high risk for thromboembolic events, and the required warfarin dose of 7.5 mg/day are attributable in some part to the presence of the NQO1*2 (g.559C>T, p.P187S) polymorphism, which seems to be significantly associated with resistance to warfarin in Hispanics. By adding genotyping results of this novel variant, the predictive model can inform clinicians better about the optimal warfarin dose in Caribbean Hispanics. The results highlight the potential for pharmacogenetic testing of warfarin to improve patient care. PMID:28210634

  17. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
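
    The reduction-object idea can be sketched as follows in Python. The counting example, class names, and use of a thread pool are assumptions meant to illustrate full replication (one private copy per task, merged at the end), not the paper's runtime system.

        # Reduction-object sketch: private per-task copies, merged at the end.
        from concurrent.futures import ThreadPoolExecutor

        class CountReduction:
            """A tiny reduction object: per-item counts, as in itemset counting."""
            def __init__(self):
                self.counts = {}
            def accumulate(self, item):
                self.counts[item] = self.counts.get(item, 0) + 1
            def merge(self, other):
                for item, c in other.counts.items():
                    self.counts[item] = self.counts.get(item, 0) + c

        def parallel_count(chunks):
            def work(chunk):
                local = CountReduction()   # full replication: one copy per task
                for item in chunk:
                    local.accumulate(item)
                return local
            with ThreadPoolExecutor() as pool:
                partials = list(pool.map(work, chunks))
            result = CountReduction()
            for p in partials:
                result.merge(p)
            return result.counts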

  18. Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application

    DTIC Science & Technology

    2016-02-26

    ...systems a significant portion of processors may suffer from hardware or software faults rendering large-scale computations useless. In this project... simulation, domain decomposition, CFD, gappy data, estimation theory, and gap-tooth algorithm.

  19. GPU-based ray tracing algorithm for high-speed propagation prediction in typical indoor environments

    NASA Astrophysics Data System (ADS)

    Guo, Lixin; Guan, Xiaowei; Liu, Zhongyu

    2015-10-01

    A fast 3-D ray tracing propagation prediction model based on a virtual source tree is presented in this paper; its theoretical foundations are geometrical optics (GO) and the uniform theory of diffraction (UTD). For a typical single-room indoor scene, taking the geometrical and electromagnetic information into account, several acceleration techniques are adopted to raise the efficiency of the ray tracing algorithm. The simulation results indicate that the runtime of the ray tracing algorithm increases sharply when the number of objects in the room becomes large. Therefore, GPU acceleration is used to address that problem. GPUs are far better suited to arithmetic computation than to complex control flow, and tens of thousands of threads in a CUDA program can compute simultaneously to achieve massively parallel acceleration. Finally, a typical single room with several objects is simulated using both the serial and the GPU-parallel ray tracing algorithms. The results show that the GPU-based algorithm achieves much greater efficiency than the serial one.

  20. An Evaluation of SEBAL Algorithm Using High Resolution Aircraft Data Acquired During BEAREX07

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Surface Energy Balance Algorithm for Land (SEBAL) computes spatially distributed surface energy fluxes and evapotranspiration (ET) rates using a combination of empirical and deterministic equations executed in a strictly hierarchical sequence. Over the past decade, SEBAL has been tested over various...

  1. Overall plant design specification Modular High Temperature Gas-cooled Reactor. Revision 9

    SciTech Connect

    1990-05-01

    Revision 9 of the "Overall Plant Design Specification Modular High Temperature Gas-Cooled Reactor," DOE-HTGR-86004 (OPDS), has been completed and is hereby distributed for use by the HTGR Program team members. This document, Revision 9 of the "Overall Plant Design Specification" (OPDS), reflects the changes in the MHTGR design requirements and configuration resulting from approved Design Change Proposals DCP BNI-003 and DCP BNI-004, involving the Nuclear Island Cooling and Spent Fuel Cooling Systems, respectively.

  2. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
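
    The role of the Bloom filter for spectrum membership queries can be illustrated with a pure-Python sketch. The sizes, hash construction, and class name are assumptions and stand in for the CUDA texture-memory implementation described above.

        # Pure-Python Bloom filter for k-mer membership queries (illustrative).
        import hashlib

        class BloomFilter:
            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits = n_bits
                self.n_hashes = n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, kmer):
                for i in range(self.n_hashes):
                    digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                    yield int.from_bytes(digest[:8], "little") % self.n_bits

            def add(self, kmer):
                for pos in self._positions(kmer):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def __contains__(self, kmer):
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(kmer))

    Membership tests may return false positives but never false negatives, which is what makes the structure compact enough to hold a read spectrum in fast memory.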

  3. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers Omaha District and Engineering and Support Center, Huntsville, the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently planning and conducting the largest-ever survey at the former Buckley Field (60,000 acres) in Colorado, using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation and a minimal need for human interpretation in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by the SRI SAR.

  4. Denatured G-protein coupled receptors as immunogens to generate highly specific antibodies.

    PubMed

    Talmont, Franck; Moulédous, Lionel; Boué, Jérôme; Mollereau, Catherine; Dietrich, Gilles

    2012-01-01

    G-protein coupled receptors (GPCRs) play a major role in a number of physiological and pathological processes. Thus, GPCRs have become the most frequent targets for development of new therapeutic drugs. In this context, the availability of highly specific antibodies may be decisive in obtaining reliable findings on the localization, function, and medical relevance of GPCRs. However, the rapid and easy generation of highly selective anti-GPCR antibodies is still a challenge. Herein, we report that highly specific antibodies suitable for detection of GPCRs in native and unfolded forms can be elicited by immunizing animals against purified full length denatured recombinant GPCRs. Contrasting with the currently accepted postulate, our study shows that an active and well-folded GPCR is not required for the production of specific anti-GPCR antibodies. This new immunizing strategy, validated with three different human GPCRs (μ-opioid, κ-opioid, and neuropeptide FF2 receptors), might be generalized to other members of the GPCR family.

  5. Wide Operating Temperature Range Electrolytes for High Voltage and High Specific Energy Li-Ion Cells

    NASA Technical Reports Server (NTRS)

    Smart, M. C.; Hwang, C.; Krause, F. C.; Soler, J.; West, W. C.; Ratnakumar, B. V.; Amine, K.

    2012-01-01

    A number of electrolyte formulations that have been designed to operate over a wide temperature range have been investigated in conjunction with layered-layered metal oxide cathode materials developed at Argonne. In this study, we have evaluated a number of electrolytes in Li-ion cells consisting of Conoco Phillips A12 graphite anodes and Toda HE5050 Li(1.2)Ni(0.15)Co(0.10)Mn(0.55)O2 cathodes. The electrolytes studied consisted of LiPF6 in carbonate-based electrolytes that contain ester co-solvents with various solid electrolyte interphase (SEI) promoting additives, many of which have been demonstrated to perform well in 4V systems. More specifically, we have investigated the performance of a number of methyl butyrate (MB) containing electrolytes (i.e., LiPF6 in ethylene carbonate (EC) + ethyl methyl carbonate (EMC) + MB (20:20:60 v/v %)) that contain various additives, including vinylene carbonate, lithium oxalate, and lithium bis(oxalato)borate (LiBOB). When these systems were evaluated at various rates at low temperatures, the methyl butyrate-based electrolytes resulted in improved rate capability compared to cells with all carbonate-based formulations. It was also ascertained that, in contrast to the traditionally used LiNi(0.80)Co(0.15)Al(0.05)O2-based systems, the generally poor rate capability at low temperature is governed by slow cathode kinetics rather than being strongly influenced by the electrolyte type.

  6. Hybrid-PIC Modeling of a High-Voltage, High-Specific-Impulse Hall Thruster

    NASA Technical Reports Server (NTRS)

    Smith, Brandon D.; Boyd, Iain D.; Kamhawi, Hani; Huang, Wensheng

    2013-01-01

    The primary life-limiting mechanism of Hall thrusters is the sputter erosion of the discharge channel walls by high-energy propellant ions. Because of the difficulty involved in characterizing this erosion experimentally, many past efforts have focused on numerical modeling to predict erosion rates and thruster lifespan, but those analyses were limited to Hall thrusters operating in the 200-400V discharge voltage range. Thrusters operating at higher discharge voltages (V(sub d) >= 500 V) present an erosion environment that may differ greatly from that of the lower-voltage thrusters modeled in the past. In this work, HPHall, a well-established hybrid-PIC code, is used to simulate NASA's High-Voltage Hall Accelerator (HiVHAc) at discharge voltages of 300, 400, and 500V as a first step towards modeling the discharge channel erosion. It is found that the model accurately predicts the thruster performance at all operating conditions to within 6%. The model predicts a normalized plasma potential profile that is consistent between all three operating points, with the acceleration zone appearing in the same approximate location. The expected trend of increasing electron temperature with increasing discharge voltage is observed. An analysis of the discharge current oscillations shows that the model predicts oscillations that are much greater in amplitude than those measured experimentally at all operating points, suggesting that the differences in oscillation amplitude are not strongly associated with discharge voltage.

  7. Vision Algorithm for the Solar Aspect System of the High Energy Replicated Optics to Explore the Sun Mission

    NASA Technical Reports Server (NTRS)

    Cramer, Alexander Krishnan

    2014-01-01

    This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
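
    The matched-filter stage can be sketched as follows. The cross-shaped template, threshold, and neighborhood size are illustrative parameters, and SciPy is assumed, so this is a simplified stand-in for the flight algorithm rather than a reproduction of it.

        # Matched filtering with a cross-shaped template, keeping strong local maxima.
        import numpy as np
        from scipy.signal import correlate2d
        from scipy.ndimage import maximum_filter

        def detect_fiducials(image, size=11, threshold=0.6):
            # Zero-mean cross-shaped template.
            template = np.zeros((size, size))
            template[size // 2, :] = 1.0
            template[:, size // 2] = 1.0
            template -= template.mean()
            response = correlate2d(image - image.mean(), template, mode="same")
            response /= np.abs(response).max()
            # A pixel is a detection if it is a local maximum above the threshold.
            peaks = (response == maximum_filter(response, size=size)) & (response > threshold)
            return np.argwhere(peaks)  # (row, col) candidate fiducial centers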

  8. Fluorescence encoded super resolution imaging based on a location estimation algorithm for high-density fluorescence probes

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun

    2016-11-01

    In this paper, we propose a fluorescence encoded super resolution technique based on an estimation algorithm to determine locations of high-density fluorescence emitters. In our method, several types of fluorescence coded probes are employed to reduce densities of target molecules labeled with individual codes. By applying an estimation algorithm to each coded image, the locations of the high density probes can be determined. Due to multiplexed fluorescence imaging, this approach will provide fast super resolution microscopy. In experiments, we evaluated the performance of the method using probes with different fluorescence wavelengths. Numerical simulation results show that the locations of probes with a density of 200 μm^{-2}, which is a typical membrane-receptor expression level, are determined with acquisition of 16 different coded images.

  9. Applicability of data mining algorithms in the identification of beach features/patterns on high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.

    2015-01-01

    The available beach classification algorithms and sediment budget models are mainly based on in situ parameters, usually unavailable for several coastal areas. A morphological analysis using remotely sensed data is a valid alternative. This study focuses on the application of data mining techniques, particularly decision trees (DTs) and artificial neural networks (ANNs), to an IKONOS-2 image in order to identify beach features/patterns in a stretch of the northwest coast of Portugal. Based on knowledge of the coastal features, five classes were defined. In the identification of beach features/patterns, the ANN algorithm presented an overall accuracy of 98.6% and a kappa coefficient of 0.97. The best DT algorithm (with pruning) presented an overall accuracy of 98.2% and a kappa coefficient of 0.97. The results obtained through the ANN and DTs were in agreement. However, the ANN presented a classification more sensitive to rip currents. The use of ANNs and DTs for beach classification from remotely sensed data resulted in an increased classification accuracy when compared with traditional classification methods. The association of remotely sensed high-spatial resolution data and data mining algorithms is an effective methodology with which to identify beach features/patterns.
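
    A minimal scikit-learn stand-in for the pruned decision-tree variant is shown below. The band features, labels, and pruning parameter are illustrative assumptions rather than the configuration used in the study.

        # Pruned decision-tree classifier over per-pixel band features (illustrative).
        from sklearn.tree import DecisionTreeClassifier

        def train_beach_classifier(pixel_features, pixel_labels):
            """pixel_features: (n, bands) samples; pixel_labels: (n,) class labels."""
            clf = DecisionTreeClassifier(ccp_alpha=0.001)  # cost-complexity pruning
            return clf.fit(pixel_features, pixel_labels)

        def classify_image(clf, image_bands):
            """image_bands: (rows, cols, bands) array; returns a (rows, cols) label map."""
            rows, cols, bands = image_bands.shape
            return clf.predict(image_bands.reshape(-1, bands)).reshape(rows, cols)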

  10. Surviving at high elevations: an inter- and intra-specific analysis in a mountain bird community.

    PubMed

    Bastianelli, G; Tavecchia, G; Meléndez, L; Seoane, J; Obeso, J R; Laiolo, P

    2017-03-20

    Elevation represents an important selection agent on self-maintenance traits and correlated life histories in birds, but no study has analysed whether life-history variation along this environmental cline is consistent among and within species. In a sympatric community of passerines, we analysed how the average adult survival of 25 open-habitat species varied with their elevational distribution and how adult survival varied with elevation at the intra-specific level. For such purpose, we estimated intra-specific variation in adult survival in two mountainous species, the Water pipit (Anthus spinoletta) and the Northern wheatear (Oenanthe oenanthe) in NW Spain, by means of capture-recapture analyses. At the inter-specific level, high-elevation species showed higher survival values than low elevation ones, likely because a greater allocation to self-maintenance permits species to persist in alpine environments. At the intra-specific level, the magnitude of survival variation was lower by far. Nevertheless, Water pipit survival slightly decreased at high elevations, while the proportion of transient birds increased. In contrast, no such relationships were found in the Northern wheatear. Intra-specific analyses suggest that living at high elevation may be costly, such as for the Water pipit in our case study. Therefore, it seems that a species can persist with viable populations in uplands, where extrinsic mortality is high, by increasing the investment in self-maintenance and prospecting behaviours.

  11. Mapping cell type-specific transcriptional enhancers using high affinity, lineage-specific Ep300 bioChIP-seq

    PubMed Central

    Zhou, Pingzhu; Gu, Fei; Zhang, Lina; Akerberg, Brynn N; Ma, Qing; Li, Kai; He, Aibin; Lin, Zhiqiang; Stevens, Sean M; Zhou, Bin; Pu, William T

    2017-01-01

    Understanding the mechanisms that regulate cell type-specific transcriptional programs requires developing a lexicon of their genomic regulatory elements. We developed a lineage-selective method to map transcriptional enhancers, regulatory genomic regions that activate transcription, in mice. Since most tissue-specific enhancers are bound by the transcriptional co-activator Ep300, we used Cre-directed, lineage-specific Ep300 biotinylation and pulldown on immobilized streptavidin followed by next generation sequencing of co-precipitated DNA to identify lineage-specific enhancers. By driving this system with lineage-specific Cre transgenes, we mapped enhancers active in embryonic endothelial cells/blood or skeletal muscle. Analysis of these enhancers identified new transcription factor heterodimer motifs that likely regulate transcription in these lineages. Furthermore, we identified candidate enhancers that regulate adult heart- or lung- specific endothelial cell specialization. Our strategy for tissue-specific protein biotinylation opens new avenues for studying lineage-specific protein-DNA and protein-protein interactions. DOI: http://dx.doi.org/10.7554/eLife.22039.001 PMID:28121289

  12. Influence of measuring algorithm on shape accuracy in the compensating turning of high gradient thin-wall parts

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi

    2015-02-01

    To meet aerodynamic requirements, infrared domes and windows with conformal, thin-wall structures are becoming the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes along the axial position, and it is very difficult to meet the shape accuracy requirement with a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change of stiffness. In this paper, a contact measuring system with five degrees of freedom is developed on an ultra-precision diamond lathe to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, the distribution of measuring points is optimized using a data screening method. The influence of the sampling frequency on measuring errors is analyzed, the best sampling frequency is found with a planning algorithm, the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy of the conformal dome during on-machine measurement is greatly improved. For a high-gradient MgF2 conformal dome, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than PV 0.8 μm, a great improvement over the PV 3 μm obtained before compensating turning, which verifies the correctness of the measuring algorithm.

  13. A low-jitter and high-throughput scheduling based on genetic algorithm in slotted WDM networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Jin, Yaohui; Su, Yikai; Xu, Buwei; Zhang, Chunlei; Zhu, Yi; Hu, Weisheng

    2005-02-01

    Slotted WDM, which achieves higher capacity than conventional WDM and SDH networks, has received much attention recently. A ring network for this architecture has been demonstrated experimentally. In a slotted WDM ring network, each node is equipped with a wavelength-tunable transmitter and a fixed receiver and is assigned a specific wavelength. A node can send data to any other node by tuning its wavelength accordingly in a time slot. One of the important issues for such networks is scheduling. Once synchronization and propagation are handled, the scheduling problem can be reduced to that of an input-queued switch, and many schemes have been proposed to address these two issues. However, it has been proved that scheduling such a network while taking both jitter and throughput into consideration is NP-hard. A greedy algorithm has previously been proposed to solve it. The main contribution of this paper is a novel genetic algorithm to obtain optimal or near-optimal solutions to this specific NP-hard problem. We devise problem-specific chromosome codes, a fitness function, and crossover and mutation operations. Experimental results show that our GA provides better performance in terms of throughput and jitter than a greedy heuristic.

  14. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    NASA Technical Reports Server (NTRS)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. The ACS calculates the desired gimbal motion for tracking the ground station or for slewing

  15. Functional Characteristics of a Highly Specific Integrase Encoded by an LTR-Retrotransposon

    PubMed Central

    Peyretaillade, Eric; Brasset, Emilie; Dastugue, Bernard; Vaury, Chantal

    2008-01-01

    Background: The retroviral integrase protein catalyzes the insertion of linear viral DNA into host cell DNA. Although different retroviruses have been shown to target distinctive chromosomal regions, few of them display site-specific integration. ZAM, a retroelement from Drosophila melanogaster very similar in structure and replication cycle to mammalian retroviruses, is highly site-specific. Indeed, ZAM copies target the genomic 5′-CGCGCg-3′ consensus sequence. To elucidate the determinants of this high integration specificity, we investigated the functional properties of its integrase protein, denoted ZAM-IN. Principal Findings: Here we show that ZAM-IN is able to nick DNA molecules in vitro. This endonuclease activity targets specific sequences that are present in a 388 bp fragment taken from the white locus and known to be a genomic ZAM integration site in vivo. Furthermore, ZAM-IN displays the unusual ability to directly bind specific genomic DNA sequences. Two specific and independent sites are recognized within the 388 bp fragment of the white locus: the CGCGCg sequence and a closely apposed site different in sequence. Conclusion: This study strongly argues that the intrinsic properties of ZAM-IN, i.e., its binding properties and its endonuclease activity, play an important part in ZAM integration specificity. Its ability to select two binding sites and to nick the DNA molecule is reminiscent of the strategy used by some site-specific recombination enzymes and forms the basis for site-specific integration strategies potentially useful in a broad range of genetic engineering applications. PMID:18784842

  16. Hollow carbon nanofiber-encapsulated sulfur cathodes for high specific capacity rechargeable lithium batteries.

    PubMed

    Zheng, Guangyuan; Yang, Yuan; Cha, Judy J; Hong, Seung Sae; Cui, Yi

    2011-10-12

    Sulfur has a high specific capacity of 1673 mAh/g as lithium battery cathodes, but its rapid capacity fading due to polysulfides dissolution presents a significant challenge for practical applications. Here we report a hollow carbon nanofiber-encapsulated sulfur cathode for effective trapping of polysulfides and demonstrate experimentally high specific capacity and excellent electrochemical cycling of the cells. The hollow carbon nanofiber arrays were fabricated using anodic aluminum oxide (AAO) templates, through thermal carbonization of polystyrene. The AAO template also facilitates sulfur infusion into the hollow fibers and prevents sulfur from coating onto the exterior carbon wall. The high aspect ratio of the carbon nanofibers provides an ideal structure for trapping polysulfides, and the thin carbon wall allows rapid transport of lithium ions. The small dimension of these nanofibers provides a large surface area per unit mass for Li(2)S deposition during cycling and reduces pulverization of electrode materials due to volumetric expansion. A high specific capacity of about 730 mAh/g was observed at C/5 rate after 150 cycles of charge/discharge. The introduction of LiNO(3) additive to the electrolyte was shown to improve the Coulombic efficiency to over 99% at C/5. The results show that the hollow carbon nanofiber-encapsulated sulfur structure could be a promising cathode design for rechargeable Li/S batteries with high specific energy.

  17. Study of high speed complex number algorithms. [for determining antenna for field radiation patterns

    NASA Technical Reports Server (NTRS)

    Heisler, R.

    1981-01-01

    A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three dimensional Fourier transform approach is used to generate a two dimensional radiation cross-section along a planar cut at any angle phi through the far field pattern. Salient to the method is an algorithm for evaluating a subset of the total three dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.

  18. Modified Omega-k Algorithm for High-Speed Platform Highly-Squint Staggered SAR Based on Azimuth Non-Uniform Interpolation

    PubMed Central

    Zeng, Hong-Cheng; Chen, Jie; Liu, Wei; Yang, Wei

    2015-01-01

    In this work, the staggered SAR technique is employed for high-speed platform highly-squint SAR by varying the pulse repetition interval (PRI) as a linear function of range-walk. To focus the staggered SAR data more efficiently, a low-complexity modified Omega-k algorithm is proposed based on a novel method for optimal azimuth non-uniform interpolation, avoiding zero padding in range direction for recovering range cell migration (RCM) and saving in both data storage and computational load. An approximate model on continuous PRI variation with respect to sliding receive-window is employed in the proposed algorithm, leaving a residual phase error only due to the effect of a time-varying Doppler phase caused by staggered SAR. Then, azimuth non-uniform interpolation (ANI) at baseband is carried out to compensate the azimuth non-uniform sampling (ANS) effect resulting from continuous PRI variation, which is further followed by the modified Omega-k algorithm. The proposed algorithm has a significantly lower computational complexity, but with an equally effective imaging performance, as shown in our simulation results. PMID:25664433

  19. Establishment of an Algorithm Using prM/E- and NS1-Specific IgM Antibody-Capture Enzyme-Linked Immunosorbent Assays in Diagnosis of Japanese Encephalitis Virus and West Nile Virus Infections in Humans.

    PubMed

    Galula, Jedhan U; Chang, Gwong-Jen J; Chuang, Shih-Te; Chao, Day-Yu

    2016-02-01

    The front-line assay for the presumptive serodiagnosis of acute Japanese encephalitis virus (JEV) and West Nile virus (WNV) infections is the premembrane/envelope (prM/E)-specific IgM antibody-capture enzyme-linked immunosorbent assay (MAC-ELISA). Due to antibody cross-reactivity, MAC-ELISA-positive samples may be confirmed with a time-consuming plaque reduction neutralization test (PRNT). In the present study, we applied a previously developed anti-nonstructural protein 1 (NS1)-specific MAC-ELISA (NS1-MAC-ELISA) on archived acute-phase serum specimens from patients with confirmed JEV and WNV infections and compared the results with prM/E containing virus-like particle-specific MAC-ELISA (VLP-MAC-ELISA). Paired-receiver operating characteristic (ROC) curve analyses revealed no statistical differences in the overall assay performances of the VLP- and NS1-MAC-ELISAs. The two methods had high sensitivities of 100% but slightly lower specificities that ranged between 80% and 100%. When the NS1-MAC-ELISA was used to confirm positive results in the VLP-MAC-ELISA, the specificity of serodiagnosis, especially for JEV infection, was increased to 90% when applied in areas where JEV cocirculates with WNV, or to 100% when applied in areas that were endemic for JEV. The results also showed that using multiple antigens could resolve the cross-reactivity in the assays. Significantly higher positive-to-negative (P/N) values were consistently obtained with the homologous antigens than those with the heterologous antigens. JEV or WNV was reliably identified as the currently infecting flavivirus by a higher ratio of JEV-to-WNV P/N values or vice versa. In summary of the above-described results, the diagnostic algorithm combining the use of multiantigen VLP- and NS1-MAC-ELISAs was developed and can be practically applied to obtain a more specific and reliable result for the serodiagnosis of JEV and WNV infections without the need for PRNT. The developed algorithm should provide great
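
    The final decision step described above reduces to comparing P/N ratios against homologous and heterologous antigens. In the sketch below, the cutoff value and function name are hypothetical placeholders; the actual interpretation criteria should be taken from the study itself.

        # Call the infecting flavivirus from homologous vs. heterologous P/N values
        # (illustrative; the cutoff is a hypothetical placeholder).
        def call_flavivirus(pn_jev, pn_wnv, positive_cutoff=2.0):
            """pn_jev, pn_wnv: P/N values from JEV- and WNV-antigen MAC-ELISAs."""
            if max(pn_jev, pn_wnv) < positive_cutoff:
                return "negative"
            if pn_jev > pn_wnv:
                return "JEV"
            if pn_wnv > pn_jev:
                return "WNV"
            return "indeterminate"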

  20. New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes

    SciTech Connect

    Agrawal, Rakesh

    2013-11-21

    This project addressed and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods to industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. chemical company for use by practitioners. The successful execution of this program has placed methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method, with a more complete search space and the optimization algorithm, has the potential to yield low-energy distillation configurations for all such applications, with energy savings of up to 50%.

  1. An autonomous navigation algorithm for high orbit satellite using star sensor and ultraviolet earth sensor.

    PubMed

    Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu

    2013-01-01

    An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. Star images are sampled by FOV1, and ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed on FOV1, and the optical axis direction of FOV1 in the J2000.0 coordinate system is then calculated. The center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth image. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the earth center vector from FOV2. The position accuracy of the autonomous navigation is improved from 1000 m to 300 m, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sinusoidal errors of the autonomous navigation solution are eliminated. Autonomous satellite navigation with a sensor that integrates an ultraviolet earth sensor and a star sensor is shown to be robust.
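
    To make the geometry concrete, here is a minimal sketch (not the authors' implementation) of how a position fix can be derived from the star-sensor attitude and the earth-sensor measurement; the function name, the fixed UV-limb radius and the example numbers are illustrative assumptions.

```python
import numpy as np

R_EARTH_UV = 6478.0  # km; assumed effective radius of the UV limb (illustrative value)

def estimate_position(att_body_to_eci, earth_dir_body, earth_angular_radius):
    """Hedged sketch: estimate satellite position in the inertial (ECI) frame from
    - att_body_to_eci: 3x3 rotation matrix from the star-sensor attitude solution
    - earth_dir_body: unit vector toward the earth's center in the body/FOV2 frame
    - earth_angular_radius: apparent half-angle of the earth disc [rad]
    """
    # Range from the disc geometry: sin(rho) = R_earth / r
    r = R_EARTH_UV / np.sin(earth_angular_radius)
    # Direction to the earth centre expressed in the inertial frame
    earth_dir_eci = att_body_to_eci @ earth_dir_body
    # The satellite sits opposite to the direction of the earth centre
    return -r * earth_dir_eci

# Example with illustrative numbers (identity attitude, ~8.7 deg half-angle)
pos = estimate_position(np.eye(3), np.array([0.0, 0.0, 1.0]), np.radians(8.7))
print(pos)  # roughly geostationary altitude for this half-angle
```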

  2. Novel method for the high-throughput production of phosphorylation site-specific monoclonal antibodies

    PubMed Central

    Kurosawa, Nobuyuki; Wakata, Yuka; Inobe, Tomonao; Kitamura, Haruki; Yoshioka, Megumi; Matsuzawa, Shun; Kishi, Yoshihiro; Isobe, Masaharu

    2016-01-01

    Threonine phosphorylation accounts for 10% of all phosphorylation sites compared with 0.05% for tyrosine and 90% for serine. Although monoclonal antibody generation for phospho-serine and -tyrosine proteins is progressing, there has been limited success regarding the production of monoclonal antibodies against phospho-threonine proteins. We developed a novel strategy for generating phosphorylation site-specific monoclonal antibodies by cloning immunoglobulin genes from single plasma cells that were fixed, intracellularly stained with fluorescently labeled peptides and sorted without causing RNA degradation. Our high-throughput fluorescence activated cell sorting-based strategy, which targets abundant intracellular immunoglobulin as a tag for fluorescently labeled antigens, greatly increases the sensitivity and specificity of antigen-specific plasma cell isolation, enabling the high-efficiency production of monoclonal antibodies with desired antigen specificity. This approach yielded yet-undescribed guinea pig monoclonal antibodies against threonine 18-phosphorylated p53 and threonine 68-phosphorylated CHK2 with high affinity and specificity. Our method has the potential to allow the generation of monoclonal antibodies against a variety of phosphorylated proteins. PMID:27125496

  3. Improvements in immunoprecipitation of specific messenger RNA. Isolation of highly purified conalbumin mRNA in high yield.

    PubMed

    Payvar, F; Schimke, R T

    1979-11-01

    We have previously described procedures for the isolation of specific mRNA employing immunoprecipitation of polysomes. In spite of our success with ovalbumin mRNA in the chicken oviduct, we have had considerable difficulties in applying these same published techniques to the immunopurification of conalbumin mRNA, despite the fact that the chicken oviduct synthesizes up to 10% of protein as conalbumin. Here we describe a number of modifications and refinements which have proved essential in obtaining intact conalbumin mRNA in high purity and high yields. These refinements include: (a) improved purification of conalbumin in order to remove contaminating proteins that result in impure antibodies; (b) improved isolation of specific conalbumin antibody in high yields; (c) improved methods for reducing contamination by non-specific polysomes; (d) improved techniques for isolation of RNA from immunoprecipitates resulting in less degradation and higher recovery of conalbumin mRNA; (e) improved techniques for efficient translation of conalbumin mRNA involving treatment of the RNA with methylmercury prior to translation. We conclude that problems involved in the immunoprecipitation of different mRNAs may differ, and that various refinements in techniques may be required for obtaining highly purified preparations of intact mRNA in high yields.

  4. Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development

    DTIC Science & Technology

    2016-02-01

    [Report documentation page excerpt] Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development, BAE Systems, February 2016. Contract number FA8750-13-C-0240. Approved for Public Release; Distribution Unlimited (PA# 88ABW-2016-0232, cleared 22 JAN 2016). The abstract field begins: "Formal software verification"

  5. Quantifying domain-ligand affinities and specificities by high-throughput holdup assay.

    PubMed

    Vincentelli, Renaud; Luck, Katja; Poirson, Juline; Polanowska, Jolanta; Abdat, Julie; Blémont, Marilyne; Turchetto, Jeremy; Iv, François; Ricquier, Kevin; Straub, Marie-Laure; Forster, Anne; Cassonnet, Patricia; Borg, Jean-Paul; Jacob, Yves; Masson, Murielle; Nominé, Yves; Reboul, Jérôme; Wolff, Nicolas; Charbonnier, Sebastian; Travé, Gilles

    2015-08-01

    Many protein interactions are mediated by small linear motifs interacting specifically with defined families of globular domains. Quantifying the specificity of a motif requires measuring and comparing its binding affinities to all its putative target domains. To this end, we developed the high-throughput holdup assay, a chromatographic approach that can measure up to 1,000 domain-motif equilibrium binding affinities per day. After benchmarking the approach on 210 PDZ-peptide pairs with known affinities, we determined the affinities of two viral PDZ-binding motifs derived from human papillomavirus E6 oncoproteins for 209 PDZ domains covering 79% of the human 'PDZome'. We obtained sharply sequence-dependent binding profiles that quantitatively describe the PDZome recognition specificity of each motif. This approach, applicable to many categories of domain-ligand interactions, has wide potential for quantifying the specificities of interactomes.

  6. Highly specific and sensitive electrochemical genotyping via gap ligation reaction and surface hybridization detection.

    PubMed

    Huang, Yong; Zhang, Yan-Li; Xu, Xiangmin; Jiang, Jian-Hui; Shen, Guo-Li; Yu, Ru-Qin

    2009-02-25

    This paper developed a novel electrochemical genotyping strategy based on a gap ligation reaction with surface hybridization detection. The strategy utilized homogeneous enzymatic reactions to generate molecular beacon-structured allele-specific products that could be cooperatively annealed to capture probes stably immobilized on the surface via disulfide anchors, thus allowing ultrasensitive surface hybridization detection of the allele-specific products through redox tags in close proximity to the electrode. Such a unique biphasic architecture provided a universal methodology for incorporating enzymatic discrimination reactions into electrochemical genotyping with desirable reproducibility, high efficiency and no interference from interfacial steric hindrance. The developed technique was demonstrated to show intrinsically high sensitivity for direct genomic analysis, and excellent specificity with discrimination of single-nucleotide variations.

  7. Chemical synthesis of high specific-activity (/sup 35/S)adenosylhomocysteine

    SciTech Connect

    Stern, P.H.; Hoffman, R.M.

    1986-11-01

    The study of the family of transmethylases, critical to normal cellular function and often altered in cancer, can be facilitated by the availability of a high specific-activity S-adenosylhomocysteine. The authors report the two-step preparation of (/sup 35/S)adenosylhomocysteine from (/sup 35/S)methionine at a specific activity of 1420 Ci/mmol in an overall yield of 24% by a procedure involving demethylation of the (/sup 35/S)methionine to (/sup 35/S)homocysteine followed by condensation with 5'-chloro-5'-deoxyadenosine. The ease of the reactions, ready availability and low cost of the reagents and high specific-activity and stability of the product make the procedure an attractive one with many uses, and superior to current methodology.

  8. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M. Allen; Yu, Conrad M.; Raley, Norman F.

    1999-01-01

    Fabrication and use of porous silicon structures to increase the surface area of heated reaction chambers, electrophoresis devices, thermopneumatic sensor-actuators, chemical preconcentrators, and filtering or flow control devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters.

  9. Porous silicon structures with high surface area/specific pore size

    DOEpatents

    Northrup, M.A.; Yu, C.M.; Raley, N.F.

    1999-03-16

    Fabrication and use of porous silicon structures to increase the surface area of heated reaction chambers, electrophoresis devices, thermopneumatic sensor-actuators, chemical preconcentrators, and filtering or flow control devices. In particular, such high surface area or specific pore size porous silicon structures will be useful in significantly augmenting the adsorption, vaporization, desorption, condensation and flow of liquids and gases in applications that use such processes on a miniature scale. Examples that will benefit from a high surface area, porous silicon structure include sample preconcentrators that are designed to adsorb and subsequently desorb specific chemical species from a sample background; chemical reaction chambers with enhanced surface reaction rates; and sensor-actuator chamber devices with increased pressure for thermopneumatic actuation of integrated membranes. Examples that benefit from specific pore sized porous silicon are chemical/biological filters and thermally-activated flow devices with active or adjacent surfaces such as electrodes or heaters. 9 figs.

  10. Highly Specific Detection of Five Exotic Quarantine Plant Viruses using RT-PCR

    PubMed Central

    Choi, Hoseong; Cho, Won Kyong; Yu, Jisuk; Lee, Jong-Seung; Kim, Kook-Hyung

    2013-01-01

    To detect five plant viruses (Beet black scorch virus, Beet necrotic yellow vein virus, Eggplant mottled dwarf virus, Pelargonium zonate spot virus, and Rice yellow mottle virus) for quarantine purposes, we designed 15 RT-PCR primer sets. Primer design was based on the nucleotide sequence of the coat protein gene, which is highly conserved within species. All but one primer set successfully amplified the targets, and gradient PCRs indicated that the optimal temperature for the 14 useful primer sets was 51.9°C. Some primer sets worked well regardless of annealing temperature while others required a very specific annealing temperature. A primer specificity test using plant total RNAs and cDNAs of other plant virus-infected samples demonstrated that the designed primer sets were highly specific and generated reproducible results. The newly developed RT-PCR primer sets would be useful for quarantine inspections aimed at preventing the entry of exotic plant viruses into Korea. PMID:25288934

  11. 75 FR 33731 - Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    Federal Register correction notice (via the Government Publishing Office): Department of Commerce, National Oceanic and Atmospheric Administration, 50 CFR Part 635, RIN 0648-AY77, Atlantic Highly Migratory Species; 2010 Atlantic Bluefin Tuna Quota Specifications. Correction to rule document 2010-13207.

  12. Using the SCR Specification Technique in a High School Programming Course.

    ERIC Educational Resources Information Center

    Rosen, Edward; McKim, James C., Jr.

    1992-01-01

    Presents the underlying ideas of the Software Cost Reduction (SCR) approach to requirements specifications. Results of applying this approach to the teaching of programming to high school students indicate that students perform better in writing programs. An appendix provides two examples of how the method is applied to problem solving. (MDH)

  13. Effects of Collaborative Preteaching on Science Performance of High School Students with Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Thornton, Amanda; McKissick, Bethany R.; Spooner, Fred; Lo, Ya-yu; Anderson, Adrienne L.

    2015-01-01

    Investigating the effectiveness of inclusive practices in science instruction and determining how to best support high school students with specific learning disabilities (SLD) in the general education classroom is a topic of increasing research attention in the field. In this study, the researchers conducted a single-subject multiple probe across…

  14. Algorithm-based high-speed video analysis yields new insights into Strombolian eruptions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Taddeucci, Jacopo; Moroni, Monica; Scarlato, Piergiorgio

    2014-05-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Observation of the explosion products is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging allows a first characterization of several eruptive processes, but high-speed cameras, with frame rates of 500 Hz or more, make it possible to follow the particles over multiple frames and to reconstruct their trajectories. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only relatively few, selected particles (usually the fastest) can be taken into account. In addition, manual processing is quite inefficient for computing the total ejected mass, since it requires counting each individual particle. In this presentation, we discuss the advantages of using numerical methods for particle tracking and for describing the explosion. A toolbox called "Pyroclast Tracking Velocimetry" is used to compute the size and trajectory of each individual particle. A large variety of parameters can be derived and statistically compared: ejection velocity, ejection angle, deceleration, size, mass, etc. At the scale of the explosion, the total mass, the mean particle velocity, and the number and frequency of ejection pulses can be estimated. The study of high-speed videos from 2 vents of Yasur volcano (Vanuatu) and 4 of Stromboli volcano (Italy) reveals that these parameters are positively correlated. As a consequence, the intensity of an explosion can be quantitatively, and operator-independently, described by the total kinetic energy of the bombs, taking into account both the mass and the velocity of the particles. For each vent, a specific range of total kinetic energy can be defined, demonstrating the strong influence of the conduit in
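
    As a simple illustration of the final step (summing bomb kinetic energies from tracked sizes and velocities), a hedged sketch follows; the particle density, the spherical-particle assumption and the example numbers are illustrative, not values from the study.

```python
import numpy as np

def total_kinetic_energy(diameters_m, velocities_ms, density_kgm3=1000.0):
    """Sum of 1/2 m v^2 over all tracked pyroclasts (illustrative sketch).

    diameters_m, velocities_ms: per-particle estimates from the tracking step.
    density_kgm3: assumed bulk density of the pyroclasts (placeholder value).
    """
    diameters = np.asarray(diameters_m, dtype=float)
    velocities = np.asarray(velocities_ms, dtype=float)
    masses = density_kgm3 * (np.pi / 6.0) * diameters**3  # sphere volume * density
    return float(np.sum(0.5 * masses * velocities**2))

# Example with made-up numbers: three bombs of 5-20 cm moving at 10-40 m/s
print(total_kinetic_energy([0.05, 0.1, 0.2], [40.0, 25.0, 10.0]))
```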

  15. A simple greedy algorithm for reconstructing pedigrees.

    PubMed

    Cowell, Robert G

    2013-02-01

    This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study.
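
    For readers unfamiliar with the approach, a minimal greedy hill-climbing skeleton of the kind described above is sketched here; the neighbourhood moves and the pedigree likelihood are placeholders to be supplied by the user, not the paper's implementation.

```python
def greedy_search(initial_pedigree, neighbours, log_likelihood, max_iter=10000):
    """Generic greedy hill climbing over candidate pedigrees (illustrative sketch).

    initial_pedigree: any starting pedigree structure
    neighbours(p): yields pedigrees reachable from p by one local edit
                   (e.g., reassigning or swapping a parent)
    log_likelihood(p): score to maximize, e.g., computed from STR genotype data
    """
    current = initial_pedigree
    current_ll = log_likelihood(current)
    for _ in range(max_iter):
        best, best_ll = None, current_ll
        # Evaluate all single-edit neighbours and keep the best improvement
        for candidate in neighbours(current):
            ll = log_likelihood(candidate)
            if ll > best_ll:
                best, best_ll = candidate, ll
        if best is None:  # no neighbour improves the likelihood: local optimum
            break
        current, current_ll = best, best_ll
    return current, current_ll
```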

  16. AxonQuant: A Microfluidic Chamber Culture-Coupled Algorithm That Allows High-Throughput Quantification of Axonal Damage

    PubMed Central

    Li, Yang; Yang, Mengxue; Huang, Zhuo; Chen, Xiaoping; Maloney, Michael T.; Zhu, Li; Liu, Jianghong; Yang, Yanmin; Du, Sidan; Jiang, Xingyu; Wu, Jane Y.

    2014-01-01

    Published methods for imaging and quantitatively analyzing morphological changes in neuronal axons have serious limitations because of their small sample sizes, and their time-consuming and nonobjective nature. Here we present an improved microfluidic chamber design suitable for fast and high-throughput imaging of neuronal axons. We developed the Axon-Quant algorithm, which is suitable for automatic processing of axonal imaging data. This microfluidic chamber-coupled algorithm allows calculation of an ‘axonal continuity index’ that quantitatively measures axonal health status in a manner independent of neuronal or axonal density. This method allows quantitative analysis of axonal morphology in an automatic and nonbiased manner. Our method will facilitate large-scale high-throughput screening for genes or therapeutic compounds for neurodegenerative diseases involving axonal damage. When combined with imaging technologies utilizing different gene markers, this method will provide new insights into the mechanistic basis for axon degeneration. Our microfluidic chamber culture-coupled AxonQuant algorithm will be widely useful for studying axonal biology and neurodegenerative disorders. PMID:24603552

  17. Output-only modal dynamic identification of frames by a refined FDD algorithm at seismic input and high damping

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio

    2016-02-01

    The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully at given seismic input, taken as base excitation, including both strong-motion data and single and multiple input ground motions. Rather than investigating the role of seismic response signals in the Time Domain, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from values in the order of 1% to 10%. Both seismic excitation and high damping values, which are critical even in the case of well-spaced modes, do not fulfill traditional FDD assumptions; that the method still succeeds underscores the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames at seismic input is feasible, also at concomitantly high damping.
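
    For orientation, the classical FDD step that the refined algorithm builds on can be sketched as follows; this is a generic outline, not the paper's rFDD, and the channel layout and spectral parameters are assumptions.

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(responses, fs, nperseg=1024):
    """Basic Frequency Domain Decomposition step (illustrative, not the refined rFDD).

    responses: array (n_channels, n_samples) of output-only response signals
    fs: sampling frequency [Hz]
    Returns the frequency axis and the first singular value of the cross-spectral
    density matrix at each frequency; peaks of s1 indicate candidate modal
    frequencies, and the corresponding singular vectors approximate mode shapes.
    """
    n_ch = responses.shape[0]
    f, _ = csd(responses[0], responses[0], fs=fs, nperseg=nperseg)
    # Build the full cross-spectral density matrix G(f), shape (n_freq, n_ch, n_ch)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(responses[i], responses[j], fs=fs, nperseg=nperseg)
    # Singular value decomposition at every frequency line
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
    return f, s1
```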

  18. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false detection.

  19. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example fluid dynamics in microfluidic devices, bacteria taxis, and cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when they are close to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate which exceeds 94% and only 1% false detection. PMID:26329642
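
    For context, the textbook circle Hough transform from which the algorithm descends accumulates votes for candidate centres and radii over edge pixels; a minimal sketch follows (illustrative only, not the optimized implementation described above).

```python
import numpy as np

def circle_hough(edge_points, shape, radii):
    """Vote in a (y_c, x_c, r) accumulator for circles through the given edge points.

    edge_points: iterable of (y, x) pixel coordinates of detected edges
    shape: (height, width) of the image
    radii: iterable of candidate ring radii in pixels
    Returns the accumulator; local maxima correspond to candidate ring centres/radii.
    """
    h, w = shape
    radii = np.asarray(list(radii))
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        for k, r in enumerate(radii):
            # Each edge pixel votes for all centres at distance r from it
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (yc >= 0) & (yc < h) & (xc >= 0) & (xc < w)
            np.add.at(acc, (yc[ok], xc[ok], np.full(ok.sum(), k)), 1)
    return acc
```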

  20. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  1. The application of Quadtree algorithm to information integration for geological disposal of high-level radioactive waste

    NASA Astrophysics Data System (ADS)

    Gao, Min; Huang, Shutao; Zhong, Xia

    2009-09-01

    The establishment of a multi-source database was designed to promote the informatization of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on computer software and hardware. Based on an analysis of the data resources in the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses technical approaches for managing, fully sharing and rapidly retrieving the information resources in this area by using the open-source GDAL library and a Quadtree algorithm, with particular attention to the characteristics of the existing data resources, the theory of spatial data retrieval algorithms, and the programming design and implementation of these ideas.

  2. The application of Quadtree algorithm to information integration for geological disposal of high-level radioactive waste

    NASA Astrophysics Data System (ADS)

    Gao, Min; Huang, Shutao; Zhong, Xia

    2010-11-01

    The establishment of a multi-source database was designed to promote the informatization of the geological disposal of high-level radioactive waste; the integration of multi-dimensional, multi-source information and its application depend on computer software and hardware. Based on an analysis of the data resources in the Beishan area, Gansu Province, and combining GIS technologies and methods, this paper discusses technical approaches for managing, fully sharing and rapidly retrieving the information resources in this area by using the open-source GDAL library and a Quadtree algorithm, with particular attention to the characteristics of the existing data resources, the theory of spatial data retrieval algorithms, and the programming design and implementation of these ideas.
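
    As a point of reference for the data structure named above, a minimal point quadtree supporting rectangular-window retrieval might look like the following sketch; this is illustrative only, since the actual system is built on GDAL and its own spatial data model.

```python
class QuadTree:
    """Minimal point quadtree for 2D spatial retrieval (illustrative sketch)."""

    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None  # four sub-quadrants once the node is split

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(child.insert(x, y) for child in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        self.children = [QuadTree(x0, y0, xm, ym, self.capacity),
                         QuadTree(xm, y0, x1, ym, self.capacity),
                         QuadTree(x0, ym, xm, y1, self.capacity),
                         QuadTree(xm, ym, x1, y1, self.capacity)]
        for p in self.points:          # push existing points down to the children
            any(child.insert(*p) for child in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1, found=None):
        """Return all stored points inside the query rectangle."""
        if found is None:
            found = []
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return found               # query window does not overlap this node
        found.extend(p for p in self.points
                     if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1)
        if self.children is not None:
            for child in self.children:
                child.query(qx0, qy0, qx1, qy1, found)
        return found
```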

  3. Validation of a diagnostic algorithm for the discrimination of actinic keratosis from normal skin and squamous cell carcinoma by means of high-definition optical coherence tomography.

    PubMed

    Marneffe, Alice; Suppa, Mariano; Miyamoto, Makiko; Del Marmol, Véronique; Boone, Marc

    2016-09-01

    Actinic keratoses (AKs) commonly arise on sun-damaged skin. Visible lesions are often associated with subclinical lesions on surrounding skin, giving rise to field cancerization. To avoid multiple biopsies to diagnose subclinical/early invasive lesions, there is increasing interest in non-invasive diagnostic tools, such as high-definition optical coherence tomography (HD-OCT). We previously developed an HD-OCT-based diagnostic algorithm for the discrimination of AK from squamous cell carcinoma (SCC) and normal skin. The aim of this study was to test the applicability of HD-OCT for non-invasive discrimination of AK from SCC and normal skin using this algorithm. Three-dimensional (3D) HD-OCT images of histopathologically proven AKs and SCCs and images of normal skin were collected. All images were shown in a random sequence to three independent observers with different levels of experience with HD-OCT, blinded to the clinical and histopathological data. Observers classified each image as AK, SCC or normal skin based on the diagnostic algorithm. A total of 106 (38 AKs, 16 SCCs and 52 normal skin sites) HD-OCT images from 71 patients were included. Sensitivity and specificity for the most experienced observer were 81.6% and 92.6% for AK diagnosis and 93.8% and 98.9% for SCC diagnosis. A moderate interobserver agreement was demonstrated. HD-OCT represents a promising technology for the non-invasive diagnosis of AKs. Thanks to its high potential in discriminating SCC from AK, HD-OCT could be used as a relevant tool for second-level examination, increasing diagnostic confidence and sparing patients unnecessary excisions.

  4. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific

    PubMed Central

    Weiner, Kevin S.; Grill-Spector, Kalanit

    2015-01-01

    Prevailing hierarchical models propose that temporal processing capacity—the amount of information that a brain region processes in a unit time—decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. SIGNIFICANCE STATEMENT Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic

  5. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic

  6. Novel fluorescently labeled peptide compounds for detection of oxidized low-density lipoprotein at high specificity.

    PubMed

    Sato, Akira; Yamanaka, Hikaru; Oe, Keitaro; Yamazaki, Yoji; Ebina, Keiichi

    2014-10-01

    Probes for the specific detection of oxidized low-density lipoprotein (ox-LDL) in plasma and in atherosclerotic plaques are expected to be useful for the identification, diagnosis, prevention, and treatment of atherosclerosis. In this study, to develop a fluorescent peptide probe for specific detection of ox-LDL, we investigated the interaction of fluorescein isothiocyanate (FITC)-labeled peptides with ox-LDL using polyacrylamide gel electrophoresis. Two heptapeptides (KWYKDGD and KP6) coupled through the ε-amino group of the N-terminal K to FITC, in the presence or absence of a 6-amino-n-caproic acid (AC) linker ((FITC-AC)KP6 and (FITC)KP6), both bound with high specificity to ox-LDL in a dose-dependent manner. In contrast, a tetrapeptide (YKDG) labeled with FITC at the N-terminus and a pentapeptide (YKDGK) coupled through the ε-amino group of the C-terminal K to FITC did not bind selectively to ox-LDL. Furthermore, (FITC)KP6 and (FITC-AC)KP6 bound with high specificity to the protein in mouse plasma (probably the ox-LDL fraction). These findings strongly suggest that (FITC)KP6 and (FITC-AC)KP6 may be effective novel fluorescent probes for the specific detection of ox-LDL.

  7. Effective Inhibition of Bone Morphogenetic Protein Function by Highly Specific Llama-Derived Antibodies.

    PubMed

    Calpe, Silvia; Wagner, Koen; El Khattabi, Mohamed; Rutten, Lucy; Zimberlin, Cheryl; Dolk, Edward; Verrips, C Theo; Medema, Jan Paul; Spits, Hergen; Krishnadath, Kausilia K

    2015-11-01

    Bone morphogenetic proteins (BMP) have important but distinct roles in tissue homeostasis and disease, including carcinogenesis and tumor progression. A large number of BMP inhibitors are available to study BMP function; however, as most of these antagonists are promiscuous, evaluating specific effects of individual BMPs is not feasible. Because the oncogenic role of the different BMPs varies for each neoplasm, highly selective BMP inhibitors are required. Here, we describe the generation of three types of llama-derived heavy chain variable domains (VHH) that selectively bind to either BMP4, to BMP2 and 4, or to BMP2, 4, 5, and 6. These generated VHHs have high affinity to their targets and are able to inhibit BMP signaling. Epitope binning and docking modeling have shed light into the basis for their BMP specificity. As opposed to the wide structural reach of natural inhibitors, these small molecules target the grooves and pockets of BMPs involved in receptor binding. In organoid experiments, specific inhibition of BMP4 does not affect the activation of normal stem cells. Furthermore, in vitro inhibition of cancer-derived BMP4 noncanonical signals results in an increase of chemosensitivity in a colorectal cancer cell line. Therefore, because of their high specificity and low off-target effects, these VHHs could represent a therapeutic alternative for BMP4(+) malignancies.

  8. TALENs facilitate targeted genome editing in human cells with high specificity and low cytotoxicity.

    PubMed

    Mussolino, Claudio; Alzubi, Jamal; Fine, Eli J; Morbitzer, Robert; Cradick, Thomas J; Lahaye, Thomas; Bao, Gang; Cathomen, Toni

    2014-06-01

    Designer nucleases have been successfully employed to modify the genomes of various model organisms and human cell types. While the specificity of zinc-finger nucleases (ZFNs) and RNA-guided endonucleases has been assessed to some extent, little data are available for transcription activator-like effector-based nucleases (TALENs). Here, we have engineered TALEN pairs targeting three human loci (CCR5, AAVS1 and IL2RG) and performed a detailed analysis of their activity, toxicity and specificity. The TALENs showed comparable activity to benchmark ZFNs, with allelic gene disruption frequencies of 15-30% in human cells. Notably, TALEN expression was overall marked by a low cytotoxicity and the absence of cell cycle aberrations. Bioinformatics-based analysis of designer nuclease specificity confirmed partly substantial off-target activity of ZFNs targeting CCR5 and AAVS1 at six known and five novel sites, respectively. In contrast, only marginal off-target cleavage activity was detected at four out of 49 predicted off-target sites for CCR5- and AAVS1-specific TALENs. The rational design of a CCR5-specific TALEN pair decreased off-target activity at the closely related CCR2 locus considerably, consistent with fewer genomic rearrangements between the two loci. In conclusion, our results link nuclease-associated toxicity to off-target cleavage activity and corroborate TALENs as a highly specific platform for future clinical translation.

  9. Specific heat of pristine and brominated graphite fibers, composites and HOPG. [Highly Oriented Pyrolytic Graphite

    NASA Technical Reports Server (NTRS)

    Hung, Ching-Chen; Maciag, Carolyn

    1987-01-01

    Differential scanning calorimetry was used to obtain specific heat values of pristine and brominated P-100 graphite fibers and brominated P-100/epoxy composite, as well as pristine and brominated highly oriented pyrolytic graphite (HOPG) for comparison. Based on the experimental results obtained, specific heat values are calculated for several different temperatures, with a standard deviation estimated at 1.4 percent of the average values. The data presented here are useful in designing heat transfer devices (such as airplane de-icing heaters) made from brominated graphite fibers.

  10. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
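
    As a generic illustration of the grid-points-per-wavelength idea (not one of the paper's schemes), the sketch below applies a standard fourth-order central-difference stencil to a sine wave sampled at eight points per wavelength and reports the resulting derivative error.

```python
import numpy as np

def d1_fourth_order(u, dx):
    """Fourth-order central difference for du/dx on a periodic grid.

    Standard stencil (u[i-2] - 8 u[i-1] + 8 u[i+1] - u[i+2]) / (12 dx);
    this is a generic textbook scheme, not one of the paper's algorithms.
    """
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

# Accuracy check with eight grid points per wavelength
points_per_wave, n_waves = 8, 4
n = points_per_wave * n_waves
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(n_waves * x)
err = np.max(np.abs(d1_fourth_order(u, x[1] - x[0]) - n_waves * np.cos(n_waves * x)))
print(err)  # truncation error of the 4th-order stencil at this resolution
```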

  11. Application of wavelet neural network model based on genetic algorithm in the prediction of high-speed railway settlement

    NASA Astrophysics Data System (ADS)

    Tang, Shihua; Li, Feida; Liu, Yintao; Lan, Lan; Zhou, Conglin; Huang, Qing

    2015-12-01

    With the advantages of high speed, large transport capacity, low energy consumption, good economic benefits and so on, high-speed railways are becoming more and more popular all over the world. Operating speeds can reach 350 kilometers per hour, which requires high safety performance. Research on the prediction of high-speed railway settlement, one of the important factors affecting the safety of high-speed railways, therefore becomes particularly important. This paper uses genetic algorithms to search globally for the best solution and combines this with the strong learning ability and high accuracy of the wavelet neural network, building a genetic wavelet neural network model for the prediction of high-speed railway settlement. Experiments with a back-propagation neural network, a wavelet neural network and the genetic wavelet neural network show that the absolute values of the residual errors in the prediction of high-speed railway settlement based on the genetic algorithm are the smallest, which indicates that the genetic wavelet neural network is better than the other two methods. The correlation coefficient between predicted and observed values is 99.9%. Furthermore, the maximum absolute residual error, the minimum absolute residual error, the mean relative error and the root mean squared error (RMSE) obtained with the genetic wavelet neural network are all smaller than those of the other two methods. The genetic wavelet neural network is therefore both more stable and more accurate for the prediction of high-speed railway settlement.
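
    The genetic-algorithm component can be sketched generically as below; the real-coded encoding, the fitness function (for example, the prediction error of a wavelet network whose parameters are packed into the chromosome), and all hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_optimize(fitness, dim, pop_size=40, generations=200,
                     mutation_sigma=0.1, elite=4):
    """Plain real-coded genetic algorithm (illustrative, not the paper's model).

    fitness(w): lower is better, e.g., prediction MSE of a wavelet network
                whose weights/dilations/translations are packed into w.
    dim: length of the parameter vector.
    """
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)]            # sort best-first
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, dim) if dim > 1 else 0
            child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
            child += rng.normal(scale=mutation_sigma, size=dim)   # Gaussian mutation
            children.append(child)
        pop = np.vstack([pop[:elite], children])  # elitism keeps the best individuals
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]
```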

  12. Distinctive features of kinetics of plasma at high specific energy deposition

    NASA Astrophysics Data System (ADS)

    Lepikhin, Nikita; Popov, Nikolay; Starikovskaia, Svetlana

    2016-09-01

    A nanosecond capillary discharge in pure nitrogen at moderate pressures is used as an experimental tool for plasma kinetics studies under conditions of high specific deposited energy, up to 1 eV/molecule. Experimental observations based on electrical (back current shunts, capacitive probe) and spectroscopic measurements (quenching rates; translational, rotational and vibrational temperature measurements) demonstrate that high specific deposited energy, at electric fields of 200-300 Td, can significantly change gas kinetics in the discharge and in the afterglow. Numerical calculations in 1D axially symmetric geometry, using experimental data as input parameters, show that the changes in plasma kinetics are caused by an extremely high excitation degree: up to 10% of molecular nitrogen is electronically excited under the present conditions. Distinctive features of the kinetics of plasma at high specific energy deposition, as well as details of the experimental technique and numerical calculations, will be presented. The work was partially supported by the French National Agency, ANR (PLASMAFLAME Project, 2011 BS09 025 01), AOARD AFOSR grant FA2386-13-1-4064 (Program Officer Prof. Chiping Li), LabEx Plas@Par and the Linked International Laboratory LIA KaPPA (France-Russia).

  13. Direct glass bonded high specific power silicon solar cells for space applications

    NASA Technical Reports Server (NTRS)

    Dinetta, L. C.; Rand, J. A.; Cummings, J. R.; Lampo, S. M.; Shreve, K. P.; Barnett, Allen M.

    1991-01-01

    A lightweight, radiation hard, high performance, ultra-thin silicon solar cell is described that incorporates light trapping and a cover glass as an integral part of the device. The manufacturing feasibility of high specific power, radiation insensitive, thin silicon solar cells was demonstrated experimentally and with a model. Ultra-thin, light trapping structures were fabricated and the light trapping demonstrated experimentally. The design uses a micro-machined, grooved back surface to increase the optical path length by a factor of 20. This silicon solar cell will be highly tolerant to radiation because the base width is less than 25 microns making it insensitive to reduction in minority carrier lifetime. Since the silicon is bonded without silicone adhesives, this solar cell will also be insensitive to UV degradation. These solar cells are designed as a form, fit, and function replacement for existing state of the art silicon solar cells with the effect of simultaneously increasing specific power, power/area, and power supply life. Using a 3-mil thick cover glass and a 0.3 g/sq cm supporting Al honeycomb, a specific power for the solar cell plus cover glass and honeycomb of 80.2 W/Kg is projected. The development of this technology can result in a revolutionary improvement in high survivability silicon solar cell products for space with the potential to displace all existing solar cell technologies for single junction space applications.

  14. Evaluation of an algorithm for semiautomated segmentation of thin tissue layers in high-frequency ultrasound images.

    PubMed

    Qiu, Qiang; Dunmore-Buyze, Joy; Boughner, Derek R; Lacefield, James C

    2006-02-01

    An algorithm consisting of speckle reduction by median filtering, contrast enhancement using top- and bottom-hat morphological filters, and segmentation with a discrete dynamic contour (DDC) model was implemented for nondestructive measurements of soft tissue layer thickness. Algorithm performance was evaluated by segmenting simulated images of three-layer phantoms and high-frequency (40 MHz) ultrasound images of porcine aortic valve cusps in vitro. The simulations demonstrated the necessity of the median and morphological filtering steps and enabled testing of user-specified parameters of the morphological filters and DDC model. In the experiments, six cusps were imaged in coronary perfusion solution (CPS) then in distilled water to test the algorithm's sensitivity to changes in the dimensions of thin tissue layers. Significant increases in the thickness of the fibrosa, spongiosa, and ventricularis layers, by 53.5% (p < 0.001), 88.5% (p < 0.001), and 35.1% (p = 0.033), respectively, were observed when the specimens were submerged in water. The intraobserver coefficient of variation of repeated thickness estimates ranged from 0.044 for the fibrosa in water to 0.164 for the spongiosa in CPS. Segmentation accuracy and variability depended on the thickness and contrast of the layers, but the modest variability provides confidence in the thickness measurements.
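
    A hedged sketch of the two filtering stages named above (median speckle reduction, then top- and bottom-hat contrast enhancement), written with generic image-processing routines rather than the authors' code; the kernel sizes are illustrative, and the discrete dynamic contour step is only indicated as a comment.

```python
import numpy as np
from scipy import ndimage

def preprocess_ultrasound(image, median_size=5, struct_size=15):
    """Speckle reduction and morphological contrast enhancement (illustrative).

    image: 2D float array (a B-mode ultrasound frame)
    """
    # 1) Speckle reduction by median filtering
    smoothed = ndimage.median_filter(image, size=median_size)

    # 2) Contrast enhancement: add the top-hat (bright detail) and
    #    subtract the bottom-hat (dark detail)
    footprint = np.ones((struct_size, struct_size))
    top_hat = ndimage.white_tophat(smoothed, footprint=footprint)
    bottom_hat = ndimage.black_tophat(smoothed, footprint=footprint)
    enhanced = smoothed + top_hat - bottom_hat

    # 3) The enhanced image would then be segmented with a discrete dynamic
    #    contour (DDC) model initialized near each tissue-layer boundary.
    return enhanced
```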

  15. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition- and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Juana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2017-01-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex. We present a simple calculation algorithm to separate open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components. This helps to reveal underlying dynamics, to identify potential environmental drivers and, thus, to calculate reliable CH4 emission estimates. The flux separation is based on identification of ebullition-related sudden concentration changes during single measurements. Therefore, a variable ebullition filter is applied, using the lower and upper quartile and the interquartile range (IQR). Automation of data processing is achieved by using an established R script, adjusted for the purpose of CH4 flux calculation. The algorithm was validated by performing a laboratory experiment and tested using flux measurement data (July to September 2013) from a former fen grassland site, which converted into a shallow lake as a result of rewetting. Ebullition and diffusion contributed equally (46 and 55 %) to total CH4 emissions, which is comparable to ratios given in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period. The water temperature gradient was identified as one of the major drivers of diffusive CH4 emissions, whereas no significant driver was found in the case of erratic CH4 ebullition events.
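
    The quartile-based separation described above can be sketched as follows; the IQR multiplier and the handling of a single chamber closure are illustrative assumptions (the published workflow is an adjusted R script).

```python
import numpy as np

def separate_fluxes(conc, dt, iqr_factor=1.5):
    """Split a chamber CH4 concentration series into diffusion and ebullition parts.

    conc: concentrations sampled at a constant interval dt during one chamber closure
    Concentration increments far outside the interquartile range are attributed to
    ebullition events; the remaining quasi-linear increase is treated as diffusion.
    """
    d = np.diff(conc)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    ebullition_steps = (d > q3 + iqr_factor * iqr) | (d < q1 - iqr_factor * iqr)
    ebullition_jump = d[ebullition_steps].sum()              # bubble-related jumps
    diffusion_rate = np.median(d[~ebullition_steps]) / dt    # slope of smooth increase
    return diffusion_rate, ebullition_jump
```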

  16. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a large number of numerical drop profiles on the inclined surface with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After performing a tremendous amount of computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error of the inclined plane method to less than a certain value, even for different types of liquids.
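
    A minimal sketch of a least-squares conic fit and the tangent-based contact angle readout is given below; this is a generic algebraic fit for illustration, not necessarily the constrained ellipse-fitting variant used in the study, and the synthetic circular-cap test data are assumptions.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares algebraic fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0.

    Minimizes the algebraic residual subject to unit parameter norm (SVD solution);
    for well-sampled drop profiles the best-fitting conic is an ellipse.
    """
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]  # (a, b, c, d, e, f)

def contact_angle_deg(params, xc, yc):
    """Tangent angle of the fitted conic at the contact point (xc, yc), in degrees."""
    a, b, c, d, e, f = params
    # Implicit differentiation: dy/dx = -(2a x + b y + d) / (b x + 2c y + e)
    slope = -(2 * a * xc + b * yc + d) / (b * xc + 2 * c * yc + e)
    return np.degrees(np.arctan(slope)) % 180.0

# Synthetic check: a circular drop cap of radius 1 centred at (0, 0.5);
# the baseline y = 0 cuts it at x = +/- sqrt(3)/2, where the tangent makes ~60 deg
t = np.linspace(np.radians(-25.0), np.radians(205.0), 200)
params = fit_conic(np.cos(t), 0.5 + np.sin(t))
print(contact_angle_deg(params, np.sqrt(3.0) / 2.0, 0.0))  # approximately 60
```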

  17. Hydrazide functionalized core-shell magnetic nanocomposites for highly specific enrichment of N-glycopeptides.

    PubMed

    Liu, Liting; Yu, Meng; Zhang, Ying; Wang, Changchun; Lu, Haojie

    2014-05-28

    In view of the biological significance of glycosylation for human health, profiling of the glycoproteome from complex biological samples is of great interest for the discovery of disease biomarkers and for clinical diagnosis. Nevertheless, because glycopeptides exist at relatively low abundances compared with nonglycosylated peptides and exhibit glycan microheterogeneity, they need to be highly selectively enriched from complex biological samples for mass spectrometry analysis. Herein, a new type of hydrazide-functionalized core-shell magnetic nanocomposite has been synthesized for highly specific enrichment of N-glycopeptides. The nanocomposites, with a magnetic core and a polymer shell bearing a high density of hydrazide groups, were prepared by first functionalizing the magnetic core with poly(methacrylic acid) by reflux precipitation polymerization to obtain Fe3O4@poly(methacrylic acid) (Fe3O4@PMAA), and then modifying the surface of Fe3O4@PMAA with adipic acid dihydrazide (ADH) to obtain Fe3O4@poly(methacrylic hydrazide) (Fe3O4@PMAH). The abundant hydrazide groups allow highly specific enrichment of glycopeptides, and the magnetic core makes the material suitable for large-scale, high-throughput, and automated sample processing. In addition, the hydrophilic polymer surface provides low nonspecific adsorption of other peptides. Compared to a commercially available hydrazide resin, Fe3O4@PMAH improved the signal-to-noise ratio of standard glycopeptides by more than 5 times. Finally, this nanocomposite was applied in profiling the N-glycoproteome from colorectal cancer patient serum. In total, 175 unique glycopeptides and 181 glycosylation sites corresponding to 63 unique glycoproteins were identified in three repeated experiments, with specificities of the enriched glycopeptides and corresponding glycoproteins of 69.6% and 80.9%, respectively. Because of all these attractive features, we believe that this novel hydrazide functionalized

  18. Computationally designed high specificity inhibitors delineate the roles of BCL2 family proteins in cancer

    PubMed Central

    Berger, Stephanie; Procko, Erik; Margineantu, Daciana; Lee, Erinna F; Shen, Betty W; Zelter, Alex; Silva, Daniel-Adriano; Chawla, Kusum; Herold, Marco J; Garnier, Jean-Marc; Johnson, Richard; MacCoss, Michael J; Lessene, Guillaume; Davis, Trisha N; Stayton, Patrick S; Stoddard, Barry L; Fairlie, W Douglas; Hockenbery, David M; Baker, David

    2016-01-01

    Many cancers overexpress one or more of the six human pro-survival BCL2 family proteins to evade apoptosis. To determine which BCL2 protein or proteins block apoptosis in different cancers, we computationally designed three-helix bundle protein inhibitors specific for each BCL2 pro-survival protein. Following in vitro optimization, each inhibitor binds its target with high picomolar to low nanomolar affinity and at least 300-fold specificity. Expression of the designed inhibitors in human cancer cell lines revealed unique dependencies on BCL2 proteins for survival which could not be inferred from other BCL2 profiling methods. Our results show that designed inhibitors can be generated for each member of a closely-knit protein family to probe the importance of specific protein-protein interactions in complex biological processes. DOI: http://dx.doi.org/10.7554/eLife.20352.001 PMID:27805565

  19. Computationally designed high specificity inhibitors delineate the roles of BCL2 family proteins in cancer.

    PubMed

    Berger, Stephanie; Procko, Erik; Margineantu, Daciana; Lee, Erinna F; Shen, Betty W; Zelter, Alex; Silva, Daniel-Adriano; Chawla, Kusum; Herold, Marco J; Garnier, Jean-Marc; Johnson, Richard; MacCoss, Michael J; Lessene, Guillaume; Davis, Trisha N; Stayton, Patrick S; Stoddard, Barry L; Fairlie, W Douglas; Hockenbery, David M; Baker, David

    2016-11-02

    Many cancers overexpress one or more of the six human pro-survival BCL2 family proteins to evade apoptosis. To determine which BCL2 protein or proteins block apoptosis in different cancers, we computationally designed three-helix bundle protein inhibitors specific for each BCL2 pro-survival protein. Following in vitro optimization, each inhibitor binds its target with high picomolar to low nanomolar affinity and at least 300-fold specificity. Expression of the designed inhibitors in human cancer cell lines revealed unique dependencies on BCL2 proteins for survival which could not be inferred from other BCL2 profiling methods. Our results show that designed inhibitors can be generated for each member of a closely-knit protein family to probe the importance of specific protein-protein interactions in complex biological processes.

  20. Scattering rates and specific heat jumps in high-Tc cuprates

    NASA Astrophysics Data System (ADS)

    Storey, James

    Inspired by recent ARPES and tunneling studies on high-Tc cuprates, we examine the effect of a pair-breaking term in the self-energy on the shape of the electronic specific heat jump. It is found that the observed specific heat jump can be described in terms of a superconducting gap that persists above the observed Tc, in the presence of a strongly temperature-dependent pair-breaking scattering rate. An increase in the scattering rate is found to explain the non-BCS-like suppression of the specific heat jump with magnetic field. A discussion of these results in the context of other properties, such as the superfluid density and Raman spectra, will also be presented. Supported by the Marsden Fund Council from Government funding, administered by the Royal Society of New Zealand.

  1. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity

    PubMed Central

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination. PMID:27110562

  2. A Rapid In-Clinic Test Detects Acute Leptospirosis in Dogs with High Sensitivity and Specificity.

    PubMed

    Kodjo, Angeli; Calleja, Christophe; Loenser, Michael; Lin, Dan; Lizer, Joshua

    2016-01-01

    A rapid IgM-detection immunochromatographic test (WITNESS® Lepto, Zoetis) has recently become available to identify acute canine leptospirosis at the point of care. Diagnostic sensitivity and specificity of the test were evaluated by comparison with the microscopic agglutination assay (MAT), using a positive cut-off titer of ≥800. Banked serum samples from dogs exhibiting clinical signs and suspected leptospirosis were selected to form three groups based on MAT titer: (1) positive (n = 50); (2) borderline (n = 35); and (3) negative (n = 50). Using an analysis to weight group sizes to reflect French prevalence, the sensitivity and specificity were 98% and 93.5% (88.2% unweighted), respectively. This test rapidly identifies cases of acute canine leptospirosis with high levels of sensitivity and specificity with no interference from previous vaccination.

  3. Synthesis of high specific activity (1-{sup 3}H) farnesyl pyrophosphate

    SciTech Connect

    Saljoughian, M.; Morimoto, H.; Williams, P.G.

    1991-08-01

    The synthesis of tritiated farnesyl pyrophosphate with high specific activity is reported. trans-trans Farnesol was oxidized to the corresponding aldehyde followed by reduction with lithium aluminium tritide (5%-{sup 3}H) to give trans-trans (1-{sup 3}H)farnesol. The specific radioactivity of the alcohol was determined from its triphenylsilane derivative, prepared under very mild conditions. The tritiated alcohol was phosphorylated by initial conversion to an allylic halide, and subsequent treatment of the halide with tris(tetra-n-butylammonium) hydrogen pyrophosphate. The hydride procedure followed in this work has advantages over existing methods for the synthesis of tritiated farnesyl pyrophosphate, offering the possibility of higher specific activity and a much higher yield. 10 refs., 3 figs.

  4. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive
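    For orientation, a minimal statement of the simplest of the kinetic models named above (the BGK relaxation model) and of the discrete velocity ordinate idea is sketched below; the notation is generic and is not taken from the article itself.

```latex
% BGK relaxation model: the distribution function f(x, xi, t) relaxes toward a
% local Maxwellian f_eq at collision frequency nu (R is the specific gas constant).
\[
  \frac{\partial f}{\partial t} + \boldsymbol{\xi}\cdot\nabla_{\boldsymbol{x}} f
  = \nu\,\bigl(f^{\mathrm{eq}} - f\bigr),
  \qquad
  f^{\mathrm{eq}} = \frac{n}{(2\pi R T)^{3/2}}
    \exp\!\Bigl(-\frac{|\boldsymbol{\xi}-\boldsymbol{u}|^{2}}{2RT}\Bigr).
\]
% Discrete velocity ordinate method: the continuous velocity xi is replaced by
% quadrature nodes xi_sigma with weights w_sigma, giving a coupled set of transport
% equations advanced by finite-difference schemes in physical space, while the
% macroscopic moments are recovered by quadrature.
\[
  \frac{\partial f_{\sigma}}{\partial t}
  + \boldsymbol{\xi}_{\sigma}\cdot\nabla_{\boldsymbol{x}} f_{\sigma}
  = \nu\,\bigl(f^{\mathrm{eq}}_{\sigma} - f_{\sigma}\bigr),
  \qquad
  (n,\; n\boldsymbol{u},\; E) = \sum_{\sigma} w_{\sigma}\,
    \bigl(1,\ \boldsymbol{\xi}_{\sigma},\ \tfrac{1}{2}|\boldsymbol{\xi}_{\sigma}|^{2}\bigr)\, f_{\sigma}.
\]
```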

  5. A Highly Specific Monoclonal Antibody for Botulinum Neurotoxin Type A-Cleaved SNAP25

    PubMed Central

    Rhéaume, Catherine; Cai, Brian B.; Wang, Joanne; Fernández-Salas, Ester; Aoki, K. Roger; Francis, Joseph; Broide, Ron S.

    2015-01-01

    Botulinum neurotoxin type-A (BoNT/A), as onabotulinumtoxinA, is approved globally for 11 major therapeutic and cosmetic indications. While the mechanism of action for BoNT/A at the presynaptic nerve terminal has been established, questions remain regarding intracellular trafficking patterns and overall fate of the toxin. Resolving these questions partly depends on the ability to detect BoNT/A’s location, distribution, and movement within a cell. Due to BoNT/A’s high potency and extremely low concentrations within neurons, an alternative approach has been employed. This involves utilizing specific antibodies against the BoNT/A-cleaved SNAP25 substrate (SNAP25197) to track the enzymatic activity of toxin within cells. Using our highly specific mouse monoclonal antibody (mAb) against SNAP25197, we generated human and murine recombinant versions (rMAb) using specific backbone immunoglobulins. In this study, we validated the specificity of our anti-SNAP25197 rMAbs in several different assays and performed side-by-side comparisons to commercially-available and in-house antibodies against SNAP25. Our rMAbs were highly specific for SNAP25197 in all assays and on several different BoNT/A-treated tissues, showing no cross-reactivity with full-length SNAP25. This was not the case with other reportedly SNAP25197-selective antibodies, which were selective in some, but not all assays. The rMAbs described herein represent effective new tools for detecting BoNT/A activity within cells. PMID:26114335

  6. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of their clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  7. Development of a phantom to validate high-dose-rate brachytherapy treatment planning systems with heterogeneous algorithms

    SciTech Connect

    Moura, Eduardo S.; Rostelato, Maria Elisa C. M.; Zeituni, Carlos A.

    2015-04-15

    Purpose: This work presents the development of a phantom to verify the treatment planning system (TPS) algorithms used for high-dose-rate (HDR) brachytherapy. It is designed to measure the relative dose in heterogeneous media. The experimental details used, simulation methods, and comparisons with a commercial TPS are also provided. Methods: To simulate heterogeneous conditions, four materials were used: Virtual Water™ (VM), BR50/50™, cork, and aluminum. The materials were arranged in 11 heterogeneity configurations. Three dosimeters were used to measure the relative response from an HDR {sup 192}Ir source: TLD-100™, Gafchromic® EBT3 film, and an Exradin™ A1SL ionization chamber. To compare the results from the experimental measurements, the various configurations were modeled in the PENELOPE/penEasy Monte Carlo code. Images of each setup geometry were acquired from a CT scanner and imported into BrachyVision™ TPS software, which includes a grid-based Boltzmann solver Acuros™. The results of the measurements performed in the heterogeneous setups were normalized to the dose values measured in the homogeneous Virtual Water™ setup and the respective differences due to the heterogeneities were considered. Additionally, dose values calculated based on the American Association of Physicists in Medicine-Task Group 43 formalism were compared to dose values calculated with the Acuros™ algorithm in the phantom. Calculated doses were compared at the same points where measurements had been performed. Results: Differences in the relative response as high as 11.5% were found from the homogeneous setup when the heterogeneous materials were inserted into the experimental phantom. The aluminum and cork materials produced larger differences than the plastic materials, with the BR50/50™ material producing results similar to the Virtual Water™ results. Our experimental methods agree with the PENELOPE/penEasy simulations for most setups and dosimeters.
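    For context, the AAPM Task Group 43 formalism mentioned above computes the dose rate around a sealed source as a product of tabulated factors and assumes an all-water geometry; a standard statement of its general two-dimensional form (not specific to this study) is given below. Because TG-43 ignores heterogeneities, deviations of the kind measured here for cork and aluminum inserts are expected when it is compared with a model-based solver such as Acuros.

```latex
% AAPM TG-43 general 2D dose-rate equation for a line source:
%   S_K  - air-kerma strength          Lambda - dose-rate constant
%   G_L  - geometry function           g_L    - radial dose function
%   F    - 2D anisotropy function      (r0, theta0) = (1 cm, 90 deg) reference point
\[
  \dot{D}(r,\theta) \;=\; S_{K}\,\Lambda\,
  \frac{G_{L}(r,\theta)}{G_{L}(r_{0},\theta_{0})}\;
  g_{L}(r)\,F(r,\theta).
\]
```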

  8. Highly sensitive and specific colorimetric detection of cancer cells via dual-aptamer target binding strategy.

    PubMed

    Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang

    2015-11-15

    Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology with a dual-aptamer target-binding strategy, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, comprising the high catalytic activity of PtAuNPs and magnetic separation and concentration, plays a vital role in signal amplification and in improving detection sensitivity. The color change caused by a small number of target cancer cells (10 cells/mL) can be clearly distinguished by the naked eye. The dual-aptamer target-binding strategy guarantees detection specificity: large numbers of non-cancer cells and different cancer cells (10(4) cells/mL) cause no obvious color change. A detection limit as low as 10 cells/mL, with a linear detection range from 10 to 10(5) cells/mL, was reached in experiments in phosphate buffer solution as well as in a serum sample. The developed enzyme-free and cost-effective colorimetric assay is simple, requires no instrumentation, and still provides excellent sensitivity, specificity and repeatability, giving it potential application in point-of-care cancer diagnosis.

  9. High Transferability of Homoeolog-Specific Markers between Bread Wheat and Newly Synthesized Hexaploid Wheat Lines

    PubMed Central

    Zeng, Deying; Luo, Jiangtao; Li, Zenglin; Chen, Gang; Zhang, Lianquan; Ning, Shunzong; Yuan, Zhongwei; Zheng, Youliang; Hao, Ming; Liu, Dengcai

    2016-01-01

    Bread wheat (Triticum aestivum, 2n = 6x = 42, AABBDD) has a complex allohexaploid genome, which makes it difficult to differentiate between the homoeologous sequences and assign them to the chromosome A, B, or D subgenomes. The chromosome-based draft genome sequence of the ‘Chinese Spring’ common wheat cultivar enables the large-scale development of polymerase chain reaction (PCR)-based markers specific for homoeologs. Based on high-confidence ‘Chinese Spring’ genes with known functions, we developed 183 putative homoeolog-specific markers for chromosomes 4B and 7B. These markers were used in PCR assays for the 4B and 7B nullisomes and their euploid synthetic hexaploid wheat (SHW) line that was newly generated from a hybridization between Triticum turgidum (AABB) and the wild diploid species Aegilops tauschii (DD). Up to 64% of the markers for chromosomes 4B or 7B in the SHW background were confirmed to be homoeolog-specific. Thus, these markers were highly transferable between the ‘Chinese Spring’ bread wheat and SHW lines. Homoeolog-specific markers designed using genes with known functions may be useful for genetic investigations involving homoeologous chromosome tracking and homoeolog expression and interaction analyses. PMID:27611704

  10. A novel and highly specific phage endolysin cell wall binding domain for detection of Bacillus cereus.

    PubMed

    Kong, Minsuk; Sim, Jieun; Kang, Taejoon; Nguyen, Hoang Hiep; Park, Hyun Kyu; Chung, Bong Hyun; Ryu, Sangryeol

    2015-09-01

    Rapid, specific and sensitive detection of pathogenic bacteria is crucial for public health and safety. Bacillus cereus is harmful as it causes foodborne illness and a number of systemic and local infections. We report a novel phage endolysin cell wall-binding domain (CBD) for B. cereus and the development of a highly specific and sensitive surface plasmon resonance (SPR)-based B. cereus detection method using the CBD. The newly discovered CBD from endolysin of PBC1, a B. cereus-specific bacteriophage, provides high specificity and binding capacity to B. cereus. By using the CBD-modified SPR chips, B. cereus can be detected at the range of 10(5)-10(8) CFU/ml. More importantly, the detection limit can be improved to 10(2) CFU/ml by using a subtractive inhibition assay based on the pre-incubation of B. cereus and CBDs, removal of CBD-bound B. cereus, and SPR detection of the unbound CBDs. The present study suggests that the small and genetically engineered CBDs can be promising biological probes for B. cereus. We anticipate that the CBD-based SPR-sensing methods will be useful for the sensitive, selective, and rapid detection of B. cereus.

  11. Development of pressure cell for specific heat measurement at low temperature and high Magnetic field

    NASA Astrophysics Data System (ADS)

    Kawae, T.; Yaita, K.; Yoshida, Y.; Inagaki, Y.; Ohashi, M.; Oomi, G.; Matsubayashi, K.; Matsumoto, T.; Uwatoko, Y.

    2009-02-01

    We report the performance of an Ag-Pd-Cu alloy as the material for a pressure cell used in specific heat measurements at low temperatures and high magnetic fields. The alloy is advantageous because it reduces the background from the nuclear specific heat of the pressure cell, which grows at low temperatures and high magnetic fields. We prepared a 70-20-10 alloy composed of 70 mass % Ag, 20 mass % Pd, and 10 mass % Cu. A maximum hardness above 100 HRB (Rockwell B scale) is achieved by heat treatment. Magnetization and susceptibility results show that the alloy contains a small amount of magnetic ions, at a concentration lower than that in Be-Cu alloy. We confirm that the specific heat of a piston-cylinder cell made of the 70-20-10 alloy increases smoothly from 0.2 to 9 K and that its nuclear specific heat decreases drastically in magnetic field compared to that expected for Be-Cu alloy. The pressure in the cell at low temperature increases almost linearly with the load applied at room temperature, up to P = 0.4 GPa, which is nearly the limit of the inner piston made of the 70-20-10 alloy.

  12. Development of a high specific 1.5 to 5 kW thermal arcjet

    NASA Technical Reports Server (NTRS)

    Riehle, M.; Glocker, B.; Auweter-Kurtz, M.; Kurtz, H.

    1993-01-01

    A research and development project on the experimental study of a 1.5-5 kW thermal arcjet thruster was started in 1992 at the IRS. Two radiation cooled thrusters were designed, constructed, and adapted to the test facilities, one at each end of the intended power range. These thrusters are currently undergoing an intensive test program, with the main emphasis on exploring thruster performance and behavior at high specific enthalpy and thus high specific impulse. Using simulated hydrazine and ammonia as propellants, the electrode configuration (constrictor diameter and cathode gap) was varied in order to investigate its influence and to optimize these parameters. In addition, test runs with pure hydrogen were performed for both thrusters.

  13. Human PHOSPHO1 exhibits high specific phosphoethanolamine and phosphocholine phosphatase activities

    PubMed Central

    2004-01-01

    Human PHOSPHO1 is a phosphatase enzyme for which expression is upregulated in mineralizing cells. This enzyme has been implicated in the generation of Pi for matrix mineralization, a process central to skeletal development. PHOSPHO1 is a member of the haloacid dehalogenase (HAD) superfamily of Mg2+-dependent hydrolases. However, substrates for PHOSPHO1 are, as yet, unidentified and little is known about its activity. We show here that PHOSPHO1 exhibits high specific activities toward phosphoethanolamine (PEA) and phosphocholine (PCho). Optimal enzymic activity was observed at approx. pH 6.7. The enzyme shows a high specific Mg2+-dependence, with apparent Km values of 3.0 μM for PEA and 11.4 μM for PCho. These results provide a novel mechanism for the generation of Pi in mineralizing cells from PEA and PCho. PMID:15175005

  14. Selective culling of high avidity antigen-specific CD4+ T cells after virulent Salmonella infection.

    PubMed

    Ertelt, James M; Johanns, Tanner M; Mysz, Margaret A; Nanton, Minelva R; Rowe, Jared H; Aguilera, Marijo N; Way, Sing Sing

    2011-12-01

    Typhoid fever is a persistent infection caused by host-adapted Salmonella strains adept at circumventing immune-mediated host defences. Given the importance of T cells in protection, the culling of activated CD4+ T cells after primary infection has been proposed as a potential immune evasion strategy used by this pathogen. We demonstrate that the purging of activated antigen-specific CD4+ T cells after virulent Salmonella infection requires SPI-2 encoded virulence determinants, and is not restricted only to cells with specificity to Salmonella-expressed antigens, but extends to CD4+ T cells primed to expand by co-infection with recombinant Listeria monocytogenes. Unexpectedly, however, the loss of activated CD4+ T cells during Salmonella infection demonstrated using a monoclonal population of adoptively transferred CD4+ T cells was not reproduced among the endogenous repertoire of antigen-specific CD4+ T cells identified with MHC class II tetramer. Analysis of T-cell receptor variable segment usage revealed the selective loss and reciprocal enrichment of defined CD4+ T-cell subsets after Salmonella co-infection that is associated with the purging of antigen-specific cells with the highest intensity of tetramer staining. Hence, virulent Salmonella triggers the selective culling of high avidity activated CD4+ T-cell subsets, which re-shapes the repertoire of antigen-specific T cells that persist later after infection.

  15. Specific binding of eukaryotic ORC to DNA replication origins depends on highly conserved basic residues.

    PubMed

    Kawakami, Hironori; Ohashi, Eiji; Kanamoto, Shota; Tsurimoto, Toshiki; Katayama, Tsutomu

    2015-10-12

    In eukaryotes, the origin recognition complex (ORC) heterohexamer preferentially binds replication origins to trigger initiation of DNA replication. Crystallographic studies using eubacterial and archaeal ORC orthologs suggested that eukaryotic ORC may bind to origin DNA via putative winged-helix DNA-binding domains and AAA+ ATPase domains. However, the mechanisms by which eukaryotic ORC recognizes origin DNA remain elusive. Here, we show in budding yeast that Lys-362 and Arg-367 residues of the largest subunit (Orc1), both outside the aforementioned domains, are crucial for specific binding of ORC to origin DNA. These basic residues, which reside in a putative disordered domain, were dispensable for interaction with ATP and non-specific DNA sequences, suggesting a specific role in recognition. Consistent with this, both residues were required for origin binding of Orc1 in vivo. A truncated Orc1 polypeptide containing these residues recognizes the ARS sequence on its own, albeit with low affinity, and the Arg-367 residue stimulates the sequence-specific binding mode of the polypeptide. Lys-362 and Arg-367 residues of Orc1 are highly conserved among eukaryotic ORCs, but not in eubacterial and archaeal orthologs, suggesting a eukaryote-specific mechanism underlying recognition of replication origins by ORC.

  16. Record-high specific conductance and temperature in San Francisco Bay during water year 2014

    USGS Publications Warehouse

    Downing-Kunz, Maureen; Work, Paul; Shellenbarger, Gregory

    2015-11-18

    In water year (WY) 2014 (October 1, 2013, through September 30, 2014), our network measured record-high values of specific conductance and water temperature at several stations during a period of very little freshwater inflow from the Sacramento–San Joaquin Delta and other tributaries because of severe drought conditions in California. This report summarizes our observations for WY2014 and compares them to previous years that had different levels of freshwater inflow.

  17. High specificity of a novel Zika virus ELISA in European patients after exposure to different flaviviruses.

    PubMed

    Huzly, Daniela; Hanselmann, Ingeborg; Schmidt-Chanasit, Jonas; Panning, Marcus

    2016-04-21

    The current Zika virus (ZIKV) epidemic in the Americas caused an increase in diagnostic requests in European countries. Here we demonstrate high specificity of the Euroimmun anti-ZIKV IgG and IgM ELISA tests using putative cross-reacting sera of European patients with antibodies against tick-borne encephalitis virus, dengue virus, yellow fever virus and hepatitis C virus. This test may aid in counselling European travellers returning from regions where ZIKV is endemic.

  18. Radioiodination of interleukin 2 to high specific activities by the vapor-phase chloramine T method

    SciTech Connect

    Siekierka, J.J.; DeGudicibus, S.

    1988-08-01

    Recombinant human interleukin 2 (IL-2) was radioiodinated utilizing the vapor phase chloramine T method of iodination. The method is rapid, reproducible, and allows the efficient radioiodination of IL-2 to specific activities higher than those previously attained with full retention of biological activity. IL-2 radioiodinated by this method binds with high affinity to receptors present on phytohemagglutinin-stimulated peripheral blood lymphocytes and should be useful for the study of receptor structure and function.

  19. The use of Element-Specific Detectors Coupled with High-Performance Liquid Chromatographs.

    DTIC Science & Technology

    1981-11-04

    In one approach to determining the amount of various chelating agents in solution, Jones and Manahan reacted the indicator metal copper(II) with the chelating agents; in the latter two cases, GFAA was employed as the element-specific detector, with a high-performance liquid chromatography column coupled directly to it (J. Chromatogr. Sci., 17: 395 (1979); D. R. Jones, IV and S. E. Manahan, Anal. Chem., 48: 1897 (1976)).

  20. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    NASA Astrophysics Data System (ADS)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N⁴), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
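    The scaling reduction rests on the standard Laplace-transform factorization of the MP2 energy denominator; a hedged sketch of that identity is given below with generic notation (quadrature weights w_q and points tau_q are assumptions, and the plane-wave specifics of the implementation are not reproduced).

```latex
% With occupied orbital energies eps_i, eps_j and virtual energies eps_a, eps_b,
% the denominator eps_a + eps_b - eps_i - eps_j is positive, so
\[
  \frac{1}{\varepsilon_{a}+\varepsilon_{b}-\varepsilon_{i}-\varepsilon_{j}}
  = \int_{0}^{\infty}
      e^{-(\varepsilon_{a}+\varepsilon_{b}-\varepsilon_{i}-\varepsilon_{j})\,\tau}\,
      \mathrm{d}\tau
  \;\approx\; \sum_{q=1}^{N_{\tau}} w_{q}\,
      e^{-\varepsilon_{a}\tau_{q}}\, e^{-\varepsilon_{b}\tau_{q}}\,
      e^{+\varepsilon_{i}\tau_{q}}\, e^{+\varepsilon_{j}\tau_{q}}.
\]
```

    Because the quadrature turns the denominator into a product of per-orbital exponentials, the sums over virtual orbitals can be performed independently (in a plane-wave basis, via fast Fourier transforms), which is what removes the explicit double sum over virtual states and lowers the formal scaling.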

  1. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector-based techniques (Potter, Laub-Schur) are discussed, and numerical evidence of the efficacy of our ideas is presented.
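    As a rough illustration of one ingredient of such hybrid schemes, the sketch below runs a plain Newton-Kleinman iteration, computing the LQR gain from repeated Lyapunov solves; it omits the Chandrasekhar-type system and the variable-acceleration-parameter Smith schemes described in the abstract, and the small system matrices are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative plant (already Hurwitz, so K0 = 0 is a stabilizing initial gain).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                       # stabilizing initial gain K0
for _ in range(25):
    Acl = A - B @ K                        # closed-loop matrix for the current gain
    # Lyapunov step of Newton-Kleinman:  Acl^T P + P Acl = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    K = np.linalg.solve(R, B.T @ P)        # gain update  K <- R^{-1} B^T P

P_are = solve_continuous_are(A, B, Q, R)   # direct Riccati solution for comparison
print("max |P - P_are| =", np.abs(P - P_are).max())
```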

  2. A high-speed, high-efficiency phase controller for coherent beam combining based on SPGD algorithm

    SciTech Connect

    Huang, Zh M; Liu, C L; Li, J F; Zhang, D Y

    2014-04-28

    A phase controller for coherent beam combining (CBC) of fibre lasers has been designed and manufactured based on a stochastic parallel gradient descent (SPGD) algorithm and a field programmable gate array (FPGA). The theoretical analysis shows that the iteration rate is higher than 1.9 MHz, and the average compensation bandwidth of CBC for 5 or 20 channels is 50 kHz or 12.5 kHz, respectively. The tests show that the phase controller ensures reliable phase locking of the lasers: when the phases of five lasers are locked by the improved control strategy with a variable gain, the energy encircled on the target is 23 times that of a single output, the phase control accuracy is better than λ/20, and the combining efficiency is 92%.
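    A minimal software sketch of the SPGD update loop is given below (piston-only phase model; the gain, perturbation size and iteration count are illustrative assumptions and do not describe the FPGA controller itself).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                       # number of combined channels
phase_err = rng.uniform(-np.pi, np.pi, N)   # unknown piston phase errors
u = np.zeros(N)                             # controller phase commands

def metric(u):
    """Power-in-the-bucket metric: coherent sum of N unit-amplitude beams, max 1."""
    return np.abs(np.sum(np.exp(1j * (phase_err + u)))) ** 2 / N ** 2

gain, delta = 1.0, 0.2
for _ in range(2000):
    d = delta * rng.choice([-1.0, 1.0], N)  # parallel random +/- perturbation
    dJ = metric(u + d) - metric(u - d)      # two-sided metric difference
    u += gain * dJ * d                      # SPGD update step

print("combining metric after locking:", round(metric(u), 3))  # approaches 1 when locked
```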

  3. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.
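    As a reference point for the demodulation cases analysed above, the sketch below shows the ideal sinusoidal-stimulation, sinusoidal-demodulation case for a single-pole tissue model; the component values, stimulation amplitude and noise-free setting are assumptions, and the paper's square-wave error-correction algorithm itself is not reproduced.

```python
import numpy as np

fs, f0 = 1_000_000, 10_000                 # sample rate and stimulus frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)             # exactly 100 stimulus periods
w = 2 * np.pi * f0

Rs, Rp, Cp = 100.0, 500.0, 10e-9           # single-pole model: Rs in series with (Rp || Cp)
Z = Rs + Rp / (1 + 1j * w * Rp * Cp)       # true complex impedance at f0

I0 = 1e-3                                  # stimulation current amplitude (A)
v = I0 * np.abs(Z) * np.sin(w * t + np.angle(Z))   # measured voltage across the tissue

# Quadrature (lock-in) demodulation over an integer number of periods
re = 2 * np.mean(v * np.sin(w * t)) / I0   # recovers Re(Z)
im = 2 * np.mean(v * np.cos(w * t)) / I0   # recovers Im(Z)
print(f"estimated Z = {re:.1f} {im:+.1f}j ohm (true {Z.real:.1f} {Z.imag:+.1f}j)")
```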

  4. Site specific chemoselective labelling of proteins with robust and highly sensitive Ru(II) bathophenanthroline complexes.

    PubMed

    Uzagare, Matthew C; Claussnitzer, Iris; Gerrits, Michael; Bannwarth, Willi

    2012-03-21

    The bioorthogonal and chemoselective fluorescence labelling of several cell-free synthesized proteins containing a site-specifically incorporated azido amino acid was possible using different alkyne-functionalized Ru(II) bathophenanthroline complexes. We were able to achieve a selective labelling even in complex mixtures of proteins despite the fact that ruthenium dyes normally show a high tendency for unspecific interactions with proteins and are commonly used for total staining of proteins. Since the employed Ru complexes are extremely robust, photo-stable and highly sensitive, the approach should be applicable to the production of labelled proteins for single molecule spectroscopy and fluorescence-based interaction studies.

  5. Specification of optical components for a high average-power laser environment

    SciTech Connect

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  6. Theory of specific heat of vortex liquid of high T c superconductors

    NASA Astrophysics Data System (ADS)

    Bai, Chen; Chi, Cheng; Wang, Jiangfan

    2016-10-01

    Superconducting thermal fluctuation (STF) plays an important role in both the thermodynamic and transport properties of the vortex liquid phase of high-Tc superconductors and is widely observed in the vicinity of the critical transition temperature. In the framework of Ginzburg-Landau-Lawrence-Doniach theory in a magnetic field, a self-consistent analysis of STF including all Landau levels is given. In addition, we calculate the contribution of STF to the specific heat in the vortex liquid phase of high-Tc cuprate superconductors, and the fitting results are in good agreement with experimental data. Project supported by the National Natural Science Foundation of China (Grant No. 11274018).

  7. Fast and highly specific DNA-based multiplex detection on a solid support.

    PubMed

    Barišić, Ivan; Kamleithner, Verena; Schönthaler, Silvia; Wiesinger-Mayr, Herbert

    2015-01-01

    Highly specific and fast multiplex detection methods are essential to conduct reasonable DNA-based diagnostics and are especially important to characterise infectious diseases. More than 1000 genetic targets such as antibiotic resistance genes, virulence factors and phylogenetic markers have to be identified as fast as possible to facilitate the correct treatment of a patient. In the present work, we developed a novel ligation-based DNA probe concept that was combined with the microarray technology and used it for the detection of bacterial pathogens. The novel linear chain (LNC) probes identified all tested species correctly within 1 h based on their 16S rRNA gene in a 25-multiplex reaction. Genomic DNA was used directly as template in the ligation reaction identifying as little as 10(7) cells without any pre-amplification. The high specificity was further demonstrated characterising a single nucleotide polymorphism leading to no false positive fluorescence signals of the untargeted single nucleotide polymorphism (SNP) variants. In comparison to conventional microarray probes, the sensitivity of the novel LNC3 probes was higher by a factor of 10 or more. In summary, we present a fast, simple, highly specific and sensitive multiplex detection method adaptable for a wide range of applications.

  8. Molecular inversion probe: a new tool for highly specific detection of plant pathogens.

    PubMed

    Lau, Han Yih; Palanisamy, Ramkumar; Trau, Matt; Botella, Jose R

    2014-01-01

    Highly specific detection methods, capable of reliably identifying plant pathogens are crucial in plant disease management strategies to reduce losses in agriculture by preventing the spread of diseases. We describe a novel molecular inversion probe (MIP) assay that can be potentially developed into a robust multiplex platform to detect and identify plant pathogens. A MIP has been designed for the plant pathogenic fungus Fusarium oxysporum f.sp. conglutinans and the proof of concept for the efficiency of this technology is provided. We demonstrate that this methodology can detect as little as 2.5 ng of pathogen DNA and is highly specific, being able to accurately differentiate Fusarium oxysporum f.sp. conglutinans from other fungal pathogens such as Botrytis cinerea and even pathogens of the same species such as Fusarium oxysporum f.sp. lycopersici. The MIP assay was able to detect the presence of the pathogen in infected Arabidopsis thaliana plants as soon as the tissues contained minimal amounts of pathogen. MIP methods are intrinsically highly multiplexable and future development of specific MIPs could lead to the establishment of a diagnostic method that could potentially screen infected plants for hundreds of pathogens in a single assay.

  9. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement.

    PubMed

    Hwang, Michael T; Landon, Preston B; Lee, Joon; Choi, Duyoung; Mo, Alexander H; Glinsky, Gennadi; Lal, Ratnesh

    2016-06-28

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine.

  10. Pyrosequencing reveals highly diverse and species-specific microbial communities in sponges from the Red Sea.

    PubMed

    Lee, On On; Wang, Yong; Yang, Jiangke; Lafi, Feras F; Al-Suwailem, Abdulaziz; Qian, Pei-Yuan

    2011-04-01

    Marine sponges are associated with a remarkable array of microorganisms. Using a tag pyrosequencing technology, this study was the first to investigate in depth the microbial communities associated with three Red Sea sponges, Hyrtios erectus, Stylissa carteri and Xestospongia testudinaria. We revealed highly diverse sponge-associated bacterial communities with up to 1000 microbial operational taxonomic units (OTUs) and richness estimates of up to 2000 species. Altogether, 26 bacterial phyla were detected from the Red Sea sponges, 11 of which were absent from the surrounding sea water and 4 were recorded in sponges for the first time. Up to 100 OTUs with richness estimates of up to 300 archaeal species were revealed from a single sponge species. This is by far the highest archaeal diversity ever recorded for sponges. A non-negligible proportion of unclassified reads was observed in sponges. Our results demonstrated that the sponge-associated microbial communities remained highly consistent in the same sponge species from different locations, although they varied at different degrees among different sponge species. A significant proportion of the tag sequences from the sponges could be assigned to one of the sponge-specific clusters previously defined. In addition, the sponge-associated microbial communities were consistently divergent from those present in the surrounding sea water. Our results suggest that the Red Sea sponges possess highly sponge-specific or even sponge-species-specific microbial communities that are resistant to environmental disturbance, and much of their microbial diversity remains to be explored.

  11. Highly specific SNP detection using 2D graphene electronics and DNA strand displacement

    PubMed Central

    Hwang, Michael T.; Landon, Preston B.; Lee, Joon; Choi, Duyoung; Mo, Alexander H.; Glinsky, Gennadi; Lal, Ratnesh

    2016-01-01

    Single-nucleotide polymorphisms (SNPs) in a gene sequence are markers for a variety of human diseases. Detection of SNPs with high specificity and sensitivity is essential for effective practical implementation of personalized medicine. Current DNA sequencing, including SNP detection, primarily uses enzyme-based methods or fluorophore-labeled assays that are time-consuming, need laboratory-scale settings, and are expensive. Previously reported electrical charge-based SNP detectors have insufficient specificity and accuracy, limiting their effectiveness. Here, we demonstrate the use of a DNA strand displacement-based probe on a graphene field effect transistor (FET) for high-specificity, single-nucleotide mismatch detection. The single mismatch was detected by measuring strand displacement-induced resistance (and hence current) change and Dirac point shift in a graphene FET. SNP detection in large double-helix DNA strands (e.g., 47 nt) minimizes false-positive results. Our electrical sensor-based SNP detection technology, without labeling and without apparent cross-hybridization artifacts, would allow fast, sensitive, and portable SNP detection with single-nucleotide resolution. The technology will have a wide range of applications in digital and implantable biosensors and high-throughput DNA genotyping, with transformative implications for personalized medicine. PMID:27298347

  12. Brachytherapy boost and cancer-specific mortality in favorable high-risk versus other high-risk prostate cancer

    PubMed Central

    Muralidhar, Vinayak; Xiang, Michael; Orio, Peter F.; Martin, Neil E.; Beard, Clair J.; Feng, Felix Y.; Hoffman, Karen E.

    2016-01-01

    Purpose Recent retrospective data suggest that brachytherapy (BT) boost may confer a cancer-specific survival benefit in radiation-managed high-risk prostate cancer. We sought to determine whether this survival benefit would extend to the recently defined favorable high-risk subgroup of prostate cancer patients (T1c, Gleason 4 + 4 = 8, PSA < 10 ng/ml or T1c, Gleason 6, PSA > 20 ng/ml). Material and methods We identified 45,078 patients in the Surveillance, Epidemiology, and End Results database with cT1c-T3aN0M0 intermediate- to high-risk prostate cancer diagnosed 2004-2011 treated with external beam radiation therapy (EBRT) only or EBRT plus BT. We used multivariable competing risks regression to determine differences in the rate of prostate cancer-specific mortality (PCSM) after EBRT + BT or EBRT alone in patients with intermediate-risk, favorable high-risk, or other high-risk disease after adjusting for demographic and clinical factors. Results EBRT + BT was not associated with an improvement in 5-year PCSM compared to EBRT alone among patients with favorable high-risk disease (1.6% vs. 1.8%; adjusted hazard ratio [AHR]: 0.56; 95% confidence interval [CI]: 0.21-1.52, p = 0.258), and intermediate-risk disease (0.8% vs. 1.0%, AHR: 0.83, 95% CI: 0.59-1.16, p = 0.270). Others with high-risk disease had significantly lower 5-year PCSM when treated with EBRT + BT compared with EBRT alone (3.9% vs. 5.3%; AHR: 0.73; 95% CI: 0.55-0.95; p = 0.022). Conclusions Brachytherapy boost is associated with a decreased rate of PCSM in some men with high-risk prostate cancer but not among patients with favorable high-risk disease. Our results suggest that the recently-defined “favorable high-risk” category may be used to personalize therapy for men with high-risk disease. PMID:26985191

  13. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    PubMed

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  14. Fluorine-18-N-methylspiroperidol: radiolytic decomposition as a consequence of high specific activity and high dose levels

    SciTech Connect

    MacGregor, R.R.; Schlyer, D.J.; Fowler, J.S.; Wolf, A.P.; Shiue, C.Y.

    1987-01-01

    High specific activity ({sup 18}F)N-methylspiroperidol (8-(4-(4-({sup 18}F)fluorophenyl)-4-oxobutyl)-3-methyl-1-phenyl-1,3,8-triazaspiro(4.5)decan-4-one; 5-10 mCi/ml, 4-8 Ci/μmol at EOB) in saline solution undergoes significant radiolytic decomposition, resulting in a decrease in radiochemical purity of 10-25% during the first hour. The rate of decomposition is affected by the specific activity, the total dose to the solution, and its chemical composition. That radiolysis is responsible for the observed decomposition was verified by the observation that unlabeled N-methylspiroperidol is decomposed in the presence of ({sup 18}F)fluoride.

  15. General Anthropometric and Specific Physical Fitness Profile of High-Level Junior Water Polo Players

    PubMed Central

    Kondrič, Miran; Uljević, Ognjen; Gabrilo, Goran; Kontić, Dean; Sekulić, Damir

    2012-01-01

    The aim of this study was to investigate the status and playing position differences in anthropometric measures and specific physical fitness in high-level junior water polo players. The sample of subjects comprised 110 water polo players (17 to 18 years of age), including one of the world’s best national junior teams for 2010. The subjects were divided according to their playing positions into: Centers (N = 16), Wings (N = 28), perimeter players (Drivers; N = 25), Points (N = 19), and Goalkeepers (N = 18). The variables included body height, body weight, body mass index, arm span, triceps- and subscapular-skinfold. Specific physical fitness tests comprised: four swimming tests, namely: 25m, 100m, 400m and a specific anaerobic 4x50m test (average result achieved in four 50m sprints with a 30 sec pause), vertical body jump (JUMP; maximal vertical jump from the water starting from a water polo defensive position) and a dynamometric power achieved in front crawl swimming (DYN). ANOVA with post-hoc comparison revealed significant differences between positions for most of the anthropometrics, noting that the Centers were the heaviest and had the highest BMI and subscapular skinfold. The Points achieved the best results in most of the swimming capacities and JUMP test. No significant group differences were found for the 100m and 4x50m tests. The Goalkeepers achieved the lowest results for DYN. Given the representativeness of the sample of subjects, the results of this study allow specific insights into the physical fitness and anthropometric features of high-level junior water polo players and allow coaches to design a specific training program aimed at achieving the physical fitness results presented for each playing position. PMID:23487152

  16. General anthropometric and specific physical fitness profile of high-level junior water polo players.

    PubMed

    Kondrič, Miran; Uljević, Ognjen; Gabrilo, Goran; Kontić, Dean; Sekulić, Damir

    2012-05-01

    The aim of this study was to investigate the status and playing position differences in anthropometric measures and specific physical fitness in high-level junior water polo players. The sample of subjects comprised 110 water polo players (17 to 18 years of age), including one of the world's best national junior teams for 2010. The subjects were divided according to their playing positions into: Centers (N = 16), Wings (N = 28), perimeter players (Drivers; N = 25), Points (N = 19), and Goalkeepers (N = 18). The variables included body height, body weight, body mass index, arm span, triceps- and subscapular-skinfold. Specific physical fitness tests comprised: four swimming tests, namely: 25m, 100m, 400m and a specific anaerobic 4x50m test (average result achieved in four 50m sprints with a 30 sec pause), vertical body jump (JUMP; maximal vertical jump from the water starting from a water polo defensive position) and a dynamometric power achieved in front crawl swimming (DYN). ANOVA with post-hoc comparison revealed significant differences between positions for most of the anthropometrics, noting that the Centers were the heaviest and had the highest BMI and subscapular skinfold. The Points achieved the best results in most of the swimming capacities and JUMP test. No significant group differences were found for the 100m and 4x50m tests. The Goalkeepers achieved the lowest results for DYN. Given the representativeness of the sample of subjects, the results of this study allow specific insights into the physical fitness and anthropometric features of high-level junior water polo players and allow coaches to design a specific training program aimed at achieving the physical fitness results presented for each playing position.

  17. Characteristic-based and interface-sharpening algorithm for high-order simulations of immiscible compressible multi-material flows

    NASA Astrophysics Data System (ADS)

    He, Zhiwei; Tian, Baolin; Zhang, Yousheng; Gao, Fujie

    2017-03-01

    The present work focuses on the simulation of immiscible compressible multi-material flows with the Mie-Grüneisen-type equation of state governed by the non-conservative five-equation model [1]. Although low-order single fluid schemes have already been adopted to provide some feasible results, the application of high-order schemes (introducing relatively small numerical dissipation) to these flows may lead to results with severe numerical oscillations. Consequently, attempts to apply any interface-sharpening techniques to stop the progressively more severe smearing interfaces for a longer simulation time may result in an overshoot increase and in some cases convergence to a non-physical solution occurs. This study proposes a characteristic-based interface-sharpening algorithm for performing high-order simulations of such flows by deriving a pressure-equilibrium-consistent intermediate state (augmented with approximations of pressure derivatives) for local characteristic variable reconstruction and constructing a general framework for interface sharpening. First, by imposing a weak form of the jump condition for the non-conservative five-equation model, we analytically derive an intermediate state with pressure derivatives treated as additional parameters of the linearization procedure. Based on this intermediate state, any well-established high-order reconstruction technique can be employed to provide the state at each cell edge. Second, by designing another state with only different reconstructed values of the interface function at each cell edge, the advection term in the equation of the interface function is discretized twice using any common algorithm. The difference between the two discretizations is employed consistently for interface compression, yielding a general framework for interface sharpening. Coupled with the fifth-order improved accurate monotonicity-preserving scheme [2] for local characteristic variable reconstruction and the tangent of hyperbola
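    For reference, a commonly used form of the non-conservative five-equation model cited as [1] is written below; the paper's exact variant, closed with Mie-Grüneisen-type equations of state, may differ in detail.

```latex
% Two-material five-equation model: alpha_k are volume fractions (alpha_1 + alpha_2 = 1),
% rho = alpha_1 rho_1 + alpha_2 rho_2, u is the common velocity and E the total energy.
% The last, non-conservative advection equation for the interface function alpha_1 is
% the one whose discretization the interface-sharpening step modifies.
\[
\begin{aligned}
  &\partial_t(\alpha_1\rho_1) + \nabla\!\cdot(\alpha_1\rho_1\boldsymbol{u}) = 0, \qquad
   \partial_t(\alpha_2\rho_2) + \nabla\!\cdot(\alpha_2\rho_2\boldsymbol{u}) = 0, \\
  &\partial_t(\rho\boldsymbol{u}) + \nabla\!\cdot(\rho\boldsymbol{u}\otimes\boldsymbol{u} + p\,\mathbf{I}) = 0, \qquad
   \partial_t(\rho E) + \nabla\!\cdot\bigl((\rho E + p)\boldsymbol{u}\bigr) = 0, \\
  &\partial_t\alpha_1 + \boldsymbol{u}\cdot\nabla\alpha_1 = 0.
\end{aligned}
\]
```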

  18. High Specific Power Aircraft Turn Maneuvers: Tradeoff of Time-to-Turn versus Change in Specific Energy

    DTIC Science & Technology

    1984-09-01

    The report derives the necessary conditions for a generic optimal control formulation with two state-dependent inequality constraints and then applies the formulation to a high-specific-power aircraft turn problem; the corresponding Hamiltonian, adjoining the state equations and the inequality constraints (including the load-factor limit), is then formed for this type of aircraft problem.

  19. A Highly Photostable Hyperbranched Polyglycerol-Based NIR Fluorescence Nanoplatform for Mitochondria-Specific Cell Imaging.

    PubMed

    Dong, Chunhong; Liu, Zhongyun; Liu, Junqing; Wu, Changzhu; Neumann, Falko; Wang, Hanjie; Schäfer-Korting, Monika; Kleuser, Burkhard; Chang, Jin; Li, Wenzhong; Ma, Nan; Haag, Rainer

    2016-09-01

    Considering the critical role of mitochondria in the life and death of cells, non-invasive long-term tracking of mitochondria has attracted considerable interest. However, a high-performance mitochondria-specific labeling probe with high photostability is still lacking. Herein a highly photostable hyperbranched polyglycerol (hPG)-based near-infrared (NIR) quantum dots (QDs) nanoplatform is reported for mitochondria-specific cell imaging. Comprising NIR Zn-Cu-In-S/ZnS QDs as extremely photostable fluorescent labels and alkyl chain (C12 )/triphenylphosphonium (TPP)-functionalized hPG derivatives as protective shell, the tailored QDs@hPG-C12 /TPP nanoprobe with a hydrodynamic diameter of about 65 nm exhibits NIR fluorescence, excellent biocompatibility, good stability, and mitochondria-targeted ability. Cell uptake experiments demonstrate that QDs@hPG-C12 /TPP displays a significantly enhanced uptake in HeLa cells compared to nontargeted QDs@hPG-C12 . Further co-localization study indicates that the probe selectively targets mitochondria. Importantly, compared with commercial deep-red mitochondria dyes, QDs@hPG-C12 /TPP possesses superior photostability under continuous laser irradiation, indicating great potential for long-term mitochondria labeling and tracking. Moreover, drug-loaded QDs@hPG-C12 /TPP display an enhanced tumor cell killing efficacy compared to nontargeted drugs. This work could open the door to the construction of organelle-targeted multifunctional nanoplatforms for precise diagnosis and high-efficient tumor therapy.

  20. Specification of minimum short circuit capacity for three-phase unbalance evaluation of high-speed railway power system

    SciTech Connect

    Chen, S.L.; Kao, F.C.; Lee, T.M.

    1995-12-31

    In this paper, the authors first present an efficient computational algorithm to evaluate the short-circuit capacity distribution at a substation bus and, on the basis of this distribution, to specify the minimum short-circuit capacity for the year under evaluation. Second, the authors estimate the maximum traction load at seven 161 kV substations of the Taiwan high-speed railway system, which is now under planning. Third, using the maximum traction load and the minimum short-circuit capacity derived, the authors estimate the maximum unbalance of the three-phase voltage at these seven 161 kV substations and compare their results with those obtained by Taipower to demonstrate the effectiveness of the proposed algorithm.
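    One widely used planning-level relation behind such evaluations (a standard approximation, not necessarily the authors' exact algorithm) links the voltage unbalance produced by a single-phase traction load to the short-circuit capacity of the supplying bus, which is why a minimum short-circuit capacity has to be specified for the evaluation year.

```latex
% Negative-sequence voltage unbalance factor (VUF) caused by a single-phase load of
% apparent power S_traction supplied from a bus with short-circuit capacity S_sc:
\[
  \mathrm{VUF} = \frac{\lvert V_{2}\rvert}{\lvert V_{1}\rvert}
  \;\approx\; \frac{S_{\mathrm{traction}}}{S_{\mathrm{sc}}},
\]
% e.g. keeping VUF below 1% requires S_sc to be at least about 100 times the
% maximum single-phase traction demand at that substation.
```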

  1. High frequency of malaria-specific T cells in non-exposed humans.

    PubMed

    Zevering, Y; Amante, F; Smillie, A; Currier, J; Smith, G; Houghten, R A; Good, M F

    1992-03-01

    A major goal of current candidate malaria vaccines is to stimulate the expansion of clones of malaria-specific lymphocytes. We have examined the in vitro T cell responses of a group of malaria exposed and non-exposed adult Caucasian donors to recombinant circumsporozoite (CS) proteins, one of which is undergoing clinical trials, to blood-stage parasites, and to synthetic peptides copying the CS protein and defined blood-stage proteins. In nearly all individuals tested, CD4 T cell proliferation or lymphokine production occurred in response to whole parasite or CS protein stimulation, and T cells from many individuals responded to synthetic peptides. T cell responses were major histocompatibility complex-restricted, and stimulation of T cells with malaria parasites or CS protein did not appear to expand a population of T cell receptor gamma/delta cells. Malaria-specific responses were independent of prior malaria exposure, and in some cases exceeded the magnitude of response to tetanus toxoid. Specific T cells are present in high frequency in the peripheral blood of many donors who have never been exposed to malaria. Although malaria-specific CD4 T cells play an important role in immunity, these data question whether vaccines need to stimulate such cells, and focus attention on other aspects of malaria immunity which may be more critical to a successful vaccine.

  2. Quantifying domain-ligand affinities and specificities by high-throughput holdup assay

    PubMed Central

    Vincentelli, Renaud; Luck, Katja; Poirson, Juline; Polanowska, Jolanta; Abdat, Julie; Blémont, Marilyne; Turchetto, Jeremy; Iv, François; Ricquier, Kevin; Straub, Marie-Laure; Forster, Anne; Cassonnet, Patricia; Borg, Jean-Paul; Jacob, Yves; Masson, Murielle; Nominé, Yves; Reboul, Jérôme; Wolff, Nicolas; Charbonnier, Sebastian; Travé, Gilles

    2015-01-01

    Many protein interactions are mediated by small linear motifs interacting specifically with defined families of globular domains. Quantifying the specificity of a motif requires measuring and comparing its binding affinities to all its putative target domains. To this aim, we developed the high-throughput holdup assay, a chromatographic approach that can measure up to a thousand domain-motif equilibrium binding affinities per day. Extracts of overexpressed domains are incubated with peptide-coated resins and subjected to filtration. Binding affinities are deduced from microfluidic capillary electrophoresis of flow-throughs. After benchmarking the approach on 210 PDZ-peptide pairs with known affinities, we determined the affinities of two viral PDZ-binding motifs derived from Human Papillomavirus E6 oncoproteins for 209 PDZ domains covering 79% of the human PDZome. We obtained exquisite sequence-dependent binding profiles, describing quantitatively the PDZome recognition specificity of each motif. This approach, applicable to many categories of domain-ligand interactions, has a wide potential for quantifying the specificities of interactomes. PMID:26053890

  3. Selection of DNA Aptamers against Glioblastoma Cells with High Affinity and Specificity

    PubMed Central

    Kang, Dezhi; Wang, Jiangjie; Zhang, Weiyun; Song, Yanling; Li, Xilan; Zou, Yuan; Zhu, Mingtao; Zhu, Zhi; Chen, Fuyong; Yang, Chaoyong James

    2012-01-01

    Background Glioblastoma is the most common and most lethal form of brain tumor in humans. Unfortunately, there is still no effective therapy for this fatal disease and the median survival is generally less than one year from the time of diagnosis. Discovery of ligands that can bind specifically to this type of tumor cells will be of great significance for developing early molecular imaging, targeted delivery and guided surgery methods to battle this type of brain tumor. Methodology/Principal Findings We discovered two target-specific aptamers named GBM128 and GBM131 against the cultured human glioblastoma cell line U118-MG after 30 rounds of selection by a method called cell-based Systematic Evolution of Ligands by EXponential enrichment (cell-SELEX). These two aptamers have high affinity and specificity against target glioblastoma cells. They neither recognize normal astroglial cells, nor do they recognize other normal and cancer cell lines tested. Clinical tissues were also tested and the results showed that these two aptamers can bind to different clinical glioma tissues but not normal brain tissues. More importantly, binding affinity and selectivity of these two aptamers were retained in a complicated biological environment. Conclusion/Significance The selected aptamers could be used to identify specific glioblastoma biomarkers. Methods of molecular imaging, targeted drug delivery, and ligand-guided surgery can be further developed based on these ligands for early detection, targeted therapy, and guided surgery of glioblastoma, leading to effective treatment of glioblastoma.

  4. Maltodextrin-based imaging probes detect bacteria in vivo with high sensitivity and specificity

    NASA Astrophysics Data System (ADS)

    Ning, Xinghai; Lee, Seungjun; Wang, Zhirui; Kim, Dongin; Stubblefield, Bryan; Gilbert, Eric; Murthy, Niren

    2011-08-01

    The diagnosis of bacterial infections remains a major challenge in medicine. Although numerous contrast agents have been developed to image bacteria, their clinical impact has been minimal because they are unable to detect small numbers of bacteria in vivo, and cannot distinguish infections from other pathologies such as cancer and inflammation. Here, we present a family of contrast agents, termed maltodextrin-based imaging probes (MDPs), which can detect bacteria in vivo with a sensitivity two orders of magnitude higher than previously reported, and can detect bacteria using a bacteria-specific mechanism that is independent of host response and secondary pathologies. MDPs are composed of a fluorescent dye conjugated to maltohexaose, and are rapidly internalized through the bacteria-specific maltodextrin transport pathway, endowing the MDPs with a unique combination of high sensitivity and specificity for bacteria. Here, we show that MDPs selectively accumulate within bacteria at millimolar concentrations, and are a thousand-fold more specific for bacteria than mammalian cells. Furthermore, we demonstrate that MDPs can image as few as 10⁵ colony-forming units in vivo and can discriminate between active bacteria and inflammation induced by either lipopolysaccharides or metabolically inactive bacteria.

  5. Analytical evaluation of the impact of broad specification fuels on high bypass turbofan engine combustors

    NASA Technical Reports Server (NTRS)

    Taylor, J. R.

    1979-01-01

    Six conceptual combustor designs for the CF6-50 high bypass turbofan engine and six conceptual combustor designs for the NASA/GE E3 high bypass turbofan engine were analyzed to provide an assessment of the major problems anticipated in using broad specification fuels in these aircraft engine combustion systems. Each of the conceptual combustor designs, which are representative of both state-of-the-art and advanced state-of-the-art combustion systems, was analyzed to estimate combustor performance, durability, and pollutant emissions when using commercial Jet A aviation fuel and when using experimental referee board specification fuel. Results indicate that lean burning, low emissions double annular combustor concepts can accommodate a wide range of fuel properties without a serious deterioration of performance or durability. However, rich burning, single annular concepts would be less tolerant to a relaxation of fuel properties. As the fuel specifications are relaxed, autoignition delay time becomes much smaller which presents a serious design and development problem for premixing-prevaporizing combustion system concepts.

  6. Design and component specifications for high average power laser optical systems

    SciTech Connect

    O'Neil, R.W.; Sawicki, R.H.; Johnson, S.A.; Sweatt, W.C.

    1987-01-01

    Laser imaging and transport systems are considered in the regime where laser-induced damage and/or thermal distortion have significant design implications. System design and component specifications are discussed and quantified in terms of the net system transport efficiency and phase budget. Optical substrate materials, figure, surface roughness, coatings, and sizing are considered in the context of visible and near-IR optical systems that have been developed at Lawrence Livermore National Laboratory for laser isotope separation applications. In specific examples of general applicability, details of the bulk and/or surface absorption, peak and/or average power damage threshold, coating characteristics and function, substrate properties, or environmental factors will be shown to drive the component size, placement, and shape in high-power systems. To avoid overstressing commercial fabrication capabilities or component design specifications, procedures will be discussed for compensating for aberration buildup, using a few carefully placed adjustable mirrors. By coupling an aggressive measurements program on substrates and coatings to the design effort, an effective technique has been established to project high-power system performance realistically and, in the process, drive technology developments to improve performance or lower cost in large-scale laser optical systems.

  7. Identification of an epigenetic biomarker panel with high sensitivity and specificity for colorectal cancer and adenomas

    PubMed Central

    2011-01-01

    Background The presence of cancer-specific DNA methylation patterns in epithelial colorectal cells in human feces provides the prospect of a simple, non-invasive screening test for colorectal cancer and its precursor, the adenoma. This study investigates a panel of epigenetic markers for the detection of colorectal cancer and adenomas. Methods Candidate biomarkers were subjected to quantitative methylation analysis in test sets of tissue samples from colorectal cancers, adenomas, and normal colonic mucosa. All findings were verified in independent clinical validation series. A total of 523 human samples were included in the study. Receiver operating characteristic (ROC) curve analysis was used to evaluate the performance of the biomarker panel. Results Promoter hypermethylation of the genes CNRIP1, FBN1, INA, MAL, SNCA, and SPG20 was frequent in both colorectal cancers (65-94%) and adenomas (35-91%), whereas normal mucosa samples were rarely (0-5%) methylated. The combined sensitivity of at least two positives among the six markers was 94% for colorectal cancers and 93% for adenoma samples, with a specificity of 98%. The resulting areas under the ROC curve were 0.984 for cancers and 0.968 for adenomas versus normal mucosa. Conclusions The novel epigenetic marker panel shows very high sensitivity and specificity for both colorectal cancers and adenomas. Our findings suggest this biomarker panel to be highly suitable for early tumor detection. PMID:21777459
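
    The panel decision rule quoted above (a sample is called positive when at least two of the six markers are hypermethylated) is easy to express directly, as in the minimal sketch below. The per-sample marker calls and toy samples are invented for illustration and are not from the study data.

```python
# Minimal sketch (not the authors' code) of the "at least two of six markers
# hypermethylated" panel rule and the resulting sensitivity/specificity.

from typing import Sequence

MARKERS = ["CNRIP1", "FBN1", "INA", "MAL", "SNCA", "SPG20"]

def panel_positive(marker_calls: Sequence[bool], min_positive: int = 2) -> bool:
    """A sample is panel-positive when at least `min_positive` markers are methylated."""
    return sum(marker_calls) >= min_positive

def sensitivity_specificity(cases, controls):
    """cases/controls: lists of per-sample marker-call tuples (True = hypermethylated)."""
    tp = sum(panel_positive(c) for c in cases)
    tn = sum(not panel_positive(c) for c in controls)
    return tp / len(cases), tn / len(controls)

# Toy data: two tumour samples and two normal-mucosa samples.
cases = [(True, True, False, True, False, False), (True, True, True, True, False, True)]
controls = [(False, False, False, False, False, False), (True, False, False, False, False, False)]
print(sensitivity_specificity(cases, controls))  # (1.0, 1.0) on this toy data
```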

  8. Microarrays for high-throughput genotyping of MICA alleles using allele-specific primer extension.

    PubMed

    Baek, I C; Jang, J-P; Choi, H-B; Choi, E-J; Ko, W-Y; Kim, T-G

    2013-10-01

    The role of major histocompatibility complex (MHC) class I chain-related gene A (MICA), a ligand of NKG2D, has been defined in human diseases by its allele associations with various autoimmune diseases, hematopoietic stem cell transplantation (HSCT) and cancer. This study describes a practical system for MICA genotyping by allele-specific primer extension (ASPE) on microarrays. Based on the results of 20 control primers, strict and reliable cut-off values (more than 30,000 mean fluorescence intensity (MFI) units as positive and less than 3,000 MFI units as negative) were applied to select high-quality specific extension primers. Among the 55 allele-specific primers, 44 could initially be selected as optimal primers. By adjusting their length, six further primers were improved. The remaining five failed primers were corrected by refractory modification. MICA genotypes determined by ASPE on microarrays matched those obtained by nucleotide sequencing. On the basis of these results, ASPE on microarrays may provide high-throughput genotyping of MICA alleles for population studies, disease-gene associations and HSCT.
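
    The primer selection step reduces to applying the quoted MFI cut-offs: signals above 30,000 MFI are accepted as positive, signals below 3,000 MFI as negative, and anything in between is flagged for redesign. A minimal sketch follows; the primer names and signal values are made up for the example.

```python
# Illustrative classification of allele-specific extension primers by the MFI
# cut-offs quoted above. Primer identifiers and signals are hypothetical.

def classify_primer(mfi: float) -> str:
    if mfi > 30000:
        return "positive"
    if mfi < 3000:
        return "negative"
    # Intermediate signals would need length adjustment or refractory modification.
    return "indeterminate"

signals = {"MICA*002-primer": 45200.0, "MICA*008-primer": 1150.0, "MICA*010-primer": 12800.0}
for name, mfi in signals.items():
    print(name, classify_primer(mfi))
```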

  9. High-resolution specificity from DNA sequencing highlights alternative modes of Lac repressor binding.

    PubMed

    Zuo, Zheng; Stormo, Gary D

    2014-11-01

    Knowing the specificity of transcription factors is critical to understanding regulatory networks in cells. The lac repressor-operator system has been studied for many years, but not with high-throughput methods capable of determining specificity comprehensively. Details of its binding interaction and its selection of an asymmetric binding site have been controversial. We employed a new method to accurately determine relative binding affinities to thousands of sequences simultaneously, requiring only sequencing of bound and unbound fractions. An analysis of 2560 different DNA sequence variants, including both base changes and variations in operator length, provides a detailed view of lac repressor sequence specificity. We find that the protein can bind with nearly equal affinities to operators of three different lengths, but the sequence preference changes depending on the length, demonstrating alternative modes of interaction between the protein and DNA. The wild-type operator has an odd length, causing the two monomers to bind in alternative modes, making the asymmetric operator the preferred binding site. We tested two other members of the LacI/GalR protein family and find that neither can bind with high affinity to sites with alternative lengths or shows evidence of alternative binding modes. A further comparison with known and predicted motifs suggests that the lac repressor may be unique in this ability and that this may contribute to its selection.
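
    For sequencing-based affinity measurements of this kind, a common way to express the result is the ratio of a variant's read counts in the bound versus unbound fractions, normalized to a reference sequence. The sketch below shows that calculation under this assumption; the counts and sequence labels are invented, and this is not the authors' exact scoring scheme.

```python
# Hedged sketch: relative affinity of each sequence variant as the bound/unbound
# read-count ratio, normalised to a reference (e.g. the wild-type operator).

def relative_affinities(bound_counts: dict, unbound_counts: dict, reference: str) -> dict:
    """Return per-variant affinities relative to the reference sequence."""
    ref_ratio = bound_counts[reference] / unbound_counts[reference]
    return {seq: (bound_counts[seq] / unbound_counts[seq]) / ref_ratio
            for seq in bound_counts}

# Invented read counts for three operator variants.
bound = {"O1_wildtype": 9000, "variant_A": 4500, "variant_B": 600}
unbound = {"O1_wildtype": 1000, "variant_A": 1000, "variant_B": 1200}
print(relative_affinities(bound, unbound, "O1_wildtype"))
# {'O1_wildtype': 1.0, 'variant_A': 0.5, 'variant_B': ~0.056}
```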

  10. High throughput, detailed, cell-specific neuroanatomy of dendritic spines using microinjection and confocal microscopy

    PubMed Central

    Dumitriu, Dani; Rodriguez, Alfredo; Morrison, John H.

    2012-01-01

    Morphological features such as size, shape and density of dendritic spines have been shown to reflect important synaptic functional attributes and potential for plasticity. Here we describe in detail a protocol for obtaining detailed morphometric analysis of spines using microinjection of fluorescent dyes, high resolution confocal microscopy, deconvolution and image analysis using NeuronStudio. Recent technical advancements include better preservation of tissue resulting in prolonged ability to microinject, and algorithmic improvements that compensate for the residual Z-smear inherent in all optical imaging. Confocal imaging parameters were probed systematically for the identification of both optimal resolution as well as highest efficiency. When combined, our methods yield size and density measurements comparable to serial section transmission electron microscopy in a fraction of the time. An experiment containing 3 experimental groups with 8 subjects in each can take as little as one month if optimized for speed, or approximately 4 to 5 months if the highest resolution and morphometric detail is sought. PMID:21886104

  11. Improving production of 11C to achieve high specific labelled radiopharmaceuticals

    NASA Astrophysics Data System (ADS)

    Savio, E.; García, O.; Trindade, V.; Buccino, P.; Giglio, J.; Balter, H.; Engler, H.

    2012-12-01

    Molecular imaging is usually based on the recognition by radiopharmaceuticals of specific sites that are present in limited number or density in cells or biological tissues. It is therefore highly important to label radiopharmaceuticals with high specific activity in order to achieve a high target-to-non-target ratio. Carbon dioxide (CO2) from the air, containing 98.88% 12C and 1.12% 13C, competes with the 11CO2 produced at the cyclotron. To minimize the presence of these isotopes during the irradiation, transfer and synthesis of 11C-labelled radiopharmaceuticals, we applied the following method: prior to irradiation, the target was flushed 3-4 times with He (5.7) as a cold cleaning, followed by a similar conditioning of the line from the target up to the module, and finally a hot cleaning to desorb 12CO2 and 13CO2, performed by irradiating for 1 min at 5 µA (three times). In addition, with the aim of improving the quality of the gases in the target and in the modules, water traps (Agilent) were incorporated in the inlet lines of the target and modules. The target conditioning process (cold and hot flushings) and the line cleaning, which allow the desorption of unlabelled CO2, together with the increased gas purity during irradiation and synthesis, were the critical parameters that made it possible to obtain 11C-radiopharmaceuticals with high specific activity, particularly in the case of 11C-PIB.

  12. Record-high specific conductance and water temperature in San Francisco Bay during water year 2015

    USGS Publications Warehouse

    Work, Paul; Downing-Kunz, Maureen; Livsey, Daniel

    2017-02-22

    The San Francisco estuary is commonly defined to include San Francisco Bay (bay) and the adjacent Sacramento–San Joaquin River Delta (delta). The U.S. Geological Survey (USGS) has operated a high-frequency (15-minute sampling interval) water-quality monitoring network in San Francisco Bay since the late 1980s (Buchanan and others, 2014). This network includes 19 stations at which sustained measurements have been made in the bay; currently, 8 stations are in operation (fig. 1). All eight stations are equipped with specific conductance (which can be related to salinity) and water-temperature sensors. Water quality in the bay constantly changes as ocean tides force seawater in and out of the bay, and river inflows—the most significant coming from the delta—vary on time scales ranging from those associated with storms to multiyear droughts. This monitoring network was designed to observe and characterize some of these changes in the bay across space and over time. The data demonstrate a high degree of variability in both specific conductance and temperature at time scales from tidal to annual and also reveal longer-term changes that are likely to influence overall environmental health in the bay. In water year (WY) 2015 (October 1, 2014, through September 30, 2015), as in the preceding water year (Downing-Kunz and others, 2015), the high-frequency measurements revealed record-high values of specific conductance and water temperature at several stations during a period of reduced freshwater inflow from the delta and other tributaries because of persistent, severe drought conditions in California. This report briefly summarizes observations for WY 2015 and compares them to previous years that had different levels of freshwater inflow.

  13. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG

    PubMed Central

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in the application of DWT is the choice of its settings, which were chosen empirically or arbitrarily in previous works. This study aimed to develop a framework for automatically searching for the optimal DWT settings, to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data using 7 commonly used wavelet families, up to the maximum theoretical level of each mother wavelet. The wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched through an exhaustive selection of frequency bands, which yielded optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrated that the settings of DWT affect its performance on seizure detection substantially. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
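
    As a rough illustration of the kind of search described above, the sketch below (assuming the PyWavelets package) decomposes one EEG segment with a few candidate mother wavelets, up to the maximum theoretical level, and collects per-band energies as features. The classifier and the accuracy-driven selection of wavelets, levels, and frequency bands are omitted.

```python
# Hedged sketch of a wavelet/level search for EEG feature extraction,
# assuming the PyWavelets (pywt) package. The EEG segment is random data
# standing in for a real recording.

import numpy as np
import pywt

def band_energies(signal: np.ndarray, wavelet_name: str):
    """Decompose to the maximum theoretical level and return one energy per sub-band."""
    wavelet = pywt.Wavelet(wavelet_name)
    max_level = pywt.dwt_max_level(len(signal), wavelet.dec_len)
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    return [float(np.sum(c ** 2)) for c in coeffs]

eeg = np.random.randn(4096)                      # stand-in for one EEG segment
for name in ["db4", "sym5", "coif3", "haar"]:    # a few candidate mother wavelets
    print(name, band_energies(eeg, name)[:3])    # first few sub-band energies
```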

  14. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    SciTech Connect

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    2016-10-21

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  15. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    DOE PAGES

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    2016-10-21

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  16. Selection of DNA aptamers against epidermal growth factor receptor with high affinity and specificity

    SciTech Connect

    Wang, Deng-Liang; Song, Yan-Ling; Zhu, Zhi; Li, Xi-Lan; Zou, Yuan; Yang, Hai-Tao; Wang, Jiang-Jie; Yao, Pei-Sen; Pan, Ru-Jun; Yang, Chaoyong James; Kang, De-Zhi

    2014-10-31

    Highlights: • This is the first report of a DNA aptamer against EGFR in vitro. • Aptamers can bind targets with high affinity and selectivity. • DNA aptamers are more stable, cheaper and more efficient than RNA aptamers. • Our selected DNA aptamer against EGFR has high affinity, with Kd = 56 ± 7.3 nM. • Our selected DNA aptamer against EGFR has high selectivity. - Abstract: Epidermal growth factor receptor (EGFR/HER1/c-ErbB1) is overexpressed in many solid cancers, such as epidermoid carcinomas, malignant gliomas, etc. EGFR plays roles in the proliferation, invasion, angiogenesis and metastasis of malignant cancer cells and is an ideal antigen for clinical applications in cancer detection, imaging and therapy. Aptamers, the output of the systematic evolution of ligands by exponential enrichment (SELEX), are DNA/RNA oligonucleotides which can bind proteins and other substances with specificity. RNA aptamers are undesirable due to their instability and high cost of production. Conversely, DNA aptamers have attracted researchers' attention because they are easily synthesized, stable, selective, have high binding affinity and are cost-effective to produce. In this study, we have successfully identified DNA aptamers with high binding affinity and selectivity to EGFR. The aptamer named TuTu22, with Kd = 56 ± 7.3 nM, was chosen from the identified DNA aptamers for further study. Flow cytometry analysis indicated that the TuTu22 aptamer was able to specifically recognize a variety of cancer cells expressing EGFR but did not bind to EGFR-negative cells. With all of the aforementioned advantages, the DNA aptamers reported here against the cancer biomarker EGFR will facilitate the development of novel targeted cancer detection, imaging and therapy.
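
    An apparent Kd such as the 56 nM value above is typically obtained by fitting flow-cytometry fluorescence versus aptamer concentration to a one-site saturation model, F = Fmax·[L]/(Kd + [L]). The sketch below shows such a fit on invented data points; it is not the authors' analysis script.

```python
# Hedged sketch of extracting an apparent Kd from binding data with a one-site
# saturation model. Concentrations and signals are invented for the demo.

import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, fmax, kd_nM):
    """One-site saturation binding: F = Fmax * [L] / (Kd + [L])."""
    return fmax * conc_nM / (kd_nM + conc_nM)

conc = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)   # aptamer concentration, nM
signal = np.array([8, 15, 29, 46, 62, 76, 85], dtype=float)    # mean fluorescence (arbitrary)

(fmax_fit, kd_fit), _ = curve_fit(one_site, conc, signal, p0=[100.0, 50.0])
print(f"fitted Fmax = {fmax_fit:.1f}, apparent Kd = {kd_fit:.1f} nM")
```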

  17. Detection of pork adulteration by highly-specific PCR assay of mitochondrial D-loop.

    PubMed

    Karabasanavar, Nagappa S; Singh, S P; Kumar, Deepak; Shebannavar, Sunil N

    2014-02-15

    We describe a highly specific PCR assay for the authentic identification of pork. Accurate detection of tissues derived from pig (Sus scrofa) was accomplished by using newly designed primers targeting porcine mitochondrial displacement (D-loop) region that yielded an unique amplicon of 712 base pairs (bp). Possibility of cross-amplification was precluded by testing as many as 24 animal species (mammals, birds, rodent and fish). Suitability of PCR assay was confirmed in raw (n = 20), cooked (60, 80 and 100 °C), autoclaved (121 °C) and micro-oven processed pork. Sensitivity of detection of pork in other species meat using unique pig-specific PCR was established to be at 0.1%; limit of detection (LOD) of pig DNA was 10 pg (pico grams). The technique can be used for the authentication of raw, processed and adulterated pork and products under the circumstances of food adulteration related disputes or forensic detection of origin of pig species.

  18. Specific heat and sound velocity at the relevant competing phase of high-temperature superconductors

    PubMed Central

    Varma, Chandra M.; Zhu, Lijun

    2015-01-01

    Recent highly accurate sound velocity measurements reveal a phase transition to a competing phase in YBa2Cu3O6+δ that is not identified in available specific heat measurements. We show that this signature is consistent with the universality class of the loop current-ordered state when the free-energy reduction is similar to the superconducting condensation energy, due to the anomalous fluctuation region of such a transition. We also compare the measured specific heat with some usual types of transitions, which are observed at lower temperatures in some cuprates, and find that the upper limit of the energy reduction due to them is about 1/40th the superconducting condensation energy. PMID:25941376

  19. Mapping specificity landscapes of RNA-protein interactions by high throughput sequencing.

    PubMed

    Jankowsky, Eckhard; Harris, Michael E

    2017-03-02

    To function in a biological setting, RNA binding proteins (RBPs) have to discriminate between alternative binding sites in RNAs. This discrimination can occur in the ground state of an RNA-protein binding reaction, in its transition state, or in both. The extent to which RBPs discriminate at these reaction states defines RBP specificity landscapes. Here, we describe the HiTS-Kin and HiTS-EQ techniques, which combine kinetic and equilibrium binding experiments with high throughput sequencing to quantitatively assess substrate discrimination for large numbers of substrate variants at the ground and transition states of RNA-protein binding reactions. We discuss experimental design, practical considerations and data analysis, and outline how a combination of HiTS-Kin and HiTS-EQ allows the mapping of RBP specificity landscapes.

  20. Specific heat and sound velocity at the relevant competing phase of high-temperature superconductors.

    PubMed

    Varma, Chandra M; Zhu, Lijun

    2015-05-19

    Recent highly accurate sound velocity measurements reveal a phase transition to a competing phase in YBa2Cu3O6+δ that is not identified in available specific heat measurements. We show that this signature is consistent with the universality class of the loop current-ordered state when the free-energy reduction is similar to the superconducting condensation energy, due to the anomalous fluctuation region of such a transition. We also compare the measured specific heat with some usual types of transitions, which are observed at lower temperatures in some cuprates, and find that the upper limit of the energy reduction due to them is about 1/40th the superconducting condensation energy.

  1. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific

    PubMed Central

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems. PMID:26067836

  2. Endophytic Fungal Communities Associated with Vascular Plants in the High Arctic Zone Are Highly Diverse and Host-Plant Specific.

    PubMed

    Zhang, Tao; Yao, Yi-Feng

    2015-01-01

    This study assessed the diversity and distribution of endophytic fungal communities associated with the leaves and stems of four vascular plant species in the High Arctic using 454 pyrosequencing with fungal-specific primers targeting the ITS region. Endophytic fungal communities showed high diversity. The 76,691 sequences obtained belonged to 250 operational taxonomic units (OTUs). Of these OTUs, 190 belonged to Ascomycota, 50 to Basidiomycota, 1 to Chytridiomycota, and 9 to unknown fungi. The dominant orders were Helotiales, Pleosporales, Capnodiales, and Tremellales, whereas the common known fungal genera were Cryptococcus, Rhizosphaera, Mycopappus, Melampsora, Tetracladium, Phaeosphaeria, Mrakia, Venturia, and Leptosphaeria. Both the climate and host-related factors might shape the fungal communities associated with the four Arctic plant species in this region. These results suggested the presence of an interesting endophytic fungal community and could improve our understanding of fungal evolution and ecology in the Arctic terrestrial ecosystems.

  3. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.

  4. High Resolution X Chromosome-Specific Array-CGH Detects New CNVs in Infertile Males

    PubMed Central

    Krausz, Csilla; Giachini, Claudia; Lo Giacco, Deborah; Daguin, Fabrice; Chianese, Chiara; Ars, Elisabet; Ruiz-Castane, Eduard; Forti, Gianni; Rossi, Elena

    2012-01-01

    Context The role of CNVs in male infertility is poorly defined, and only those linked to the Y chromosome have been the object of extensive research. Although it has been predicted that the X chromosome is also enriched in spermatogenesis genes, no clinically relevant gene mutations have been identified so far. Objectives In order to advance our understanding of the role of X-linked genetic factors in male infertility, we applied high resolution X chromosome specific array-CGH in 199 men with different sperm count followed by the analysis of selected, patient-specific deletions in large groups of cases and normozoospermic controls. Results We identified 73 CNVs, among which 55 are novel, providing the largest collection of X-linked CNVs in relation to spermatogenesis. We found 12 patient-specific deletions with potential clinical implication. Cancer Testis Antigen gene family members were the most frequently affected genes, and represent new genetic targets in relationship with altered spermatogenesis. One of the most relevant findings of our study is the significantly higher global burden of deletions in patients compared to controls due to an excessive rate of deletions/person (0.57 versus 0.21, respectively; p = 8.785 × 10⁻⁶) and to a higher mean sequence loss/person (11.79 kb and 8.13 kb, respectively; p = 3.435 × 10⁻⁴). Conclusions By the analysis of the X chromosome at the highest resolution available to date, in a large group of subjects with known sperm count we observed a deletion burden in relation to spermatogenic impairment and the lack of highly recurrent deletions on the X chromosome. We identified a number of potentially important patient-specific CNVs and candidate spermatogenesis genes, which represent novel targets for future investigations. PMID:23056185

  5. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  6. Specific high-affinity binding of high density lipoproteins to cultured human skin fibroblasts and arterial smooth muscle cells.

    PubMed

    Biesbroeck, R; Oram, J F; Albers, J J; Bierman, E L

    1983-03-01

    Binding of human high density lipoproteins (HDL, d = 1.063-1.21) to cultured human fibroblasts and human arterial smooth muscle cells was studied using HDL subjected to heparin-agarose affinity chromatography to remove apoprotein (apo) E and B. Saturation curves for binding of apo E-free 125I-HDL showed at least two components: low-affinity nonsaturable binding and high-affinity binding that saturated at approximately 20 micrograms HDL protein/ml. Scatchard analysis of high-affinity binding of apo E-free 125I-HDL to normal fibroblasts yielded plots that were significantly linear, indicative of a single class of binding sites. Saturation curves for binding of both 125I-HDL3 (d = 1.125-1.21) and apo E-free 125I-HDL to low density lipoprotein (LDL) receptor-negative fibroblasts also showed high-affinity binding that yielded linear Scatchard plots. On a total protein basis, HDL2 (d = 1.063-1.10), HDL3 and very high density lipoproteins (VHDL, d = 1.21-1.25) competed as effectively as apo E-free HDL for binding of apo E-free 125I-HDL to normal fibroblasts. Also, HDL2, HDL3, and VHDL competed similarly for binding of 125I-HDL3 to LDL receptor-negative fibroblasts. In contrast, LDL was a weak competitor for HDL binding. These results indicate that both human fibroblasts and arterial smooth muscle cells possess specific high affinity HDL binding sites. As indicated by enhanced LDL binding and degradation and increased sterol synthesis, apo E-free HDL3 promoted cholesterol efflux from fibroblasts. These effects also saturated at HDL3 concentrations of 20 micrograms/ml, suggesting that promotion of cholesterol efflux by HDL is mediated by binding to the high-affinity cell surface sites.
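
    The Scatchard analysis referred to above linearizes single-site binding data: plotting bound/free against bound gives a straight line with slope -1/Kd and x-intercept Bmax. A minimal sketch of that fit on synthetic data follows; the numbers are illustrative only.

```python
# Illustrative Scatchard analysis: linear fit of bound/free versus bound for a
# single class of binding sites. The binding data below are synthetic.

import numpy as np

bound = np.array([2.0, 4.0, 6.0, 8.0, 9.0])     # e.g. ng HDL bound per mg cell protein
free = np.array([1.1, 2.7, 5.2, 10.5, 16.0])    # e.g. ug HDL protein per ml

ratio = bound / free                            # y-axis of a Scatchard plot
slope, intercept = np.polyfit(bound, ratio, 1)  # linear least-squares fit

kd = -1.0 / slope                               # apparent dissociation constant
bmax = -intercept / slope                       # number of sites (x-intercept)
print(f"apparent Kd ~ {kd:.2f}, Bmax ~ {bmax:.2f} (units follow the inputs)")
```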

  7. Common and specific brain regions in high- versus low-confidence recognition memory.

    PubMed

    Kim, Hongkeun; Cabeza, Roberto

    2009-07-28

    The goal of the present functional magnetic resonance imaging (fMRI) study was to investigate whether and to what extent brain regions involved in high-confidence recognition (HCR) versus low-confidence recognition (LCR) overlap or separate from each other. To this end, we performed conjunction analyses involving activations elicited during high-confidence hit, low-confidence hit, and high-confidence correct rejection responses. The analyses yielded 3 main findings. First, sensory/perceptual and associated posterior regions were common to HCR and LCR, indicating contribution of these regions to both HCR and LCR activity. This finding may help explain why these regions are among the most common in functional neuroimaging studies of episodic retrieval. Second, medial temporal lobe (MTL) and associated midline regions were associated with HCR, possibly reflecting recollection-related processes, whereas specific prefrontal cortex (PFC) regions were associated with LCR, possibly reflecting executive control processes. This finding is consistent with the notion that the MTL and PFC networks play complementary roles during episodic retrieval. Finally, within posterior parietal cortex, a dorsal region was associated with LCR, possibly reflecting top-down attentional processes, whereas a ventral region was associated with HCR, possibly reflecting bottom-up attentional processes. This finding may help explain why functional neuroimaging studies have found diverse parietal effects during episodic retrieval. Taken together, our findings provide strong evidence that HCR versus LCR, and by implication, recollection versus familiarity processes, are represented in common as well as specific brain regions.

  8. Dimeric CRISPR RNA-guided FokI nucleases for highly specific genome editing.

    PubMed

    Tsai, Shengdar Q; Wyvekens, Nicolas; Khayter, Cyd; Foden, Jennifer A; Thapar, Vishal; Reyon, Deepak; Goodwin, Mathew J; Aryee, Martin J; Joung, J Keith

    2014-06-01

    Monomeric CRISPR-Cas9 nucleases are widely used for targeted genome editing but can induce unwanted off-target mutations with high frequencies. Here we describe dimeric RNA-guided FokI nucleases (RFNs) that can recognize extended sequences and edit endogenous genes with high efficiencies in human cells. RFN cleavage activity depends strictly on the binding of two guide RNAs (gRNAs) to DNA with a defined spacing and orientation, substantially reducing the likelihood that a suitable target site will occur more than once in the genome and therefore improving specificities relative to wild-type Cas9 monomers. RFNs guided by a single gRNA generally induce lower levels of unwanted mutations than matched monomeric Cas9 nickases. In addition, we describe a simple method for expressing multiple gRNAs bearing any 5' end nucleotide, which gives dimeric RFNs a broad targeting range. RFNs combine the ease of RNA-based targeting with the specificity enhancement inherent to dimerization and are likely to be useful in applications that require highly precise genome editing.

  9. From hydrocolloids to high specific surface area porous supports for catalysis.

    PubMed

    Valentin, Romain; Molvinger, Karine; Viton, Christophe; Domard, Alain; Quignard, Françoise

    2005-01-01

    Polysaccharide hydrogels are effective supports for heterogeneous catalysts. Their use in solvents different from water has been hampered by their instability upon drying. While the freeze-drying process or air-drying of hydrocolloid gels led to compact solids with a low surface area, drying the gel in CO2 beyond the critical point provided mesoporous materials with a high specific surface area. Their effectiveness as a support for catalysis was exemplified in the reaction of substitution of an allyl carbonate with morpholine catalyzed by the hydrosoluble Pd(TPPTS)3 complex. The influence of water on the catalytic activity and the properties of the support was evidenced.

  10. Specific features of diffuse photon migration in highly scattering media with optical properties of biological tissues

    SciTech Connect

    Proskurin, S G; Potlov, A Yu; Frolov, S V

    2015-06-30

    Specific features of motion of photon density normalised maximum (PDNM) of pulsed radiation in highly scattering media with optical properties of biological tissues are described. A numerical simulation has confirmed that, when the object is a homogeneous cylinder, PDNM always moves to its geometric centre. In the presence of an absorbing inhomogeneity, PDNM moves towards the point symmetric to the geometric centre of the inhomogeneity with respect to the centre of the cylindrical object. In the presence of a scattering inhomogeneity, PDNM moves towards its geometric centre. (radiation scattering)

  11. Rapid perceptual adaptation to high gravitoinertial force levels Evidence for context-specific adaptation

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Graybiel, A.

    1982-01-01

    Subjects exposed to periodic variations in gravitoinertial force (2-G peak) in parabolic flight maneuvers quickly come to perceive the peak force level as having decreased in intensity. By the end of a 40-parabola flight, the decrease in apparent force is approximately 40%. On successive flight days, the apparent intensity of the force loads seems to decrease as well, indicating a cumulative adaptive effect. None of the subjects reported feeling abnormally 'light' for more than a minute or two after return to 1-G background force levels. The pattern of findings suggests a context-specific adaptation to high-force levels.

  12. Effect of Advanced Synthetically Enhanced Detector Resolution Algorithm on Specificity and Sensitivity of Portable High Purity Germanium Gamma Detector Spectra

    DTIC Science & Technology

    2009-06-01

    ...with a 50 mm diameter and 30 mm deep Ge crystal and a low-power Stirling cooler. The detector is shown in Figure 9 (Ortec...). ...recording some characteristics of their average behavior; the common behavior of particles in the physical system is then concluded from the modeling. With increased computational power, Monte Carlo simulations of detector systems have become a complement to experimental detector work.

  13. High Order Accurate Algorithms for Shocks, Rapidly Changing Solutions and Multiscale Problems

    DTIC Science & Technology

    2014-11-13

    ...for front propagation with obstacles, and homotopy method for steady states. Applications include high order simulations for 3D gaseous detonations, sound generation study via... detonation waves, Combustion and Flame, (02 2013). doi: 10.1016/j.combustflame.2012.10.002. Yang Yang, Ishani Roy, Chi-Wang Shu, Li-Zhi Fang.

  14. Knee and hip joint biomechanics are gender-specific in runners with high running mileage.

    PubMed

    Gehring, D; Mornieux, G; Fleischmann, J; Gollhofer, A

    2014-02-01

    Female runners are reported to be more prone to develop specific knee joint injuries than males. It has been suggested that increased frontal plane joint loading might be related to the incidence of these knee injuries in running. The purpose of this study was to evaluate if frontal plane knee and hip joint kinematics and kinetics are gender-specific in runners with high mileage. 3D-kinematics and kinetics were recorded from 16 female and 16 male runners at a speed of 3 m/s, 4 m/s, and 5 m/s. Frontal plane joint angles and joint moments were ascertained and compared between genders among speed conditions. Across all speed conditions, females showed increased hip adduction and reduced knee adduction angles compared to males (p < 0.003). The initial peak in the hip adduction moment was enhanced in females (p = 0.003). Additionally, the hip adduction impulse showed a trend towards an increase in females at slow running speed (p = 0.07). Hip and knee frontal plane joint kinematics are gender-specific. In addition, there are indications that frontal plane joint loading is increased in female runners. Future research should focus on the relationship of these observations regarding overuse running injuries.

  15. High-energy water sites determine peptide binding affinity and specificity of PDZ domains.

    PubMed

    Beuming, Thijs; Farid, Ramy; Sherman, Woody

    2009-08-01

    PDZ domains have well known binding preferences for distinct C-terminal peptide motifs. For most PDZ domains, these motifs are of the form [S/T]-W-[I/L/V]. Although the preference for S/T has been explained by a specific hydrogen bond interaction with a histidine in the PDZ domain and the (I/L/V) is buried in a hydrophobic pocket, the mechanism for Trp specificity at the second to last position has thus far remained unknown. Here, we apply a method to compute the free energies of explicit water molecules and predict that potency gained by Trp binding is due to a favorable release of high-energy water molecules into bulk. The affinities of a series of peptides for both wild-type and mutant forms of the PDZ domain of Erbin correlate very well with the computed free energy of binding of displaced waters, suggesting a direct relationship between water displacement and peptide affinity. Finally, we show a correlation between the magnitude of the displaced water free energy and the degree of Trp-sensitivity among subtypes of the HTRA PDZ family, indicating a water-mediated mechanism for specificity of peptide binding.

  16. Selection of a Novel and Highly Specific Tumor Necrosis Factor α (TNFα) Antagonist

    PubMed Central

    Byla, Povilas; Andersen, Mikkel H.; Holtet, Thor L.; Jacobsen, Helle; Munch, Mette; Gad, Hans Henrik; Thøgersen, Hans Christian; Hartmann, Rune

    2010-01-01

    Inhibition of tumor necrosis factor α (TNFα) is a favorable way of treating several important diseases such as rheumatoid arthritis, Crohn disease, and psoriasis. Therefore, an extensive range of TNFα inhibitory proteins, most of them based upon an antibody scaffold, has been developed and used with variable success as therapeutics. We have developed a novel technology platform using C-type lectins as a vehicle for the creation of novel trimeric therapeutic proteins with increased avidity and unique properties as compared with current protein therapeutics. We chose human TNFα as a test target to validate this new technology because of the extensive experience available with protein-based TNFα antagonists. Here, we present a novel and highly specific TNFα antagonist developed using this technology. Furthermore, we have solved the three-dimensional structure of the antagonist-TNFα complex by x-ray crystallography, and this structure is presented here. The structure has given us a unique insight into how the selection procedure works at a molecular level. Surprisingly little change is observed in the C-type lectin-like domain structure outside of the randomized regions, whereas a substantial change is observed within the randomized loops. Thus, the overall integrity of the C-type lectin-like domain is maintained, whereas specificity and binding affinity are changed by the introduction of a number of specific contacts with TNFα. PMID:20179326

  17. Specific Delivery of MiRNA for High Efficient Inhibition of Prostate Cancer by RNA Nanotechnology.

    PubMed

    Binzel, Daniel W; Shu, Yi; Li, Hui; Sun, Meiyan; Zhang, Qunshu; Shu, Dan; Guo, Bin; Guo, Peixuan

    2016-08-01

    Both siRNA and miRNA can serve as powerful gene-silencing reagents but their specific delivery to cancer cells in vivo without collateral damage to healthy cells remains challenging. We report here the application of RNA nanotechnology for specific and efficient delivery of anti-miRNA seed-targeting sequence to block the growth of prostate cancer in mouse models. Utilizing the thermodynamically ultra-stable three-way junction of the pRNA of phi29 DNA packaging motor, RNA nanoparticles were constructed by bottom-up self-assembly containing the anti-prostate-specific membrane antigen (PSMA) RNA aptamer as a targeting ligand and anti-miR17 or anti-miR21 as therapeutic modules. The 16 nm RNase-resistant and thermodynamically stable RNA nanoparticles remained intact after systemic injection in mice and strongly bound to tumors with little or no accumulation in healthy organs 8 hours postinjection, and subsequently repressed tumor growth at low doses with high efficiency.

  18. humMR1, a highly specific humanized single chain antibody for targeting EGFRvIII.

    PubMed

    Safdari, Yaghoub; Farajnia, Safar; Asgharzadeh, Mohammad; Omidfar, Kobra; Khalili, Masoumeh

    2014-02-01

    Production of an efficient humanized single-chain antibody is reported here to specifically target EGFRvIII, a truncated receptor expressed in a wide variety of human cancers. The CDR loops of MR1, a phage display-derived murine single-chain antibody developed against this mutant receptor, were grafted onto human frameworks that had been selected based on similarity to MR1 in terms of two distinct parameters, variable domain protein sequence and CDR canonical structures. Moreover, two point mutations were introduced in the CDR-H2 and CDR-H3 loops of the humanized antibody to destroy its cross-reactivity to wild-type EGFR. The resulting antibody, referred to as humMR1, was found by MTT assay, ELISA and western blot to be highly specific for EGFRvIII. The affinities of this antibody for an EGFRvIII-specific 14-amino-acid synthetic peptide and for HC2 cells were measured to be 1.87 × 10¹⁰ M⁻¹ and 2.17 × 10¹⁰ M⁻¹, respectively. This humanized antibody leads to 78.5% inhibition of the proliferation of EGFRvIII-overexpressing cells.

  19. High throughput peptide mapping method for analysis of site specific monoclonal antibody oxidation.

    PubMed

    Li, Xiaojuan; Xu, Wei; Wang, Yi; Zhao, Jia; Liu, Yan-Hui; Richardson, Daisy; Li, Huijuan; Shameem, Mohammed; Yang, Xiaoyu

    2016-08-19

    Oxidation of therapeutic monoclonal antibodies (mAbs) often occurs on surface-exposed methionine and tryptophan residues during their production in cell culture, purification, and storage, and can potentially impact the binding to their targets. Characterization of site-specific oxidation is critical for antibody quality control. Antibody oxidation is commonly determined by peptide mapping/LC-MS methods, which normally require a long (up to 24 h) digestion step. The prolonged sample preparation procedure can result in oxidation artifacts at susceptible methionine and tryptophan residues. In this paper, we developed a rapid and simple UV-based peptide mapping method that incorporates an 8-min in-solution trypsin digestion protocol for analysis of oxidation. This method is able to determine oxidation levels at specific residues of a mAb from the peptide UV traces in less than 1 h, for either TBHP-treated or UV-light-stressed samples. This is the simplest and fastest method reported thus far for site-specific oxidation analysis, and it can be applied to routine or high-throughput analysis of mAb oxidation in various stability and degradation studies. By using the UV trace, the method allows more accurate measurement than mass spectrometry and can potentially be implemented as a release assay. It has been successfully used to monitor antibody oxidation in real-time stability studies.
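
    The site-specific readout described above comes down to a simple ratio: for each tryptic peptide, the oxidation level is the UV peak area of the oxidized form divided by the summed areas of the oxidized and native forms. The sketch below shows that calculation; the peptide names and peak areas are hypothetical.

```python
# Minimal sketch of a site-specific oxidation readout from UV peak areas.
# Peptide identifiers and area values are illustrative only.

def oxidation_percent(native_area: float, oxidized_area: float) -> float:
    """Percent oxidation = oxidized area / (native + oxidized area) * 100."""
    return 100.0 * oxidized_area / (native_area + oxidized_area)

peptides = {
    # peptide id: (native UV peak area, oxidized UV peak area)
    "HC Met-containing peptide": (9_400_000.0, 310_000.0),
    "HC Trp-containing peptide": (6_100_000.0, 95_000.0),
}
for pep, (nat, oxi) in peptides.items():
    print(f"{pep}: {oxidation_percent(nat, oxi):.1f}% oxidized")
```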

  20. Modified High-Molecular-Weight Hyaluronan Promotes Allergen-Specific Immune Tolerance.

    PubMed

    Gebe, John A; Yadava, Koshika; Ruppert, Shannon M; Marshall, Payton; Hill, Paul; Falk, Ben A; Sweere, Johanna M; Han, Hongwei; Kaber, Gernot; Medina, Carlos; Mikecz, Katalin; Ziegler, Steven F; Balaji, Swathi; Keswani, Sundeep G; Perez, Vinicio A de Jesus; Butte, Manish J; Nadeau, Kari; Altemeier, William A; Fanger, Neil; Bollyky, Paul L

    2017-01-01

    The extracellular matrix in asthmatic lungs contains abundant low-molecular-weight hyaluronan, and this is known to promote antigen presentation and allergic responses. Conversely, high-molecular-weight hyaluronan (HMW-HA), typical of uninflamed tissues, is known to suppress inflammation. We investigated whether HMW-HA can be adapted to promote tolerance to airway allergens. HMW-HA was thiolated to prevent its catabolism and was tethered to allergens via thiol linkages. This platform, which we call "XHA," delivers antigenic payloads in the context of antiinflammatory costimulation. Allergen/XHA was administered intranasally to mice that had been sensitized previously to these allergens. XHA prevents allergic airway inflammation in mice sensitized previously to either ovalbumin or cockroach proteins. Allergen/XHA treatment reduced inflammatory cell counts, airway hyperresponsiveness, allergen-specific IgE, and T helper type 2 cell cytokine production in comparison with allergen alone. These effects were allergen specific and IL-10 dependent. They were durable for weeks after the last challenge, providing a substantial advantage over the current desensitization protocols. Mechanistically, XHA promoted CD44-dependent inhibition of nuclear factor-κB signaling, diminished dendritic cell maturation, and reduced the induction of allergen-specific CD4 T-helper responses. XHA and other potential strategies that target CD44 are promising alternatives for the treatment of asthma and allergic sinusitis.

  1. Three Recombinant Engineered Antibodies against Recombinant Tags with High Affinity and Specificity.

    PubMed

    Zhao, Hongyu; Shen, Ao; Xiang, Yang K; Corey, David P

    2016-01-01

    We describe three recombinant engineered antibodies against three recombinant epitope tags, constructed with divalent binding arms to recognize divalent epitopes and so achieve high affinity and specificity. In two versions, an epitope is inserted in tandem into a protein of interest, and a homodimeric antibody is constructed by fusing a high-affinity epitope-binding domain to a human or mouse Fc domain. In a third, a heterodimeric antibody is constructed by fusing two different epitope-binding domains which target two different binding sites in GFP, to polarized Fc fragments. These antibody/epitope pairs have affinities in the low picomolar range and are useful tools for many antibody-based applications.

  2. Establishing Specifications for Low Enriched Uranium Fuel Operations Conducted Outside the High Flux Isotope Reactor Site

    SciTech Connect

    Pinkston, Daniel; Primm, Trent; Renfro, David G; Sease, John D

    2010-10-01

    The National Nuclear Security Administration (NNSA) has funded staff at Oak Ridge National Laboratory (ORNL) to study the conversion of the High Flux Isotope Reactor (HFIR) from the current high-enriched uranium fuel to low-enriched uranium (LEU) fuel. The LEU fuel form is a metal alloy that has never been used in HFIR or any HFIR-like reactor. This report documents a process for the creation of a fuel specification that will meet all applicable regulations and guidelines to which UT-Battelle, LLC (UTB), the operating contractor for ORNL, must adhere. This process will allow UTB to purchase LEU fuel for HFIR and be assured of the quality of the fuel being procured.

  3. Three Recombinant Engineered Antibodies against Recombinant Tags with High Affinity and Specificity

    PubMed Central

    Zhao, Hongyu; Shen, Ao; Xiang, Yang K.; Corey, David P.

    2016-01-01

    We describe three recombinant engineered antibodies against three recombinant epitope tags, constructed with divalent binding arms to recognize divalent epitopes and so achieve high affinity and specificity. In two versions, an epitope is inserted in tandem into a protein of interest, and a homodimeric antibody is constructed by fusing a high-affinity epitope-binding domain to a human or mouse Fc domain. In a third, a heterodimeric antibody is constructed by fusing two different epitope-binding domains which target two different binding sites in GFP, to polarized Fc fragments. These antibody/epitope pairs have affinities in the low picomolar range and are useful tools for many antibody-based applications. PMID:26943906

  4. Higher sensitivity secondary ion mass spectrometry of biological molecules for high resolution, chemically specific imaging.

    PubMed

    McDonnell, Liam A; Heeren, Ron M A; de Lange, Robert P J; Fletcher, Ian W

    2006-09-01

    To expand the role of high spatial resolution secondary ion mass spectrometry (SIMS) in biological studies, numerous developments have been reported in recent years for enhancing the molecular ion yield of high-mass molecules. These include both surface modification, including matrix-enhanced SIMS and metal-assisted SIMS, and polyatomic primary ions. Using rat brain tissue sections and a bismuth primary ion gun able to produce atomic and polyatomic primary ions, we report here how the sensitivity enhancements provided by these developments are additive. Combined surface modification and polyatomic primary ions provided approximately 15.8 times more signal than using atomic primary ions on the raw sample, whereas surface modification alone and polyatomic primary ions alone yielded approximately 3.8 and 8.4 times more signal, respectively. This higher sensitivity is used to generate chemically specific images of higher-mass biomolecules using a single molecular ion peak.

  5. Specific-heat measurement of single metallic, carbon, and ceramic fibers at very high temperature

    NASA Astrophysics Data System (ADS)

    Pradère, C.; Goyhénèche, J. M.; Batsale, J. C.; Dilhaire, S.; Pailler, R.

    2005-06-01

    The main objective of this work is to present a method for measuring the specific heat of single metallic, carbon, and ceramic fibers at very high temperature. The difficulty of the measurement lies in the microscale of the fiber (≈10 μm) and the wide temperature range (700-2700 K). An experimental device, a model of the thermal behavior, and an analytic model have been developed. An analysis of the measurement accuracy yields an overall uncertainty of less than 10%. The characterization of a tungsten filament with thermal properties identical to those of the bulk allows validation of the device and of the thermal estimation method. Finally, measurements on carbon and ceramic fibers have been performed at very high temperature.

  6. A novel star identification technique robust to high presence of false objects: The Multi-Poles Algorithm

    NASA Astrophysics Data System (ADS)

    Schiattarella, Vincenzo; Spiller, Dario; Curti, Fabio

    2017-04-01

    This work proposes a novel technique for star pattern recognition for the Lost-in-Space problem, named the Multi-Poles Algorithm. This technique is especially designed to ensure a reliable identification of stars when there is a large number of false objects in the image, such as Single Event Upsets, hot pixels or other celestial bodies. The algorithm identifies the stars in three phases: the acceptance phase, the verification phase and the confirmation phase. The acceptance phase uses a polar technique to yield a set of accepted stars. The verification phase performs a cross-check between two sets of accepted stars, providing a new set of verified stars. Finally, the confirmation phase introduces an additional check to discard or to keep a verified star. As a result, this procedure guarantees high robustness to false objects in the acquired images. A reliable simulator is developed to test the algorithm and obtain accurate numerical results. The star tracker is simulated as a 1024 × 1024 Active Pixel Sensor with a 20° Field of View. The sensor noises are added using suitable distribution models. The stars are simulated using the Hipparcos catalog, with magnitudes corrected according to the instrumental response of the sensor. The Single Event Upsets are modeled based on typical shapes detected in some missions. The tests are conducted through a Monte Carlo analysis covering the entire celestial sphere. The numerical results are obtained for both a fixed and a variable attitude configuration. In the first case, the angular velocity is zero and the simulations give a success rate of 100% for a number of false objects up to six times the number of cataloged stars in the image. The success rate decreases to 66% when the number of false objects is increased to fifteen times the number of cataloged stars. For moderate angular velocities, preliminary results are given for constant rate and direction. By increasing the angular rate, the performances of the
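    As an illustration of the phased identification logic described above (acceptance based on pole distances, verification by cross-checking two accepted sets), a minimal Python sketch is given below; the function names, the tolerance value and the use of only two poles are illustrative assumptions, not the published algorithm.

        import numpy as np

        def angular_separation(u, v):
            # angle in radians between two unit vectors
            return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

        def accept(pole, spots, catalog_distances, tol=2e-4):
            # Acceptance phase (sketch): keep image spots whose angular distance
            # to the chosen pole matches some catalogued star-pair distance.
            kept = set()
            for i, s in enumerate(spots):
                d = angular_separation(pole, s)
                if np.any(np.abs(catalog_distances - d) < tol):
                    kept.add(i)
            return kept

        def identify(spots, catalog_distances, tol=2e-4):
            # Verification phase (sketch): two poles give two accepted sets, and
            # only spots accepted under both are kept, which suppresses false
            # objects.  A confirmation check (omitted) would follow.
            first = accept(spots[0], spots, catalog_distances, tol)
            second = accept(spots[1], spots, catalog_distances, tol)
            return first & second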

  7. A 3D High-Order Unstructured Finite-Volume Algorithm for Solving Maxwell's Equations

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A three-dimensional finite-volume algorithm based on arbitrary basis functions for time-dependent problems on general unstructured grids is developed. The method is applied to the time-domain Maxwell equations. Discrete unknowns are volume integrals or cell averages of the electric and magnetic field variables. Spatial terms are converted to surface integrals using the Gauss curl theorem. Polynomial basis functions are introduced in constructing local representations of the fields and evaluating the volume and surface integrals. Electric and magnetic fields are approximated by linear combinations of these basis functions. Unlike other unstructured formulations used in Computational Fluid Dynamics, the new formulation does not reconstruct the field variables at each time step. Instead, the spatial terms are calculated in terms of unknowns by precomputing weights at the beginning of the computation as functions of cell geometry and basis functions to retain efficiency. Since no assumption is made for cell geometry, this new formulation is suitable for arbitrarily defined grids, whether smooth or non-smooth. However, to facilitate the volume and surface integrations, arbitrary polyhedral cells with polygonal faces are used in constructing grids. Both centered and upwind schemes are formulated. It is shown that conventional schemes (second order in Cartesian grids) are equivalent to the new schemes using first degree polynomials as the basis functions and the midpoint quadrature for the integrations. In the new formulation, higher orders of accuracy are achieved by using higher degree polynomial basis functions. Furthermore, all the surface and volume integrations are carried out exactly. Several model electromagnetic scattering problems are calculated and compared with analytical solutions. Examples are given for cases based on 0th to 3rd degree polynomial basis functions. In all calculations, a centered scheme is applied in the interior, while an upwind
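    To illustrate the idea of precomputing geometry-dependent weights so that each time step avoids reconstructing the fields, the following minimal sketch uses hypothetical interfaces, a dense weight matrix and a simple two-stage Runge-Kutta update; it shows only the structure of such an update, not the paper's actual scheme.

        import numpy as np

        def precompute_weights(num_cells, face_list):
            # face_list entries: (cell_a, cell_b, coeff), where coeff collects the
            # face-quadrature weight, face area and basis-function values; all of
            # these depend only on geometry and are evaluated once.
            W = np.zeros((num_cells, num_cells))
            for a, b, coeff in face_list:
                # centered flux: each interior face exchanges coeff between cells
                W[a, b] += coeff
                W[a, a] -= coeff
                W[b, a] += coeff
                W[b, b] -= coeff
            return W

        def advance(u, W, dt, steps):
            # time marching then reduces to repeated matrix-vector products
            for _ in range(steps):
                k1 = W @ u
                k2 = W @ (u + dt * k1)
                u = u + 0.5 * dt * (k1 + k2)
            return u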

  8. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions are often easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of the complete microscope slides. We implemented a system that enables processing of full-resolution images and propose a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before it determines the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
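    A rough sketch of the described pipeline (mean-shift and median filtering, Canny edges, an ellipse Hough transform, then level-set refinement) could be assembled from OpenCV and scikit-image as below; all parameter values are placeholders, and morphological Chan-Vese stands in for the paper's elastic segmentation model.

        import cv2
        from skimage.transform import hough_ellipse
        from skimage.segmentation import morphological_chan_vese

        def segment_nuclei(bgr_image):
            # 1. Noise removal: mean-shift filtering on the colour image, then a
            #    median filter on the grayscale version.
            smoothed = cv2.pyrMeanShiftFiltering(bgr_image, sp=10, sr=20)
            gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)

            # 2. Canny edge detection (thresholds are placeholders).
            edges = cv2.Canny(gray, 50, 150)

            # 3. Hough transform for ellipses to vote for candidate nuclei.
            candidates = hough_ellipse(edges > 0, accuracy=10, threshold=50,
                                       min_size=15, max_size=60)

            # 4. Level-set style refinement; morphological Chan-Vese is only a
            #    stand-in for the paper's elastic segmentation model.
            mask = morphological_chan_vese(gray.astype(float), 50,
                                           init_level_set='checkerboard')
            return candidates, mask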

  9. HALO384: a halo-based potency prediction algorithm for high-throughput detection of antimicrobial agents.

    PubMed

    Woehrmann, Marcos H; Gassner, Nadine C; Bray, Walter M; Stuart, Joshua M; Lokey, Scott

    2010-02-01

    A high-throughput (HT) agar-based halo assay is described, which allows for rapid screening of chemical libraries for bioactivity in microorganisms such as yeast and bacteria. A pattern recognition algorithm was developed to identify halo-like shapes in plate reader optical density (OD) measurements. The authors find that the total growth inhibition within a detected halo provides an accurate estimate of a compound's potency measured in terms of its EC50. The new halo recognition method performs significantly better than an earlier method based on single-point OD readings. An assay based on the halo algorithm was used to screen a 21,120-member library of drug-like compounds in Saccharomyces cerevisiae, leading to the identification of novel bioactive scaffolds containing derivatives of varying potencies. The authors also show that the HT halo assay can be performed with the pathogenic bacterium Vibrio cholerae and that liquid culture EC50 values and halo scores show a good correlation in this organism. These results suggest that the HT halo assay provides a rapid and inexpensive way to screen for bioactivity in multiple microorganisms.
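    The halo score itself, the total growth inhibition inside a detected halo relative to the background lawn, can be written in a few lines; the sketch below assumes the halo center and radius have already been located by the pattern-recognition step and uses hypothetical argument names.

        import numpy as np

        def halo_score(od_plate, center_rc, radius, background_od):
            # od_plate: 2D array of optical-density readings across the lawn;
            # center_rc, radius: halo location from the pattern-recognition step;
            # background_od: OD of the uninhibited lawn.
            rows, cols = np.indices(od_plate.shape)
            inside = ((rows - center_rc[0]) ** 2 +
                      (cols - center_rc[1]) ** 2) <= radius ** 2
            # total growth inhibition = summed OD deficit inside the halo
            deficit = np.clip(background_od - od_plate, 0.0, None)
            return float(deficit[inside].sum())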

  10. Conformer generation with OMEGA: algorithm and validation using high quality structures from the Protein Databank and Cambridge Structural Database.

    PubMed

    Hawkins, Paul C D; Skillman, A Geoffrey; Warren, Gregory L; Ellingson, Benjamin A; Stahl, Matthew T

    2010-04-26

    Here, we present the algorithm and validation for OMEGA, a systematic, knowledge-based conformer generator. The algorithm consists of three phases: assembly of an initial 3D structure from a library of fragments; exhaustive enumeration of all rotatable torsions using values drawn from a knowledge-based list of angles, thereby generating a large set of conformations; and sampling of this set by geometric and energy criteria. Validation of conformer generators like OMEGA has often been undertaken by comparing computed conformer sets to experimental molecular conformations from crystallography, usually from the Protein Databank (PDB). Such an approach is fraught with difficulty due to the systematic problems with small molecule structures in the PDB. Methods are presented to identify a diverse set of small molecule structures from cocomplexes in the PDB that has maximal reliability. A challenging set of 197 high quality, carefully selected ligand structures from well-solved models was obtained using these methods. This set will provide a sound basis for comparison and validation of conformer generators in the future. Validation results from this set are compared to the results using structures of a set of druglike molecules extracted from the Cambridge Structural Database (CSD). OMEGA is found to perform very well in reproducing the crystallographic conformations from both these data sets using two complementary metrics of success.
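    The enumerate-then-filter phase can be sketched as a Cartesian product over knowledge-based torsion angles followed by an energy cutoff; the torsion library, bond labels and user-supplied energy function below are illustrative assumptions, not OMEGA's internal data.

        import itertools

        # Hypothetical knowledge-based torsion library: allowed angles per bond type.
        TORSION_LIBRARY = {
            "sp3-sp3": [60.0, 180.0, 300.0],
            "sp3-sp2": [0.0, 90.0, 180.0, 270.0],
        }

        def enumerate_conformers(rotatable_bonds, energy_fn, max_energy=10.0):
            # Cartesian product of allowed angles over all rotatable torsions,
            # then keep only low-energy combinations (energy_fn is supplied by
            # the caller and stands in for the geometric/energy criteria).
            angle_lists = [TORSION_LIBRARY[b] for b in rotatable_bonds]
            kept = []
            for angles in itertools.product(*angle_lists):
                e = energy_fn(angles)
                if e <= max_energy:
                    kept.append((angles, e))
            kept.sort(key=lambda t: t[1])
            return kept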

  11. Estimation of sediment transport with an in-situ acoustic retrieval algorithm in the high-turbidity Changjiang Estuary, China

    NASA Astrophysics Data System (ADS)

    Ge, Jian-zhong; Ding, Ping-xing; Li, Cheng; Fan, Zhong-ya; Shen, Fang; Kong, Ya-zhen

    2015-12-01

    A comprehensive acoustic retrieval algorithm to investigate suspended sediment is presented, with combined validation from Acoustic Doppler Current Profiler (ADCP) and Optical Backscattering Sensor (OBS) monitoring along seven cross-channel sections in the high-turbidity North Passage of the Changjiang Estuary, China. The realistic water conditions, horizontal and vertical salinities, and grain size of the suspended sediment are considered in the retrieval algorithm. Relations between the net volume scattering of sound attenuation (Sv) due to sediments and the ADCP echo intensity (E) were obtained with reasonable accuracy after applying the linear regression method. In the river mouth, intensive vertical stratification and horizontal inhomogeneity were found, with a higher concentration of sediment in the North Passage and lower concentrations in the North Channel and South Passage. Additionally, the North Passage is characterized by higher sediment concentration in the middle region and lower concentration in the entrance and outlet areas. The maximum sediment flux rate, which occurred in the middle region, could reach 6.3×10⁵ and 1.5×10⁵ t/h during the spring and neap tide, respectively. Retrieved sediment fluxes in the middle region are significantly larger than those in the upstream and downstream regions. This strong sediment imbalance along the main channel indicates a potential secondary sediment supply from the southern Jiuduansha Shoals.
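    The linear-regression calibration step reduces to fitting Sv as a linear function of the echo intensity E against reference values derived from the OBS/water-sample data; a minimal sketch with assumed variable names follows.

        import numpy as np

        def calibrate_backscatter(echo_intensity, sv_reference):
            # Fit Sv = a*E + b, where E is the ADCP echo intensity (counts) and
            # sv_reference is the backscatter derived from OBS calibration data.
            a, b = np.polyfit(np.asarray(echo_intensity, float),
                              np.asarray(sv_reference, float), deg=1)
            # Return a predictor that converts new echo intensities to Sv.
            return lambda E: a * np.asarray(E, float) + b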

  12. An RNA-based signature enables high specificity detection of circulating tumor cells in hepatocellular carcinoma

    PubMed Central

    Kalinich, Mark; Bhan, Irun; Kwan, Tanya T.; Miyamoto, David T.; Javaid, Sarah; LiCausi, Joseph A.; Milner, John D.; Hong, Xin; Goyal, Lipika; Sil, Srinjoy; Choz, Melissa; Ho, Uyen; Kapur, Ravi; Muzikansky, Alona; Zhang, Huidan; Weitz, David A.; Sequist, Lecia V.; Ryan, David P.; Chung, Raymond T.; Zhu, Andrew X.; Isselbacher, Kurt J.; Ting, David T.; Toner, Mehmet; Maheswaran, Shyamala; Haber, Daniel A.

    2017-01-01

    Circulating tumor cells (CTCs) are shed into the bloodstream by invasive cancers, but the difficulty inherent in identifying these rare cells by microscopy has precluded their routine use in monitoring or screening for cancer. We recently described a high-throughput microfluidic CTC-iChip, which efficiently depletes hematopoietic cells from blood specimens and enriches for CTCs with well-preserved RNA. Application of RNA-based digital PCR to detect CTC-derived signatures may thus enable highly accurate tissue lineage-based cancer detection in blood specimens. As proof of principle, we examined hepatocellular carcinoma (HCC), a cancer that is derived from liver cells bearing a unique gene expression profile. After identifying a digital signature of 10 liver-specific transcripts, we used a cross-validated logistic regression model to identify the presence of HCC-derived CTCs in nine of 16 (56%) untreated patients with HCC versus one of 31 (3%) patients with nonmalignant liver disease at risk for developing HCC (P < 0.0001). Positive CTC scores declined in treated patients: Nine of 32 (28%) patients receiving therapy and only one of 15 (7%) patients who had undergone curative-intent ablation, surgery, or liver transplantation were positive. RNA-based digital CTC scoring was not correlated with the standard HCC serum protein marker alpha fetoprotein (P = 0.57). Modeling the sequential use of these two orthogonal markers for liver cancer screening in patients with high-risk cirrhosis generates positive and negative predictive values of 80% and 86%, respectively. Thus, digital RNA quantitation constitutes a sensitive and specific CTC readout, enabling high-throughput clinical applications, such as noninvasive screening for HCC in populations where viral hepatitis and cirrhosis are prevalent. PMID:28096363
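    The scoring scheme described above amounts to a cross-validated logistic regression over the ten transcript levels, plus a Bayes-rule calculation for the modeled predictive values; a minimal sketch (hypothetical array shapes, scikit-learn API) is given below.

        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict

        def ctc_scores(X, y, folds=5):
            # X: (n_samples, 10) matrix of liver-specific transcript levels from
            # digital PCR; y: 1 for HCC, 0 for non-malignant liver disease.
            model = LogisticRegression(max_iter=1000)
            proba = cross_val_predict(model, X, y, cv=folds, method="predict_proba")
            return proba[:, 1]  # cross-validated probability used as the CTC score

        def ppv_npv(sensitivity, specificity, prevalence):
            # Bayes' rule for the modeled positive/negative predictive values.
            ppv = (sensitivity * prevalence) / (
                sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
            npv = (specificity * (1 - prevalence)) / (
                specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
            return ppv, npv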

  13. Domain Specific Changes in Cognition at High Altitude and Its Correlation with Hyperhomocysteinemia

    PubMed Central

    Sharma, Vijay K.; Das, Saroj K.; Dhar, Priyanka; Hota, Kalpana B.; Mahapatra, Bidhu B.; Vashishtha, Vivek; Kumar, Ashish; Hota, Sunil K.; Norboo, Tsering; Srivastava, Ravi B.

    2014-01-01

    Though acute exposure to hypobaric hypoxia is reported to impair cognitive performance, the effects of prolonged exposure on different cognitive domains have been less studied. The present study aimed at investigating the time dependent changes in cognitive performance on prolonged stay at high altitude and its correlation with electroencephalogram (EEG) and plasma homocysteine. The study was conducted on 761 male volunteers of 25–35 years age who had never been to high altitude and baseline data pertaining to domain specific cognitive performance, EEG and homocysteine was acquired at altitude ≤240 m mean sea level (MSL). The volunteers were inducted to an altitude of 4200–4600 m MSL and longitudinal follow-ups were conducted at durations of 03, 12 and 18 months. Neuropsychological assessment was performed for mild cognitive impairment (MCI), attention, information processing rate, visuo-spatial cognition and executive functioning. Total homocysteine (tHcy), vitamin B12 and folic acid were estimated. Mini Mental State Examination (MMSE) showed temporal increase in the percentage prevalence of MCI from 8.17% on 03 months of stay at high altitude to 18.54% on 18 months of stay. Impairment in visuo-spatial executive, attention, delayed recall and procedural memory related cognitive domains were detected following prolonged stay in high altitude. Increase in alpha wave amplitude in the T3, T4 and C3 regions was observed during the follow-ups which was inversely correlated (r = −0.68) to MMSE scores. The tHcy increased proportionately with duration of stay at high altitude and was correlated with MCI. No change in vitamin B12 and folic acid was observed. Our findings suggest that cognitive impairment is progressively associated with duration of stay at high altitude and is correlated with elevated tHcy in the plasma. Moreover, progressive MCI at high altitude occurs despite acclimatization and is independent of vitamin B12 and folic acid. PMID:24988417

  14. Domain specific changes in cognition at high altitude and its correlation with hyperhomocysteinemia.

    PubMed

    Sharma, Vijay K; Das, Saroj K; Dhar, Priyanka; Hota, Kalpana B; Mahapatra, Bidhu B; Vashishtha, Vivek; Kumar, Ashish; Hota, Sunil K; Norboo, Tsering; Srivastava, Ravi B

    2014-01-01

    Though acute exposure to hypobaric hypoxia is reported to impair cognitive performance, the effects of prolonged exposure on different cognitive domains have been less studied. The present study aimed at investigating the time dependent changes in cognitive performance on prolonged stay at high altitude and its correlation with electroencephalogram (EEG) and plasma homocysteine. The study was conducted on 761 male volunteers of 25-35 years age who had never been to high altitude and baseline data pertaining to domain specific cognitive performance, EEG and homocysteine was acquired at altitude ≤240 m mean sea level (MSL). The volunteers were inducted to an altitude of 4200-4600 m MSL and longitudinal follow-ups were conducted at durations of 03, 12 and 18 months. Neuropsychological assessment was performed for mild cognitive impairment (MCI), attention, information processing rate, visuo-spatial cognition and executive functioning. Total homocysteine (tHcy), vitamin B12 and folic acid were estimated. Mini Mental State Examination (MMSE) showed temporal increase in the percentage prevalence of MCI from 8.17% on 03 months of stay at high altitude to 18.54% on 18 months of stay. Impairment in visuo-spatial executive, attention, delayed recall and procedural memory related cognitive domains were detected following prolonged stay in high altitude. Increase in alpha wave amplitude in the T3, T4 and C3 regions was observed during the follow-ups which was inversely correlated (r = -0.68) to MMSE scores. The tHcy increased proportionately with duration of stay at high altitude and was correlated with MCI. No change in vitamin B12 and folic acid was observed. Our findings suggest that cognitive impairment is progressively associated with duration of stay at high altitude and is correlated with elevated tHcy in the plasma. Moreover, progressive MCI at high altitude occurs despite acclimatization and is independent of vitamin B12 and folic acid.

  15. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is

  16. Very high specific activity ⁶⁶/⁶⁸Ga from zinc targets for PET.

    PubMed

    Engle, J W; Lopez-Rodriguez, V; Gaspar-Carcamo, R E; Valdovinos, H F; Valle-Gonzalez, M; Trejo-Ballado, F; Severin, G W; Barnhart, T E; Nickles, R J; Avila-Rodriguez, M A

    2012-08-01

    This work describes the production of very high specific activity ⁶⁶/⁶⁸Ga from natZn(p,n) and ⁶⁶Zn(p,n) using proton irradiations between 7 and 16 MeV, with emphasis on ⁶⁶Ga for use with common bifunctional chelates. Principal radiometallic impurities are ⁶⁵Zn from (p,x) and ⁶⁷Ga from (p,n). Separation of radiogallium from target material is accomplished with cation exchange chromatography in hydrochloric acid solution. Efficient recycling of Zn target material is possible using electrodeposition of Zn from its chloride form, but these measures are not necessary to achieve high specific activity or near-quantitative radiolabeling yields from natural targets. Inductively coupled plasma mass spectroscopy (ICP-MS) measures less than 2 ppb non-radioactive gallium in the final product, and the reactivity of ⁶⁶Ga with common bifunctional chelates, decay corrected to the end of irradiation, is 740 GBq/μmol (20 Ci/μmol) using natural zinc as a target material. Recycling enriched ⁶⁶Zn targets increased the reactivity of ⁶⁶Ga with common bifunctional chelates.

  17. Boechera Species Exhibit Species-Specific Responses to Combined Heat and High Light Stress

    PubMed Central

    Gallas, Genna; Waters, Elizabeth R.

    2015-01-01

    As sessile organisms, plants must be able to complete their life cycle in place and therefore tolerance to abiotic stress has had a major role in shaping biogeographical patterns. However, much of what we know about plant tolerance to abiotic stresses is based on studies of just a few plant species, most notably the model species Arabidopsis thaliana. In this study we examine natural variation in the stress responses of five diverse Boechera (Brassicaceae) species. Boechera plants were exposed to basal and acquired combined heat and high light stress. Plant response to these stresses was evaluated based on chlorophyll fluorescence measurements, induction of leaf chlorosis, and gene expression. Many of the Boechera species were more tolerant to heat and high light stress than A. thaliana. Gene expression data indicates that two important marker genes for stress responses: APX2 (Ascorbate peroxidase 2) and HsfA2 (Heat shock transcription factor A2) have distinct species-specific expression patterns. The findings of species-specific responses and tolerance to stress indicate that stress pathways are evolutionarily labile even among closely related species. PMID:26030823

  18. A Symmetrical, Planar SOFC Design for NASA's High Specific Power Density Requirements

    NASA Technical Reports Server (NTRS)

    Cable, Thomas L.; Sofie, Stephen W.

    2007-01-01

    Solid oxide fuel cell (SOFC) systems for aircraft applications require an order of magnitude increase in specific power density (1.0 kW/kg) and long life. While significant research is underway to develop anode-supported cells which operate at temperatures in the range of 650-800 C, concerns about Cr-contamination from the metal interconnect may drive the operating temperature down further, to 750 C and lower. Higher temperatures, 900-1000 C, are more favorable for SOFC stacks to achieve specific power densities of 1.0 kW/kg. Since metal interconnects are not practical at these high temperatures and can account for up to 75% of the weight of the stack, NASA is pursuing a design that uses a thin, LaCrO3-based ceramic interconnect that incorporates gas channels into the electrodes. The bi-electrode supported cell (BSC) uses porous YSZ scaffolds on either side of a 10-20 micron electrolyte. The porous support regions are fabricated with graded porosity using the freeze-tape casting process, which can be tailored for fuel and air flow. Removing gas channels from the interconnect simplifies the stack design and allows the ceramic interconnect to be kept thin, on the order of 50-100 microns. The YSZ electrode scaffolds are infiltrated with active electrode materials following the high temperature sintering step. The NASA-BSC is symmetrical and CTE matched, providing balanced stresses and favorable mechanical properties for vibration and thermal cycling.

  19. Highly selective nanocomposite sorbents for the specific recognition of S-ibuprofen from structurally related compounds

    NASA Astrophysics Data System (ADS)

    Sooraj, M. P.; Mathew, Beena

    2016-06-01

    The aim of the present work was to synthesize highly homogeneous synthetic recognition units for the selective and specific separation of S-ibuprofen from its closely related structural analogues using molecular imprinting technology. The molecularly imprinted polymer wrapped on functionalized multiwalled carbon nanotubes (MWCNT-MIP) was synthesized using S-ibuprofen as the template in the imprinting process. The characterization of the products and intermediates was done by FT-IR spectroscopy, PXRD, TGA, SEM and TEM techniques. The high regression coefficient value for the Langmuir adsorption isotherm (R² = 0.999) showed the homogeneous imprint sites and surface adsorption nature of the prepared polymer sorbent. The nano-MIP followed second-order kinetics (R² = 0.999) with a rapid adsorption rate, which also suggested the formation of recognition sites on the surface of MWCNT-MIP. MWCNT-MIP showed 83.6% higher rebinding capacity than its non-imprinted counterpart. The higher relative selectivity coefficient (k') of the imprinted sorbent towards S-ibuprofen than for its structural analogues evidenced the capability of the nano-MIP to selectively and specifically rebind the template rather than its analogues.

  20. Boechera species exhibit species-specific responses to combined heat and high light stress.

    PubMed

    Gallas, Genna; Waters, Elizabeth R

    2015-01-01

    As sessile organisms, plants must be able to complete their life cycle in place and therefore tolerance to abiotic stress has had a major role in shaping biogeographical patterns. However, much of what we know about plant tolerance to abiotic stresses is based on studies of just a few plant species, most notably the model species Arabidopsis thaliana. In this study we examine natural variation in the stress responses of five diverse Boechera (Brassicaceae) species. Boechera plants were exposed to basal and acquired combined heat and high light stress. Plant response to these stresses was evaluated based on chlorophyll fluorescence measurements, induction of leaf chlorosis, and gene expression. Many of the Boechera species were more tolerant to heat and high light stress than A. thaliana. Gene expression data indicates that two important marker genes for stress responses: APX2 (Ascorbate peroxidase 2) and HsfA2 (Heat shock transcription factor A2) have distinct species-specific expression patterns. The findings of species-specific responses and tolerance to stress indicate that stress pathways are evolutionarily labile even among closely related species.

  1. The adenylate energy charge and specific fermentation rate of brewer's yeasts fermenting high- and very high-gravity worts.

    PubMed

    Guimarães, Pedro M R; Londesborough, John

    2008-01-01

    Intracellular and extracellular ATP, ADP and AMP (i.e. 5'-AMP) were measured during fermentations of high- (15°P) and very high-gravity (VHG, 25°P) worts by two lager yeasts. Little extracellular ATP and ADP but substantial amounts of extracellular AMP were found. Extracellular AMP increased during fermentation and reached higher values (3 μM) in 25°P than 15°P worts (1 μM). More AMP (13 μM at 25°P) was released during fermentation with industrially cropped yeast than with the same strain grown in the laboratory. ATP was the dominant intracellular adenine nucleotide and the adenylate energy charge (EC = ([ATP] + 0.5*[ADP])/([ATP] + [ADP] + [AMP])) remained high (>0.8) until residual sugar concentrations were low and specific rates of ethanol production were < 5% of the maximum values in early fermentation. The high ethanol concentrations (>85 g/l) reached in VHG fermentations did not decrease the EC below values that permit synthesis of new proteins. The results suggest that, during wort fermentations, the ethanol tolerance of brewer's strains is high so long as fermentation continues. Under these conditions, maintenance of the EC seems to depend upon active transport of alpha-glucosides, which in turn depends upon maintenance of the EC. Therefore, the collapse of the EC and cell viability when residual alpha-glucoside concentrations no longer support adequate rates of fermentation can be very abrupt. This emphasizes the importance of early cropping of yeast for recycling.
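    The adenylate energy charge quoted above is a simple ratio; for reference, a one-line implementation of that formula:

        def energy_charge(atp, adp, amp):
            # EC = ([ATP] + 0.5*[ADP]) / ([ATP] + [ADP] + [AMP]), as defined above
            return (atp + 0.5 * adp) / (atp + adp + amp)

        # example: energy_charge(2.0, 0.5, 0.1) -> ~0.87, i.e. above the 0.8 level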

  2. Acceleration of PIC and CR algorithms for High Fidelity In-Space Propulsion Modeling (Briefing Charts)

    DTIC Science & Technology

    2013-07-29

    We describe enhancements under development for multi-scale methods to be applied to the high-fidelity modeling of spacecraft in-space propulsion. Future work noted on the briefing charts includes high-order fluid/MHD GPU models (Le/Cole/Bilyeu PhD research) and GPU-accelerated chemical kinetics.

  3. Li-Ion Batteries for Space Applications: High Specific Energy and Wide-Operating Temperature

    NASA Technical Reports Server (NTRS)

    Smart, Marshall; Whitacre, Jay; West, William; Manthiram, A.; Prakash, G. K. S; Bugga, Ratnakumar

    2006-01-01

    Compared to the conventional Ni-Co oxides (with or without Al additions), the NMC (1/3:1/3:1/3) cathode provides marginal improvement in specific capacity. However, some of the formulations based on the solid solutions of layered Li2MnO3 and LiMO2 (M = Mn0.5Ni0.5) have shown capacities as high as 250 mAh/g, combined with high cell voltages (4.5 V) and with the likelihood of enhanced thermal stability. Multi-component electrolytes with low EC proportions and selected co-solvents provide significant improvement in the low temperature performance, down to -60 C, combined with the non-flammable attribute from the co-solvents. The NMC cathode shows good compatibility with the carbonate-based low temperature electrolytes. Impressive performances have been realized at low temperatures of <= 30 C. Electrolytes with high salt concentration and high EC content fare well at room temperature, while formulations with low EC content and low salt concentration are preferred at low temperatures. DPA studies reveal increased SEI growth on the electrodes, especially the anode, upon irradiation. Performance of low temperature electrolytes in prototype cells corroborates the findings from laboratory cells.

  4. Understanding Strongly Correlated Materials thru Theory Algorithms and High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kotliar, Gabriel

    A long-standing challenge in condensed matter physics is the prediction of the physical properties of materials starting from first principles. In the past two decades, substantial advances have taken place in this area. The combination of modern implementations of electronic structure methods with Dynamical Mean Field Theory (DMFT), advanced impurity solvers, modern computer codes and massively parallel computers is giving new, system-specific insights into the properties of strongly correlated electron systems and enables the calculation of experimentally measurable correlation functions. The predictions of this "theoretical spectroscopy" can be directly compared with experimental results. In this talk I will briefly outline the state of the art of the methodology and illustrate it with an example: the origin of the solid-state anomalies of elemental plutonium.

  5. A high speed 2D time-to-impact algorithm targeted for smart image sensors

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2014-03-01

    In this paper we present a 2D extension of a previously described 1D method for a time-to-impact sensor [5][6]. As in the earlier paper, the approach is based on measuring time instead of the apparent motion of points in the image plane to obtain data similar to the optical flow. Specific properties of the motion field in the time-to-impact application are exploited, such as simple feature points that are tracked from frame to frame. Compared to the 1D case, the features will be proportionally fewer, which will affect the quality of the estimation. We propose a way to address this problem. The results obtained are as promising as those obtained from the 1D sensor.

  6. Algorithms, Visualization, and Mental Models: High School Students' Interactions with a Relative Motion Simulation.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    2000-01-01

    Hypothesizes that the construction of visual models, resolution of these visual models with numeric models and, in many cases, rejection of commitments such as the belief in one true velocity, are necessary for students to form integrated mental models of relative motion events. Studies high school students' relative motion problem solving.…

  7. Devices and approaches for generating specific high-affinity nucleic acid aptamers

    NASA Astrophysics Data System (ADS)

    Szeto, Kylan; Craighead, Harold G.

    2014-09-01

    High-affinity and highly specific antibody proteins have played a critical role in biological imaging, medical diagnostics, and therapeutics. Recently, a new class of molecules called aptamers has emerged as an alternative to antibodies. Aptamers are short nucleic acid molecules that can be generated and synthesized in vitro to bind to virtually any target in a wide range of environments. They are, in principle, less expensive and more reproducible than antibodies, and their versatility creates possibilities for new technologies. Aptamers are generated using libraries of nucleic acid molecules with random sequences that are subjected to affinity selections for binding to specific target molecules. This is commonly done through a process called Systematic Evolution of Ligands by EXponential enrichment, in which target-bound nucleic acids are isolated from the pool, amplified to high copy numbers, and then reselected against the desired target. This iterative process is continued until the highest affinity nucleic acid sequences dominate the enriched pool. Traditional selections require a dozen or more laborious cycles to isolate strongly binding aptamers, which can take months to complete and consume large quantities of reagents. However, new devices and insights from engineering and the physical sciences have contributed to a reduction in the time and effort needed to generate aptamers. As the demand for these new molecules increases, more efficient and sensitive selection technologies will be needed. These new technologies will need to use smaller samples, exploit a wider range of chemistries and techniques for manipulating binding, and integrate and automate the selection steps. Here, we review new methods and technologies that are being developed towards this goal, and we discuss their roles in accelerating the availability of novel aptamers.

  8. Development of a highly specific enzyme immunoassay for oxytocin and its use in plasma samples.

    PubMed

    Haraya, Shiomi; Karasawa, Koji; Sano, Yoshihiro; Ozawa, Kimiko; Kato, Nobumasa; Arakawa, Hidetoshi

    2017-01-01

    Background The peptide hormone oxytocin acts in the central nervous system and plays an important role in various complex social behaviours. We report the production of a high-affinity, high-specificity antibody for oxytocin and its use in a highly sensitive enzyme immunoassay. The labelled antigen was biotin chemically bound to an oxytocin derivative containing zero to six lysines as a bridge. Seven labelled antigens were used to develop a highly sensitive enzyme immunoassay. Methods Anti-oxytocin antiserum was obtained by immunizing a rabbit with an oxytocin-bovine thyroglobulin conjugate. Oxytocin sample was added to the second antibody-coated microtitre plate and allowed to react overnight at 4°C, then biotinylated oxytocin was added and incubated for 1 h at 4°C, and horseradish peroxidase-labelled avidin was added and incubated for 1 h at room temperature. The plate was then washed. Horseradish peroxidase activity was measured by a colorimetric method using o-phenylenediamine (490 nm). Results The sensitivity of the enzyme immunoassay improved as the number of lysine residues increased; consequently, biotinylated oxytocin bridged with five lysines was used. A standard curve for oxytocin ranged from 1.0 to 1000 pg/assay. The detection limit of the assay was 2.36 pg, and the reproducibility was 3.6% (CV, n = 6). Cross-reactivity with vasopressin and vasotocin was less than 0.01%. Conclusion The sensitivity of the enzyme immunoassay could be improved by increasing the number of lysine residues on the biotin-labelled antigen. The proposed method is sensitive and more specific than conventional immunoassays for oxytocin and can be used to determine plasma oxytocin concentrations.
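    Reading unknown plasma samples off a standard curve of this kind is typically done with a four-parameter logistic fit; the sketch below assumes that model (the abstract does not state which curve was fitted) and uses SciPy's curve_fit.

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, b, c, d):
            # four-parameter logistic: a = response at zero dose, d = response at
            # infinite dose, c = inflection point, b = slope (an assumed model)
            return d + (a - d) / (1.0 + (x / c) ** b)

        def fit_standard_curve(conc_pg, signal):
            # Fit the standards, then return a function mapping signal -> pg.
            conc_pg = np.asarray(conc_pg, float)
            signal = np.asarray(signal, float)
            p0 = [signal.max(), 1.0, float(np.median(conc_pg)), signal.min()]
            (a, b, c, d), _ = curve_fit(four_pl, conc_pg, signal, p0=p0, maxfev=10000)

            def to_concentration(y):
                # invert the fitted curve to read unknowns off the standard curve
                return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

            return to_concentration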

  9. Differential membrane-based nanocalorimeter for high-resolution measurements of low-temperature specific heat.

    PubMed

    Tagliati, S; Krasnov, V M; Rydh, A

    2012-05-01

    A differential, membrane-based nanocalorimeter for general specific heat studies of very small samples, ranging from 0.5 mg to sub-μg in mass, is described. The calorimeter operates over the temperature range from above room temperature down to 0.5 K. It consists of a pair of cells, each of which is a stack of heaters and thermometer in the center of a silicon nitride membrane, in total giving a background heat capacity less than 100 nJ/K at 300 K, decreasing to 10 pJ/K at 1 K. The device has several distinctive features: (i) The resistive thermometer, made of a Ge(1-x)Au(x) alloy, displays a high dimensionless sensitivity |dlnR/dlnT| ≳ 1 over the entire temperature range. (ii) The sample is placed in direct contact with the thermometer, which is allowed to self-heat. The thermometer can thus be operated at high dc current to increase the resolution. (iii) Data are acquired with a set of eight synchronized lock-in amplifiers measuring dc, 1st and 2nd harmonic signals of heaters and thermometer. This gives high resolution and allows continuous output adjustments without additional noise. (iv) Absolute accuracy is achieved via a variable-frequency-fixed-phase technique in which the measurement frequency is automatically adjusted during the measurements to account for the temperature variation of the sample heat capacity and the device thermal conductance. The performance of the calorimeter is illustrated by studying the heat capacity of a small Au sample and the specific heat of a 2.6 μg piece of superconducting Pb in various magnetic fields.

  10. How Novel Algorithms and Access to High Performance Computing Platforms are Enabling Scientific Progress in Atomic and Molecular Physics

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.

    2016-10-01

    Over the past 40 years there has been remarkable progress in the quantitative treatment of complex many-body problems in atomic and molecular physics (AMP). This has happened as a consequence of the development of new and powerful numerical methods, the translation of these algorithms into practical software, and the associated evolution of powerful computing platforms ranging from desktops to high performance computational instruments capable of massively parallel computation. We are taking the opportunity afforded by this CCP2015 to review computational progress in scattering theory and the interaction of strong electromagnetic fields with atomic and molecular systems from the early 1960s until the present time, to show how these advances have revealed a remarkable array of interesting and in many cases unexpected features. The article is by no means complete and certainly reflects the views and experiences of the author.

  11. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.

  12. Mapping an Atlas of Tissue-Specific Drosophila melanogaster Metabolomes by High Resolution Mass Spectrometry

    PubMed Central

    Chintapalli, Venkateswara R.; Al Bratty, Mohammed; Korzekwa, Dominika; Watson, David G.; Dow, Julian A. T.

    2013-01-01

    Metabolomics can provide exciting insights into organismal function, but most work on simple models has focussed on the whole organism metabolome, so missing the contributions of individual tissues. Comprehensive metabolite profiles for ten tissues from adult Drosophila melanogaster were obtained here by two chromatographic methods, a hydrophilic interaction (HILIC) method for polar metabolites and a lipid profiling method also based on HILIC, in combination with an Orbitrap Exactive instrument. Two hundred and forty two polar metabolites were putatively identified in the various tissues, and 251 lipids were observed in positive ion mode and 61 in negative ion mode. Although many metabolites were detected in all tissues, every tissue showed characteristically abundant metabolites which could be rationalised against specific tissue functions. For example, the cuticle contained high levels of glutathione, reflecting a role in oxidative defence; the alimentary canal (like vertebrate gut) had high levels of acylcarnitines for fatty acid metabolism, and the head contained high levels of ether lipids. The male accessory gland uniquely contained decarboxylated S-adenosylmethionine. These data thus both provide valuable insights into tissue function, and a reference baseline, compatible with the FlyAtlas.org transcriptomic resource, for further metabolomic analysis of this important model organism, for example in the modelling of human inborn errors of metabolism, aging or metabolic imbalances such as diabetes. PMID:24205093

  13. High throughput screen identifies small molecule inhibitors specific for Mycobacterium tuberculosis phosphoserine phosphatase.

    PubMed

    Arora, Garima; Tiwari, Prabhakar; Mandal, Rahul Shubhra; Gupta, Arpit; Sharma, Deepak; Saha, Sudipto; Singh, Ramandeep

    2014-09-05

    The emergence of drug-resistant strains of Mycobacterium tuberculosis makes identification and validation of newer drug targets a global priority. Phosphoserine phosphatase (PSP), a key essential metabolic enzyme involved in conversion of O-phospho-l-serine to l-serine, was characterized in this study. The M. tuberculosis genome harbors all enzymes involved in l-serine biosynthesis including two PSP homologs: Rv0505c (SerB1) and Rv3042c (SerB2). In the present study, we have biochemically characterized SerB2 enzyme and developed malachite green-based high throughput assay system to identify SerB2 inhibitors. We have identified 10 compounds that were structurally different from known PSP inhibitors, and few of these scaffolds were highly specific in their ability to inhibit SerB2 enzyme, were noncytotoxic against mammalian cell lines, and inhibited M. tuberculosis growth in vitro. Surface plasmon resonance experiments demonstrated the relative binding for these inhibitors. The two best hits identified in our screen, clorobiocin and rosaniline, were bactericidal in activity and killed intracellular bacteria in a dose-dependent manner. We have also identified amino acid residues critical for these SerB2-small molecule interactions. This is the first study where we validate that M. tuberculosis SerB2 is a druggable and suitable target to pursue for further high throughput assay system screening.

  14. The gastric/pancreatic amylase ratio predicts postoperative pancreatic fistula with high sensitivity and specificity.

    PubMed

    Jin, Shuo; Shi, Xiao-Ju; Sun, Xiao-Dong; Zhang, Ping; Lv, Guo-Yue; Du, Xiao-Hong; Wang, Si-Yuan; Wang, Guang-Yi

    2015-01-01

    This article aims to identify risk factors for postoperative pancreatic fistula (POPF) and evaluate the gastric/pancreatic amylase ratio (GPAR) on postoperative day (POD) 3 as a POPF predictor in patients who undergo pancreaticoduodenectomy (PD). POPF significantly contributes to mortality and morbidity in patients who undergo PD. Previously identified predictors for POPF often have low predictive accuracy. Therefore, accurate POPF predictors are needed. In this prospective cohort study, we measured the clinical and biochemical factors of 61 patients who underwent PD and diagnosed POPF according to the definition of the International Study Group of Pancreatic Fistula. We analyzed the association between POPF and various factors, identified POPF risk factors, and evaluated the predictive power of the GPAR on POD3 and of the levels of serum and ascites amylase. Of the 61 patients, 21 developed POPF. The color of the pancreatic drain fluid, POD1 serum, POD1 median output of pancreatic drain fluid volume, and GPAR were significantly associated with POPF. The color of the pancreatic drain fluid and a high GPAR were independent risk factors. Although serum and ascites amylase did not predict POPF accurately, GPAR (cutoff value 1.24) predicted POPF with high sensitivity and specificity. This is the first report demonstrating that a high GPAR on POD3 is a risk factor for POPF and showing that GPAR is a more accurate predictor of POPF than the previously reported amylase markers.
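    The predictor itself is just a ratio compared against the reported cutoff; a minimal sketch follows, assuming (as the abstract implies but does not state explicitly) that values above 1.24 indicate elevated POPF risk.

        def gpar(gastric_amylase, pancreatic_amylase):
            # gastric/pancreatic amylase ratio from the respective drain fluids
            return gastric_amylase / pancreatic_amylase

        def high_popf_risk(gastric_amylase, pancreatic_amylase, cutoff=1.24):
            # flag patients whose POD3 GPAR exceeds the reported cutoff
            return gpar(gastric_amylase, pancreatic_amylase) > cutoff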

  15. High Throughput Screen Identifies Small Molecule Inhibitors Specific for Mycobacterium tuberculosis Phosphoserine Phosphatase*

    PubMed Central

    Arora, Garima; Tiwari, Prabhakar; Mandal, Rahul Shubhra; Gupta, Arpit; Sharma, Deepak; Saha, Sudipto; Singh, Ramandeep

    2014-01-01

    The emergence of drug-resistant strains of Mycobacterium tuberculosis makes identification and validation of newer drug targets a global priority. Phosphoserine phosphatase (PSP), a key essential metabolic enzyme involved in conversion of O-phospho-l-serine to l-serine, was characterized in this study. The M. tuberculosis genome harbors all enzymes involved in l-serine biosynthesis including two PSP homologs: Rv0505c (SerB1) and Rv3042c (SerB2). In the present study, we have biochemically characterized SerB2 enzyme and developed malachite green-based high throughput assay system to identify SerB2 inhibitors. We have identified 10 compounds that were structurally different from known PSP inhibitors, and few of these scaffolds were highly specific in their ability to inhibit SerB2 enzyme, were noncytotoxic against mammalian cell lines, and inhibited M. tuberculosis growth in vitro. Surface plasmon resonance experiments demonstrated the relative binding for these inhibitors. The two best hits identified in our screen, clorobiocin and rosaniline, were bactericidal in activity and killed intracellular bacteria in a dose-dependent manner. We have also identified amino acid residues critical for these SerB2-small molecule interactions. This is the first study where we validate that M. tuberculosis SerB2 is a druggable and suitable target to pursue for further high throughput assay system screening. PMID:25037224

  16. A simple facile approach to large scale synthesis of high specific surface area silicon nanoparticles

    SciTech Connect

    Epur, Rigved; Minardi, Luke; Datta, Moni K.; Chung, Sung Jae; Kumta, Prashant N.

    2013-12-15

    An inexpensive, facile, and high throughput synthesis of silicon nanoparticles was achieved by the mechano-chemical reduction reaction of magnesium silicide (Mg2Si) and silicon monoxide (SiO) using a high energy mechanical milling (HEMM) technique followed by acid leaching. Characterization of the resultant product using X-ray diffraction, Raman spectroscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and surface area analyses was performed at various stages of the synthesis process. XRD patterns show that the final product formed is single-phase silicon, and its nanocrystalline nature was confirmed by the shifted transverse optical (TO) band, characteristic of nc-Si, determined by Raman analysis. SEM and TEM show the presence of particles of different sizes, ranging from a few nanometers to agglomerates of a few microns, which is consistent with products obtained from mechanical milling. BET measurements show a very high specific surface area (SSA) of ∼190 m²/g obtained due to acid leaching, which is also validated by the porous nature of the particles confirmed by the SEM images. Graphical abstract: Schematic showing the large scale production of nanosized silicon and the BET surface area of the product formed at various stages.

  17. Preparation of hydrazine functionalized polymer brushes hybrid magnetic nanoparticles for highly specific enrichment of glycopeptides.

    PubMed

    Huang, Guang; Sun, Zhen; Qin, Hongqiang; Zhao, Liang; Xiong, Zhichao; Peng, Xiaojun; Ou, Junjie; Zou, Hanfa

    2014-05-07

    Hydrazide chemistry is a powerful technique in glycopeptides enrichment. However, the low density of the monolayer hydrazine groups on the conventional hydrazine-functionalized magnetic nanoparticles limits the efficiency of glycopeptides enrichment. Herein, a novel magnetic nanoparticle grafted with poly(glycidyl methacrylate) (GMA) brushes was fabricated via reversible addition-fragmentation chain transfer (RAFT) polymerization, and a large amount of hydrazine groups were further introduced to the GMA brushes by ring-opening the epoxy groups with hydrazine hydrate. The resulting magnetic nanoparticles (denoted as Fe3O4@SiO2@GMA-NHNH2) demonstrated the high specificity of capturing glycopeptides from a tryptic digest of the sample comprising a standard non-glycosylated protein bovine serum albumin (BSA) and four standard glycoproteins with a weight ratio of 50 : 1, and the detection limit was as low as 130 fmol. In the analysis of a real complex biological sample, the tryptic digest of hepatocellular carcinoma, 179 glycosites were identified by the Fe3O4@SiO2@GMA-NHNH2 nanoparticles, surpassing that of 68 glycosites by Fe3O4@SiO2-single-NHNH2 (with monolayer hydrazine groups on the surface). It can be expected that the magnetic nanoparticles modified with hydrazine functionalized polymer brushes via RAFT technique will improve the specificity and the binding capacity of glycopeptides from complex samples, and show great potential in the analysis of protein glycosylation in biological samples.

  18. High-specificity quantification method for almond-by-products, based on differential proteomic analysis.

    PubMed

    Zhang, Shiwei; Wang, Shifeng; Huang, Jingmin; Lai, Xintian; Du, Yegang; Liu, Xiaoqing; Li, Bifang; Feng, Ronghu; Yang, Guowu

    2016-03-01

    A highly specific competitive enzyme-linked immunosorbent assay (ELISA) protocol has been developed to identify and classify almond products based on differential proteomic analysis. We applied two-dimensional electrophoresis to compare the differences between almond and apricot kernels to search for almond-specific proteins. The amino acid sequence of apricot Pru-1 was determined and aligned to almond Pru-1. One peptide, RQGRQQGRQQQEEGR, which exists in almond but not in apricot, was used as hapten to prepare a monoclonal antibody against almond Pru-1. An optimized ELISA method was established using this antibody. The assay did not exhibit cross-reactivity with the tested apricot kernels and other edible plant seeds. The limit of detection (LOD) was 2.5-100 μg/g based on different food samples. The recoveries of fortified samples at levels of twofold and eightfold LOD ranged from 82% to 96%. The coefficients of variation were less than 13.0%. Using 7 M urea as extracting solution, the heat-treated protein loss ratios were 2%, 5% and 15% under pasteurization (65°C for 30 min), baking (150°C for 30 min) and autoclaved sterilization (120°C for 15 min), respectively.

  19. High and stable substrate specificities of microorganisms in enhanced biological phosphorus removal plants.

    PubMed

    Kindaichi, Tomonori; Nierychlo, Marta; Kragelund, Caroline; Nielsen, Jeppe Lund; Nielsen, Per Halkjaer

    2013-06-01

    Microbial communities are typically characterized by conditions of nutrient limitation so the availability of the resources is likely a key factor in the niche differentiation across all species and in the regulation of the community structure. In this study we have investigated whether four species exhibit any in situ short-term changes in substrate uptake pattern when exposed to variations in substrate and growth conditions. Microautoradiography was combined with fluorescence in situ hybridization to investigate in situ cell-specific substrate uptake profiles of four probe-defined coexisting species in a wastewater treatment plant with enhanced biological phosphorus removal. These were the filamentous 'Candidatus Microthrix' and Caldilinea (type 0803), the polyphosphate-accumulating organism 'Candidatus Accumulibacter', and the denitrifying Azoarcus. The experimental conditions mimicked the conditions potentially encountered in the respective environment (starvation, high/low substrate concentration, induction with specific substrates, and single/multiple substrates). The results showed that each probe-defined species exhibited very distinct and constant substrate uptake profile in time and space, which hardly changed under any of the conditions tested. Such niche partitioning implies that a significant change in substrate composition will be reflected in a changed community structure rather than the substrate uptake response from the different species.

  20. Highly specific electronic signal transduction mediated by DNA/metal self-assembly.

    SciTech Connect

    Dentinger, Paul M.; Pathak, Srikant

    2003-11-01

    Highly specific interactions between DNA strands could potentially be amplified if the DNA interactions were utilized to assemble large-scale parts. Fluidic assembly of microsystem parts has the potential for rapid and accurate placement of otherwise difficult-to-handle pieces. Ideally, each part would have a different chemical interaction that allowed it to interact with the substrate only in specific areas. One easy way to obtain multiple chemical permutations is to use synthetic DNA oligomers. Si parts were prepared using silicon-on-insulator microfabrication techniques. Several surface chemistry protocols were developed to react commercial oligonucleotides to the parts. However, no obvious assembly was achieved. It was thought that small defects on the surface did not allow the microparts to be in close enough proximity for DNA hybridization, and this was, in part, confirmed by interferometry. To assist in the hybridization, plastic, pliable parts were manufactured and a new chemistry was developed. However, assembly was still absent even with the application of force. It is presently thought that one of three mechanisms is preventing the assembly: the surfaces of the two solid substrates cannot get into close enough proximity, the surface chemistry lacks sufficient density to keep the parts from separating, or DNA interactions in close proximity on solid substrates are forbidden. These possibilities are discussed in detail.